DCPS today issued a press release announcing the overall results of the DC-CAS standardized testing regime for 2011, but you really can’t tell very much from it. Apparently there wasn’t much change from last year.
Henderson’s press release mostly touts the gains since 2007, the year of the mayoral takeover of the DC public school system. Well, one could also point out that the tremendous improvement might simply be due to the fact that in 2006, DCPS hired a new testing company to produce a new testing system, the DC-CAS, replacing the SAT-9 — so scores plummeted that year, just as they do in every single school district that switches from one testing company to another. And as teachers and administrators learned more about the new test, and began teaching more effectively to it, students’ scores began improving.
The entire press release can be read here.
Unfortunately, when they say “elementary”, it doesn’t exactly mean what you think. I have found that in past years there were lots of 6th, 7th, and 8th graders included in that category, because of the large numbers of K-8 schools. And when they say “secondary”, again, there are a fair number of 3rd – 6th graders there, too. So, one has to look at the entire DCPS LEA and break it all down by grade level, which is quite tedious.
Also unfortunately, we won’t be able to look at the school-by-school results for another two weeks (July 22, today being the 8th), because the individual school principals are being given these fourteen days to “improve” the data some more. (Exactly how, I don’t know.)
And lastly, there is almost no way of telling the cause of any of these improvements — or lack thereof. Here are the difficulties that stand in the way of taking any of this stuff very seriously:
- It’s possible that there was, once again, widespread cheating by adults to inflate their students’ scores (and this cheating can take many different forms: erasing wrong answers and writing in correct answers is only one of many);
- The public doesn’t know whether security precautions this year were better or worse than in past years, and whether that made a difference (see the previous point);
- We know that there has been tremendous narrowing of teaching to the few topics that are actually tested with a significant number of questions, and thus, improved scores may in fact mean less learning;
- The stupidity and general cluelessness of the folks who write the test questions (they ain’t classroom teachers, after all) causes the scores to mean almost nothing in the first place;
- The inanity of the actual scoring methods for the ‘brief constructed responses’ means that the scores on those questions have only a vague connection to what the student actually demonstrated that she or he understands.