I quote from an official DCPS report written by a consultant named Rachel Curtis in the employ of the Aspen Institute:
“DCPS analyzed the relationship between TLF rubric scores and individual teacher value-added scores based on the DC-CAS.
“At this early stage in the use of value-added analysis nationally, the hope is that there is a strong correlation between a teachers’ score on an instructional rubric and his or her value-added score. This would validate the instructional rubric by showing that doing well in instruction produces better student outcomes. DCPS analysis at the end of the first year of IMPACT suggests that there is a modest correlation between the two ratings (0.34). DCPS’s correlations are similar to those of other districts that are using both an instructional rubric and value-added data. A moderate correlation suggests that while there is a correlation between the assessment of instruction and student learning as measured by standardized tests (for the most part), it is not strong. At this early stage of using value-added data this is an issue that needs to be further analyzed.”
Ya know, if the educational Deformers running the schools today were honest, they would admit that they’re still working the bugs and kinks out of this weird evaluation system. They would run a few pilot studies here or there, with no stakes for anyone, so nobody cheats, and see how it goes. Then they would either revise it or get rid of it entirely.
Instead, starting in Washington, DC just a few years ago, with Michelle Rhee and Adrian Fenty leading the way locally and obscenely rich financiers funding the entire campaign, they rushed through an elaborate system of secret formulas and rigid rubrics, known as IMPACT. It appears that their goal is to demoralize teachers and to convince the public that public schools need to be closed and turned over to the same hedge fund managers who brought us the current Great Depression, high unemployment rates, and foreclosures. Meanwhile, the gap between the very wealthiest and the rest of the population, especially the bottom 50%, has become truly phenomenal.
Here’s a little table from the report, same page:
(Just so you know, I’ve been giving r^2 in my previous columns, not r. I believe they are using r; to compare that to my previous analyses, if you take 0.34 and square it, you get about 0.1156. That means that the IVA “explains” only about 12% of the variance in the TLF scores, and vice versa. Pretty weak stuff.)
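If you want to check that arithmetic yourself, here’s a quick sketch in Python (the 0.34 figure is the correlation reported by DCPS; everything else is just the standard r-to-r² conversion):

```python
# Converting a correlation coefficient (r) to the coefficient of
# determination (r^2), which is the share of variance "explained".
r = 0.34  # correlation between TLF rubric scores and value-added scores, per the report

r_squared = r ** 2
print(f"r = {r}")
print(f"r^2 = {r_squared:.4f}")                  # 0.1156
print(f"variance explained = about {r_squared:.0%}")  # about 12%
```

In other words, squaring a moderate-sounding correlation of 0.34 leaves you with roughly 12% of the variance accounted for, which is why I call it weak.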
Would I be alone in suggesting that the “hope” of a strong correlation has not been fulfilled? In fact, I think that’s a pretty measly correlation, and it suggests to me the possibility that neither the formal TLF evaluation rubrics done by administrators nor the Individual Value-Added magic secret formulas do an adequate, or even competent, job of measuring the output of teachers.