As you probably know, a handful of agricultural researchers and economists have come up with extremely complicated “Value-Added Measurement” (VAM) systems that purport to grade teachers’ output exactly.
These economists (Hanushek, Chetty, and a few others) claim that their formulas are magically, mathematically able to single out the contribution of every single teacher to the future test scores and total lifetime earnings of their students 5 to 50 years into the future. I’m not kidding.
Of course, those same economists claim that the teacher is the single most important variable affecting their students’ school outcomes and life trajectories – not family background or income, nor peer pressure, nor even whole-school variables. (Many other studies have shown that the effect of any individual teacher, or of all teachers combined, is pretty small – from 1% to 14% of the entire variation – which corresponds to what I found during my 30 years of teaching … i.e., not nearly as much of an impact as I would have liked [or feared], one way or another…)
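To make that “1% to 14% of the variation” concrete, here is a toy simulation of my own (not from any of the studies cited, and not an actual VAM formula) in which teacher effects are assumed to account for about 10% of test-score variance. All the numbers – teacher count, class size, the variance split – are made up purely for illustration:

```python
# Toy variance decomposition: what it means for teachers to account for
# roughly 10% of the variation in student test scores.
# Every parameter here is an assumption for illustration only.
import random
import statistics

random.seed(1)

N_TEACHERS = 200
STUDENTS_PER_TEACHER = 25

# Assume teacher effects explain ~10% of total score variance,
# and everything else (family, peers, school, luck) explains ~90%.
teacher_sd = 0.10 ** 0.5
other_sd = 0.90 ** 0.5

teacher_effects = [random.gauss(0, teacher_sd) for _ in range(N_TEACHERS)]

# Each student's score = their teacher's effect + everything else.
scores = [t + random.gauss(0, other_sd)
          for t in teacher_effects
          for _ in range(STUDENTS_PER_TEACHER)]

var_teacher = statistics.pvariance(teacher_effects)
var_total = statistics.pvariance(scores)
print(f"share of score variance from teachers: {var_teacher / var_total:.2f}")
```

The point of the sketch: even when the teacher effect is real, it is a small slice of the total spread in scores, so most of the variation a VAM model sees comes from everything it is supposed to be controlling for.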
Diane Ravitch has brought to my attention an important study by Stuart Yeh at the University of Minnesota that (once again) refutes those claims – claims that are being used right now, in state after state and county after county, to fire, essentially at random, large numbers of teachers who have devoted their lives to helping students.
According to the study, here are a few of the problems with VAM:
1. As I have shown repeatedly using the New York City value-added scores that were printed in the NY Times and NY Post, teachers’ VAM scores vary tremendously over time. (More on that below; note that if you took VAM scores at face value, roughly 80% of ALL teachers would deserve to be fired after their first year of teaching.) Plus, RAND researchers found much the same thing in North Carolina. Also see this. And this.
2. Students are not assigned randomly to teachers (I can vouch for that!) or to schools, and there are always a fair number of students for whom no prior or subsequent data are available, because they move to other schools or states, or drop out, or whatever; and those students with missing data are NOT randomly distributed, which pretty much makes the whole VAM setup an exercise in futility.
3. The tests themselves often don’t measure what they are purported to measure. (Complaints about the quality of test items are legion…)
Here is an extensive quote from the article. It’s a section that Ravitch didn’t excerpt, so I will, with a few sentences highlighted by me, since it concurs with what I have repeatedly claimed on my blog:
A largely ignored problem is that true teacher performance, contrary to the main assumption underlying current VAM models, varies over time (Goldhaber & Hansen, 2012). These models assume that each teacher exhibits an underlying trend in performance that can be detected given a sufficient amount of data. The question of stability is not a question about whether average teacher performance rises, declines, or remains flat over time.
The issue that concerns critics of VAM is whether individual teacher performance fluctuates over time in a way that invalidates inferences that an individual teacher is “low-” or “high-” performing.
This distinction is crucial because VAM is increasingly being applied such that individual teachers who are identified as low-performing are to be terminated. From the perspective of individual teachers, it is inappropriate and invalid to fire a teacher whose performance is low this year but high the next year, and it is inappropriate to retain a teacher whose performance is high this year but low next year.
Even if average teacher performance remains stable over time, individual teacher performance may fluctuate wildly from year to year. (my emphasis – gfb)
While previous studies examined the intertemporal stability of value-added teacher rankings over one-year periods and found that reliability is inadequate for high-stakes decisions, researchers tended to assume that this instability was primarily a function of measurement error and sought ways to reduce this error (Aaronson, Barrow, & Sander, 2007; Ballou, 2005; Koedel & Betts, 2007; McCaffrey, Sass, Lockwood, & Mihaly, 2009).
However, this hypothesis was rejected by Goldhaber and Hansen (2012), who investigated the stability of teacher performance in North Carolina using data spanning 10 years and found that much of a teacher’s true performance varies over time due to unobservable factors such as effort, motivation, and class chemistry that are not easily captured through VAM. This invalidates the assumption of stable teacher performance that is embedded in Hanushek’s (2009b) and Gordon et al.’s (2006) VAM-based policy proposals, as well as VAM models specified by McCaffrey et al. (2009) and Staiger and Rockoff (2010) (see Goldhaber & Hansen, 2012, p. 15).
The implication is that standard estimates of impact when using VAM to identify and replace low-performing teachers are significantly inflated (see Goldhaber & Hansen, 2012, p. 31).
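The quoted argument – that true performance fluctuates year to year on top of measurement error – can be sketched with a toy model of my own. This is emphatically not any district’s actual VAM; the three components (a stable teacher effect, a year-specific effect standing in for effort, motivation, and class chemistry, and measurement error) and their equal variances are pure assumptions for illustration:

```python
# Toy model of year-to-year VAM instability: if a "low performer" is anyone
# in the bottom 20% of observed scores, how many of year 1's low performers
# are still in the bottom 20% in year 2?
# All components and variances are assumptions for illustration only.
import random

random.seed(42)
N = 1000  # hypothetical teachers

# Stable component of each teacher's true performance.
stable = [random.gauss(0, 1.0) for _ in range(N)]

def observed_score(s):
    year_effect = random.gauss(0, 1.0)  # true year-to-year drift (effort, class chemistry)
    noise = random.gauss(0, 1.0)        # measurement error
    return s + year_effect + noise

year1 = [observed_score(s) for s in stable]
year2 = [observed_score(s) for s in stable]

# Bottom-quintile cutoffs for each year.
cut1 = sorted(year1)[N // 5]
cut2 = sorted(year2)[N // 5]
bottom1 = {i for i in range(N) if year1[i] < cut1}
bottom2 = {i for i in range(N) if year2[i] < cut2}

overlap = len(bottom1 & bottom2) / len(bottom1)
print(f"year-1 bottom-quintile teachers still in the bottom quintile in year 2: {overlap:.0%}")
```

Under these made-up (but not crazy) assumptions, most of the teachers a district would fire as “low performers” in year 1 would not even be in the bottom quintile the following year – which is exactly the invalid-inference problem the study describes.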
As you also probably know, the four main ‘tools’ of the billionaire-led educational DEform movement are:
* firing lots of teachers
* breaking their unions
* closing public schools and turning education over to the private sector
* changing education into tests to prepare for tests that get the kids ready for tests that are preparation for the real tests
They’ve been doing this for almost a decade now, under No Child Left Untested and Race to the Trough, and none of these ‘reforms’ has been shown to make any actual improvement in the overall education of our youth.