Texas Decision Slams Value Added Measurements

And it does so for many of the reasons that I have been advocating. I am going to quote the entirety of Diane Ravitch’s column on this:


Audrey Amrein-Beardsley of Arizona State University is one of the nation’s most prominent scholars of teacher evaluation. She is especially critical of VAM (value-added measurement); she has studied TVAAS, EVAAS, and other similar metrics and found them deeply flawed. She has testified frequently in court cases as an expert witness.

In this post, she analyzes the court decision that blocks the use of VAM to evaluate teachers in Houston. The misuse of VAM was especially egregious in Houston, which terminated 221 teachers in one year, based on their VAM scores.

This is a very important article. Amrein-Beardsley and Jesse Rothstein of the University of California testified on behalf of the teachers; Tom Kane (who led the Gates’ Measures of Effective Teaching (MET) Study) and John Friedman (of the notorious Chetty-Friedman-Rockoff study) testified on behalf of the district.

Amrein-Beardsley writes:

Of primary issue will be the following (as taken from Judge Smith’s Summary Judgment released yesterday): “Plaintiffs [will continue to] challenge the use of EVAAS under various aspects of the Fourteenth Amendment, including: (1) procedural due process, due to lack of sufficient information to meaningfully challenge terminations based on low EVAAS scores,” and given “due process is designed to foster government decision-making that is both fair and accurate.”

Related, and of most importance, as also taken directly from Judge Smith’s Summary, he wrote:

HISD’s value-added appraisal system poses a realistic threat to deprive plaintiffs of constitutionally protected property interests in employment.

HISD does not itself calculate the EVAAS score for any of its teachers. Instead, that task is delegated to its third party vendor, SAS. The scores are generated by complex algorithms, employing “sophisticated software and many layers of calculations.” SAS treats these algorithms and software as trade secrets, refusing to divulge them to either HISD or the teachers themselves. HISD has admitted that it does not itself verify or audit the EVAAS scores received from SAS, nor does it engage any contractor to do so. HISD further concedes that any effort by teachers to replicate their own scores, with the limited information available to them, will necessarily fail. This has been confirmed by plaintiffs’ expert, who was unable to replicate the scores despite being given far greater access to the underlying computer codes than is available to an individual teacher [emphasis added, as also related to a prior post about how SAS claimed that plaintiffs violated SAS’s protective order (protecting its trade secrets), that the court overruled, see here].

The EVAAS score might be erroneously calculated for any number of reasons, ranging from data-entry mistakes to glitches in the computer code itself. Algorithms are human creations, and subject to error like any other human endeavor. HISD has acknowledged that mistakes can occur in calculating a teacher’s EVAAS score; moreover, even when a mistake is found in a particular teacher’s score, it will not be promptly corrected. As HISD candidly explained in response to a frequently asked question, “Why can’t my value-added analysis be recalculated?”:

Once completed, any re-analysis can only occur at the system level. What this means is that if we change information for one teacher, we would have to re-run the analysis for the entire district, which has two effects: one, this would be very costly for the district, as the analysis itself would have to be paid for again; and two, this re-analysis has the potential to change all other teachers’ reports.

The remarkable thing about this passage is not simply that cost considerations trump accuracy in teacher evaluations, troubling as that might be. Of greater concern is the house-of-cards fragility of the EVAAS system, where the wrong score of a single teacher could alter the scores of every other teacher in the district. This interconnectivity means that the accuracy of one score hinges upon the accuracy of all. Thus, without access to data supporting all teacher scores, any teacher facing discharge for a low value-added score will necessarily be unable to verify that her own score is error-free.

HISD’s own discovery responses and witnesses concede that an HISD teacher is unable to verify or replicate his EVAAS score based on the limited information provided by HISD.

According to the unrebutted testimony of plaintiffs’ expert, without access to SAS’s proprietary information – the value-added equations, computer source codes, decision rules, and assumptions – EVAAS scores will remain a mysterious “black box,” impervious to challenge.

While conceding that a teacher’s EVAAS score cannot be independently verified, HISD argues that the Constitution does not require the ability to replicate EVAAS scores “down to the last decimal point.” But EVAAS scores are calculated to the second decimal place, so an error as small as one hundredth of a point could spell the difference between a positive or negative EVAAS effectiveness rating, with serious consequences for the affected teacher.

Hence, “When a public agency adopts a policy of making high stakes employment decisions based on secret algorithms incompatible with minimum due process, the proper remedy is to overturn the policy.”


Judge in NY State Throws Out ‘Value-Added Model’ Ratings

I am pleased that in an important, precedent-setting case, a judge in New York State has ruled that using Value-Added measurements to judge the effectiveness of teachers is ‘arbitrary’ and ‘capricious’.

The case involved teacher Sheri Lederman, and was argued by her husband.

“New York Supreme Court Judge Roger McDonough said in his decision that he could not rule beyond the individual case of fourth-grade teacher Sheri G. Lederman because regulations around the evaluation system have been changed, but he said she had proved that the controversial method that King developed and administered in New York had provided her with an unfair evaluation. It is thought to be the first time a judge has made such a decision in a teacher evaluation case.”

In case you were unaware of it, VAM is a statistical black box that predicts how each student is supposed to score on a Big Standardized Test in a given year, based on the scores of every other student that year and in previous years. Any deviation (up or down) of the student’s actual score from that prediction is attributed to the teacher.
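For readers who want to see the idea in miniature, here is a toy sketch in Python of that predict-and-attribute logic. It is emphatically not the proprietary EVAAS/TVAAS algorithm, which is far more elaborate and is kept as a trade secret; every number, name, and teacher assignment below is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: prior-year and current-year test scores for 1,000 students,
# each assigned to one of 50 hypothetical teachers.
prior = rng.normal(500, 100, size=1000)
current = prior + rng.normal(0, 50, size=1000)
teacher_of = rng.integers(0, 50, size=1000)

# Step 1: predict each student's current score from the prior score
# (real VAM models use many prior years and other covariates).
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# Step 2: the residual (actual score minus predicted score) is what gets
# attributed to the teacher.
residual = current - predicted

# Step 3: a teacher's "value-added" is the average residual of that
# teacher's students.
value_added = {t: residual[teacher_of == t].mean() for t in range(50)}
print(sorted(value_added.items(), key=lambda kv: kv[1])[:5])  # lowest five
```

Even in this stripped-down version, a teacher's score depends on a prediction built from every other student's data, which is why an error anywhere in the system can ripple into everyone else's results.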

Gary Rubinstein and I have looked into how stable those VAM scores are in New York City, where we had actual scores to work with (obtained and published by the New York Times and other newspapers). We found that they were inconsistent and unstable in the extreme. When you graph one year’s score against the next year’s score, there is essentially no correlation at all, meaning that a teacher who is assigned the exact same grade level, in the same school, with very similar students, can score high one year, low the next, and middling the third, or any combination of those. Very, very few teachers got scores that were consistent from year to year. Even teachers who taught two or more grade levels of the same subject (say, 7th and 8th grade math) had no consistency from one grade level to the next. See my many posts on this blog (not all of them about New York City), Gary Rubinstein’s six-part series on his blog, and a less technical explanation as well.

Mercedes Schneider has done similar research on teachers’ VAM scores in Louisiana and came up with the same sorts of results that Rubinstein and I did.

Which led all three of us to conclude that the entire VAM machinery was invalid.

And which is why the case of Ms. Lederman is so important. Similar cases have been filed in numerous states, but this is apparently the first one where a judgment has been reached.


Gary Rubinstein is Right: No Correlation in Value-Added Scores in NYC

One of the things that experimental scientists really should do is try to replicate each other’s results to see whether they hold up. I have begun doing that with the value-added scores awarded to teachers in New York City, and I find that I generally agree with the results obtained by Gary Rubinstein.

What I did was look at the value-added scores, in percentiles, that were “awarded” to thousands of New York City public school teachers in school years 05-06, 06-07, and 07-08. I found that there is essentially no correlation between the scores of the exact same teacher from year to year. The r-squared coefficients are on the order of 0.08 to 0.09, about as close to random as you can ever get in real life.
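The stability check itself is simple to describe, and here is a minimal sketch of it in Python. The file name and column names are hypothetical stand-ins for the published NYC teacher percentile data; matching the same teacher across years (same school, grade, and subject) has to be done before this step.

```python
import numpy as np
import pandas as pd

# Hypothetical file: one row per teacher, with the same teacher's percentile
# score in consecutive school years.
df = pd.read_csv("nyc_tdr_percentiles.csv")
x = df["percentile_0506"].to_numpy(dtype=float)
y = df["percentile_0607"].to_numpy(dtype=float)

# Pearson correlation between consecutive years for the same teacher;
# squaring it gives the r-squared values quoted above (about 0.08 to 0.09).
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.4f}, r-squared = {r * r:.4f}")

# Least-squares regression line, the same line Excel draws on the scatterplot.
slope, intercept = np.polyfit(x, y, 1)
print(f"regression line: y = {slope:.3f}x + {intercept:.2f}")
```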

Here are my two graphs for the night:

I actually had Excel draw the regression line, but it’s a joke: an r-squared value of 0.0877 means, as I said, that there is extremely little correlation between what any teacher got in school year 05-06 and what they got in SY 06-07. In the same school. With very similar kids. Teaching the same subject.

And, a similar graph comparing teachers’ scores for school year 06-07 with their scores for 07-08:

So, one year, a teacher might be around the 90th percentile; the next year, she might be around the 10th percentile, or the other way around. Did the teacher suddenly get stupendously better (or worse)? I doubt it. By the time they are adults, most people are pretty consistent. But not according to this graph. In fact, if a teacher was in the 90th-to-100th-percentile bracket in school year 2006-07, the probability that she stayed in that same bracket the following year was only about 1 in 4. And if she was in the 0th-to-10th-percentile bracket in 2006-07, the chance that she stayed in that bracket the following year was only about 7%.
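For anyone who wants to redo that bracket calculation, here is a sketch of it, again with hypothetical file and column names. Keep in mind that if the scores carried no information at all, roughly 10% of teachers would land in the same decile the next year by pure chance.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("nyc_tdr_percentiles.csv")  # hypothetical matched file, as above
year1 = df["percentile_0607"].to_numpy(dtype=float)
year2 = df["percentile_0708"].to_numpy(dtype=float)

# Assign each teacher to a decile bracket: 0 = 0th-10th percentile, ...,
# 9 = 90th-100th percentile.
bracket1 = np.clip((year1 // 10).astype(int), 0, 9)
bracket2 = np.clip((year2 // 10).astype(int), 0, 9)

# Fraction of teachers who land in the same bracket the following year,
# for the top and bottom deciles discussed above.
for b in (9, 0):
    stay = np.mean(bracket2[bracket1 == b] == b)
    print(f"bracket {b*10}th-{(b+1)*10}th percentile: {stay:.0%} stayed put")
```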

What this shows is that using value-added scores to determine if someone should keep their job or get a bonus or a demotion is absolutely insane.


A Study on Whether the Los Angeles Value-Added Measurements are Correct

Here is the link to the article, which is pretty wonky:

http://nepc.colorado.edu/files/NEPC-RB-LAT-VAM_0.pdf
