Someone who professes to understand Value-Added scores better than I do claims that my graphs for NYC are meaningless because the scores for 2007 were inflated; he claimed that the overall year-to-year and year-to-career value-added correlation coefficients are much higher than what I found, and thus that VA is really useful, just not my particular graphs.

Taking this objection seriously, I decided to leave out SY 0607, and compare SY 0506 to SY 0708. Same exact teachers, same exact subjects and grade levels, same exact schools, obviously different (but quite similar) kids.

Here is the scatterplot of what I found. Again, I asked Excel to calculate and draw a line of best fit. Notice that the r-squared correlation value is about 0.05: seriously LOW. Notice also that this scatterplot is basically a blob, a classic example of one variable showing very little correlation with another. (West Virginia’s map has a much more defined shape!) In any case, there are lots (hundreds? thousands?) of teachers with positive VA scores in the first year and negative VA scores in the third year, and vice-versa. Out of all of the thousands of teachers, only an easily countable handful have scores of +0.2 or better both years, or worse than -0.2 both years. And I bet those are all accidents as well.
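For anyone who wants to check this sort of number without Excel, here is a minimal Python sketch. The scores below are invented stand-ins, not the real NYC data, but the recipe is the same: correlate year one against year three, then square the correlation coefficient.

```python
import numpy as np

# Hypothetical stand-ins for the real data: each teacher's value-added
# score in SY 0506 and in SY 0708. These numbers are made up purely
# for illustration; the real spreadsheet is not reproduced here.
rng = np.random.default_rng(0)
score_0506 = rng.normal(0.0, 0.15, size=1000)
# Mostly noise, with only a small amount of genuine signal carried over:
score_0708 = 0.2 * score_0506 + rng.normal(0.0, 0.15, size=1000)

# Pearson correlation coefficient between the two years, then r-squared:
r = np.corrcoef(score_0506, score_0708)[0, 1]
print(round(r**2, 3))  # prints a small r-squared, i.e. weak correlation
```

The point of the sketch is only that an r-squared near 0.05 means year-one scores explain about 5% of the variation in year-three scores, and the other 95% is something else.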

So, in other words, I find, as did Gary Rubenstein, that there is extremely little correlation between two things that should be, you would think, very close to a perfect 1.00 correlation. In the real world, of course, you almost never get a 1.00 correlation between any real entities or quantities. However, when you are talking about the scores of teachers who have been teaching IN THE SAME SCHOOL, THE SAME SUBJECT, THE SAME GRADE LEVEL for three straight years, you would think that their performances would be rather similar all three years. If anything, they would normally get better, unless they had suffered some sort of physically or mentally debilitating injury or illness (often from old age and the incredible amount of stress). In particular, a lot of teachers will admit to you that they absolutely sucked at teaching during their first year, but that they then figured out a lot of those errors and tried not to make the same ones the next year, so they really improved, or else they quit.

But these folks didn’t quit. They are, at the very least, three-year veterans, which in DC would make them eligible for department or grade-level chair at their school on seniority alone, since so many of the older teachers have quit or retired, and the turnover and attrition over the last few years among the newest hires in our school system is probably unprecedented in the history of education. (Perhaps not, but it’s a subject I’d like to pursue.)

Finally, I admit that I exaggerated a bit (for effect) when I said that the shapes of these graphs, and the very low computed values for the r-squared coefficient of linear correlation, made value-added about as predictive as numerology. I thought about that particular exaggeration and wondered how serious it was. Even though I have participated in a fairly large number of courses on calculating probabilities and distributions, that kind of calculation is always a bit fraught with error: Have we counted all of the possibilities? Have we left any out? Have we double-counted any of them? Is there a much better, faster, or less error-prone method hidden right around the corner?

To make a long story short: the Monte Carlo method is a great way of deciding, say, how likely something is to happen. It’s called “Monte Carlo” because it’s very much like gambling in a casino, except you aren’t betting anything except your time. You just roll some dice (they might be funny-looking non-cubical polyhedra), spin a wheel, throw darts, or spatter paint or vaporized metal… And then you see what happens, and draw conclusions. Today, it’s really easy to do.
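A tiny dice-rolling sketch of the idea, in Python (the dice, the seed, and the question asked are my own choices, nothing here comes from the VA data): instead of enumerating all 36 outcomes of two dice, just play a lot of rounds and count.

```python
import random

# Monte Carlo estimate of the chance that two fair dice sum to 7.
# The exact answer is 6/36, about 0.1667; the simulation should land near it.
random.seed(42)
trials = 100_000
hits = sum(1 for _ in range(trials)
           if random.randint(1, 6) + random.randint(1, 6) == 7)
estimate = hits / trials
print(estimate)  # close to 6/36, i.e. about 0.1667
```

With enough trials you sidestep the usual counting worries entirely: no outcomes forgotten, none double-counted.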

So I decided to see whether, in fact, the number of letters in the teachers’ names had any correlation with their Value-Added scores. (I thought it was possible, though not very likely.) I discovered that Excel found the r-squared constant was about 0.000000. That is zero correlation, my friends. Here is one such scatterplot:

The vertical axis, which goes up the middle, is the number of letters in the teachers’ first name times the number of letters in their last name as listed in the spreadsheet. The horizontal axis, which is at the bottom of the page, is their 2005-2006 value-added score, which can be either negative (theoretically bad) or positive (supposedly good). To me, it sort of looks like a bush that hasn’t been pruned in several years: a classic case of no correlation at all.

I asked Excel to draw and calculate the line of best fit. It’s the green, nearly-horizontal line near the center of the graph. Notice the r-squared value: 6E-05, which, for all of you innumerates out there, means 0.00006. That is seriously smaller than 0.05: about three orders of magnitude, i.e., roughly one-thousandth as big.
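The name-letters experiment is easy to replay with invented data (the name lengths and scores below are random numbers I generated, not the real spreadsheet), and that is exactly the point: any predictor worth using would have to do clearly better than this.

```python
import numpy as np

# Hypothetical replay of the name-letters experiment: correlate a quantity
# that obviously has nothing to do with teaching (letters in first name
# times letters in last name) against made-up value-added scores.
rng = np.random.default_rng(1)
name_product = rng.integers(3, 12, size=2000) * rng.integers(4, 14, size=2000)
va_score = rng.normal(0.0, 0.2, size=2000)  # invented scores, pure noise

r = np.corrcoef(name_product, va_score)[0, 1]
print(r**2)  # essentially zero, as expected for two unrelated quantities
```

For two genuinely unrelated variables and a couple thousand teachers, the expected r-squared is on the order of 1/n, which is why values like 6E-05 are exactly what chance alone produces.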

Notice that I’m only using r-squared. Someone objected that I should use just r. If you want, take the square root of all of the correlations I had my computer calculate, and you’ll get r (up to its sign, which comes from the slope of the fit line). Compare and contrast.
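For the record, converting the r-squared values quoted above back to r is one square root away; a quick check in Python:

```python
import math

# r is the square root of r-squared; the sign of r has to come from the
# slope of the fit line, since the square root alone loses it.
r_from_names = math.sqrt(6e-05)   # the name-letters graph
print(r_from_names)               # about 0.0077, still negligible

r_from_years = math.sqrt(0.05)    # the year-to-year VA graph
print(r_from_years)               # about 0.22, still a weak correlation
```

So switching from r-squared to r moves 0.05 up to roughly 0.22, which is still nowhere near the near-1.00 correlation one would hope for.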

So, in any case, I definitely did exaggerate.

