The mystery deepens regarding the Capital Gains program, apparently the brainchild of Harvard professor Roland Fryer. It is not clear (to me, at least) which schools are in the experimental group (that is, which schools have their kids being paid to be good) and which schools are in the control group (where the students did not receive the payments).
My original calculations assumed that the original DCPS announcement, about a year and a half ago, of 14 schools in each group (experimental and control) was correct, and I used the two lists given at that time. However, alert readers pointed out to me that the Hardy MS faculty (probably supported by the parents) told the Harvard researchers that the school did NOT want to participate. The DCPS website on the topic now lists not 14 but 15 schools in the experimental group (adding Randolph and Lincoln/Columbia Heights but dropping Hardy), yet doesn’t name the control group. Using that information, I recalculated, on the assumption that the original 14 control-group schools were joined by Hardy to make 15 non-participating schools.
When I did that, I discovered that the results were even MORE unfavorable for “Capital Gains” than with my original school line-up. So that you know what I was doing, here is the list that DCPS gives of the “experimental group” schools:
Brightwood, Browne JHS, Burroughs, Eliot-Hine, Emery, Garnet-Patterson-Shaw, Hart, Jefferson, Langdon, Lincoln/Columbia Heights, Kelly Miller, Raymond, Stuart-Hobson, Takoma, and Whittier.
Bottom line, assuming my list of participating and non-participating schools is correct:
The fraction of students proficient in reading in the CONTROL group went from 45.6% to 46.2%, a rise of 0.6 percentage points (not very much).
But the fraction of students proficient in reading in the EXPERIMENTAL group went from 40.2% to 38.4%, a drop of 1.8 percentage points (a bit more).
Let me repeat that:
The group that was NOT bribed had a small rise (0.6 percentage points) in reading proficiency.
The group that WAS bribed (up to $100 every two weeks per student if they did everything right) had a decrease of nearly 2 percentage points in reading proficiency.
Furthermore, if you follow the same group of students from one year to the next, the results are even more dramatic. In the control group, the students who were 6th graders in 2007-8 and were (for the most part) 7th graders in 2008-9 went from 37.9% proficient to 44.3% proficient, a rise of about 6.4 percentage points. And over the same time period, the students who were 7th graders the first year and mostly became 8th graders the next year went from 49.5% to 49.9%, a rise of about 0.4 percentage points.
However, in the experimental group, the 6th graders in SY 2007-8 who generally became 7th graders in SY 2008-9 went from 40.7% proficient in reading to 32.1% proficient, a rather large DROP of about 8.6 percentage points. And the experimental group’s 7th graders the first year, who mostly became 8th graders the second year, went from 41.0% proficient to 36.5% proficient, a drop of about 4.5 percentage points.
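For readers who want to check the subtraction, here is a minimal sketch (my own bookkeeping, not anything from Fryer's study) that recomputes every change quoted above as a percentage-point difference. Remember that the school lists behind these figures are my own reconstruction, so the numbers are provisional:

```python
# Percent-proficient-in-reading figures quoted above, recomputed as
# percentage-point changes. The underlying school lists are my own
# reconstruction, so treat all of these numbers as provisional.

def pp_change(before, after):
    """Percentage-point change from one year to the next, to one decimal."""
    return round(after - before, 1)

# Overall: (% proficient in 2007-8, % proficient in 2008-9)
overall = {
    "control": (45.6, 46.2),
    "experimental": (40.2, 38.4),
}

# Cohorts followed from SY 2007-8 to SY 2008-9
cohorts = {
    ("control", "6th -> 7th grade"): (37.9, 44.3),
    ("control", "7th -> 8th grade"): (49.5, 49.9),
    ("experimental", "6th -> 7th grade"): (40.7, 32.1),
    ("experimental", "7th -> 8th grade"): (41.0, 36.5),
}

for group, (before, after) in overall.items():
    print(f"{group:12s} overall: {pp_change(before, after):+.1f} points")
for (group, cohort), (before, after) in cohorts.items():
    print(f"{group:12s} {cohort}: {pp_change(before, after):+.1f} points")
```

Running this reproduces the changes quoted in the text: +0.6 and -1.8 points overall, and +6.4, +0.4, -8.6, and -4.5 points for the four cohorts.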
So, it appears that the bribes are counter-productive in reading.
But wait: ANOTHER WRINKLE!
Soon after I made these calculations, I was given a brief look at a very preliminary report by Dr. Fryer, apparently only a few weeks old, which I had never heard of before. Fryer appeared to have a whole lot more data than I did regarding family income, number of suspensions, attendance rates, and so on, and his experiment seems to include Washington DC, Chicago, New York City, and perhaps another location or two (I didn’t have an opportunity to take notes). Frankly, I didn’t see any year-to-year comparison of changes in any of the rates of anything in any of the cities, which was rather disappointing. (After all, it has been almost two full school years since the experiment started!)
But something definitely caught my eye:
His group of experimental schools in DC includes not 14, not 15, but 17 (seventeen) schools!
Did anybody else in DC know this except for him?
Is his number even correct?
Unfortunately, he didn’t list the participating schools so that it could be double-checked. Nor did he give a list of non-participating schools.
As soon as I saw this (Friday, 3-19-2010), I looked up his Harvard faculty web page and called him by phone. Unfortunately, he didn’t answer, and his voice-mail box was full. I called his administrative assistant and asked if she knew anything about the study, but she said she knew no details of it at all. So I sent them both an e-mail on Friday requesting more information. Maybe he will answer. I will be very disappointed if he doesn’t.
Here is a table showing the summary data for the 15 control group schools:
And here is the same sort of thing, for the experimental group: