More Problems With Value-Added Measurements for Teachers

I finally got around to reading and skimming the Mathematica reports on VAM for schools and individual teachers in DCPS.

At first blush, it’s pretty impressive mathematical and statistical work. The authors appear to have been very careful to account for lots of possible problems, and they have lots of nice Greek letters and very learned, complicated formulas, with tables giving the values of many of the variables in their model. They even use large words like heteroscedasticity to scare off those not really adept at professional statistics (which would include even me). See pages 12–20 for examples of this “mathematics of intimidation,” as John Ewing of MfA and the AMS has described it. Here is one such learned equation:
[image: a value-added equation from the Mathematica report]
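Since the image doesn’t reproduce here, the following is a generic sketch of what equations of this family look like. To be clear, this is an illustrative specification written from scratch, not the actual equation in the Mathematica report:

```latex
% Generic value-added specification (illustrative only, NOT the DCPS model):
%   y_{it}       student i's test score in year t
%   y_{i,t-1}    the same student's score the year before
%   X_i          student characteristics (e.g., SES, special-education status)
%   \tau_{j(i)}  the effect attributed to student i's teacher j -- the "value added"
\[
  y_{it} = \beta_0 + \beta_1\, y_{i,t-1} + \gamma^{\top} X_i + \tau_{j(i)} + \varepsilon_{it}
\]
```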
BUT:
However clever and complex a model might be, it needs to do a good job of explaining and describing reality, or it’s just another failed hypothesis that needs to be rejected (like the theory of the four humours or the luminiferous aether). One needs to actually compare the model’s predictions with what happens in the real world and see how good its track record is.
Which is precisely what these authors do NOT do, even though they claim that “for teachers with the lowest possible IMPACT score in math — the bottom 3.6 percent of DCPS teachers — one can say with at least 99.9 percent confidence that these teachers were below average in 2010.” (p. 5)
Among other things, such a model would need to be consistent over time; that is, reliable. Everything I have seen, including data from other cities that the authors themselves cite (NYC; see p. 2 of the 2010 report), indicates that individual value-added scores jump around essentially at random: from year to year for a teacher working at the exact same school, grade level, and subject; between grade levels for a teacher teaching two grades in the same school; and between subjects for a teacher teaching two subjects in the same year. Those correlations appear to be in the range of 0.2 to 0.3, which is frankly not enough to decide who deserves a large cash bonus or a pink slip.
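The reliability check I have in mind is not hard to run. Here is a minimal sketch in Python, with invented numbers (the sample size, the split between stable teacher quality and year-to-year noise, and the seed are all my assumptions, not DCPS data), showing how a correlation of 0.2 to 0.3 arises whenever the noise in a score dwarfs the stable signal:

```python
import numpy as np

# Hypothetical: the same 500 teachers scored in two consecutive years
# (same school, same grade, same subject). All numbers are invented.
rng = np.random.default_rng(0)
n_teachers = 500
true_quality = rng.normal(0.0, 1.0, n_teachers)  # stable "real" effectiveness
noise_sd = 1.8                                   # year-specific noise in the score

year1 = true_quality + rng.normal(0.0, noise_sd, n_teachers)
year2 = true_quality + rng.normal(0.0, noise_sd, n_teachers)

# Pearson correlation between the two years' scores for the same teachers.
r = np.corrcoef(year1, year2)[0, 1]
print(f"year-to-year correlation: r = {r:.2f}")  # lands near 0.2-0.3
```

If the scores mostly measured a stable property of the teacher, r would be close to 1; at 0.2 to 0.3, most of what the score captures in any one year is noise.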
Unless something obvious escaped me, the authors do not appear to mention any study of how teachers’ IVA scores vary over time or from class to class, even though they had every student’s DC-CAS scores from 2007 through the present (see footnote, page 7).
In neither report do they acknowledge the possibility of cheating by adults (or students).
They do acknowledge on page 2 that a 2008 study found low correlations between proficiency gains and value-added estimates for individual schools in DCPS from 2005 to 2007. They attempt to explain that low correlation as the result of “changes in the compositions of students from one year to the next” — which I doubt. I suspect the real reason is that neither one is a very good measure.
They also don’t mention anything about correlations between value-added scores and classroom-observation scores. From the one year of data that I received, this correlation is also very low. It is possible that this correlation is tighter today than it used to be, but I would be willing to wager tickets to a professional DC basketball, hockey, or soccer game that it’s not over 0.4.
The authors acknowledge that “[t]he DC CAS is not specifically designed for users to compare gains across grades.” Which means they probably shouldn’t be doing it. It’s also the case that many, many people do not feel that the DC-CAS measures much of anything useful except the socio-economic status of the students’ parents.
In any case, the mathematical model they have made may be wonderful, but real data so far suggests that it does not predict anything useful about teaching and learning.
Published on January 21, 2014.

Charter Schools Have Failed to Close the ‘Achievement Gap’ Their Backers Claimed They Would Crush

I am reposting an article that Diane Ravitch brought to my attention, but I’m deleting the crappy and incorrect headline. I am emphasizing a few parts that I think are key, since I think the Levines “buried the lede,” as a reporter would say.

They also could have used a few graphs to illustrate what they meant.

By Adeline Levine and Murray Levine

Special to The News

October 13, 2013

Charter schools are hailed by the U.S. Department of Education, by major foundations, and by corporate and philanthropic organizations as the prime solution to the alleged failures of traditional public schools to educate children, failures underscored by the poor performance of their minority and disadvantaged students.

Four large-scale studies by two respected research institutes, CREDO and Mathematica, comparing charter schools with traditional public schools were reported in 2013. Major newspapers, apparently relying on the press releases, trumpeted that charter schools had shown astonishing results in closing the achievement gap between disadvantaged and not-disadvantaged students.

Achievement tests are the major yardstick used to assess schools. CREDO conducted three national evaluation studies comparing the achievement test performance of students in charter schools with matched students in traditional public schools. Mathematica studied middle schools in the well-regarded KIPP charter school chain. All four studies compared the amount of “gain” or “growth” in achievement test scores over a school year, not the actual levels of achievement. Even with gains, the achievement level may still be well below norms for the test.

Buried deep in its report, one CREDO study states, “Only when the annual learning gain of these student [minority/poverty] subgroups exceeds that of white or non-poverty students can progress on closing the achievement gap be made.” Charter school minority and economically disadvantaged students made some very small gains in reading and math when compared to matched controls in public schools. However, the difference in achievement growth between white non-poverty students in traditional public schools and minority/poverty students in charter schools is the most relevant comparison.

The average gain, in standard deviation units, for minority or poverty students in charter schools, compared with their counterparts in traditional public schools, was about 0.03. The average gain for non-minority, non-poverty white students in traditional public schools, however, was 0.80: up to 27 times (0.80 / 0.03 ≈ 27) the gain for poverty or minority students in charter schools. The Mathematica study of KIPP middle schools showed similarly large gaps in gains.

The CREDO Institute states: “For many charter school supporters, improving education outcomes for historically disadvantaged [students] is the paramount goal.” While all of the groups in both kinds of schools show gains over the years, the achievement gap remains, as it always has when students from homes in poverty are compared to non-poor ones, in this country and internationally. The “paramount goal” to level the field is not being met by charter schools.

Charter school advocates attribute the educational difficulties of disadvantaged students in traditional public schools to ineffective, uncaring teachers, their unions and bureaucratic restrictions. They insist that having a great teacher in every classroom will overcome every limitation. They claim that low expectations for disadvantaged children are the major problem, not the complex negative effects of poverty.

Charter schools are not hindered in their selection of teachers by bureaucratic restrictions, nor are charter school teachers prevented by union restrictions from pursuing the charter school programs. Allegedly, charter schools have great teachers in every classroom. If there are “no excuses” when disadvantaged students do less well than non-disadvantaged students in traditional public schools, the same rules should apply to charter schools.

What excuse do charters have for the persistent achievement test gap between disadvantaged students in charter schools compared to non-disadvantaged students in the public schools? And why continue down a path where the numbers show that the national policy favoring charter schools will make the majority-minority gap worse?

Charter schools are protected by powerful, wealthy individuals and foundations that profess free-market choice and hold anti-union sentiments and pro-privatization beliefs; some advocates are pursuing profit motives. The advocates seem not to be influenced by data despite their insistence they are data-driven.

The reality is that problems associated with a history of discrimination and the complex negative effects of poverty are not easily solved. The solutions require an enormous, long-term societal commitment. The current reforms, however, threaten the very existence of our public schools, which have long been the envy of the entire world.

Adeline Levine, Ph.D., is professor emerita (sociology) at the University at Buffalo. A former chairwoman of the department, she is the author of “Love Canal: Science, Politics and People” and of other books and articles on educational subjects. Murray Levine, J.D., Ph.D., is distinguished service professor (psychology) emeritus at UB. He has published extensively on educational subjects.

Published on October 19, 2013.

DCPS Administrators Won’t or Can’t Give a DCPS Teacher the IMPACT Value-Added Algorithm

Does this sound familiar?

A veteran DCPS math teacher at Hardy MS has been asking DCPS top administrator Jason Kamras for details, in writing, on exactly how the “Value-Added” portion of the IMPACT teacher evaluation system is calculated for teachers. To date, she has still not received an answer.

How the “Value-Added” portion of IMPACT actually works is rather important: for a lot of teachers, it’s about half of their annual score. The general outline of the VAM is explained in the IMPACT documents, but none of the details. Supposedly, all of a teacher’s students’ scores from this April are compared with those same students’ scores from last April; then the students’ socio-economic status and current achievement levels are somehow taken into account, and the teacher is labeled with a single number that supposedly shows how much his or her students gained during that year relative to all other similar students.
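For what it’s worth, here is a minimal sketch of how a growth-model calculation of this general type could work. This is an illustration built entirely on my own assumptions; it is emphatically not the actual DCPS/Mathematica model, which is exactly what nobody will put in writing:

```python
import numpy as np

# Invented student records: last April's score, an SES indicator, a
# teacher assignment, and this April's score. None of this is DCPS data.
rng = np.random.default_rng(1)
n = 1000
prior = rng.normal(50, 10, n)        # last year's scale score
ses = rng.binomial(1, 0.5, n)        # 1 = economically disadvantaged
teacher = rng.integers(0, 20, n)     # which of 20 teachers taught the student
current = 5 + 0.9 * prior - 2.0 * ses + rng.normal(0, 8, n)

# Step 1: regress this year's scores on last year's scores and SES to
# get a "predicted" score for every student.
X = np.column_stack([np.ones(n), prior, ses])
beta, *_ = np.linalg.lstsq(X, current, rcond=None)
predicted = X @ beta

# Step 2: a teacher's "value-added" is the average amount by which his
# or her students beat (or miss) their predicted scores.
residuals = current - predicted
value_added = np.array([residuals[teacher == t].mean() for t in range(20)])
print(np.round(value_added, 2))
```

Real models pile on more terms (multiple prior years, peer and school effects, measurement-error corrections), but the predict-then-compare skeleton is the same.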

But how those comparisons are made is totally unclear. So far I have heard that the algorithm, or mathematical procedure, being used is designed so that exactly half of all teachers are deemed, in non-technical terms, ‘below average’ in that regard — which of course will set them up to be fired sooner or later. Whether that’s an accurate description of the algorithm, I don’t know. Ms. Bax told me she heard that DCPS expects teachers with almost all Below-Basic students to achieve tremendous gains with those students; my own educational research indicates the opposite.
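Whatever the exact algorithm, the half-below-average property is automatic in any score that is measured relative to the current year’s district mean, as regression residuals are. A tiny illustration with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)
# Suppose every teacher in the district produces large raw gains one year.
raw_gains = rng.normal(15.0, 3.0, 4000)   # invented: everyone gains a lot

# A residual-style score is centered on the district mean, so half the
# district is labeled "below average" no matter how large the raw gains are.
centered = raw_gains - raw_gains.mean()
print(f"fraction below average: {(centered < 0).mean():.2f}")  # ~0.50
```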

In any case, Kamras and his other staff haven’t put any details in writing. Yet.

At one point Kamras writes that “we use a regression equation to determine this score.” OK: Bax and Kamras and I all teach, or taught, math. We all understand a fair amount about regression equations. But there are lots of such equations! Just saying that a regression equation is involved is like saying Newton “used algebraic equations” in writing “Principia Mathematica,” or that Tolstoy used “words and sentences” when he wrote “War and Peace.” And just about equally informative.

I attach a series of emails between Ms. Bax, an 8th-grade math teacher, and Mr. Kamras and a few other people in the DCPS central administration. The emails were supplied to me, in frustration, by Ms. Bax. I used color to try to make it clear who was writing what: green denotes Ms. Bax, and reds, browns, and pinks denote the various administrators. Note that this exchange of emails started in September of 2010.

Perhaps publicizing this exchange might prod Mr. Kamras to reveal the details of a system that has already been shown by Mathematica (the same group that designed it) to be highly flawed and unreliable?

=========================================

From: “Bax, Sarah (MS)” <sarah.bax@dc.gov>
Date: Mon, 13 Sep 2010 17:12:41 -0400

To: Jason Kamras <jason.kamras@dc.gov>
Subject: Impact

Jason,

I hope the year is off to a great start for you.

I am writing concerning the IMPACT IVA score calculations.  I am very  frustrated with this process on a number of fronts.  First, I would like to have an actual explanation of how the growth scores are calculated. As they have been explained, the process seems quite flawed in actually measuring teacher effectiveness.  Further, I would like to know if teachers have any recourse in having their scores reexamined, etc.

Last year, 89% of the eighth graders at Hardy scored in the Proficient or Advanced range in Mathematics.  As the sole eighth grade mathematics teacher last year, I taught almost all of the students except for a handful that were pulled for special education services.  Beyond this accomplishment, I am extremely proud to report that 89% of our Black students were at that Proficient or Advanced level.

With statistics like these, I take issue with a report that scores my IVA at 3.4 (to add insult to this injury, even under your system if my students had earned just one-tenth more of a growth point, my IVA would be a 3.5 and I would be considered highly effective).

Frankly, I teach among the best of the best in DCPS– with very few of us rated highly effective.  The IMPACT scoring system has had a terrific negative impact on morale at our school.

Kindly,

Sarah Bax

———————————————–

From: Kamras, Jason (MS)
Sent: Tue 9/14/2010 7:50 AM
To: Bax, Sarah (MS)
Subject: Re: Impact

Hi Sarah,

I’m disappointed to hear how frustrated you are. Can you give me a call at 202-321-1248 to discuss?

Thanks,

Jason

Jason Kamras

Director, Teacher Human Capital

——————————————————–

Jason,

I really do not have the time to call to discuss my concerns.  If you would forward the requested information regarding specific explanation about the growth scores calculation process I would be most obliged.

I would like specifics about the equation.  Please forward my inquiry to one of your technical experts so that he or she may email me with additional information about the mathematical model.

Kindly,

Sarah

—————————————————

From: Barber, Yolanda (OOC)
Sent: Mon 12/20/2010 2:12 PM
To: Bax, Sarah (MS)

Subject: FW: IMPACT Question

Ms. Bax,

Sorry for the barrage of emails, but I received a response concerning your question.  Please read the response below.  I hope this helps.  Please let me know if you’d like to continue with our session on the 4th.  Thanks again.

Best!

Yolanda Barber

Master Educator | Secondary Mathematics

District of Columbia Public Schools

Office of the Chancellor

——————————————————–

From: Rodberg, Simon (OOC)
Sent: Monday, December 20, 2010 2:05 PM

To: Barber, Yolanda (OOC); Lindy, Benjamin (DCPS); Gregory, Anna (OOC)
Subject: RE: IMPACT Question

Hi Yolanda,

We will be doing more training, including full information on Ms. Bax’s question, this spring. We’d like to give a coherent, full explanation at that time rather than give piecemeal  answers to questions in the meantime.

Thanks, and I hope you enjoy your break.

Simon

Simon Rodberg

Manager, IMPACT Design, Office of Human Capital

———————–

Yolanda,

I got notice a couple of weeks ago that I have jury duty on your office hours day at Hardy so I won’t be able to make the appointment.  I’m sorry to miss you, but appreciate your efforts to send my concerns to the appropriate office.

The response below is obviously no help at all as it clearly indicates the Office of Human Capital is unwilling to answer my specific question regarding the calculations involved in determining my rating.  I believe my only request was to have an accurate description of how the expected growth score is calculated.  My question has been left unanswered since last spring.  Can you imagine if a student of mine asked how his or her grade was determined and I told them I couldn’t provide a coherent explanation right now, but see me in a year?

Thanks again for your help.  I look forward to meeting you in person in the future!

Kindly,

Sarah

——————————-

From: Bax, Sarah (MS)
Sent: Tuesday, December 21, 2010 11:09 AM
To: Rodberg, Simon (OOC)

Cc: Henderson, Kaya (OOC)
Subject: FW: Appointment #80 (from DCPS Master Educator Office Hours Signup)

Mr. Rodberg,

I am requesting a response to my inquiry below:    ‘explanation of actual algorithm to determine predicted growth score’.

Kindly,

S. Bax

—————————-

Ms. Bax,

What’s a good phone number to reach you on? I think it would be easiest to explain over the phone.

Thank you, and happy holidays.

Simon

Simon Rodberg

Manager, IMPACT Design, Office of Human Capital

—————————–

From: Kamras, Jason (DCPS) [mailto:jason.kamras@dc.gov]
Sent: Sun 12/26/2010 1:42 PM

To: Bax, Sarah (MS)
Cc: Henderson, Kaya (OOC)
Subject: Value-added calculation

Hi Sarah,

The Chancellor informed me that you’re looking for a more detailed explanation of how your “predicted” score is calculated. In short, we use a regression equation to determine this score. If you’d like to know more about the specifics of the equation, please let me know and I can set up a time for you to meet with our technical experts.

Happy New Year!

Jason

Jason Kamras

Chief, Office of Human Capital

—————————————

On 12/27/10 12:17 PM, “Bax, Sarah (DCPS-MS)” <sarah.bax@dc.gov> wrote:

Jason,

I have requested an explanation of the value-added calculation since September, with my initial request beginning with you (see email exchange pasted below).  I would like specifics about the equation.  Please forward my inquiry to one of your technical experts so that he or she may email me with additional information about the mathematical model.

Kindly,

Sarah

————————————

On 12/27/10 12:23 PM, “Kamras, Jason (DCPS)” <jason.kamras@dc.gov> wrote:

My deepest apologies, Sarah. I’ll set this up as soon as I get back.

Jason Kamras

Chief, Office of Human Capital

-----Original Message-----

From: Kamras, Jason (DCPS) [mailto:jason.kamras@dc.gov]
Sent: Tue 1/25/2011 11:02 PM

To: Bax, Sarah (MS)
Subject: FW: Value-added calculation

Hi Sarah,

I just wanted to follow up on this. When could we get together to go over the equation?

Hope you’re well,

Jason

Jason Kamras

Chief, Office of Human Capital

———————————-

From: Bax, Sarah (MS)
Sent: Fri 1/28/2011 1:15 PM
To: Kamras, Jason (DCPS)

Subject: RE: Value-added calculation

Jason,

I really would just like something in writing that I can go over– and then I could contact you if I have questions.  It is difficult to carve out meeting time in my schedule.

Kindly,

Sarah

————————–

From: “Bax, Sarah (DCPS-MS)” <sarah.bax@dc.gov>
Date: Thu, 10 Feb 2011 14:05:43 -0500

To: Jason Kamras <jason.kamras@dc.gov>
Subject: FW: Value-added calculation

Jason,

I didn’t hear back from you after this last email.

Kindly,

Sarah

———-

From: Kamras, Jason (DCPS) [mailto:jason.kamras@dc.gov]
Sent: Thu 2/10/2011 6:00 PM

To: Bax, Sarah (MS)
Subject: Re: Value-added calculation

Ugh. So sorry, Sarah. The only thing we have in writing is the technical report, which is being finalized. It should be available on our website this spring. Of course, let me know if you’d like to meet before then.

Best,

Jason

Jason Kamras

Chief, Office of Human Capital

————————————-

On Feb 25, 2011, at 9:29 PM, “Bax, Sarah (MS)” <sarah.bax@dc.gov> wrote:

Jason,

How do you justify evaluating people by a measure [for] which you are unable to provide explanation?

-Sarah

————–

Sat, February 26, 2011 11:25:33 AM

Sarah,

To be clear, we can certainly explain how the value-added calculation works. However, you’ve asked for a level of detail that is best explained by our technical partner, Mathematica Policy Research. When I offered you the opportunity to sit down with them, you declined.

As I have also noted previously, the detail you seek will be available in the formal Technical Report, which is being finalized and will be posted to our website in May. I very much look forward to the release, as I think you’ll be pleased by the thoughtfulness and statistical rigor that have guided our work in this area.

Finally, let me add that our model has been vetted and approved by a Technical Advisory Board of leading academics from around the country. We take this work very seriously, which is why we have subjected it to such extensive technical scrutiny.

Best,
Jason

Jason Kamras
Chief, Office of Human Capital
——————————-

Jason,

To be clear, I did not decline the opportunity to speak with your technical partner.  On December 27th I wrote to you, “I would like specifics about the equation.  Please forward my inquiry to one of your technical experts so that he or she may email me with additional information about the mathematical model.” I never received a response to this request.

In addition, both you and Mr. Rodberg offered to provide information about the equation to me on the phone or in person, but have yet to agree to send any information in writing.  You have stated, “I just wanted to follow up on this. When could we get together to go over the equation?”  Mr. Rodberg wrote, “What’s a good phone number to reach you on? I think it would be easiest to explain over the phone.”

Why not transpose the explanation you would offer verbally to an email?  Please send in writing the information that you do know about how the predicted growth score is calculated.  For instance, I would expect you are familiar with what variables are considered and which data sources are used to determine their value.  Let me know what you would tell me if I were to meet with you.

As a former teacher, you must realize the difficulty in arranging actual face-time meetings given my teaching duties.  And as a former mathematics teacher, I would imagine you could identify with my desire to have an understanding of the quantitative components of my evaluation.

Sincerely,
Sarah

Published on February 27, 2011.