My Testimony Yesterday Before DC City Council’s Education Subcommittee ‘Roundtable’

Testimony of Guy Brandenburg, retired DCPS mathematics teacher before the DC City Council Committee on Education Roundtable, December 14, 2013 at McKinley Tech

 

Hello, Mr. Catania, audience members, and any other DC City Council members who may be present. I am a veteran DC math teacher who began teaching in Southeast DC about 35 years ago, and spent my last 15 years of teaching at Alice Deal JHS/MS. I taught everything from remedial 7th grade math through pre-calculus, as well as computer applications.

Among other things, I coached MathCounts teams at Deal and at Francis JHS, with my students often taking first place against all other public, private, and charter schools in the city and going on to compete against other state teams. As a result, I have several boxes full of trophies and some teaching awards.

Since retiring, I have been helping Math for America – DC (which is totally different from Teach for America) in training and mentoring new but highly skilled math teachers in DC public and charter schools; operating a blog that mostly concerns education; teaching astronomy and telescope making as an unpaid volunteer; and also tutoring [as a volunteer] students at the school closest to my house in Brookland, where my daughter attended kindergarten about 25 years ago.

But this testimony is not about me, so I won’t read the previous paragraphs aloud.

My testimony is about how the public is being deceived with bogus statistics into thinking things are getting tremendously better under mayoral control of schools and under the chancellorships of Rhee and Henderson.

In particular, I want to show that the Value-Added measurements that are used to evaluate teachers are unreliable, invalid, and do not help teachers improve their methods of instruction. To the contrary: Individual Value-Added (IVA) measurements are driving a number of excellent, veteran teachers to resign or be fired from DCPS and go elsewhere.

I will try to show this mostly with graphs made by me and others, because in statistics, a good scatter plot is worth many a word or formula.

John Ewing, who is the president of Math for America and is a former executive director of the American Mathematical Society, wrote that VAM is “mathematical intimidation” and not reliable. I quote:

[Image 1: John Ewing quote]

 

In case you were wondering how the formula goes, this is all that one of my colleagues was able to pry from Jason Kamras after SIX MONTHS of back-and-forth emails asking for additional information:

[Image 2: DCPS IVA/VAM formula]

One problem with that formula is that nobody outside a small group of highly paid consultants has any idea what the values of any of those variables are. What’s more, many of those variables are themselves composed of lists or matrices (“vectors”) of other variables.

In not a single case has the Office of Data and Accountability sat down with a teacher and explained, in detail, exactly how a teacher’s score is calculated, student by student, class by class, test score by test score.

Nor has that office shared that data with the Washington Teachers’ Union.

It’s the mathematics of intimidation, lack of accountability, and obfuscation.

I would ask you, Mr. Catania, to ask the Office of Data and Accountability to share with the WTU all IMPACT scores for every single teacher, including all the sub-scores, such as those for IVA and classroom observations.

To put a personal touch to my data, one of my former Deal colleagues shared with me that she resigned from DCPS specifically because her IVA scores kept bouncing around with no apparent reason. In fact, the year that she thought she did her very best job ever in her entire career – that’s when she earned her lowest value-added score. She now teaches in Montgomery County and recently earned the distinction of becoming a National Board Certified teacher – a loss for DCPS students, but a gain for those in Maryland.

Bill Turque of the Washington Post documented the case of Sarah Wysocki, an excellent teacher with outstanding classroom observation results, who was fired by DCPS for low IVA scores. She is now happily teaching in Virginia. I am positive that these two examples can be multiplied many times over.

Now let’s look at some statistics. As I mentioned, in many cases, pictures and graphs speak more clearly than words or numbers or equations.

My first graph shows completely random data points that should display no correlation at all; that is, they are not linked to each other in any way. I had Excel generate two lists of random numbers, and I plotted those as the x- and y-variables on the following graph.

[Image 3: completely random points]

I also asked Excel to draw a line of best fit and to calculate the correlation coefficient R and R-squared. As you can see, R-squared is very low, about 0.08 (eight percent); R, its square root, is about 29 percent.

Remember, those are completely random numbers generated by Excel.
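The Excel experiment is easy to reproduce in any environment. Here is a minimal sketch in Python, with NumPy standing in for Excel’s random-number and correlation functions (the sample size and seed are my own choices, not anything from the testimony):

```python
import numpy as np

# Reproduce the Excel experiment: two columns of independent random
# numbers should show essentially no correlation with each other.
rng = np.random.default_rng(0)
x = rng.random(10_000)
y = rng.random(10_000)

# Pearson correlation coefficient between x and y
r = np.corrcoef(x, y)[0, 1]
print(f"R = {r:.4f}, R-squared = {r**2:.4f}")
```

With a large sample like this, R lands very near zero. With only the few dozen points a typical spreadsheet plot uses, chance alone will commonly produce a small nonzero R-squared, which is why even genuinely unrelated data rarely scores exactly zero.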

Now let’s look at a very strong correlation of real numbers: poverty rates and student achievement in a number of states. The first one is for Nebraska.

[Image 4: Nebraska poverty vs. achievement]

R would be about 94% in this case – a very strong correlation indeed.

The next one is for Wisconsin:

[Image 5: Wisconsin poverty vs. achievement]

Again, quite a strong correlation – a negative one: the poorer the student body, the lower the average achievement, which we see repeated in every state and every country in the world. Including DC, as you can see here:

 

[Image 6: poverty vs. proficiency in DC]

Now, how about those Value-Added scores? Do they correlate with classroom observations?

Mostly, we don’t know, because the data is kept secret. However, someone leaked to me the IVA and classroom observation scores for all DCPS teachers for SY 2009-10, and I plotted them. Is this a strong correlation, or not?

[Image 7: VAM versus TLF scores in DC IMPACT, 2009–10]

I would say this looks pretty much like no correlation at all. What on earth are these two things measuring? It certainly gives teachers no assistance on what to improve in order to help their students learn better.

And how stable are Value-Added measurements over time? If they are stable, that would mean that we might be able to use them to weed out the teachers who consistently score at the bottom, and reward those who consistently score at the top.

Unfortunately, since DCPS keeps all the data hidden, we don’t exactly know how stable these scores are here. However, the New York Times leaked the value-added data for NYC teachers for several years, and we can look at those scores to see.

Here is one such graph:

[Image 8: value-added scores for two successive years, NYC (Rubenstein)]

That is very close to random.

How about teachers who teach the same subject to two different grade levels (say, fourth-grade math and fifth-grade math)? Again, random points:

[Image 9: VAM for the same subject at different grade levels, NYC (Rubenstein)]

One thing that all veteran teachers agree on is that they stunk at their job during their first year and got a lot better their second year. This should show up on value-added graphs of year 1 versus year 2 scores for the same teachers, right?

Wrong.

Take a look:

[Image 10: VAM for first year vs. second year, same teachers, NYC (Rubenstein)]
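Why would scores for the same teachers look random from one year to the next? A toy simulation makes the point. This is an illustrative model of my own, not DCPS’s or New York’s actual formula: each teacher gets a stable “true” effect, and each year’s measured score adds a large dose of noise. When the noise swamps the stable part, a plot of year 1 against year 2 collapses toward a random cloud:

```python
import numpy as np

# Hypothetical model (NOT the actual VAM formula): measured score =
# stable teacher effect + large yearly noise.  When noise dominates,
# two years of scores for the same teachers barely correlate.
rng = np.random.default_rng(1)
n_teachers = 5_000
true_effect = rng.normal(0.0, 1.0, n_teachers)           # stable part
year1 = true_effect + rng.normal(0.0, 3.0, n_teachers)   # noisy measure
year2 = true_effect + rng.normal(0.0, 3.0, n_teachers)   # same noise level

r = np.corrcoef(year1, year2)[0, 1]
print(f"year-to-year R = {r:.2f}")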

One last point:

Mayor Gray and chancellors Henderson and Rhee all claim that education in DC only started improving after mayoral control of the schools, starting in 2007.

Graphs of the NAEP results show a different story. We won’t know until next week how DCPS and the charter schools did, separately, for 2013, but the following graphs show that reading and math scores for DC fourth- and eighth-graders have been rising fairly steadily for nearly twenty years, long before mayoral control or the appointments of our two chancellors (Rhee and Henderson).

 

 

[Image 13: NAEP 8th-grade reading scale scores since 1998, many states incl. DC]

[Image 12: NAEP 4th-grade reading scale scores since 1993, many states incl. DC]

[Image 11: NAEP 8th-grade math average scale scores since 1990, many states incl. DC]

 

 

John Merrow on the Rhee-Henderson-Caveon Whitewash

John Merrow has a hard-hitting article on the multiple lies uttered by Michelle Rhee and her best friend, Kaya Henderson, and the whitewash they hired Caveon to perform. Here is a quote:

……………….

At the April 18th hearing Chairman Catania alluded to what he called Caveon’s ‘positive’ role in helping expose the Atlanta cheating.  That is an overstatement, to put it mildly. Prior to its work for DCPS, Caveon had been hired by the (so-called) “Blue Ribbon Committee” established to look into allegations of cheating in Atlanta.  Caveon looked–and reported finding nothing wrong in what turned out to be the epicenter of cheating by adults on standardized tests. [8] Dr. Fremer told me that while he ‘knew’ there was widespread cheating going on, that was not mentioned in his final report. “We did not try to find out who was cheating,” he said.  “Our purpose was to rank order the schools beginning with those with the most obvious problems (of unbelievably dramatic score increases), in order to make the task of investigating more manageable.”   In other words, Caveon produced a list!

Dr. Fremer admitted that he knew some Atlanta teachers were lying to him, but he said his hands were tied because he didn’t have subpoena power.

Georgia’s investigators are contemptuous of Caveon’s efforts, labelling it a ‘so-called investigation.’  Richard Hyde, one of the three leaders of the investigation, told me that “either by coincidence or design, it was certain to fail.”  Mr. Hyde denied that Caveon needed subpoena power because its investigators were representing a governmental agency, and under Georgia law it is a felony to lie to someone representing the government.  What’s more, Mr. Hyde said, Caveon had a fundamental conflict of interest–it was investigating its employer, at least indirectly, because the “Blue Ribbon Commission” (which Mr. Hyde dismisses as “The Whitewash Commission”) included a deputy superintendent of schools.

Robert Wilson, another leader of the Georgia investigation, is even blunter. Of course Caveon didn’t find cheating because “Caveon couldn’t find its own ass with either hand,” he scoffed.  Why anyone would hire Caveon was, he said, beyond him–unless they didn’t want to find out anything.

……………

3. Just how weak was Mr. Willoughby’s effort?  As we reported on Frontline in January, the Inspector General’s investigation is remarkable for what it did not investigate. He chose not to investigate 2008, the year with the most erasures. He chose not to investigate Aiton, the school Dr. Sanford had singled out for special attention because of its high wrong to right erasures. He did not examine the test answer sheets or perform an electronic analysis. And he did not investigate J.O Wilson – a school with excessive WTR erasures in 100% of its classrooms – simply because Chancellor Henderson had assured him that it was a good school.

Although more than half of DC’s schools had been implicated, he focused only on Noyes Education Campus, the school that USA Today had made the centerpiece of its investigation. Over the course of the next 17 months, his team interviewed just 60 administrators, teachers, and parents, all from Noyes Education Campus. (Atlanta investigators interviewed over 2,000 people and reviewed 800,000 documents). Rather than seek outside experts (as Atlanta investigators had), he relied heavily on information from Caveon, which had been, of course, in the employ of DCPS. He did not ask to perform erasure analysis but relied on interviews–sometimes conducted over the phone.

Without the power to put people under oath, he told City Council member McDuffie in February that he just asked them if they had cheated. If they said they hadn’t, that was the end of it, because, he explained, he “wasn’t conducting a fishing expedition.” Test monitors sent by the central office to patrol Noyes for the 2010 test told Mr. Willoughby that they had been barred from entering classrooms. School officials denied that charge–and Mr. Willoughby believed them, not the monitors.
