Part Two: Cheating in DCPS

DC Education Reform Ten Years After, 

Part 2: Test Cheats

Richard P Phelps

Ten years ago, I worked as the Director of Assessments for the District of Columbia Public Schools (DCPS). For temporal context, I arrived after the first of the infamous test cheating scandals and left just before the incident that spawned a second. Indeed, I filled a new position created to both manage test security and design an expanded testing program. I departed shortly after Vincent Gray, who opposed an expanded testing program, defeated Adrian Fenty in the September 2010 DC mayoral primary. My tenure coincided with Michelle Rhee’s last nine months as Chancellor. 

The recurring test cheating scandals of the Rhee-Henderson years may seem extraordinary but, in fairness, DCPS was more likely than the average US school district to be caught because it received a much higher degree of scrutiny. Given how tests are typically administered in this country, the incidence of cheating is likely far greater than news accounts suggest, for several reasons: 

·      in most cases, those who administer tests—schoolteachers and administrators—have an interest in their results;

·      test security protocols are numerous and complicated yet are left in the hands of ordinary, non-expert school personnel, which guarantees inconsistent application across schools and over time; 

·      after-the-fact statistical analyses are not legal proof—the odds of a given number of wrong-to-right erasures in a single classroom on a paper-and-pencil test being coincidental may be a thousand to one, but one-in-a-thousand is still legally plausible (a simplified sketch of such a flagging calculation follows this list); and

·      after-the-fact investigations based on interviews are time-consuming, scattershot, and uneven. 
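
For readers unfamiliar with how such statistical flags are generated, here is a minimal sketch of the kind of calculation an erasure analysis involves. All of the numbers below (district baseline, standard deviation, classroom size, classroom average) are invented for illustration, and real contractor methods are more elaborate.

```python
# Simplified, illustrative "erasure analysis" flag: compare a classroom's average
# wrong-to-right (WTR) erasure count to a district baseline. All numbers hypothetical.
from statistics import NormalDist

district_mean_wtr = 1.2   # hypothetical district average WTR erasures per answer sheet
district_sd_wtr = 1.1     # hypothetical standard deviation of that average
n_sheets = 25             # answer sheets in the flagged classroom
classroom_mean_wtr = 1.9  # hypothetical classroom average

# z-score of the classroom mean under a rough normal approximation.
standard_error = district_sd_wtr / n_sheets ** 0.5
z = (classroom_mean_wtr - district_mean_wtr) / standard_error
p_by_chance = 1 - NormalDist().cdf(z)

print(f"z = {z:.2f}; chance probability ≈ {p_by_chance:.4f} "
      f"(about 1 in {1 / p_by_chance:,.0f})")
# A result on the order of one in a thousand is a red flag for investigators,
# but, as noted above, "improbable" is not the same as legal proof.
```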

Still, there were measures the Rhee-Henderson administrations could have adopted to substantially reduce the incidence of cheating, but they chose none that might have been effective. Rather, they dug in their heels, insisted that only a few schools had issues and that those had been thoroughly resolved, and repeatedly denied any systematic problem.  

Cheating scandals

From 2007 to 2009, rumors percolated of an extraordinary level of wrong-to-right erasures on the test answer sheets at many DCPS schools. “Erasure analysis” is one among several “red flag” indicators that testing contractors calculate to monitor cheating. The testing companies take no responsibility for investigating suspected test cheating, however; that responsibility belongs to the customer: the local or state education agency. 

In her autobiographical account of her time as DCPS Chancellor, Michelle Johnson (nee Rhee) wrote (p. 197):

“For the first time in the history of DCPS, we brought in an outside expert to examine and audit our system. Caveon Test Security – the leading expert in the field at the time – assessed our tests, results, and security measures. Their investigators interviewed teachers, principals, and administrators.

“Caveon found no evidence of systematic cheating. None.”

Caveon, however, had not looked for “systematic” cheating. All they did was interview a few people at several schools where the statistical anomalies were more extraordinary than at others. As none of those individuals would admit to knowingly cheating, Caveon branded all their excuses as “plausible” explanations. That’s it; that is all that Caveon did. But, Caveon’s statement that they found no evidence of “widespread” cheating—despite not having looked for it—would be frequently invoked by DCPS leaders over the next several years.[1]

Incidentally, prior to the revelation of its infamous decades-long, systematic test cheating, the Atlanta Public Schools had similarly retained Caveon Test Security and was, likewise, granted a clean bill of health. Only later did the Georgia state attorney general swoop in and reveal the truth. 

In its defense, Caveon would note that several cheating prevention measures it had recommended to DCPS were never adopted.[2] None of the cheating prevention measures that I recommended were adopted, either.

The single most effective means for reducing in-classroom cheating would have been to rotate teachers on test days so that no teacher administered a test to his or her own students. It would not have been that difficult to randomly assign teachers to different classrooms on test days.
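
As a rough sketch of how simple such an assignment could be (the teacher names below are invented, and real scheduling would of course have to respect building and staffing constraints), a random shuffle that bars every teacher from proctoring his or her own classroom takes only a few lines:

```python
# Minimal sketch of test-day teacher rotation: reshuffle until no teacher
# is assigned to proctor his or her own classroom (a random derangement).
import random

teachers = ["Adams", "Baker", "Chen", "Diaz", "Evans"]   # hypothetical roster

def rotate_proctors(roster):
    """Return a test-day assignment: classroom's own teacher -> proctoring teacher."""
    assignment = roster[:]
    while any(a == b for a, b in zip(assignment, roster)):
        random.shuffle(assignment)
    return dict(zip(roster, assignment))

print(rotate_proctors(teachers))
```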

The single most effective means for reducing school administrator cheating would have been to rotate test administrators on test days so that none managed the test materials for their own schools. The visiting test administrators would have been responsible for keeping test materials away from the school until test day, distributing sealed test booklets to the rotated teachers on test day, and collecting re-sealed test booklets at the end of testing and immediately removing them from the school. 

Instead of implementing these, or a number of other feasible and effective test security measures, DCPS leaders increased the number of test proctors, assigning each of a few dozen or so central office staff a school to monitor. Those proctors could not reasonably manage the volume of oversight required. A single DC test administration could encompass a hundred schools and a thousand classrooms.

Investigations

So, what effort, if any, did DCPS make to counter test cheating? They hired me, but then rejected all my suggestions for increasing security. Also, they established a telephone tip line. Anyone who suspected cheating could report it, even anonymously, and, allegedly, their tip would be investigated. 

Some forms of cheating are best investigated through interviews. Probably the most frequent forms of cheating at DCPS—teachers helping students during test administrations and school administrators looking at test forms prior to administration—leave no statistical residue. Eyewitness testimony is the only type of legal evidence available in such cases, but it is not just inconsistent, it may be socially destructive. 

I remember two investigations best: one occurred in a relatively well-to-do neighborhood with well-educated parents active in school affairs; the other in one of the city’s poorest neighborhoods. Superficially, the cases were similar—an individual teacher was accused of helping his or her own students with answers during test administrations. Making a case against either elementary school teacher required sworn testimony from eyewitnesses, that is, students—eight-to-ten-year olds. 

My investigations, then, consisted of calling children into the principal’s office one-by-one to be questioned about their teacher’s behavior. We couldn’t hide the reason we were asking the questions. And, even though each student agreed not to tell others what had occurred in their visit to the principal’s office, we knew we had only one shot at an uncorrupted jury pool. 

Though the accusations against the two teachers were similar and the cases against them equally strong, the outcomes could not have been more different. In the high-poverty neighborhood, the students seemed suspicious and said little; none would implicate the teacher, whom they all seemed to like. 

In the more prosperous neighborhood, students were more outgoing, freely divulging what they had witnessed. The students had discussed the alleged coaching with their parents who, in turn, urged them to tell investigators what they knew. During his turn in the principal’s office, the accused teacher denied any wrongdoing. I wrote up each interview, then requested that each student read and sign. 

Thankfully, that accused teacher made a deal and left the school system a few weeks later. Had he not, we would have required the presence in court of the eight-to-ten-year olds to testify under oath against their former teacher, who taught multi-grade classes. Had that prosecution not succeeded, the eyewitness students could have been routinely assigned to his classroom the following school year.

My conclusion? Only in certain schools is the successful prosecution of a cheating teacher through eyewitness testimony even possible. But, even where possible, it consumes inordinate amounts of time and, otherwise, comes at a high price, turning young innocents against authority figures they naturally trusted. 

Cheating blueprints

Arguably the most widespread and persistent testing malfeasance in DCPS received little attention from the press. Moreover, it was directly propagated by District leaders, who published test blueprints on the web. Put simply, test “blueprints” are lists of the curricular standards (e.g., “student shall correctly add two-digit numbers”) and the number of test items related to each standard that will appear on an upcoming test. DC had been publishing its blueprints in advance for years.
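
To make the term concrete, here is a hypothetical blueprint fragment; the standards and item counts are invented for illustration only.

```python
# Hypothetical test blueprint fragment (standards and item counts invented):
# each curricular standard is paired with the number of items devoted to it
# on the upcoming test.
blueprint_grade4_math = {
    "Standard 4.1: correctly add two-digit numbers": 6,
    "Standard 4.2: subtract two-digit numbers": 5,
    "Standard 4.3: area and perimeter of rectangles": 4,
    "Standard 4.4: identify lines, angles, and shapes": 0,  # not tested this year
}

for standard, n_items in blueprint_grade4_math.items():
    print(f"{standard:<50} {n_items:>2} items")
```

Published well before test day, a list like this tells a teacher exactly which standards will carry items, how many, and, crucially, which will carry none.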

I argued that the way DC did it was unethical. The head of the Division of Data & Accountability, Erin McGoldrick, however, defended the practice, claimed it was common, and cited its existence in the state of California as precedent. The next time she and I met for a conference call with one of DCPS’s test providers, Discovery Education, I asked their sales agent how many of their hundreds of other customers advance-published blueprints. His answer: none.

In the state of California, the location of McGoldrick’s only prior professional experience, blueprints were, indeed, published in advance of test administrations. But their tests were longer than DC’s and all standards were tested. Publication of California’s blueprints served more to remind the populace what the standards were in advance of each test administration. Occasionally, a standard considered to be of unusual importance might be assigned a greater number of test items than the average, and the California blueprints signaled that emphasis. 

In Washington, DC, the tests used in judging teacher performance were shorter, covering only some of each year’s standards. So, DC’s blueprints showed everyone well in advance of the test dates exactly which standards would be tested and which would not. For each teacher, this posed an ethical dilemma: should they “narrow the curriculum” by teaching only that content they knew would be tested? Or, should they do the right thing and teach all the standards, as they were legally and ethically bound to, even though it meant spending less time on the to-be-tested content? It’s quite a conundrum when one risks punishment for behaving ethically.

Monthly meetings were convened to discuss issues with the districtwide testing program, the DC Comprehensive Assessment System (DC-CAS)—administered to comply with the federal No Child Left Behind (NCLB) Act. All public schools, both DCPS and charters, administered those tests. At one of these regular meetings, two representatives from the Office of the State Superintendent of Education (OSSE) announced plans to repair the broken blueprint process.[3]

The State Office employees argued thoughtfully and reasonably that it was professionally unethical to advance publish DC test blueprints. Moreover, they had surveyed other US jurisdictions in an effort to find others that followed DC’s practice and found none. I was the highest-ranking DCPS employee at the meeting and I expressed my support, congratulating them for doing the right thing. I assumed that their decision was final.

I mentioned the decision to McGoldrick, who expressed surprise and speculated that it might not have been made at the highest level of the organizational hierarchy. Wasting no time, she met with other DCPS senior managers, and the proposed change was forthwith shelved. In that, and in other ways, the DCPS tail wagged the OSSE dog. 

* * *

It may be too easy to blame ethical deficits alone for the Rhee-Henderson era ed reformers’ recalcitrant attitude toward test security. The columnist Peter Greene insists that knowledge deficits among self-appointed education reformers also matter: 

“… the reformistan bubble … has been built from Day One without any actual educators inside it. Instead, the bubble is populated by rich people, people who want rich people’s money, people who think they have great ideas about education, and even people who sincerely want to make education better. The bubble does not include people who can turn to an Arne Duncan or a Betsy DeVos or a Bill Gates and say, ‘Based on my years of experience in a classroom, I’d have to say that idea is ridiculous bullshit.’”

“There are a tiny handful of people within the bubble who will occasionally act as bullshit detectors, but they are not enough. The ed reform movement has gathered power and money and set up a parallel education system even as it has managed to capture leadership roles within public education, but the ed reform movement still lacks what it has always lacked–actual teachers and experienced educators who know what the hell they’re talking about.”

In my twenties, I worked for several years in the research department of a state education agency. My primary political lesson from that experience, consistently reinforced subsequently, is that most education bureaucrats tell the public that the system they manage works just fine, no matter what the reality. They can get away with this because they control most of the evidence and can suppress it or spin it to their advantage.

In this proclivity, the DCPS central office leaders of the Rhee-Henderson era proved themselves to be no different than the traditional public-school educators they so casually demonized. 

US school systems are structured to be opaque and, it seems, both educators and testing contractors like it that way. For their part, and contrary to their rhetoric, Rhee, Henderson, and McGoldrick passed on many opportunities to make their system more transparent and accountable.

Education policy will not improve until control of the evidence is ceded to genuinely independent third parties, hired neither by the public education establishment nor by the education reform club.

The author gratefully acknowledges the fact-checking assistance of Erich Martel and Mary Levy.


Citation:  Phelps, R. P. (2020, September). Looking Back on DC Education Reform 10 Years After, Part 2: Test Cheats. Nonpartisan Education Review / Testimonials. https://nonpartisaneducation.org/Review/Testimonials/v16n3.htm


[1] A perusal of Caveon’s website clarifies that their mission is to help their clients–state and local education departments–not get caught. Sometimes this means not cheating in the first place; other times it might mean something else. One might argue that, ironically, Caveon could be helping its clients to cheat in more sophisticated ways and cover their tracks better.

[2] Among them: test booklets should be sealed until the students open them and resealed by the students immediately after; and students should be assigned seats on test day and a seating chart submitted to test coordinators (necessary for verifying cluster patterns in student responses that would suggest answer copying).

[3] Yes, for those new to the area, the District of Columbia has an Office of the “State” Superintendent of Education (OSSE). Its domain includes not just the regular public schools (i.e., DCPS), but also other public schools (i.e., charters) and private schools. Practically, it serves primarily as a conduit for funneling money from a menagerie of federal education-related grant and aid programs.

What did Education Reform in DC Actually Mean?

Short answer: nothing that would actually help students or teachers. But it’s made for well-padded resumes for a handful of insiders.

This is an important review, by the then-director of assessment. His criticisms echo the points that I have been making along with Mary Levy, Erich Martel, Adell Cothorne, and many others.



Looking Back on DC Education Reform 10 Years After, 

Part 1: The Grand Tour

Richard P Phelps

Ten years ago, I worked as the Director of Assessments for the District of Columbia Public Schools (DCPS). My tenure coincided with Michelle Rhee’s last nine months as Chancellor. I departed shortly after Vincent Gray defeated Adrian Fenty in the September 2010 DC mayoral primary.

My primary task was to design an expansion of the testing program that served the IMPACT teacher evaluation system, so that it would include all core subjects and all grade levels. Despite its fame (or infamy), the test score aspect of the IMPACT program affected only 13% of teachers: those teaching either reading or math in grades four through eight. Only those subjects and grade levels had the pre- and post-tests required for teacher “value added” measurements (VAM). Not included were most subjects (e.g., science, social studies, art, music, physical education), grades kindergarten through two, and high school.

Chancellor Rhee wanted many more teachers included. So, I designed a system that would cover more than half the DCPS teacher force, from kindergarten through high school. You haven’t heard about it because it never happened. The newly elected Vincent Gray had promised during his mayoral campaign to reduce the amount of testing; the proposed expansion would have increased it fourfold.

VAM affected teachers’ jobs. A low value-added score could lead to termination; a high score, to promotion and a cash bonus. VAM as it was then structured was obviously, glaringly flawed,[1] as anyone with a strong background in educational testing could have seen. Unfortunately, among the many new central office hires from the elite of ed reform circles, none had such a background.

Before posting a request for proposals from commercial test developers for the testing expansion plan, I was instructed to survey two groups of stakeholders—central office managers and school-level teachers and administrators.

Not surprisingly, some of the central office managers consulted requested additions or changes to the proposed testing program where they thought it would benefit their domain of responsibility. The net effect on school-level personnel would have been to add to their administrative burden. Nonetheless, all requests from central office managers would be honored. 

The Grand Tour

At about the same time, over several weeks of the late Spring and early Summer of 2010, along with a bright summer intern, I visited a dozen DCPS schools. The alleged purpose was to collect feedback on the design of the expanded testing program. I enjoyed these meetings. They were informative, animated, and very well attended. School staff appreciated the apparent opportunity to contribute to policy decisions and tried to make the most of it.

Each school greeted us with a full complement of faculty and staff on their days off, numbering several dozen educators at some venues. They believed what we had told them: that we were in the process of redesigning the DCPS assessment program and were genuinely interested in their suggestions for how best to do it. 

At no venue did we encounter stand-pat knee-jerk rejection of education reform efforts. Some educators were avowed advocates for the Rhee administration’s reform policies, but most were basically dedicated educators determined to do what was best for their community within the current context. 

The Grand Tour was insightful, too. I learned for the first time of certain aspects of DCPS’s assessment system that were essential to consider in its proper design, aspects of which the higher-ups in the DCPS Central Office either were not aware or did not consider relevant. 

The group of visited schools represented DCPS as a whole in appropriate proportions geographically, ethnically, and by education level (i.e., primary, middle, and high). Within those parameters, however, only schools with “friendly” administrations were chosen. That is, we only visited schools with principals and staff openly supportive of the Rhee-Henderson agenda. 

But even they desired changes to the testing program, whether or not it was expanded. Their suggestions covered both the annual districtwide DC-CAS (or “comprehensive” assessment system), on which the teacher evaluation system was based, and the DC-BAS (or “benchmarking” assessment system), a series of four annual “no-stakes” interim tests unique to DCPS, ostensibly offered to help prepare students and teachers for the consequential-for-some-school-staff DC-CAS.[2]

At each staff meeting I asked for a show of hands on several issues of interest that I thought were actionable. Some suggestions for program changes received close to unanimous support. Allow me to describe several.

1. Move DC-CAS test administration later in the school year. Many citizens may have logically assumed that the IMPACT teacher evaluation numbers were calculated from a standard pre-post test schedule, testing a teacher’s students at the beginning of their year together and then again at the end. In 2010, however, the DC-CAS was administered in March, three months before the school year’s end. Moreover, that single administration served as both pre- and post-test: posttest for the current school year and pretest for the following school year. Thus, before a teacher even met their new students in late August or early September, almost half of the year for which the teacher would be judged had already transpired—the three months in the Spring spent with the previous year’s teacher and almost three months of summer vacation. 

School staff recommended pushing DC-CAS administration to later in the school year. Furthermore, they advocated a genuine pre-post-test administration schedule—pre-test the students in late August–early September and post-test them in late-May–early June—to cover a teacher’s actual span of time with the students.

This suggestion was rejected because the test development firm with the DC-CAS contract required three months to score some portions of the test in time for the IMPACT teacher ratings scheduled for early July delivery, before the start of the new school year. Some small number of teachers would be terminated based on their IMPACT scores, so management demanded those scores be available before preparations for the new school year began.[3] The tail wagged the dog.
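
A quick back-of-the-envelope calculation (the month counts are approximate) shows why “almost half” of the scored year had already passed before a teacher ever met the students:

```python
# Rough timing check for the March-to-March scoring window (months approximate).
vam_window_months = 12.0        # March test to March test
months_with_teacher = 6.5       # roughly late August through mid-March
months_elsewhere = vam_window_months - months_with_teacher

print(f"{months_elsewhere / vam_window_months:.0%} of the scored window "
      "elapsed before the teacher met the students")
# ≈ 46%: the prior spring with last year's teacher plus the summer break.
```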

2. Add some stakes to the DC-CAS in the upper grades. Because DC-CAS test scores portended consequences for teachers but none for students, some students expended little effort on the test. Indeed, extensive research on “no-stakes” (for students) tests reveals that motivation and effort vary with a range of factors, including gender, ethnicity, socioeconomic class, the weather, and age. Generally, the older the student, the lower the test-taking effort. This disadvantaged some teachers in the IMPACT ratings for circumstances beyond their control: unlucky student demographics. 

Central office management rejected this suggestion to add even modest stakes to the upper grades’ DC-CAS; no reason given. 

3. Move one of the DC-BAS tests to year end. If management rejected the suggestion to move DC-CAS test administration to the end of the school year, school staff suggested scheduling one of the no-stakes DC-BAS benchmarking tests for late May–early June. As it was, the schedule squeezed all four benchmarking test administrations between early September and mid-February. Moving just one of them to the end of the year would give the following year’s teachers a more recent reading (by more than three months) of their new students’ academic levels and needs.

Central Office management rejected this suggestion probably because the real purpose of the DC-BAS was not to help teachers understand their students’ academic levels and needs, as the following will explain.

4. Change DC-BAS tests so they cover recently taught content. Many DC citizens probably assumed that, like most tests, the DC-BAS interim tests covered recently taught content, such as that covered since the previous test administration. Not so in 2010. The first annual DC-BAS was administered in early September, just after the year’s courses commenced. Moreover, it covered the same content domain—that for the entirety of the school year—as each of the next three DC-BAS tests. 

School staff proposed changing the full-year “comprehensive” content coverage of each DC-BAS test to partial-year “cumulative” coverage, so students would only be tested on what they had been taught prior to each test administration.

This suggestion, too, was rejected. Testing the same full-year comprehensive content domain produced a predictable, flattering score rise. With each DC-BAS test administration, students recognized more of the content, because they had just been exposed to more of it, so average scores predictably rose. With test scores always rising, it looked like student achievement improved steadily each year. Achieving this contrived score increase required testing students on some material to which they had not yet been exposed, both a violation of professional testing standards and a poor method for instilling student confidence. (Of course, it was also less expensive to administer essentially the same test four times a year than to develop four genuinely different tests.)
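
A toy calculation, with invented percentages, illustrates how that built-in rise works:

```python
# Toy model of the built-in score rise (all percentages invented). Each DC-BAS
# samples the full year's content; assume students answer taught material
# correctly 70% of the time and guess at 25% on material not yet taught.
p_taught, p_guess = 0.70, 0.25

# Approximate fraction of the year's content covered by each administration.
taught_fraction = {"Sept": 0.05, "Nov": 0.30, "Jan": 0.55, "Feb": 0.75}

for administration, frac in taught_fraction.items():
    expected = frac * p_taught + (1 - frac) * p_guess
    print(f"{administration}: expected score ≈ {expected:.0%}")
# Scores climb at every administration simply because more of the fixed
# content domain has been taught; no real gain over the prior year is needed.
```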

5. Synchronize the sequencing of curricular content across the District. DCPS management rhetoric circa 2010 attributed classroom-level benefits to the testing program: teachers would know more about their students’ levels and needs and could also learn from each other. Yet the only student test results teachers received at the beginning of each school year were half a year old, and most of the information they received over the course of four DC-BAS test administrations was based on not-yet-taught content.

As for cross-district teacher cooperation, there was, unfortunately, no District-wide coordination of curricular sequences. Each teacher paced their subject matter however they wished and varied topical emphases according to personal preference.

It took DCPS’s Chief Academic Officer, Carey Wright, and her chief of staff, Dan Gordon, less than a minute to reject the suggestion to standardize topical sequencing across schools so that teachers could consult with one another in real time. Tallying up the votes: several hundred school-level District educators favored the proposal, two of Rhee’s trusted lieutenants opposed it. It lost.

6. Offer and require a keyboarding course in the early grades. DCPS was planning to convert all its testing from paper-and-pencil mode to computer delivery within a few years. Yet keyboarding courses were rare in the early grades. Obviously, without systemwide keyboarding training, some students would be at a disadvantage on computer-delivered tests.

Suggestion rejected.

In all, I had polled over 500 DCPS school staff. Not only were all of their suggestions reasonable, some were essential in order to comply with professional assessment standards and ethics. 

Nonetheless, back at DCPS’ Central Office, each suggestion was rejected without, to my observation, any serious consideration. The rejecters included Chancellor Rhee; the head of the office of Data and Accountability—the self-titled “Data Lady,” Erin McGoldrick; the head of the curriculum and instruction division, Carey Wright; and her chief deputy, Dan Gordon. 

Four central office staff outvoted several hundred school staff (and my recommendations as assessment director). In each case, the changes recommended would have meant some additional work on their part, but in return for substantial improvements in the testing program. Their rhetoric was all about helping teachers and students, but the fact was that the testing program wasn’t structured to help them.

What was the purpose of my several weeks of school visits and staff polling? To solicit “buy in” from school level staff, not feedback.

Ultimately, the new testing program proposal would incorporate all the new features requested by senior Central Office staff, no matter how burdensome, and not a single feature requested by several hundred supportive school-level staff, no matter how helpful. Like many others, I had hoped that the education reform intention of the Rhee-Henderson years was genuine. DCPS could certainly have benefitted from some genuine reform. 

Alas, much of the activity labelled “reform” was just for show, and for padding resumes. Numerous central office managers would later work for the Bill and Melinda Gates Foundation. Numerous others would work for entities supported by the Gates or aligned foundations, or in jurisdictions such as Louisiana, where ed reformers held political power. Most would be well paid. 

Their genuine accomplishments, or lack thereof, while at DCPS seemed to matter little. What mattered was the appearance of accomplishment and, above all, loyalty to the group. That loyalty required going along to get along: complicity in maintaining the façade of success while withholding any public criticism of or disagreement with other in-group members.

Unfortunately, in the United States what is commonly showcased as education reform is neither a civic enterprise nor a popular movement. Neither parents, the public, nor school-level educators have any direct influence. Rather, at the national level, US education reform is an elite, private club—a small group of tightly-connected politicos and academics, a mutual admiration society dedicated to the career advancement, political influence, and financial benefit of its members, supported by a gaggle of wealthy foundations (e.g., Gates, Walton, Broad, Wallace, Hewlett, Smith-Richardson). 

For over a decade, The Ed Reform Club exploited DC for its own benefit. Local elites formed the DC Public Education Fund (DCPEF) to sponsor education projects, such as IMPACT, which they deemed worthy. In the negotiations between the Washington Teachers’ Union and DCPS concluded in 2010, DCPEF arranged a three-year, $64.5 million grant from the Arnold, Broad, Robertson, and Walton Foundations to fund a five-year retroactive teacher pay raise in return for contract language allowing teacher excessing tied to IMPACT, which Rhee promised would lead to annual student test score increases by 2012. The projected goals were not met; foundation support continued nonetheless.

Michelle Johnson (nee Rhee) now chairs the board of a charter school chain in California and occasionally collects $30,000+ in speaker fees but, otherwise, seems to have deliberately withdrawn from the limelight. Despite contributing her own additional scandals after she assumed the DCPS Chancellorship, Kaya Henderson ascended to great fame and glory with a “distinguished professorship” at Georgetown; honorary degrees from Georgetown and Catholic Universities; gigs with the Chan Zuckerberg Initiative, Broad Leadership Academy, and Teach for All; and board memberships with The Aspen Institute, The College Board, Robin Hood NYC, and Teach For America. Carey Wright is now state superintendent in Mississippi. Dan Gordon runs a 30-person consulting firm, Education Counsel, that strategically partners with major players in US education policy. The manager of the IMPACT teacher evaluation program, Jason Kamras, now works as Superintendent of the Richmond, VA public schools. 

Arguably the person most directly responsible for the recurring assessment system fiascos of the Rhee-Henderson years, then Chief of Data and Accountability Erin McGoldrick, now specializes in “data innovation” as partner and chief operating officer at an education management consulting firm. Her firm, Kitamba, strategically partners with its own panoply of major players in US education policy. Its list of recent clients includes the DC Public Charter School Board and DCPS.

If the ambitious DC central office folk who gaudily declared themselves leading education reformers were not the genuine article, who were the genuine education reformers during the Rhee-Henderson decade of massive upheaval and per-student expenditures three times those in the state of Utah? They were the school principals and staff whose practical suggestions were ignored by central office glitterati. They were whistleblowers like history teacher Erich Martel, who had documented DCPS’s manipulation of student records and phony graduation rates years before the Washington Post’s celebrated investigation of Ballou High School, and who was demoted and then “excessed” by Henderson. Or school principal Adell Cothorne, who spilled the beans on test answer sheet “erasure parties” at Noyes Education Campus and lost her job under Rhee. 

Real reformers with “skin in the game” can’t play it safe.

The author appreciates the helpful comments of Mary Levy and Erich Martel in researching this article. 


Final Listing of Completed and Failed Goals, But Some Analysis Will Follow

Part Fifteen of Many

 

Here we come to the last four of the 78 promises that Michelle Rhee made to get $64.5 million.

Did she and her successors reach any of these four last goals?

No.

As usual.

Even though they fiddled with the definition of “Free and Reduced-Price Lunches”, which almost surely made the numbers better than they would be otherwise, Rhee and Henderson have continued their long losing streak.

Today we look at the poor-nonpoor achievement gaps in 2013 for DC Public Schools.

More technically, we are comparing the percentages of students scoring at the “advanced” or “proficient” level in elementary and secondary math and reading in two groups: those eligible for free or reduced-price lunches, and those who are NOT eligible. The USDoE and most school districts use the data entered by parents on lunch application forms not only to decide who is eligible for the lunch subsidies, but also as a proxy for poverty or the lack thereof.

Unfortunately for consistency in our ability to measure things over time, in SY 2012-2013 DCPS allowed schools with a sufficient number of students who qualified as poor to declare every single child in the school ‘economically disadvantaged’. That meant free school lunches for the students, which in theory is a good thing (if the food is actually edible, which is sometimes but not always the case), but it does make our data-crunching harder by making the data for 2010, 2011, and 2012 not really comparable to that for 2013 — if you are serious about measuring the ‘achievement gap’ between the poor and the non-poor in DC Public Schools. A statistician has told me that this change also probably had the effect of reducing the apparent achievement gap.

You can see in the following table that once again not a single goal was reached:

[Table: final poor vs. non-poor achievement gaps, 2013 DC-CAS]

So, for example, and as usual starting at the top line, Rhee promised that in 2013 the difference in the ‘proficiency’ rates of poor and non-poor students in DCPS in elementary reading would be 26.7%. (Keep in mind that a reduced gap is a Good Thing.) However, the gap was actually much wider: it was 46.5%. In elementary math, we were promised a gap of 26.9%, but it was actually 43.5%. And so on. I notice that the gaps are smaller at the secondary level; I suspect that may have something to do with the re-definition of FRPL, but cannot prove it.

In any case, here is the grand total of all of these failures:

Successes: 1.5 (one and a half)

Failures: 76.5

Total number of goals measured: 78

Success rate: 1.9%

Failure rate: 98.1%.

Mayor Gray, why are you enabling our bungling and failing Chancellor, Kaya Henderson?

City Council, why aren’t you calling hearings?

-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_

The saga so far:

  1.  https://gfbrandenburg.wordpress.com/2014/09/02/did-any-of-michelle-rhees-promises-actually-work-in-dc/
  2. https://gfbrandenburg.wordpress.com/2014/09/02/more-on-michelle-rhees-promises-concerning-dcps/
  3. https://gfbrandenburg.wordpress.com/2014/09/04/what-rhee-promised-to-the-billionaires-walton-gates-et-al-but-didnt-deliver/
  4. https://gfbrandenburg.wordpress.com/2014/09/04/two-more-promises-by-rhee-et-al-were-they-kept/
  5. https://gfbrandenburg.wordpress.com/2014/09/05/ten-more-promises-from-rhee-henderson-company-were-any-of-them-kept/
  6. https://gfbrandenburg.wordpress.com/2014/09/05/33-6-for-nearly-all-values-of-3-not-5/
  7. https://gfbrandenburg.wordpress.com/2014/09/05/5281/
  8. https://gfbrandenburg.wordpress.com/2014/09/07/more-failures-to-deliver-on-promises-by-michelle-rhee-and-her-acolytes/
  9. https://gfbrandenburg.wordpress.com/2014/09/08/another-day-another-bunch-of-failures-from-rhee-henderson/
  10. https://gfbrandenburg.wordpress.com/2014/09/09/even-more-missed-targets-dc-cas-proficiency-in-2010-and-2011/
  11. https://gfbrandenburg.wordpress.com/2014/09/13/rhees-failures-in-dc-the-continuing-saga-2012-dc-cas/
  12. https://gfbrandenburg.wordpress.com/2014/09/21/the-long-list-of-failures-by-rhee-and-henderson-continued/
  13. https://gfbrandenburg.wordpress.com/2014/09/21/did-michelle-rhee-actually-close-those-achievement-gaps/
  14. https://gfbrandenburg.wordpress.com/2014/09/22/twelve-more-testing-goals-assessed-today-how-many-did-rhee-succeed-at/
  15. (this one)

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

Once again, let me credit my colleague Erich Martel for coming up with the idea of going back to the original promises and seeing if they were kept or not, and sharing his findings with me. These calculations are generally my own, so if you find any mistakes, don’t blame him. Blame me.

You can find the original spreadsheet for 2012 DC-CAS scores here,  and the original letters containing the promises here.

 

Did Michelle Rhee Actually Close Those Achievement Gaps?

Part Thirteen Of Many

Here we look to see if Michelle Rhee lived up to her promises that she would close some of the ‘achievement gaps’ between privileged and underprivileged students in Washington, DC — all part of the set of targets that she put forth in writing in a series of letters between four billionaires’ foundations, DC Public Schools, and various DC financial authorities.

Specifically, did the gaps between white students and black ones get smaller as she predicted? Or between white students and hispanic students?

The short answer is, as usual in this series of columns, a very simple NO, at least not for 2013 – nor did she reach the goals she set in 2010 or 2011. (I’ll get to 2012 soon, I promise.)

Here is a summary table, showing that in eight target areas, Rhee and her successor Kaya Henderson did not come anywhere close to reaching those promised goals, even though they got almost complete freedom to fire teachers and administrators for almost any reason.

[Table: black-white and hispanic-white achievement gaps, DC-CAS, 2013]

In this situation, a low number is a Good Thing, because it means that the gaps between black and white students’ scores, or between white and hispanic students’ scores, are getting smaller. However, you will notice that in many cases, the gaps are twice or three times as wide as Rhee and her henchpeople promised the billionaires.

I’ll try to explain. Rhee promised that in 2013, the difference in percentages of white and black students who scored at the ‘advanced’ or ‘proficient’ level in reading in the elementary grades in DCPS would only be 26.7%. Unfortunately, it was really 56.9%. At the secondary level in reading, the prediction was that the black-white gap would only be 33.2%, whereas in reality it turned out to be 48.3%.

Moving to the hispanic-white achievement gap, Rhee promised that the gap at the elementary reading level would only be 20.1%, whereas actually it was 43.9%.

And so on.

In not a single one of these eight measurable areas did Rhee’s predictions even come close to reality.

So, adding these eight more failures, we now have an overall record of one and a half successes out of 62 measured areas, for a current success rate of about 2.4%, and hence a failure rate of about 97.6%.

Let me emphasize that:

A failure rate of about 97.6%.

[Chart: failure rate out of 62 measured goals]

Why does anybody listen to Rhee or Kaya Henderson or anybody else involved in the Corporate Educational Deform movement?

Almost none of their promises pan out!

=============

Sources: the letters containing the promises are here, and you can find the spreadsheet containing the scores for 2013 here. I calculated and then added up the numbers and percentages of kids scoring ‘advanced’ or ‘proficient’ at grades 3, 4, 5 and 6 for the ‘elementary’ totals that I report above, and those at grades 7, 8 and 10 for the ‘secondary’ totals.

 

——————

The saga so far:

  1.  https://gfbrandenburg.wordpress.com/2014/09/02/did-any-of-michelle-rhees-promises-actually-work-in-dc/
  2. https://gfbrandenburg.wordpress.com/2014/09/02/more-on-michelle-rhees-promises-concerning-dcps/
  3. https://gfbrandenburg.wordpress.com/2014/09/04/what-rhee-promised-to-the-billionaires-walton-gates-et-al-but-didnt-deliver/
  4. https://gfbrandenburg.wordpress.com/2014/09/04/two-more-promises-by-rhee-et-al-were-they-kept/
  5. https://gfbrandenburg.wordpress.com/2014/09/05/ten-more-promises-from-rhee-henderson-company-were-any-of-them-kept/
  6. https://gfbrandenburg.wordpress.com/2014/09/05/33-6-for-nearly-all-values-of-3-not-5/
  7. https://gfbrandenburg.wordpress.com/2014/09/05/5281/
  8. https://gfbrandenburg.wordpress.com/2014/09/07/more-failures-to-deliver-on-promises-by-michelle-rhee-and-her-acolytes/
  9. https://gfbrandenburg.wordpress.com/2014/09/08/another-day-another-bunch-of-failures-from-rhee-henderson/
  10. https://gfbrandenburg.wordpress.com/2014/09/09/even-more-missed-targets-dc-cas-proficiency-in-2010-and-2011/
  11. https://gfbrandenburg.wordpress.com/2014/09/13/rhees-failures-in-dc-the-continuing-saga-2012-dc-cas/
  12. https://gfbrandenburg.wordpress.com/2014/09/21/the-long-list-of-failures-by-rhee-and-henderson-continued/
  13. https://gfbrandenburg.wordpress.com/2014/09/21/did-michelle-rhee-actually-close-those-achievement-gaps/ (this one)

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

Once again, let me credit my colleague Erich Martel for coming up with the idea of going back to the original promises and seeing if they were kept or not, and sharing his findings with me. These calculations are generally my own, so if you find any mistakes, don’t blame him. Blame me.

 

Rhee’s Failures in DC, the Continuing Saga: 2012 DC-CAS

Part Eleven of Many

Today we have four more failures to reach promised goals by Michelle Rhee and her acolyte Kaya Henderson. This time they are goals set for the 2012 DC-CAS in math and reading at the elementary and secondary levels.

Once again, not a single target was reached, as you can see in this little table:

[Table: promised vs. actual percent proficient or advanced, 2012 DC-CAS, reading and math, elementary and secondary]

For elementary math, the promise was that 63.0% of the students in DCPS would be ‘proficient’ or ‘advanced’ on the 2012 DC-CAS. In fact, only 45.9% reached that level. In secondary math, the target was 54.6% but the result was only 48.0%.

In reading, the promise was that at the elementary level, 63.8% of the students would be ‘proficient’ or ‘advanced’; however, the actual rate was only 44.2%. At the secondary level, the prediction was that 55.6% would be ‘proficient’ or ‘advanced’, but only 44.2% of the secondary students in DCPS reached that level.

So here is the score so far: out of 50 measured targets, exactly one and one-half of the goals were reached (and I was being generous on that one-half). That is a score of 3%.

Not 30%. THREE PERCENT.

[Chart: failure rate out of 50 measured goals]

Can someone explain to me why Kaya Henderson still has a job, and why the foundations who funded this wild reformy scheme back in 2009-2010 didn’t ask for their money back?

=============

Sources and methods:

I worked with an OSSE spreadsheet containing all the scores for all of the charter and regular public schools in DC by grade. I took out just the regular DC public schools and sorted them by grade levels. I counted every tested student in grades 3, 4, 5 and 6 as ‘elementary’, and every student in grades 7, 8 and 10 as ‘secondary’, added up how many were ‘proficient’ or ‘advanced’ at each level, and divided that by the total number of students tested at that level. Thank goodness for electronic spreadsheets! They do all the number crunching for you! You can find the original spreadsheet for 2012 DC-CAS scores here (it’s the fourth listing), and the original letters containing the promises here.
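
For anyone who wants to replicate that tally programmatically rather than in a spreadsheet, a minimal sketch might look like the following; the file name and column names are invented, since the actual OSSE spreadsheet layout differs.

```python
# Minimal sketch of the tallying procedure described above, assuming a
# hypothetical spreadsheet layout (invented file and column names).
import pandas as pd

df = pd.read_excel("dc_cas_2012_scores.xlsx")   # hypothetical filename

# Keep regular DCPS schools only (drop charters), per the method above.
dcps = df[df["sector"] == "DCPS"]

# Classify tested grades the same way the post does.
level = dcps["grade"].map(
    lambda g: "elementary" if g in (3, 4, 5, 6)
    else "secondary" if g in (7, 8, 10)
    else None
)
dcps = dcps.assign(level=level).dropna(subset=["level"])

# Percent proficient-or-advanced = (proficient + advanced) / total tested.
summary = dcps.groupby(["level", "subject"]).agg(
    tested=("n_tested", "sum"),
    prof_or_adv=("n_proficient_or_advanced", "sum"),
)
summary["pct_prof_or_adv"] = 100 * summary["prof_or_adv"] / summary["tested"]
print(summary)
```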

==============

The saga so far:

  1.  https://gfbrandenburg.wordpress.com/2014/09/02/did-any-of-michelle-rhees-promises-actually-work-in-dc/
  2. https://gfbrandenburg.wordpress.com/2014/09/02/more-on-michelle-rhees-promises-concerning-dcps/
  3. https://gfbrandenburg.wordpress.com/2014/09/04/what-rhee-promised-to-the-billionaires-walton-gates-et-al-but-didnt-deliver/
  4. https://gfbrandenburg.wordpress.com/2014/09/04/two-more-promises-by-rhee-et-al-were-they-kept/
  5. https://gfbrandenburg.wordpress.com/2014/09/05/ten-more-promises-from-rhee-henderson-company-were-any-of-them-kept/
  6. https://gfbrandenburg.wordpress.com/2014/09/05/33-6-for-nearly-all-values-of-3-not-5/
  7. https://gfbrandenburg.wordpress.com/2014/09/05/5281/
  8. https://gfbrandenburg.wordpress.com/2014/09/07/more-failures-to-deliver-on-promises-by-michelle-rhee-and-her-acolytes/
  9. https://gfbrandenburg.wordpress.com/2014/09/08/another-day-another-bunch-of-failures-from-rhee-henderson/
  10. https://gfbrandenburg.wordpress.com/2014/09/09/even-more-missed-targets-dc-cas-proficiency-in-2010-and-2011/
  11. https://gfbrandenburg.wordpress.com/2014/09/13/rhees-failures-in-dc-the-continuing-saga-2012-dc-cas/  (this one)
  12. https://gfbrandenburg.wordpress.com/2014/09/21/the-long-list-of-failures-by-rhee-and-henderson-continued/

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

Once again, let me credit my colleague Erich Martel for coming up with the idea of going back to the original promises and seeing if they were kept or not, and sharing his findings with me. These calculations are generally my own, so if you find any mistakes, don’t blame him. Blame me.

Even More Missed Targets: DC-CAS Proficiency in 2010 and 2011

Part Ten of Many

Installment #10 in our lengthy saga of failures by the current and past Chancellors of the District of Columbia School system.

Today we look at overall proficiency rates in elementary and secondary math and reading on the DC-CAS for 2010 and 2011, which will add up to eight separate goals out of our grand total of 78.

(Up until now, out of 38 goals assessed, the dynamic duo of Rhee and Henderson managed to attain one and a half of them.)

Here is the summary table:

[Table: missed goals on DC-CAS proficiency, 2010 and 2011]

In this set of goals, a high number is good, because it means a higher proportion of students are ‘proficient’. Unfortunately for Rhee, Henderson and the various billionaire foundations, not a single one of these targets was met.

Not one.

In every single case, the ‘target’ was higher than the actual proficiency rate — and in some cases, the proficiency rates actually declined a bit from 2010 to 2011, despite all the rosy predictions…

For example: in 2010, the promise was that 53.0% of all DCPS elementary students would be ‘proficient’ or ‘advanced’ on the DC-CAS in math. In reality, only 42.8% of our elementary kids met that standard. In 2011, the prediction was that 58% of all DCPS elementary students would be proficient or advanced in math on the DC-CAS, but in fact, only 41.8% were — and that was a decline of about one percentage point from the year before.

And the same sort of thing happened in every one of the eight categories measured here. In every single case, the predicted target was considerably higher than the actual result.

So with eight more failures out of eight more measurements, the total now is 44.5 failures and 1.5 successes, which is beyond pitiful: about THREE PERCENT success and 97% FAILURE.

[Chart: failure rate out of 46 measured goals]

Again, why does Kaya Henderson still have a job?

And why didn’t these four foundations ask for their money back?

===================

My next task needs to be to investigate the results for 2012 and 2013, which will be a bit challenging because DCPS and OSSE completely changed the way data are reported. The new way looks all fun and interactive but — in my opinion — is a lot harder to extract actual information from. It might take me a couple of days.

The saga so far:

  1.  https://gfbrandenburg.wordpress.com/2014/09/02/did-any-of-michelle-rhees-promises-actually-work-in-dc/
  2. https://gfbrandenburg.wordpress.com/2014/09/02/more-on-michelle-rhees-promises-concerning-dcps/
  3. https://gfbrandenburg.wordpress.com/2014/09/04/what-rhee-promised-to-the-billionaires-walton-gates-et-al-but-didnt-deliver/
  4. https://gfbrandenburg.wordpress.com/2014/09/04/two-more-promises-by-rhee-et-al-were-they-kept/
  5. https://gfbrandenburg.wordpress.com/2014/09/05/ten-more-promises-from-rhee-henderson-company-were-any-of-them-kept/
  6. https://gfbrandenburg.wordpress.com/2014/09/05/33-6-for-nearly-all-values-of-3-not-5/
  7. https://gfbrandenburg.wordpress.com/2014/09/05/5281/
  8. https://gfbrandenburg.wordpress.com/2014/09/07/more-failures-to-deliver-on-promises-by-michelle-rhee-and-her-acolytes/
  9. https://gfbrandenburg.wordpress.com/2014/09/08/another-day-another-bunch-of-failures-from-rhee-henderson/
  10. https://gfbrandenburg.wordpress.com/2014/09/09/even-more-missed-targets-dc-cas-proficiency-in-2010-and-2011/ (this one)
  11. https://gfbrandenburg.wordpress.com/2014/09/13/rhees-failures-in-dc-the-continuing-saga-2012-dc-cas/
  12. https://gfbrandenburg.wordpress.com/2014/09/21/the-long-list-of-failures-by-rhee-and-henderson-continued/

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

Once again, let me credit my colleague Erich Martel for coming up with the idea of going back to the original promises and seeing if they were kept or not, and sharing his findings with me. These calculations are generally my own, so if you find any mistakes, don’t blame him. Blame me.

Another Day, Another Bunch of Failures from Rhee & Henderson

Part Nine of Many

Today, we look to see if the past and present Chancellors of DC Public Schools actually met any of the promises that they made to four billionaires’ foundations in regards to reducing the ‘achievement gaps’ on the 2010 DC-CAS (the local NCLB standardized test).

The short answer is, NO. Not a single target was met out of the twelve additional ones today. Not a single, solitary goal.

So their record is, one-and-a-half goals out of 38, which in percentages is 3.9%.

Not thirty-nine percent.

Instead, a bit under FOUR PERCENT.

[Chart: goals met so far, one and a half out of 38]

Here is a more detailed table, for the 2010 reading score gaps:

[Table: reading achievement gaps, 2010 DC-CAS, promises and actual results]

I will try to explain, starting at the first line of results:

Rhee promised that on the 2010 DC-CAS, in reading, the difference in proficiency rates between black and white students would only be 41.7%. In actuality it was about ten percentage points higher, at 51.4%.

At the secondary level, she promised that the reading gap would be 48.2%, but it was really 49.5%. Yes, that is pretty close, but I’m not giving her that one — no cigar, as they say at the carnival fairgrounds.

On the next line, she promised that the White-Hispanic gap in proficiency rates would be 35.1%, but it was really 47.1% — not very close at all. At the secondary level, she said that she would (magically) achieve a 36.8% gap, but it was really 43.9%.

On the last line, she promised that the gap in proficiency rates between poor students and non-poor students (as measured by data on free or reduced-price lunch applications) would be 24.7% at the elementary level and 21.8% at the secondary level. In fact, those gaps were, respectively, 32.7% and 24.5%.

Recall that a lower gap is a good thing. Higher gaps are bad things.

Now look at the gap table for math on the 2010 DC-CAS:

[Table: math achievement gaps, 2010 DC-CAS, promises and actual results]

As you can see, every single number in the pink column (reality) is larger than the number in the white column just to the left of it, which means that in every single instance, her promises were unfulfilled.

Way to go, Michelle and Kaya!

Your score is now UNDER FOUR PERCENT!

Do you two actually know enough math to know how bad that is?

{I really don’t know how Michelle Rhee and Kaya Henderson sleep at night, knowing they are such complete and utter failures. But then again, I don’t understand the psychopathic personality. Most people, like me, are aware of many of our own personal failures, and regret them deeply. Some of us ask for forgiveness in formal religious settings, while others just stew over these failures all our lives…}

(Rhee, however, said on-camera in one of John Merrow’s initial laudatory films on her career that she had never done ANYTHING that she had any regrets for – or words to that effect. My jaw dropped when I saw her say that.) 

==============

As before, the original set of documents listing all those $64.5 million’s worth of promises to billionaires’ foundations can be found on page 22, here.

And if you want to do the digging yourself to check my spreadsheet’s arithmetic, you can find the DC-CAS scores for 2011 here; but be prepared to do some tedious number-crunching!


 

==============

The saga so far:

  1.  https://gfbrandenburg.wordpress.com/2014/09/02/did-any-of-michelle-rhees-promises-actually-work-in-dc/
  2. https://gfbrandenburg.wordpress.com/2014/09/02/more-on-michelle-rhees-promises-concerning-dcps/
  3. https://gfbrandenburg.wordpress.com/2014/09/04/what-rhee-promised-to-the-billionaires-walton-gates-et-al-but-didnt-deliver/
  4. https://gfbrandenburg.wordpress.com/2014/09/04/two-more-promises-by-rhee-et-al-were-they-kept/
  5. https://gfbrandenburg.wordpress.com/2014/09/05/ten-more-promises-from-rhee-henderson-company-were-any-of-them-kept/
  6. https://gfbrandenburg.wordpress.com/2014/09/05/33-6-for-nearly-all-values-of-3-not-5/
  7. https://gfbrandenburg.wordpress.com/2014/09/05/5281/
  8. https://gfbrandenburg.wordpress.com/2014/09/07/more-failures-to-deliver-on-promises-by-michelle-rhee-and-her-acolytes/
  9. https://gfbrandenburg.wordpress.com/2014/09/08/another-day-another-bunch-of-failures-from-rhee-henderson/ (this one)
  10. https://gfbrandenburg.wordpress.com/2014/09/09/even-more-missed-targets-dc-cas-proficiency-in-2010-and-2011/
  11. https://gfbrandenburg.wordpress.com/2014/09/13/rhees-failures-in-dc-the-continuing-saga-2012-dc-cas/
  12. https://gfbrandenburg.wordpress.com/2014/09/21/the-long-list-of-failures-by-rhee-and-henderson-continued/

\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

Once again, let me credit my colleague Erich Martel for coming up with the idea of going back to the original promises and seeing if they were kept or not, and sharing his findings with me. These calculations are generally my own, so if you find any mistakes, don’t blame him. Blame me.

More Failures to Deliver on Promises by Michelle Rhee and Her Acolytes

Part Eight of Many

As you may recall, Michelle Rhee made a whole lot of promises about the types of improvements in test scores that she could deliver if she were given free rein to fire administrators or teachers in the DC Public School system as she saw fit, and to give bonuses to those who increased test scores according to her mandates.

I’ve been looking into those 78 numerical goals, using the original documents and with prodding from my former colleague, Erich Martel (we are both retired DCPS teachers).

Up until now, out of 14 measurable goals that I have waded through, I have shown that she delivered on exactly one and a half of those promises.

I just now finished crunching the numbers for the 2011 DC-CAS to see how her promises fared concerning the ‘achievement gaps’. That term refers to the differences in the percentages of various groups that score at or above the ‘proficient’ label on the local NCLB standardized test, the DC-CAS.

Bottom line for today? Out of twelve measurable goals that I analyzed and that she promised, on the 2011 DC-CAS, in exchange for 64.5 million dollars, she and her successors achieved exactly NONE of them.

Not one.

So her score (and that of Kaya Henderson, her chief henchperson and successor), is now 1.5 out of 26, which is roughly six percent.

[Chart: goals met so far, 1.5 out of 26]

Six percent!

And falling!

Here is a table showing how Rhee and the rest of DCPS central administration failed to meet any of the goals on this round:

[Table: achievement gap promises and actual results for the 2011 DC-CAS]

For example, and starting from the top, Rhee et al. promised that in elementary reading the black-white proficiency gap in 2011 would be only 36.7%. In fact, it was 53.5% by my calculations,* even higher than it was in 2007, when she was hired as chancellor!

She also promised that for elementary reading, the Hispanic-white gap would only be 30.1% but it was really 43.7%, again higher than it was in 2007!

Likewise, the promise was that the gap between the proficiency rates of poor students (i.e., those eligible for free or reduced-price lunch) and the non-poor, or non-disadvantaged, in elementary reading would only be 21.2%. However, it was really 31.5% — once again, higher than it was in 2007.

(If you aren’t clear on this, the general idea is that big gaps between the proficiency rates of white and black students, or between white and Hispanic students, or between poor and non-poor students, are bad. Lowering those gaps is good. I certainly agree with the goal. I just don’t think that Michelle Rhee had any clue as to how to go about doing it, but she was a really good huckster.)

In secondary reading (which I take to be grades 7, 8, and 10), the situation was much the same. A promised gap of 43.2% between black and white students was really 49.7%, the same as it was in 2009. The gap for Hispanic vs. white secondary students was promised to be 31.8% but was really 37.7%. The poor-nonpoor gap was promised to be 18.3% but was really 21.9%.
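(For anyone who wants to check this tally mechanically, here is a minimal sketch in Python. The promised and actual reading-gap percentages are the six quoted above; the variable names are my own and purely illustrative.)

```python
# Promised vs. actual proficiency gaps (in percentage points) on the 2011
# DC-CAS reading tests, as quoted in the text above. A promise counts as
# kept only if the actual gap came in at or below the promised gap.
reading_gaps = {
    "elementary black-white":    (36.7, 53.5),   # (promised, actual)
    "elementary Hispanic-white": (30.1, 43.7),
    "elementary poor-nonpoor":   (21.2, 31.5),
    "secondary black-white":     (43.2, 49.7),
    "secondary Hispanic-white":  (31.8, 37.7),
    "secondary poor-nonpoor":    (18.3, 21.9),
}

kept = sum(1 for promised, actual in reading_gaps.values() if actual <= promised)
print(f"Reading-gap promises kept: {kept} of {len(reading_gaps)}")  # 0 of 6
```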

I won’t bore you with all six of her failures in the math department — you can read the table for yourself, keeping in mind that the salmon-colored boxes are the actual results, which are higher (and thus worse) than the very optimistic predictions that Rhee and her henchpeople made to the four billionaires’ foundations.

As I’ve asked many times before:

With an almost continuous legacy of lies, broken promises, and failures to deliver on anything, why does anybody listen to Michelle Rhee? And why does Kaya Henderson have a job as DCPS chancellor?

=========

* Here is the gist of how I calculated this: according to OSSE documents, there were 9132 African-American students tested in DCPS in reading in grades 3, 4, 5 and 6 in 2011. Of them, about 3234 tested as ‘proficient’ or ‘advanced’, which works out to 35.4% (about a third) being scored ‘passing’ according to the provisions of the No Child Left Behind Act. Among White non-Hispanic students in DCPS in the same grades, same subject, there were 1,226 students tested, of whom 1,090 were ‘proficient’ or ‘advanced’. That works out to 88.9% ‘passing’ among white students. If you subtract 35.4% from 88.9%, you get a ‘proficiency gap’ of about 53.5%. For comparison, the same gap in 2007 was 53.7%.

All of the calculations were done in the same manner, and I can post the spreadsheets on Google Drive if anybody is interested. If anybody finds any errors, their contribution will be cheerfully acknowledged and this page will be corrected, with credit.
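(To make that arithmetic explicit, here is a minimal sketch in Python of the same calculation. The student counts are the ones quoted in the footnote above; the function name is my own.)

```python
def proficiency_gap(tested_low, passing_low, tested_high, passing_high):
    """Percentage-point gap between the higher- and lower-scoring groups."""
    low_rate = 100.0 * passing_low / tested_low
    high_rate = 100.0 * passing_high / tested_high
    return high_rate - low_rate

# 2011 DC-CAS reading, grades 3-6:
# 9132 African-American students tested, about 3234 proficient or advanced;
# 1226 white non-Hispanic students tested, 1090 proficient or advanced.
gap_2011 = proficiency_gap(9132, 3234, 1226, 1090)
print(round(gap_2011, 1))  # about 53.5 percentage points
```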

======

Next time, I plan to look at the 2010 DC-CAS, then the ones for 2012 and 2013. Many apologies for not going in order.

 


    \/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\


Math Targets for NAEP in DCPS – more broken promises

Part Seven of Many

Here we look at the math targets that Michelle Rhee promised DCPS students would reach on the NAEP TUDA (the Trial Urban District Assessment of the National Assessment of Educational Progress) from 2007 to 2013. Let’s see how many goals were actually achieved.

First, 8th grade math:

[Chart: 8th-grade NAEP TUDA math targets, 2007-13]

As you can see, Rhee promised that the DCPS average scale score in math for 8th graders would be 256 in 2011 and 262 in 2013. Neither goal was reached: the actual scores were only 255 in 2011 and 260 in 2013. I drew brown diamonds to indicate NOT reaching the promised goals.
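(The same kind of check works for the NAEP figures; here is a minimal sketch using just the 8th-grade math numbers quoted above, i.e., the promised and actual average scale scores for 2011 and 2013.)

```python
# 8th-grade NAEP TUDA math in DCPS: promised vs. actual average scale scores,
# as quoted above. A target counts as met only if the actual score reached it.
targets = {2011: 256, 2013: 262}
actuals = {2011: 255, 2013: 260}

for year, target in targets.items():
    status = "met" if actuals[year] >= target else "missed"
    print(f"{year}: target {target}, actual {actuals[year]} -> {status}")
# Both years print "missed".
```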

Now let’s look at 4th grade math:

[Chart: 4th-grade math NAEP TUDA targets, 2007-13]

Again, as you can see, Rhee and her buddies running DCPS failed to reach the goals that were promised.

So for these four goals, Rhee & Henderson & company are zero for four.

That means the running total, so far, is one and a half out of fourteen, which is about 11% success.

Can someone remind me why Kaya Henderson still has a job?

====================

 


\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\


3+3 = 6 for nearly all values of 3. Not 5.

Part Six of Many

Alert readers will have noticed that in the immediately prior post I originally added wrong (three plus three is generally six), so I gave Rhee and company a better score than they deserved.

This semi-alert writer caught his own mistake before anybody else notified him and rubbed his nose in it.

So, Rhee’s actual score is 1.5 out of 10 (15%) overall, not 1.5 out of 9 (about 17%). So far.

There are many more goals to go!

Any predictions of what the final score will be, out of the 78 specific promises made by Michelle Rhee?

====================


\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

