Part Two: Cheating in DCPS

DC Education Reform Ten Years After, 

Part 2: Test Cheats

Richard P Phelps

Ten years ago, I worked as the Director of Assessments for the District of Columbia Public Schools (DCPS). For temporal context, I arrived after the first of the infamous test cheating scandals and left just before the incident that spawned a second. Indeed, I filled a new position created to both manage test security and design an expanded testing program. I departed shortly after Vincent Gray, who opposed an expanded testing program, defeated Adrian Fenty in the September 2010 DC mayoral primary. My tenure coincided with Michelle Rhee’s last nine months as Chancellor. 

The recurring test cheating scandals of the Rhee-Henderson years may seem extraordinary but, in fairness, DCPS was more likely than the average US school district to be caught because it received a much higher degree of scrutiny. Given how tests are typically administered in this country, the incidence of cheating is likely far greater than news accounts suggest, for several reasons: 

• in most cases, those who administer tests—schoolteachers and administrators—have an interest in their results;

• test security protocols are numerous and complicated yet are left to ordinary, non-expert school personnel, guaranteeing their inconsistent application across schools and over time;

• after-the-fact statistical analyses are not legal proof—the odds of a certain number of wrong-to-right erasures in a single classroom on a paper-and-pencil test being coincidental may be a thousand to one, but one-in-a-thousand is still legally plausible; and

• after-the-fact investigations based on interviews are time-consuming, scattershot, and uneven.

Still, there were measures that the Rhee-Henderson administrations could have adopted to substantially reduce the incidence of cheating, but they chose none that might have been effective. Rather, they dug in their heels, insisted that only a few schools had issues and that those had been thoroughly resolved, and repeatedly denied any systematic problem.

Cheating scandals

From 2007 to 2009 rumors percolated of an extraordinary level of wrong-to-right erasures on the test answer sheets at many DCPS schools. “Erasure analysis” is one among several “red flag” indicators that testing contractors calculate to monitor cheating. The testing companies take no responsibility for investigating suspected test cheating, however; that is the customer’s, the local or state education agency. 
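To give a concrete sense of the arithmetic behind such a red flag, here is a minimal sketch in Python, assuming a simple binomial model; the class size, baseline erasure rate, and erasure count below are hypothetical illustrations, not DCPS figures.

```python
from math import lgamma, log, exp

def log_binom_pmf(n: int, k: int, p: float) -> float:
    """Log of the binomial probability of exactly k 'successes' in n trials."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p), summed term by term in log space."""
    return sum(exp(log_binom_pmf(n, i, p)) for i in range(k, n + 1))

# Hypothetical numbers: 25 students x 50 items = 1,250 answers in one classroom,
# a districtwide baseline wrong-to-right erasure rate of 1% per answer,
# and 60 wrong-to-right erasures observed in that classroom.
n_answers = 25 * 50
baseline_rate = 0.01
observed = 60

p_coincidence = binom_tail(n_answers, observed, baseline_rate)
print(f"Probability of {observed}+ wrong-to-right erasures by chance: {p_coincidence:.1e}")
# A vanishingly small probability flags the classroom for review; as noted above,
# it is a statistical red flag, not legal proof that any particular person cheated.
```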

In her autobiographical account of her time as DCPS Chancellor, Michelle Johnson (née Rhee) wrote (p. 197):

“For the first time in the history of DCPS, we brought in an outside expert to examine and audit our system. Caveon Test Security – the leading expert in the field at the time – assessed our tests, results, and security measures. Their investigators interviewed teachers, principals, and administrators.

“Caveon found no evidence of systematic cheating. None.”

Caveon, however, had not looked for “systematic” cheating. All they did was interview a few people at several schools where the statistical anomalies were more extraordinary than at others. As none of those individuals would admit to knowingly cheating, Caveon branded all their excuses as “plausible” explanations. That’s it; that is all that Caveon did. But, Caveon’s statement that they found no evidence of “widespread” cheating—despite not having looked for it—would be frequently invoked by DCPS leaders over the next several years.[1]

Incidentally, prior to the revelation of its infamous decades-long, systematic test cheating, the Atlanta Public Schools had similarly retained Caveon Test Security and was, likewise, granted a clean bill of health. Only later did the Georgia state attorney general swoop in and reveal the truth. 

In its defense, Caveon would note that several cheating prevention measures it had recommended to DCPS were never adopted.[2] None of the cheating prevention measures that I recommended were adopted, either.

The single most effective means for reducing in-classroom cheating would have been to rotate teachers on test days so that no teacher administered a test to his or her own students. It would not have been that difficult to randomly assign teachers to different classrooms on test days.

The single most effective means for reducing school administrator cheating would have been to rotate test administrators on test days so that none managed the test materials for their own schools. The visiting test administrators would have been responsible for keeping test materials away from the school until test day, distributing sealed test booklets to the rotated teachers on test day, and collecting re-sealed test booklets at the end of testing and immediately removing them from the school. 

Instead of implementing these, or a number of other feasible and effective test security measures, DCPS leaders increased the number of test proctors, assigning each of a few dozen or so central office staff a school to monitor. Those proctors could not reasonably manage the volume of oversight required. A single DC test administration could encompass a hundred schools and a thousand classrooms.

Investigations

So, what effort, if any, did DCPS make to counter test cheating? They hired me, but then rejected all my suggestions for increasing security. Also, they established a telephone tip line. Anyone who suspected cheating could report it, even anonymously, and, allegedly, their tip would be investigated. 

Some forms of cheating are best investigated through interviews. Probably the most frequent forms of cheating at DCPS—teachers helping students during test administrations and school administrators looking at test forms prior to administration—leave no statistical residue. Eyewitness testimony is the only type of legal evidence available in such cases, but it is not just inconsistent, it may be socially destructive. 

I remember two investigations best: one occurred in a relatively well-to-do neighborhood with well-educated parents active in school affairs; the other in one of the city’s poorest neighborhoods. Superficially, the cases were similar—an individual teacher was accused of helping his or her own students with answers during test administrations. Making a case against either elementary school teacher required sworn testimony from eyewitnesses, that is, students—eight-to-ten-year olds. 

My investigations, then, consisted of calling children into the principal’s office one-by-one to be questioned about their teacher’s behavior. We couldn’t hide the reason we were asking the questions. And, even though each student agreed not to tell others what had occurred in their visit to the principal’s office, we knew we had only one shot at an uncorrupted jury pool. 

Though the accusations against the two teachers were similar and the cases against them equally strong, the outcomes could not have been more different. In the high-poverty neighborhood, the students seemed suspicious and said little; none would implicate the teacher, whom they all seemed to like. 

In the more prosperous neighborhood, students were more outgoing, freely divulging what they had witnessed. The students had discussed the alleged coaching with their parents who, in turn, urged them to tell investigators what they knew. During his turn in the principal’s office, the accused teacher denied any wrongdoing. I wrote up each interview, then requested that each student read and sign. 

Thankfully, that accused teacher made a deal and left the school system a few weeks later. Had he not, we would have required the presence in court of the eight-to-ten-year olds to testify under oath against their former teacher, who taught multi-grade classes. Had that prosecution not succeeded, the eyewitness students could have been routinely assigned to his classroom the following school year.

My conclusion? Only in certain schools is the successful prosecution of a cheating teacher through eyewitness testimony even possible. But, even where possible, it consumes inordinate amounts of time and, otherwise, comes at a high price, turning young innocents against authority figures they naturally trusted. 

Cheating blueprints

Arguably the most widespread and persistent testing malfeasance in DCPS received little attention from the press. Moreover, it was directly propagated by District leaders, who published test blueprints on the web. Put simply, test “blueprints” are lists of the curricular standards (e.g., “student shall correctly add two-digit numbers”) and the number of test items included in an upcoming test related to each standard. DC had been advance publishing its blueprints for years.

I argued that the way DC did it was unethical. The head of the Division of Data & Accountability, Erin McGoldrick, however, defended the practice, claimed it was common, and cited its existence in the state of California as precedent. The next time she and I met for a conference call with one of DCPS’s test providers, Discovery Education, I asked their sales agent how many of their hundreds of other customers advance-published blueprints. His answer: none.

In the state of California, the location of McGoldrick’s only prior professional experience, blueprints were, indeed, published in advance of test administrations. But their tests were longer than DC’s and all standards were tested. Publication of California’s blueprints served more to remind the populace what the standards were in advance of each test administration. Occasionally, a standard considered to be of unusual importance might be assigned a greater number of test items than the average, and the California blueprints signaled that emphasis. 

In Washington, DC, the tests used in judging teacher performance were shorter, covering only some of each year’s standards. So, DC’s blueprints showed everyone well in advance of the test dates exactly which standards would be tested and which would not. For each teacher, this posed an ethical dilemma: should they “narrow the curriculum” by teaching only that content they knew would be tested? Or, should they do the right thing and teach all the standards, as they were legally and ethically bound to, even though it meant spending less time on the to-be-tested content? It’s quite a conundrum when one risks punishment for behaving ethically.

Monthly meetings convened to discuss issues with the districtwide testing program, the DC Comprehensive Assessment System (DC-CAS)—administered to comply with the federal No Child Left Behind (NCLB) Act. All public schools, both DCPS and charters, administered those tests. At one of these regular meetings, two representatives from the Office of the State Superintendent of Education (OSSE) announced plans to repair the broken blueprint process.[3]

The State Office employees argued thoughtfully and reasonably that it was professionally unethical to advance publish DC test blueprints. Moreover, they had surveyed other US jurisdictions in an effort to find others that followed DC’s practice and found none. I was the highest-ranking DCPS employee at the meeting and I expressed my support, congratulating them for doing the right thing. I assumed that their decision was final.

I mentioned the decision to McGoldrick, who expressed surprise and speculated that it had not been made at the highest level of the organizational hierarchy. Wasting no time, she met with other DCPS senior managers and the proposed change was forthwith shelved. In that, and other ways, the DCPS tail wagged the OSSE dog. 

* * *

It may be too easy to blame ethical deficits alone for the Rhee-Henderson era ed reformers’ recalcitrant attitude toward test security. The columnist Peter Greene insists that knowledge deficits among self-appointed education reformers also matter: 

“… the reformistan bubble … has been built from Day One without any actual educators inside it. Instead, the bubble is populated by rich people, people who want rich people’s money, people who think they have great ideas about education, and even people who sincerely want to make education better. The bubble does not include people who can turn to an Arne Duncan or a Betsy DeVos or a Bill Gates and say, ‘Based on my years of experience in a classroom, I’d have to say that idea is ridiculous bullshit.’”

“There are a tiny handful of people within the bubble who will occasionally act as bullshit detectors, but they are not enough. The ed reform movement has gathered power and money and set up a parallel education system even as it has managed to capture leadership roles within public education, but the ed reform movement still lacks what it has always lacked–actual teachers and experienced educators who know what the hell they’re talking about.”

In my twenties, I worked for several years in the research department of a state education agency. My primary political lesson from that experience, consistently reinforced subsequently, is that most education bureaucrats tell the public that the system they manage works just fine, no matter what the reality. They can get away with this because they control most of the evidence and can suppress it or spin it to their advantage.

In this proclivity, the DCPS central office leaders of the Rhee-Henderson era proved themselves to be no different than the traditional public-school educators they so casually demonized. 

US school systems are structured to be opaque and, it seems, both educators and testing contractors like it that way. For their part, and contrary to their rhetoric, Rhee, Henderson, and McGoldrick passed on many opportunities to make their system more transparent and accountable.

Education policy will not improve until control of the evidence is ceded to genuinely independent third parties, hired neither by the public education establishment nor by the education reform club.

The author gratefully acknowledges the fact-checking assistance of Erich Martel and Mary Levy.


Citation:  Phelps, R. P. (2020, September). Looking Back on DC Education Reform 10 Years After, Part 2: Test Cheats. Nonpartisan Education Review / Testimonials. https://nonpartisaneducation.org/Review/Testimonials/v16n3.htm


[1] A perusal of Caveon’s website clarifies that their mission is to help their clients–state and local education departments–not get caught. Sometimes this means not cheating in the first place; other times it might mean something else. One might argue that, ironically, Caveon could be helping its clients to cheat in more sophisticated ways and cover their tracks better.

[2] Among them: test booklets should be sealed until the students open them and resealed by the students immediately after; and students should be assigned seats on test day and a seating chart submitted to test coordinators (necessary for verifying cluster patterns in student responses that would suggest answer copying).

[3] Yes, for those new to the area, the District of Columbia has an Office of the “State” Superintendent of Education (OSSE). Its domain of relationships includes not just the regular public schools (i.e., DCPS), but also other public schools (i.e., charters) and private schools. Practically, it primarily serves as a conduit for funneling money from a menagerie of federal education-related grant and aid programs.

What did Education Reform in DC Actually Mean?

Short answer: nothing that would actually help students or teachers. But it’s made for well-padded resumes for a handful of insiders.

This is an important review, by the then-director of assessment. His criticisms echo the points that I have been making along with Mary Levy, Erich Martel, Adell Cothorne, and many others.



Looking Back on DC Education Reform 10 Years After, 

Part 1: The Grand Tour

Richard P Phelps

Ten years ago, I worked as the Director of Assessments for the District of Columbia Public Schools (DCPS). My tenure coincided with Michelle Rhee’s last nine months as Chancellor. I departed shortly after Vincent Gray defeated Adrian Fenty in the September 2010 DC mayoral primary.

My primary task was to design an expansion of the testing program that served the IMPACT teacher evaluation system to include all core subjects and all grade levels. Despite its fame (or infamy), the test score aspect of the IMPACT program affected only 13% of teachers, those teaching either reading or math in grades four through eight. Only those subjects and grade levels included the requisite pre- and post-tests required for teacher “value added” measurements (VAM). Not included were most subjects (e.g., science, social studies, art, music, physical education), grades kindergarten to two, and high school.

Chancellor Rhee wanted many more teachers included. So, I designed a system that would cover more than half the DCPS teacher force, from kindergarten through high school. You haven’t heard about it because it never happened. The newly elected Vincent Gray had promised during his mayoral campaign to reduce the amount of testing; the proposed expansion would have increased it fourfold.

VAM affected teachers’ jobs. A low value-added score could lead to termination; a high score, to promotion and a cash bonus. VAM as it was then structured was obviously, glaringly flawed,[1] as anyone with a strong background in educational testing could have seen. Unfortunately, among the many new central office hires from the elite of ed reform circles, none had such a background.

Before posting a request for proposals from commercial test developers for the testing expansion plan, I was instructed to survey two groups of stakeholders—central office managers and school-level teachers and administrators.

Not surprisingly, some of the central office managers consulted requested additions or changes to the proposed testing program where they thought it would benefit their domain of responsibility. The net effect on school-level personnel would have been to add to their administrative burden. Nonetheless, all requests from central office managers would be honored. 

The Grand Tour

At about the same time, over several weeks of the late Spring and early Summer of 2010, along with a bright summer intern, I visited a dozen DCPS schools. The alleged purpose was to collect feedback on the design of the expanded testing program. I enjoyed these meetings. They were informative, animated, and very well attended. School staff appreciated the apparent opportunity to contribute to policy decisions and tried to make the most of it.

Each school greeted us with a full complement of faculty and staff on their days off, numbering several dozen educators at some venues. They believed what we had told them: that we were in the process of redesigning the DCPS assessment program and were genuinely interested in their suggestions for how best to do it. 

At no venue did we encounter stand-pat knee-jerk rejection of education reform efforts. Some educators were avowed advocates for the Rhee administration’s reform policies, but most were basically dedicated educators determined to do what was best for their community within the current context. 

The Grand Tour was insightful, too. I learned for the first time of certain aspects of DCPS’s assessment system that were essential to consider in its proper design, aspects of which the higher-ups in the DCPS Central Office either were not aware or did not consider relevant. 

The group of visited schools represented DCPS as a whole in appropriate proportions geographically, ethnically, and by education level (i.e., primary, middle, and high). Within those parameters, however, only schools with “friendly” administrations were chosen. That is, we only visited schools with principals and staff openly supportive of the Rhee-Henderson agenda. 

But even they desired changes to the testing program, whether or not it was expanded. Their suggestions covered both the annual districtwide DC-CAS (or “comprehensive” assessment system), on which the teacher evaluation system was based, and the DC-BAS (or “benchmarking” assessment system), a series of four annual “no-stakes” interim tests unique to DCPS, ostensibly offered to help prepare students and teachers for the consequential-for-some-school-staff DC-CAS.[2]

At each staff meeting I asked for a show of hands on several issues of interest that I thought were actionable. Some suggestions for program changes received close to unanimous support. Allow me to describe several.

1. Move DC-CAS test administration later in the school year. Many citizens may have logically assumed that the IMPACT teacher evaluation numbers were calculated from a standard pre-post test schedule: test a teacher’s students at the beginning of their year with that teacher and again at the end. In 2010, however, the DC-CAS was administered in March, three months before school year end. Moreover, that single administration of the test served as both pre- and post-test, posttest for the current school year and pretest for the following school year. Thus, before a teacher even met their new students in late August or early September, almost half of the year for which teachers were judged had already transpired—the three months in the Spring spent with the previous year’s teacher and almost three months of summer vacation. 

School staff recommended pushing DC-CAS administration to later in the school year. Furthermore, they advocated a genuine pre-post-test administration schedule—pre-test the students in late August–early September and post-test them in late-May–early June—to cover a teacher’s actual span of time with the students.

This suggestion was rejected because the test development firm with the DC-CAS contract required three months to score some portions of the test in time for the IMPACT teacher ratings scheduled for early July delivery, before the start of the new school year. Some small number of teachers would be terminated based on their IMPACT scores, so management demanded those scores be available before preparations for the new school year began.[3] The tail wagged the dog.

2. Add some stakes to the DC-CAS in the upper grades. Because DC-CAS test scores portended consequences for teachers but none for students, some students expended little effort on the test. Indeed, extensive research on “no-stakes” (for students) tests reveals that motivation and effort vary by a range of factors including gender, ethnicity, socioeconomic class, the weather, and age. Generally, the older the student, the lower the test-taking effort. This disadvantaged some teachers in the IMPACT ratings for circumstances beyond their control: unlucky student demographics. 

Central office management rejected this suggestion to add even modest stakes to the upper grades’ DC-CAS; no reason given. 

3. Move one of the DC-BAS tests to year end. If management rejected the suggestion to move DC-CAS test administration to the end of the school year, school staff suggested scheduling one of the no-stakes DC-BAS benchmarking tests for late May–early June. As it was, the schedule squeezed all four benchmarking test administrations between early September and mid-February. Moving just one of them to the end of the year would give the following year’s teachers a more recent reading (by more than three months) of their new students’ academic levels and needs.

Central Office management rejected this suggestion probably because the real purpose of the DC-BAS was not to help teachers understand their students’ academic levels and needs, as the following will explain.

4. Change DC-BAS tests so they cover recently taught content. Many DC citizens probably assumed that, like most tests, the DC-BAS interim tests covered recently taught content, such as that covered since the previous test administration. Not so in 2010. The first annual DC-BAS was administered in early September, just after the year’s courses commenced. Moreover, it covered the same content domain—that for the entirety of the school year—as each of the next three DC-BAS tests. 

School staff proposed changing the full-year “comprehensive” content coverage of each DC-BAS test to partial-year “cumulative” coverage, so students would only be tested on what they had been taught prior to each test administration.

This suggestion, too, was rejected. Testing the same full-year comprehensive content domain produced a predictable, flattering score rise. With each DC-BAS test administration, students recognized more of the content, because they had just been exposed to more of it, so average scores predictably rose. With test scores always rising, it looked like student achievement improved steadily each year. Achieving this contrived score increase required testing students on some material to which they had not yet been exposed, both a violation of professional testing standards and a poor method for instilling student confidence. (Of course, it was also less expensive to administer essentially the same test four times a year than to develop four genuinely different tests.)

5. Synchronize the sequencing of curricular content across the District. DCPS management rhetoric circa 2010 attributed classroom-level benefits to the testing program. Teachers would know more about their students’ levels and needs and could also learn from each other. Yet, the only student test results teachers received at the beginning of each school year was half-a-year old, and most of the information they received over the course of four DC-BAS test administrations was based on not-yet-taught content.

As for cross-district teacher cooperation, unfortunately there was no cross-District coordination of common curricular sequences. Each teacher paced their subject matter however they wished and varied topical emphases according to their own personal preference.

It took DCPS’s Chief Academic Officer, Carey Wright, and her chief of staff, Dan Gordon, less than a minute to reject the suggestion to standardize topical sequencing across schools so that teachers could consult with one another in real time. Tallying up the votes: several hundred school-level District educators favored the proposal, two of Rhee’s trusted lieutenants opposed it. It lost.

6. Offer and require a keyboarding course in the early grades. DCPS was planning to convert all its testing from paper-and-pencil mode to computer delivery within a few years. Yet, keyboarding courses were rare in the early grades. Obviously, without systemwide keyboarding instruction, some students would be at a disadvantage in computer-based testing.

Suggestion rejected.

In all, I had polled over 500 DCPS school staff. Not only were all of their suggestions reasonable, some were essential in order to comply with professional assessment standards and ethics. 

Nonetheless, back at DCPS’ Central Office, each suggestion was rejected without, to my observation, any serious consideration. The rejecters included Chancellor Rhee, the head of the office of Data and Accountability—the self-titled “Data Lady,” Erin McGoldrick—and the head of the curriculum and instruction division, Carey Wright, and her chief deputy, Dan Gordon. 

Four central office staff outvoted several-hundred school staff (and my recommendations as assessment director). In each case, the changes recommended would have meant some additional work on their parts, but in return for substantial improvements in the testing program. Their rhetoric was all about helping teachers and students; but the facts were that the testing program wasn’t structured to help them.

What was the purpose of my several weeks of school visits and staff polling? To solicit “buy in” from school level staff, not feedback.

Ultimately, the new testing program proposal would incorporate all the new features requested by senior Central Office staff, no matter how burdensome, and not a single feature requested by several hundred supportive school-level staff, no matter how helpful. Like many others, I had hoped that the education reform intention of the Rhee-Henderson years was genuine. DCPS could certainly have benefitted from some genuine reform. 

Alas, much of the activity labelled “reform” was just for show, and for padding resumes. Numerous central office managers would later work for the Bill and Melinda Gates Foundation. Numerous others would work for entities supported by the Gates or aligned foundations, or in jurisdictions such as Louisiana, where ed reformers held political power. Most would be well paid. 

Their genuine accomplishments, or lack thereof, while at DCPS seemed to matter little. What mattered was the appearance of accomplishment and, above all, loyalty to the group. That loyalty required going along to get along: complicity in maintaining the façade of success while withholding any public criticism of or disagreement with other in-group members.

Unfortunately, in the United States what is commonly showcased as education reform is neither a civic enterprise nor a popular movement. Neither parents, the public, nor school-level educators have any direct influence. Rather, at the national level, US education reform is an elite, private club: a small group of tightly-connected politicos and academics, a mutual admiration society dedicated to the career advancement, political influence, and financial benefit of its members, supported by a gaggle of wealthy foundations (e.g., Gates, Walton, Broad, Wallace, Hewlett, Smith-Richardson). 

For over a decade, The Ed Reform Club exploited DC for its own benefit. Local elites formed the DC Public Education Fund (DCPEF) to sponsor education projects, such as IMPACT, which they deemed worthy. In the negotiations between the Washington Teachers’ Union and DCPS concluded in 2010, DCPEF arranged a three-year grant of $64.5 million from the Arnold, Broad, Robertson, and Walton Foundations to fund a five-year retroactive teacher pay raise in return for contract language allowing teacher excessing tied to IMPACT, which Rhee promised would lead to annual student test score increases by 2012. Projected goals were not met; foundation support continued nonetheless.

Michelle Johnson (née Rhee) now chairs the board of a charter school chain in California and occasionally collects $30,000+ in speaker fees but, otherwise, seems to have deliberately withdrawn from the limelight. Despite contributing her own additional scandals after she assumed the DCPS Chancellorship, Kaya Henderson ascended to great fame and glory with a “distinguished professorship” at Georgetown; honorary degrees from Georgetown and Catholic Universities; gigs with the Chan Zuckerberg Initiative, Broad Leadership Academy, and Teach for All; and board memberships with The Aspen Institute, The College Board, Robin Hood NYC, and Teach For America. Carey Wright is now state superintendent in Mississippi. Dan Gordon runs a 30-person consulting firm, Education Counsel, that strategically partners with major players in US education policy. The manager of the IMPACT teacher evaluation program, Jason Kamras, now works as Superintendent of the Richmond, VA public schools. 

Arguably the person most directly responsible for the recurring assessment system fiascos of the Rhee-Henderson years, then Chief of Data and Accountability Erin McGoldrick, now specializes in “data innovation” as partner and chief operating officer at an education management consulting firm. Her firm, Kitamba, strategically partners with its own panoply of major players in US education policy. Its list of recent clients includes the DC Public Charter School Board and DCPS.

If the ambitious DC central office folk who gaudily declared themselves leading education reformers were not, in fact, the real thing, who were the genuine education reformers during the Rhee-Henderson decade of massive upheaval and per-student expenditures three times those in the state of Utah? They were the school principals and staff whose practical suggestions were ignored by central office glitterati. They were whistleblowers like history teacher Erich Martel, who had documented DCPS’s manipulation of student records and phony graduation rates years before the Washington Post’s celebrated investigation of Ballou High School, and was demoted and then “excessed” by Henderson. Or school principal Adell Cothorne, who spilled the beans on test answer sheet “erasure parties” at Noyes Education Campus and lost her job under Rhee. 

Real reformers with “skin in the game” can’t play it safe.

The author appreciates the helpful comments of Mary Levy and Erich Martel in researching this article. 


The Pandemic Is Far From Over

While the rate of increase per day in the number of deaths is generally down, the COVID-19 pandemic is far from over. In general, more people are still dying each day in the US from this disease than the day before, as you can see from this data, which is taken from the CDC. The very tall bar on day 27 is when New York City finally added thousands of poor souls who had in fact died from this virus. (Day 27 means April 9, and Day 41 means April 30, which is today.)

Opening up the economy and encouraging everybody to go back to work, play, and school will mean a rebirth of exponential growth in deaths and in diagnosed cases after about 2 weeks, since this disease takes about that long to be noticed in those who have been exposed. And once everybody is back on the streets and in the stores and schools, the disease WILL spread exponentially. Opening wide right now, when we still can’t test or follow those who may be infected, would be a huge mistake.

[Graph: US COVID-19 deaths per day]

Only somebody as clueless as our current Grifter-In-Chief and his brainless acolytes could be recommending something so irresponsible, against the advice of every medical expert. Maybe they think that only the poor, the black, and the brown will get this disease. Wrong.

The shutdown, while painful, appears to have saved a LOT of lives so far

If you recall, the growth of the new coronavirus disease in the US (and many other countries) at first looked to be exponential, meaning that the number of cases (and deaths) was rising by an alarming, fixed percentage each and every single day.

Even if you slept through your high school or middle school math lessons on exponential growth, the story of the Shah and the chessboard filled with rice may have told you that the equation 2^x gets very, very hairy after a while. Pyramid schemes eventually run out of suckers. Or perhaps you have seen a relatively modest credit-card bill get way out of hand as the bank applies 8 percent interest PER MONTH, which ends up multiplying your debt by a factor of 6 after just 2 years!

(If the total number of deaths were still increasing by 25 percent per day, as they were during the middle of March, and if that trend somehow continued without slowing down, then every single person residing inside America’s borders would be dead before the end of May. Not kidding! But it’s also not happening.)

However, judging by numbers released by the CDC and reported by my former colleague Ron Jenkins, I am quite confident that THE NUMBER OF CASES AND DEATHS FROM COVID-19 ARE NO LONGER following a fixed exponential curve. Or at least, the daily rate of increase has been going down. Which is good. But it’s still not zero.

Let me show you the data and fitted curves in a number of graphs, which often make complex things easier to visualize and understand.

My first graph is the total reported number of deaths so far in the US, compared to a best-fit exponential graph:

[Graph: deaths in the US are not growing exponentially]

During the first part of this pandemic, during the first 40 or so days, the data actually fit an exponential graph pretty well – that is, the red dotted line (the exponential curve of best fit) fit the actual cumulative number of deaths (in blue). And that’s not good. However, since about day 50 (last week) the data is WAY UNDER the red dots. To give you an idea of how much of a victory that is: find day 70, which is May 9, and follow the vertical line up until it meets the red dotted line. I’ll wait.

Did you find it? If this pandemic were still following exponential growth, now and into the future, at the same rate, we would have roughly a MILLION PEOPLE DEAD BY MAY 9 in just the US, just from this disease, and 2 million the week after that, and 4 million the next week, then 8 million, then 16 million, and so on.

THAT AIN’T HAPPENIN’! YAY! HUZZAH!

As you can see — the blue and red graphs have diverged. Ignore the relatively high correlation value of 0.935 – it just ain’t so.

But what IS the curve of best fit? I don’t know, so I’ll let you look for yourself.

Is it linear?

[Graph: deaths in the US are not growing in a linear fashion]

This particular line of best fit doesn’t fit the data very well; however, if we start at day 36 or thereabouts, we could get a line that fits the data from there on pretty well, like so:

[Graph: maybe this purple line]

 

The purple line fits the blue dots quite well after about day 37 (about April 6), and the statistics algorithms quite agree. However, it still calls for over 80,000 Americans dead by May 8. I do not want the slope of that line to be positive! I want it to turn to the right and remain horizontal – meaning NOBODY ELSE DIES ANY MORE FROM THIS DISEASE.

Perhaps it’s not linear? Perhaps it’s one of those other types of equations you might remember from some algebra class, like a parabola, a cubic, or a quartic? Let’s take a look:

[Graph: deaths might be growing at a 2nd-degree polynomial rate, which is still not good]

This is a parabolic function, or a quadratic. The red dots do fit the data pretty well. Unfortunately, we want the blue dots NOT to fit that graph, because that would, once again, mean about a hundred thousand people dead by May 8. That’s better than a million, but I want the deaths to stop increasing at all. Like this piecewise function (which some of you studied). Note that the purple line cannot go back downwards, because generally speaking, dead people cannot be brought back to life.

[Graph: maybe this purple line? nah, prefer horizontal]

Well, does the data fit a cubic?

[Graph: deaths fit a cubic very well]

Unfortunately, this also fits pretty well. If it continues, we would still have about a hundred thousand dead by May 8, and the number would increase without limit (which, fortunately, is impossible).

How about a quartic (fourth-degree polynomial)? Let’s see:

[Graph: a 4th-degree polynomial is impossible, since people do NOT come back to life]

I admit that the actual data, in blue, fit the calculated quartic curve (in red) quite well, in fact, the best so far, and the number of deaths by Day 70 is the lowest so far. But it’s impossible: for the curve to go downwards like that would mean that you had ten thousand people who died, and who later came back to life. Nah, not happening.

What about logarithmic growth? That would actually be sweet – it’s a situation where a number rises quickly at first, but over time rises more and more slowly. Like this, in red:

[Graph: logarithmic growth]

I wish this described the real situation, but clearly, it does not.

One last option – a ‘power law’ where there is some fixed power of the date (in this case, the computer calculated it to be the date raised to the 5.377 power) which explains all of the deaths, like so:

[Graph: no sign of a power law]

I don’t think this fits the data very well, either. Fortunately. It’s too low from about day 29 to day 38, and is much too high from day 50 onwards. Otherwise we would be looking at about 230,000 dead by day 70 (May 8).
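For readers who want to try this at home, here is a minimal sketch of how such candidate curves can be compared numerically, using numpy and made-up cumulative totals; the numbers below are illustrative, not the CDC series behind the charts.

```python
import numpy as np

# Made-up cumulative death totals for days 20 through 60 (illustrative only):
# roughly exponential growth up to day 40, then roughly linear growth.
days = np.arange(20, 61)
deaths = 60 * np.exp(0.20 * (days - 20))
linear_part = days >= 40
deaths[linear_part] = deaths[days == 40] + 900 * (days[linear_part] - 40)

def rmse(actual, predicted):
    """Root-mean-square error of a fitted curve against the data."""
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# Exponential fit: fit a straight line to log(deaths), i.e. deaths ~ a * exp(b * day).
b, log_a = np.polyfit(days, np.log(deaths), 1)
fits = {"exponential": np.exp(log_a) * np.exp(b * days)}

# Polynomial fits: linear, quadratic (parabola), cubic, and quartic.
for degree in (1, 2, 3, 4):
    coefficients = np.polyfit(days, deaths, degree)
    fits[f"degree-{degree} polynomial"] = np.polyval(coefficients, days)

for name, fitted in fits.items():
    print(f"{name:>21}: RMSE = {rmse(deaths, fitted):,.0f}")
# A lower in-sample error means a closer fit over the observed days; as argued
# above, a close fit says nothing about whether extrapolating it forward is sensible.
```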

But saying that the entire number of deaths in the US is no longer following a single exponential curve doesn’t quite do the subject justice. Exponential growth (or decay) simply means that in any given time period, the quantity you are measuring is increasing (or decreasing) by a fixed percentage (or fraction). That’s all. And, as you can see, for the past week, the daily percentage of increase in the total number of deaths has been in the range of three to seven percent. However, during the first part of March, the rate of increase in deaths was enormous: 20 to 40 percent PER DAY. And the daily percent of increase in the number of cases was at times over A HUNDRED PERCENT!!! – which is off the chart below.

[Graph: daily percentage increases in COVID-19 cases and deaths, USA, through April 25]
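In case the phrase “daily percentage of increase” is unclear, this tiny sketch computes it from a made-up cumulative series; the totals are hypothetical.

```python
# Hypothetical cumulative death totals on five consecutive days.
cumulative = [40_000, 42_000, 44_500, 46_300, 48_600]

for yesterday, today in zip(cumulative, cumulative[1:]):
    pct_increase = (today - yesterday) / yesterday * 100
    print(f"{yesterday:,} -> {today:,}: +{pct_increase:.1f}% in one day")
```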

The situation is still not good! If we stay stuck at a daily increase in the number of deaths as low as 3% per day, then we are all dead within a year. Obviously, and fortunately, that’s probably not going to happen, but it’s a bit difficult to believe that the math works out that way.

But it does. Let me show you, using logs.

For simple round numbers, let’s say we have 50,000 poor souls who have died so far from this coronavirus in the USA right now, and that number of deaths is increasing at a rate of 3 percent per day. Let’s also say that the US has a population of about 330 million. The question is, when will we all be dead if that exponential growth keeps going on somehow? (Fortunately, it won’t.*) Here is the first equation, and then the steps I went through. Keep in mind that a growth of 3% per day means that you can multiply any day’s value by 1.03, or 103%, to get the next day’s value. Here goes:

[Image: worked solution of the equation, showing that in 10 months we are all dead]

Sound unbelievable? To check that, let us take almost any calculator and try raising the expression 1.03 to the 300th power. I think you’ll get about 7098. Now take that and multiply it by the approximate number of people dead so far in the US, namely 50,000. You’ll get about 355,000,000 – well more than the total number of Americans.
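Here is the same check done with logarithms in a few lines of Python, using the post’s rounded starting numbers (50,000 deaths, 330 million people, 3% daily growth).

```python
from math import log, ceil

deaths_now = 50_000          # rounded current death toll used above
population = 330_000_000     # approximate US population
daily_growth = 1.03          # 3% increase per day

# Solve deaths_now * 1.03**d >= population for d.
days_needed = log(population / deaths_now) / log(daily_growth)
print(f"About {ceil(days_needed)} days (~{days_needed / 30:.0f} months)")
# -> roughly 298 days, i.e. about ten months, matching the estimate above.
```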

So we still need to get that rate of increase in fatalities down, to basically zero. We are not there yet. With our current highly-incompetent national leadership, we might not.

===================================================================

* what happens in cases like this is you get sort of an s-shaped curve, called the logistic curve, in which the total number levels off after a while. That’s shown below. Still not pleasant.

I have no idea how to model this sort of problem with a logistic curve; for one thing, one would need to know what the total ‘carrying capacity’ – or total number of dead — would be if current trends continue and we are unsuccessful at stopping this virus. The epidemiologists and statisticians who make models for this sort of thing know a lot more math, stats, biology, and so on than I do, but even they are working with a whole lot of unknowns, including the rate of infectiousness, what fraction of the people feel really sick, what fraction die, whether you get immunity if you are exposed, what is the effect of different viral loads, and much more. This virus has only been out for a few months…

[Graph: logistic curve]
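For the curious, a logistic curve has the form K / (1 + e^(-r(t - t0))); here is a minimal sketch with made-up parameters, since, as noted above, nobody knows the real carrying capacity or growth rate.

```python
from math import exp

def logistic(t: float, K: float, r: float, t0: float) -> float:
    """S-shaped curve: grows almost exponentially at first, then levels off near K."""
    return K / (1 + exp(-r * (t - t0)))

# Hypothetical parameters: eventual total K, growth rate r, inflection day t0.
K, r, t0 = 100_000, 0.12, 45

for day in (10, 30, 45, 60, 90, 120):
    print(f"day {day:>3}: {logistic(day, K, r, t0):>9,.0f}")
# Early on the totals grow at a steady percentage rate; later the curve
# flattens out toward K instead of growing without limit.
```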

 

What’s the best approach – should we lock down harder, or let people start to go back to work? Some countries have had lockdowns, others have not. How will the future play out? I don’t know. I do know that before we can decide, we need to have fast, plentiful, and accurate tests, so we can quarantine just the people who are infected or are carriers, and let everybody else get back on with their lives. We are doing this lockdown simply because we have no other choice.

More on the “false positive” COVID-19 testing problem

I used my cell phone last night to go into the problem of faulty testing for COVID-19, based on a NYT article. As a result, I couldn’t make any nice tables. Let me remedy that and also look at a few more assumptions.

This table summarizes the testing results on a theoretical group of a million Americans tested, assuming that 5% of the population actually has coronavirus antibodies, and that the tests being given have a false negative rate of 10% and a false positive rate of 3%. Reminder: a ‘false negative’ result means that you are told that you don’t have any coronavirus antibodies but you actually do have them, and a ‘false positive’ result means that you are told that you DO have those antibodies, but you really do NOT. I have tried to highlight the numbers of people who get incorrect results in the color red.

Table A

Group             | Total     | Error rate | Test says Positive | Test says Negative
Actually Positive | 50,000    | 10%        | 45,000             | 5,000
Actually Negative | 950,000   | 3%         | 28,500             | 921,500
Totals            | 1,000,000 |            | 73,500             | 926,500
Accuracy Rating   |           |            | 61.2%              | 99.5%
(Percent we assume are actually positive: 5%)

As you can see, using those assumptions, if you get a lab test result that says you are positive, that will only be correct in about 61% of the time. Which means that you need to take another test, or perhaps two more tests, to see whether they agree.
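Here is a short sketch that reproduces the arithmetic behind Table A and previews why re-testing helps; the function simply restates the assumptions above (5% truly exposed, 10% false negatives, 3% false positives) and is not based on any official test data.

```python
def predictive_values(prevalence, false_neg_rate, false_pos_rate, tested=1_000_000):
    """Return (positive predictive value, negative predictive value) for a test."""
    actually_pos = tested * prevalence
    actually_neg = tested - actually_pos
    true_pos = actually_pos * (1 - false_neg_rate)   # sick people correctly flagged
    false_pos = actually_neg * false_pos_rate        # healthy people flagged positive
    true_neg = actually_neg - false_pos
    false_neg = actually_pos - true_pos
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Table A: 5% truly exposed, 10% false negatives, 3% false positives.
ppv1, npv1 = predictive_values(0.05, 0.10, 0.03)
print(f"first positive result is right {ppv1:.1%} of the time; negative, {npv1:.1%}")

# Re-test only the people who tested positive: among them the true rate of
# exposure is now ppv1, so a second positive result is far more trustworthy.
ppv2, _ = predictive_values(ppv1, 0.10, 0.03)
print(f"second consecutive positive result is right {ppv2:.1%} of the time")
```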

The next table assumes again a true 5% positive result for the population and a false negative rate of 10%, but a false positive rate of 14%.

Table B

Assume 5% really exposed, 14% false positive rate, 10% false negative
Group             | Total     | Error rate | Test says Positive | Test says Negative
Actually Positive | 50,000    | 10%        | 45,000             | 5,000
Actually Negative | 950,000   | 14%        | 133,000            | 817,000
Totals            | 1,000,000 |            | 178,000            | 822,000
Accuracy Rating   |           |            | 25.3%              | 99.4%
(Percent we assume are actually positive: 5%)

Note that in this scenario, if you get a test result that says you are positive, that is only going to be correct one-quarter of the time (25.3%)! That is useless!

Now, let’s assume a lower percentage of the population actually has the COVID-19 antibodies, say, two percent. Here are the results if we assume a 3% false positive rate:

Table C

Assume 2% really exposed, 3% false positive rate, 10% false negative
Group             | Total     | Error rate | Test says Positive | Test says Negative
Actually Positive | 20,000    | 10%        | 18,000             | 2,000
Actually Negative | 980,000   | 3%         | 29,400             | 950,600
Totals            | 1,000,000 |            | 47,400             | 952,600
Accuracy Rating   |           |            | 38.0%              | 99.8%
(Percent we assume are actually positive: 2%)

Notice that in this scenario, if you get a ‘positive’ result, it is likely to be correct only a little better than one-third of the time (38.0%).

And now let’s assume 2% actual exposure, 14% false positive, 10% false negative:

Table D

Assume 2% really exposed, 14% false positive rate, 10% false negative
Group             | Total     | Error rate | Test says Positive | Test says Negative
Actually Positive | 20,000    | 10%        | 18,000             | 2,000
Actually Negative | 980,000   | 14%        | 137,200            | 842,800
Totals            | 1,000,000 |            | 155,200            | 844,800
Accuracy Rating   |           |            | 11.6%              | 99.8%
(Percent we assume are actually positive: 2%)

Once again, the chances of a ‘positive’ test result being accurate are only about one in nine (11.6%), which means that this level of accuracy is not going to be useful to the public at large.

Final set of assumptions: 3% actual positive rate, and excellent tests with only 3% false positive and false negative rates:

Table E

Assume 3% really exposed, 3% false positive rate, 3% false negative
Group             | Total     | Error rate | Test says Positive | Test says Negative
Actually Positive | 30,000    | 3%         | 29,100             | 900
Actually Negative | 970,000   | 3%         | 29,100             | 940,900
Totals            | 1,000,000 |            | 58,200             | 941,800
Accuracy Rating   |           |            | 50.0%              | 99.9%
(Percent we assume are actually positive: 3%)

Once again, if you test positive in this scenario, that result is only going to be correct about half of the time (50.0%).

All is not lost, however. Suppose we re-test all the people who tested positive in this last group (that’s a bit over fifty-eight thousand people, in Table E). Here are the results:

Table F

Assume 50.0% really exposed, 3% false positive rate, 3% false negative
Group             | Total  | Error rate | Test says Positive | Test says Negative
Actually Positive | 29,100 | 3%         | 28,227             | 873
Actually Negative | 29,100 | 3%         | 873                | 28,227
Totals            | 58,200 |            | 29,100             | 29,100
Accuracy Rating   |        |            | 97.0%              | 97.0%
(Percent we assume are actually positive: 50.0%)

Notice that 97% accuracy rating for positive results! Much better!

What about our earlier scenario, in table B, with a 5% overall exposure rating, 14% false positives, and 10% false negatives — what if we re-test all the folks who tested positive? Here are the results:

Table G

Assume 25.3% really exposed, 14% false positive rate, 10% false negative
Group             | Total   | Error rate | Test says Positive | Test says Negative
Actually Positive | 45,000  | 10%        | 40,500             | 4,500
Actually Negative | 133,000 | 14%        | 18,620             | 114,380
Totals            | 178,000 |            | 59,120             | 118,880
Accuracy Rating   |         |            | 68.5%              | 96.2%
(Percent we assume are really positive: 25.3%)

This is still not very good: the re-test is going to be accurate only about two-thirds of the time (68.5%) when it says you really have been exposed, and would only clear you about 96% of the time. So we would need to run yet another test on those who again tested positive in Table G. If we do it, the results are here:

Table H

Assume 68.5% really exposed, 14% false positive rate, 10% false negative
Group             | Total  | Error rate | Test says Positive | Test says Negative
Actually Positive | 40,500 | 10%        | 36,450             | 4,050
Actually Negative | 18,620 | 14%        | 2,607              | 16,013
Totals            | 59,120 |            | 39,057             | 20,063
Accuracy Rating   |        |            | 93.3%              | 79.8%
(Percent we assume are really positive: 68.5%)

This result is much better, but note that this requires THREE TESTS on each of these supposedly positive people to see if they are in fact positive. It also means that if they get a ‘negative’ result, that’s likely to be correct only about four-fifths of the time (79.8%).

So, no wonder that a lot of the testing results we are seeing are difficult to interpret! This is why science requires repeated measurements to separate the truth from fiction! And it also explains some of the snafus committed by our current federal leadership in insisting on not using tests offered from abroad.

 

============

EDIT at 10:30 pm on 4/25/2020: I found a few minor mistakes and corrected them, and tried to format things more clearly.

People are Not Cattle!

This apparently did not occur to William Sanders.

He thought that statistical methods that are useful with farm animals could also be used to measure effectiveness of teachers.

I grew up on a farm, and as both a kid and a young man I had considerable experience handling cows, chickens, and sheep. (These are generic critter photos, not the actual animals we had.)

I also taught math and some science to kids like the ones shown below for over 30 years.

[Photo: teaching Deal students]

Caring for farm animals and teaching young people are not the same thing.

(Duh.)

As the saying goes: “Teaching isn’t rocket science. It’s much harder.”

I am quite sure that with careful measurements of different types of feed, medications, pasturage, and bedding, it is quite possible to figure out which mix of those elements might help or hinder the production of milk and cream from dairy cows. That’s because dairy or meat cattle (or chickens, or sheep, or pigs) are pretty simple creatures: all a farmer wants is for them to produce lots of high-quality milk, meat, wool, or eggs for the least cost to the farmer, and without getting in trouble.

William Sanders was well-known for his statistical work with dairy cows. His step into hubris and nuttiness was to translate this sort of mathematics to little humans. From Wikipedia:

“The model has prompted numerous federal lawsuits charging that the evaluation system, which is now tied to teacher pay and tenure in Tennessee, doesn’t take into account student-level variables such as growing up in poverty. In 2014, the American Statistical Association called its validity into question, and other critics have said TVAAS should not be the sole tool used to judge teachers.”

But there are several problems with this.

  • We don’t have an easily-defined and nationally agreed-upon goal for education that we can actually measure. If you don’t believe this, try asking a random set of people what they think the primary goal of education should be, and listen to all the different ideas!
  • It’s certainly not just ‘higher test scores’ — the math whizzes who brought us “collateralization of debt-swap obligations in leveraged financings” surely had exceedingly high math test scores, but I submit that their character education (as in, ‘not defrauding the public’) was lacking. In their selfishness and hubris, they have succeeded in nearly bankrupting the world economy while buying themselves multiple mansions and yachts, yet causing misery to billions living in slums around the world and millions here in the US who lost their homes and are now sleeping in their cars.
  • Is our goal also to ‘educate’ our future generations for the lowest cost? Given the prices for the best private schools and private tutors, it is clear that the wealthy believe that THEIR children should be afforded excellent educations that include very small classes, sports, drama, music, free play and exploration, foreign languages, writing, literature, a deep understanding and competency in mathematics & all of the sciences, as well as a solid grounding in the social sciences (including history, civics, and character education). Those parents realize that a good education is expensive, so they ‘throw money at the problem’. Unfortunately, the wealthy don’t want to do the same for the children of the poor.
  • Reducing the goals of education to just a student’s scores on secretive tests in just two subjects, and claiming that it’s possible to tease out the effectiveness of ANY teacher, even those who teach neither English/Language Arts nor Math, is madness.
  • Why? Study after study (not by Sanders, of course) has shown that the actual influence of any given teacher on a student accounts for only 1% to 14% of the variation in test scores. By far the greatest influence is from the student’s own family background, not the ability of a single teacher to raise test scores in April. (An effect which I have shown is chimerical: the effect one year is most likely completely different the next year!)
  • By comparison, a cow’s life is pretty simple. They eat whatever they are given (be that straw, shredded newspaper, cotton seeds, chicken poop mixed with sawdust, or even the dregs from squeezing out orange juice; no, I’m not making that up). Cows also poop, drink, pee, chew their cud, and sometimes they try to bully each other. If it’s a dairy cow, it gets milked twice a day, every day, at set times. If it’s a steer, it mostly sits around and eats (and poops and pees) until it’s time to send it off to the slaughterhouse. That’s pretty much it.
  • Gary Rubinstein and I have dissected the value-added scores for New York City public school teachers that were computed and released by the New York Times. We both found that for any given teacher who taught the same subject matter and grade level in the very same school over the period of the NYT data, there was almost NO CORRELATION between their scores for one year to the next. (A minimal sketch of this kind of year-to-year correlation check appears just after this list.)
  • We also showed that teachers who were given scores in both math and reading (say, elementary teachers), there was almost no correlation between their scores in math and in reading.
  • Furthermore, with teachers who were given scores in a single subject (say, math) but at different grade levels (say, 6th and 7th grade math), you guessed it: extremely low correlation.
  • In other words, it seemed to act like a very, very expensive and complicated random-number generator.
  • People have much, much more complicated inputs, and much more complicated outputs. Someone should have written on William Sanders’ tombstone the phrase “People are not cattle.”
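As promised in the list above, here is a minimal sketch of that kind of year-to-year correlation check, run on invented value-added scores; the real analyses used the published New York City data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical value-added scores for 200 teachers in two consecutive years.
# If the scores mostly reflected a stable "teacher effect," the two years would
# correlate strongly; here they are generated independently, which is how a
# random-number generator would behave.
year1 = rng.normal(50, 15, size=200)
year2 = rng.normal(50, 15, size=200)

r = np.corrcoef(year1, year2)[0, 1]
print(f"year-to-year correlation: r = {r:.2f}")   # close to zero for independent scores
```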

Interesting fact: Jason Kamras was considered to be the architect of Value-Added measurement for teachers in Washington, DC, implemented under the notorious and now-disgraced Michelle Rhee. However, when he left DC to become head of Richmond VA public schools, he did not bring it with him.

 

Not So Fast, Betsy DeVos!

I attended the official roll-out of the results of the 2019 National Assessment of Educational Progress (NAEP) a couple of days ago at the National Press Club here in DC on 14th Street NW, and listened to the current education secretary, Betsy DeVos, slam public schools and their administrators as having accomplished nothing while spending tons of money. She and other speakers held up DC, Mississippi, and Florida as examples to follow. DeVos basically advocated abandoning public schools altogether, in favor of giving each parent a “backpack full of cash” to do whatever they want with.

Some other education activists I know here in DC shared their thoughts with me, and I decided to look at the results for DC’s white, black, and Hispanic students over time as reported on the NAEP’s official site. (You can find them here, but be prepared to do quite a bit of work to get them and make sense out of them!)

I found that it is true that DC’s recent increases in scores on the NAEP for all students, and for black and Hispanic students, are higher than in other jurisdictions.

However, I also found that those increases were happening at a HIGHER rate BEFORE DC’s mayor was given total control of DC’s public schools; BEFORE the appointment of Michelle Rhee; and BEFORE the massive DC expansion of charter schools.

Here are two graphs (which I think show a lot more than a table does) which give ‘average scale scores’ for black students in math at grades 4 and 8 in DC, in all large US cities, and in the nation as a whole. I have drawn a vertical red line at the year 2008, separating the era before mayoral control of schools (when we had an elected school board) and the era afterwards (starting with appointed chancellor Michelle Rhee and including a massive expansion of the charter school sector). These results include both regular DC Public School students and the charter school sector, but not the private schools.

I asked Excel to fit linear trend lines to the average scale scores for black students in DC, first for 1996 through 2007, and then for 2009 through 2019. It wasn’t obvious to my naked eye, but the improvement rates, i.e. the slopes of those lines, were TWICE AS HIGH before mayoral control. At the 4th grade level, the improvement rate was 2.69 points per year BEFORE mayoral control, but only 1.34 points per year afterwards.

Yes, that is a two-to-one ratio AGAINST mayoral control & massive charter expansion.

At the 8th grade level, same time span, the slope was 1.53 points per year before mayoral control, but 0.77 points per year afterwards.

Again, almost exactly a two-to-one ratio AGAINST the status quo that we have today.
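
For readers who want to reproduce this kind of slope comparison without Excel, here is a minimal sketch in Python. The year/score pairs are invented placeholders standing in for the averages you would pull from the NAEP Data Explorer; the function simply fits a least-squares line to each era and reports its slope in scale-score points per year.

```python
# A minimal sketch of the pre/post-2008 slope comparison described above.
# The scores below are invented placeholders, not the actual NAEP averages.
import numpy as np

# (year: average scale score) for one subgroup, split at mayoral control in 2008.
pre_2008  = {1996: 187, 2000: 193, 2003: 202, 2005: 207, 2007: 209}
post_2008 = {2009: 210, 2011: 213, 2013: 215, 2015: 216, 2017: 218, 2019: 220}

def points_per_year(series):
    """Least-squares slope of score vs. year, in scale-score points per year."""
    years  = np.array(list(series.keys()), dtype=float)
    scores = np.array(list(series.values()), dtype=float)
    slope, _intercept = np.polyfit(years, scores, 1)
    return slope

print(f"before mayoral control: {points_per_year(pre_2008):.2f} points/year")
print(f"after mayoral control:  {points_per_year(post_2008):.2f} points/year")
```

With the real NAEP averages plugged in, this is the calculation that yields the 2.69 vs. 1.34 and 1.53 vs. 0.77 slopes quoted above.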

[Graph: NAEP math, grade 4, average scale scores for black students in DC, large US cities, and the nation, pre- and post-Rhee]

[Graph: NAEP math, grade 8, average scale scores for black students in DC, large US cities, and the nation, pre- and post-Rhee]

Charter schools do NOT get better NAEP test results than regular public schools

It is not easy to find comparisons between charter schools and regular public schools, partly because charter schools are not required to be nearly as transparent or accountable as regular public schools (not in their finances, not in responding to public-records requests, not in student or teacher disciplinary data, and so on). At the state or district level, it has in the past been hard or impossible to find comparative data on the NAEP (National Assessment of Educational Progress).

We all have heard the propaganda that charter and voucher schools are so much better than regular public schools, because they supposedly get superior test scores and aren’t under the thumb of those imaginary ‘teacher union thugs’.

However, NCES has released results where they actually make this comparison. Guess what: there is next to no difference between the scores of US charter schools and those of regular public schools on the NAEP, in both reading and math, at either the 4th or 8th grade level! In fact, at the 12th grade, regular public schools seem to outscore the charter schools by a significant margin.

Take a look at the two graphs below, which I copied and pasted from the NCES website. The only change I made was to color the bar representing the charter schools orange. Note that there is no data available for private schools as a whole.

[Graph: NAEP math, public vs. charter vs. Catholic schools]

If you aren’t good at reading graphs, the one above says that on a 500-point scale, in 2017 (which was the last year for which we have results), at the 4th grade, regular public school students scored an average of 239 points in math, three points higher than charter school students (probably not a significant difference). At the 8th grade level, the two groups scored identically: 282 points. At the 12th grade, in 2015, regular public school students outscored charter school students by a score of 150 to 133 on a 300-point scale (I suspect that difference IS statistically significant). We have no results from private schools, but Catholic schools do have higher scores than the public or charter schools.

The next graph is for reading. At the 4th grade, charter school students in 2017 outscored regular public school students by a totally insignificant 1 point (222 to 221 on a 500-point scale), and the same thing happened at the 8th grade level (266 to 265 on a 500-point scale). However, at the 12th grade, the regular public school students outscored their charter school counterparts by a score of 285 to 269, which I bet is significant.
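
Whether a gap like 285 versus 269 is statistically significant depends on the standard errors NAEP publishes alongside every average. Here is a minimal sketch of the usual two-group z-test; the standard errors in it are invented placeholders (the real ones are on the NCES site), and it assumes scipy is available.

```python
# A minimal sketch of a significance check on a NAEP score gap.
# The standard errors are invented placeholders, not NCES's published values.
from math import sqrt
from scipy.stats import norm

def naep_gap_test(mean_a, se_a, mean_b, se_b):
    """Two-independent-groups z test on two NAEP average scale scores."""
    z = (mean_a - mean_b) / sqrt(se_a**2 + se_b**2)
    p = 2 * norm.sf(abs(z))  # two-tailed p-value
    return z, p

# Example: 12th-grade reading, public 285 vs. charter 269, hypothetical SEs.
z, p = naep_gap_test(285, 0.8, 269, 2.5)
print(f"z = {z:.1f}, p = {p:.3g}")
```

A one-point gap with typical NAEP standard errors would fail this test; a sixteen-point gap almost certainly would not.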

[Graph: NAEP reading, 2017, charter vs. public vs. Catholic schools]


Why A New Generation of Teachers is Angry at Self-Styled Education ‘Reformers’

This is an excellent essay at Medium that I learned about from Peter Greene of Curmudgucation. I have copied and pasted it here in its entirety in case you don’t want to sign in to Medium.

Why New Educators Resent “Reformers”

Let’s consider why so many young educators today are in open rebellion.

How did we lose patience with politicians and policymakers who dominated nearly every education reform debate for more than a generation?

Recall first that both political parties called us “a nation at risk,” fretted endlessly that we “leave no child behind,” and required us to compete in their “race to the top.”

They told us our problems could be solved if we “teach for America,” introduce “disruptive technology,” and ditch the textbook to become “real world,” 21st century, “college and career ready.”

They condemned community public schools for not letting parents “choose,” but promptly mandated a top-down “common core” curriculum. They flooded us with standardized tests guaranteeing “accountability.” They fetishized choice, chopped up high schools, and re-stigmatized racial integration.

They blamed students who lacked “grit,” teachers who sought tenure, and parents who knew too much. They declared school funding isn’t the problem, an elected school board is an obstacle, and philanthropists know best.

They told us the same public schools that once inspired great poetry, art, and music, put us on the moon, and initiated several civil rights movements needed to be split, gutted, or shuttered.

They invented new school names like “Green Renaissance College-Prep Academy for Character, the Arts, and Scientific Careers” and “Hope-Horizon Enterprise Charter Preparatory School for New STEM Futures.” They replaced the district superintendent with the “Chief Educational Officer.”

They published self-fulfilling prophecies connecting zip-coded school ratings, teacher performance scores, and real estate values. They viewed Brown v. Board as skin-deep and sentimental, instead of an essential mandate for democracy.

They implied “critical thinking” was possible without the Humanities, that STEM alone makes us vocationally relevant, and that “coding” should replace recess time. They cut teacher pay, lowered employment qualifications, and peddled the myth anyone can teach.

They celebrated school recycling programs that left consumption unquestioned, gave lip-service to “student-centered civic engagement” while stifling protest, and talked up “multiple intelligences” while defunding the arts.

They instructed critics to look past poverty, inequality, residential segregation, mass incarceration, homelessness, and college debt to focus on a few heartwarming (and yes, legitimate) stories of student resilience and pluck.

They expected us to believe that a lazy public-school teacher whose students fail to make “adequate yearly progress” was endemic but that an administrator bilking an online academy or for-profit charter school was “one bad apple.”

They designed education conferences on “data-driven instruction,” “rigorous assessment,” and “differentiated learning” but showed little patience for studies that correlate student performance with poverty, trauma, a school-to-prison pipeline, and the decimation of community schools.

They promised new classroom technology to bridge the “digital divide” between rich, poor, urban, and rural, while consolidating corporate headquarters in a few elite cities. They advertised now-debunked “value-added” standardized testing for stockholder gain as teacher salaries stagnated.

They preached “cooperative learning” while sending their own kids to private schools. They saw alma mater endowments balloon while donating little to the places most Americans earn degrees. They published op-eds to end affirmative action but still checked the legacy box on college applications.

They were legitimately surprised when thousands of teachers in the reddest, least unionized states walked out of class last year.

Meanwhile……

The No Child Left Behind generation continues to bear the fullest weight of this malpractice, paying a steep price for today’s parallel rise in ignorance and intolerance.

We are the children of the education reformer’s empty promises. We watched the few decide for the many how schools should operate. We saw celebrated new technologies outpace civic capacity and moral imagination. We have reason to doubt.

We are the inheritors of “alternative facts” and “fake news.” We have watched democratic institutions crumble, conspiracies normalized, and authoritarianism mainstreamed. We have seen climate change denied at the highest levels of government.

We still see too many of our black brothers and sisters targeted by law enforcement. We watched as our neighbor’s promised DACA protections were rescinded and saw the deporters break down their doors. We see basic human rights for our LGBTQ peers refused in the name of “science.”

We have seen the “Southern strategy” deprive rural red state voters of educational opportunity before dividing, exploiting, and dog whistling. We hear climate science mocked and watch women’s freedom erode. We hear mental health discussed only after school shootings.

We’ve seen two endless wars and watched deployed family members and friends miss out on college. Even the battles we don’t see remind us that bombs inevitably fall on schools. And we know war imposes a deadly opportunity tax on the youngest of civilians and female teachers.

Against this backdrop we recall how reformers caricatured our teachers as overpaid, summer-loving, and entitled. We resent how our hard-working mentors were demoralized and forced into resignation or early retirement.

Our collective experience is precisely why we aren’t ideologues. We know the issues are complex. And unlike the reformers, we don’t claim to have the answers. We simply believe that education can and must be more humane than this. We plan to make it so.

We learned most from the warrior educators who saw through the reform facade. Our heroes breathed life into institutions, energized our classrooms, reminded us what we are worth, and pointed us in new directions. We plan to become these educators too.

Some debate in Chevy Chase (DC) on significance of latest NAEP scores …

On a local DC list-serve for the region where I last taught (and also went to Junior High School), I posted this:

==========================================================

Those of us with kids in Chevy Chase – DC, either now, in the future, or in the past, have seen many changes in education here in DC, especially since 2007, when the elected board of education was stripped of all powers under PERAA and Chancellor Rhee was appointed by Mayor Fenty.
[I personally went to Junior High School here at Deal back in the early 1960s, taught math in DCPS from 1978 to 2009, including 15 years at Deal (much to my surprise), and my own kids went K-12 in DCPS, graduating from Walls and Banneker, respectively.]
Was mayoral control of schools in DC a success? Is the hype we have all heard about rising test scores for real?
We now have statistics from NAEP* for about two decades, and we can compare scores for various subgroups before and after that 2007 milestone.
Did Black students make faster improvements after PERAA than beforehand? Nope. To the contrary: their scores were inching up faster *before* 2007 than they have been doing since that time.
Did Hispanic students make faster improvements under the reformers? Nope, again.
How about students whose parent(s) didn’t graduate high school, and/or those who finished grade 12 but either never went to college or else didn’t earn a degree – surely they did better after Rhee, Henderson et al. took over? Again, no.
Then what group of students in Washington DC *did* make more progress on the NAEP after the Reformers took over?
You guessed it, I bet:
White students, and students with parents who earned a college degree.
Amazing.
Guy Brandenburg
*National Assessment of Educational Progress
======================================================================
Another person contested my assessment and wrote the following:
=======================================================================
The NAEP is cross-sectional data, i.e. it does nothing to adjust for changes in composition of test-takers over time, which is why Steve Glazerman refers to comparisons of NAEP scores over time as “misNAEPery” [https://ggwash.org/view/31061/bad-advocacy-research-abounds-on-school-reform] and I have referred to the same thing as “jackaNAEPery” [https://www.urban.org/urban-wire/how-good-are-dcs-schools].
There has been a dramatic, even shocking, compositional change since 2000 in births across the city, entering cohorts of students, and exit rates from DC schools and the city.
Most noticeably in NW, better educated parents are substantially more likely to have kids in DC, enroll them in DC public schools, and stay past 3rd grade.
Any analysis of test score change needs to grapple with that compositional change.
But more importantly, the compositional change itself is a policy outcome of note, which the DC Council and Mayor have an interest in promoting.
The only evidence one should accept must *at minimum* use longitudinal data on students to compute *learning* as opposed to static achievement, e.g. this analysis of 2008 school closures:
A lot of other things happened 1996-2008 of course, including a rapid expansion of charters, a shrinking proportion of DC residents attending private schools, etc. In 2008 alone, a lot of Catholic schools closed, and some converted to public charter schools.
During this time, we also had a voucher program that produced some gains early on, and then began to lower test scores relative to public options:
All of this is not to say DCPS and charter schools shouldn’t serve less advantaged students better than they do–obviously they should! But the evidence is nuanced, and DC has made huge gains across the board since the 1990’s that make attributing any changes to policy rather than shifting population composition problematic at best.
Interestingly, the NAEP data explorer [https://www.nationsreportcard.gov/ndecore/xplore/nde] does not report scores for white 8th graders in 1990, 1992, and 1996, presumably because too few were tested. I.e. the means by race show a lot of “‡ Reporting standards not met.”
[I personally attended DCPS (Hyde, Hardy, and School Without Walls) from 1976 to 1989, and have 2 children currently at Deal and SWW.]
Austin Nichols
========================================================================
I wrote a response to Nichols, but it hasn’t been posted yet, and might never be:
========================================================================
My previous reply got lost somewhere in cyberspace.
If looking at long-term trends in the NAEP and TUDA is ‘misNAEPery’ or ‘jackaNAEPery’, as Mr Nichols would have us believe, then the entire NAEP bureaucracy has been doing just that. (In fact, an entire branch of the National Center for Education Statistics is devoted to, yes, Long-Term Trends: https://nces.ed.gov/nationsreportcard/ltt/ )
It’s a laughable idea that we could just use the tests chosen by DCPS, and later by OSSE, and administered every year, to tell how good DC’s public or charter schools are over time. First of all, the tests administered here have changed dramatically. Back in the 1990s it was the CTBS. Then it was the SAT-9, developed by a different company. Then it was the DC-CAS, again from a different vendor. Now we have the PARCC, produced by yet another vendor. We also know that in the past there has been major fraud with these tests, committed by adults in order to gain bonuses and keep their jobs. We also have no way of comparing DC with any other city or state using those tests, since only a handful of states even use the PARCC, and for all I know their cut scores and questions might be different from what we use here in DC.
The idea of measuring median student improvement from year to year might appear to have some merit, until you talk to the students and teachers involved. You discover that many of the older students see no reason to take the tests seriously; they bubble in, or click on, answers as fast as possible, without reading the questions, in order to be free to leave the room and go do something else. Any results from such a test are simply unreliable, and it is not possible to tell whether DC education policies have improved over time based on the PARCC, DC-CAS, SAT-9, or CTBS, no matter what sort of fancy statistical procedures are employed.
With the NAEP, on the other hand, there has never been any suggestion of impropriety, and the same agency has been devising, administering, and scoring these tests for decades. We have no other nation-wide test that has been systematically given to a random sample of students for any length of time.
Obviously the 4th or 8th graders who took the NAEP in 2017 were not the same ones who took it in 2015. (Duh!) However, we do in fact have a record of NAEP scores in every state and DC since the 1990s, and they are also broken down by lots of subgroups. Obviously DC is gentrifying rapidly, and there are more white students in DCPS than there were 10 or 20 years ago. If we trace the various subgroups (say, African-American students, or Hispanics, or students whose parents didn’t finish high school, or whatever group you like), we can watch the trends over time in each subgroup. However, Mr Nichols does inadvertently raise one valid point: since the proportion of black students in DC is decreasing, and the proportion of white students with college-educated parents is rising, the natural conclusion would be that this gentrification has *inflated* overall scores for 4th and 8th grade students in DC (and DCPS), especially since 2007. Which is more evidence that ‘reform’ is not working. Not evidence that we should throw the scores out and ignore them completely.
Those trends show something quite different from what Mayor Bowser keeps proclaiming. For one thing, if you look at the simple graphs that I made (and you can examine the numbers yourselves), you can see that any improvements overall in DC, or for any subgroups, began a decade before the ‘reformers’ took over DC schools (see https://bit.ly/2K3UyZ1 to begin poking around). Secondly, for most of the subgroups, those improvements over time were greater before Rhee was anointed Chancellor. Only two groups had better rates of change AFTER Rhee: white students, and those with parents with college degrees – the ones that are inflating overall scores for DC and DCPS during the last decade.
I would note also that the previous writer’s salary is paid by one of the Reform organizations supported by billionaires Gates and Arnold. You can look at the funding page yourself (page 3 at https://urbn.is/2II1YQQ). I suspect that when ‘reform’ advocates say not to look at our one consistent source of educational data, it’s because they don’t like what the data is saying.
Guy Brandenburg