Details on how and why adults cheated in Atlanta Public Schools

A must-read article in the New Yorker on exactly how teachers and administrators cheated on the NCLB tests in Atlanta, Georgia.

http://www.newyorker.com/reporting/2014/07/21/140721fa_fact_aviv?currentPage=all

 

Even more on the DC-CAS and the NAEP

In this installment, I’ll look at the reading scores for the District of Columbia as reported on the National Assessment of Educational Progress (NAEP) and by DC’s Office of the State Superintendent of Education (OSSE) by way of the DC-CAS (Comprehensive Assessment System), written and published by CTB/McGraw-Hill.

You may have noticed that I’ve been reporting ‘Average Scale Scores’ on both tests, rather than achievement levels — proficient, advanced, below basic and basic. It was suggested to me that this would allow us to compare the two tests (national and ‘state’) more directly, since the decision of what score falls in which category is so obviously open to negotiation and ‘fudging’ by powers-that-be.

In any case, I will continue giving Average Scale Scores, first for 4th graders in reading:

naep + dccas for 4th grade reading compared

As before, the blue line shows the average NAEP scale score for DC’s fourth-graders in reading, divided by five so that it fits on the same grid as the DC-CAS scores for the same subject and grade level. As before, the DC-CAS scale scores for the fourth grade go from 400 to 499, which I treat as going from 0 to 100, while the NAEP scores go from 0 to 500. As before, I had to find these scores in a variety of places; I gave the URLs in the previous post. Also, for a couple of years I found two different reported scores for the same year, so I plotted both of them.
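
For anyone who wants to rebuild this kind of overlay, here is a minimal sketch in Python. The score values in it are placeholders, not the actual reported numbers, so substitute the real figures from the NAEP and OSSE sites before drawing any conclusions.

```python
# Minimal sketch of the rescaling used in these graphs.
# The score values below are PLACEHOLDERS for illustration only;
# substitute the actual figures from the NAEP and OSSE reports.
import matplotlib.pyplot as plt

years            = [2007, 2009, 2011, 2013]
naep_4th_reading = [197, 202, 201, 206]   # placeholder NAEP average scale scores (0-500 scale)
dccas_4th_read   = [438, 444, 443, 447]   # placeholder DC-CAS average scale scores (400-499 band)

naep_rescaled  = [score / 5 for score in naep_4th_reading]   # 0-500 scale mapped onto 0-100
dccas_rescaled = [score - 400 for score in dccas_4th_read]   # 400-499 band mapped onto 0-100

plt.plot(years, naep_rescaled, 'b-o', label='NAEP (divided by 5)')
plt.plot(years, dccas_rescaled, 'r-s', label='DC-CAS (minus 400)')
plt.xlabel('Year')
plt.ylabel('Rescaled average scale score (0-100)')
plt.legend()
plt.show()
```

Those same two transformations (divide the NAEP score by 5, subtract the grade-band offset from the DC-CAS score) are all that every graph in this series relies on.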

You will notice that since about 1996, which is nearly two decades ago, the NAEP scores for successive cohorts of Washington, DC fourth-graders have been rising slowly but more or less steadily, and there does not seem to be much difference in that progress before mayoral control of the DC schools (labeled “Pre-Rhee”) and after it (labeled “Post-Rhee”).

That’s the blue line.

However, the red line, representing the locally funded DC-CAS tests for the same grade level, shows much more volatility and much more overall growth, with the jump from 2007 to 2008 being the most suspicious of all, given what we now know about the degree to which Michelle Rhee instructed each and every principal in DCPS to magically raise test scores or get fired.

Once again, I would much prefer to rely on the federal National Center for Education Statistics than on CTB/McGraw-Hill or the very political appointees to DC’s OSSE.

Lastly, I present an image with the same pair of graphs, but for 8th grade reading:

naep + dccas for 8th grade reading compared

The same comments apply here as to each of the other three tests. NAEP scores show a small amount of steady growth since 1998 (16 years ago), whereas the DC-CAS seems to show growth that is less even but much more impressive since 2006.

As usual, I would very much discount what Mayor Gray, Chancellor Henderson, OSSE, or CTB/McGraw-Hill have to say. I would recommend that you put much more trust in the federal civil servants at NCES and NAEP.

What about you?

==================

Links to my other articles on this:

Part One

Part Two

Part Three (all reading) — this one right here

Past Performance (on DC’s NAEP and its own CAS tests) Can Give Insight Into Future Performance

Sometimes, looking at the past gives you lots of clues about the future.

From looking at past evidence about the scores of Washington, DC students on both the National Assessment of Educational Progress (NAEP) and on its locally-mandated test (currently known as DC-CAS, or Comprehensive Assessment System), we can make some predictions about the importance (or lack of it) of the results for the 2014 CAS, which were administered about three months ago.

My prediction: the CAS results don’t really matter, one way or the other, because they are so volatile and do not correlate very well with the much more reliable NAEP. Besides the fact that there have been many documented cases of cheating by adults on the DC-CAS for their own personal gain, it almost seems as if the DC-CAS is designed to be manipulated for political and ideological gain, for two main reasons:

(1) In some subjects the DC-CAS has displayed enormous year-to-year score increases that are not at all reflected in the cheat-proof NAEP.

(2) Those enormous increases happened despite the fact that between 34% and 42% of all questions are non-scored “anchor” questions, designed simply to see whether the test gives consistent results from year to year, according to the testimony of Emily Durso last September before David Catania’s DC Council subcommittee on education. Those questions aren’t scored!

Think about that: something like a third to three-sevenths of all the questions a student is forced to answer aren’t even used to grade the students or teachers. They’re used to help out the testing company.

Which produces unreliable results anyway.

(And even though CTB/McGraw-Hill has quite a variety of ways of using statistics to detect cheating by kids or adults, DCPS won’t pay for them to use those methods, or so I was told by CTB’s head econometrician.)

As promised, here is some of the evidence.

First, we have the DC-CAS (red) and NAEP (blue) scores over the past quarter-century in DC, for 8th grade math students:

naep + dccas 8th grade math scores compared

Once again, the NAEP scores (in blue) for DC’s 8th graders since 1990 seem to show more-or-less steady growth, especially since the year 2000.

(Keep in mind that in order to plot the NAEP and DC-CAS scores on the same grid, and since the NAEP scores go from 0 to 500 while the DC-CAS scores go from 0 to 100, I divided the NAEP scores by 5.)

The DC-CAS was first administered in 2006, so there are no records from before that year. Note that there are two scores given for the year 2009 (46.0 and 49.8), which are pretty far apart; I can only guess as to why they contradict each other. The two sources are here and here. For the NAEP, the reason for the slightly different scores is quite straightforward: one version of the test allowed accommodations for disabilities, and the other did not.

In any case, up until 2011, the gains made by successive cohorts of 8th graders on the DC-CAS math test — students I often taught when I was a teacher in DCPS — were fairly spectacular, but not at all mirrored by the slow, steady progress shown on the NAEP. Since then, those scores have been fairly flat, especially when we remember that the CAS is scored from 0 to 100 (or, to be technical, from 800 to 899 for the 8th grade; the fourth-grade tests are scored from 400 to 499, and the 10th-grade tests from 900 to 999, probably because they didn’t want to add a fourth digit).

======

Here are the links to the other parts of this article:

Part One  (fourth grade math)

Part Two (8th grade math) — this one right here

Part Three (all reading)

My Predictions for the 2014 DC-CAS Scores

Sometime this month, the Mayor of DC and the Chancellor of the DC Public Schools will make some sort of announcement on how DC public and charter school students did on the DC-CAS (Comprehensive Assessment System) – the test required by Federal law to be given to every single kid in grades 3 through 8 and in grade 10.

I don’t have a crystal ball, and I haven’t developed any sources willing to risk their jobs by leaking the results to me in advance, but I can make a few predictions:

1. If the results look bad, they will be released right before a holiday or a weekend (a basic public-relations tactic that all public officials learn).

2. If the scores as a whole look good, or if some part of the trends looks good, that part will be highlighted heavily.

3. There won’t be much of a correlation between the trends on the DC-CAS and those on the National Assessment of Educational Progress (NAEP), which has been measuring student achievement in grades 4 and 8 in reading and math since the 1970s by giving a carefully selected sample of students in DC and across the nation a variety of test items in math, reading, and a number of other subjects.

4. Even though the DC-CAS results won’t be released to the public for a couple more weeks, clearly DCPS officials and Mathematica staff already have them; they have been firing teachers and principals and “adjusting” – with the benefit of hindsight – the rest of their evaluations to fit the DC-CAS scores and the magic secret formula called “Value Added Magic Measurement”.

You may ask, how can GFBrandenburg predict not much of a match between the DC-CAS and the NAEP?

By looking at the track record, which I will share with you.

I present the average scores of all DC students on both the DC-CAS and on the NAEP over the past quarter-century. The NAEP scores for the District of Columbia have either been pretty steady or have been rising slightly.

As far as I can tell, the statisticians at the National Center for Education Statistics (NCES) who design, administer, and score the NAEP do a fine job of

A. making sure that there is no cheating by either students or adults,

B.  making up good questions that measure important topics, and

C. gathering, collating, and reporting the data in an honest manner.

On the DC-CAS, however, we have had many documented cases of cheating (see point A), I have shown that many of the questions are ridiculous and don’t measure what we teachers were supposed to be teaching (see point B), and I hope to show you that whatever they are doing with the scores does not seem to be trustworthy (point C).

Exhibit number one is a graph where I plot the average scale scores of the students in Washington DC on both the NAEP and on the DC-CAS for fourth grade math:

naep + dccas 4th grade math comparison

Allow me to explain.

The bottom blue curve shows DC fourth-graders’ average scale scores on the NAEP, starting in 1992 and continuing through 2013. As you can see, since 1996 there has been what appears to be more-or-less steady improvement.

(It is very hard, in fact, to see much of a difference in trends before mayoral control over the DC schools and after that time. I drew a vertical black line to separate the “Pre-Rhee” era from the “Post-Rhee” era, since Michelle Rhee was the very first Chancellor installed in the DC schools, after the annual tests were given in 2007.)

(As noted,  the NAEP scale scores go from 0 to 500, but the DC-CAS scores go from 0 to 100. I decided that the easiest way to have them both fit on the same graph would simply be to divide the NAEP scores by 5. The actual reported NAEP scores are in the little table, if you want to examine them for yourself. You can double-check my numbers by looking around at the NAEP and DC OSSE websites — which are unfortunately not easy to navigate, so good luck, and be persistent! You will also find that some years have two different scores reported, which is why I put those double asterisks at a couple of places on those curves.)

But here’s what’s really suspicious: the DC-CAS scores, shown in red, jump around wildly, appearing to show tremendous progress overall but also utterly unheralded drops.

Which is it?

Slow, steady progress since 1996, or an amazing jump as soon as Wonder Woman Rhee comes on the scene?

In my opinion, I’d much rather trust the feds on this. We know that there has been all sorts of hanky-panky with the DC-CAS, as repeatedly documented in many places. I know for a fact that we math teachers have been changing the ways that we teach, to be more in line with the 1989 NCTM standards and the ways that math is tested on the NAEP. It’s also the case that there has been significant gentrification in DC, with the proportion of white kids with highly educated parents rising fairly steadily.

Slow improvement in math scores, going back a quarter of a century, makes sense.

Wild jumps don’t seem reasonable to me at all.

On the contrary, besides the known mass cheating episodes, it almost seems like DC education officials get together with CTB/McGraw-Hill, which manufactures the DC-CAS, and decide how they want the scores to come out. THEN they decide which questions to count and which ones NOT to count, and what the cut-off scores will be for ‘advanced’, ‘proficient’ and so on.

Next time: 8th grade math; and 4th and 8th grade reading.

=======

Links to my other articles on this:

Part One  (fourth grade math)— this one right here

Part Two (8th grade math)

Part Three (all reading)

Higher Urban Charter School Market Share Linked to Lower NAEP Test Scores

With over a decade of data, we can now answer quite a few questions about the billionaire-led, so-called education reform that has shuttered so many schools, atomized low-income neighborhoods that used to be centered around their local schools, created so much churn in the teaching profession, and turned many urban schools into all-test-prep, all the time.

We can now tell whether students are actually learning more on national tests like the NAEP (National Assessment of Educational Progress) as a result of No Child Left Behind, Race to the Top, and the Common Core.

Along with many other researchers and commentators, I have been showing repeatedly, on this blog, that the answer is, “No.”

My latest piece of evidence comes from two sources: the NAEP Trial Urban District Assessment (TUDA) test scores for 21 urban school systems, published by the National Center for Education Statistics (NCES), and data on the percentage of students enrolled in charter schools in those cities, published by the National Alliance for Charter Schools.

We have been repeatedly told that more charter schools means better education for all; supposedly the competition with the charter schools will cause the regular public school systems to improve dramatically as well.

So if you plot the “market share” of charter schools in a bunch of cities against their NAEP math or reading scores in the 4th or 8th grade, you should see a strong positive correlation, something like the data I invented in the graph below:

hypothetical charter market share vs test score graph 2

Imaginary data showing that higher market share for charter schools (y-axis) is positively correlated to higher NAEP test scores (x-axis)
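
I have no idea what tool produced the imaginary graph above, but for the curious, here is one way to gin up a synthetic, positively correlated scatter of the same general shape. Every number in it is invented by construction.

```python
# Sketch: generate imaginary, positively correlated "score vs. market share" points
# like the hypothetical graph above. All of these values are invented on purpose.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
naep_scores  = rng.uniform(210, 270, size=20)                         # pretend city NAEP averages (x-axis)
market_share = 1.2 * (naep_scores - 210) + rng.normal(0, 8, size=20)  # noisy upward trend (y-axis, percent)

plt.scatter(naep_scores, market_share)
plt.xlabel('NAEP average scale score (imaginary)')
plt.ylabel('Charter market share, percent (imaginary)')
plt.title("The reformers' prediction, drawn with invented data")
plt.show()
```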

Well, it happens to be the other way around.

We do not see strongly focused scatterplots with linear correlations going up and to the right.

We instead see strong trendlines going DOWN and to the right.

Yup: for the 17 large cities for which both the NAEP TUDA and the National Alliance for Charter Schools have data, the higher the fraction of students in charter schools, the worse the kids in the regular public schools do on the NAEP in 4th and 8th grade reading and math.

For example:

This greenish scatterplot has the “market share” of charter school students in these 21 cities on the y-axis on the left, and the NAEP grade 4 math average scale score for that entire city along the x-axis on the bottom. It’s quite clear that higher NAEP scores are linked with lower charter school enrollments.

naep 4th gr math vs charter market share

 

In the graph above, the value of R-squared is 0.454, and the value of R (the coefficient of correlation) is 0.6738. For completeness, I also plotted the average score for all US urban students (238) and the average for all US public school students (241). The topmost blue dot on the left represents Detroit, and the next highest dot, at about 43% market share, is Washington, DC. The dot near the bottom center at a NAEP score of 220 is Fresno. The system at the very bottom, with a score of 234, is Louisville KY (aka Jefferson County).
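
If you want to check this sort of calculation yourself, R and R-squared can be computed directly from the paired city values. A minimal sketch, with obviously made-up placeholder pairs standing in for the real city data:

```python
# Sketch: compute the correlation coefficient R and R-squared for paired data.
# The pairs below are PLACEHOLDERS; substitute the actual (NAEP score, charter
# market share) values for the cities in the scatterplot.
import numpy as np

naep_gr4_math = np.array([204, 220, 229, 234, 236, 245])   # placeholder city NAEP averages
charter_share = np.array([55, 20, 43, 5, 25, 10])          # placeholder market shares, percent

r = np.corrcoef(naep_gr4_math, charter_share)[0, 1]
print(f"R   = {r:.4f}")       # comes out negative: scores fall as market share rises
print(f"R^2 = {r ** 2:.4f}")  # the same whether you quote R as positive or negative
```

(Since the real trend runs downhill, R is properly negative; the 0.6738 quoted above is its magnitude.)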

In the next graph (tan/blue), we see the exact same data, only for 8th grade:

naep gr 8 math vs charter market share

In this one, R is almost 0.7 and R-squared is about 0.49, again a quite strong correlation.

In the next graph (gray and tan), we see the same data, only for 4th grade reading:

naep 4th grade reading vs charter market share

 

The very alert reader may notice that this is the graph whose layout I borrowed to make up those phony statistics earlier, the ones that DO NOT EXIST but are predicted by many pundits who haven’t been in a public school classroom for a very long time.

My final graph for today shows the same thing, but for 8th grade reading.

naep 8th grade reading vs charter market share

 

All of the correlations have been rather strong, but this one is the strongest of all.

Of course, correlation isn’t necessarily causation. We don’t know from the data alone which factor causes the other, or if there is a third factor causing both changes.

But in any case, the argument that charter schools and choice — as defined by Gates, Walton, Rhee and Duncan — would inherently lift all boats is definitely demolished.

==========================================

Unfortunately, we don’t have disaggregated average NAEP charter school scale scores for these 21 cities. Charter schools used to be included in each city’s scores, but in most cases that reporting stopped in 2009, so we only have the scores for the kids in the regular public schools, not the charter, voucher, private, or parochial students in these cities. (In a previous post, I tried to calculate the charter school NAEP average scale scores for Washington, DC, and they mostly agree with what Erich Martel calculated, but I’m not 100% sure about them yet because I don’t know whether private school scores are, or are not, included with the charter school scores. And I don’t have any data that would allow me to calculate average charter school scores on the NAEP in any other city on this list.)

Just how flat ARE those 12th grade NAEP scores?

Perhaps you read or heard that the 12th grade NAEP reading and math scores, which just got reported, were “flat”.

Did you wonder what that meant?

The short answer is: those scores have essentially not changed since they began giving the tests! Not for the kids at the top of the testing heap, not for those at the bottom, not for blacks, not for whites, not for hispanics.

No change, nada, zip.

Not even after a full dozen years of Bush’s looney No Child Left Behind Act, nor its twisted Obama-style descendant, Race to the Trough (er, Top).

I took a look at the official reports and I’ve plotted them here, so you can see how little effect all those billions spent on testing, firing veteran teachers, writing and publishing new tests and standards, and opening thousands of charter schools have had.

Here are the graphs:

naep 12th grade reading by percentiles over time

This first graph shows that other than a slight widening of the gap between the kids at the top (at the 90th percentile) and those at the bottom (at the 10th percentile) back in the early 1990s, there has been essentially no change in the average scores over the past two full decades.

I think we can assume that the test makers, who are professional psychometricians and not political appointees, tried their very best to make the test of equal difficulty every year. So those flat lines mean that there has been no change, despite all the efforts of the education secretaries of Clinton, Bush 2, and Obama. And despite the wholesale replacement of an enormous fraction of the nation’s teachers, and the handing over of public education resources to charter school operators.

naep 12th grade reading by group over time

 

This next graph shows much the same thing, but the data is broken down into ethnic/racial groups. Again, these lines are about as flat (horizontal) as you will ever see in the social sciences.

However, I think it’s instructive to note that the gap between, say, Hispanic and Black students on the one hand, and White and Asian students on the other, is much smaller than the gap between the 10th and 90th percentiles we saw in the very first graph: about 30 points as opposed to almost 100 points.
naep 12th grade math by percentiles over time

 

The third graph shows the NAEP math scores for 12th graders since 2005, the first year that the test was given. The psychometricians at NAEP claim there has been a “statistically significant” change since 2005 in some of those scores, but I don’t really see it. Being “statistically significant” and being REALLY significant are two different things.

*Note: the 12th grade Math NAEP was given for the first time in 2005, unlike the 12th grade reading test.
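
To make the “statistically significant versus really significant” point concrete, here is a hedged sketch; the sample size, standard deviation, and score means in it are round, invented numbers, not NAEP’s actual figures.

```python
# Sketch: with big samples, a tiny change can be "statistically significant"
# while being negligible as an effect size. All numbers here are hypothetical.
from math import sqrt
from scipy import stats

mean_2009, mean_2013 = 153.0, 152.0   # pretend average scale scores
sd = 35.0                             # pretend student-level standard deviation
n = 25000                             # pretend sample size in each year

se_diff = sqrt(sd**2 / n + sd**2 / n)      # standard error of the difference in means
z = (mean_2013 - mean_2009) / se_diff
p = 2 * stats.norm.sf(abs(z))              # two-sided p-value
cohens_d = (mean_2013 - mean_2009) / sd    # effect size in standard-deviation units

print(f"z = {z:.2f}, p = {p:.4f}")         # easily "significant" at the usual 5% level
print(f"Cohen's d = {cohens_d:.3f}")       # about 0.03 SD in magnitude: practically nothing
```

With samples that large, nearly any nonzero difference clears the significance bar; the effect size is what tells you whether the difference actually matters.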

naep 12th grade math by group over time

 

And here we have the same data broken down by ethnic/racial groups. Since 2009 there has been essentially no change, and there was precious little before that, except for Asian students.

Diane Ravitch correctly dismissed all of this as a sign that everything that Rod Paige, Margaret Spellings and Arne Duncan have done is a complete and utter failure. Her conclusion, which I agree with, is that NCLB and RTTT need to be thrown out.

 

The Real Lesson of Singapore Math!

By now you’ve probably heard that Singapore and Shanghai are the two places on earth with the smartest kids in the entire world. We can see their PISA scores (go to page 5) are right at the top.

Case closed, right? Whatever they are doing in education, we in the US need to emulate that in order to catch up! Common Core! StudentsFirst! Teach for America! Race to the Top! PARCC! Bust those teacher unions! No more recess! All test prep all the time! Charter Schools! Turn the schools over to the billionaires (Gates, Bloomberg, Koch family, Walton family, and their hirelings and shills)!

But wait a second.

Have you noticed that an ENORMOUS fraction of the low-skilled, low-paid people living in Singapore are temporary foreign workers from various parts of Asia and Africa who are not allowed to bring their kids with them? Those kids are raised back in the workers’ homelands by various relatives, far away, and only get to see their parents at long intervals (somebody has to fly somewhere); back home, jobs are even scarcer and worse-paid, so the parents go elsewhere to try to support their families.

Now, everywhere in the world, family income is very, very closely linked to children’s test scores in school. It’s one of the tightest correlations there are in the social sciences, as you can see in the simple scatter-plots I have repeatedly shown in this blog over the past 4 or 5 years. (Try using terms like “poverty” “income” and “scores” together in the search box on this page and be prepared to look through a lot of posts with such graphs, from all over!)

If one-quarter to one-third of the population of a country were legally not permitted to have their children in the schools, and it was the lowest-paid quarter to third of the population, then the scores of the remaining kids would, quite naturally, be pretty darned good, since the bottom quarter to third of the distribution just got cut off.

If we systematically excluded the poorest quarter or third of our American student population from taking PISA, we know that our scores would be pretty darned high as well.*
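
The size of that effect is easy to estimate. If scores are roughly normally distributed and the bottom 30 percent of the distribution simply never sits for the test, the mean of the students who remain rises by about half a standard deviation, which on a PISA-like scale (standard deviation of roughly 100 points) is on the order of 50 points. A minimal sketch of that back-of-the-envelope calculation:

```python
# Sketch: how much does the average rise if the bottom 30% of a roughly normal
# score distribution simply never takes the test?
from scipy.stats import norm

cut_fraction = 0.30
z_cut = norm.ppf(cut_fraction)                       # z-score at the 30th percentile
mean_of_rest = norm.pdf(z_cut) / (1 - cut_fraction)  # mean of a standard normal truncated below z_cut

print(f"Mean of the remaining test-takers: +{mean_of_rest:.2f} standard deviations")
print(f"On a PISA-like scale (SD of roughly 100): about {mean_of_rest * 100:.0f} points")
```

The exact number depends on how normal the score distribution really is, but the direction and rough size of the effect are not in doubt.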

Hmm, maybe the leaning tower of PISA hype is falling.

 

=====================

*Let’s remember that this WAS official policy in many states of the USA up until 1865: a large fraction of the population (guess which one!) was forbidden to send their kids to school at all, and it was explicitly forbidden even to teach them to read privately. When Jim Crow was established, from the 1870s to the early 1960s, school facilities for Blacks and Hispanics were, BY DESIGN of the racist authorities, so inferior to those for whites that they were a national disgrace. Which is why the calls for going back to the good old days should be so infuriating. There WERE NO GOOD OLD DAYS.

Some Released PISA Questions

Yong Zhao and some other commentators have been criticizing PISA for a number of reasons, one being that its sample populations are at times ‘gamed’ by two cities (Shanghai and Singapore – and that’s all they are, two cities that import their labor force from elsewhere, and neither place educates or tests the children of that labor force), while the outstanding performance of certain individual US states on the exact same test gets ignored. In her recent book “The Smartest Kids in the World,” Amanda Ripley follows a handful of exchange students to and from the US; she thinks that the PISA is a pretty good test and that it predicts real things about how societies are doing, and she appears to be a great fan of Poland these days.

Looking at some of the questions, I am beginning to have a lot less faith in PISA as a test itself and in those folks who claim that the sky is falling on American education based on our scores.

Some of the questions seem OK, some not. I have no idea whether these released items are of equal difficulty if written in French, Polish, Chinese, English, Arabic, Urdu or Swahili, but let’s pretend they are equivalent.

More importantly, I recently read an argument that PISA is *not* in fact a test of creativity and original applications of things learned in school; instead, it tests things learned at school, or else IQ-type logic puzzles. Even Rick Hess, a big friend of Michelle Rhee, apparently agrees, to my surprise.

Apparently there ARE tests of creativity that are, supposedly, quite reliable. I haven’t read scholarly critiques of any such creativity test, but I’ve heard of the concept, so I will reserve judgment on their actual track record. I did recall, though, that one PISA question I saw really was basically a little math/logic puzzle of a sort that I had seen in various puzzle books. Let’s see if I can find it.

In any case, now that I’ve seen the sample questions, I have even less sympathy for those who claim that the sky is falling on American education based on our scores.

Just now I went to look for some sample PISA items that have been declassified — i.e., it is legal to discuss them and show them to people, and nobody will lose their job for leaking their contents. That is exactly what teachers and other school staff are threatened with for regular test items, no matter how stupid a question might be, or how many students complain that the problem makes no sense at all (and you can see they aren’t kidding: yes, the problem really makes no sense at all).

Let me show you one PISA test item that I think has a fatal flaw – it doesn’t make sense, because ALL of the answers are possible. Some have a higher probability of being correct, but that’s all.

Here is the question:

SEAL’S SLEEP

A seal has to breathe even if it is asleep in the water. Martin observed a seal for one hour. At the start of his observation, the seal was at the surface and took a breath. It then dove to the bottom of the sea and started to sleep. From the bottom it slowly floated to the surface in 8 minutes and took a breath again. In three minutes it was back at the bottom of the sea again. Martin noticed that this whole process was a very regular one.

After one hour the seal was

  1. At the Bottom
  2. On its way up
  3. Breathing
  4. On its way down

In my opinion, the phrase “Martin noticed that this whole process was a very regular one” does NOT mean the same as “Martin took very careful notes and timed a seal that he had learned to recognize for precisely one hour. What’s more, the water was so transparent that Martin could see everything the seal was doing. At exactly 9:00 AM, the seal was at the surface and took a breath that lasted ____ seconds and then dove … and so on, and then it floated to the top where it surfaced at exactly 9:08 AM, and so on”

Notice the details I added. If you are out in cold coastal waters, where I myself have seen some seals during my lifetime, you often can’t see down to the bottom if you are on top; even if you are underwater in scuba gear, you generally can’t see a long way; and if it took this seal THREE WHOLE MINUTES to swim to the bottom, going, I suppose, straight down at a speed that no human swimmer could possibly achieve without mechanical help of some kind, then it’s gotta be pretty deep water, right? You are going to have an impossible time seeing that seal.

And then how does mythical Martin actually know that it’s the exact same seal?

Come on, now. This is a bullshit question, made up by someone who hasn’t actually watched seals at all. I’ve only watched a few dozen myself, but it’s BS.

And plus: animals do NOT act like clocks. Their behavior is not metronomic: it is influenced by what goes on around them. Even though the problem says “Martin noticed that this whole process was a very regular one”, and even if we allow that that is true, nowhere in the problem does the wording imply the kind of clock-like precise repetition that is required to be able to answer the question.
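
For the record, here is what the clock-like reading that the test writers presumably intended would give: a tiny simulation that assumes a perfectly regular cycle of exactly 3 minutes down and 8 minutes up, with no time spent breathing at the surface or lingering at the bottom (none of which the problem actually guarantees).

```python
# Sketch of the "intended" idealized reading: a rigid 11-minute cycle
# (3 minutes descending + 8 minutes ascending), breath at the surface at minute 0.
# None of these exact durations are actually guaranteed by the problem text.
def seal_position(t_minutes):
    t = t_minutes % 11          # the position repeats every 3 + 8 = 11 minutes
    if t == 0:
        return "breathing at the surface"
    elif t < 3:
        return "on its way down"
    elif t == 3:
        return "at the bottom"
    else:
        return "on its way up"

print(seal_position(60))   # 60 = 5 * 11 + 5, so the idealized answer is "on its way up"
```

Change any of those hidden assumptions, say by letting the seal actually sleep at the bottom for a few minutes each cycle, and the “correct” choice changes. That is exactly the problem.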

Plus: it doesn’t really say how long the seal is breathing, nor does it say how long the seal is at the bottom. All the numbers are very vague. It is impossible to answer the question with the information that is given — we are being asked to guess what the problem-writer really meant.

In my opinion, assuming that the pattern repeats as precisely 3 minutes and zero seconds plus 8 minutes and zero seconds, for exactly 60 minutes, is absurd and unbelievable. Animals are NEVER that regular, as I complained earlier. The cycle will shift somewhat, and those odd seconds do add up. And, as I said, the writer never told us the elapsed time for the sleeping or for the breathing.

So the question is utterly bogus.

We could talk about the PROBABILITY that the seal was in one of those four categories, but only if we knew a whole lot more information. Any child who has ever observed animals knows that they will not keep up the exact same pattern for a full hour, measurable to the exact second, no matter what. Not even if they are imprisoned in a cage or a zoo and go all insane and repetitive will they repeat to the exact second.

Bogus.


How Money Talks in Westchester County, New York

If you think it’s only in your school district that wealthy kids do better in school, think again. It’s all over the nation — and it starts when children are quite young and poor ones are often not spoken to or read to nearly as much by their parents, so that kids from poor families actually start preschool with a vocabulary disadvantage.

A recent article by Dave Greene, a teacher, author and activist in Westchester County, NY, puts that into focus by examining a local magazine centerfold that gives average family household income and a bunch of other data about schools so that home-buyers can figure out how “good” the schools are.

The old real-estate saying is that the three most important things about a house are its location, its location, and its location. That’s not quite true: it really should be, the average income of the other folks in the neighborhood (or AIOFN), AIOFN, and AIOFN.

It’s also true with the schools, as the data make clear — and it’s even clearer still if you put the data into a graph, which the original author did not do.

So I did.

Here are two such graphs:

sat and family income westchester co ny

I hadn’t realized that there were poor as well as rich areas in Westchester County, but apparently there are. The line of best fit that Excel calculated shows a very, very strong correlation: r-squared is 0.8819, which means that R itself is about 0.94, about the strongest correlation you’ll ever see in the social sciences. The two variables here are average household income and average SAT score (the latter on the 600-to-2400 scale).
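
If you would rather redo the fit yourself than trust Excel, here is a minimal sketch; the income and SAT pairs in it are placeholders, not the actual Westchester figures (the real ones are in the spreadsheet linked at the end of this post).

```python
# Sketch: least-squares line of best fit and r-squared for income vs. SAT.
# The pairs below are PLACEHOLDERS; the real figures are in the linked spreadsheet.
import numpy as np

income = np.array([45000, 70000, 95000, 130000, 180000, 250000])   # placeholder average household incomes ($)
sat    = np.array([1250, 1380, 1520, 1640, 1790, 1950])            # placeholder average SATs (600-2400 scale)

slope, intercept = np.polyfit(income, sat, 1)   # line of best fit
r = np.corrcoef(income, sat)[0, 1]

print(f"SAT is roughly {slope:.4f} * income + {intercept:.0f}")
print(f"r = {r:.3f}, r-squared = {r ** 2:.3f}")
```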

The next graph shows average family income versus a composite score of college readiness as measured by the New York State Regents.

family income and college readiness westchester co ny

Once again, an extremely tight correlation between average family income and college readiness score.

Read the original article for the original data and its source. Here is my spreadsheet:

westchester raw data

Jack’s Famous 427-316 Common Core Math Problem

See the postscript below for some corrections.

The math problem shown here has been making the rounds; it’s supposedly from the Common Core. If you haven’t seen it, it shows ‘Jack’ using a number line to compute 427 minus 316.

A lot of writers have been dumping on it.

I think they’re missing something — there are at least TWO errors in the work of this imaginary Jack.

The idea of trying to figure out where someone else got something wrong isn’t the worst idea in the world. However, what Jack was allegedly doing would need to be done in one’s head, because this method is so unwieldy if written out — as many people have pointed out. Also, if that was a carefully printed-out number line, then I hope the problem is entirely imaginary, because unless we are teaching about logarithmic plots, mathematicians take care to make sure that scales are linear (meaning that the distance from 100 to 200 equals the distance from 0 to 100, each of which is exactly ten times the distance from 80 to 90 or from 57 to 67).

As a mental exercise, number lines like this are not an entirely useless method.

Nobody seems to have pointed out that Jack made two math errors, not just one.

Since this problem asks for 427-316, if you are doing it in your head, you could either count backwards from 4 by 3 units, or ask yourself how far it is from 3 to 4 — obviously 1. Writing the number line out is a lot of work, but saying silently to yourself, “427, 327, 227, 127″ isn’t much work. So far so good.

But it’s not only in the mathematics that the imaginary Jack made an error:

I’m not sure, but it seems like the problem writer wanted Jack to confuse 16 and 60. This is not hard to do when HEARING the number, but more difficult if you are SEEING it written out. So this makes the problem a bit more, well, problematic, because nowhere in the problem is there any hint that Jack tends to hear poorly.

So this imaginary Jack mistakenly counts backwards by tens, going 127, 107, 97, 87, 77, 67, 57 — which appears to be his final answer. Maybe. I can’t quite tell from this sheet.

So that means that “Jack” made another error in leaving out the decade 117.

Since it looks like this problem was written by some low-paid contract worker (think of “call centers” in Malaysia or India) with little scrutiny afterwards, we don’t know whether the intention of the problem writer was for the student to realize that Jack is both hard of hearing and mis-counted by skipping the 117. If so, you are really asking a lot of a kid looking at the problem — and notice that, if I’m right, a whole lot of adults missed that point as well.

Did they really intend for the problem to be that difficult?

Sounds like an error on the part of the problem-writer, but I could be wrong.

ADDED LATER: If the writer were aware that there were two mistakes in the problem, shouldn’t they have written “Find his error(s)” rather than “find his error”?

Post-postscript:

It turns out that I misinterpreted the problem by assuming that some of the writing was done by ‘Jack’ when it was really done by the parent-engineer.

Here is more or less how the problem looked originally, minus all the blank space:

dear jack 427-316 problem

So it was the PARENT who missed the decade 117, not ‘Jack’.  We see that ‘Jack’ counted back from 427 by hundreds three times to arrive at 127. Then ‘Jack’ counted backwards six times by ones from 127 to 121, which I’ve indicated by writing in the unwritten numbers below in red:

dear jack 427-316 problem sort of fixed

 

The mistake that the imaginary ‘Jack’ made was neglecting to count backwards by ten one time; thus his answer was ten too large. The parent was the one who counted backwards from 127 by tens, six times, which I can sort of excuse, because the distance between the single units (127 to 126, then 126 to 125, and so on) is drawn nothing like 100 times as small as the distance from 427 to 327.
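
To spell the arithmetic out: 427 minus 316, done on a number line, should be three hops of 100, one hop of 10, and six hops of 1, landing on 111; ‘Jack’ skipped the single hop of 10 and landed on 121. Here is a minimal sketch of both trails (my own reconstruction of the worksheet, not anything official):

```python
# Sketch: number-line subtraction of 427 - 316, done correctly and as 'Jack' did it.
# The hop lists below are my reconstruction of the worksheet, not an official key.
def count_back(start, hops):
    """Apply a list of backward hops to a starting number, recording each landing point."""
    position = start
    trail = [position]
    for hop in hops:
        position -= hop
        trail.append(position)
    return trail

correct = count_back(427, [100, 100, 100, 10, 1, 1, 1, 1, 1, 1])   # 316 = 3 hundreds + 1 ten + 6 ones
jacks   = count_back(427, [100, 100, 100, 1, 1, 1, 1, 1, 1])       # 'Jack' forgot the single -10 hop

print("Correct trail:", correct)   # ends at 111
print("Jack's trail: ", jacks)     # ends at 121, ten too large
```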

I think the parent over-reacted, however, and made an embarrassing mistake.

My mistake was thinking that the numbers the parent wrote on the number line were ones that appeared in the original problem.

So, lots of errors all around.

I agree with my still-working math department colleagues that the Common Core standards in math for the middle school are not too bad, as written – they even include positive ideas about approaching math from many different angles (ahem), which I’ve espoused for a long time. It’s all the stuff that the CCSS are bound up with that is the problem: the constant top-down directives, the idea that every single teacher and every single student in the nation is supposed to be on the exact same page every day (which contradicts much of the verbiage of the standards), filling out umpteen useless data sheets and other paperwork every day, and the fact that a teacher’s job is tied to a random-number generator known as “Value Added”.
—————–
Another critique of this math homework and the parent’s reply can be found here.