2013 DC-CAS Scores Released (Sort of)

The DC Office of the State Superintendent of Education (OSSE) has released a spreadsheet with overall proficiency rates in reading and math, and a combination of the two, for the spring 2013 DC-CAS multiple-choice test. You can see the whole list here and look for your favorite school, charter or public.

Overall, percentages of students deemed ‘proficient’ or ‘advanced’ seem to be up a bit from last year. At some schools, the scores went up enormously — which we know is often a suspicious sign.

But there are several major problems with this data:

(1) We have no idea whether this year’s test was easier, harder, or the same as previous years’ tests, since the manufacturer is allowed to keep every single item secret. The vast majority of teachers are forbidden even to look at the tests they are administering, to see if the items make sense, match the curriculum, are ridiculously hard, or are ridiculously easy.

(2) We have no idea what the cutoff scores are for any of the categories: “below basic”, “basic”, “proficient”, or “advanced”. Remember the scams that the education authorities pulled in New York State a few years ago on their required state-wide student exams? If not, let me remind you: every year they kept reducing the cutoff scores for passing, and as a result, the percentages of students passing got better and better. However, those rising scores didn’t match the results from the NAEP. It was shown that on certain tests, a student only had to get about 44% of the answers right to earn a “proficient” score — on a test where each question had four possible answers (A, B, C, D). (NYT article on this) A quick arithmetic sketch after this list shows just how little actual knowledge such a cutoff demands.

(3) In keeping with its usual tendency to make information hard to find, OSSE included no demographic information on the student bodies. We don’t know how many students took the tests, what percentages belong to which ethnic group or race, how many are on free or reduced-price lunch or in special education, or how many are immigrants with little or no English. Perhaps this information will come out on its own; perhaps not. It is certainly annoying that nearly every year OSSE uses a different format for reporting the data.
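About that 44% cutoff: here is a quick back-of-the-envelope sketch. It is my own illustration, not anything from the NYT article; the only numbers borrowed from point (2) above are the roughly 44% cutoff and the four answer choices, and it assumes a student answers every item, knows some fraction of the material cold, and guesses blindly on the rest.

```python
# Back-of-the-envelope model: a student who truly knows a fraction k of the
# items on a four-choice test, and guesses at random on the rest, is expected
# to answer  k + (1 - k) * 0.25  of the items correctly.

GUESS_RATE = 0.25   # chance of guessing a four-choice item correctly
CUTOFF = 0.44       # approximate "proficient" cutoff cited above

def expected_score(known_fraction: float) -> float:
    """Expected proportion correct when every unknown item is guessed."""
    return known_fraction + (1 - known_fraction) * GUESS_RATE

# Solve expected_score(k) = CUTOFF for k, the knowledge needed to expect a pass:
k_needed = (CUTOFF - GUESS_RATE) / (1 - GUESS_RATE)

print(f"Knowledge needed to expect a {CUTOFF:.0%} score: {k_needed:.1%}")
print(f"Sanity check: expected score at that level = {expected_score(k_needed):.1%}")
```

Under those simplified assumptions, a student who genuinely knows only about a quarter of the material is already expected to clear a 44% bar by guessing the rest. That is what made cutoffs that low so misleading.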

I think it’s time for a Freedom of Information Act request to get this information.

Michael Martin on Standardized Testing

A recent post by Michael Martin (and I hope he forgives me for reposting it here):

I think the fallacy here is to conflate pretest/posttest comparisons in specific subject areas with broad high-stakes tests that are removed in time from the instruction. There is always a problem with using multiple-choice tests to measure anything other than what is termed “inert knowledge,” but as with any measuring instrument, how it is used makes a big difference.

As UCLA Professor James Popham, an expert in testing and a former president of the American Educational Research Association, wrote in a March 1999 essay titled “Why Standardized Tests Don’t Measure Educational Quality,” published in Educational Leadership:

“Employing standardized achievement tests to ascertain educational quality is like measuring temperature with a tablespoon. Tablespoons have a different measurement mission than indicating how hot or cold something is.”

There are numerous valid uses of professional testing. In my experience, there is very little appreciation for how difficult it is to actually develop a valid question that measures what you want to measure. I took a survey-methodology course in college in which, each week, we designed and administered a small survey, with the restriction that we had to ask each question in two different ways to see whether the responses matched up. What I learned was that we were never able to design a survey in which the two versions of a question reliably gave similar results.

People have an unwarranted trust in tests much like they have in lie detector tests. Experts in both fields explain that they don’t do what people think they do.

Harvard University Professor Daniel Koretz, a national expert on achievement testing, writes in his 2008 book “Measuring Up” that there is “a single principle” that should guide the use of tests, “don’t treat ‘her score on the test’ as a synonym for ‘what she has learned.’”

In a May 8, 2005, news story, a Cox News Service reporter interviewed experts on standardized testing and reported that at “the Lindquist Center – located on the University of Iowa campus and named for the grandfather of standardized testing – you won’t find a lot of fans of No Child Left Behind.” The story, titled “U.S. testing craze worries experts behind the scores,” explained “the consensus is that standardized tests weren’t created for such a sweeping, high-stakes purpose” and continued:

“That’s the position of our entire field,” said Steve Dunbar, head of Iowa Testing Programs, developer of the Iowa Test of Basic Skills. … Experts in the Lindquist Center … expect the No Child Left Behind to run its course, confident the politically driven pendulum will swing back to a more reasonable view of the value of testing. Dunbar predicts public support will wane because of results that don’t seem to make sense. “The tests,” Dunbar said, “will lose credibility.”

One of the foremost experts on academic testing in the world, Professor Robert Linn, wrote in a 1998 technical paper for the Center for the Study of Evaluation:

“As someone who has spent his entire career doing research, writing, and thinking about educational testing and assessment issues, I would like to conclude by summarizing a compelling case showing that the major uses of tests for student and school accountability during the past 50 years have improved education and student learning in dramatic ways. Unfortunately, that is not my conclusion. Instead I am led to conclude that in most cases the instruments and technology have not been up to the demands that have been placed on them by high-stakes accountability.”

Economists are simply ignorant of the reality of testing. They think that all numbers are accurate measures. After I graduated from college and worked in the field for several years, I enrolled in, and then dropped out of, a master’s program in economics, because I could not believe how naïve the instructors were. I took several classes, and in each one I would draw a vertical line down my notebook page, write what they told me on the left of the line, and write what I knew to be actually true on the right. The back-breaker was a course in international trade in which most of the semester was spent teaching the Heckscher-Ohlin theory; in the last two weeks they revealed the Leontief Paradox, in which economist Wassily Leontief had tested the Heckscher-Ohlin theory and found it was wrong. Leontief later received the Nobel Prize. They spent an entire semester teaching student economists a theory that had already been proven wrong.

I should also point out that at Arizona State University, where I took these courses, and presumably at other universities, they offered both a B.A. and a B.S. in economics, which were unfortunately named in reverse. I had to work with people who had graduated with a B.A. in economics, and what they had learned was mostly BS that required no mathematical training. The B.S. in economics had an entirely different curriculum involving mathematics, and its graduates were even disparagingly called “quants” by the B.A. people for their quantitative approaches. It was a bizarre Alice-in-Wonderland world. The only saving grace is that the one professor I actually respected was later made chairman of the department.

So it doesn’t surprise me at all that the most foolish reports about using data in education come from economists. A lot of them are way out of their depth. On the other hand, one of the most influential studies I’ve seen regarding education was done back around 1979 by the Philadelphia Federal Reserve, in conjunction with the school board, using statistics to identify which educational variables were associated with gains in fourth-grade reading scores. The Federal Reserve economists regularly used econometric models to work with economic data, so they were experts in quantitative methods. What impressed me was that they found little correlation between test-score gains and whether the teacher had a background in reading instruction, but a strong correlation with whether the principal had a background in reading instruction. Something to think about when you consider value-added measures over forty years later.

Michael T. Martin
Research Analyst
Arizona School Boards Association
2100 N. Central Ave, Suite 200
Phoenix, Az 85004

I have a puzzle for you: Can You Spot the Baltimore-Rhee Miracle of 1993-1995?

Is Michelle Rhee a liar, or is she honest? You decide.

You remember that Michelle Rhee said that when she taught for three years in Baltimore, after a bit of a rough spot during her first year, she brought her students from the very bottom to the very top, right? If that’s true, then it should be really easy to spot those scores, especially since there were exactly TWO third-grade classes at her Baltimore school during her final year, and she says that she team-taught with the other second-grade (later third-grade) teacher during those last two years. (Or maybe there were two teachers in her class – I can’t tell from her account.) But no matter. A jump that large should be really, really obvious.

In last month’s Washingtonian Magazine, she told an interviewer:

In my second year of teaching, we took them from the bottom to the top on academics, and what I learned from that experience was these kids were getting screwed because people wanted to blame their low achievement levels on the single-parent households and on the poverty in the community. In that two-year period, none of those things changed. Their parents didn’t change.

“What changed?
“What we were doing with them in school.”

And as I pointed out in my previous post, her official resume says “Over a two-year period, moved students scoring on average at the 13th percentile on national standardized tests to 90% of students scoring at the 90th percentile or higher.” (emphasis added by me)

Here comes the puzzle.

I looked up the CTBS reading scores for nine different schools in Baltimore for the period 1992 through 1995. I converted all of the CTBS NCE (Normal Curve Equivalent) reading scores for the second grade in 1992, 1993, and 1994, and for the third grade in 1995, into percentile ranks, because that is the measure Rhee refers to. The CTBS is, as far as I can tell, the only nationally standardized test that was given in Baltimore. The MSPAP, which was also given during at least some of those years, is a Maryland state-wide test, and so far I haven’t found MSPAP scores for 1995.
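For readers who haven’t worked with NCEs: a Normal Curve Equivalent score sits on a normal scale with a mean of 50 and a standard deviation of about 21.06 (chosen so that NCEs of 1, 50, and 99 line up with the 1st, 50th, and 99th percentiles), so converting to percentile ranks just means turning the NCE back into a z-score and taking the normal CDF. Here is a minimal sketch of that standard conversion; the sample values at the bottom are made-up illustrations, not the actual Baltimore scores.

```python
from statistics import NormalDist

# NCE scores sit on a normal scale with mean 50 and SD of about 21.06,
# so the national percentile rank is the normal CDF of the implied z-score.
NCE_MEAN = 50.0
NCE_SD = 21.06

def nce_to_percentile(nce: float) -> float:
    """Convert a Normal Curve Equivalent score to a national percentile rank."""
    z = (nce - NCE_MEAN) / NCE_SD
    return 100 * NormalDist().cdf(z)

def percentile_to_nce(pct: float) -> float:
    """The reverse mapping, handy for checking reported percentiles against NCEs."""
    return NCE_MEAN + NCE_SD * NormalDist().inv_cdf(pct / 100)

# Hypothetical example values only -- not Harlem Park data:
for nce in (30.0, 42.5, 50.0, 71.0):
    print(f"NCE {nce:5.1f}  ->  percentile {nce_to_percentile(nce):5.1f}")
```

For scale, the 13th and 90th percentiles cited in Rhee’s resume correspond to NCEs of roughly 26 and 77 under this mapping, which gives a sense of how large a jump the graphs below would have to show.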

Here are the graphs showing the CTBS reading scores at nine different schools (or clusters of schools) during the years Michelle Rhee claims to have worked her miracle. I included all seven of the Tesseract/EAI schools, including Harlem Park, where Rhee taught, and I also included some of the regular public schools that were officially designated as comparison schools in the study that was supposed to figure out whether EAI was doing a good job or not.

I will NOT tell you which graph is Harlem Park. It’s your job to figure out which one it was.

Hint: Rhee was still in college for SY 1992. She worked at Harlem Park for SY 1993, 1994, and 1995. She taught second grade for the first two years, and then apparently followed the students into the third grade for SY 1995.

Let’s look at the graphs:

OK, boys and girls. Which school was it? A, B, C, D, E, F, G, H, or I?

(No more hints today. I’ll give the identities of these schools tomorrow.)

==================================

Error notice: I noticed this morning that I had accidentally inverted the percentile ranks and NCE scores in several places, which made some of these graphs wrong and the question harder to answer. That is now fixed, but I wish I were a better proofreader. I apologize to all.
