Jersey Jazzman On NY State Charter Schools and Indentured Servitude by their Teachers…

Very interesting article on the methods by which certain New York State charter school chains plan to make sure that their teachers don't depart for less onerous working conditions and better pay in the regular public school sector. JJ says that the plan won't work in the long run. Well worth reading if you follow his reasoning.

Here is the link:

http://jerseyjazzman.blogspot.com/2017/07/shooting-themselves-in-foot-teacher.html

Why Does Eva Moskowitz Get to Avoid Following the Rules?

You may know that Eva Moskowitz runs the Success Academies charter school chain in New York City, whose students score extremely well on the mandatory New York state-wide ELA and math tests – better than any schools in the state.

This can only partly be explained by the very high attrition rates at SA schools – many, many students drop out or are pushed out, and are not replaced. In one example, a cohort of 73 first graders had dwindled to 26 students by ninth grade.

However, not a single SA graduate has EVER scored well enough for entry into any of New York City's specialized public magnet high schools.

In addition, SA has refused to release the Regents exam scores for any of its students, even though every other school must do so – and this despite the fact that SA students' OWN TEACHERS get to grade their Regents exams, something no regular public school is ever allowed to do.

Something is extremely fishy, and teacher-blogger Gary Rubinstein is trying to uncover it without much help from anybody.

Read his account here.

 

See Jersey Jazzman use the Gaussian Distribution to Show that Arne Duncan and Mike Petrilli are full of it

Excellent lesson from Jersey Jazzman showing that the old tests produce pretty much the same distribution of scores as the new tests.

[Graph: 2008 and 2014 New York state 8th-grade reading score distributions, superimposed]

He has superimposed the 2008 scores (in green) on top of the 2014 scores for New York state 8th-grade reading, and the two distributions are nearly identical. Furthermore, a scatter plot of the old scores against the new scores, by school, shows a nearly perfect correlation between the two.

[Graph: scatter plot of old vs. new test scores, by school]

Read his article, which is clear and concise. I don’t have time to go into this in depth.

http://jerseyjazzman.blogspot.com/2015/09/common-core-testing-whos-real-liar.html?spref=fb

A Scathing Review of Joel Klein’s Book on New York Public Schools

Gary Rubinstein has a guest post that gives a devastating review of the many lies and inconsistencies in the recent book by Joel Klein, the former chancellor of New York City Public Schools. It's a bit long, but worth reading. Here is the URL:

http://garyrubinstein.wordpress.com/2014/11/18/guest-post-a-review-of-joel-kleins-new-book/

Now I Understand Why Bill Gates Didn’t Want The Value-Added Data Made Public

It all makes sense now.

At first I was a bit surprised that Bill Gates and Michelle Rhee were opposed to publicizing the value-added data from New York City and other cities.

Could they be experiencing twinges of a bad conscience?

No way.

That’s not it. Nor do these educational Deformers think that value-added mysticism is nonsense. They think it’s wonderful and that teachers’ ability to retain their jobs and earn bonuses or warnings should largely depend on it.

The problem, for them, is that they don’t want the public to see for themselves that it’s a complete and utter crock. Nor to see the little man behind the curtain.

I present evidence of the fallacy of depending on “value-added” measurements in yet another graph — this time using what NYCPS says are the actual value-added scores of the many thousands of elementary school teachers for whom it has such scores for the school years ending in 2006 and 2007.

I was afraid that by using the percentile ranks as I did in my previous post, I might have exaggerated or distorted how bad “value added” really was.

No worries, mate – it’s even more embarrassing for the educational deformers this way.

In any introductory statistics course, you learn that a graph like the one below is a textbook case of “no correlation”. I had Excel draw a line of best fit anyway, and calculate an r-squared correlation coefficient. Its value? 0.057 — once again, just about as close to zero correlation as you are ever going to find in the real world.

In plain English, what that means is that there is essentially no such thing as a teacher who is consistently wonderful (or awful) on this extremely complicated measurement scheme. How teacher X does one year in “value-added” in no way allows anybody to predict how teacher X will do the next year. They could do much worse, they could do much better, they could do about the same.

Even I find this to be an amazing revelation. What about you?

And to think that I'm not making any of this up (unlike Michelle Rhee, who loves to invent statistics and “facts”).

PS:

I neglected to give the links to where you can find the raw data. (Warning: some of these spreadsheets are enormous.) Here they are:

http://www.ny1.com/content/top_stories/156599/now-available–2007-2010-nyc-teacher-performance-data#doereports
or, if you prefer a shorter URL, try this one:

http://tinyurl.com/836a8cj

Important Ignored Research: Teacher Incentives are Counter-Productive

Latest News You Never Read or Saw in the Mainstream Media:

Harvard Researcher Conducts Randomized Experiment in New York Schools and Concludes That Paying Teachers to Improve Student Achievement Doesn’t Work

This is an extremely important study, and was therefore ignored. After all, the ruling class feels that it’s much more important for you to know the latest updates on Michelle Rhee’s posturings, the stupidity of TV show American Idol, felon Charles Colson’s delusions of grandeur, and Charlie Sheen’s antics.

This study was done by Harvard researcher and wunderkind Roland Fryer, a partner of Michelle Rhee in an abortive experiment in DCPS and elsewhere to raise student achievement by paying students for doing the right thing — an experiment which also had no positive results.

A couple of excerpts from the study:
“The results from our incentive experiments are informative. Providing incentives to teachersbased on school’s performance on metrics involving student achievement, improvement, and thelearning environment did not increase student achievement in any statistically meaningful way.If anything, student achievement declined. Intent-to-treat estimates yield treatment e!ects of -0.015 … standard deviations (hereafter [SD]) in mathematics and -0.011 [SD] … in reading forelementary schools, and -0.048 [SD] …  in math and -0.032 SD … in reading for middle schools, per year. Thus, if an elementary school student attended schools that implemented the teacher incentive program for three years, her test scores would decline by -0.045 [SD] in math and by -0.033 [SD] in reading – neither of which is statistically significant. For middle school students, however, the negative impacts are more sizeable: -0.144 [SD] in math and -0.096 [SD] in reading over a three-year period.

“The impact of teacher incentives on student attendance, behavioral incidences, and alternative achievement outcomes such as predictive state assessments, course grades, Regents exam scores, and high school graduation rates are all negligible. Furthermore, we find no evidence that teacher incentives affect teacher behavior, measured by retention in district or in school, number of personal absences, and teacher responses to the learning environment survey, which partly determined whether a school received the performance bonus.” (p. 5 of the study)

“[A]ll estimates of the effect of teacher incentives on student achievement are negative in both elementary and middle school – and statistically significantly so in middle school. The … effect of the teacher incentive scheme is -0.011 [SD] [standard deviations] … in reading and -0.015 [SD] … in math for elementary schools, and -0.032 [SD] … in reading and -0.048 [SD] … in math for middle schools. The effect sizes in middle school are non-trivial – a student who attends a participating middle school for three years of our experiment is expected to lose 0.096 [SD] in reading and 0.144 [SD] in math.” (p. 16)
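The three-year figures in the quoted passages are just the per-year effects accumulated linearly over three years of exposure; a quick arithmetic check:

```python
# The study's three-year effect sizes are the per-year effects times three.
per_year = {
    ("elementary", "math"): -0.015,
    ("elementary", "reading"): -0.011,
    ("middle", "math"): -0.048,
    ("middle", "reading"): -0.032,
}

for (level, subject), effect in per_year.items():
    three_year = 3 * effect
    print(f"{level:10s} {subject:8s}: {effect:+.3f} SD/yr -> {three_year:+.3f} SD over 3 years")
# Matches the quoted -0.045 / -0.033 (elementary) and -0.144 / -0.096 (middle).
```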

Fryer compares the failure of his experiment to supposed ‘successes’ in performance incentives in third-world countries. However, let’s look at two of those ‘success’ stories:

“Duflo and Hanna (2005) randomly sampled 60 schools in rural India, and provided them with financial incentives to reduce absenteeism. The incentive scheme was simple; teachers’ pay was linear in their attendance, at the rate of Rs 50 per day, after the first 10 days of each month. They found that teacher absence rate was significantly lower in treatment schools (22 percent) compared to control schools (42 percent), and that student achievement in treatment schools were 0.17 [SD] higher than in control schools.”

(Duflo, Esther and Rema Hanna. 2005. “Monitoring Works: Getting Teachers to Come to School.” NBER [National Bureau of Economic Research] Working Paper No. 11880.)

NOTE: I was shocked to find that 50 Indian rupees, both today and in 2005, are worth only a little more than one US dollar! (See this page on exchange rates, for example.) If all it takes to reduce teacher absenteeism from 42% (two teachers out of five absent each day) to 22% (one out of five) is a bit more than a single dollar per day, imagine what would happen if teachers in India were paid an extra TEN dollars per day to come to work!!
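To put the Duflo-Hanna pay rule in concrete terms, here is a sketch of the scheme as described in the quote: Rs 50 per day of attendance beyond the first 10 days of each month. The 44-rupees-per-dollar conversion is my own mid-2000s ballpark assumption, not a figure from the study.

```python
# Sketch of the Duflo-Hanna incentive: Rs 50 per day attended beyond
# the first 10 days each month. The ~44 INR/USD exchange rate is an
# assumption for illustration only.
RS_PER_DAY = 50
FREE_DAYS = 10
INR_PER_USD = 44.0

def monthly_bonus_rs(days_attended: int) -> int:
    """Incentive pay in rupees for one month's attendance."""
    return max(0, days_attended - FREE_DAYS) * RS_PER_DAY

for days in (10, 15, 22, 25):
    rs = monthly_bonus_rs(days)
    print(f"{days:2d} days attended -> Rs {rs:4d} (~${rs / INR_PER_USD:.2f})")
```

Even a teacher with near-perfect attendance earns a bonus of well under a dollar a day under this scheme, which is the point of the note above.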

“Glewwe et al. (2010) report results from a randomized evaluation that provided teachers for grades 4 through 8 in Kenya with group incentives based on test scores and find that while test scores increased in program schools in the short run, students did not retain the gains after the incentive program ended. They interpret these results as being consistent with teachers expending effort towards short-term increases in test scores but not towards long-term learning.”

(Glewwe, Paul, Nauman Ilias, and Michael Kremer. 2010. ”Teacher Incentives.” American Economic Journal, 2(3): 205-227.)

NOTE: Given the amount of corruption allegedly present in many African and other third-world countries, there is another possible (but more cynical) explanation: plain, outright manipulation and cheating by the teachers and local school administrators in order to earn a bit more money by producing transiently higher scores. Such dishonesty has been documented many times here in the US, so why not in Kenya?

Published on March 23, 2011 at 10:34 am

And an analysis of New York City Public Schools: “Doing Less With More”

I didn’t write the following, but I wanted to be sure people got a chance to read about this study. Thanks to Robert Bligh for bringing it to my attention.  Here goes:

===================================

http://nepc.colorado.edu

Doing Less With More?

Taking a Second Look at New York City Charter Schools

National Education Policy Center – Boulder, CO – Jan. 27, 2011

New study finds NYC charters benefiting from resources but not producing better student test scores than traditional public schools.

Advocates for charter schools have pointed to New York City as an exemplar of how charters can show better results than traditional public schools. Charter advocates have also stated that these schools are able to do more with less funding.

But neither of these claims is correct, according to a new study that closely examines funding and charter school students. “Adding Up the Spending: Fiscal Disparities and Philanthropy among New York City Charter Schools”, a study by Rutgers University professor Bruce Baker and doctoral student Richard Ferris, was published today by the National Education Policy Center (NEPC) at the University of Colorado at Boulder. The study points out that any meaningful understanding of public resources for New York City charters is highly dependent on three factors:

• Does the charter serve students with greater or lesser needs? The City’s charters disproportionately serve lower percentages of poor and English-learner students, who require more resources.

• Are the schools (charter and traditional public) that are compared serving the same grade levels? Charters overwhelmingly serve elementary aged students, and traditional public schools serving those same grades typically have fewer resources than schools serving upper grades.

• Does the Board of Education provide a facility? About half of the City’s charters are given a public facility. Once the first two factors are considered, the study finds that charter schools not housed in Board of Education facilities receive $517 less in public funding than do non-charters. However, charter schools housed in BOE facilities receive significantly more resources ($2,200 more per pupil, on average).

But that’s not the end of the story. The authors ask one additional question: Does the charter receive substantial resources from private donors? They examine audited annual financial reports and IRS tax filings, and they discover that the best-endowed charters in the City receive additional resources amounting annually to more than $10,000 per pupil in private funding.
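The adjustments in the brief can be combined into a back-of-the-envelope per-pupil comparison. The dollar figures below are the ones quoted in the press release; the district baseline is a placeholder of my own, and the function is a sketch for illustration, not the study's actual model.

```python
# Back-of-the-envelope per-pupil resources for an NYC charter, relative
# to a hypothetical district baseline, using the brief's quoted figures.
DISTRICT_BASELINE = 15_000  # placeholder per-pupil figure, for illustration

def charter_per_pupil(in_boe_facility: bool, private_per_pupil: float = 0.0) -> float:
    """Apply the brief's facility adjustment, then add any philanthropy."""
    if in_boe_facility:
        public = DISTRICT_BASELINE + 2_200  # co-located charters: ~$2,200 more
    else:
        public = DISTRICT_BASELINE - 517    # private-space charters: $517 less
    return public + private_per_pupil

print(charter_per_pupil(False))                            # charter in private space
print(charter_per_pupil(True))                             # co-located charter
print(charter_per_pupil(True, private_per_pupil=10_000))   # best-endowed charter
```

The gap between the first and third lines is the study's "doing less with more" point: the best-endowed co-located charters can outspend nearby district schools by well over $10,000 per pupil.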

According to lead researcher Bruce Baker, “Finding little truth to the test score claims or the spending claims does not, and should not, end discussions of what we can learn from these New York City charter schools, but it does point to the hypocrisy and emptiness of arguments by charter advocates that additional resources would do little to help traditional public schools. Such arguments are particularly troubling in NYC, where high-spending charters far outspend nearby traditional public schools. Equitable and adequate resources do matter, but there appear to be a considerable number of charter schools in NYC doing less with more.”

Find “Adding Up the Spending: Fiscal Disparities and Philanthropy among New York City Charter Schools” by Bruce Baker and Richard Ferris on the web at:

http://nepc.colorado.edu/publication/NYC-charter-disparities

The mission of the National Education Policy Center is to produce and disseminate high-quality, peer reviewed research to inform education policy discussions. We are guided by the belief that the democratic governance of public education is strengthened when policies are based on sound evidence.

For more information on NEPC, please visit http://nepc.colorado.edu.

This research brief was made possible in part by the support of the Great Lakes Center for Education Research and Practice.

CONTACT: Bruce Baker or William Mathis 732-932-7496 ext. 8232  or 802-383-0058

bruce.baker@gse.rutgers.edu

william.mathis@colorado.edu
