280 days is a mini-documentary of the heroic efforts of the members of the Columbus Education Association in their fight against Senate Bill 5. Using never-before-seen photographs and video footage, the film follows the legislation's path from its introduction in the Senate to the citizen veto that repealed SB 5 in the name of the middle class.
The real fight over SB5 is still ahead
Yesterday was the filing deadline for candidates wishing to run for the Ohio General Assembly. We looked earlier at the impact that Ohio House incumbents' votes for SB5 would have on their reelection chances.
Such a swing could halt the Governor's radical agenda and turn the remaining two years of his first term into a lame-duck effort.
Now some of this calculation is complicated by the recent redistricting, but as Gongwer notes, the 2012 elections are shaping up to be a continuation of the fight over SB5:
House Democrats, for example, noted that a number of educators have filed to run and Speaker Batchelder said the GOP newcomers include an ample amount of businesspeople.
[...]
Rep. Debbie Phillips (D-Athens), the House Democratic Caucus Campaign co-chair, said 2002 Teacher of the Year Maureen Reedy, who is seeking the open 24th House District seat in Franklin County, is among at least 10 teachers running for the House as Democrats.
"State budget cuts and the unfair attacks in SB5 have put educators and our children's education directly in the crosshairs of the Republican's anti-middle class agenda and teachers are standing up, fighting back and getting involved," Rep. Phillips said in a release. "We are very excited to have so many great teachers running for office. They are trusted and well known in their communities, which are two key components of electoral success."
While some candidates might have a difficult task ahead of them due to the gerrymandering of districts, the overwhelming rejection of SB5 is likely to create some very sharp contrasts for voters to decide upon.
Bloomberg's brain-dead brainwave
Billionaire Mayor of New York, and wannabe corporate education reformer, Mike Bloomberg has suggested a radically absurd idea:
“Double the class size with a better teacher is a good deal for students.”
Bloomberg's opinion is based on a misguided and factually wrong premise, one he continues to hold:
No doubts haunt the mayor. In 2008 he insisted that class-size research was “unambiguous.”
“I don’t even understand why the subject comes up anymore,” he said, adding that all that mattered was teacher quality.
Let's examine class sizes and see if they matter. Michael C. Morrison, Ph.D. has analyzed 9,000 school districts to determine the impact of class size on graduation rates. His findings are unambiguous.
Here's the graph of results
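A district-level analysis of this kind boils down to regressing graduation rate on average class size and looking at the sign and size of the slope. Here is a minimal sketch of that calculation; the `(class_size, grad_rate)` pairs below are fabricated illustrative numbers, not Morrison's 9,000-district data.

```python
# Simple ordinary-least-squares sketch: graduation rate vs. average class size.
# The data points are made up for illustration -- NOT Morrison's dataset.

def ols_slope_intercept(xs, ys):
    """Closed-form simple linear regression: y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    b = cov / var          # slope: change in graduation rate per extra student
    a = mean_y - b * mean_x  # intercept
    return a, b

class_size = [15, 20, 25, 30]   # hypothetical district averages
grad_rate = [91, 88, 85, 82]    # hypothetical graduation rates (%)

intercept, slope = ols_slope_intercept(class_size, grad_rate)
print(round(slope, 2))  # → -0.6 (larger classes, lower graduation rates)
```

On these toy numbers each additional student per class is associated with a 0.6-point drop in graduation rate; the studies cited estimate the real-world relationship from actual district data.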
This isn't the only study, of course; the subject has been well and extensively researched. The National Bureau of Economic Research (NBER) found:
Michael Morrison has detailed further studies on the subject, here.
As for Mayor Bloomberg, he doesn't practice what he preaches:
Imagine if they packed those billionaires' kids into classrooms of 63!
School rankings raise serious concerns
Last month the state released a preliminary look at its new school rankings list. After digesting this list and its construction, people are asking interesting questions and observing uncomfortable patterns.
- The PI calculation is based on passage rates of Ohio Achievement Assessments (grades 3–8) and the Ohio Graduation Test (grades 10 and 11). The proficiency “cut scores” are so low that students can be determined “proficient” even when they answer less than 50% of test questions correctly.
- The PI calculation gives schools and districts “partial” credit for students who fail to meet the proficient standard.
- The PI calculation does not include a growth component. Districts and schools can be highly ranked even if students are learning little from year to year. The PI is a clumsy instrument that does not allow the average person to distinguish the true performance of districts. For example, 50 districts have PI scores of 100.XXXX [with the X’s representing the digits after the decimal point]. Is there any real difference in performance between the district ranked 210 of 611 or 260 of 611 districts?
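The "partial credit" mechanic described above is easy to see in a toy calculation. The sketch below uses hypothetical achievement-level weights (placeholders, not Ohio's official values); the feature it illustrates is that below-proficient students still add to a district's score, which is how a district can post a respectable PI while a large share of its students miss the proficiency bar.

```python
# Illustrative Performance Index (PI) style calculation.
# The weights are HYPOTHETICAL placeholders, not Ohio's official values.
# Key feature: below-proficient levels ("basic", "limited") still earn
# partial credit, as the report describes.

PARTIAL_CREDIT_WEIGHTS = {
    "advanced":   1.2,  # hypothetical bonus weight above proficient
    "proficient": 1.0,
    "basic":      0.6,  # below proficient, still earns credit
    "limited":    0.3,  # far below proficient, still earns credit
    "untested":   0.0,
}

def performance_index(counts):
    """Weighted average of achievement-level counts, scaled by 100."""
    total = sum(counts.values())
    if total == 0:
        return 0.0
    score = sum(PARTIAL_CREDIT_WEIGHTS[level] * n for level, n in counts.items())
    return 100.0 * score / total

# A district where 40% of students score below proficient still posts
# a PI of 83 under these weights.
example = {"advanced": 10, "proficient": 50, "basic": 30, "limited": 10, "untested": 0}
print(round(performance_index(example), 1))  # → 83.0
```

Note also that nothing in a formula like this compares a student to their own prior year, which is the missing growth component the critique points to.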
Indeed, given the somewhat arbitrary nature of the weightings in the PI calculation, how much of the variation in these scores is a consequence of those design choices?
The most disturbing result, however, is this:
In general, districts’ rankings are directly related to how many low-income students they enroll. Even just looking at the rankings of urban school districts, for most (but not all) of the districts in the top 25 percent, less than half of their students are from low-income families.
There are about twelve months before these preliminary results become real ones, and one can only hope that some of these design problems and errata are resolved by then, but we're not hopeful.
What Value-Added Research Does And Does Not Show
Worth reading in its entirety.
And then there are all the peripheral contributions to understanding that this line of work has made, including (but not limited to):
- That experience does matter;
- That the quality of peers affects teacher performance;
- That teachers perform differently in different schools;
- And that students’ backgrounds explain more of the variation in their performance than school-related factors.
Prior to the proliferation of growth models, most of these conclusions were already known to teachers and to education researchers, but research in this field has helped to validate and elaborate on them. That’s what good social science is supposed to do.
Conversely, however, what this body of research does not show is that it’s a good idea to use value-added and other growth model estimates as heavily-weighted components in teacher evaluations or other personnel-related systems. There is, to my knowledge, not a shred of evidence that doing so will improve either teaching or learning, and anyone who says otherwise is misinformed.*
As has been discussed before, there is a big difference between demonstrating that teachers matter overall – that their test-based effects vary widely, and in a manner that is not just random – and being able to accurately identify the “good” and “bad” performers at the level of individual teachers. Frankly, to whatever degree the value-added literature provides tentative guidance on how these estimates might be used productively in actual policies, it suggests that, in most states and districts, it is being done in a disturbingly ill-advised manner.
[readon2 url="http://shankerblog.org/?p=4358&mid=5417"]Read entire article[/readon2]
Two steps back
When a major aspect of your mission is to promote the potential benefits of charter schools and corporate education reform ideas, having to write an accountability report titled "Two steps forward, one step back" was likely a painful prospect. But that is the title of the Fordham Foundation's annual report.
As we detailed in our highly read series "Fordham Exposed" (part I, part II), Fordham sponsored charters have not performed well.
With the results being difficult to spin, especially given the added scrutiny corporate education reformers are now starting to receive, Fordham's report attempts to make its results appear more robust by comparing them to those of other large charter sponsors.
The graph that puts Fordham at the top of the pile, however, mainly demonstrates how poorly the other large authorizers are performing. If Fordham has taken one step back, its peers have taken two, or even three.
A significant part of the education reform debate revolves around the cost of delivering a high quality, universal education. So it is disappointing to note that Fordham's report does not address the cost of the results they have produced. One can only surmise, by the decision to omit this data, that those results are not flattering either. We would call upon the Fordham Foundation to publish cost data in its annual reports going forward.
Two Steps Forward, One Step Back: Fordham’s 2010-11 Sponsorship Accountability Report
