Science Fact

Corporate education reform science fiction is having an unintended(?) science fact effect.

First, the science

If VAM scores are at all accurate, there ought to be a significant correlation between a teacher's score in one year and the next. In other words, good teachers should have somewhat consistently higher scores, and poor teachers ought to remain poor. One analysis put this to the test with a scatter plot: the ratings from 2009 on one axis, the ratings from 2010 on the other. What should we expect here? If there is a correlation, we should see some sort of upward-sloping line.
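The year-over-year check described above can be sketched in a few lines of Python. The ratings below are made up for illustration (the real NYC data isn't reproduced here); the point is only that a reliable measure should give a Pearson correlation near 1, while noisy ratings push it toward 0.

```python
# Sketch of the year-over-year reliability check, using hypothetical
# VAM-style percentile ratings for ten teachers -- not real NYC data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings: 2009 on one axis, 2010 on the other.
ratings_2009 = [12, 35, 50, 61, 78, 90, 22, 44, 67, 83]
ratings_2010 = [70, 20, 55, 15, 40, 88, 65, 30, 95, 10]

print(round(pearson_r(ratings_2009, ratings_2010), 2))
```

If a teacher's rating in 2010 were largely determined by the same underlying skill measured in 2009, this number would sit well above zero; a value near zero means the two years' ratings are essentially unrelated.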

There is one huge takeaway from all this. VAM ratings are not an accurate reflection of a teacher's performance, even on the narrow indicators on which they focus. If an indicator is unreliable, it is a farce to call it "objective."

This travesty has the effect of discrediting the whole idea of using test score data to drive reform. What does it say about "reformers" when they are willing to base a large part of teacher and principal evaluations on such an indicator?

That travesty is now manifesting itself in real personal terms.

In 2009, 96 percent of P.S. 146's fifth graders were proficient in English, and 89 percent in math. When the New York City Education Department released its numerical ratings recently, it seemed a sure bet that the P.S. 146 teachers would be at the very top.

Actually, they were near the very bottom.
Though 89 percent of P.S. 146 fifth graders were rated proficient in math in 2009, the year before, as fourth graders, 97 percent were rated as proficient. This resulted in the worst thing that can happen to a teacher in America today: negative value was added.

The difference between 89 percent and 97 percent proficiency at P.S. 146 is the result of three children scoring a 2 out of 4 instead of a 3 out of 4.
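A quick back-of-the-envelope check shows how few students it takes to produce that swing. The cohort size here is an assumption — the article doesn't state it — but a class of about 37 makes the percentages line up with three students dropping from a 3 to a 2.

```python
# Back-of-the-envelope check of the proficiency swing described above.
# Cohort size is an assumption (not given in the article); roughly 37
# students makes the reported percentages work out.

cohort = 37
proficient_as_4th_graders = 36   # 36/37 rounds to 97%
proficient_as_5th_graders = 33   # three students scored a 2 instead of a 3

pct_4th = round(100 * proficient_as_4th_graders / cohort)
pct_5th = round(100 * proficient_as_5th_graders / cohort)
print(pct_4th, pct_5th)  # → 97 89
```

In a cohort that small, a swing of three children moves the proficiency rate by eight percentage points — which is the entire difference between "top of the city" and "negative value added."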

While Ms. Allanbrook does not believe in lots of test prep, her fourth-grade teachers do more of it than the rest of the school.

In New York City, fourth-grade test results can determine where a child will go to middle school. Fifth-grade scores have never mattered much, so teachers have been free to focus on project-based learning. While that may be good for a child’s intellectual development, it is hard on a teacher’s value-added score.

These teachers are not the only ones.

Bill Turque tells the story of teacher Sarah Wysocki, who was let go by D.C. public schools because her students got low standardized test scores, even though she received stellar personal evaluations as a teacher.

She was evaluated under the D.C. teacher evaluation system, called IMPACT, a so-called “value-added” method of assessing teachers that uses complicated mathematical formulas that purport to tell how much “value” a teacher adds to how much a student learns.

As more data is demanded, more analysis can be done to demonstrate how unreliable that data is for these purposes, and consequently we are guaranteed to read more stories of good teachers becoming victims of bad measurements. It's unfortunate we're going to have to go through all this to arrive at that understanding.

We're gonna need a bigger boat

There's a line in the movie Jaws where it dawns on Martin Brody that they are up against a serious shark and need a bigger boat. Well, there's an article in the Dayton Daily News today that suggests school districts might need a bigger boat too, if they are to comply with some of the crazy provisions of S.B. 5.

The new merit pay system mandated in Senate Bill 5 will be applied to Ohio’s 146,000 K-12 teachers and indirectly impact 1.78 million students in 613 school districts.
Senate Bill 5 calls for teachers to be evaluated each year by April 1. The reviews would be based on: licensure level; whether teachers attain ‘highly qualified’ status; student test scores; at least two observations of at least 30 minutes each; and other criteria picked by the local school board.

Pay, firings and layoffs will be based on these evaluations.

Let's stop there just for one second. We won't dwell on licensure level, status or even student test scores. We'll get to those for sure another time.

Let's just think for a minute about these observations.

There must be 2 per year per teacher of at least 30 minutes each. 30 minutes + 30 minutes = 1 hour. 1 hour x 146,000 teachers = 146,000 hours of observation per year.

But these observers aren't just going to magically appear. They will need time to organize the observations, to get to the classes, to record their findings and to issue a report. Conservatively this adds another hour per year per teacher to the effort.

Now we are at 292,000 hours per year just for this provision alone.

If someone were to work 8 hours a day, 5 days a week, 52 weeks a year, it would take them over 140 years to complete this task. Since these observations have to be completed annually, that means we're going to need at least 140 more administrators just for this provision alone!

Is this what was meant by providing school districts the tools they need to save money?

Ohio School Boards Association lobbyist Damon Asbury, a former school district superintendent who has assessed evaluation systems, said the best ones boil down to using multiple data points, including observations made by different observers. Asbury said high quality, annual evaluations of every teacher will put heavy pressure on administrators.

“That is not to say it can’t be done but it’ll require more time and effort,” he said. “We may find ourselves in need of more administrators.”

We're going to need a bigger boat, or at least one that doesn't have so many holes in it.