
The end of Michelle Rhee?

As for Rhee: I suspect she’s not planning on going anywhere, but all this error, corruption, and cover-up is taking a toll on her reputation. To the extent that her movement is about education reform rather than about Michelle Rhee, at some point it will have to find a more credible leader, no?


Value-Added Versus Observations

Value-Added Versus Observations, Part One: Reliability

Although most new teacher evaluations are still in various phases of pre-implementation, it’s safe to say that classroom observations and/or value-added (VA) scores will be the most heavily weighted components of teachers’ final scores, depending on whether teachers are in tested grades and subjects. One gets the general sense that many – perhaps most – teachers strongly prefer the former (observations, especially peer observations) over the latter (VA).

One of the most common arguments against VA is that the scores are error-prone and unstable over time – i.e., that they are unreliable. And it’s true that the scores fluctuate between years (also see here), with much of this instability due to measurement error, rather than “real” performance changes. On a related note, different model specifications and different tests can yield very different results for the same teacher/class.
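
To make the instability point concrete, here is a minimal simulation sketch (my own illustration, not from the post; the teacher-effect and error variances are assumptions chosen for illustration). Even when every teacher’s “true” effect is perfectly stable, layering independent measurement error on top of it each year yields estimated scores that correlate only weakly from one year to the next:

```python
# Minimal sketch: year-to-year instability from measurement error alone.
# All parameters (effect and noise variances) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 10_000

# Each teacher has a fixed "true" effect; each year's estimated VA score
# is that effect plus independent measurement error.
true_effect = rng.normal(0.0, 1.0, n_teachers)
noise_sd = 1.5  # assumed: error variance larger than true-effect variance

year1 = true_effect + rng.normal(0.0, noise_sd, n_teachers)
year2 = true_effect + rng.normal(0.0, noise_sd, n_teachers)

# Correlation of estimated scores across years is only ~0.3 with these
# assumptions, even though "real" performance never changed at all.
print(np.corrcoef(year1, year2)[0, 1])
```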

These findings are very important, and often too casually dismissed by VA supporters, but the issue of reliability is, to varying degrees, endemic to all performance measurement. Actually, many of the standard reliability-based criticisms of value-added could also be leveled against observations. Since we cannot observe “true” teacher performance, it’s tough to say which is “better” or “worse,” despite the certainty with which both “sides” often present their respective cases. And the fact that both entail some level of measurement error doesn’t by itself speak to whether they should be part of evaluations.*

Nevertheless, many states and districts have already made the choice to use both measures, and in these places, the existence of imprecision is less important than how to deal with it. Viewed from this perspective, VA and observations are in many respects more alike than different.

[readon2 url="http://shankerblog.org/?p=5621"]Continue reading part I[/readon2]

Value-Added Versus Observations, Part Two: Validity

In a previous post, I compared value-added (VA) and classroom observations in terms of reliability – the degree to which they are free of error and stable over repeated measurements. But even the most reliable measures aren’t useful unless they are valid – that is, unless they’re measuring what we want them to measure.

Arguments over the validity of teacher performance measures, especially value-added, dominate our discourse on evaluations. There are, in my view, three interrelated issues to keep in mind when discussing the validity of VA and observations. The first is definitional – in a research context, validity is less about a measure itself than the inferences one draws from it. The second point might follow from the first: The validity of VA and observations should be assessed in the context of how they’re being used.

Third and finally, given the difficulties in determining whether either measure is valid in and of itself, as well as the fact that so many states and districts are already moving ahead with new systems, the best approach at this point may be to judge validity in terms of whether the evaluations are improving outcomes. And, unfortunately, there is little indication that this is happening in most places.

Let’s start by quickly defining what is usually meant by validity. Put simply, whereas reliability is about the precision of the answers, validity addresses whether we’re using them to answer the correct questions. For example, a person’s weight is a reliable measure, but this doesn’t necessarily mean it’s valid for gauging the risk of heart disease. Similarly, in the context of VA and observations, the question is: Are these indicators, even if they can be precisely estimated (i.e., they are reliable), measuring teacher performance in a manner that is meaningful for student learning?

[readon2 url="http://shankerblog.org/?p=5670"]Continue reading part II[/readon2]

A Worthington teacher testifies against HB153

OEA member and WEA President Mark Hill's written testimony against HB 153

Chairman Widener, Ranking Member Skindell, and members of the Senate Finance Committee, my name is Mark Hill. I am a math teacher in the Worthington City Schools currently serving as president of the Worthington Education Association. Thank you for allowing me to offer testimony on HB 153.

I come today to talk to you about the teacher accountability provisions in HB 153. I have some concerns about the structure for accountability that is in the version passed out of the House.

I would like to begin by saying that I don’t have a problem with a rigorous evaluation system for teachers, nor do I disagree with the notion of removing ineffective teachers from the classroom. That may sound unusual coming from a leader of a local teachers union, but I am a parent, too, and I care about access to a high-quality education for my kids. The teachers I represent take a great deal of pride in teaching in an excellent school district; many of them live in the district and all of them want it to remain excellent; none of them want to work alongside a bad teacher.

HB 153, as passed by the House, goes too far. It requires teachers to be rated highly effective, effective, needs improvement, or unsatisfactory, based on an evaluation in which 50% of the score measures student growth through value-added scores averaged over three years. It requires the Superintendent of Public Instruction to set a minimum value-added measure for each of the rating levels. Furthermore, it imposes draconian penalties on teachers rated unsatisfactory or needs improvement, including placing such a teacher on unpaid leave if their principal does not consent to having them in the building the next year – effectively ending their careers.

Value-added scores are a great concept, but as a statistical measure they are fraught with error. Scores fluctuate with random error: in Houston’s value-added system, only 38% of teachers in the top fifth remained in the top rating the next year, and 23% of the top fifth ended up in the bottom fifth the next year (and vice versa). Fluctuations like that defy reason; it is highly unlikely that nearly a fourth of Houston’s top teachers one year were poor performers the next.

Another study, conducted for the US Department of Education’s National Center for Education Evaluation, found that, using three years of data, a teacher who should be rated as average has a 25% chance of being rated significantly below average, and a teacher who should be rated as a top performer has a 10% chance of being rated significantly below average. This means that under HB 153, 25% of the average teachers in Ohio and 10% of the good teachers in Ohio would be in jeopardy of losing their jobs due to statistical error. I hope the Ohio General Assembly would not want to add a “Wheel of Fortune” element to teachers’ careers.
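
To see how statistical noise alone can generate churn of roughly this magnitude, consider a minimal simulation sketch (the fixed “true” effects and the signal-to-noise ratio are assumptions for illustration, not estimates from Houston’s or any real VA system):

```python
# Rough sketch of quintile "churn" from noise alone; the signal-to-noise
# ratio here is an assumption, not an estimate from any real VA system.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = rng.normal(0.0, 1.0, n)   # fixed across years
noise_sd = 1.5                          # assumed measurement error

year1 = true_effect + rng.normal(0.0, noise_sd, n)
year2 = true_effect + rng.normal(0.0, noise_sd, n)

def quintile(scores):
    """0 = bottom fifth, 4 = top fifth."""
    cuts = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])
    return np.searchsorted(cuts, scores)

q1, q2 = quintile(year1), quintile(year2)
top = q1 == 4
print("top fifth staying on top:", (q2[top] == 4).mean())    # ~1/3
print("top fifth falling to bottom:", (q2[top] == 0).mean())  # ~1/10
```

With these assumptions, only about a third of year-one top-fifth teachers remain on top the next year, and roughly one in ten lands in the bottom fifth – even though no teacher’s underlying performance changed at all.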

Under this system, who would take care of the kids? There are teachers who ask for the students with behavior problems and learning disabilities because they care about them and believe they deserve an education. Under HB 153, these teachers would be putting their careers at risk to do so. My own son has Asperger’s Syndrome, a condition on the autism spectrum – who will want to teach him? Under HB 153, math and reading teachers are at far greater risk of losing their jobs than other teachers, because those are the only areas with enough scores to build a value-added model. Who would want to work in an area where you are constantly worried about losing your job due to statistical error?

I don’t come just to complain, but to offer solutions. First, you’ve already passed this framework for evaluation in Senate Bill 5. There is no logical reason to duplicate it in HB 153 – frankly, I don’t believe it belongs in either bill; it should be a subject of debate on its own.

Second, instead of mandating 50% value-added, allow local education agencies to decide how best to fit value-added into their evaluations. This is the system under Race to the Top – Worthington is a Race to the Top district, so we have already agreed to rate teachers’ effectiveness through evaluations that use value-added modeling. A top-down statewide approach will have serious unintended consequences.

Thank you for listening.

Please contact your State Senator and ask them to remove the SB 5 provisions from HB 153 (the budget bill).

Voucher Poll

A recent poll, noted in the Pittsburgh Tribune-Review, shows that most Pennsylvanians oppose vouchers:

Nearly two-thirds of Pennsylvanians oppose creating a voucher system that would use tax dollars to pay private-school tuition, according to a public opinion poll released yesterday.
[...]
In the March poll of 807 adults, 61 percent said they were opposed to the voucher idea, while 37 percent said they supported it. The margin of error was 3.4 percent.
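
As a quick arithmetic check (assuming the conventional 95% confidence level and a simple random sample), the reported margin of error is just what a sample of 807 implies:

```python
# Margin of error for a proportion, assuming a 95% confidence level
# and a simple random sample of n = 807.
import math

n = 807
p = 0.61  # share opposed to vouchers
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"{moe:.1%}")  # ~3.4%
```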

Citizens continue to understand the benefits of a great public education, and the ways in which vouchers undermine it for the benefit of the few.