We're stealing the headline from this NYT article to bring to your attention a report on the IMPACT rubric for teacher evaluation in Washington, D.C. Ohio's new evaluation system, passed in the state budget, draws some of its heritage from IMPACT, so we thought it would be valuable to consider it for a moment.
Her fears were not unfounded: 165 Washington teachers were fired last year based on a pioneering evaluation system that places significant emphasis on classroom observations; next month, 200 to 600 of the city’s 4,200 educators are expected to get similar bad news, in the nation’s highest rate of dismissal for poor performance.
The evaluation system, known as Impact, is disliked by many unionized teachers but has become a model for many educators. Spurred by President Obama and his $5 billion Race to the Top grant competition, some 20 states, including New York, and thousands of school districts are overhauling the way they grade teachers, and many have sent people to study Impact.
Ohio's new system also involves each teacher receiving two 30-minute in-class observations. Education Sector, a non-profit think tank, recently produced a paper on IMPACT and took a look at some of the ways this new system has affected Washington, D.C. teachers. We urge you to read the paper in full, below, but we've also pulled out some of the interesting pieces to entice you.
It is a measure of how weak and meaningless observations used to be that these pop visits can fill teachers, especially the less experienced ones, with the anxiety of a 10th-grader assigned an impromptu essay on this week’s history unit for a letter grade. The stress can show up in two ways—the teacher chokes under the pressure, thereby earning a poor score, or she changes her lesson in a way that can stifle creativity and does not always serve students. Describing these observations, IMPACT detractors use words like “humiliating,” “infantilizing,” “paternalistic,” and “punitive.” “It’s like somebody is always looking over your shoulder,” said a high school teacher who, like most, did not wish to be named publicly for fear of hurting her career.
“Out of 22 students, I have five non-readers, eight with IEPs [individual educational plans, which are required by federal law for students with disabilities], and no co-teacher,” says the middle school teacher. “The observers don’t know that going in, and there is no way of equalizing those variables.”
Bill Rope is not young, or particularly bubbly, but he is a respected teacher who sees this unusual relationship from the confident perspective of an older man who went into education after a 30-year career in the foreign service. Rope, who now teaches third grade at Hearst Elementary School in an affluent neighborhood of Northwest D.C., was rated “highly effective” last year and awarded a bonus that he refused to accept in a show of union solidarity.
But a more recent evaluation served to undermine whatever validation the first one may have offered. In the later one, a different master educator gave him an overall score of 2.78—toward the low end of “effective.”
So how did it all shake out? At the end of IMPACT’s first year, 15 percent of teachers were rated highly effective, 67 percent were judged effective, 16 percent were deemed minimally effective, and 2 percent were rated ineffective and fired.
Theoretically, a teacher’s value-added score should show a high correlation with his rating from classroom observations. In other words, a teacher who got high marks on performance should also see his students making big gains. And yet DCPS has found the correlation between these two measures to be only modest, with master educators’ evaluations only slightly more aligned with test scores than those of principals.
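To make the idea of "modest correlation" concrete, here is a minimal sketch of how such a comparison might be computed. The data below is entirely invented for illustration; it is not DCPS data, and the 1-to-4 rating scale and sample size are assumptions for the example only.

```python
import numpy as np

# Hypothetical illustration only: invented ratings, NOT real DCPS data.
rng = np.random.default_rng(0)

# Simulated observation ratings on an assumed 1-4 scale for 50 teachers.
observation_scores = rng.uniform(1.0, 4.0, size=50)

# Simulated value-added scores that are only loosely tied to observations,
# mimicking a "modest" relationship between the two measures.
value_added = 0.3 * observation_scores + rng.normal(0.0, 0.5, size=50)

# Pearson correlation between the two measures.
r = np.corrcoef(observation_scores, value_added)[0, 1]
print(f"correlation: {r:.2f}")
```

A correlation near 1.0 would mean the two measures rank teachers almost identically; a value well below that, as DCPS reportedly found, suggests classroom observations and test-score gains are capturing different things.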