Providing a broad-based K-12 education is a very complex endeavor. It's that complexity which makes it difficult to distill which policies work and which don't, when so many variables affect student outcomes.
However, a cottage industry has developed to reduce the entire topic of public education to a letter grade or a single number. It's as though these architects of simplicity read The Hitchhiker's Guide to the Galaxy and decided that if the answer to "life, the universe and everything" is 42, it should be even easier to grade a school as a simple C, or a teacher as a 1.07.
The problem, of course, is that much like Douglas Adams' novel, this thinking is also science fiction. Distilling such complexities down to simple terms means making highly subjective decisions and ignoring, or worse, being oblivious to, multiple variables.
A few cases in point. Ohio is about to deploy a new school rating system based on subjective measures that ignores a host of other factors.
As an educator and parent, I could rail for days about the lack of actual meaning behind any letter grade, whether an A or an F, and this is a decision that even Rick Santorum would call anachronistic. If your child brings home a ‘C’ on his report card, what does that mean? Does that mean he’s working his ass off and completing all of his homework but struggling with expressing his knowledge on written tests? Or does it mean that he’s uninterested in completing homework that isn’t challenging him while attaining perfect scores on assessments? Or does it just mean that he is earning consistent C’s on every single assignment, whether in-class, homework, quizzes, or tests? Perhaps it’s some combination of the above, so what does that tell you or your son about what he needs to do to improve?
See how clear those letter grades are?
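To make the point concrete, here's a toy sketch. The grading scale, category weights, and student scores below are entirely hypothetical, but they show how three very different students can all land on the same C:

```python
def letter_grade(avg):
    """Map a 0-100 average to a letter on a conventional 90/80/70/60 scale."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if avg >= cutoff:
            return letter
    return "F"

# Hypothetical category averages: (homework, quizzes, tests), weighted equally.
hard_worker = (100, 80, 40)   # does every assignment, struggles on written tests
bored_ace   = (20, 100, 100)  # skips unchallenging homework, aces assessments
steady_c    = (73, 74, 72)    # consistent C-level work across the board

for name, scores in [("hard worker", hard_worker),
                     ("bored ace", bored_ace),
                     ("steady C student", steady_c)]:
    avg = sum(scores) / len(scores)
    print(f"{name}: average {avg:.1f} -> {letter_grade(avg)}")
```

Three students needing three entirely different interventions, and the report card says the same thing about each of them.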
Separating the effects of economic factors from school performance doesn't appear to have been one of the major efforts undertaken, even though we have long known that a student's socioeconomic status, and that of the school district, is by far the leading predictor of performance.
Another recent example is the use of teacher-level value-added scores by New York's print media.
The article is a pretty remarkable display of both poor journalism and poor research. The reporters not only attempted to do something they couldn’t do, but they did it badly to boot. It’s unfortunate to have to waste one’s time addressing this kind of thing, but, no matter your opinion on charter schools, it’s a good example of how not to use the data that the Daily News and other newspapers released to the public.
However, we can’t take the performance categories – or the Daily News’ “analysis” of them – at face value. Their approach has one virtue – it’s easy to understand, and easy to do. But it has countless downsides, one of them being that it absolutely cannot prove – or even really suggest – what they’re saying it proves. I don’t know if the city’s charter teachers have higher value-added scores. It’s an interesting question (by my standards, at least), but the Daily News doesn’t address it meaningfully.
Though far from the only one, the reporters’ biggest problem was right in front of them. The article itself notes that only about half (32) of the city’s charter schools chose to participate in the rating program (it was voluntary for charters). This is actually the total number of participating schools in 2008, 2009 and 2010, most of which rotated in and out of the program each year. It’s apparently lost on these reporters that when only a subset of charters participates voluntarily, the charter teachers in the TDR data do not necessarily reflect the population overall. This issue by itself renders their assertions invalid and irresponsible.
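Here's a minimal sketch of that self-selection problem, using made-up numbers rather than the actual TDR data. It assumes, purely for illustration, that schools expecting weak scores tend to sit out of a voluntary program; under that assumption, the volunteer average says little about the group as a whole:

```python
import random

# 64 hypothetical charter schools, each with a school-level value-added
# score drawn from a standard normal (synthetic data, not the TDR release).
random.seed(1)
population = [random.gauss(0.0, 1.0) for _ in range(64)]

# Crude, deterministic stand-in for voluntary participation: schools
# expecting a poor score decline to take part.
volunteers = [score for score in population if score > -0.25]

pop_mean = sum(population) / len(population)
vol_mean = sum(volunteers) / len(volunteers)
print(f"all schools:     mean {pop_mean:+.2f} (n={len(population)})")
print(f"volunteers only: mean {vol_mean:+.2f} (n={len(volunteers)})")
```

Because the low scorers opted out, the volunteer mean sits above the population mean; nothing about the participating schools' numbers generalizes to the charters that stayed out.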
Simple thinking and bad education reporting are major impediments to real education reform that will improve the quality of our schools.
Why is it that a politician such as Mayor Frank Jackson can put forward plans to eliminate teacher seniority without anyone pointing out that teaching experience matters, and that if his goal is to improve the quality of Cleveland's public schools, his chosen policy preference is antithetical to it?
Research suggests that students learn more from experienced teachers (those with at least five years of experience) than they do from less experienced teachers (NCES 2000d; Rivkin, Hanushek, and Kain 1998; Murnane and Phillips 1981). These studies point primarily to the difference between teachers with fewer than five years of experience (new teachers) and teachers with five or more years of experience.
If that seems simple enough, there are, of course, even more complex answers to this question.
But these overall findings ignore the fact that the experience/achievement relationship differs a great deal by context. For instance, the returns to experience appear to vary by where teachers work. The relationship is more consistent among elementary school teachers (especially compared with those in high schools). The effect of experience on teacher productivity may also be mediated by the quality of a teacher's peers in the same school – i.e., novice teachers with more effective colleagues in the same school do better.
Similarly, there is evidence that experience matters less – or less consistently – in poorer schools (also see here). There are several plausible explanations for this discrepancy, such as the possibility that teachers in poorer schools burn out more rapidly, or that there are difficulties in teaching lower-income children that are harder to adjust to.
The experience factor varies not only by where you teach, but also by what you teach. Math teachers seem to improve more quickly (and consistently) than reading teachers, while newer evidence suggests that the same is true for teachers who remain in the same grade for multiple years.
Finally, it bears mentioning the obvious point that the effect of teacher experience might be totally different if we were able to look at outcomes other than test scores. The idea that experience doesn’t matter after five or so years incorrectly implies that test scores are the only relevant outcome. Nobody believes that is the case. (And, for what it’s worth, teachers with whom I’ve spoken find the idea that they stop improving after four or five years laughable.)
Instead, the debate over the Mayor's plan has revolved not around whether it has any basis in supportable fact, but around simplistic stories of the politics involved.
There are enough bad actors in the corporate school reform movement willing to put aside hard truths and solid facts in favor of their desire to profit from public education, but that should be no excuse for others not to challenge simplistic thinking and unsupported assertions, which can be equally damaging to the goal of delivering a quality education to all students.