
Ohio Teacher Evaluation System: Dishonest, Unrealistic, and Not Fully Supported by Academic Research

A great article that appeared on Daily Kos a few days ago:

I've spent the past three days at an OTES (Ohio Teacher Evaluation System) training. This system is being phased in over the next two years, and will serve as the vehicle by which all teachers in Ohio are evaluated. The workshop culminates with a post-assessment, taken some time after the classes end, resulting in licensure and the ability to evaluate instructional staff. OTES is described by ODE as a system that will
provide educators with a richer and more detailed view of their performance, with a focus on specific strengths and opportunities for improvement.

I talked to a number of administrators and teachers who had already taken the training before attending. Without exception, they were all struck by the rigidity of the rubric. I agree, but there's more here. Any system that wields so much power must be realistic, honest, and rooted in the consensus of academic research. The OTES rubric fails this basic test.

Words Matter
Check out the Ohio Standards for the Teaching Profession (starting on page 16), approved in October of 2005. Now look at the OTES rubric. The first thing you will notice is that the OTES rubric has four levels, while the Ohio Standards have only three. I think it's fair to say that the Ohio Standards simply did not include the lowest level. (The document says as much.) The top three levels of the OTES rubric align with the three levels of the Ohio Standards. The snag? The terminology used in the OTES rubric. Proficient has been replaced by Developing, Accomplished by Proficient, and Distinguished by Accomplished. Each level has been relegated!

One might argue that this doesn't matter. But, it does. Teacher evaluations are public record. School performance, or at least the percentage of teachers that fall into each category, will be published. Newspapers will ask for names of teachers and their ratings. And, as we will see as I unpack the rubric in greater detail, the very best teachers are likely to fall into the Proficient category. What's the one relationship between public education and the word Proficient already burned into the minds of parents? The minimal level of performance required to pass the Ohio Graduation Test. Dishonest.

[readon2 url="http://www.dailykos.com/story/2012/11/15/1161894/-Ohio-Teacher-Evaluation-System-Dishonest-Unrealistic-and-Not-Fully-Supported-by-Academic-Researc"]Continue reading...[/readon2]

Rethinking Teacher Evaluation in Chicago

The Consortium on Chicago School Research at the University of Chicago Urban Education Institute just released an interesting report on Chicago's teacher evaluation rubric. We bring this to our readers' attention because their process includes elements, such as observations, that will surely be included in the forthcoming Ohio evaluation rubric. The conclusion begins

Our study of the Excellence in Teaching Pilot in Chicago reveals some positive outcomes: the observation tool was demonstrated to be reliable and valid. Principals and teachers reported they had more meaningful conversations about instruction. The majority of principals in the pilot were engaged and positive about their participation. At the same time, our study identifies areas of concern: principals were more likely to use the Distinguished rating.

Our interviews with principals confirm that principals intentionally boost their ratings to the highest category to preserve relationships. And, while principals and teachers reported having better conversations than they had in the past, there are indications that both principals and teachers still have much to learn about how to translate a rating on an instructional rubric into deep conversation that drives improvement in the classroom. Future work in teacher evaluation must attend to these critical areas of success, as well as these areas of concern, in order to build effective teacher evaluation systems.

Though practitioners and policymakers rightly spend a good deal of time comparing the effectiveness of one rubric over another, a fair and meaningful evaluation hinges on far more than the merits of a particular tool. An observation rubric is simply a tool, one which can be used effectively or ineffectively. Reliability and validity are functions of the users of the tool, as well as of the tool itself. The quality of implementation depends on principal and observer buy-in and capacity, as well as the depth and quality of training and support they receive.

We would add that this kind of tool could be very dangerous absent due process collective bargaining protections.

Rethinking Teacher Evaluation in Chicago

How Socrates would fare on new teacher evaluation plan

This is a pretty entertaining piece:

The upstart Gates-funded organization Educators 4 Excellence has just put forth a proposal for teacher evaluations in New York City. They would accord 25 percent of the evaluation to student value-added growth data; 15 percent to data from local assessments; 30 percent to administrator observations; 15 percent to independent outside observations; 10 percent to student surveys; and 5 percent to support from the community.

The observations, they say, should follow a rubric. What sort of rubric should this be? The proposal states:

Observations should focus on three main criteria:

1. Observable teacher behaviors that have been demonstrated to impact student learning. For example, open-ended questions are more effective at improving student learning than closed questions.

2. Student behaviors in response to specific teacher behaviors and overall student engagement.

3. Teacher language that is specific and appropriate to the grade level and content according to taxonomy, such as Bloom’s. For example, kindergarten teachers should use different language than high school biology teachers.

Let’s see how Socrates might fare under these conditions. As I recall, he asked a fair number of closed questions. He did this to show his interlocutors a contradiction between what they assumed was true and what they subsequently reasoned to be true.

[readon2 url="http://www.washingtonpost.com/blogs/answer-sheet/post/how-socrates-would-fare-on-new-teacher-evaluation-plan/2011/06/13/AGGdhfTH_blog.html#pagebreak"]Continue reading...[/readon2]

“I Couldn’t Believe It Happened to Me”

Most teachers are likely to go through their entire careers without being unfairly targeted for dismissal by administrators. But that shouldn't be left to chance.

For example, what if this happened to you?

You’re a high school teacher. You work out with your students a rubric for grading a small-group project. One group, unfortunately, really blows this project off. According to the rubric, they deserve a D, which you deliver. Parents complain to the principal. He tells you to raise the grade. You say, no, and you point out that the students took part in designing the rubric that guided you in giving them the D.

Do you lose your job?

That can depend on whether you have a strong and enforceable due-process system for dismissal, generally called “tenure” but often misinterpreted as a guaranteed job.

[readon2 url="http://neatoday.org/2011/04/11/i-couldnt-believe-it-happened-to-me/"]Continue reading...[/readon2]