ODE publishes propaganda

prop·a·gan·da
/ˌpräpəˈgandə/
Noun
1. Information, esp. of a biased or misleading nature, used to promote or publicize a particular political cause or point of view.
2. The dissemination of such information as a political strategy.

That aptly describes the latest document published by the Ohio Department of Education, titled "Myths vs. Facts about the Ohio Teacher Evaluation System". The document lists 10 alleged myths about the teacher evaluation system being created. We thought we'd take a closer look at some of these alleged "myths".

1. Myth: The state is telling us what to do in local evaluations.

Under a bulleted list discussing local board flexibility in creating evaluations, ODE states: "The percentages within the given range for student growth measures for the teachers in that district." This is no longer true for teachers who have value-added scores. These teachers (over 30% of Ohio's teaching corps) will have 50% of their evaluation based on student test scores. On this, local boards have zero flexibility; it's a state mandate. We judge aspects of this myth to actually be true.

2. Myth: This is just a way to fire teachers.

ODE goes to great lengths to discuss how these evaluations will be great for teachers in identifying areas of improvement (though no money has been allocated for professional development). Utterly lacking is any discussion of the provision within HB153 that prohibits giving preference based on seniority in determining the order of layoffs, or in rehiring teachers when positions become available again, except when choosing between teachers with comparable evaluations. It is no secret that corporate education reformers such as Michelle Rhee desperately want to use evaluations as the basis for firing what they purportedly measure to be "ineffective" teachers. After all, this is exactly the process used in Washington DC, where she came from. It's far too soon to call this a myth; it's more like a corporate education reformer's goal.

3. Myth: One test in the spring will determine my fate.

It's nice that ODE stresses the importance of using multiple measures, but once again it fails to acknowledge that HB555 removed those multiple measures for 30% of Ohio's teachers. For those teachers, their fate will be determined by tests. This myth is therefore true.

5. Myth: The state has not done enough work on this system – there are too many unanswered questions.

How can it be a myth when even this document fails to state that "we're ready"? SLOs have yet to be developed, Common Core is almost upon us but no one knows what the tests will be, the legislature keeps changing the rules of the game, and nowhere near enough evaluator training has taken place to evaluate all of Ohio's teachers. Ohio isn't ready for this, and that's a fact, not a myth.

6. Myth: “Value-Added” is a mysterious formula and is too volatile to be trusted.

This is perhaps one of the most egregious points of all. Study after study after study has demonstrated that value-added is volatile, unreliable, and inappropriate for measuring teacher effectiveness. ODE's explanation conflates the use of value-added as a diagnostic tool with its use in evaluating teachers. Those are two very different use cases indeed.

As for it being mysterious, the formula used in Ohio is secret and proprietary - it doesn't get more mysterious than that! This claim by ODE is simply untrue and ridiculous; they ought to be embarrassed for publishing it. This myth is totally true and backed up by all the available scientific evidence.

7. Myth: The current process for evaluating teachers is fine just as it is.

Their explanation: "Last year, 99.7 percent of teachers around the country earned a “satisfactory” evaluation, yet many students didn’t make a year’s worth of progress in reading and are not reading at grade level." Right out of the corporate education reformers' message book: blame the teacher. Still think this isn't going to end up being about firing teachers? This myth is a straw man; no one argues the current system is ideal, but the proposed OTES is dangerously constructed.

8. Myth: Most principals (or other evaluators) don’t have time to do this type of evaluation, so many will just report that teachers are proficient.

ODE states: "Fact: Most principals are true professionals who want the teachers in their buildings to do well." But wait a minute: in Myth #7 these very same principals were handing out "satisfactory" grades like candy to 99.7% of teachers. Which is it? Are they professionals who can fairly evaluate teachers, or aren't they? We wrote about the massive administrative task faced by school administrators almost two years ago. Nothing has happened to alleviate those burdens, other than a $2 billion budget cut. This myth is 100% true.

9. Myth: This new evaluation system is like building the plane while we’re flying it.

ODE states: "Fact: Just as the Wright brothers built a plane, tried it by flying it, landed it, and then refined the plane they built, the new evaluation system was built, tried and revised. "

We'll just point out that 110 years have passed since the Wright Brothers first flew and the world has developed better design and project management tools since then.

10. Myth: It will be easy to implement the new teacher evaluation system.

Has anyone, anywhere said this? Or did the ODE brainstorming session run out of bad ideas at 9, and this is all they could come up with? Talk about ending with a straw man, which, frankly, given the rest of the document, is probably the most appropriate ending.

ODE ought to withdraw this piece of propaganda from public view.

Retention needs reforms too

Every year tens of thousands of teachers quit the profession.

With approximately 1.6 million teachers set to retire in the next decade, replenishing America’s teaching force should be a top priority. But filling classrooms with new teachers is only half the battle. Retaining them is equally important.

Numerous studies show that teachers perform best after being in the classroom for at least five years. According to a McKinsey study, 14 percent of American teachers leave after only one year, and 46 percent quit before their fifth year. In countries with the highest results on international tests, teacher turnover rates are much lower—around 3 percent.

Few if any corporate education reformers seem to want to address this problem, which is not particularly surprising. High turnover is a mechanism for keeping costs low, by constantly replenishing large percentages of the workforce with younger, cheaper employees. However, for those interested in the critical importance of teacher retention, a few interesting articles were published recently indicating that school management plays a critical role.

Principal Plays Surprising Role in Why New Teachers Quit

Why do so many beginning teachers quit the profession or change schools? Surprising new research finds it's not a heavy workload or lack of resources that has the most significant effect, but instead the relationship between teachers and their principal.
[…]
The study gauged novice teachers' intent to remain teaching and the factors that might influence that decision. Youngs said he was surprised to learn the frequency with which novices met with their school-assigned mentor teachers did not make them more or less likely to continue teaching.

In fact, the most important factor that influenced commitment was the beginning teacher's perception of how well the school principal worked with the teaching staff as a whole. This was a stronger predictor of intent to remain teaching than having adequate resources, the amount of administrative duties the teacher had or the size of their workload.

Another, unrelated article in Forbes hinted at this too:

First, Public Agenda found, at the nine successful schools “principals lead with a strong and clear vision . . . and never lose sight” of their goals. What’s more, “these principals earn trust and respect by engaging and supporting their staff in building the structures, practices and confidence necessary to fulfill this vision.”

Public Agenda, referenced above, produced a report titled "Failure is not an Option." It laid out a number of factors that affected success in nine of Ohio's high-poverty, high-achieving schools:

Second, according to Public Agenda, leaders of the successful schools “provide genuine opportunities and incentives for teachers to collaborate, and teachers say that collaboration and sharing best practices are keys to their effectiveness.” Most every organization, of course, insists that its employees work together seamlessly. More often than not, they’re mistaken or lying. “Even within the same company,” Drucker observed in Managing in a Time of Great Change, “people tend to resist sharing information.”

Third, teachers at the successful schools “regard student data as clarifying and helpful, and they use it to plan instruction.” In fact, “examining student data and talking about how to address the specific problems it reveals often produce further opportunities for staff to work together and learn from one another.”

In other words, while everyone is held accountable for results, test data is used to help foster a culture of continuous improvement; it is not used as a cudgel. Whenever any organization—whether a school or corporation—turns measurement into an excuse for punishment, Drucker noted in The Practice of Management, it will destroy morale, and employees will invariably find a way “not to obtain the best performance but to obtain the best showing” on the test or audit by gaming the system.

We need to start a discussion on policies that will lead to greater teacher retention - this is far more critical to maintaining a high quality education system than Rube Goldberg mechanisms to weed out a few underperformers.

Donna O'Connor

This arrived in our mailbox this morning and we wanted to share. Not many candidates can inspire this many volunteers to go canvassing on the last, cold Saturday of a campaign, let alone a first-time candidate for the Ohio House. But that's just what special ed teacher Donna O'Connor has been able to do every weekend.

Volunteers tell JTF that they have been receiving an incredibly positive response to Donna's message from everyone they speak to - and they have spoken to a lot of people. Well over 30,000, in fact!

What's in your portfolio?

A reader pointed out this exchange and segment on CNBC, a business channel. There can be no doubt that the financiers who brought us the Great Recession see education as the next area ripe for looting.

Anchor: Charter schools have become very popular... But are charter schools a wise addition to your investment portfolio? Well let’s ask David Brain, President and CEO of Entertainment Properties Trust. David, why would I want to add charter schools into my portfolio?

DB: Well I think it’s a very stable business, very recession-resistant. It’s a high-demand product. There’s 400,000 kids on waiting lists for charter schools, the industry’s growing about 12-14% a year. So it’s a high-growth, very stable, recession-resistant business. It’s a public payer, the state is the payer on this category, and if you do business with states with solid treasuries then it’s a very solid business.

Anchor: Well let me ask you about potential risks, here, to your charter school portfolio, because I understand that three of your nine “Imagine” schools are scheduled to actually lose their charters for the next school year. Does this pose a risk to investors?

DB: Well, occasionally—our Imagine arrangement’s on a master lease, so there’s no loss of rents to the company, although occasionally there are losses of charters...In this case it’s a combination of relationship with the supervisory authorities and educational quality; sometimes the educational quality is very difficult to change in one, two, or three years. It’s a long-term proposition, so there are some of these that occur, but we’ve structured our affairs so this is not going to impact our rent-roll and in fact you see this is maybe even a good experience as the industry thins out some of the less-performing schools...

I don’t—there’s not a lot of risk...the fact is this has bipartisan support. It’s part of the Republican platform and Arne Duncan, Secretary of Education in the Obama Administration, has been very high on it throughout their work in public education. So we have both political parties solidly behind it, you have high demand, high growth, you have performance across the board...it’s our highest growth and most appealing sector right now of the portfolio. It’s the most high in demand, it’s the most recession-resistant. And a great opportunity set with 500 schools starting every year. It’s a two and a half billion dollar opportunity set annually.

Catastrophic failure of teacher evaluations in TN

If the ongoing teacher evaluation failure in Tennessee is any guide, Ohio has some rough waters ahead. Tennessee's recently passed system is very similar to Ohio's.

It requires 50 percent of the evaluation to be comprised of student achievement data—35 percent based on student growth as represented by the Tennessee Value-Added Assessment System (TVAAS) or a comparable measure and the other 15 percent based on additional measures of student achievement adopted by the State Board of Education and chosen through mutual agreement by the educator and evaluator. The remaining 50 percent of the evaluation is determined through qualitative measures such as teacher observations, personal conferences and review of prior evaluations and work.
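For illustration only, the 50/35/15 weighting described above can be sketched as a simple weighted sum. The weights come from the article; the component names and example scores below are hypothetical, not taken from the Tennessee report:

```python
# Sketch of Tennessee's composite evaluation weighting (illustrative only).
# Weights per the description above; component names are our own labels.
WEIGHTS = {
    "tvaas_growth": 0.35,   # student growth (TVAAS or comparable measure)
    "achievement": 0.15,    # additional student achievement measures
    "qualitative": 0.50,    # observations, conferences, prior evaluations
}

def composite_score(scores):
    """Weighted composite of the three evaluation components (1-5 scale)."""
    assert set(scores) == set(WEIGHTS), "all three components required"
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical teacher: rated 2 on TVAAS but 5 in classroom observations
example = {"tvaas_growth": 2.0, "achievement": 3.0, "qualitative": 5.0}
print(round(composite_score(example), 2))  # 0.35*2 + 0.15*3 + 0.50*5 = 3.65
```

A sketch like this makes the article's point concrete: because the qualitative half and the test-score half carry equal weight, a teacher scored very differently by the two methods lands in the middle, which is exactly the divergence the state report flagged.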

Tennessee’s new way of evaluating classrooms “systematically failed” to identify bad teachers and provide them more training, according to a state report published Monday.

The Tennessee Department of Education found that instructors who got failing grades when measured by their students’ test scores tended to get much higher marks from principals who watched them in classrooms. State officials expected to see similar scores from both methods.

“Evaluators are telling teachers they exceed expectations in their observation feedback when in fact student outcomes paint a very different picture,” the report states. “This behavior skirts managerial responsibility.”

The education administration in Tennessee is pointing the finger at the in-classroom evaluations, but as one commenter on the article notes,

Perhaps what we are seeing with these disparities is not a need to retrain the evaluators, but rather confirmation of what many know but the Commissioner and other proponents of this hastily conceived evaluation system refuse to see -- the evaluation criteria mistakenly relies too much on TVAAS scores when they do not in fact accurately measure teacher effectiveness.

It has been stated over and over that the use of value-added at the teacher level is not appropriate: it is subject to too much variation, instability, and error. Yet when these oft-warned-about problems continue to manifest, they are ignored, excused, and other factors scapegoated instead.

To make matters worse, the report (read it below) suggests that "school-wide value-added scores should be based on a one-year score rather than a three-year score. While it makes sense, where possible, to use three-year averages for individuals because of smaller sample sizes, school-wide scores can and should be based on one-year data."

So how did the value-added scores stack up against observations, with 1 being the lowest grade and 5 the highest?

Are we really supposed to believe that a highly educated and trained workforce such as teachers is failing at a 24.6% rate (grades of 1 and 2)? Not even the most ardent corporate education reformer has claimed that kind of number! It becomes even more absurd when one looks at student achievement. It's hard to argue that a quarter of your workforce is substandard when student achievement is at record highs.

Instead, it seems more reasonable that a more modest 2%-3% of teachers are ineffective, and that the observations by professional, experienced evaluators are accurately capturing that situation.

Sadly, nowhere in the Tennessee report is there a call for further analysis of their value-added calculations.

Teacher Evaluation in Tennessee: A Report on Year 1 Implementation