With 2014 just beginning we thought it would be useful to lay down a marker on what to expect in education this year. In no particular order, then.
The 3rd Grade Reading Guarantee
This list may be in no particular order, but one of the first big issues we're likely to see is large numbers of 3rd graders failing their high-stakes reading test and having to repeat 3rd grade. According to ODE, more than a third of 3rd graders failed the state reading test this fall. We expect the number of failures in the spring to be somewhat lower, but it will still be substantial, and unevenly spread out across the state. Urban and rural districts will be especially at risk.
The impact of this policy will be felt severely this summer, as districts scramble to find the resources to coach students up in order to promote them, and again in the new school year, when they will have to absorb a much larger 3rd grade cohort and a smaller 4th grade class. Many elementary teachers (and teachers with reading endorsements) are going to find themselves shuffled around.
Parents of students who are retained are not going to be happy either, and lawmakers are undoubtedly going to hear from the education community and parents seeking more flexibility and local control - something the legislature is going to find hard to resist.
OTES
Districts will continue to struggle implementing OTES, with SLO creation and resourcing the non-test-related components stressing capacity past the breaking point. A large number of districts (especially non-RttT districts) have delayed implementation work on OTES, but that tactic isn't going to be sustainable. The legislature has an opportunity to relieve some of the unnecessary burden by passing SB229 (which passed the Ohio Senate unanimously late in 2013). Among its provisions:
*Academic growth factor: Lowers the academic growth factor required on teacher evaluations to 35% from the current 50%. A school district may attribute an additional percentage to the academic growth factor, not to exceed 15% of an evaluation. The academic growth factor under OTES is based on value-added and/or other student growth measures, depending on the subjects and grades in a teacher’s course load.
*Frequency of evaluations: Authorizes local school boards to reduce the frequency of evaluations required for teachers who receive an evaluation rating of “Skilled” or “Accomplished” (the top two ratings).
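To make the weighting change concrete, here is a minimal sketch of how an SB229-style composite score might be computed. The function name, the 0-100 score scale, and the idea of blending exactly two components are illustrative assumptions, not the statutory formula:

```python
# Hypothetical sketch of SB229-style evaluation weighting: the growth
# factor counts for 35%, a district may attribute up to 15% more to it,
# and the remainder comes from the other evaluation components.
# Names and the 0-100 scale are illustrative assumptions.

def composite_score(growth, performance, extra_growth_weight=0.0):
    """Blend a student-growth score and a performance-rating score
    (each 0-100) under SB229-style weights."""
    if not 0.0 <= extra_growth_weight <= 0.15:
        raise ValueError("district add-on is capped at 15%")
    growth_weight = 0.35 + extra_growth_weight
    return growth_weight * growth + (1 - growth_weight) * performance

# A district using the full 15% add-on weights growth at 50%,
# which matches current law's default weighting.
print(composite_score(60, 80))        # 35% growth weight
print(composite_score(60, 80, 0.15))  # 50% growth weight
```

Note how the optional add-on lets a district effectively opt back into the current 50% weighting, which is why the bill is framed as flexibility rather than a mandate.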
Word is that the House is having a hard time reconciling itself to the fact that this is something it must do. If it fails to pass SB229, or waters the relief down, then OTES is almost certain to collapse under its own weight.
Testing Technology
Common Core, and the online PARCC assessments that come with it, requires schools to have substantial, robust technology in place to deliver the tests and handle the massive amounts of data that are going to flow as a consequence. The legislature, once again, has failed to recognize the scale of this endeavor and provide adequate funding. Just $10 million has been set aside - a drop in the ocean for over 600 districts needing to purchase technology and upgrade infrastructure to handle the bandwidth requirements.
ODE recently reported on its technology survey of districts, and the news wasn't good. A third of respondents said they weren't going to be ready, and a staggering half of districts didn't respond at all.
What this means is that chaos will ensue. Schools with limited technology resources are going to be on weeks-long testing rotations, likely plagued by outages and serious downtime. A large number will simply fall back on paper-and-pencil testing, leaving Ohio with a two-track testing system: online testing for rich districts, pencil and paper for the poorer ones.
We envisage the legislature delaying the requirements by at least a year, and having to commit serious resources to technology purchases in the next budget.
Charter Schools
The charter school boondoggle is now a $1 billion a year business - big enough that its catastrophic failures are becoming a mainstream issue. Taxpayers and parents are noticing that the charter school promise is an empty one: instead of providing more quality choices and competition, charters are siphoning resources away from higher-performing public schools, which are having to curb curriculum, institute pay-to-play, and delay building upgrades.
2014 will see calls to reform Ohio's charter school laws grow in volume and diversity. The legislature, paralyzed by the millions of dollars in campaign contributions from charter operators, will try to resist calls for reform. It will be the seminal fight over public education in Ohio for the next few years.
Common Core
Perhaps the hardest policy area to predict is Common Core. It's a complex issue, tied up with standards and testing, with something for everyone to hate - and perhaps something to like. The development and implementation of Common Core has been a huge disaster: far too little input from educators and parents, and not enough early explanation of what it is and why it is needed. This has led to all manner of crazy conspiracy theories and very real distrust.
We suspect that as states react differently to Common Core implementation, we're likely to see less standardization adopted, and a slowdown in implementation.
Corporate Reformers
Corporate reformers are going to be on the other end of the accountability stick moving forward. Their ideas and policies are going to be scrutinized against their original claims, and given that most corporate education reform isn't supported by sound research or on-the-ground results, they are going to be found wanting. One thing's for sure: the "no excuses" gang are going to be offering a lot of their own excuses.
Join the Future will be here for 2014 documenting, analyzing and reporting on all these issues, and many, many more.
Submitted by a teacher
‘Tis the season for gift-giving, and with so many test-driven “school reform” policies being passed at the Ohio Statehouse this year, now would be a great time to present our lawmakers with gift-wrapped copies of one of the most forward-thinking children’s books ever written, Hooray for Diffendoofer Day. This thought-provoking picture book was primarily written by that great American philosopher, Theodor Seuss Geisel, but he died in 1991 before he was able to finish it. Adding to Dr. Seuss’s original notes, bits of verse, and rough sketches, author Jack Prelutsky and illustrator Lane Smith finished the fable, which was published in 1998.
This insightful book is about an outside-of-the-box kind of school staffed by appropriately named workers, such as the nurse, Miss Clotte, the custodian, Mr. Plunger, and three cooks named McMunch. Diffendoofer School teachers provide knowledge-based lessons mingled with some important skills not found on any list of standards:
Miss Bobble teaches listening, Miss Wobble teaches smelling,
Miss Fribble teaches laughing, and Miss Quibble teaches yelling.
The quirkiest teacher of all is the main character in the book:
My teacher is Miss Bonkers, she’s as bouncy as a flea.
I’m not certain what she teaches, but I’m glad she teaches me.
Of all the teachers in our school, I like Miss Bonkers best.
Our teachers are all different, but she’s different-er than the rest.
One day, Diffendoofer’s worried little principal, Mr. Lowe, makes a special announcement:
All schools for miles and miles around must take a special test,
To see who’s learning such and such- to see which school’s the best.
If our small school does not do well, then it will be torn down,
And you will have to go to school in dreary Flobbertown.
Like most of the children in Ohio’s public schools, Diffendoofer students are immediately stressed at the thought of taking such a high-stakes test, and they fret about the prospect of being removed from their beloved school and forced to attend monotonous Flobbertown, where “everyone does everything the same.” They continue to agonize over the test, until Miss Bonkers reminds them:
“Don’t fret,” she said, “you’ve learned the things you need
To pass that test and many more- I’m certain you’ll succeed.
We’ve taught you that the earth is round, that red and white make pink,
And something else that matters more- we’ve taught you how to think.”
Of course, Miss Bonkers is right, and the students get “the very highest score” and pass the dreaded test using background knowledge, combined with the critical and creative thinking skills they acquired through a variety of innovative activities at Diffendoofer School.
The Ohio Legislature’s over-reliance on high-stakes testing for its public schools has forced many districts to re-focus their precious economic resources on hard copy and digital curricula that will aid them in teaching to the test. Could it be merely a coincidence that the same educational companies that produce the tests and sell those testing resources also contribute to the campaign coffers of some of the legislators who sponsor the “school reform” laws? One can only speculate.
In this test-driven era, Art, Music, and Physical Education programs are being slashed in many school districts. Field trips are no longer considered affordable. Schools are cutting way back on recess as well, hoping it will “give the students more time to learn what’s needed to pass the tests.” It’s sad to see the demise of activities that round out our students’ knowledge-based learning with important critical and creative thinking, yet these are desperate times for many of our public schools, and they’re trying to get the most test-score bang for their bucks. Unfortunately, this kind of programming will eventually lead to more schools like dreary Flobbertown, where everyone does everything the same.
Before another test-driven “school reform” bill is considered in Ohio, it would be wise for lawmakers to invite public school teachers from around the state to come to the Statehouse to lead a series of book-talks about Dr. Seuss’s Hooray for Diffendoofer Day, accompanied by Diane Ravitch’s book, The Death and Life of the Great American School System: How Testing and Choice Are Undermining Education. Then our elected officials might begin to understand what Dr. Seuss figured out more than two decades ago- continued high-stakes testing is taking its toll on our children, as well as on the institution of public education.
Judging by the lack of teacher input requested by our legislators in recent years, that idea may be no more than another children’s fable.
From the Harvard Business Review, a look at how unreliable the kinds of performance measures being implemented in education are, and why business is abandoning the practice.
Microsoft has decided to dump the practice of rating individuals’ performance on a numerical scale – a decision I applauded in a recent post. I argued that such rating systems don’t accomplish the task managers expect from them, which is to accelerate the performance of their people. At best, they serve other goals: allocating compensation fairly, and aligning each individual’s goals with the values and strategies of the company.
However, even if these were sufficient goals, managers would still be frustrated by how poorly ratings-based Human Capital Management (HCM) systems achieve them. Here are the two intractable problems with today’s approach.
All current HCM systems are based on the notion that a manager can be guided to become a reliable rater of another person’s strengths and skills. The assumption is that, if we give you just the right scale, and just the right words to anchor that scale, and if we tell you to look for certain behaviors, and to rate this person a “4” if you see these behaviors frequently, and a “3” if you see them less frequently, then, over time, you and your fellow managers will become reliable raters of other people’s performance. Indeed, your ratings will come to have such high inter-rater reliability (meaning that two managers would give the same employee’s performance the same rating) that the company will use your ratings to pinpoint low performers, promote top performers, and pay everyone.
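The inter-rater reliability described here can be quantified. A standard statistic is Cohen's kappa, which measures how often two raters agree after correcting for the agreement they would reach by chance. The sketch below uses invented ratings purely for illustration:

```python
# Illustrative only: Cohen's kappa, a standard measure of the
# inter-rater reliability the article describes. Ratings are invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters: 1.0 is perfect
    agreement, 0.0 is no better than chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same category.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(rater_a) | set(rater_b)
    )
    return (observed - expected) / (1 - expected)

# Two managers rating the same ten employees on the 1-4 scale.
a = [4, 3, 3, 2, 4, 3, 2, 3, 4, 2]
b = [3, 3, 4, 2, 3, 2, 2, 4, 4, 3]
print(round(cohens_kappa(a, b), 2))  # a value near 0 means near-chance agreement
```

With these invented ratings the two managers agree on only four of ten employees, and kappa lands near zero - exactly the kind of near-chance reliability the article says HCM systems quietly assume away.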
Unfortunately there is no evidence that this happens. Instead, an overwhelming amount of evidence shows that each of us is a horribly unreliable rater of another person’s strengths and skills. It appears that, when it comes to rating someone else, our own strengths, skills, and biases get in the way and we end up rating the person not on some wonderfully objective scale, but on our own scale. Our rating of the other person simply answers the question: “Does she have more or less of this strength or skill than I do?” If she does, her rating is high; if she doesn’t, it is low. Thus our rating is really a rating of us, not of her.
Some companies have tried to neutralize this effect by training the manager how to look for specific clues to the desired strength or skill. This may result in managers becoming more observant, but it doesn’t turn them into better raters. This inability to rate reliably is so entrenched that even when organizations spend millions of hours and dollars training up a roster of experts whose only job is rating, they still don’t get the reliability they seek.
As an example, over the last few years every US state has done precisely that. Each state created a cadre of experts to evaluate, in extraordinary detail, the performance of teachers. One would have expected variation - some good teachers, some not so good, and some differently good - reflected in a range of ratings from the experts. But as The New York Times reported earlier this year, the results of these ratings have revealed alarmingly little variation. These expert raters are simply not very reliable.
Scour the literature and you will discover similar studies all confirming our struggles with rating the strengths and skills of others. Our ratings of others certainly look precise. They look like objective data. But they aren’t. They offer precision, but it is a false precision. So when we decide to promote someone based upon their “4” rating, or when we say that a certain choice assignment is open only to those employees who scored an “exceeds expectations” rating, or when we pay someone based on these ratings, or suggest a particular training course based upon them, we are making decisions on bad data. Earlier this month, in a spirited defense of the forced curve, Jack Welch advocated rating people on lists of competencies so that you can, in his words, “let them know where they stand.” This is a worthy sentiment, but given how poor we are as raters, competency ratings will only ever serve to confuse people as to where they stand. As they say in the data world: “Garbage in, garbage out.”
Bad practice, streamlined
We know how great managers manage. They define very clearly the outcomes they want, and then they get to know the person in as much detail as possible to discover the best way to help this person achieve the outcomes. Whether you call this an individualized approach, a strengths-based approach, or just common sense, it’s what great managers do.
This is not what our current performance management systems do. They ignore the person and instead tell the manager to rate the person on a disembodied list of strengths and skills, often called competencies, and then to teach the person how to acquire the competencies she lacks. This is hard, and not just the rating part. The teaching part is supremely tricky — after all, what is the best way to help someone learn how to be a better “strategic thinker” or to display “learning agility?” In recognition of just how hard this is, current performance management systems attempt to streamline the process by supplying the manager with writing tips on how to phrase feedback about the person’s competencies, or lack thereof, and then by integrating the competency rating with the company’s Learning Management System so that it spits out a training course to fix a particular competency “gap.”
The problem with all of this is not just the lack of credible research proving that the best performers possess the entire list of competencies, or any showing that if you acquire competencies you lack, your performance improves – or even that, as I described above, managers are woefully inaccurate at rating the competencies of others. No, the chief problem with all of this is that it is not what the best managers actually do.
They don’t look past the real person to a list of theoretical competencies. Instead the person, with her unique mix of strengths and skills, is their singular focus. They know they can’t ignore the individual. After all, the person’s messy uniqueness is the very raw material they must mold, shape, and focus in order to create the performance they want. Cloaking it with a generic list of competencies is inherently counter-productive.
Some say that we need to rate people on their competencies because this creates “differentiation,” a necessary practice of great companies. Of course they are right in theory — companies need to be able to differentiate between their people. But the practice is outdated. Differentiation cannot mean rating people on a pre-set list of competencies. These competencies are, by definition, formulaic and so they will actually serve to limit differentiation. True differentiation means focusing on the individual — understanding the strengths of each individual, setting the right expectations for each individual, recognizing the individual, putting the right career plan together for the individual. This is what the best managers do today. They seek to understand, and capitalize on the whole individual. This is hard enough to do when you work with the person every day. It’s nigh on impossible when you are expected to peer through the filter of a formula.
Telegraph Trumps Pony Express
In 1850 it took the average piece of mail five weeks to travel from St. Joseph, Missouri to the California coast. This was frustrating, since in 1848 somebody had discovered gold in the California hills and the wild and crazy rush was on. America was moving west and needed a much more efficient, streamlined way to communicate with its West Coast, full of riches. The Pony Express was the answer. Four hundred horses. A hundred and fifty small wiry riders. Two hundred stations, and the innovation of lightweight, leather cantinas to carry the mail westward. It was a fantastically complicated arrangement requiring careful forethought, detailed planning, and not inconsiderable daring. And, having woven together this complicated system, the inventors managed to streamline the process so well that, on its very first journey, what was once a five-week trek turned into a ten-day sprint from St. Joe to Sacramento. Speeches were made, fireworks fired, a great innovation was celebrated.
And then, Baron Pavel Schilling destroyed it all.
He didn’t do it deliberately of course. But he did invent the telegraph. And with that one invention, that one concept, he created a new worldview, one that rendered obsolete the entire system the Pony Express’s creators had worked so hard to streamline.
Our current performance management systems are the Pony Express — worthy efforts to streamline a labor-intensive, time-consuming, and unnecessarily complicated process. Who is our Baron Schilling? Well, let’s give that role to Microsoft’s Lisa Brummel, the executive who declared “no more ratings.”
And then there’s the biggest question. What’s the telegraph? A topic for the next post.