
Public schools neglected in favor of private choice expansion

From William Phillis, Ohio E & A

"The public common school," Horace Mann said, "is the Greatest Discovery made by man." It constitutes a social compact established for the benefit of all the children of all the people, community by community, across Ohio and across America. It has been the primary force for the common good in America.

Although it is a state system in Ohio, required by constitutional decree to be thorough and efficient, it is operated at the community level by elected boards of education, the fourth branch of government. In spite of inadequate levels of state funding through the decades, the public common school in Ohio and throughout the nation has nurtured this country, to which millions and millions in every generation have migrated.

The public common school, typically operating on a modest and constrained budget, has attempted to meet the individual needs of students. Programs for vocational/technical training and programs for those with disabilities and special needs have been a part of the common school fabric. Typically, education options have been limited by the fiscal resources available to school districts.

In the past two decades, the political will to maintain and strengthen the public common school, and thus the social compact and the common good, has dwindled amid a frenzied, untested "quick fix" strategy fueled by many who want to carry public money to the altar of the god of school choice.

Because resources are being transferred out of the public common school system to private choices, the system is less able to provide choices of its own; students within the public common school system are thus being denied choices because of choice expansion outside it.

The school funding measures in HB 59 (school funding level and non-formula school funding formula) are detrimental to most school districts while favoring the school choice movement. The state has the constitutional responsibility to maintain and nurture the common school system, not give it away.

The 130th General Assembly should put a moratorium on the expansion of school choice and establish a bipartisan, bicameral legislative research committee to study the current choice program.

Bill Gates Dances Around the Teacher Evaluation Disaster He Sponsored

No one in America has done more to promote the raising of stakes for test scores in education than Bill Gates.

Yesterday, Mr. Gates published a column that dances around the disaster his advocacy has created in the schools of our nation.

You can read his words there, but his actions have spoken so much more loudly that I cannot even make sense of what he is attempting to say now. So let's focus first on what Bill Gates has wrought.

No Child Left Behind was headed towards bankruptcy about seven years ago. The practice of labeling schools as failures and closing them on the basis of test scores was clearly causing a narrowing of the curriculum. Low-income schools in Oakland eliminated art, history and even science in order to focus almost exclusively on math and reading. The arrival of Arne Duncan, with top-level advisors borrowed from the Gates Foundation, created the opportunity for a re-visioning of the project.

Both the Race to the Top and the NCLB waiver processes required states and districts to put in place teacher and principal evaluation systems that placed "significant" weight on test scores. This was interpreted by states to mean that test scores must count for at least 30% to 50% of an evaluation.

The Department of Education had told the states how high they had to jump, and the majority did so.

[readon2 url="http://blogs.edweek.org/teachers/living-in-dialogue/2013/04/bill_gates_dances_around_the.html"]Continue reading...[/readon2]

Ohio Teacher Evaluation System: Dishonest, Unrealistic, and Not Fully Supported by Academic Research

A great article that appeared on Dailykos a few days ago

I've spent the past three days at an OTES (Ohio Teacher Evaluation System) training. This system is being phased in over the next two years, and will serve as the vehicle by which all teachers in Ohio are evaluated. The workshop culminates with a post-assessment, taken some time after the classes end, resulting in licensure and the ability to evaluate instructional staff. OTES is described by ODE as a system that will "provide educators with a richer and more detailed view of their performance, with a focus on specific strengths and opportunities for improvement."

I talked to a number of administrators and teachers who had already taken the training before attending. Without exception, they were all struck by the rigidity of the rubric. I agree, but there's more here. Any system that wields so much power must be realistic, honest, and rooted in the consensus of academic research. The OTES rubric fails this basic test.

Words Matter
Check out the Ohio Standards for the Teaching Profession (starting on page 16) approved in October of 2005. Now look at the OTES rubric. The first thing you will notice is that the OTES rubric has four levels, and that the Ohio Standards only have three. I think it's fair to say that the Ohio Standards did not include the lowest level. (The document says as much.) The top three levels of the OTES Rubric align with the three levels of the Ohio Standards. The snag? The terminology used in the OTES rubric. Proficient has been replaced by Developing, Accomplished by Proficient, and Distinguished by Accomplished. Each level has been relegated!

One might argue that this doesn't matter. But, it does. Teacher evaluations are public record. School performance, or at least the percentage of teachers that fall into each category, will be published. Newspapers will ask for names of teachers and their ratings. And, as we will see as I unpack the rubric in greater detail, the very best teachers are likely to fall into the Proficient category. What's the one relationship between public education and the word Proficient already burned into the minds of parents? The minimal level of performance required to pass the Ohio Graduation Test. Dishonest.

[readon2 url="http://www.dailykos.com/story/2012/11/15/1161894/-Ohio-Teacher-Evaluation-System-Dishonest-Unrealistic-and-Not-Fully-Supported-by-Academic-Researc"]Continue reading...[/readon2]

How Do High-Performing Nations Evaluate Teachers?

Who decides if a teacher is effective and how is that determination made? School systems across the United States are struggling to answer that question as they try to design and implement teacher evaluation systems that are fair and accurate. It’s no easy task and is not limited to public schools in this country. School systems around the world are tackling the same issue and are finding consensus among education stakeholders to be elusive.

Teacher evaluations were the main topic of discussion at the 2013 International Summit on the Teaching Profession (ISTP), held last week in Amsterdam. Now in its third year, the ISTP brought together leaders from teacher unions and education ministries to discuss issues around teacher quality, specifically the criteria used to determine teacher effectiveness and its purpose.

In most nations, teacher evaluation systems are essentially a “work in progress,” says Andreas Schleicher of the Organization for Economic Cooperation and Development (OECD). Schleicher, who attended the ISTP, is the principal author of the study that was presented at the summit. The report, Teachers for the 21st Century: Using Evaluations to Improve Teaching, looks at how different nations are tackling this thorny issue (or not tackling it) and identifies specific models that appear to work – that is, models that have buy-in from key stakeholders and can point to demonstrable results in student achievement. Because consensus is so frustratingly elusive, most nations are treading carefully, although there is widespread acknowledgement that improved evaluation systems have to be on the menu of education policy reforms.

Of the 28 countries surveyed in the OECD report, 22 have formal policy frameworks in place at the national level to regulate teacher evaluations. The six education systems that do not have such frameworks include Denmark, Finland, Iceland, Norway and Sweden, but teachers in these countries still receive professional feedback. In Denmark, for example, teachers receive feedback from their school administrators once a year. In Norway, teacher-appraisal policies are designed and implemented at the local or school level. In Iceland, evaluation is left to the discretion of individual schools and school boards.

[readon2 url="http://neatoday.org/2013/03/25/how-do-high-performing-nations-evaluate-teachers/"]Continue reading...[/readon2]

Correlation? What correlation?

Dublin teacher, Kevin Griffin, brings to our attention this graph, which he describes thusly

The chart plots the Value-Added scores of teachers who teach the same subject to two different grade levels in the same school year. (e.g., Ms. Smith teaches 7th Grade Math and 8th Grade Math, and Mr. Richards teaches 4th Grade Reading and 5th Grade Reading.) The X-axis represents the teacher's VA score for one grade level and the Y-axis represents the VA score from the other grade level taught.

If the theory behind evaluating teachers based on value-added is valid then a “great” 7th grade math teacher should also be a “great” 8th grade math teacher (upper right corner) and a “bad” 7th grade math teacher should also be a “bad” 8th grade math teacher (lower left corner). There should, in theory, be a straight line (or at least close) showing a direct correlation between 7th grade VA scores and 8th grade VA scores since those students, despite being a grade apart, have the same teacher.

Here's the graph

Looks more like a random number generator to us. Would you like your career to hinge on a random number generator?
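To see what that scatter implies, here is a minimal, hypothetical sketch in Python (it is not the Dublin data behind the chart; the teacher counts and noise level are invented for illustration). It generates noisy value-added scores for the same set of teachers in two grade levels and measures how strongly one score predicts the other. A stable measure should produce a correlation near 1; the chart above suggests something far weaker.

# Minimal sketch with made-up numbers, not the actual chart data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_teachers = 200
true_effect = rng.normal(0, 1, n_teachers)   # hypothetical "real" teacher quality
noise = 2.0                                  # measurement noise swamping the signal

# The same teachers, scored twice (once per grade level), each time with noise.
va_grade_7 = true_effect + rng.normal(0, noise, n_teachers)
va_grade_8 = true_effect + rng.normal(0, noise, n_teachers)

r, p = pearsonr(va_grade_7, va_grade_8)
print(f"Correlation between the two grade-level VA scores: r = {r:.2f} (p = {p:.3f})")

With noise this large relative to the underlying effect, the scatter of one grade's score against the other looks close to random even though the "true" teacher quality is identical in both grades, which is exactly the pattern Griffin describes.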

Do Value-Added Methods Level the Playing Field for Teachers?


Highlights

  • Value-added measures partially level the playing field by controlling for many student characteristics. But if they don't fully adjust for all the factors that influence achievement and that consistently differ among classrooms, they may be distorted, or confounded. (An estimate of a teacher's effect is said to be confounded when her contribution cannot be separated from other factors outside of her control, namely the students in her classroom.)
  • Simple value-added models that control for just a few test scores (or only one score) and no other variables produce measures that underestimate teachers with low-achieving students and overestimate teachers with high-achieving students.
  • The evidence, while inconclusive, generally suggests that confounding is weak. But it would not be prudent to conclude that confounding is not a problem for all teachers. In particular, the evidence on comparing teachers across schools is limited.
  • Studies assess general patterns of confounding. They do not examine confounding for individual teachers, and they can't rule out the possibility that some teachers consistently teach students who are distinct enough to cause confounding.
  • Value-added models often control for variables such as average prior achievement for a classroom or school, but this practice could introduce errors into value-added estimates.
  • Confounding might lead school systems to draw erroneous conclusions about their teachers – conclusions that carry heavy costs to both teachers and society.

Introduction

Value-added models have caught the interest of policymakers because, unlike the use of student test scores for other means of accountability, they purport to "level the playing field." That is, they supposedly reflect only a teacher's effectiveness, not whether she teaches high- or low-income students, for instance, or students in accelerated or standard classes. Yet many people are concerned that a teacher's value-added estimate will be sensitive to the characteristics of her students. More specifically, they believe that teachers of low-income, minority, or special education students will have lower value-added scores than equally effective teachers who are teaching students outside these populations. Other people worry that the opposite might be true: that some value-added models might cause teachers of low-income, minority, or special education students to have higher value-added scores than equally effective teachers who work with higher-achieving, less risky populations.
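As an illustration only, and not the specific models examined in the brief, the following Python sketch shows the logic of a "simple" value-added estimate of the kind the highlights warn about: predict each student's current score from prior achievement alone, then credit (or blame) each teacher with the average gap between her students' actual and predicted scores. All data, names, and parameter values here are hypothetical.

# Illustrative sketch of a simple value-added calculation; all data are simulated.
import numpy as np

rng = np.random.default_rng(1)

n_students, n_teachers = 1000, 50
teacher = rng.integers(0, n_teachers, n_students)     # which teacher each student has
prior_score = rng.normal(0, 1, n_students)            # last year's test score
teacher_effect = rng.normal(0, 0.3, n_teachers)       # hypothetical true teacher effects

# Current score depends on prior achievement, the teacher, and unexplained noise.
score = 0.7 * prior_score + teacher_effect[teacher] + rng.normal(0, 0.5, n_students)

# Step 1: predict current scores from prior scores only (intercept + slope, least squares).
X = np.column_stack([np.ones(n_students), prior_score])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
residual = score - X @ beta

# Step 2: a teacher's value-added estimate is the mean residual of her students.
va_estimate = np.array([residual[teacher == t].mean() for t in range(n_teachers)])

print("Correlation of estimates with true effects:",
      round(np.corrcoef(va_estimate, teacher_effect)[0, 1], 2))

In this toy setup the only classroom difference is random assignment, so the estimates track the true effects reasonably well; the brief's concern is that in real schools students are not randomly assigned, so whatever the model fails to control for gets folded into the teacher's score.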

In this brief, we discuss what is and is not known about how well value-added measures level the playing field for teachers by controlling for student characteristics. We first discuss the results of empirical explorations. We then address outstanding questions and the challenges to answering them with empirical data. Finally, we discuss the implications of these findings for teacher evaluations and the actions that may be based on them.

[readon2 url="http://www.carnegieknowledgenetwork.org/briefs/value-added/level-playing-field/"]Continue reading...[/readon2]