
How Recent Education Reforms Undermine Local School Governance


Local control has historically been a prominent principle in education policymaking and governance. Culminating with the passage of No Child Left Behind (NCLB), however, the politics of education have been nationalized to an unprecedented degree, and local control has all but disappeared as a principle framing education policymaking.

This brief examines what the eclipse of local control means for our democracy. It distinguishes two dimensions of democracy that are at issue—democratic policymaking and democratic education—and concludes that the effect of NCLB has been to frustrate our democracy along both of these dimensions.


An Open Letter to Ohio Women

Playing fair and playing by the rules are two of the most important lessons we teach our children. Unfortunately, Ohio politicians don’t want to play fair and they want to make their own rules. The system is rigged to allow the majority party to draw Statehouse and Congressional district lines to protect their own seats and their political party. Drawing district lines that determine who gets elected is how the politicians hold on to their power. In effect, they have turned our government from “We the People” into “We the Politicians”.

Passage of State Issue 2 will establish a system that takes the power away from politicians and gives good, decent people who want to fix our problems a real chance to compete against career politicians and win. We all want an impartial process AND WE CAN MAKE IT HAPPEN! The choices we make on November 6 will have a profound effect on the lives of our children and grandchildren.

Politicians will come and go, but the passage of State Issue 2 will help ensure that neither party can unfairly dominate state politics. When elections are fair and balanced the people of Ohio win.

In this election, you will have an opportunity to take a stand and vote YES on Issue 2. The system that decides who our elected officials are should be open to the public, transparent and without partisan manipulation.

As women, one a Republican and one a Democrat, we invite you to unite with us around issues of fairness and accountability. There is much wrong with politics but how we choose our elected officials should not be one of those wrongs. We can fix this problem once and for all.

Collectively, we must stand up and be heard. We must do this for our communities, our children, our values and our future. We have the chance to make a big difference in this election. Not in one politician’s life–but in the lives of all Ohioans.

Please help us by talking with your friends and neighbors about this important issue and share this message on Facebook, Twitter and your other social networks. To volunteer or learn how you can become more engaged on this issue, please email women@votersfirstohio.com and a Voters First representative will get back with you right away.

Leave a legacy. Vote for fairness, vote for our future, and vote YES on ISSUE 2.

Sincerely,
Joan Lawrence
Former Member Ohio House of Representatives
League of Women Voters of Ohio, member since 1957

Frances Strickland
Former First Lady, State of Ohio

Shaming teachers

The efforts by corporate education reformers to shame teachers by publishing value-added scores and evaluations are coming under mounting pressure. First, Bill Gates penned an op-ed in the NYT titled "Shame Is Not the Solution"; now come two new pieces. The first is research from the National Education Policy Center, which finds the LA Times' controversial effort to shame California's teachers was grossly error-ridden.

In its second attempt to rank Los Angeles teachers based on “value-added” assessments derived from students’ standardized test scores, the Los Angeles Times has still produced unreliable information that cannot be used for the purpose the newspaper intends, according to new research released today by the National Education Policy Center, housed at the University of Colorado Boulder.

Dr. Catherine Durso of the University of Denver studied the newspaper’s 2011 rankings of teachers and found that they rely on data yielding results that are unstable from year to year. Additionally, Durso found that the value-added assessment model used by the Times can easily impute to teachers effects that may in fact result from outside factors, such as a student’s poverty level or the neighborhood in which he or she lives.

“The effect estimate for each teacher cannot be taken at face value,” Durso writes. Instead, each teacher’s effect estimate includes a large “error band” that reflects the probable range of scores for a teacher under the assessment system.

“The error band . . . for many teachers is larger than the entire range of scores from the ‘less effective’ to ‘more effective’ designations provided by the LA Times,” Durso writes. As a consequence, the so-called teacher-linked effect for individual teachers “is also unstable over time,” she continues.
[...]
These failings have rendered the Times’ rankings not merely useless, but potentially harmful, according to Alex Molnar, NEPC’s publications director and a research professor at the University of Colorado Boulder.

“The Los Angeles Times has added no value to the discussion of how best to identify and retain the highest-quality teachers for our nation’s children,” Molnar says. “Indeed, it has made things worse. Based on this flawed use of data, parents are enticed into thinking their children’s teachers are either wonderful or terrible.”

“The Los Angeles Times editors and reporters either knew or should have known that their reporting was based on a social science tool that cannot validly or reliably do what they set out to quantify,” Molnar said. “Yet in their ignorance or arrogance they used it anyway, to the detriment of children, teachers, and parents.”

Their full report can be read here. Meanwhile New York, which has long been at the cutting edge of corporate ed reform efforts, has passed legislation that would eliminate this kind of teacher shaming.

Senate Republicans agreed to take up Cuomo’s bill on the final day of the session. The bill will make public all teacher evaluations, without names attached. Parents would then be able to obtain the specific evaluations of their own child’s teacher. Assembly Democrats had already agreed to pass it. Senate Majority Leader Dean Skelos says it’s a reasonable compromise.

“It strikes a good balance between parents’ right to know and some form of confidentiality,” Skelos said. Some GOP Senators were concerned that the bill would inadvertently result in the disclosure of the identities of teachers in small rural schools.

Senate Education Chair John Flanagan calls it a “work in progress,” and says the message of intent accompanying the bill will attempt to make clear the need to protect teacher privacy. “I’m hoping that if you’re in a small school and they release data by class, subject and grade that there’s some type of interpretation to protect people’s privacy,” said Flanagan.

The Ohio legislature should pass similar legislation.

Politics and Education Don't Mix

Governors and presidents are no better suited to run schools than they are to run construction sites, and it's time our education system reflected that fact.

A central flaw of corporate paradigms, as is often noted in popular culture, is the mind-numbing and dehumanizing effect of bureaucracy. Sometimes we are horrified and sometimes we laugh, but arguments for or against the free market may be misguided if we fail to address bureaucracy's corrosive role in the business model.

Current claims about private, public, or charter schools in the education reform movement, which has its roots in the mid-nineteenth century, may also be masking a much more important call to confront and even dismantle the bureaucracy that currently cripples universal public education in the U.S. "Successful teaching and good school cultures don't have a formula," argued legal reformer Philip K. Howard earlier in this series, "but they have a necessary condition: teachers and principals must feel free to act on their best instincts....This is why we must bulldoze school bureaucracy."

Bureaucracy, however, remains an abstraction and serves as little more than a convenient and popular target for ridicule -- unless we unpack what actions within bureaucracy are the sources for many of the persistent failures we associate erroneously with public education as an institution. Bureaucracy fails, in part, because it honors leadership as a primary quality over expertise, commits to ideological solutions without identifying and clarifying problems first, and repeats the same reforms over and over while expecting different results: our standards/testing model is more than a century old.

Public education is by necessity an extension of our political system, resulting in schools being reduced to vehicles for implementing political mandates. For example, during the past thirty years, education has become federalized through dynamics both indirect ("A Nation at Risk" spurring state-based accountability systems) and direct (No Child Left Behind and Race to the Top).

As government policy and practice, bureaucracy is unavoidable, of course. But the central flaw in the need for structure and hierarchy is that politics prefers leadership characteristics above expertise. No politician can possibly have the expertise and experience needed in all the many areas a leader must address (notably in roles such as governor and president). But during the "accountability era" in education of the past three decades, the direct role of governors and presidents as related to education has increased dramatically--often with education as a central plank in their campaigns.

One distinct flaw in that development has been a trickle-down effect reaching from presidents and governors to state superintendents of education and school board chairs and members: people who have no or very little experience or expertise as educators or scholars attain leadership positions responsible for forming and implementing education policy.

The faces and voices currently leading the education reform movement in the U.S. are appointees and self-proclaimed reformers who, while often well-meaning, lack significant expertise or experience in education: Secretary of Education Arne Duncan, billionaire Bill Gates, Michelle Rhee (whose entrance to education includes the alternative route of Teach for America and only a few years in the classroom), and Sal Khan, for example.

[readon2 url="http://www.theatlantic.com/national/archive/2012/04/politics-and-education-dont-mix/256303/"]Continue reading[/readon2]

NYC schools abandon failed merit pay

From the NYT: just as the largest school district in the country abandons teacher merit pay because it didn't work, Ohio is about to adopt it.

A New York City program that distributed $56 million in performance bonuses to teachers and other school staff members over the last three years will be permanently discontinued, the city Department of Education said on Sunday.

The decision was made in light of a study that found the bonuses had no positive effect on either student performance or teachers’ attitudes toward their jobs.

Study after study finds that student test scores do not improve when teachers are compensated with bonuses and merit pay. Instead, what we are seeing as these corporate education reforms spread is more corporate-type behavior, such as pressure to cheat.

Some Hows and Whys of Value-Added Modeling

We thought it would be useful to provide a quick primer on what value-added actually is, and how it is calculated, in somewhat explainable terms. This is a good explanation via the American Statistical Association.

The principal claim made by the developers of VAM—William L. Sanders, Arnold M. Saxton, and Sandra P. Horn—is that through the analysis of changes in student test scores from one year to the next, they can objectively isolate the contributions of teachers and schools to student learning. If this claim proves to be true, VAM could become a powerful tool for both teachers’ professional development and teachers’ evaluation.

This approach represents an important divergence from the path specified by the “adequate yearly progress” provisions of the No Child Left Behind Act, for it focuses on the gain each student makes, rather than the proportion of students who attain some particular standard. VAM’s attention to individual students’ longitudinal data to measure their progress seems filled with common sense and fairness. There are many models that fall under the general heading of VAM. One of the most widely used was developed and programmed by William Sanders and his colleagues. It was developed for use in Tennessee and has been in place there for more than a decade under the name Tennessee Value-Added Assessment System. It also has been called the “layered model” because of the way each of its annual component pieces is layered on top of another.

The model begins by representing a student’s test score in the first year, y1, as the sum of the district’s average for that grade, subject, and year, say μ1; the incremental contribution of the teacher, say θ1; and systematic and unsystematic errors, say ε1. When these pieces are put together, we obtain a simple equation for the first year:

y1 = μ1 + θ1 + ε1 (1)
or
Student’s score (1) = district average (1) + teacher effect (1) + error (1)

There are similar equations for the second, third, fourth, and fifth years, and it is instructive to look at the second year’s equation, which looks like the first except it contains a term for the teacher’s effect from the previous year:

y2 = μ2 + θ1 + θ2 + ε2 (2)
or
Student’s score (2) = district average (2) + teacher effect (1) + teacher effect (2) + error (2)

To assess the value added (y2 – y1), we merely subtract equation (1) from equation (2) and note that the effect of the teacher from the first year has conveniently dropped out. While this is statistically convenient, because it leaves us with fewer parameters to estimate, does it make sense? Some have argued that although a teacher’s effect lingers beyond the year the student had her/him, that effect is likely to shrink with time.
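To make the algebra concrete, here is a small simulation of equations (1) and (2). All of the numbers below (district averages, teacher effects, error spread) are invented for illustration; this is a sketch of the layered model's logic, not the actual EVAAS implementation.

```python
import random

random.seed(1)

# Hypothetical parameters (invented for illustration).
mu = {1: 50.0, 2: 55.0}      # district average for years 1 and 2
theta = {1: 2.0, 2: -1.0}    # teacher effects for years 1 and 2

def score_year1():
    # Equation (1): y1 = mu1 + theta1 + e1
    return mu[1] + theta[1] + random.gauss(0, 3)

def score_year2():
    # Equation (2): y2 = mu2 + theta1 + theta2 + e2
    # (the year-1 teacher effect "layers" into year 2)
    return mu[2] + theta[1] + theta[2] + random.gauss(0, 3)

# Average the gain (y2 - y1) over many simulated students. Because theta1
# appears in both equations, it cancels in the subtraction, and the mean
# gain converges to (mu2 - mu1) + theta2 = 5.0 + (-1.0), i.e. about 4.0.
n = 100_000
mean_gain = sum(score_year2() - score_year1() for _ in range(n)) / n
print(round(mean_gain, 1))
```

The cancellation the text describes is visible here: the year-1 teacher effect drops out of the mean gain entirely, which is exactly why the layered model needs fewer parameters, and also why it cannot capture a year-1 effect that fades over time.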

Although such a model is less convenient to estimate, it mirrors reality more closely. But, not surprisingly, the estimate of the size of a teacher’s effect varies depending on the choice of model. How large this choice-of-model effect is, relative to the size of the “teacher effect,” is yet to be determined. Obviously, if it is large, it diminishes the practicality of the methodology.

Recent research from the RAND Corporation, comparing the layered model with one that estimates the size of the change in a teacher’s effect from one year to the next, suggests that almost half of the teacher effect is accounted for by the choice of model.

One cannot partition student effect from teacher effect without information about how the same students perform with other teachers. In practice, using longitudinal data and obtaining measures of student performance in other years can resolve this issue. The decade of Tennessee’s experience with VAM led to a requirement of at least three years’ data. This requirement raises concerns when (i) data are missing and (ii) the meaning of what is being tested changes with time.

The Ohio Department of Education has papers, here, that discuss the technical details of how VAM is done in Ohio.

BattelleforKids.org provided us with this information:

Here's a brief explanation of both analyses used in Ohio. Both are from the EVAAS methodology produced by SAS:

Value-added analysis is produced in two different ways in Ohio:
1. MRM analysis (Multivariate Response Model, also known as the mean gain approach); and
2. URM analysis (Univariate Response Model, also known as the predicted mean approach).

The MRM analysis is used for the Ohio value-added results in grades 4-8 math and reading. It can only be used when tests are uniformly administered in consecutive grades. Through this approach, district, school, and teacher level results are compared to a growth standard. The OAA assessments provide the primary data for this approach.

The URM analysis is used for expanded value-added results. Currently this analysis is provided through the Battelle for Kids (BFK) SOAR and Ohio Value-Added High Schools (OVAHS) projects. The URM analysis is used when tests are not given in consecutive grades. This approach "pools" together districts that use the same sequence of particular norm-referenced tests. In the URM analysis, prior test data are used to produce a prediction of how a student is likely to score on a particular test, given the average experience in that school. For example, prior OAA and TerraNova results are used as predictors for the ACT end-of-course exams. Differences between students' predictions and their actual/observed scores are used to produce school and teacher effects. The URM analysis is normalized each year based on the performance of other schools in the pool that year. This approach means that a comparison is made to the growth of the average school or teacher for that grade/subject in the pool.
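The "predicted mean" idea behind the URM can be sketched in a few lines of Python. Everything below is invented for illustration: the tiny data set, the use of a single prior-year score as the predictor, and the plain least-squares fit. The actual SAS EVAAS model pools many prior tests and uses a far more elaborate estimation procedure; this only shows the core logic of predict-then-average-the-residuals.

```python
# Hypothetical URM-style sketch: predict each student's current score from a
# prior score using a least-squares line fitted on the whole pool, then
# average the (actual - predicted) residuals by teacher.

def fit_line(xs, ys):
    # Ordinary least squares for y = a + b*x.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Invented (prior score, current score, teacher) records for a small pool.
data = [
    (40, 46, "A"), (50, 57, "A"), (60, 66, "A"),
    (45, 49, "B"), (55, 58, "B"), (65, 69, "B"),
]

prior = [d[0] for d in data]
current = [d[1] for d in data]
a, b = fit_line(prior, current)

# Residual = actual score minus the score predicted from prior performance.
residuals = {}
for x, y, t in data:
    residuals.setdefault(t, []).append(y - (a + b * x))

# The "teacher effect" is the mean residual for each teacher's students.
teacher_effect = {t: sum(r) / len(r) for t, r in residuals.items()}
print(teacher_effect)  # teacher A comes out above the pool average, B below
```

Because the residuals of a least-squares fit average to zero across the whole pool, the effects are relative by construction: a teacher can only look "effective" compared to the average teacher in that year's pool, which is exactly the normalization the paragraph above describes.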