
Rhee-ality check

You know a report titled "Rhee-ality check: the Failure of Students First" is going to be interesting, and indeed it is, opening with

Since its launch two years ago, StudentsFirst has made bold predictions about the organization’s impact on education policy, and what it will accomplish across the country.

This is the first report of its kind to examine whether this education advocacy group founded by Michelle Rhee has made progress toward its key goals. Gathered here for the first time is a body of evidence, data, and analysis showing that Students First has given its donors and supporters a poor return on their investment.

StudentsFirst has failed to live up to expectations in four main areas: fundraising, leadership, electoral politics, and grassroots organizing. These failures are described in detail below. A national education advocacy group with such a track record of ineffectiveness is not what Rhee’s investors signed up for.

Here's the full report

Rhee-ality check: the Failure of Students First

This report seems to fit in with a previous post, "THE END OF MICHELLE RHEE?", given how ineffective the organization she has created truly is.

Walmart gives $8 million to StudentsFirst

If you needed yet more proof that Michelle Rhee's StudentsFirst is nothing more than an anti-tax group, consider that Walmart has just given her $8 million to continue her corporate education agenda.

A foundation associated with the Wal-Mart family fortune has expanded its support for the education advocacy group run by former District of Columbia schools chancellor Michelle Rhee.

The Walton Family Foundation announced Tuesday an $8-million grant over two years to StudentsFirst, which is headquartered in Sacramento but has operations in 18 states.
[...]
The Walton funding is to support such activities as staff costs, lobbying and research. It's not for direct campaign donations, which are made from a separate arm of StudentsFirst.

Education News for 01-29-2013

State Education News

  • Ohio business leaders urge education reform (Canton Repository)
  • A group of major Ohio business leaders is urging Gov. John Kasich to push hard on educational changes…Read more...

  • Bucknell inflated its SAT scores (Columbus Dispatch)
  • Bucknell University has disclosed that for several years it reported inflated SAT scores for incoming freshmen, making the private liberal-arts school in Pennsylvania…Read more...

  • Ohio Governor John Kasich sets online 'town hall' on education (WEWS)
  • Ohio's governor is planning an online "town hall" session…Read more...

Local Education News

  • County Commissioners OK part-time deputies for schools (Chillicothe Gazette)
  • Within two weeks, a pair of Ross County sheriff’s deputies will begin making their rounds of the county’s schools…Read more...

  • Defeating the math monster (Cincinnati Enquirer)
  • The move to high school isn’t just for ninth-graders anymore. Next year, in an effort to bolster academics – especially math – Cincinnati Public Schools will expand all of its high schools to house grades 7-12 instead of the 9-12 model…Read more...

  • Girl’s suicide spurs second-guessing (Columbus Dispatch)
  • Hailey Petee hated her glasses, the ones with lenses so thick that they distorted the look of her pretty blue-gray eyes…Read more...

  • Centerville to make $2.6M in cuts (Dayton Daily News)
  • Centerville City Schools, which saw its November levy fail by less than a percentage point, approved on Monday night $2.6 million in cuts from its budget next school year…Read more...

  • Delaware County Schools Considering Changes To Safety Policies (WBNS)
  • The Delaware City Schools Board of Education plans to join with city council Monday night to host an annual meeting. One of the topics will be school safety…Read more...

  • Jurors in T.J. Lane Chardon shooting case to be sequestered (Willoughby News Herald)
  • Jurors selected in the Thomas Lane III aggravated murder trial will be sequestered during deliberations, Geauga County Common Pleas Judge David L. Fuhry ruled in an opinion made public Monday…Read more...

  • Group calls board member’s comments racist (Youngstown Vindicator)
  • A community group is taking offense at what it calls “racist” comments made by a veteran city school board member and threatens sanctions if the school board doesn’t take corrective action…Read more...

Editorial

  • Put the money on early education (Akron Beacon Journal)
  • I was watching a gymnastics competition on television, admiring the strength of those tight and incredibly lithe bodies when something caught my ear from the chatter of the commentators…Read more...

  • Unfunded guarantee (Akron Beacon Journal)
  • On Thursday, John Kasich plans to unveil his proposal for revamping the way the state pays for public schools. He will be the fourth governor to take a stab at the problem since the Ohio Supreme Court found the funding formula…Read more...

  • Ohio school board member's Facebook fiasco warrants an apology (Cleveland Plain Dealer)
  • Gov. John Kasich ought not to fire Debe Terhar, president of the State Board of Education, despite her offensive Facebook posting quoting…Read more...

  • A lesson on life (Toledo Blade)
  • Usually, teachers teach and students learn in a classroom. But kids can also teach adults a thing or two. Take Perrysburg Junior High School seventh-grader Michael Skotynsky…Read more...

  • Liberty plan has already been charted (Warren Tribune Chronicle)
  • A local state audit that showed a $12,720 overpayment to former Liberty schools treasurer Tracey Obermiyer is peanuts…Read more...

The Science of Value-Added Evaluation

"A value-added analysis constitutes a series of personal, high-stakes experiments conducted under extremely uncontrolled conditions".

If drug experiments were conducted like VAM, we might all have three legs, or worse.

Value-added teacher evaluation has been extensively criticized and strongly defended, but less frequently examined from a dispassionate scientific perspective. Among the value-added movement's most fervent advocates is a respected scientific school of thought that believes reliable causal conclusions can be teased out of huge data sets by economists or statisticians using sophisticated statistical models that control for extraneous factors.

Another scientific school of thought, especially prevalent in medical research, holds that the most reliable method for arriving at defensible causal conclusions involves conducting randomized controlled trials, or RCTs, in which (a) individuals are premeasured on an outcome, (b) randomly assigned to receive different treatments, and (c) measured again to ascertain if changes in the outcome differed based upon the treatments received.

The purpose of this brief essay is not to argue the pros and cons of the two approaches, but to frame value-added teacher evaluation from the latter, experimental perspective. For conceptually, what else is an evaluation of perhaps 500 4th grade teachers in a moderate-size urban school district but 500 high-stakes individual experiments? Are not students premeasured, assigned to receive a particular intervention (the teacher), and measured again to see which teachers were the more (or less) efficacious?

Granted, a number of structural differences exist between a medical randomized controlled trial and a districtwide value-added teacher evaluation. Medical trials normally employ only one intervention instead of 500, but the basic logic is the same. Each medical RCT is also privy to its own comparison group, while individual teachers share a common one (consisting of the entire district's average 4th grade results).

From a methodological perspective, however, both medical and teacher-evaluation trials are designed to generate causal conclusions: namely, that the intervention was statistically superior to the comparison group, statistically inferior, or just the same. But a degree in statistics shouldn't be required to recognize that an individual medical experiment is designed to produce a more defensible causal conclusion than the collected assortment of 500 teacher-evaluation experiments.

How? Let us count the ways:

  • Random assignment is considered the gold standard in medical research because it helps to ensure that the participants in different experimental groups are initially equivalent and therefore have the same propensity to change relative to a specified variable. In controlled clinical trials, the process involves a rigidly prescribed computerized procedure whereby every participant is afforded an equal chance of receiving any given treatment. Public school students cannot be randomly assigned to teachers between schools for logistical reasons and are seldom if ever truly randomly assigned within schools because of (a) individual parent requests for a given teacher; (b) professional judgments regarding which teachers might benefit certain types of students; (c) grouping of classrooms by ability level; and (d) other, often unknown, possibly idiosyncratic reasons. Suffice it to say that no medical trial would ever be published in any reputable journal (or reputable newspaper) which assigned its patients in the haphazard manner in which students are assigned to teachers at the beginning of a school year.
  • Medical experiments are designed to purposefully minimize the occurrence of extraneous events that might potentially influence changes on the outcome variable. (In drug trials, for example, it is customary to ensure that only the experimental drug is received by the intervention group, only the placebo is received by the comparison group, and no auxiliary treatments are received by either.) However, no comparable procedural control is attempted in a value-added teacher-evaluation experiment (either for the current year or for prior student performance) so any student assigned to any teacher can receive auxiliary tutoring, be helped at home, team-taught, or subjected to any number of naturally occurring positive or disruptive learning experiences.
  • When medical trials are reported in the scientific literature, their statistical analysis involves only the patients assigned to an intervention and its comparison group (which could quite conceivably constitute a comparison between two groups of 30 individuals). This means that statistical significance is computed to facilitate a single causal conclusion based upon a total of 60 observations. The statistical analyses reported for a teacher evaluation, on the other hand, would be reported in terms of all 500 combined experiments, which in this example would constitute a total of 15,000 observations (or 30 students times 500 teachers). The 500 causal conclusions published in the newspaper (or on a school district website), on the other hand, are based upon separate contrasts of 500 "treatment groups" (each composed of changes in outcomes for a single teacher's 30 students) versus essentially the same "comparison group."
  • Explicit guidelines exist for the reporting of medical experiments, such as the (a) specification of how many observations were lost between the beginning and the end of the experiment (which is seldom done in value-added experiments, but would entail reporting student transfers, dropouts, missing test data, scoring errors, improperly marked test sheets, clerical errors resulting in incorrect class lists, and so forth for each teacher); and (b) whether statistical significance was obtained—which is impractical for each teacher in a value-added experiment since the reporting of so many individual results would violate multiple statistical principles.
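
The first point above, about non-random assignment, can be made concrete with a toy simulation. This is an illustrative sketch, not any district's actual value-added model: it assumes every teacher is equally effective, gives each student a latent ability that also drives out-of-school score growth, and compares the spread of naive per-teacher "value-added" estimates under random assignment versus ability-grouped (tracked) classrooms. All names and numbers here are invented for illustration.

```python
import random
import statistics

random.seed(1)

N_TEACHERS, CLASS_SIZE = 500, 30

def value_added_spread(random_assignment):
    """Spread of naive per-teacher 'value-added' estimates when every
    teacher is equally effective, under a given assignment scheme."""
    # Each student has a latent ability that also drives out-of-school growth.
    abilities = [random.gauss(0, 1) for _ in range(N_TEACHERS * CLASS_SIZE)]
    if random_assignment:
        random.shuffle(abilities)   # RCT-style random assignment
    else:
        abilities.sort()            # tracking: classrooms grouped by ability
    gains = []
    for t in range(N_TEACHERS):
        cls = abilities[t * CLASS_SIZE:(t + 1) * CLASS_SIZE]
        # Observed gain = ability-linked growth + test noise; the true
        # teacher effect is zero for everyone in this toy model.
        gains.append(statistics.mean(0.3 * a + random.gauss(0, 0.5) for a in cls))
    return statistics.stdev(gains)

spread_random = value_added_spread(True)
spread_tracked = value_added_spread(False)
print(spread_random, spread_tracked)
```

Under this toy setup, tracked classrooms produce a much wider spread of "value-added" scores even though no teacher is actually better than any other: the estimate picks up who a teacher was assigned, not what the teacher did.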

[readon2 url="http://www.edweek.org/ew/articles/2013/01/16/17bausell.h32.html"]Continue reading...[/readon2]

StudentsFirst is an anti-tax group

StudentsFirst, the lobbying organization run by Michelle Rhee, presents itself as an education reform organization, but a careful look at its agenda makes clear that it is really just another extreme right-wing anti-tax group.

Their goal is to transfer as much money as possible from public schools to private enterprise, while eroding public schools themselves. Let's look at the clear evidence.

The NYT reports

In just a few short years, state legislatures and education agencies across the country have sought to transform American public education by passing a series of laws and policies overhauling teacher tenure, introducing the use of standardized test scores in performance evaluations and expanding charter schools.

Such policies are among those pushed by StudentsFirst, the advocacy group led by Michelle A. Rhee, the former schools chancellor in Washington. Ms. Rhee has generated debate in education circles for aggressive pursuit of her agenda and the financing of political candidates who support it.

In a report issued Monday, StudentsFirst ranks states based on how closely they follow the group’s platform, looking at policies related not only to tenure and evaluations but also to pensions and the governance of school districts. The group uses the classic academic grading system, awarding states A to F ratings.

With no states receiving an A, two states receiving B-minuses and 12 states branded with an F, StudentsFirst would seem to be building a reputation as a harsh grader.

Ohio received a C-. StateImpactOhio talked to StudentsFirst about this report.

You mentioned that we’re a C but there are things in action that – according to your standards – will improve education in Ohio. What are those things?

A: Currently we have a system where regardless of how a child performs, teachers’ evaluation, pay, performance is pretty much divorced from the students’ outcomes. When you evaluate teachers you have to factor in student performance in those evaluations, and so Ohio has now passed legislation saying that student performance has to play a role in terms of teacher pay and promotion. We think it needs to go further, we think tenure decisions need to be based on student performance.

This comes as no surprise. StudentsFirst supported SB5 which had similar goals. What should be eye opening is this policy goal itself. If the goal is to put students first, why would this organization choose to pursue a failed policy?

In Washington DC where Michelle Rhee was head of the schools, she implemented this system, and as we reported last year it has been an unmitigated disaster.

Washington DC has purged a vast number of experienced teachers in pursuit of Michelle Rhee's policies, and the results have been terrible for students.

D.C. public schools have the largest achievement gap between black and white students among the nation’s major urban school systems, a distinction laid bare in a federal study released Wednesday.

The District also has the widest achievement gap between white and Hispanic students, the study found, compared with results from other large systems and the national average.

The study is based on the 2011 National Assessment of Educational Progress, federal reading and math exams taken this year by fourth- and eighth-graders across the country.

The country already has a teacher attrition problem. We need policies that will retain experienced teachers, not drive them from the profession faster.

In what other policy arena would a group be taken seriously arguing for policies that eliminate experience? It is clear, then, that StudentsFirst's aim is to reduce the cost of teachers in order to keep taxes low and siphon the savings to private enterprise.

Furthermore, the recent 2012 elections demonstrated that ideology, not putting students first, is the main goal of Rhee's organization.

Rhee makes a point of applauding “leaders in both parties and across the ideological spectrum” because her own political success — and the success of school reform — depends upon the bipartisan reputation she has fashioned. But 90 of the 105 candidates backed by StudentsFirst were Republicans, including Tea Party enthusiasts

Many of those endorsed candidates include legislators who cut Ohio public schools funding by $1.8 billion, a move decried by the majority of public school supporters, yet one on which StudentsFirst remained silent.

When you separate the rhetoric from the results and the goals, it becomes far easier to understand StudentsFirst not as an education reform group but as a right-wing anti-tax group, something all the available evidence demonstrates.

How Should Educators Interpret Value-Added Scores?

Via

Highlights

  • Each teacher, in principle, possesses one true value-added score each year, but we never see that "true" score. Instead, we see a single estimate within a range of plausible scores.
  • The range of plausible value-added scores, the confidence interval, can overlap considerably for many teachers. Consequently, many teachers cannot be readily distinguished from one another with respect to their true value-added scores.
  • Two conditions would enable us to achieve value-added estimates with high reliability: first, if teachers' value-added measurements were more precise, and second, if teachers’ true value-added scores varied more dramatically than they do.
  • Two kinds of errors of interpretation are possible when classifying teachers based on value-added: a) “false identifications” of teachers who are actually above a certain percentile but who are mistakenly classified as below it; and b) “false non-identifications” of teachers who are actually below a certain percentile but who are classified as above it. Falsely identifying teachers as being below a threshold poses risk to teachers, but failing to identify teachers who are truly ineffective poses risks to students.
  • Districts can conduct a procedure to identify how uncertainty about true value-added scores contributes to potential errors of classification. First, specify the group of teachers you wish to identify. Then, specify the fraction of false identifications you are willing to tolerate. Finally, specify the likely correlation between value-added score this year and next year. In most real-world settings, the degree of uncertainty will lead to considerable rates of misclassification of teachers.
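
The trade-off between false identifications and false non-identifications can be sketched numerically. The following is a toy simulation, not the brief's actual procedure: it assumes an illustrative year-to-year reliability of 0.4, draws true scores and noisy observed scores consistent with that reliability, flags the bottom 20% on the observed scale, and asks what share of flagged teachers are not truly in the bottom 20%. All numbers are assumptions chosen for illustration.

```python
import random

random.seed(2)

N = 10_000          # simulated teachers
RELIABILITY = 0.4   # assumed, illustrative reliability of value-added scores

# Choose noise so that reliability = var(true) / (var(true) + var(noise)).
true_sd = 1.0
noise_sd = true_sd * ((1 - RELIABILITY) / RELIABILITY) ** 0.5

true = [random.gauss(0, true_sd) for _ in range(N)]
observed = [t + random.gauss(0, noise_sd) for t in true]

# Classify the bottom 20% on each scale.
cut_true = sorted(true)[int(0.2 * N)]
cut_obs = sorted(observed)[int(0.2 * N)]

flagged = [i for i in range(N) if observed[i] < cut_obs]
false_identifications = sum(true[i] >= cut_true for i in flagged) / len(flagged)
print(f"share of flagged teachers not truly in the bottom 20%: "
      f"{false_identifications:.0%}")
```

At this assumed reliability, a large fraction of the teachers flagged as "bottom 20%" are not truly there, which is the misclassification problem the bullet points above describe.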

Introduction

A teacher's value-added score is intended to convey how much that teacher has contributed to student learning in a particular subject in a particular year. Different school districts define and compute value-added scores in different ways. But all of them share the idea that teachers who are particularly successful will help their students make large learning gains, that these gains can be measured by students' performance on achievement tests, and that the value-added score isolates the teacher's contribution to these gains.

A variety of people may see value-added estimates, and each group may use them for different purposes. Teachers themselves may want to compare their scores with those of others and use them to improve their work. Administrators may use them to make decisions about teaching assignments, professional development, pay, or promotion. Parents, if they see the scores, may use them to request particular teachers for their children. And, finally, researchers may use the estimates for studies on improving instruction.

Using value-added scores in any of these ways can be controversial. Some people doubt the validity of the achievement tests on which the scores are based, some question the emphasis on test scores to begin with, and others challenge the very idea that student learning gains reflect how well teachers do their jobs.

Our purpose is not to settle these controversies, but, rather, to answer a more limited, but essential, question: How might educators reasonably interpret value-added scores? Social science has yet to come up with a perfect measure of teacher effectiveness, so anyone who makes decisions on the basis of value-added estimates will be doing so in the midst of uncertainty. Making choices in the face of doubt is hardly unusual – we routinely contend with projected weather forecasts, financial predictions, medical diagnoses, and election polls. But as in these other areas, in order to sensibly interpret value-added scores, it is important to do two things: understand the sources of uncertainty and quantify its extent. Our aim is to identify possible errors of interpretation, to consider how likely these errors are to arise, and to help educators assess how consequential they are for different decisions.

We'll begin by asking how value-added scores are defined and computed. Next, we'll consider two sources of error: statistical bias and statistical imprecision.
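
To make imprecision concrete before turning to those sources of error, here is a minimal sketch, with invented numbers that stand in for one class's test-score gains, of the standard error and a 95% confidence interval around a single teacher's mean gain with a class of 30:

```python
import math
import random
import statistics

random.seed(3)

# Invented gains for one class of 30 students (mean 5, sd 10 test-score points).
gains = [random.gauss(5.0, 10.0) for _ in range(30)]

mean = statistics.mean(gains)
se = statistics.stdev(gains) / math.sqrt(len(gains))   # standard error of the mean
lo, hi = mean - 1.96 * se, mean + 1.96 * se            # approximate 95% CI
print(f"estimated value-added: {mean:.1f}, 95% CI [{lo:.1f}, {hi:.1f}]")
```

With only 30 students, the interval is several points wide, so intervals for many teachers will overlap, which is exactly the indistinguishability problem raised in the Highlights above.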

[readon2 url="http://www.carnegieknowledgenetwork.org/briefs/value-added/interpreting-value-added/"]Continue reading...[/readon2]