
Primary election results quick snapshot

We'll be bringing more in-depth coverage of the results from yesterday's election. In the meantime, the Dispatch reports - Voters say yes in six of 11 districts

The Cleveland Plain Dealer reports - Parma, Garfield Heights voters pass school taxes, but Cuyahoga County voters defeat all other school issues

At the other end of the state, the Cincinnati Enquirer reports - Little Miami loses very close vote; Loveland, Norwood only winners out of seven

The Toledo Blade reports - Maumee levy fails; Sylvania's approved
  • Maumee City Schools - Failed
  • Sylvania City Schools - Passed
  • Woodmore - Passed
  • Wauseon - Passed
  • Benton-Carroll-Salem - Failed
  • Patrick Henry - Failed
  • Clyde-Green Springs - Failed
From Akron, Ohio.com reports - Only Nordonia Hills faces defeat in county. Highland squeaks out win in Medina. Portage levies fall. Full area results at the links below.

Some Hows and Whys of Value Add Modelling

We thought it would be useful to provide a quick primer on what Value Add actually is, and how it is calculated, in somewhat explainable terms. This is a good explanation, via the American Statistical Association:

The principal claim made by the developers of VAM—William L. Sanders, Arnold M. Saxton, and Sandra P. Horn—is that through the analysis of changes in student test scores from one year to the next, they can objectively isolate the contributions of teachers and schools to student learning. If this claim proves to be true, VAM could become a powerful tool for both teachers’ professional development and teachers’ evaluation.

This approach represents an important divergence from the path specified by the “adequate yearly progress” provisions of the No Child Left Behind Act, for it focuses on the gain each student makes, rather than the proportion of students who attain some particular standard. VAM’s attention to individual students’ longitudinal data to measure their progress seems filled with common sense and fairness. There are many models that fall under the general heading of VAM. One of the most widely used was developed and programmed by William Sanders and his colleagues. It was developed for use in Tennessee and has been in place there for more than a decade under the name Tennessee Value-Added Assessment System. It also has been called the “layered model” because of the way each of its annual component pieces is layered on top of another.

The model begins by representing a student’s test score in the first year, y1, as the sum of the district’s average for that grade, subject, and year, say μ1; the incremental contribution of the teacher, say θ1; and systematic and unsystematic errors, say ε1. When these pieces are put together, we obtain a simple equation for the first year:

y1 = μ1 + θ1 + ε1    (1)
or
Student’s score (1) = district average (1) + teacher effect (1) + error (1)

There are similar equations for the second, third, fourth, and fifth years, and it is instructive to look at the second year’s equation, which looks like the first except it contains a term for the teacher’s effect from the previous year:

y2 = μ2 + θ1 + θ2 + ε2    (2)
or
Student’s score (2) = district average (2) + teacher effect (1) + teacher effect (2) + error (2)

To assess the value added (y2 – y1), we merely subtract equation (1) from equation (2) and note that the effect of the teacher from the first year has conveniently dropped out. While this is statistically convenient, because it leaves us with fewer parameters to estimate, does it make sense? Some have argued that although a teacher’s effect lingers beyond the year the student had her/him, that effect is likely to shrink with time.
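
To make this concrete, here is a minimal numeric sketch of equations (1) and (2) in Python. All of the numbers (district averages, teacher effects, error spread) are made up for illustration; the point is only that subtracting the year-one score from the year-two score removes the first-year teacher effect, exactly as the layered model intends.

```python
# A minimal sketch of the layered model in equations (1) and (2).
# All parameter values below are invented for illustration.
import random

random.seed(0)

mu1, mu2 = 400.0, 420.0      # district averages for years 1 and 2
theta1, theta2 = 5.0, -3.0   # (unknown) teacher effects for years 1 and 2

def simulate_student():
    eps1 = random.gauss(0, 10)               # error term, year 1
    eps2 = random.gauss(0, 10)               # error term, year 2
    y1 = mu1 + theta1 + eps1                 # equation (1)
    y2 = mu2 + theta1 + theta2 + eps2        # equation (2): year-1 effect carries over fully
    return y1, y2

students = [simulate_student() for _ in range(10000)]

# The gain y2 - y1 equals (mu2 - mu1) + theta2 + noise; theta1 has dropped out.
gains = [y2 - y1 for y1, y2 in students]
avg_gain = sum(gains) / len(gains)
print("average gain:", round(avg_gain, 1))                    # about 17 = 20 + (-3)
print("implied theta2:", round(avg_gain - (mu2 - mu1), 1))    # about -3, regardless of theta1
```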

Although such a decay model is less convenient to estimate, it mirrors reality more closely. But, not surprisingly, the estimate of the size of a teacher's effect varies depending on the choice of model. How large this choice-of-model effect is, relative to the size of the "teacher effect", is yet to be determined. Obviously, if it is large, it diminishes the practicality of the methodology.
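
For contrast, here is the same sketch under the assumption, discussed above, that the first-year effect fades. A hypothetical persistence factor alpha shrinks the carried-over effect; if the data really behave this way but the gain is still read the layered way, part of the faded year-one effect gets attributed to the year-two teacher, which is one illustration of how the choice of model moves the estimates.

```python
# Sketch of a decaying-persistence variant: the year-1 teacher effect is carried
# into year 2 only at a fraction alpha. All values are invented for illustration.
import random

random.seed(1)

mu1, mu2 = 400.0, 420.0
theta1, theta2 = 5.0, -3.0
alpha = 0.5                  # hypothetical persistence of the year-1 effect

def simulate_student():
    y1 = mu1 + theta1 + random.gauss(0, 10)
    y2 = mu2 + alpha * theta1 + theta2 + random.gauss(0, 10)   # decayed carry-over
    return y1, y2

gains = [y2 - y1 for y1, y2 in (simulate_student() for _ in range(100000))]
avg_gain = sum(gains) / len(gains)

# Reading the gain the layered way treats everything beyond (mu2 - mu1) as theta2.
layered_estimate = avg_gain - (mu2 - mu1)
print("true theta2:", theta2)                                  # -3.0
print("layered-style estimate:", round(layered_estimate, 1))   # about -5.5 = theta2 + (alpha - 1) * theta1
```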

Recent research from the RAND Corporation, which shifts from the layered model to one that estimates how a teacher's effect changes in size from one year to the next, suggests that almost half of the teacher effect is accounted for by the choice of model.

One cannot partition student effect from teacher effect without information about how the same students perform with other teachers. In practice, using longitudinal data and obtaining measures of student performance in other years can resolve this issue. The decade of Tennessee's experience with VAM led to a requirement of at least three years' data. This requirement raises concerns when (i) data are missing and (ii) the meaning of what is being tested changes over time.
The Ohio Department of Education has papers, linked here, that discuss the technical details of how VAM is done in Ohio.

BattelleforKids.org provided us with this information.
Here's a brief explanation of both analyses that are used in Ohio. Both are from the EVAAS methodology produced by SAS:

Value-added analysis is produced in two different ways in Ohio:
1. MRM analysis (Multivariate Response Model, also known as the mean gain approach); and
2. URM analysis (Univariate Response Model, also known as the predicted mean approach).

The MRM analysis is used for the Ohio value-added results in grades 4-8 math and reading. It can only be used when tests are uniformly administered in consecutive grades. Through this approach, district, school, and teacher level results are compared to a growth standard. The OAA assessments provide the primary data for this approach.
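
As a rough illustration of the mean gain idea only (the actual EVAAS MRM is a much more elaborate multivariate model), the sketch below compares a made-up cohort's average gain on a uniformly scaled test to a hypothetical growth standard.

```python
# Toy "mean gain" comparison. Scores and the growth standard are invented;
# this is not the EVAAS MRM computation, just the underlying intuition.
grade4_scores = [402, 415, 398, 430, 410]   # same students, prior grade
grade5_scores = [420, 428, 405, 455, 418]   # same students, current grade

growth_standard = 15.0                      # hypothetical expected gain for this grade/subject

mean_gain = sum(now - before for before, now in zip(grade4_scores, grade5_scores)) / len(grade4_scores)
value_added = mean_gain - growth_standard

print("mean gain:", mean_gain)                                     # 14.2
print("gain relative to growth standard:", round(value_added, 1))  # -0.8, slightly below standard
```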

The URM analysis is used for expanded value-added results. Currently this analysis is provided through the Battelle for Kids' (BFK) SOAR and Ohio Value-Added High Schools (OVAHS) projects. The URM analysis is used when tests are not given in consecutive grades. This approach "pools" together districts that use the same sequence of particular norm-referenced tests. In the URM analysis, prior test data are used to produce a prediction of how a student is likely to score on a particular test, given the average experience in that school. For example, prior OAA and TerraNova results are used as predictors for the ACT end-of-course exams. Differences between students' predictions and their actual/observed scores are used to produce school and teacher effects. The URM analysis is normalized each year based on the performance of other schools in the pool that year. This approach means that a comparison is made to the growth of the average school or teacher for that grade/subject in the pool.
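
A simplified sketch of the predicted-mean idea: fit a prediction of the current score from a prior score across the pooled students, then average each school's observed-minus-predicted differences. The scores, the single-predictor least-squares fit, and the school labels below are all invented; the real URM pools several prior tests and makes many more adjustments.

```python
# Toy "predicted mean" illustration: predict the current score from a prior
# score across the pool, then average each school's (observed - predicted).
# All data are invented; this is not the EVAAS URM itself.
from collections import defaultdict

# (prior_score, current_score, school) for students pooled across districts
students = [
    (400, 62, "A"), (415, 68, "A"), (390, 58, "A"),
    (405, 60, "B"), (420, 64, "B"), (398, 55, "B"),
]

# Ordinary least squares with one predictor: current = b0 + b1 * prior
n = len(students)
mean_x = sum(x for x, _, _ in students) / n
mean_y = sum(y for _, y, _ in students) / n
b1 = sum((x - mean_x) * (y - mean_y) for x, y, _ in students) / \
     sum((x - mean_x) ** 2 for x, _, _ in students)
b0 = mean_y - b1 * mean_x

# School effect = average difference between observed and predicted scores
residuals = defaultdict(list)
for prior, observed, school in students:
    predicted = b0 + b1 * prior
    residuals[school].append(observed - predicted)

for school in sorted(residuals):
    diffs = residuals[school]
    print(school, "estimated effect:", round(sum(diffs) / len(diffs), 2))
# School A comes out above its predictions and school B below, so A's effect is
# positive and B's negative relative to the pool average.
```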

Events Reminder

A quick reminder of an important event TODAY that we're involved with and hope to see many of our readers at.

Rally for Good Jobs and Strong Communities
Thursday, May 5th, 2011 starting at 5:00pm
On May 5th, the House will be voting on Governor Kasich's budget. Come show your support for Good Jobs and Strong Communities in Ohio.
WHERE: Ohio Statehouse, Columbus

Value add high-stakes use cautioned

The American Mathematical Society just published a paper titled "Mathematical Intimidation: Driven by the Data", which discusses the issues with using Value Add in high-stakes decision making, such as teacher evaluation. It's quite a short read, and well worth the effort.
Many studies by reputable scholarly groups call for caution in using VAMs for high-stakes decisions about teachers.

A RAND research report: The estimates from VAM modeling of achievement will often be too imprecise to support some of the desired inferences [McCaffrey 2004, 96].

A policy paper from the Educational Testing Service’s Policy Information Center: VAM results should not serve as the sole or principal basis for making consequential decisions about teachers. There are many pitfalls to making causal attributions of teacher effectiveness on the basis of the kinds of data available from typical school districts. We still lack sufficient understanding of how seriously the different technical problems threaten the validity of such interpretations [Braun 2005, 17].

A report from a workshop of the National Academy of Education: Value-added methods involve complex statistical models applied to test data of varying quality. Accordingly, there are many technical challenges to ascertaining the degree to which the output of these models provides the desired estimates [Braun 2010].
[...]
Making policy decisions on the basis of value-added models has the potential to do even more harm than browbeating teachers. If we decide whether alternative certification is better than regular certification, whether nationally board certified teachers are better than randomly selected ones, whether small schools are better than large, or whether a new curriculum is better than an old by using a flawed measure of success, we almost surely will end up making bad decisions that affect education for decades to come.

This is insidious because, while people debate the use of value-added scores to judge teachers, almost no one questions the use of test scores and value-added models to judge policy. Even people who point out the limitations of VAM appear to be willing to use “student achievement” in the form of value-added scores to make such judgments. People recognize that tests are an imperfect measure of educational success, but when sophisticated mathematics is applied, they believe the imperfections go away by some mathematical magic. But this is not magic. What really happens is that the mathematics is used to disguise the problems and intimidate people into ignoring them—a modern, mathematical version of the Emperor’s New Clothes.
The entire, short paper can be read below.

Mathematical Intimidation: Driven by the Data

(c) Join the Future