Some Hows and Whys of Value Add Modelling

We thought it would be useful to provide a quick primer on what value-added modelling (VAM) actually is, and how it is calculated, in reasonably plain terms. Here is a good explanation via the American Statistical Association:

The principal claim made by the developers of VAM—William L. Sanders, Arnold M. Saxton, and Sandra P. Horn—is that through the analysis of changes in student test scores from one year to the next, they can objectively isolate the contributions of teachers and schools to student learning. If this claim proves to be true, VAM could become a powerful tool for both teachers’ professional development and teachers’ evaluation.

This approach represents an important divergence from the path specified by the “adequate yearly progress” provisions of the No Child Left Behind Act, for it focuses on the gain each student makes rather than on the proportion of students who attain some particular standard. VAM’s attention to individual students’ longitudinal data as a way to measure their progress seems filled with common sense and fairness.

There are many models that fall under the general heading of VAM. One of the most widely used was developed and programmed by William Sanders and his colleagues. It was developed for use in Tennessee and has been in place there for more than a decade under the name Tennessee Value-Added Assessment System. It has also been called the “layered model” because of the way each of its annual component pieces is layered on top of another.

The model begins by representing a student’s test score in the first year, y1, as the sum of the district’s average for that grade, subject, and year, say μ1; the incremental contribution of the teacher, say θ1; and systematic and unsystematic errors, say ε1. When these pieces are put together, we obtain a simple equation for the first year:

y1 = μ1 + θ1 + ε1    (1)
or
Student’s score (1) = district average (1) + teacher effect (1) + error (1)

There are similar equations for the second, third, fourth, and fifth years, and it is instructive to look at the second year’s equation, which looks like the first except that it also contains a term for the teacher’s effect from the previous year:

y2 = μ2 + θ1 + θ2 + ε2    (2)
or
Student’s score (2) = district average (2) + teacher effect (1) + teacher effect (2) + error (2)
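To make the layering concrete, here is a minimal simulation sketch in Python. Every number in it is made up for illustration; this is just the bookkeeping behind equations (1) and (2), not the Tennessee model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up district averages and teacher effects, in test-score points.
mu = {1: 200.0, 2: 210.0}      # district average for years 1 and 2
theta = {1: 3.0, 2: -1.5}      # effects of the year-1 and year-2 teachers

# Equation (1): year-1 score = district average + year-1 teacher effect + error
eps1 = rng.normal(0, 2.0)
y1 = mu[1] + theta[1] + eps1

# Equation (2): the layered model carries the year-1 teacher effect forward in full
eps2 = rng.normal(0, 2.0)
y2 = mu[2] + theta[1] + theta[2] + eps2

print(f"year-1 score: {y1:.1f}   year-2 score: {y2:.1f}   gain: {y2 - y1:.1f}")
```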

To assess the value added (y2 – y1), we merely subtract equation (1) from equation (2) and note that the effect of the teacher from the first year has conveniently dropped out. While this is statistically convenient, because it leaves us with fewer parameters to estimate, does it make sense? Some have argued that although a teacher’s effect lingers beyond the year the student had her/him, that effect is likely to shrink with time.
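Carrying out that subtraction with the pieces defined above gives

y2 – y1 = (μ2 – μ1) + θ2 + (ε2 – ε1)
or
Gain = change in district average + teacher effect (2) + change in error

and the year-1 teacher effect θ1, because it appears identically in both equations, cancels.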

Although such a model (one in which a prior teacher’s effect fades over time) is less convenient to estimate, it mirrors reality more closely. But, not surprisingly, the estimated size of a teacher’s effect varies depending on the choice of model. How large this choice-of-model effect is, relative to the size of the “teacher effect” itself, has yet to be determined. Obviously, if it is large, it diminishes the practicality of the methodology.
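One simple way to write down the kind of fading-effect model described above (purely as an illustration, not any particular published specification) is to attach a persistence parameter, say α, to the earlier teacher’s effect:

y2 = μ2 + α·θ1 + θ2 + ε2,  with 0 ≤ α ≤ 1

Setting α = 1 recovers the layered model in equation (2); smaller values of α let the first teacher’s contribution shrink. The catch is that α itself now has to be estimated from the data, which is where the extra inconvenience comes from.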

Recent research from the RAND Corporation, which shifted from the layered model to one that estimates how much a teacher’s effect changes from one year to the next, suggests that almost half of the teacher effect is accounted for by the choice of model.

One cannot partition student effect from teacher effect without information about how the same students perform with other teachers. In practice, using longitudinal data and obtaining measures of student performance in other years can resolve this issue. Tennessee’s decade of experience with VAM led to a requirement of at least three years’ data. This requirement raises concerns when (i) data are missing and (ii) the meaning of what is being tested changes with time.
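As a rough picture of what “longitudinal data” means here, the records might be laid out something like the following (a made-up layout, not Tennessee’s or Ohio’s actual file format). Because the same student shows up with different teachers across years, the student’s own trajectory helps separate the student’s contribution from the teachers’:

```python
import pandas as pd

# Made-up longitudinal records: one row per student, per year, per teacher.
records = pd.DataFrame({
    "student_id": [101, 101, 101, 102, 102],
    "year":       [2011, 2012, 2013, 2011, 2013],   # student 102 is missing 2012
    "teacher_id": ["T1", "T4", "T7", "T1", "T8"],
    "score":      [198.0, 205.5, 213.0, 210.0, 221.5],
})

print(records.sort_values(["student_id", "year"]))
```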

The Ohio Department of Education has papers, here, that discuss the technical details of how VAM is done in Ohio.

Battelle for Kids (BattelleforKids.org) provided us with this information:

Here's a brief description of the two analyses used in Ohio. Both come from the EVAAS methodology produced by SAS:

Value-added analysis is produced in two different ways in Ohio:
1. MRM analysis (Multivariate Response Model, also known as the mean gain approach); and
2. URM analysis (Univariate Response Model, also known as the predicted mean approach).

The MRM analysis is used for the Ohio value-added results in grades 4-8 math and reading. It can only be used when tests are uniformly administered in consecutive grades. Through this approach, district-, school-, and teacher-level results are compared to a growth standard. The OAA assessments provide the primary data for this approach.
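As a simplified sketch of the mean-gain idea (illustrative only, with made-up scores and a made-up growth standard; the real MRM is a multivariate mixed model estimated by SAS, not a simple average of gains):

```python
# Compare the average gain of one teacher's students with an expected growth standard.
prior_scores   = [412.0, 398.0, 431.0, 405.0]   # made-up grade 4 scale scores
current_scores = [428.0, 410.0, 455.0, 419.0]   # made-up grade 5 scale scores
growth_standard = 15.0                          # made-up expected gain for this grade/subject

gains = [c - p for p, c in zip(prior_scores, current_scores)]
mean_gain = sum(gains) / len(gains)

# A positive difference suggests growth above the standard, a negative one below it.
print(f"mean gain = {mean_gain:.1f} vs. growth standard = {growth_standard:.1f}, "
      f"difference = {mean_gain - growth_standard:+.1f}")
```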

The URM analysis is used for expanded value-added results. Currently this analysis is provided through Battelle for Kids' (BFK) SOAR and Ohio Value-Added High Schools (OVAHS) projects. The URM analysis is used when tests are not given in consecutive grades. This approach "pools" together districts that use the same sequence of particular norm-referenced tests. In the URM analysis, prior test data are used to produce a prediction of how a student is likely to score on a particular test, given the average experience in that school. For example, prior OAA and TerraNova results are used as predictors for the ACT end-of-course exams. Differences between students' predicted and actual (observed) scores are used to produce school and teacher effects. The URM analysis is normalized each year based on the performance of other schools in the pool that year. This means the comparison is to the growth of the average school or teacher for that grade/subject in the pool.
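And here is a bare-bones sketch of the predicted-mean idea (again illustrative only, with made-up scores and a made-up roster; EVAAS's actual URM pools many prior tests across districts and uses a more elaborate model than the plain least-squares fit below):

```python
import numpy as np

# Made-up prior test scores (two earlier assessments) and current test scores
# for students in one pooled grade/subject.
priors = np.array([[410.0, 52.0],
                   [398.0, 47.0],
                   [455.0, 61.0],
                   [402.0, 50.0],
                   [430.0, 55.0]])
current = np.array([425.0, 401.0, 470.0, 399.0, 446.0])

# Fit a simple linear prediction of the current score from the prior scores
# (ordinary least squares stands in here for the pooled-average prediction).
X = np.column_stack([np.ones(len(current)), priors])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
predicted = X @ coef

# Residuals (observed minus predicted), averaged over a teacher's or school's
# students, stand in for the estimated effect.
residuals = current - predicted
teacher_students = [0, 2, 4]   # made-up roster: which rows belong to one teacher
teacher_effect_estimate = residuals[teacher_students].mean()
print(f"estimated effect: {teacher_effect_estimate:+.2f}")
```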