Last month the state released a preliminary look at its new school rankings. After digesting the list and how it was constructed, people are asking pointed questions and noticing uncomfortable patterns.
- The PI calculation is based on passage rates on the Ohio Achievement Assessments (grades 3–8) and the Ohio Graduation Test (grades 10 and 11). The proficiency “cut scores” are so low that students can be deemed “proficient” even when they answer fewer than 50% of the test questions correctly.
- The PI calculation gives schools and districts “partial” credit for students who fail to meet the proficient standard.
- The PI calculation does not include a growth component. Districts and schools can be highly ranked even if their students are learning little from year to year. The PI is a clumsy instrument that does not allow the average person to distinguish the true performance of districts. For example, 50 districts have PI scores of 100.XXXX [with the X’s representing the digits after the decimal point]. Is there any real difference in performance between the district ranked 210th of 611 and the one ranked 260th?
Indeed, given the somewhat arbitrary weightings in the PI calculation, how much of the variation in these scores is simply a consequence of those design choices?
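To make the partial-credit and weighting concerns concrete, here is a minimal sketch of how a PI-style index could be computed. The performance levels and weights below are hypothetical stand-ins, not Ohio's actual values; the point is only that awarding partial credit below “proficient” compresses the gap between districts with very different proficiency rates.

```python
# Hypothetical PI-style weighted index. The levels and weights here are
# illustrative assumptions, NOT Ohio's actual figures.
WEIGHTS = {"limited": 0.3, "basic": 0.6, "proficient": 1.0, "advanced": 1.2}

def performance_index(shares, weights=WEIGHTS):
    """`shares` maps each performance level to the fraction of tested
    students scoring at that level; returns a weighted index."""
    return 100 * sum(weights[level] * frac for level, frac in shares.items())

# District A: 80% of students proficient or better.
a = {"limited": 0.05, "basic": 0.15, "proficient": 0.60, "advanced": 0.20}
# District B: only 50% proficient or better.
b = {"limited": 0.05, "basic": 0.45, "proficient": 0.40, "advanced": 0.10}

# For comparison, a "strict" weighting with no credit below proficient.
strict = {"limited": 0.0, "basic": 0.0, "proficient": 1.0, "advanced": 1.2}

print(round(performance_index(a), 1), round(performance_index(b), 1))
print(round(performance_index(a, strict), 1), round(performance_index(b, strict), 1))
```

Under these made-up weights, A scores 94.5 and B scores 80.5: a 30-point gap in proficiency rates shrinks to 14 index points, whereas the strict weighting would put them at 84.0 and 52.0. The design choice, not just student performance, drives how far apart districts appear.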
The most disturbing result, however, is this:
In general, districts’ rankings are directly related to how many low-income students they enroll. Even just looking at the rankings of urban school districts, for most (but not all) of the districts in the top 25 percent, less than half of their students are from low-income families.
There are about twelve months before these preliminary results become real ones, and we can only hope that some of these design problems and errors are resolved by then. But we're not hopeful.