Too Much? Students Given 401 Tests 6,570 Times in One School Year

The Council of the Great City Schools has just released its analysis of testing of K-12 students in large urban school districts, titled "Student Testing in America’s Great City Schools: An Inventory and Preliminary Analysis". The results of this in-depth analysis have already caused shock waves in the education policy world, leading the US Department of Education to issue its recent mea culpa.

The report's top-line highlights are unsurprising to anyone who has followed the issue of over-testing closely, or has spent any time in a classroom in recent years:

  • In the 2014-15 school year, 401 unique tests were administered across subjects in the 66 Great City School systems.
  • Students in the 66 districts were required to take an average of 112.3 tests between pre-K and grade 12. (This number does not include optional tests, diagnostic tests for students with disabilities or English learners, school-developed or school-required tests, or teacher-designed or teacher-developed tests.)
  • The average student in these districts will typically take about eight standardized tests per year, e.g., two NCLB tests (reading and math) plus three formative exams in each of two subjects per year; a quick consistency check on these figures follows this list.
  • In the 2014-15 school year, students in the 66 urban school districts sat for tests more than 6,570 times. Some of these tests are administered to fulfill federal requirements under No Child Left Behind, NCLB waivers, or Race to the Top (RTT), while many others originate at the state and local levels. Others were optional.
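
As the consistency check promised above (my arithmetic, not the report's): pre-K through grade 12 spans 14 grade levels, so about eight tests per year lands almost exactly on the 112.3-test average. A minimal sketch:

    # Consistency check (my arithmetic, not from the report):
    # pre-K through grade 12 spans 14 grade levels.
    grade_levels = 14       # pre-K, K, and grades 1-12
    total_tests = 112.3     # average required tests, per the report

    tests_per_year = total_tests / grade_levels
    print(f"Average tests per grade level: {tests_per_year:.2f}")  # ~8.02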

These headline statistics, as the report notes, are understated and do not capture the full picture or extent of over-testing:

Moreover, tests that are purchased, acquired, developed, or used at the individual school level—including those by individual teachers—are not counted in the statistics we present in this report. There are a large number of these tests below the federal, state, and district levels, but there is no way to know how many or how extensively they are used without doing a survey of individual schools. At some point, this kind of analysis should be done.

Also, we have not attempted to quantify the amount of time that is devoted either to giving or administering the tests or to preparing for them (i.e., test prep). Test administration can be particularly time-consuming when the tests are given to one student at a time. These activities can be time-consuming, but we could not gauge how much existed in this study. Again, this should be the subject of future studies.

The report should also cast some doubt on the US Department of Education's recommendation that testing time not exceed 2% of school time. According to this analysis, 2% is already at the top end of over-testing: the report finds that "the average amount of testing time devoted to mandated tests [...] was approximately 2.34 percent of school time." Clearly, scaling back testing by 0.34 percentage points is not going to be adequate.
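
To see how little that cap buys back, here is a rough back-of-the-envelope sketch. The 180-day school year and 6.5 instructional hours per day are my assumptions, not figures from the report:

    # Back-of-the-envelope: what does trimming testing from 2.34% to 2% save?
    # Assumptions (mine, not the report's): 180 school days, 6.5 hours per day.
    school_days = 180
    hours_per_day = 6.5

    total_hours = school_days * hours_per_day      # ~1,170 instructional hours

    testing_now = 0.0234 * total_hours             # time at 2.34% of school time
    testing_capped = 0.02 * total_hours            # time at the proposed 2% cap

    print(f"Total instructional hours: {total_hours:.0f}")
    print(f"Testing at 2.34%:  {testing_now:.1f} hours")
    print(f"Testing at 2% cap: {testing_capped:.1f} hours")
    print(f"Hours saved:       {testing_now - testing_capped:.1f}")

Under those assumptions, the cap returns roughly four hours per student per year, i.e., less than a single school day.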

The report also finds that not only are students over-tested for the purposes of data collection, but that the results can be poor, arrive too late, contradict one another, and lack context:

tests are not always very good at doing what we need them to do, they don’t tell us everything that is important about a child, and they don’t tell us what to do when results are low. This occurs for a variety of reasons: Data come too late to inform immediate instructional needs; teachers aren’t provided the professional development they need on how to read, interpret, and make use of the results in their classrooms; teachers and administrators don’t trust the results, believe the tests are of low quality, or think the results are misaligned with the standards they are trying to teach; or the multiple tests provide results that are contradictory or yield too much data to make sense of. The result is that the data from all this testing aren’t always used to inform classroom practice.

Furthermore, the report notes that students fail to see the multitude of tests as important or relevant, and do not always put forward their best efforts to do well on them. This is something most classroom teachers have observed, and been alarmed by, given the increased use of these tests to make high-stakes employment decisions.

But perhaps the most important finding comes at the very end of the analysis:

Seventh, the fact that there is no correlation between testing time and student fourth and eighth grade results in reading and math on NAEP does not mean that testing is irrelevant, but it does throw into question the assumption that putting more tests into place will help boost overall student outcomes. In fact, there were notable examples where districts with relatively large amounts of testing time had very weak or stagnant student performance. To be sure, student scores on a high-level test like NAEP are affected by many more factors than the amount of time students devote to test taking. But the lack of any meaningful correlation should give administrators pause.

All this testing isn't helping students; indeed, it may actually be harmful.

Here's the full report:

Student Testing in America’s Great City Schools: An Inventory and Preliminary Analysis