The table below, from the report, highlights this issue.
Below is a selection of the traditional public schools that are losing vast amounts of money to lower-performing charter schools.
We encourage you to read and share the entire report, found below.
One of my favorite studies to date about VAMs was conducted by John Papay, an economist once at Harvard and now at Brown University. In the study titled “Different Tests, Different Answers: The Stability of Teacher Value-Added Estimates Across Outcome Measures,” published in 2009 in the American Educational Research Journal, a top-ranked and highly reputable peer-reviewed journal, Papay presents evidence that different yet similar tests (i.e., similar in content and administered at similar times to similar sets of students) do not provide similar answers about teachers’ value-added performance. This is an issue with validity: if a test is measuring the same things for the same students at the same times, similar-to-the-same results should be realized. But they are not. Papay instead found only moderate rank correlations, ranging from r = 0.15 to r = 0.58, among the value-added estimates derived from the different tests.
Yet another recently released study (albeit not yet peer-reviewed) has found similar results, potentially solidifying this finding further into our understanding of VAMs and their issues, particularly in terms of validity (or truth in VAM-based results). This study, “Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests,” released by the U.S. Department of Education and conducted by four researchers representing the University of Notre Dame, Basis Policy Research, and the American Institutes for Research, provides evidence, again, that estimates of teacher value-added based on different yet similar tests (in this case, a criterion-referenced state assessment and a widely used norm-referenced test given in the same subject at around the same time) are only moderately correlated.
If we had confidence in the validity of the inferences based on value-added measures, these correlations (or, more simply put, “relationships”) should be much higher than the r = 0.44 to r = 0.65 the researchers found, a range similar to what Papay found. While the ideal correlation coefficient would, in this case, be r = +1.0, that is very rarely achieved. But for the purposes for which teacher-level value-added is currently being used, correlations above r = +0.70 or r = +0.80 would (and should) be most desired, and possibly required, before high-stakes decisions about teachers are made based on these data.
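To make these magnitudes concrete, here is a minimal sketch, using simulated teacher data rather than either study’s actual estimates, of the basic check both studies perform: estimate each teacher’s value-added twice, once per test, and compute the rank correlation between the two sets of estimates. Every number and variable name below is hypothetical.

```python
# Minimal sketch (simulated data, not the studies' estimates): two noisy
# test-based estimates of the same underlying teacher effect end up only
# moderately rank-correlated with each other.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_teachers = 300

true_effect = rng.normal(0, 1, n_teachers)               # hypothetical "true" teacher effect
vam_test_a = true_effect + rng.normal(0, 1, n_teachers)  # value-added estimate from test A
vam_test_b = true_effect + rng.normal(0, 1, n_teachers)  # value-added estimate from test B

rho, _ = spearmanr(vam_test_a, vam_test_b)
print(f"Rank correlation between the two sets of estimates: r = {rho:+.2f}")
# With equal parts signal and noise, r lands near +0.5: inside the reported
# 0.44 to 0.65 range, and well below the +0.70/+0.80 bar argued for above.
```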
In addition, the researchers found that, on average, only 33.3% of teachers’ value-added estimates placed them in the same range of scores (quintiles, i.e., bands each 20% wide) on both tests in the same school year. This, too, has implications for validity: if any valid inferences are to be made using value-added estimates, teachers’ estimates should fall in the same ranges when similar tests are used.
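A similar sketch, again with simulated data (none of the numbers or names come from the report), shows how this quintile-agreement statistic is computed, and why 33.3% is less reassuring than it may sound:

```python
# Share of teachers whose two value-added estimates land in the same
# quintile (20%-wide band). Simulated data, same setup as above.
import numpy as np

rng = np.random.default_rng(1)
n = 300
true_effect = rng.normal(0, 1, n)
vam_a = true_effect + rng.normal(0, 1, n)  # estimate from test A
vam_b = true_effect + rng.normal(0, 1, n)  # estimate from test B

def quintile(x):
    """Rank teachers, then cut the ranks into five equal-width bands (0-4)."""
    ranks = x.argsort().argsort()          # rank 0 = lowest estimate
    return ranks * 5 // len(x)

same_band = (quintile(vam_a) == quintile(vam_b)).mean()
print(f"Teachers in the same quintile on both tests: {same_band:.1%}")
# Pure chance alone would place 20% of teachers in the same quintile, so
# an observed 33.3% is only modestly above that floor.
```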
Read the entire report below.
Throughout this list of Straight A Fund charter school applicants is evidence of why Ohio's charter school experiment is failing. Already an almost $1 billion industry, it needs to be reined in, not handed more money to waste on failure, especially when so many higher-performing traditional public schools have been starved by the governor's education funding policies and forced to fight over scraps via a funding lottery.
Two academic researchers, from the University of Southern California and the University of Pennsylvania, looked at these value-added measures in six districts around the nation and found a weak-to-zero relationship between these new numbers and the content or quality of the teachers’ instruction.
“These results call into question the fixed and formulaic approach to teacher evaluation that’s being promoted in a lot of states right now,” said Morgan Polikoff, one of the study’s authors, in a video that explains his paper, “Instructional Alignment as a Measure of Teaching Quality,” published online in Educational Evaluation and Policy Analysis on May 13, 2014. “These measures are not yet up to the task of being put into, say, an index to make important summative decisions about teachers.”
Polikoff of the University of Southern California and Andrew Porter of the University of Pennsylvania looked at the value-added scores of 327 fourth- and eighth-grade mathematics and English language arts teachers across all six school districts included in the Measures of Effective Teaching (MET) study (New York City, Dallas, Denver, Charlotte-Mecklenburg, Memphis, and Hillsborough County, Florida). Specifically, they compared the teachers’ value-added scores with how closely their instructional materials aligned with their state’s instructional standards and the content of the state tests. Yet teachers who were teaching the right things weren’t getting higher value-added scores.
They also looked at other measures of teacher quality, such as teacher observations and student evaluations. Teachers who won high marks from professional observers or from students likewise did not receive higher value-added scores.
“What we’re left with is that state tests aren’t picking up what we think of as good teaching,” Polikoff said.
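As a rough illustration of the kind of relationship (or lack of one) the paper reports, the sketch below regresses simulated value-added scores on a made-up instructional-alignment index; a near-zero slope and tiny R² are what a weak-to-zero relationship looks like. None of the variable names or values come from the study itself.

```python
# Rough illustration (fabricated data): a weak-to-zero relationship between
# an instructional-alignment index and value-added scores shows up as a
# near-zero regression slope and a tiny R^2.
import numpy as np

rng = np.random.default_rng(2)
n = 327  # same teacher count as the study; the data here are simulated

alignment = rng.normal(0, 1, n)               # hypothetical alignment index
vam = 0.05 * alignment + rng.normal(0, 1, n)  # almost no true relationship

slope, intercept = np.polyfit(alignment, vam, 1)  # simple OLS fit
r2 = np.corrcoef(alignment, vam)[0, 1] ** 2
print(f"slope = {slope:+.3f}, R^2 = {r2:.3f}")
# A slope and R^2 both near zero mean alignment explains almost none of the
# variation in value-added, consistent with the paper's reported finding.
```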
What’s interesting is that Polikoff and Porter’s research was funded by the Gates Foundation, which had been touting how teachers’ effectiveness could be estimated by their students’ progress on standardized tests. The foundation had come under fire from economists for flawed analysis. Now this new Gates Foundation-commissioned research has proved the critics right. (The Gates Foundation is also among the funders of The Hechinger Report.)
Polikoff said that the value-added measures do provide some information, but they’re meaningless if you want to use them to improve instruction. “If the things we think of as defining good instruction don’t seem to be producing substantially better student achievement, then how is it that teachers will be able to use the value-added results to make instructional improvements?” he asked.
Polikoff concludes that the research community needs to develop new measures of teacher quality in order to “move the needle” on teacher performance.
You can read the entire report below.