Ohio charter high schools lowest ranked in state

Via 10th Period: U.S. News & World Report's rankings of the country's best high schools provide another sobering picture of Ohio's 16-year-old charter school experiment.

Not a single one of Ohio's 97 charter high schools outperformed the state average in reading and math, while 167 traditional public high schools did. That's an astonishing 167-0 score. Nor does a single Ohio charter high school rate among the state's top 115 high schools -- the lowest rank given by U.S. News.


Short-Changed: How Poor-Performing Charters Cost All Ohio Kids

Innovation Ohio has published a new report titled "Short-Changed: How Poor-Performing Charters Cost All Ohio Kids." Its principal findings were these:
  • The flawed way in which charter schools are funded in Ohio will result in traditional school students receiving, on average, 6.6% less state funding this year (around $256 per pupil) than the state itself says they need -- see the back-of-the-envelope sketch after this list;
  • A table in the report highlights this issue;

  • Well over half of all state money sent to charters goes to schools that perform worse than traditional public schools on one or both of the state’s two major performance measurements (the Report Card and the Performance Index);
  • The report includes a selection of the traditional public schools that are losing vast amounts of money to lower-performing charter schools;

  • A number of high-performing suburban school districts are now among the biggest losers in per pupil funding;
  • On average, Ohio charters spend nearly twice as much (23.5% vs. 13%) on non-instructional administrative costs as traditional public schools do;
  • 53% of children transferring into charter schools are leaving districts that perform better;
  • In 384 out of Ohio’s 612 school districts, every dime “lost” to charters went to schools whose overall performance was worse on the State Report Card.
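
As a quick back-of-the-envelope check on the first bullet's arithmetic (a sketch only; the base figure is inferred from the numbers above, not taken from the report):

```python
# Back-of-the-envelope check of the funding figures above (inferred, not from
# the report): if a $256-per-pupil shortfall is 6.6% of what the state says
# students need, the implied state-determined amount is about $3,879 per pupil.
shortfall_per_pupil = 256.00
shortfall_rate = 0.066

implied_state_amount = shortfall_per_pupil / shortfall_rate
funding_received = implied_state_amount - shortfall_per_pupil

print(f"Implied state-determined amount: ${implied_state_amount:,.0f} per pupil")
print(f"Funding actually received:       ${funding_received:,.0f} per pupil")
```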

We encourage you to read and share the entire report.



Same Teachers, Similar Students, Similar Tests, Different Answers

Via VAMboozled!

One of my favorite studies to date about VAMs was conducted by John Papay, an economist once at Harvard and now at Brown University. In the study, titled "Different Tests, Different Answers: The Stability of Teacher Value-Added Estimates Across Outcome Measures" and published in 2009 in the highly reputable, peer-reviewed American Educational Research Journal, Papay presents evidence that different yet similar tests (i.e., similar in content, and administered at similar times to similar sets of students) do not provide similar answers about teachers' value-added performance. This is a validity problem: if two tests measure the same things for the same students at the same times, they should yield similar-to-the-same results. They do not. Papay instead found only moderate-sized rank correlations, ranging from r = 0.15 to r = 0.58, among the value-added estimates derived from the different tests.
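
To make concrete what a moderate rank correlation means here, a minimal sketch (with invented numbers, not Papay's data) of how such a correlation between two sets of value-added estimates is computed:

```python
# Minimal sketch (invented numbers, not Papay's data): rank-correlating the
# value-added estimates the same ten teachers receive from two similar tests.
from scipy.stats import spearmanr

va_test_a = [0.12, -0.30, 0.45, 0.02, -0.15, 0.33, -0.02, 0.21, -0.40, 0.08]
va_test_b = [0.05, 0.10, 0.38, -0.25, -0.02, 0.01, 0.30, -0.18, -0.35, 0.22]

rho, p_value = spearmanr(va_test_a, va_test_b)
print(f"Rank correlation between the two sets of estimates: r = {rho:.2f}")
# r near +1.0 would mean both tests rank teachers almost identically;
# values in the 0.15-0.58 range mean the tests often disagree about
# which teachers added the most value.
```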

Recently released, yet another study (albeit not yet peer-reviewed) has found similar results, potentially solidifying this finding further in our understanding of VAMs and their issues, particularly in terms of validity (or truth in VAM-based results). This study, "Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests," released by the U.S. Department of Education and conducted by four researchers representing the University of Notre Dame, Basis Policy Research, and the American Institutes for Research, provides evidence that estimates of teacher value-added based on different yet similar tests (in this case, a criterion-referenced state assessment and a widely used norm-referenced test given in the same subject around the same time) were, yet again, only moderately correlated.

If we had confidence in the validity of inferences based on value-added measures, these correlations (or, more simply put, "relationships") should be much higher than the r = 0.44 to r = 0.65 range this study reports -- a range similar to Papay's findings. While the ideal correlation coefficient here is r = +1.0, that is very rarely achieved. But for the purposes for which teacher-level value-added is currently being used, correlations above r = +0.70 or r = +0.80 would (and should) be desired, and possibly required, before high-stakes decisions about teachers are made on the basis of these data.

In addition, the researchers found that, on average, only 33.3% of teachers landed in the same range of scores (using quintiles, i.e., bands 20 percentage points wide) on both sets of value-added estimates in the same school year. This, too, has implications for validity: if any valid inferences are to be made from value-added estimates, teachers' estimates should fall in the same ranges when similar tests are used.
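
A minimal sketch (simulated numbers, not the study's data) of how such a quintile-agreement rate can be computed:

```python
# Minimal sketch (simulated data): what fraction of teachers land in the same
# quintile (20%-wide band) of value-added estimates on both tests?
import numpy as np

rng = np.random.default_rng(0)
n_teachers = 500

# Two sets of estimates that are only moderately correlated, as in the study.
va_a = rng.normal(size=n_teachers)
va_b = 0.5 * va_a + rng.normal(scale=0.9, size=n_teachers)

def quintile(scores):
    # Rank the teachers, then split the ranks into five equal-sized bands.
    ranks = scores.argsort().argsort()
    return ranks * 5 // len(scores)  # 0 (bottom 20%) .. 4 (top 20%)

same_quintile = np.mean(quintile(va_a) == quintile(va_b))
print(f"Teachers in the same quintile on both tests: {same_quintile:.1%}")
# With moderately correlated estimates this lands far below 100%,
# in the neighborhood of the 33.3% the study reports.
```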


Class Size Matters

A report recently published by the National Education Policy Center, titled "Class Size Matters," found that:
  • Class size is an important determinant of student outcomes, and one that can be directly determined by policy. All else being equal, increasing class sizes will harm student outcomes.
  • The evidence suggests that increasing class size will harm not only children’s test scores in the short run, but also their long-run human capital formation. Money saved today by increasing class sizes will result in more substantial social and educational costs in the future.
  • The payoff from class-size reduction is greater for low-income and minority children, while any increases in class size will likely be most harmful to these populations.
  • Policymakers should carefully weigh the efficacy of class-size policy against other potential uses of funds. While lower class size has a demonstrable cost, it may prove the more cost-effective policy overall.

Read the entire report.



Failing Charter Schools Dine at the Straight A Fund Trough

The Ohio Department of Education released the list of Straight A applicants that will be moving on to the next scoring phase, as schools are pitted against each other for a chance to win $150 million in the Kasich education funding lottery.

Of those applicants, 17 are charter schools (though some of these are part of a larger consortium, so the true number of charter schools is much higher), making a combined request of $31,397,903.35.

Let's take a closer look at some of these applicants.

The first on the list is Achieve Career Preparatory Academy, a dropout recovery school in Toledo, making a buzzword-packed ("3-dimensional learning tools," "focus on student engagement") request for almost $200,000. Not mentioned in the request is that this dropout recovery school graduates only 14.3% of its students, according to the latest ODE data. This school needs to be closed down, not handed more money via the Kasich education funding lottery.

Next on our list is the Buckeye On-Line School for Success, which is requesting close to $1 million for an IT system called "Virtualized Operations for Independent and Collaborative Education." The ironically named Buckeye On-Line School for Success is rated F for both performance and value-add according to the latest ODE data -- another example of a charter school that ought to be closed, not handed more taxpayer money.

The Lake Erie Academy is next in the spotlight, with a request for $116,000 "to improve the reading ability of K-8 students at Lake Erie Academy using the computer-based program, Read Naturally Live." This charter school has a D rating for performance and an F for value-add.

More troubling, given the nature of this request, is that this charter school's reading performance has declined in each of the last three years. In 2010, 3rd grade proficiency was 70%; in 2011, 56%; and in 2012 it was a paltry 25%. 8th grade reading proficiency declined from 61% in 2010 to 41% in 2012. Here we have a low-performing charter school in serious decline. Rather than being handed more money so it can continue to fail in its mission, it too needs to be closed down.

It should come as little surprise that one of the biggest requests comes from a charter school operated by the politically connected David Brennan. The White Hat Management-run school, Summit Academy Secondary in Akron, is requesting $6.2 million. As with all White Hat schools, this one is a low performer, meeting just one of its possible standards and receiving a D for performance. It wants the $6.2 million to "leverage the power of technology and teacher training to show teachers how to address all student needs in an individualized way." That it is not already doing that is one indication of why White Hat schools perform so badly.

Throughout this list of Straight A Fund charter school applicants is evidence of why Ohio's charter school experiment is failing. Already an almost $1 billion industry, it needs to be reined in, not handed more money to waste on failure -- especially when so many higher-performing traditional public schools have been starved by the governor's education funding policies and forced to fight over scraps via a funding lottery.


Researchers Give Failing Marks to Teacher Evaluation Systems

Via The Hechinger Report.

School systems around the country are trying to use objective, quantifiable measures to identify which are the good teachers and which are the bad ones. One popular approach, used in New York, Chicago and other cities, is to calculate a value-added performance measure (VAM). Essentially, you create a model that begins by calculating how much kids’ test scores, on average, increase each year (test score year 2 minus test score year 1). Then you give a high score to teachers whose students post test-score gains above the average, and a low score to teachers whose students show smaller test-score gains. There are lots of mathematical tweaks, but the general idea is to build a model that answers this question: are the students of this particular teacher learning more or less than you would expect? The teachers’ value-added scores are then used to figure out which teachers to train, fire, or reward with bonuses.
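
A toy sketch of that basic gain-score idea (a deliberate oversimplification with made-up scores; real VAMs layer on many statistical controls):

```python
# Toy gain-score sketch (made-up scores; real VAMs add many statistical
# controls): teachers whose students gain more than average get a high score.

classes = {
    "Teacher A": {"year1": [60, 72, 55, 80], "year2": [68, 79, 65, 85]},
    "Teacher B": {"year1": [70, 65, 90, 75], "year2": [71, 66, 89, 77]},
}

def mean_gain(scores):
    # Average of (test score year 2 minus test score year 1) for the class.
    gains = [y2 - y1 for y1, y2 in zip(scores["year1"], scores["year2"])]
    return sum(gains) / len(gains)

average_gain = sum(mean_gain(s) for s in classes.values()) / len(classes)

for teacher, scores in classes.items():
    value_added = mean_gain(scores) - average_gain
    label = "above average" if value_added > 0 else "below average"
    print(f"{teacher}: value-added = {value_added:+.2f} ({label})")
```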

Two academic researchers, from the University of Southern California and the University of Pennsylvania, looked at these value-added measures in six districts around the nation and found a weak-to-zero relationship between these new numbers and the content or quality of the teacher’s instruction.

“These results call into question the fixed and formulaic approach to teacher evaluation that’s being promoted in a lot of states right now,” said Morgan Polikoff, one of the study’s authors, in a video that explains his paper, “Instructional Alignment as a Measure of Teaching Quality,” published online in Educational Evaluation and Policy Analysis on May 13, 2014. “These measures are not yet up to the task of being put into, say, an index to make important summative decisions about teachers.”

Polikoff of the University of Southern California and Andrew Porter of the University of Pennsylvania looked at the value-added scores of 327 fourth- and eighth-grade mathematics and English language arts teachers across all six school districts included in the Measures of Effective Teaching (MET) study (New York City, Dallas, Denver, Charlotte-Mecklenburg, Memphis, and Hillsborough County, Florida). Specifically, they compared the teachers’ value-added scores with how closely their instructional materials aligned with their state’s instructional standards and the content of the state tests. Teachers who were teaching the right things weren’t getting higher value-added scores.
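
To illustrate what a weak-to-zero relationship looks like, a minimal sketch with simulated (not MET) data:

```python
# Minimal sketch with simulated data (not the MET study's): correlating an
# instructional-alignment index with value-added scores.
import numpy as np

rng = np.random.default_rng(1)
n_teachers = 327  # matches the study's sample size, purely for flavor

alignment = rng.uniform(0.2, 0.9, size=n_teachers)    # alignment with standards/tests
value_added = rng.normal(0.0, 0.15, size=n_teachers)  # drawn independently here

r = np.corrcoef(alignment, value_added)[0, 1]
print(f"Alignment vs. value-added: r = {r:.2f}")
# Because the two are independent by construction, r hovers near zero --
# the weak-to-zero pattern Polikoff and Porter report for the real data.
```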

They also looked at other measures of teacher quality, such as teacher observations and student evaluations. Similarly, teachers who won high marks from professional observers or students were also not getting higher value-added scores.

“What we’re left with is that state tests aren’t picking up what we think of as good teaching,” Polikoff said.

What’s interesting is that Polikoff’s and Porter’s research was funded by the Gates Foundation, which had been touting how teachers’ effectiveness could be estimated from their students’ progress on standardized tests. The foundation had come under fire from economists for flawed analysis. Now this new Gates Foundation-commissioned research has proved the critics right. (The Gates Foundation is also among the funders of The Hechinger Report.)

Polikoff said that the value-added measures do provide some information, but they’re meaningless if you want to use them to improve instruction. “If the things we think of as defining good instruction don’t seem to produce substantially better student achievement, then how is it that teachers will be able to use the value-added results to make instructional improvements?” he asked.

Polikoff concludes that the research community needs to develop new measures of teacher quality in order to “move the needle” on teacher performance.

You can read the entire report.


