Innovation Ohio has published a new report titled "Short-Changed: How Poor-Performing Charters Cost All Ohio Kids." Its principal findings were these:
- The flawed way in which charter schools are funded in Ohio will result in traditional school students receiving, on average, 6.6% less state funding this year (around $256 per pupil) than the state itself says they need;
The table below, from the report, highlights this issue.
- Well over half of all state money sent to charters goes to schools that perform worse than traditional public schools on one or both of the state’s two major performance measurements (the Report Card and the Performance Index);
Below is a selection of the traditional public schools that are losing vast amounts of money to lower-performing charter schools.
- A number of high-performing suburban school districts are now among the biggest losers in per pupil funding;
- On average, Ohio charters spend nearly double the share that traditional public schools do on non-instructional administrative costs (23.5% vs. 13%);
- 53% of children transferring into charter schools are leaving districts that perform better;
- In 384 out of Ohio’s 612 school districts, every dime “lost” to charters went to schools whose overall performance was worse on the State Report Card.
We encourage you to read and share the entire report, found below.
IO Report Short Changed
One of my favorite studies to date about VAMs was conducted by John Papay, an economist formerly at Harvard and now at Brown University. In the study titled “Different Tests, Different Answers: The Stability of Teacher Value-Added Estimates Across Outcome Measures,” published in 2009 in the highly reputable peer-reviewed American Educational Research Journal, Papay presents evidence that different yet similar tests (i.e., similar in content and administered at similar times to similar sets of students) do not provide similar answers about teachers’ value-added performance. This is a validity issue: if tests are measuring the same things for the same people at the same times, similar-to-the-same results should be realized. But they are not. Papay instead found only moderate rank correlations, ranging from r=0.15 to r=0.58, among the value-added estimates derived from the different tests.
Recently released, yet another study (albeit not yet peer-reviewed) has found similar results, potentially solidifying this finding further into our understanding of VAMs and their issues, particularly in terms of validity (or the truth of VAM-based results). This study, “Comparing Estimates of Teacher Value-Added Based on Criterion- and Norm-Referenced Tests,” released by the U.S. Department of Education and conducted by four researchers representing the University of Notre Dame, Basis Policy Research, and the American Institutes for Research, provides evidence once again that estimates of teacher value-added based on different yet similar tests (in this case, a criterion-referenced state assessment and a widely used norm-referenced test given in the same subject around the same time) yield only moderately correlated estimates of teacher-level value-added.
If we had confidence in the validity of the inferences based on value-added measures, these correlations (more simply put, “relationships”) should be much higher than the range of r=0.44 to r=0.65 that the researchers found, a range similar to Papay’s. While the ideal correlation coefficient in this case is r=+1.0, that is very rarely achieved. But for the purposes for which teacher-level value-added is currently being used, correlations above r=+0.70 or r=+0.80 would (and should) be most desired, and possibly required, before high-stakes decisions about teachers are made based on these data.
In addition, researchers in this study found that, on average, only 33.3% of teachers landed in the same range of scores (quintiles, i.e., 20%-wide bands) on both sets of value-added estimates in the same school year. This too has implications for validity: teachers’ value-added estimates should fall in the same ranges when similar tests are used if any valid inferences are to be made from those estimates.
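To make concrete what these stability checks measure, here is a minimal sketch (hypothetical function names, no real data) of how a rank correlation and a quintile-agreement rate between two sets of teacher value-added estimates could be computed:

```python
def ranks(xs):
    # Simple ranking, 0 = lowest (ties broken by list order; fine for illustration)
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the two rank vectors
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def quintile(v, xs):
    # Which 20%-wide band (0..4) a teacher's estimate falls into
    below = sum(1 for x in xs if x < v)
    return min(4, below * 5 // len(xs))

def quintile_agreement(x, y):
    # Share of teachers placed in the same quintile by both tests
    same = sum(1 for a, b in zip(x, y)
               if quintile(a, x) == quintile(b, y))
    return same / len(x)
```

Feeding each teacher's two value-added estimates (one per test) into these functions yields the r values and the 33.3% same-quintile figure the studies report; perfectly stable estimates would give a rank correlation of 1.0 and 100% quintile agreement.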
A report recently published by the National Education Policy Center, titled "Class Size Matters," found that:
- Class size is an important determinant of student outcomes, and one that can be directly determined by policy. All else being equal, increasing class sizes will harm student outcomes.
- The evidence suggests that increasing class size will harm not only children’s test scores in the short run, but also their long-run human capital formation. Money saved today by increasing class sizes will result in more substantial social and educational costs in the future.
- The payoff from class-size reduction is greater for low-income and minority children, while any increases in class size will likely be most harmful to these populations.
- Policymakers should carefully weigh the efficacy of class-size policy against other potential uses of funds. While lower class size has a demonstrable cost, it may prove the more cost-effective policy overall.
Read the entire report below.
Pb - Class Size by National Education Policy Center
The Ohio Department of Education released the list of Straight A applicants that will be moving on to the next scoring phase, as schools are pitted against each other for a chance to win $150 million in the Kasich education funding lottery.
Of those applicants, 17 are charter schools (though some are part of larger consortia, so the true number of charter schools is much higher), making a combined request of $31,397,903.35. Let's take a closer look at some of these applicants.

The first on the list is Achieve Career Preparatory Academy, a dropout recovery school in Toledo, making a buzzword-packed ("3-dimensional learning tools," "focus on student engagement") request for almost $200,000. Not mentioned in the request is that this dropout recovery school graduates only 14.3% of its students, according to the latest ODE data. This school needs to be closed down, not handed more money via the Kasich education funding lottery.
Next on our list is the Buckeye On-Line School for Success, which is requesting close to $1 million for an IT system called "Virtualized Operations for Independent and Collaborative Education." The ironically named Buckeye On-Line School for Success is rated F for both performance and value-added according to the latest ODE data. It is another example of a charter school that ought to be closed, not handed more taxpayer money.
The Lake Erie Academy is next in the spotlight, with a request for $116,000 "to improve the reading ability of K-8 students at Lake Erie Academy using the computer-based program, Read Naturally Live." This charter school has a D rating for performance and an F for value-added. More troubling, given the nature of this request, is that this charter school's reading performance has declined in each of the last three years. In 2010, 3rd grade proficiency was 70%; in 2011, 56%; and in 2012, a paltry 25%. 8th grade reading proficiency declined from 61% in 2010 to 41% in 2012. Here we have a low-performing charter school in serious decline. Rather than being handed more money to continue failing in its mission, it too needs to be closed down.
It should come as little surprise that one of the biggest requests comes from a charter school operated by the politically connected David Brennan. The White Hat Management-run school, Summit Academy Secondary in Akron, is requesting $6.2 million. As with all White Hat schools, this one is a low performer, meeting just one of its possible standards and receiving a D for performance. It wants the $6.2 million to "leverage the power of technology and teacher training to show teachers how to address all student needs in an individualized way." That it is not already doing that is one indication of why White Hat schools perform so badly.
Throughout this list of Straight A Fund charter school applicants is evidence of why Ohio's charter school experiment is failing. Already an almost $1 billion industry, it needs to be reined in, not have more money wasted on failure, especially when so many higher-performing traditional public schools have been starved by the governor's education funding policies and forced to fight over scraps via a funding lottery.
Via Hechinger Report.
School systems around the country are trying to use objective, quantifiable measures to identify which are the good teachers and which are the bad ones. One popular approach used in New York, Chicago and other cities, is to calculate a value-added performance measure (VAM). Essentially, you create a model that begins by calculating how much kids’ test scores, on average, increase each year. (Test score year 2 minus test score year 1). Then you give a high score to teachers who have students who post test-score gains above the average. And you give a low score to teachers whose students show smaller test-score gains. There are lots of mathematical tweaks, but the general idea is to build a model that answers this question: are the students of this particular teacher learning more or less than you expect them to? The teachers’ value-added scores are then used to figure out which teachers to train, fire or reward with bonuses.
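The gain-score idea described above can be sketched in a few lines. This is an illustrative simplification with hypothetical data structures, not any district's actual model; real VAMs add statistical controls for student background, measurement error, and more:

```python
def value_added(rosters, year1, year2):
    """A bare-bones gain-score sketch.

    rosters: {teacher: [student ids]}
    year1, year2: {student id: test score} for two consecutive years
    """
    all_students = [s for students in rosters.values() for s in students]
    # Step 1: how much do scores increase each year, on average?
    avg_gain = sum(year2[s] - year1[s] for s in all_students) / len(all_students)
    vam = {}
    for teacher, students in rosters.items():
        teacher_gain = sum(year2[s] - year1[s] for s in students) / len(students)
        # Step 2: positive score if this teacher's students gained more than
        # the average; negative if they gained less
        vam[teacher] = teacher_gain - avg_gain
    return vam
```

A teacher whose students gained exactly the average amount gets a score of zero; the high-stakes decisions the article describes come from ranking teachers on these signed differences.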
Two academic researchers, from the University of Southern California and the University of Pennsylvania, looked at these value-added measures in six districts around the nation and found a weak-to-nonexistent relationship between these new numbers and the content or quality of the teachers' instruction.
“These results call into question the fixed and formulaic approach to teacher evaluation that’s being promoted in a lot of states right now,” said Morgan Polikoff, one of the study’s authors, in a video that explains his paper, “Instructional Alignment as a Measure of Teaching Quality,” published online in Educational Evaluation and Policy Analysis on May 13, 2014. “These measures are not yet up to the task of being put into, say, an index to make important summative decisions about teachers.”
Polikoff of the University of Southern California and Andrew Porter of the University of Pennsylvania looked at the value-added scores of 327 fourth- and eighth-grade mathematics and English language arts teachers across all six school districts included in the Measures of Effective Teaching (MET) study (New York City, Dallas, Denver, Charlotte-Mecklenburg, Memphis, and Hillsborough County, Florida). Specifically, they compared the teachers’ value-added scores with how closely their instructional materials aligned with their state’s instructional standards and the content of the state tests. They found that teachers who were teaching the right things weren’t getting higher value-added scores.
They also looked at other measures of teacher quality, such as teacher observations and student evaluations. Similarly, teachers who won high marks from professional observers or students were also not getting higher value-added scores.
“What we’re left with is that state tests aren’t picking up what we think of as good teaching,” Polikoff said.
What’s interesting is that Polikoff and Porter’s research was funded by the Gates Foundation, which had been touting how teachers’ effectiveness could be estimated by their students’ progress on standardized tests. The foundation had come under fire from economists for flawed analysis. Now this new Gates Foundation-commissioned research has proved the critics right. (The Gates Foundation is also among the funders of The Hechinger Report.)
Polikoff said that the value-added measures do provide some information, but they’re meaningless if you want to use them to improve instruction. “If the things we think of as defining good instruction don’t seem to be producing substantially better student achievement, then how is it that teachers will be able to use the value-added results to make instructional improvements?” he asked.
Polikoff concludes that the research community needs to develop new measures of teacher quality in order to “move the needle” on teacher performance.
You can read the entire report below.
Educational Evaluation and Policy Analysis-2014-Polikoff
A good rundown of the changes the House made to SB 229, from the Ennis, Roberts & Fischer Co. law firm.
The Ohio House Education Committee has unveiled sweeping changes to Substitute Senate Bill 229 with regard to teacher and principal evaluations. The original version of SB 229, which passed the Senate unanimously on December 4th, 2013, modified frequency and composition of teacher evaluations and reduced some of the burden on school administrators. The new version of the Bill proposed by the House Education Committee, however, would modify both the OTES and OPES evaluation systems in ways that would undoubtedly place additional strain on the relatively untested evaluation systems. The proposed changes include the following:
Here's the Legislative Service Commission's comparison document so you can see what changes were made from the unanimously passed Senate version.
- Bumps student growth measures back up to 50% from the 35% proposed by the Senate, unless a district elects to use an alternative “student survey” framework (available for grades 4-12), in which case the final rating would be comprised of 40% SGM, 40% teacher performance rating, and 20% student survey results;
- Requires that an evaluator use an average score if a teacher receives different scores on the observations and review components of the evaluations;
- Increases the number of possible SGM ratings from three to five: “Most Effective”, “Above Average”, “Average”, “Below Average”, and “Least Effective”;
- Adds new performance level rating of “Effective” that will exist in the realm between “Skilled” and “Developing”;
- Requires that at least one formal observation of a teacher be unannounced;
- Beginning in 2015, allows districts to evaluate “Accomplished” and “Skilled” teachers every other year, but only if the teacher’s SGM score is rated “Average” or higher (teachers must still receive one observation and a conference in the “off” year);
- Districts can elect not to evaluate 1) a teacher who is on leave for 70% or more of the year, and 2) a teacher who submitted notice of retirement before Dec. 1st;
- Teachers rated “Effective,” “Developing,” or “Ineffective” must be placed on an improvement plan;
- In 2015 and beyond, districts cannot assign students to a teacher who has been rated ineffective for two or more years (but does not specify what a district should do with these teachers!);
- A district is also prohibited from assigning a student teacher to a teacher who was rated “Developing” or “Ineffective” during the previous year;
- If a teacher with at least ten years of experience receives a designation of either “Least Effective” or “Below Average” on his/her SGM rating, that teacher may be rated “Developing” only once;
- Mandates that results of an evaluation must follow the teacher even if he/she is transferred to a new building or takes employment elsewhere;
- Requires ODE to develop a standardized framework for assessing SGM for all non-value added grade levels and subjects by 2016;
- By 2016, districts must administer assessments to students in each of grades K-12 for English Language Arts, Mathematics, Social Studies, and Science. Assessments must be selected by ODE and based on value-added progress dimension or vendor-developed student growth measures (may include assessments already required by law);
- Beginning next July, evaluators must verify completion of at least one evaluation training course outlined in the bill;
- After July 1, 2015, the State Board must ensure individuals seeking licensure as superintendent, assistant superintendent, principal, vocational director, administrative specialist, or supervisor have completed a teacher evaluator training;
- The revised bill mandates that the State Board of Education must develop a standards based system for principals and assistant principals, which districts must conform to;
- Third grade reading guarantee assessments must either be value-added or vendor-approved assessments;
- ODE must provide a detailed report of school performance on evaluations to the General Assembly, and must accept comments for improvement from districts, which it passes on to the General Assembly;
- Exempts from collective bargaining all amendments made by the bill to 3319.111, 3319.112, 3319.113, 3319.114, 3319.115, and 3319.117;
- Permits a district to enter into an MOU with its union stipulating that the value-added progress dimension rating issued for 2014-2015 will not be used when making decisions regarding dismissal, retention, tenure, or compensation.
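The two weighting schemes in the first bullet of the list above amount to a simple weighted average. A hypothetical sketch, where the function name and the assumption that each component is scored on a common numeric scale are mine, not the bill's:

```python
def final_rating(sgm, performance, survey=None):
    """Combine evaluation components per the House version of SB 229.

    sgm, performance, survey: component scores on a common numeric scale
    (the common-scale assumption is for illustration only).
    """
    if survey is None:
        # Default framework: 50% student growth measures, 50% performance rating
        return 0.5 * sgm + 0.5 * performance
    # Alternative "student survey" framework: 40% SGM / 40% performance / 20% survey
    return 0.4 * sgm + 0.4 * performance + 0.2 * survey
```

Note how the alternative framework dilutes the SGM share from 50% to 40%, which is the practical effect of a district electing the student-survey option.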
SB229 Comparison Document