When public schools get more money, students do better

Beginning 40 years ago, a series of court rulings forced states to reallocate money for education, giving more to schools in poor neighborhoods with less in the way of local resources. Critics such as Eric Hanushek, an economist at the Hoover Institution, argued these decisions were simply "throwing money at schools." His research found that there was little correlation between how much schools spent and how well their students performed on tests.

It's a view still held by many politicians today, including Gov. Andrew Cuomo (D-N.Y.). "We spend more than any other state in the country," he said a year ago. "It ain't about the money. It's about how you spend it -- and the results."

More recent research, however, has found that when schools have more money, they are able to give their students a better education. A new study of people who attended school during the school-finance cases of a few decades ago found that those in districts affected by the rulings were more likely to finish high school, more likely to attend college, and are earning more money today.

The authors, Kirabo Jackson and Claudia Persico of Northwestern University and Rucker Johnson of the University of California, Berkeley, released a revised draft of their as-yet-unpublished paper this week. The benefits were most obvious for students from poor families. They found that a 10 percent increase in the money available for each low-income student resulted in a 9.5 percent increase in students' earnings as adults. A public investment in schools, they wrote, returned 8.9 percent annually for a typical pupil who started kindergarten in 1980.

The findings are evidence that public schooling can be a way for children who grow up in poverty to overcome their circumstances, Johnson argued.

(Read more at the Washington Post).

Incorporating Stakeholder Feedback when Developing Value-Added Models

In light of the ODE report suggesting we're over-testing students, this study titled "Anticipating and incorporating stakeholder feedback when developing value-added models" offers further means to address the explosion in testing.
Abstract: State and local education agencies across the United States are increasingly adopting rigorous teacher evaluation systems. Most systems formally incorporate teacher performance as measured by student test-score growth, sometimes by state mandate. An important consideration that will influence the long-term persistence and efficacy of these systems is stakeholder buy-in, including buy-in from teachers. In this study we document common questions from teachers about value-added measures and provide research-based responses to these questions.
The study found four key issues that consistently came up with regard to the use of value-added for teacher evaluations:
1. Differentiated Students. How can the model deal with a teacher who has students who are different for some reason (e.g., poverty, special education, etc.)? Will that teacher be treated unfairly by the model?
2. Student Attendance. Will teachers be held accountable for students who do not regularly attend class?
3. Outside Events and Policies. How can the model account for major events (e.g., school closings for snow) or initiatives (e.g., Common Core implementation) that impact achievement?
4. Ex Ante Expectations. Why can’t teachers have their predicted scores – the target average performance levels for their students – in advance?
These questions persist today and remain largely unanswered.

Here's the full report.

ODE thinks we've been over-testing kids by 20%

The Ohio Department of Education was tasked with producing a report detailing the amount of testing being performed in K-12 public schools. You can read the report itself below.

The report finds the following (total testing time for the average student in a school year, in hours):

Grade          Hours
Kindergarten   11.3
1              11.6
2              13.6
3              28.0
4              24.0
5              22.6
6              22.3
7              21.1
8              23.0
9              20.4
10             28.4
11             18.9
12             12.2
Total          257.4
Average        19.8
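As a sanity check on the report's arithmetic, the total and per-grade average can be recomputed directly from the table above (figures copied from the table; grade labels are shortened for brevity):

```python
# Testing hours per grade, as reported by ODE (see table above).
hours = {
    "K": 11.3, "1": 11.6, "2": 13.6, "3": 28.0, "4": 24.0,
    "5": 22.6, "6": 22.3, "7": 21.1, "8": 23.0, "9": 20.4,
    "10": 28.4, "11": 18.9, "12": 12.2,
}

total = sum(hours.values())   # hours summed across all 13 grade levels
average = total / len(hours)  # per-grade average

print(round(total, 1))    # 257.4 -- matches the report's total
print(round(average, 1))  # 19.8  -- matches the report's average
```

The numbers check out: 257.4 hours in total across a K-12 career, averaging 19.8 hours per grade per year.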

That's a lot of testing, and as the report notes, the figure is not fully comprehensive.

ODE goes on to provide 8 action steps being taken, and ends with a number of recommendations, including:

This report includes a comprehensive package of legislative recommendations to shorten the amount of time students spend taking tests. These recommendations place limits on the overall time students spend taking tests each year, eliminate unnecessary tests and modify the Ohio Teacher Evaluation System. The following recommendations are contingent on each other and would require implementation as a comprehensive set of reforms. If this package of recommendations is adopted, the state can reduce the amount of time students are taking tests by nearly 20 percent.
This is a clear admission that the testing regime in Ohio has gotten out of hand, by at least 20 percent.

To get there, ODE lays out the following recommendations:
Recommendation 1: Limit the amount of time a student takes tests at the state and district levels to 2 percent of the school year, and limit the amount of time spent practicing for tests to 1 percent of the school year. These limits will encourage the state and districts to prioritize testing and guarantee to students and parents that the vast majority of time in the classroom will focus on instruction, not testing.
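Recommendation 1's percentage caps can be translated into rough hour figures. The school-year length used below is a hypothetical round number for illustration only; Ohio's required instructional time varies by grade level:

```python
# Rough translation of Recommendation 1's caps into hours.
# ASSUMPTION: 1,000 instructional hours per year is illustrative;
# the actual required minimum varies by grade level.
school_year_hours = 1000

testing_cap = 0.02 * school_year_hours   # cap on state + district testing
practice_cap = 0.01 * school_year_hours  # cap on test practice time

print(testing_cap)   # 20.0 hours of testing per year
print(practice_cap)  # 10.0 hours of test practice per year
```

Under that assumption, the 2 percent cap works out to about 20 hours a year, close to the 19.8-hour per-grade average ODE reported in the table above.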


Recommendation 2: Eliminate the fall third-grade reading test and administer the test in the spring. Students who do not reach the required promotion score on the spring test will have a second opportunity to take the test in the summer.


Recommendation 3: Eliminate the state’s requirement that districts give mathematics and writing diagnostic tests to students in first grade through third grades.


Recommendation 4: Eliminate the use of student learning objective tests as part of the teacher evaluation system for grades pre-K to 3 and for teachers teaching in non-core subject areas in grades 4-12. The core areas are English language arts, mathematics, science and social studies. Teachers teaching in grades and subject areas in which student learning objectives are no longer permitted will demonstrate student growth through the expanded use of shared attribution, although at a reduced level overall. In cases where shared attribution isn’t possible, the department will provide guidance on alternative ways of measuring growth.
That last recommendation is a huge admission. Teachers and administrators have been heavily burdened by developing and deploying SLOs, while students have received little to no benefit from them. Here's the full report.

Supt Ross Report on Testing

On Charter Schools, Auditor Yost Makes Promising Noises

During his swearing in, Auditor Yost had this to say:
Charter schools will remain a major focus in Auditor Dave Yost's second term, the Republican announced Monday.

Mr. Yost, who was sworn in during a morning Statehouse ceremony, told reporters that while he plans to also prioritize public records, board accountability and data integrity, charter schools will be "front and center" during the first year of his new term.

"We audit every charter school now, if the legislature chooses to give us additional tools or greater responsibility we'll do that - we'll be part of that discussion too," he said. "I think there's some things that need to be addressed. There's multiple ways of doing it and that debate will unfold and I'll be part of it over the next few months."

The auditor launched an investigation into 19 charters managed by Concept Schools last year. (See Gongwer Ohio Report, October 14, 2014)

In addition to charter schools, Mr. Yost said he intends to focus on board accountability, saying he's "very concerned about the uneven quality of the unpaid boards that we charge with supervising various public functions."

"I'm not sure the lines of responsibility are sufficiently clear or that boards are being given the tools they need to succeed," he said. "And that goes from a small charter school board acting in a public behalf all the way up to large institutions spending millions of dollars."
That's good to hear. However, we are still waiting for his report on Horizon schools. He seemed much swifter, and more public, with his investigation of data tampering at Columbus City Schools than he has been with this important investigation. A quick look at recent campaign finance reports shows that Yost has been a significant beneficiary of contributions from the charter sector, taking in almost $50,000 from David Brennan (White Hat) and William Lager (ECOT) alone. Only time will tell whether that was a sound investment by the for-profit charter folks.

STUDY: Teachers Find No Value in the SAS Education Value-Added Assessment System

A new study published in the education policy analysis archives titled "Houston, We Have a Problem: Teachers Find No Value in the SAS Education Value-Added Assessment System (EVAAS®)" looks at the use of value-added in the real world. Its findings are not shocking, but continue to be troubling as we enter a high-stakes phase of deployment.
Today, SAS EVAAS® is the most widely used VAM in the country, and North Carolina, Ohio, Pennsylvania and Tennessee use the model state-wide (Collins & Amrein-Beardsley, 2014). Despite widespread popularity of the SAS EVAAS®, however, no research has been done from the perspective of teachers to examine how their practices are impacted by this methodology that professedly identifies effective and ineffective teachers. Even more disconcerting is that districts and states are tying consequences to the data generated from the SAS EVAAS®, entrusting the sophisticated methodologies to produce accurate, consistent, and reliable data, when it remains unknown how the model actually works in practice.
As you can see, the findings here are directly relevant to educators in Ohio. The report looked at a number of factors, including reliability, which once again proves to be anything but:
As discussed in related literature (Baker et al., 2010; Corcoran, 2010; EPI, 2010; Otterman, 2010; Schochet & Chiang, 2010) and preliminary studies in SSD (Amrein-Beardsley & Collins, 2012), it was evident that inconsistent SAS EVAAS® scores year-to-year were an issue of concern. According to teachers who participated in this study, reliability as measured by consistent SAS EVAAS® scores year-to-year was ironically, an inconsistent reality. About half of the responding teachers reported consistent data whereas the other half did not, just like one would expect with the flip of a coin (see also Amrein-Beardsley & Collins, 2012).

Reliability Implications
Unless school districts could prevent teacher mobility and ensure equal, random student assignment, it appears that EVAAS is unable to produce reliable results, at least greater than 50% of the time.
A random number generator isn't an appropriate tool for measuring anything, let alone educator effectiveness that might lead to high-stakes career decisions.
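The coin-flip comparison can be made concrete. A minimal simulation sketch, assuming a hypothetical binary effective/ineffective rating assigned independently at random each year: roughly half of teachers would see "consistent" ratings across two years, which matches what the surveyed teachers reported.

```python
import random

random.seed(0)  # reproducible illustration

n_teachers = 100_000  # hypothetical teacher population

# Assign each teacher an independent random binary rating in two
# consecutive years and count how often the two years agree.
consistent = sum(
    (random.random() < 0.5) == (random.random() < 0.5)
    for _ in range(n_teachers)
)
print(consistent / n_teachers)  # ~0.5: chance-level "consistency"
```

In other words, the roughly 50/50 split in teachers' reported score consistency is exactly what a purely random rating scheme would produce.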

Furthermore, the study found that, despite claims to the contrary, the SAS formula for calculating value-added is highly dependent on the student population:

teachers repeatedly identified specific groups of students (e.g., gifted, ELL, transition, special education) that typically demonstrated little to no SAS EVAAS® growth. Other teachers described various teaching scenarios such as teaching back-to-back grade levels or switching grade levels which negatively impacted their SAS EVAAS® scores. Such reports contradict Dr. Sanders’ claim that a teacher in one environment is equally as effective in another (LeClaire, 2011).
In conclusion, the study finds:
The results from this study provide very important information of which not only SSD administrators should be aware, but also any other administrators from districts or states currently using or planning to use a VAM for teacher accountability. Although high-stakes use certainly exacerbates such findings, it is important to consider and understand that unintended consequences will accompany the intended consequences of implementing SAS EVAAS®, or likely any other VAM. Reminiscent of Campbell’s law, the overreliance on value-added assessment data (assumed to have great significance) to make high-stakes decisions risks contamination of the entire educational process, for students, teachers and administrators (Nichols & Berliner, 2007). Accordingly, these findings also strongly validate researchers’ recommendations to not use value-added data for high-stakes consequences (Eckert & Dabrowski, 2010; EPI, 2010; Harris, 2011). While the SAS EVAAS® model’s vulnerability as expressed by the SSD EVAAS®-eligible teachers is certainly compounded by the district’s high-stakes use, the model’s reliability and validity issues combined with teachers’ feedback that the SAS EVAAS® reports do not provide sufficient information to allow for instructional modification or reflection, would make it seem inappropriate at this point to use value-added data for anything.
The full study can be read below.


(c) Join the Future