growth

Why Test Scores CAN'T Evaluate Teachers

From the National Education Policy Center. The entire post is well worth a read; here's the synopsis:

The key element here that distinguishes Student Growth Percentiles from some of the other things that people have used in research is the use of percentiles. It's there in the title, so you'd expect it to have something to do with percentiles. What does that mean? It means that these measures are scale-free. They get away from psychometric scaling in a way that many researchers - not all, but many - say is important.

Now, these researchers are not psychometricians, and they aren't arguing against the scale. The psychometricians who create our tests create a scale, and they use scientific formulae and theories and models to come up with it. It's like on the SAT: you can score between 200 and 800, and the idea there is that the difference in the learning or achievement between a 200 and a 300 is the same as between a 700 and an 800.

There is no proof that that is true. There can't be any proof that it is true. But if you believe their model, then you would agree that it's a good estimate to make. There are a lot of people who argue... they don't trust those scales. And they'd rather use percentiles because it gets them away from the scale.

Let's state this another way so we're absolutely clear: there is, according to Jonah Rockoff, no proof that a gain on a state test like the NJASK from 150 to 160 represents the same amount of "growth" in learning as a gain from 250 to 260. If two students have the same numeric growth but start at different places, there is no proof that their "growth" is equivalent.

Now there's a corollary to this, and it's important: you also can't say that two students who have different numeric levels of "growth" actually grew by different amounts. I mean, if we don't know whether the same numerical gain at different points on the scale is really equivalent, how can we know whether one is actually "better" or "worse"? And if that's true, how can we possibly compare different numerical gains?
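The "scale-free" point can be made concrete with a small sketch (the scores, cohort, and the `stretch` transformation are all hypothetical, not real NJASK data): a percentile rank depends only on a student's ordering among peers, so any order-preserving re-stretching of the score scale leaves it unchanged, while "equal" raw-score gains do not survive such a transformation.

```python
# Sketch: percentile ranks are invariant under a monotonic rescaling of
# the test scale, while raw score gains are not. All scores are made up.

def percentile_rank(score, cohort):
    """Percent of the cohort scoring at or below `score`."""
    return 100.0 * sum(s <= score for s in cohort) / len(cohort)

cohort = [150, 160, 200, 250, 260, 300, 400, 500]

# A monotonic "re-stretch" of the scale: order-preserving, spacing-changing.
def stretch(s):
    return s ** 2 / 100.0

stretched = [stretch(s) for s in cohort]

# Percentile ranks survive the rescaling...
assert percentile_rank(250, cohort) == percentile_rank(stretch(250), stretched)

# ...but the "same" 10-point gain does not: 150->160 vs 250->260.
gain_low = stretch(160) - stretch(150)    # 31.0 on the stretched scale
gain_high = stretch(260) - stretch(250)   # 51.0 on the stretched scale
print(gain_low, gain_high)
```

If the true learning scale is any such stretch of the reported one, two identical numeric gains can represent very different amounts of growth, which is exactly the objection to scale-dependent measures.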

[readon2 url="http://nepc.colorado.edu/blog/why-test-scores-cant-evaluate-teachers"]Continue reading...[/readon2]

Senate budget - good, bad, ugly

The much anticipated Senate budget, when it comes to education policy, could be titled "The Good, the Bad and the Ugly". We've already discussed the ugly, let's take a look at the good and the bad.

The Good

The statewide parent trigger, proposed by the governor and eliminated by the house, is not proposed by the Senate either and appears dead, for now.

The Senate also includes a fix to HB555 and the onerous teacher evaluation provisions it contained. Here's what the fix proposes:

Prescribes that the student academic growth factor must account for 35% (rather than 50% as under current law) of each evaluation under the standards-based state framework for evaluation of teachers developed by the State Board of Education and permits a school district to attribute an additional percentage to the student academic growth factor, not to exceed 15% of each evaluation.

Specifies that, when calculating student academic growth for a teacher evaluation, students who have had 30 or more excused or unexcused absences for the school year must be excluded (rather than excluding students with 60 or more unexcused absences as under current law).

The Ohio Revised Code labels a student a chronic truant at 14 days of absence, so 30 days is still a high number of absences to allow, but it is certainly better than the ridiculous 60 days in current law. The reduction of the value-added share from 50% to 35% is also a welcome improvement.
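The arithmetic of the fix can be sketched as follows (the function names and illustrative component scores are ours, not from the bill): growth counts for 35% of the evaluation, a district may attribute up to 15 more percentage points to growth, and students with 30 or more absences are excluded from the growth calculation.

```python
# Sketch of the Senate fix's evaluation weighting (illustrative numbers,
# not from the bill text): student growth accounts for 35%, and a district
# may attribute up to 15 additional percentage points to growth.

def evaluation_score(growth, other, district_addon_pct=0):
    """Combine a student-growth score and an 'other measures' score (each 0-100)."""
    if not 0 <= district_addon_pct <= 15:
        raise ValueError("district add-on is capped at 15 percentage points")
    growth_weight = (35 + district_addon_pct) / 100
    return growth_weight * growth + (1 - growth_weight) * other

def included_in_growth(absences):
    """Senate fix: exclude students with 30 or more absences for the year."""
    return absences < 30

# A district attributing the full 15 points is back at current law's 50/50 split.
print(evaluation_score(growth=80, other=90, district_addon_pct=15))  # 85.0
print(included_in_growth(29), included_in_growth(30))  # True False
```

The sketch makes the practical effect visible: the 35% floor only lowers the growth weight for districts that choose not to add it back.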

The Governor proposed eliminating the single salary schedule, and the House concurred. The Senate, however, strikes this proposal from their budget. We suspect there will be pressure applied to put this back in. Educators and support professionals should continue to apply their own pressure on legislators to keep it out.

The Senate also eliminated the home-school freeloading provision the House added that would have allowed home schoolers to participate in district extra-curricular activities at no expense.

The Bad

The Governor proposed a massive statewide voucher expansion effort, the House concurred, and the Senate has left the proposal in too. With massive opposition to this proposal we were a little surprised the Senate left this unnecessary proposal in their budget.

Charter schools get a number of additional free passes from the Senate, including an e-school exemption from phys ed requirements, an additional qualifying condition for vouchers, and a provision that would make charter school closures more difficult, as LSC notes it "May be more difficult to close community schools after July 1, 2013 (compared with current law after that date)." The Senate also eliminates a charter school teacher quality provision for charters populated primarily by students with disabilities. A number of other smaller provisions setting charter schools on a longer path to failure are also proposed by the Senate, such as:

Exempts students of chartered nonpublic schools accredited through the Independent School Association of the Central States from passing the end-of-course examinations as a prerequisite for graduation from high school.

The Charter school business doesn't contribute millions of dollars a year to Republican politicians for nothing.

The challenging

The Senate adds a new levy type aimed at school safety:

Authorizes school districts to levy a property tax exclusively for school safety and security purposes. Requires the levy to comply with the same requirements that apply to general school district levies in excess of the 10-mill limitation.

A well-intentioned proposal aimed at lowering violence in schools, but there should be concern that a safety levy might reduce local taxpayers' appetite for funding levies for normal school operations, the core purpose of schools themselves. School districts will have to be mindful in how they approach this issue.

Here's the full comparative document for the education section of the budget:

Senate Sub HB59

The Foolish Endeavor of Rating Ed Schools by Graduates’ Value-Added

Via School Finance 101.

Knowing that I’ve been writing a fair amount about various methods for attributing student achievement to their teachers, several colleagues forwarded to me the recently released standards of the Council For the Accreditation of Educator Preparation, or CAEP. Specifically, several colleagues pointed me toward Standard 4.1 Impact on Student Learning:

4.1. The provider documents, using value-added measures where available, other state-supported P-12 impact measures, and any other measures constructed by the provider, that program completers contribute to an expected level of P-12 student growth.

http://caepnet.org/commission/standards/standard4/

Now, it’s one thing when relatively under-informed pundits, think tankers, politicians and their policy advisors pitch a misguided use of statistical information for immediate policy adoption. It’s yet another when professional organizations are complicit in this misguided use. There’s just no excuse for that! (political pressure, public polling data, or otherwise)

The problems associated with attempting to derive any reasonable conclusions about teacher preparation program quality based on value-added or student growth data (of the students they teach in their first assignments) are insurmountable from a research perspective.

Worse, the perverse incentives likely induced by such a policy are far more likely to do real harm than any good, when it comes to the distribution of teacher and teaching quality across school settings within states.

First and foremost, the idea that we can draw this simple line below between preparation and practice contradicts nearly every reality of modern day teacher credentialing and progress into and through the profession:

one teacher prep institution –> one teacher –> one job in one school –> one representative group of students

The modern day teacher collects multiple credentials from multiple institutions, may switch jobs a handful of times early in his/her career and may serve a very specific type of student, unlike those taught by either peers from the same credentialing program or those from other credentialing programs. This model also relies heavily on minimal to no migration of teachers across state borders (well, either little or none, or a ton of it, so that a state would have a large enough share of teachers from specific out of state institutions to compare). I discuss these issues in earlier posts.

Setting aside that none of the oversimplified assumptions of the linear diagram above hold (a lot to ignore!), let’s probe the more geeky technical issues of trying to use VAM to evaluate ed school effectiveness.

There exist a handful of recent studies which attempt to tease out certification program effects on graduates' students' outcomes, most of which encounter the same problems. Here's a look at one of the better studies on this topic.

  • Mihaly, K., McCaffrey, D. F., Sass, T. R., & Lockwood, J. R. (2012). Where You Come From or Where You Go?

Specifically, this study tries to tease out the problem that arises when graduates of credentialing programs don’t sort evenly across a state. In other words, a problem that ALWAYS occurs in reality!

Researchy language tends to downplay these problems by phrasing them only in technical terms and always assuming there is some way to overcome them with a statistical tweak or two. Sometimes there just isn't, and this is one of those times!

[readon2 url="http://schoolfinance101.wordpress.com/2013/02/25/revisiting-the-foolish-endeavor-of-rating-ed-schools-by-graduates-value-added/"]Continue reading...[/readon2]

Improving the Budget Bill Part I

HB 59, the Governor's budget bill, can be significantly improved during the legislative process. We're going to detail some of the ways improvements can be made.

Improvements can first start by correcting a major policy flaw inserted into HB555 at the last minute. HB 555 radically changed the method of calculating evaluations for about 1/3 of Ohio's teachers. If a teacher's schedule consists only of courses or subjects for which the value-added progress dimension is applicable, then only their value-added score can now be used as the 50% of an evaluation based on student growth. Gone is the ability to use multiple measures of student growth, i.e. Student Learning Objectives (SLOs).

Therefore we suggest the legislature correct this wrong-headed policy by repealing this provision of HB555.

Furthermore, greater evaluation fairness could be achieved by lowering the number of absences a student is allowed before their test scores can be excluded from a teacher's value-added score. Currently a student needs to be absent 60 times, or 1/3 of a school year. This is an absurd amount of schooling to miss and still have that student's score count towards the evaluation of his or her teacher. This absence exclusion should be lowered to a more reasonable 15 absences.

Value-add should not be used to punish teachers on evaluations, instead it should be just one component of a multiple measure framework, and a tool to help teachers improve student learning. HB555 moved us much further away from that goal.