Teachers experience online testing fiasco

Diane Ravitch published a letter from a New Hampshire school principal. The principal's teachers had just trialed the online Smarter Balanced tests. (In Ohio we'll be using PARCC, not Smarter Balanced, but many of the same problems are expected.) Needless to say, it didn't go well.

This communication is to share the sentiment of the FMS staff as it relates to the Smarter Balance Test. As you know, our staff used the December Early Release to take this test, with the goal that it would provide us with some insight into how we might incorporate the “common core” and the format of the Smarter Balance Test into our instructional practices. We believe that we successfully incorporated the NECAP Test format and GLEs into our teaching practices, especially in our “Bell Activities.”

Although a few staff members shared that they believed our test scores would improve over time, I was surprised by the responses from the FMS teachers when we gathered to debrief after taking the test. I was hopeful that we would have teachers sharing test vocabulary, ideas for test taking, and strategies to help prepare our students for the 2015 Smarter Balance Test. Instead, teachers shared the frustrations they had while taking the test, their disappointment in the test format, and the difficulties they had trying to use their computers to take this test.

The comments shared below come from successful, dedicated veteran teachers. I have much respect for this staff, and I not only appreciate the honesty of the staff in their response to the Smarter Balance Test, but am hopeful that the Nashua School District will accept these responses in a positive way and not look at the comments as “negative” or “unprofessional.” The FMS staff collectively believe that the Smarter Balance Test is inappropriate for our students at this time and that the results from this test will not measure the academic achievement of our students, but will instead be a test of computer skills and students’ ability to endure a cumbersome task.

Listed below are some of the concerns that were shared by our staff:

*I feel sad for the students who have to take this test — not many will be successful.

*Much is said about “depersonalizing” information as part of a learning strategy. This is not how students learn.

*There is too much “stuff” going on on the screen at once. It is difficult to move the icons where you want them. Students don’t know how to use a mouse; everything for them is “touch screen.” If you leave the screen for a short period of time, the information on the screen will be gone when you return. I tried the sixth-grade math—it was humbling. It was scary.

*I had technology problems. If kids have these problems they’ll just quit.

*Double-wide monitors would help. I am a huge fan of concept maps, but the notepad does not let you do that on Smarter Balance. You can’t even copy and paste from the notepad into the test.

*This was more a test of computer skills than of math concepts. If I were a student I would just pick out an answer and move on.

*Too tedious—kids will get sick of it and just guess to move on. Kids won’t even get past the computer directions.

These are just a sample of the concerns that were raised at this meeting. We did shift to “What do we have to do from now until the spring of 2015 to prepare students?” Sample answers include:

*Pay attention to the directions. Provide students with many opportunities to read directions for their assignments.

*You can’t just read this test and then respond. Students need to highlight and take notes—especially during the audio questions.

*Students need to learn to “read the question first”.

*Students need to be able to go back into the text passages to pull out data that will support their answers.

*Students need to read through the questions and all possible answers. Sometimes questions give the answers to other questions in the test.

*Kids need to know how to do “note taking”.

*We need to teach students “how to draw an inference”.

*Students need to learn how to write a transition sentence between two paragraphs.

*Students need to learn how to write using the speaker’s voice.

*Students will need to memorize formulas for this test.

*Students will have difficulty writing in the boxes that expand because of the way the box expands.

*Students will have trouble reading and understanding the directions and what is being asked by the question. Is this test closely aligned to the “common core”? It is important that teachers know what the test will be assessing.

*I am concerned that the math test is not necessarily testing students’ math abilities since there is so much reading. This test seems to assess how well the students read the math questions more than their math skills. Thus, because of the amount of reading, I question the validity of our receiving a math ability score.

*When Measured Progress developed the NECAP, there was a committee to check for testing bias. Does Smarter Balance do the same? Also, math teachers were asked to evaluate the questions to eliminate unnecessary verbiage so that the math itself was being tested.

*The opening pages of directions and computer information were ridiculous. I didn’t read them—I’m sure my students won’t. Suggestions: We should have posters made of the most important and most often used keys to post in each math classroom. Students need to practice making equations in Word, including the fraction symbol. We need to teach students to distinguish between one correct answer and many correct answers. There are questions that tell the students to choose the correct answers.

*The test is difficult to navigate, with so many keystrokes to juggle.

*The page layout is eye-wearying, even though you can expand the screen and zoom in and out.

*The passages are lengthy and time consuming, and made me consider just choosing “B” so I could move on. Some terms in the reading seemed outdated—“plumb crazy” and “millwright,” for example.

*I had to use multiple skills and multitask at the same time—the audio portions required me to listen while reading possible answers and constructing a well-written paragraph in my head.

*The test assumes the students are skilled in such areas as pre-reading the questions, and if they are not, it assumes they will learn, while taking the test, to read the questions in advance of the reading.

*There wasn’t a flow or cadence to the questions. The type or style of question changed from one to the next. The answers were not straightforward—for example, on the math test they did not want the answer to the equation; they wanted to know if the answer was 2/3 greater than what you started with. I understand this is important, but this test will be exhausting for the kids.

*The idea of “the best answer” when there are two or more good and appropriate answers felt like a trick. We’re going to look bad for a few years.

*I did 30 questions in an hour and then had to take a break. My eyes hurt and my shoulders felt strained. When I returned 5 minutes later the work was gone.

*Each question is totally different from the one before it, which creates confusion for the test takers.

*Frustration builds as you take the test, creating mental despair—students will shut down.

*Many of the math questions seemed to have no basis in the real world, testing skills that will never be used in life. Students will need to be taught the technology skills for the test—scrolling through screens, highlighting, scanning the questions, touch typing, and more.

*The test does not encourage students to use writing webs, brain maps, or organizers to assist with writing.

Summary: In my opinion, this test is a sad indictment of how disconnected the people who design the test are from the typical students in the classroom. Assessment is necessary, but it should be designed to be developmentally appropriate for the students being tested. Assessment should also allow for different methods to demonstrate competency rather than one computer model. This test is designed for one type of student—the verbal learner with exceptional executive functioning skills.

*I took the Grade 7 Language Arts test, which I believe is developmentally designed for adults, not seventh grade students. The questions were tedious and punitive. I’m not sure that any seventh grader in the state would be able to score well on this test. The worst part of this test was the directions. They were numerous and multifaceted. After observing middle school students take tests for over a decade, it is my firm belief that most kids will stop reading the directions because there are too many and they are far too complex. Students will fail this test, and the test will destroy their confidence at an important stage of their development. In addition, the results of this test will become a public relations nightmare for the schools and school districts, as children will fail in large numbers.


Public schools are neither cars nor boxes of cereal - let's stop treating them like they are

Via Alternet

We Americans love choice. Just look at the cereal aisle in Giant Eagle. You could choose a different box every day of the month and still have more varieties left to try. But public schools are not corn flakes. Here’s the problem with “choice” when we’re talking about public education.

When we’re in the cereal aisle, we are consumers looking for our favorite brand, the best price, or perhaps grabbing a box of sugar-filled junk with a toy surprise inside to appease our screaming two-year-old who won’t stay in the cart (been there). But schools are public goods, not consumer goods. Think about other public goods and services that you use, such as public safety. We don’t want to choose from among different police providers; we want our local police department to be great: to offer high-quality service that meets the needs of our local community.

We don’t need more choices in public education. We need great public schools in every community, schools that any parent would be happy to send their children to and that meet the needs of local families. We don’t really have any choice at all if our local public school is not a high-quality option.

Choice is a free market ideology. Markets do a good job making stuff and selling it. But they also create extreme inequality, with winners and losers. Choice alone doesn’t guarantee quality: you can stick five kinds of dirt in those cereal boxes and offer them as a “choice,” but nobody wants to eat that. Pennsylvania teacher and blogger Peter Greene compares school choice to the drive to mediocrity in the cable TV industry and explains, “Market forces do not foster superior quality. Market forces foster superior marketability.”

The parent-as-consumer model promotes school choice as an individual choice, abrogating our responsibility as citizens to provide great public schools for all children. Public schools are community institutions that must meet the needs of communities.



Poll conducted by corporate ed backers backfires

The Friedman Foundation for Educational Choice, a corporate education booster, ran a poll. They didn't exactly get the results they were hoping for:
How does a policy of school choice compare to other reform initiatives in their perceived efficacy for school improvement?
Figure 8 (below) includes the average perceived efficacy for each type of school reform after controlling for covariates. School choice in the form of vouchers is in the middle of the pack, with smaller class sizes, technology, and accountability perceived as more efficacious, and reducing teachers’ unions’ influence, merit pay, and longer school days as less efficacious.

Even after an attempt to goose the results (p. 5: "source comes from a poll conducted by the Friedman Foundation that included a nationwide sample and an oversampling of mothers of school-age children ('school moms')"), respondents didn't think much of merit pay, busting teachers unions, or vouchers. What they cared about most were smaller class sizes and better technology.

If those results weren't bad enough for our corporate reformers, there was more bad news in their poll:

Only 29% of respondents liked the idea of taxpayer-funded vouchers to pay for private schools.


Too Many Bad Choices

The school "choice" expansion in Ohio from $0 to $1.1 billion annually is having a draining effect on traditional public schools. With most of money lost to lower performing, unregulated charter schools. This rapid unregulated expansion is also taking a tremendous toll on "choice" schools themselves.

The Columbus Dispatch reported in January that 17 charter schools in the city had failed this year alone:

Nine of the 17 schools that closed in 2013 lasted only a few months this past fall. When they closed, more than 250 students had to find new schools. The state spent more than $1.6 million in taxpayer money to keep the nine schools open only from August through October or November.

The problem is not isolated to Columbus, as the Toledo Blade reports:
Secor Gardens Academy’s short tenure in Toledo came to an abrupt end, as the school closed its doors over the weekend because of financial struggles.

The charter school, which opened in the fall, was based in the back of the Armory Church, 3319 Nebraska Ave. School Superintendent Samuel Hancock and others involved with the school realized the school’s finances had become untenable, said James Lahoski, superintendent of North Central Ohio Educational Service Center, the school’s sponsor.
Comically, the leader of this "school" couldn't even provide an enrollment figure. According to the Department of Education's FY 2014 Detail Funding Report, this charter was receiving $220,952.70 for FY 2014. We wonder if the leader of this school is unable to account for that, too.

According to the Dispatch, 29% of Ohio's charter schools have closed, and the pace of opening and closing is accelerating:

It took 15 years for Ohio’s list of closed charters to reach 134; then that number grew by almost 13 percent last year from charters closing in Columbus alone.
We expect the acceleration to continue. As the Cincinnati Enquirer notes, forty-five new charter schools opened in Ohio this academic year, but they drew only 600 new students. That isn't sustainable. Schools cannot properly run with just a handful of students attending them, and they certainly shouldn't be run out of the backs of churches. Students need a stable learning environment, with quality facilities, in order to thrive.

Charter schools in Ohio have been allowed to expand faster than the pool of students wishing to attend them, and oversight of the quality of the schools being opened has been cast aside.

The school choice movement and the legislature in Ohio need to step up, put an end to this train wreck, and take a long, hard look at the charter school authorizing process.


Teachers to lose $250 school supplies tax break

According to a survey by the National School Supply and Equipment Association, teachers spent $485 of their own money, on average, on supplies during the 2012-13 school year.

Most teachers say spending personal money on supplies is just part of the job. But while teachers could deduct as much as $250 of that from their taxable income in the past, they won't be able to going forward.

"We love our jobs too much just to worry about that tax thing," Foster said. "We're going to do our jobs whether we get the tax break or not."

"You need supplies to make sure to enhance what your kids are learning," he said. "If the parents don't have money to buy kids a lot of supplies, that's where teachers come in and make sure they have enough."

Congress let that tax break expire at the end of 2013 along with a bunch of others. Some lawmakers are working to bring it back, but for the moment, it's gone.

Mary Kusler, director of government relations for the National Education Association, said this isn't the first time the tax break has expired. Each time, it has come back, she said, and it probably will again.

Until it does, local teachers will make do with what they have.


Reliability and Validity of Inferences About Teachers Based On Student Test Scores

A report titled "Reliability and Validity of Inferences About Teachers Based On Student Test Scores," by Edward H. Haertel of Stanford University and published by the ETS Research & Development Center for Research on Human Capital and Education in Princeton, is yet another paper that casts grave doubt on the use of value-added measures for evaluating teachers:
Policymakers and school administrators have embraced value-added models of teacher effectiveness as tools for educational improvement. Teacher value-added estimates may be viewed as complicated scores of a certain kind. This suggests using a test validation model to examine their reliability and validity. Validation begins with an interpretive argument for inferences or actions based on value-added scores. That argument addresses (a) the meaning of the scores themselves — whether they measure the intended construct; (b) their generalizability — whether the results are stable from year to year or using different student tests, for example; and (c) the relation of value-added scores to broader notions of teacher effectiveness — whether teachers’ effectiveness in raising test scores can serve as a proxy for other aspects of teaching quality. Next, the interpretive argument directs attention to rationales for the expected benefits of particular value-added score uses or interpretations, as well as plausible unintended consequences. This kind of systematic analysis raises serious questions about some popular policy prescriptions based on teacher value-added scores.
The whole report, included below, is worth a read, or at least skip ahead to the conclusion:
My first conclusion should come as no surprise: Teacher VAM scores should emphatically not be included as a substantial factor with a fixed weight in consequential teacher personnel decisions. The information they provide is simply not good enough to use in that way. It is not just that the information is noisy. Much more serious is the fact that the scores may be systematically biased for some teachers and against others, and major potential sources of bias stem from the way our school system is organized. No statistical manipulation can assure fair comparisons of teachers working in very different schools, with very different students, under very different conditions. One cannot do a good enough job of isolating the signal of teacher effects from the massive influences of students’ individual aptitudes, prior educational histories, out-of-school experiences, peer influences, and differential summer learning loss, nor can one adequately adjust away the varying academic climates of different schools. Even if acceptably small bias from all these factors could be assured, the resulting scores would still be highly unreliable and overly sensitive to the particular achievement test employed. Some of these concerns can be addressed, by using teacher scores averaged across several years of data, for example. But the interpretive argument is a chain of reasoning, and every proposition in the chain must be supported. Fixing one problem or another is not enough to make the case.

Reliability and Validity of Inferences About Teachers Based On Student Test Scores
