Wednesday, July 20, 2011

The grand coalition against teachers - part 2

In part 1, we were looking at an op-ed by Joanne Barkan, originally published in Dissent Magazine and passed along by NYSUT as worthy of a read. It's a long article, so I was hitting the high points. The first two were:

1) Research shows that teachers are the most important in-school factor in student achievement. Research also shows that out-of-school factors weigh twice as much as in-school factors.

2) Education reformers have used international test results to stampede Americans toward school changes, with most of these changes centered on getting rid of "bad teachers." Do this, they say, and all will be well with our schools. Our schools, however, serve far more students in poverty (a child poverty rate of 20.7% in the U.S. versus 3.4% in Finland). When you correct for poverty, our students are right up there with the best in the world.

Let's look at some additional points:

3) Reformers want to use VAM (value-added modeling) to measure the quality of a teacher's work. It produces a numerical score based on students' performance on standardized tests. (New York State will use this number as 40% of a teacher's evaluation.) It sounds simple: every teacher gets a number generated by a complex formula.

How do the researchers--the people who use numbers on a daily basis in their work--feel about this? "So far, the consensus judgment of the research community is not positive. Experts at the National Research Council of the National Academy of Sciences, the National Academy of Education, RAND, and the Educational Testing Service have repeatedly warned policy makers against using test scores to measure teacher effectiveness.... In a 2009 report to the U.S. Department of Education, the Board on Testing and Assessment of the National Research Council wrote, “Even in pilot projects, VAM estimates of teacher effectiveness that are based on data for a single class of students should not be used to make operational decisions because such estimates are far too unstable to be considered fair or reliable.”" Oops!


"Yet reformers have not only made this approach the cornerstone of their project, they’ve successfully sold the idea to politicians across the country who are rushing to write it into state laws. And the public is going along....VAM has the appeal of being mathematical, complex, and data based. It’s the kind of technical fix that sounds convincing; it readily wins hearts, not minds."


For those of you who would like to "look under the hood" of VAM, Barkan provides an "Optional Introduction to VAM," explaining just how it works. Here's just a tiny bit:


"In education, “growth models” typically compare a student’s test score one year to previous test scores in the same subject. Value-added models are a type of growth model that statisticians use in education to estimate how much Teacher X added to the learning of her or his students over the course of a school year in one subject."


"The most commonly used variant of VAM compares the average score of a teacher’s students (in, say, fourth-grade math) at the end of the school year to the average score of the same students at the end of the previous year in the same subject. The difference between the two scores is the “actual growth” of the teacher’s students. Their actual growth is then compared to what is called their “expected growth,” which is the average growth of a comparison group....The difference between the actual growth and expected growth of a teacher’s students is the teacher’s value-added score. "


"VAM has serious flaws. First of all, the tests don’t account for the fact that the specific content in a subject changes from year to year....Year-to-year scaling is extremely complicated even in a subject like elementary school reading. It’s impossible when the skill set changes from something like algebra to an entirely different skill set like geometry....VAM has still other shortcomings. How do you calculate the value-added score in team teaching? How do you account for the effects of outside tutoring that only some students receive or (depending on when the tests are given) the widely differing gains and losses in learning over the summer? How should students who transfer into a class midyear be counted?"


"As an accurate and therefore useful tool to measure teacher effectiveness, VAM fails. Regrettably, this gives reformers no pause."


Florida has a new law (SB 736) that will require a VAM score to be used as 50% of a teacher's evaluation. Even those who once thought this a good idea are beginning to have second thoughts: "On April 4, 2011, Frederick Hess, director of Education Policy Studies at the conservative American Enterprise Institute (AEI) and a tireless ed-reform advocate, wrote this about Florida’s law in his Education Week blog:

"I’ll bet right now that SB 736 is going to be a train wreck. Mandatory terminations will force some good teachers out of good schools because of predictable statistical fluctuations, and parents will be livid. Questions about cheating will rear their ugly head. A thrown-together growth model and rapidly generated tests, pursued with scarce resources and under a new Commissioner, are going to be predictably half-baked and prone to problems."

We'll take a look at additional points in the Barkan piece in future posts. Here is the link to the full article:

http://dissentmagazine.org/online.php?id=504

Feel free to post your thoughts.
