Yesterday, a colleague and coauthor on two recent articles – Kevin Welner (U. of Colorado) – wrote a scathing critique of the manifesto on fixing urban schools that was released last week by several large city superintendents.
Kevin Welner’s commentary can be found here: http://voices.washingtonpost.com/answer-sheet/guest-bloggers/manifesto-should-be-resignatio.html
The manifesto can be found here: http://www.washingtonpost.com/wp-dyn/content/article/2010/10/07/AR2010100705078.html
Kevin Carey notes in his critique of Kevin Welner:
I highlight this because it’s crucial to understanding the worst intellectual pathologies of the education establishment. People like Welner don’t just think that Joel Klein, Michelle Rhee, Andres Alonso, and Arlene Ackerman are making bad decisions in the course of helping poor children learn. Welner believes that by asserting that poor children can learn, the superintendents are hurting the cause of making poor children less poor. While many people believe this, most choose not to say it so clearly.
I urge you to take a look at what Kevin Welner actually said in his commentary. The centerpiece of Kevin Welner’s argument was that the superintendents and others behind the manifesto were making a strong sales pitch for fast-tracking education reform strategies for which the research base is mixed at best. Kevin Welner asks:
Are these adults acting responsibly when they advocate for even more test-based accountability and school choice? Over the past two decades, haven’t these two policies dominated the reform landscape – and what do we have to show for it? Wouldn’t true reform move away from what has not been working, rather than further intensifying those ineffective policies? Are they acting responsibly when they promote unproven gimmicks as solutions?
Are they acting responsibly when they do not acknowledge their own role in failing to secure the opportunities and resources needed by students in their own districts, opting instead to place the blame on those struggling in classrooms to help students learn?
And Kevin Welner summarizes the manifesto as follows:
Move money from neighborhood schools to charter schools!
Make children take more tests!
Move money from classrooms to online learning!
Blame teachers and their unions – make them easier to fire!
Tie teacher jobs and salaries to student test scores!
None – literally NONE – of these gimmicks is evidence-based.
I tend to agree that the findings on expansion of charters are mixed at best, and that tying teacher ratings to test scores is deeply problematic. Perhaps what irked Kevin Carey most here is that he has convinced himself, through exceedingly flimsy logic, that he, Kevin Carey, is right, and that the other Kevin, Kevin Welner, is simply wrong on these points. Allow me to bring you back to a series of recent comments by Kevin Carey that display his completely distorted understanding of research on charters (and its implications for policy) and of the usefulness of value-added modeling for rating teachers.
Kevin Carey on Charters
Here’s a recent quote from Kevin Carey, attacking the civil rights framework on whether the evidence supports expansion of charter schools.
Here’s the problem: the contention that charters have “little or no evidentiary support” rests on studies finding that the average performance of all charters is generally indistinguishable from the average regular public school. At the same time, reasonable people acknowledge that the best charter schools–let’s call them “high-quality” charter schools–are really good, and there’s plenty of research to support this.
I have noted previously, here, that I find this to be one of the most patently stupid arguments I’ve seen in a long time.
To put it in really simple terms:
THE UPPER HALF OF ALL SCHOOLS OUTPERFORM THE AVERAGE OF ALL SCHOOLS!!!!!
or … Good schools outperform average ones. Really?
Why should that be any different for charter schools (accepting a similar distribution) that have a similar average performance to all schools?
This is absurd logic for promoting charter schools as some sort of unified reform strategy: saying, in effect, that we want to replicate the best charter schools (not the other half that don’t do so well).
Yes, one can point to specific analyses of specific charter models adopted in specific locations and identify them as particularly successful. And, we might learn something from these models which might be used in new charter schools or might even be used in traditional public schools.
But the idea that “successful charters” (the upper half) are evidence that charters are “successful” is just plain silly.
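To make the selection point concrete, here is a minimal sketch with entirely hypothetical numbers (no real school data): if charter performance is drawn from a distribution with the same average as all schools, the “top half” beats the average purely by construction.

```python
import random
import statistics

random.seed(1)

# Hypothetical: 1,000 charter schools drawn from the SAME performance
# distribution as all schools (mean 0) -- i.e., no average charter advantage.
charters = [random.gauss(0, 1) for _ in range(1000)]

overall_mean = statistics.mean(charters)          # close to 0 by construction
top_half = sorted(charters)[len(charters) // 2:]  # the "high-quality" charters
top_half_mean = statistics.mean(top_half)         # necessarily above the mean

# "High-quality charters outperform the average" is guaranteed by selection,
# and tells us nothing about whether charters as a group work.
```

The same arithmetic holds for traditional public schools, which is the whole point: citing the upper half as evidence for the sector is circular.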
Kevin Carey on Value-Added Teacher Ratings
In the New York Times Room for Debate series on value-added measurement of teachers, Carey argued that value-added measures would protect teachers from favoritism. Principals would no longer be able to go after certain teachers based on their own personal biases. Teachers would be able to back up their “real” performance with hard data. Here’s a quote:
“Value-added analysis can protect teachers from favoritism by using hard numbers and allow those with unorthodox methods to prove their worth.” (Kevin Carey, here)
The reality is that value-added measures simply create new opportunities to manipulate teacher evaluations through favoritism. In fact, it might even be easier to get a teacher fired by making sure the teacher has a weak value-added scorecard. Because value-added estimates are sensitive to non-random assignment of students, principals can easily manipulate the distribution of disruptive students, students with special needs, students with weak prior growth, and other factors which, if not fully accounted for by the VA model, will bias teacher ratings. More here!
Kevin Carey also claims as a matter of accepted fact, that VA measures “level the playing field for teachers who are assigned students of different ability.” This statement, as a general conclusion, is wrong.
- VA measures do account for the initial performance level of individual students, or they would not be VA measures. Even this becomes problematic when measures are annual rather than fall/spring, so that summer learning loss is included in the year-to-year gain. An even more thorough approach for reducing model bias is to have multiple years of lagged scores on each child in order to estimate the extent to which a teacher can change a child’s trajectory (growth curve). That makes it more difficult to evaluate 3rd or 4th grade teachers, for whom many lagged scores aren’t yet available. The LA Times (LAT) model may have had multiple years of data on each teacher, but it did not have multiple lagged scores on each child. All the LAT approach does is generate a more stable measure for a teacher, even if it is merely a stable measure of the bias arising from which students that teacher is typically assigned.
- VA measures might crudely account for socio-economic status, disability status or language proficiency status, which may also affect learning gains. But typical VA models, like the LA Times model by Buddin, tend to use relatively crude, dichotomous proxies/indicators for these things. They don’t effectively capture the range of differences among kids. They don’t capture numerous potentially important, unmeasured differences. Nor do they typically capture classroom composition (peer group) effects, which have been shown to be significant in many studies, whether measured by the racial/ethnic/socioeconomic composition of the peer group or by the average performance of the peer group.
- For students who have more than one teacher across subjects (and/or teaching aides/assistants), each teacher’s VA measure may be influenced by the other teachers serving the same students.
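The non-random-assignment problem above can be illustrated with a toy simulation (entirely hypothetical numbers and teacher labels, not any actual VA model): give every teacher an identical true effect, assign one teacher students with greater unmeasured summer learning loss, and a simple annual gain-score measure ranks the two teachers differently anyway.

```python
import random
import statistics

random.seed(0)

def simulate_gain_score_va(n_per_class=25):
    """Annual gain-score 'value added' when true teaching effects are equal."""
    results = {}
    # Teacher A is assigned students with high unmeasured summer learning
    # loss; Teacher B gets students with none. True growth is identical.
    for teacher, summer_loss in [("A", 5.0), ("B", 0.0)]:
        gains = []
        for _ in range(n_per_class):
            prior = random.gauss(50, 10)    # last spring's score
            true_growth = 10.0              # same for every teacher
            post = prior + true_growth - summer_loss + random.gauss(0, 3)
            gains.append(post - prior)      # annual "gain score"
        results[teacher] = statistics.mean(gains)
    return results

va = simulate_gain_score_va()
# Teacher A's measured "value added" trails B's by roughly the summer-loss
# gap, even though both teachers are identical by construction.
```

A principal who controls rosters controls the summer-loss (or disruption, or prior-growth) composition of a class, and therefore controls a good chunk of the resulting score.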
I could go on, but recommend revisiting my previous posts on the topic where I have already addressed most of these concerns.
Intellectual pathologies? Pot… kettle?