Follow-up Question Guide for Ed Writers (on Teacher Evaluation)


I was reviewing the past few days of news coverage on NJ teacher evaluations and came across the following quote, which was not-so-amazingly left unchallenged:

Cerf said research shows test scores are “far and away” the best gauge of teacher effectiveness, and to not use test score data would be “very anti-child.”

http://www.nj.com/news/index.ssf/2013/05/state_board_of_education_adjus.html

Here’s a reporters’ guide to follow-up questions….

Mr. Cerf… can you show me exactly what research comes to that conclusion? (this should always be the immediate follow-up to the ambiguous “research shows” comment)

Exactly how is “far and away” measured in that research?

And what is meant by “best gauge of effectiveness?”

That is, what is the valid measure of effectiveness against which test scores are gauged? (answer… uh… test scores themselves)

So, Mr. Cerf, are you trying to tell me that the Gates MET study proved that test scores are “far and away” the best gauge of teacher effectiveness? (seems to be most common reference point of late)

Can you show me where they said that?

And how did they measure what was the best predictor of effectiveness? In other words… what did they use as the true measure of effectiveness?

So… you’re telling me that the Gates study found – not far and away, mind you – that test score based measures are, well… the best predictor of themselves a year later? Is that right?

That’s what you mean by “best gauge”? Right? That they are the best predictor of themselves… if we also use other measures to try to predict test scores? Right? Seems a bit circular, doesn’t it?

And how well did test-score-based measures predict themselves a year later, if we accept that validity test as logical?

Well, that seems like a rather modest relationship from year to year, doesn’t it?

How does that make test scores the best predictor of actual effectiveness if actual effectiveness is broader than test scores themselves?
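
(A quick aside for readers: here’s what a “modest” year-to-year relationship looks like in miniature. This is a toy Python sketch with entirely invented numbers – the persistent share of variance, 0.35, is an assumption chosen for illustration, not an estimate from the MET study or from any NJ data.)

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 1000
# Invented decomposition: a teacher's measured "effect" in a given year is a
# persistent component plus year-specific noise. The persistent share of
# variance (0.35) is an illustrative assumption, not a MET-study estimate.
persistent = rng.normal(0.0, np.sqrt(0.35), n_teachers)
year1 = persistent + rng.normal(0.0, np.sqrt(0.65), n_teachers)
year2 = persistent + rng.normal(0.0, np.sqrt(0.65), n_teachers)

r = np.corrcoef(year1, year2)[0, 1]
print(f"Year-to-year correlation of the simulated measure: {r:.2f}")
# Under these assumptions r comes out around 0.35, so this year's score
# explains only a modest share (r**2, roughly 0.12) of next year's score.
```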

Okay… moving on… Since we’re leaning on those Gates Foundation findings as the basis for placing heavy weight on test scores in NJ teacher evaluation… I note here (pointing to NJDOE documents on SGPs) that New Jersey has chosen an approach called Student Growth Percentiles to measure teacher effectiveness…. Can you show me where in the Gates studies the authors find this approach to be appropriate – or, even more specifically, “far and away” the best approach for measuring teacher effectiveness?

I don’t see any reference to SGPs or MGPs in the Gates studies… why is that?

Are SGPs and VAMs the same thing? I’ve been told there are substantive differences.

Isn’t one of these, the SGP, not even designed to isolate the effect the teacher has on test score gains?

In which case, how can they possibly be “the best gauge of teacher effectiveness?”
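
(Another aside for readers: the structural difference is easy to see in miniature. Below is a rough Python sketch on entirely invented data. Real SGP systems use quantile regression over several prior years, and real VAMs include many more covariates, so treat this as a cartoon of the distinction, not the actual NJDOE or Gates methodology. The point to notice: the SGP calculation never references the teacher at all – the teacher only enters when students’ SGPs are aggregated after the fact – whereas a VAM puts the teacher directly into the model.)

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Invented toy data: 1,000 students, a prior-year score, a current-year score,
# and assignment to one of 25 hypothetical teachers.
n = 1000
prior = rng.normal(500, 50, n)
teacher = rng.integers(0, 25, n)
teacher_effect = rng.normal(0, 5, 25)                  # invented "true" effects
current = 0.8 * prior + 100 + teacher_effect[teacher] + rng.normal(0, 20, n)

df = pd.DataFrame({"prior": prior, "current": current, "teacher": teacher})

# SGP-style measure (a crude stand-in for the quantile-regression machinery a
# real SGP system uses): each student's percentile among peers with similar
# prior scores. Note that the teacher appears nowhere in this calculation; a
# teacher's "median growth percentile" is just an after-the-fact aggregation.
df["prior_bin"] = pd.qcut(df["prior"], 10, labels=False)
df["sgp"] = df.groupby("prior_bin")["current"].rank(pct=True) * 100
mgp = df.groupby("teacher")["sgp"].median()

# VAM-style measure: a regression that includes teacher indicators directly,
# so the teacher coefficients are an explicit attempt (however imperfect) to
# isolate the teacher's contribution, conditional on prior achievement.
dummies = pd.get_dummies(df["teacher"], prefix="t", drop_first=True).astype(float)
X = np.column_stack([np.ones(n), df["prior"].to_numpy(), dummies.to_numpy()])
beta, *_ = np.linalg.lstsq(X, df["current"].to_numpy(), rcond=None)

print("Median growth percentile, first few teachers:\n", mgp.head())
print("VAM-style coefficients, first few teachers:", np.round(beta[2:7], 2))
```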

Let’s save the “very anti-child” stuff for another day!


7 Comments

  1. This is a morbidly amusing post. (I mean that as a compliment.) However, I wonder how different Cerf and the press that cover him are from their counterparts across the country. From my vantage point, not much.

    1. I’ve certainly witnessed comparable misguided/misinformed/deceptive/deceitful rhetoric from commissioners in NY & CT. I’ve taken a few shots at King (NY) and Pryor on this blog. And I’ve heard from my colleagues across the country of similar issues elsewhere. So, this is just where we are at in the current ed reform environment.

      However… my central issue here is actually with the uncritical media, which, in my view, is partly to blame for this. Letting this kind of stuff pass… not even asking for validation of these flimsy claims… not even seeking substantive critique (crap… not even googling the issue to see what’s been written)… and even where there is some attempt – failing miserably at distilling substantive arguments from complete BS… Therein lies the problem. The overconfidence coupled with weak analytic skills among reporters covering education. There are a notable few exceptions.

      One thing I’ve seen as a persistent problem with the modern media is their assumption that the media themselves are the experts – that having written a handful of articles on a topic… having a BA in journalism… and perhaps having done a minimal amount of legwork in the first year or so on the job… qualifies the reporter – in his/her own mind – as an expert. As such, they feel no need to seek real expertise to critique these claims. Notice how many times these days we see expert panel discussions on education policy where the experts on the panel are merely (and I do mean merely) education writers – reporters, not researchers – individuals with little depth beyond the past few years of current events, little understanding of research, and little or no comprehension of data and statistics.

      I’m not sure if this really is a “new” issue, but it seems like it. It may be coupled with the pervasiveness of equally under-informed recent grads who find their way into DC think tanks and quickly become self-identified experts. So, when ed-writers interact with them, perhaps they do feel like they’ve got a leg up on expertise.

      The author of this SL article is actually a seasoned ed writer, so this is a little more disappointing – and it speaks to the possibility that the editorial board simply won’t support asking tough questions (or writing critical analysis).

  2. You stated: “The overconfidence coupled with weak analytic skills among reporters covering education. There are a notable few exceptions.”

    May I, acting as a reporter, turn the tables on you? “Dr. Baker, you have stated that ‘There are a notable few exceptions’. Will you please name those ‘notable few exceptions,’ as I haven’t seen any comparable lists anywhere?”

    1. I didn’t want to offend ALL of the education writers I know by suggesting they ALL suffer from a combination of overconfidence & weak analytic skills. There are some who, I believe, work much harder at understanding their analytic weaknesses (esp. quant/statistical), at not becoming overconfident, and, as a result, at presenting more sound, critical analysis. Rather than naming names, for now I’ll try to find good and bad examples for a future blog post. I’d love to have the time to construct ratings of education writers. Some rather popular ones frequently piss me off with shallow, misguided analyses. But others do go deep, with insight and nuance, and do it well.
