So in my previous post I created a set of hypothetical research studies that might be presented at the Reformy Education Research Association annual meeting. In creating the hypotheticals I actually tried to stay pretty close to reality, setting up reasonable tables with plausible information. Now, when we get down to the real reformy stuff that’s out there, it’s a whole lot worse. In fact, had I presented the “real” stuff in my previous post, I’d have been criticized for fabricating examples that are just too stupid to be true. Let’s take a look at some real “reformy” examples here:
From the DFER web site post, which includes this graph:
True, there are some great, traditional public schools in Indiana and throughout the nation. We’re also fortunate that a vast majority of our educators excel at their jobs and are dedicated to doing whatever it takes to help students succeed. However, that doesn’t mean we should turn a blind eye to what ISN’T working. Case in point? The following diagram displays how all 5th grade classes in the span of a year in one central Indiana school district are doing on a set of state Language Arts student academic standards. Because 5th grade classes in Indiana are only taught by one teacher, the dots can be translated to display how well the students of individual teachers are doing.
Now, ask yourself this: In which dot or class would you want your child? And, imagine if your child were in the bottom performing classroom for not one but MULTIPLE years. In spite of lofty claims made by those who defend the current system, refusal to offer constructive alternatives to rectify charts such as the one above represents the sad state of education dialogue in America today.
So, here we have a graph… a line graph of all things, across classrooms (3rd grade graphing note – a bar graph would be better, but still stupid). This graph shows the average pass rates on state assessments for kids in each class. Nothin’ else. Not gains. Just average scores. Gains wouldn’t necessarily tell us that much either. But this is truly absurd. The author of the DFER post makes the bold leap that the only conclusion one can draw from differences in average pass rates across a set of Indiana classrooms is that some teachers are great and others suck! Had I used this “real” example to criticize reformers, most would have argued that I had gone overboard.
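To see why raw average pass rates say so little about teachers, here is a minimal sketch with entirely hypothetical numbers: two classrooms receive the exact same instructional boost, yet post very different pass rates simply because their students started in different places.

```python
# Toy illustration (all numbers hypothetical): identical teaching effects,
# very different pass rates, driven entirely by where students started.

pass_cutoff = 60

# Two classes receive the same instructional "boost" of 10 points each,
# but start from different incoming score distributions.
class_a_incoming = [70, 65, 60, 55, 75]   # higher-performing intake
class_b_incoming = [50, 45, 55, 40, 48]   # lower-performing intake
boost = 10  # same "teacher effect" in both rooms

def pass_rate(incoming, boost):
    """Share of students at or above the cutoff after instruction."""
    final = [s + boost for s in incoming]
    return sum(1 for s in final if s >= pass_cutoff) / len(final)

print(pass_rate(class_a_incoming, boost))  # 1.0 -- "great teacher"?
print(pass_rate(class_b_incoming, boost))  # 0.4 -- "terrible teacher"?
```

Same teacher effect, wildly different dots on the DFER chart. That's the whole problem with reading teacher quality off a line of average pass rates.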
Now I’ve already written about this graph, or at least the post in which it occurs, but I didn’t include the graph itself.
Gates uses this chart to advance the argument:
Over the last four decades, the per-student cost of running our K-12 schools has more than doubled, while our student achievement has remained flat, and other countries have raced ahead. The same pattern holds for higher education. Spending has climbed, but our percentage of college graduates has dropped compared to other countries… For more than 30 years, spending has risen while performance stayed flat. Now we need to raise performance without spending a lot more.
Among other things, the chart includes no international comparison, even though that comparison becomes the centerpiece of the policy argument. Beyond that, the chart provides no real evidence of a lack of connection between spending and outcomes across districts within U.S. states. Instead, it juxtaposes completely different measures on completely different scales to make it look like one number is rising dramatically while the others stay flat. This tells us NOTHING. It’s just embarrassing. Simply from a graphing standpoint, a blogger at Junk Charts noted:
Using double axes earns justified heckles but using two gridlines is a scandal! A scatter plot is the default for this type of data. (See next section for why this particular set of data is not informative anyway.)
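The double-axis trick is worth making concrete. A quick sketch, using made-up numbers (not the actual Gates chart data): with two independent y-axes, how steep each series *looks* depends entirely on the axis range the chart-maker picks for it, not on the data.

```python
# Illustrative only: hypothetical numbers, not the actual chart data.
# With two independent y-axes, the plotted steepness of each series depends
# on the axis range chosen for it, not on the data itself.

def visual_slope(series, axis_min, axis_max):
    """Fraction of the plot height the series traverses on a given axis."""
    return (max(series) - min(series)) / (axis_max - axis_min)

spending = [4000, 6000, 8000, 10000]   # hypothetical per-pupil dollars
scores   = [210, 213, 216, 219]        # hypothetical scale scores

# A tight axis for spending makes it fill the whole panel...
print(visual_slope(spending, 4000, 10000))  # 1.0
# ...while a generously wide axis for scores flattens a real 9-point gain.
print(visual_slope(scores, 0, 500))         # 0.018 -- a barely visible blip
# The same scores on an axis fitted to the data traverse the full panel too:
print(visual_slope(scores, 210, 219))       # 1.0
```

That's why the Junk Charts default for this kind of question is a scatter plot of spending against outcomes, where the relationship (or lack of one) has to show up in the data rather than in the axis limits.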
Not much else to say about that one. Again, had I used an example this absurd to represent reformy research and thinking, I’d have likely faced stern criticism for mischaracterizing the rigor of reformy research!
Hat tip to Bob Calder on Twitter for finding an even more absurd representation of pretty much the same graph Gates used above. This one comes to us from none other than Andrew Coulson of the Cato Institute. Coulson has a stellar record of this kind of stuff. So, what would you do to the Gates graph above if you really wanted to make the case that spending has risen dramatically while we’ve gotten no improvement in outcomes? First, use total rather than per-pupil spending (and call it “cost”), then stretch the scale on the vertical axis for the spending data to make it look even steeper. Then express the achievement data in percent-change terms: NAEP scale scores sit in the 215 to 220 range for 4th grade reading, for example, and are scaled such that even small point gains may be important and relevant, but those gains won’t even show as a blip when expressed as a percent over the base year.
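The percent-over-base-year trick is simple arithmetic, so here it is with hypothetical numbers (NAEP 4th grade reading scores really do sit in the low 200s, but the specific values below are made up for illustration):

```python
# Hypothetical illustration of the percent-over-base-year trick.
# On the NAEP scale, even a modest point gain can matter, but the same
# gain looks like nothing when expressed as a percent of a ~217 base.

base_year_score = 217   # hypothetical base-year scale score
later_score = 222       # hypothetical later score: a 5-point gain

point_gain = later_score - base_year_score
pct_change = 100 * point_gain / base_year_score

print(point_gain)             # 5 -- potentially meaningful on the NAEP scale
print(round(pct_change, 1))   # 2.3 -- reads as "flat" next to spending growth
```

A 5-point scale-score gain becomes a 2.3% change, which graphs as a nearly horizontal line next to decades of cumulative spending growth. That's the whole trick.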
And here’s the StudentsFirst version of the same old story:
3. Original promotional materials from the reformy documentary, The Cartel (a manifesto on New Jersey public schools)
The Cartel is essentially the ugly step-cousin of Waiting for Superman and The Lottery. I wrote extensively about The Cartel when it was originally released and again when it made its Jersey tour. Thankfully, it didn’t get much beyond that. Back when it was merely a small-time, low-budget, ill-conceived, and even more poorly researched pile of reformy drivel, The Cartel had a promotional web site (different from the current one) which included a page of documented facts explaining why reform was necessary in New Jersey. The central message was much the same as the Gates message above. The graphs that follow are no longer there, but the message is – for example – here:
With spending as high as $483,000 per classroom (confirmed by NJ Education Department records), New Jersey students fare only slightly better than the national average in reading and math, and rank 37th in average SAT scores.
Here are the truly brilliant graphs that support this irrefutable conclusion:
I have discussed these graphs at length previously! I’m not sure it’s even worth reiterating my previous comments. But, just to clarify: could it be that participation rates for the SAT differ across states, and might that actually be an important intervening factor? Nah… couldn’t be.