(Real) Graph vs (Fake) Graph Friday


This post provides a quick follow-up to yesterday’s (late-night) post, in which I critiqued a questionable graph from an NJDOE presentation, here: State of NJ Schools presentation 2-29-2012

It turns out that the slide presentation had many comparable graphs that deserve at least some attention. First, there’s this graph, which attempts to argue that early reading proficiency is a statewide issue, and not just a problem of low-income urban neighborhoods:

Rather impressive, eh? It certainly gives the impression that early reading deficits are concentrated not in the poorest districts but in the least poor ones.

Why would someone make such an argument? Well, one reason would be if this argument were being coupled with arguments to redistribute funding to those less poor districts to help them out – to argue that educational “risk” is not concentrated in poor districts, but rather distributed across all districts.

The problem here is that it’s completely absurd to compare total counts of non-proficient students across groups without any regard for the total counts of all students. That is, what matters is the percentage of kids who are proficient in each poverty group. Well, here’s what that picture ends up looking like:

Pretty much as we might expect. Lack of reading proficiency in 3rd grade, as measured on state assessments, is a much bigger problem in higher-poverty districts, with poverty here measured as % Free Lunch and with reading proficiency tabulated for general test takers.
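
To make the rates-vs-counts point concrete, here is a minimal sketch in Python (pandas), using made-up numbers and hypothetical column names rather than the actual NJDOE data, that computes the percent proficient within each poverty group instead of the raw count of non-proficient students:

```python
import pandas as pd

# Made-up district-level numbers for illustration only; the column names
# and values are assumptions, not the actual NJDOE data.
districts = pd.DataFrame({
    "poverty_group": ["low", "low", "mid", "mid", "high", "high"],
    "tested":        [3200, 2800, 1500, 1400, 600, 400],
    "proficient":    [2950, 2550, 1230, 1120, 390, 250],
})

grouped = districts.groupby("poverty_group")[["tested", "proficient"]].sum()

# The misleading comparison: raw counts of non-proficient students per group.
grouped["non_proficient_count"] = grouped["tested"] - grouped["proficient"]

# The meaningful comparison: percent proficient within each group.
grouped["pct_proficient"] = 100 * grouped["proficient"] / grouped["tested"]

print(grouped)
```

In this toy example the low-poverty group has the largest count of non-proficient students simply because it enrolls the most students, while its proficiency rate is far higher than the high-poverty group’s. That is exactly the distinction hiding in the NJDOE slide.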

Here’s the next graph, which compares charter school reading and math proficiency rates in Newark to Newark Public Schools:

In this case, the title is somewhat appropriate in that charter school performance does indeed vary in Newark. But the graph is pretty much meaningless and deceptive.

The graph relates average Language Arts and Math proficiency across schools, showing basically that schools which are higher on one are also higher on the other. That’s really no big surprise. But the graph ignores entirely the substantive student population differences that explain a large portion of the difference in these proficiency rates. The graph appears to be not-so-subtly constructed to reinforce the central point of this section of the presentation slides – that charters outperform district schools. That point continues to be built on analyses that have already been thoroughly debunked many times over. This graph goes a step further by then cherry-picking a few charters to name – all of which appear superior to the “District.”

So, what does it look like if we take all of these schools, separate the district into its schools, and plot the combined proficiency rates with respect to % Free Lunch? Well, here it is (combined rates include NJASK3 through NJASK8, no HSPA):

Yes, this graph reinforces the title of the NJDOE graph, but in a much more reasonable light. That said, there are a number of other student population factors that would need to be accounted for in a more thorough analysis. 
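For readers who want to reproduce this kind of picture, here is a minimal sketch, assuming a hypothetical school-level table (made-up school names, sectors, and values, not the actual NJ data) with a combined proficiency rate and % Free Lunch for each school:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical school-level values for illustration; not the actual NJ data.
schools = pd.DataFrame({
    "school":         ["District A", "District B", "District C", "Charter X", "Charter Y"],
    "sector":         ["district", "district", "district", "charter", "charter"],
    "pct_free_lunch": [88.0, 72.0, 55.0, 80.0, 48.0],
    "pct_proficient": [38.0, 50.0, 63.0, 52.0, 74.0],
})

fig, ax = plt.subplots()
for sector, marker in [("district", "o"), ("charter", "s")]:
    sub = schools[schools["sector"] == sector]
    ax.scatter(sub["pct_free_lunch"], sub["pct_proficient"],
               marker=marker, label=sector)

ax.set_xlabel("% Free Lunch")
ax.set_ylabel("Combined proficiency rate (%)")
ax.set_title("School proficiency vs. % Free Lunch (toy data)")
ax.legend()
plt.show()
```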

Among other things, while the first graph appears to suggest that TEAM Academy is a relative laggard compared to schools like North Star or Robert Treat, my representation here shows that TEAM is actually further above its expected performance than either of the other two. TEAM simply serves a lower-income population than the other two. Further, district schools serving similar populations do similarly well. And several charter schools do as poorly as (or worse than) comparable district schools.
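
The “above expected performance” comparison can be made concrete with a simple regression. Here is a minimal sketch, again using made-up numbers and generic school names (none of these values describe the real schools), that fits proficiency on % Free Lunch and reports each school’s residual, i.e., how far above or below its predicted rate it sits:

```python
import numpy as np

# Made-up values for illustration only; not the actual Newark school data.
names          = ["Charter X", "Charter Y", "Charter Z", "District A", "District B"]
pct_free_lunch = np.array([82.0, 55.0, 50.0, 85.0, 60.0])
pct_proficient = np.array([62.0, 68.0, 72.0, 44.0, 58.0])

# Fit a simple line: proficiency ~ slope * free_lunch + intercept
slope, intercept = np.polyfit(pct_free_lunch, pct_proficient, 1)
expected = slope * pct_free_lunch + intercept

# A positive residual means the school beats the performance "expected"
# for its poverty level; a negative residual means it falls short.
for name, actual, exp in zip(names, pct_proficient, expected):
    print(f"{name}: actual {actual:.1f}%, expected {exp:.1f}%, residual {actual - exp:+.1f}")
```

On this logic, a school serving a much poorer population can post a lower raw proficiency rate and still sit further above the fitted line than a higher-scoring school serving a wealthier population, which is the point about TEAM Academy above.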

 

3 thoughts on “(Real) Graph vs (Fake) Graph Friday”

  1. Dr. Baker,
    Also, the NJDOE and Chris Cerf use the mistaken label “Reading on Grade Level,” but the information comes from the NJASK. The NJASK does not measure grade-level reading. The NJASK section in question measures reading AND writing but makes no mention at all of grade-level reading ability. To someone not in the field of education, it would be easy to dismiss the error. However, for the NJDOE AND the Acting Commissioner of Education to be ignorant of the difference is quite embarrassing, to say the least.
    I guess you can use any test however needed, without regard to design or purpose.

    Thank you again for the interesting research.

    1. I noticed the same regarding the misinterpretation of the assessment. My apologies for replicating the “reading” labeling error!

  2. Dr. Baker,
    Also, all the data above are at the school level. It would be interesting to see the proficiency rates in the low-poverty districts broken out by income level. My guess is that income will correlate very highly again, and that those not “passing” in the lower-poverty districts are at the bottom of the “local” income distribution as well.

    Thank you
