Mike Petrilli over at the Thomas B. Fordham Institute has made his case for why differences in national economic context do little to substantively explain variations in PISA scores.
He frames his argument in terms of Occam’s Razor, as if to sound well informed and deeply intellectual, setting the stage for a profound logical argument, summarized as follows:
“among competing hypotheses, the hypothesis with the fewest assumptions should be selected.”
Petrilli asserts that while some might perceive a modest association (actually, it’s pretty strong) between national economic context and average tested outcomes in math, for example… like this…
…that it is entirely illogical to assert that child poverty has anything to do with national aggregate differences in math performance at age 15.
That is, the various assumptions that must be made to accept this crazy assertion – that economic context matters in math performance – simply don’t hold water in Petrilli’s mind. Rather, the answer must be much simpler and lie in the classroom, with our good ol’ American ineptitude at teaching math.
As Petrilli concludes in his post:
So what’s an alternative hypothesis for the lackluster math performance of our fifteen-year-olds? One in line with Occam’s Razor?
Maybe we’re just not very good at teaching math, especially in high school.
Accepting the bad math teaching conclusion simply requires fewer tricky assumptions than asserting any role for economic context in determining national aggregate outcomes.
Let’s call this Petrilli’s Hammer: an illogical, blunt, and necessarily under-informed alternative to Occam’s Razor. When in doubt – when too lazy to develop a disciplined understanding of the field on which you choose to opine, and when the data are just too hard to handle – grab that hammer and everything can look like a nail (e.g., the bad teacher conclusion)!
These two quotes frame Petrilli’s argument:
First, one must assume that math is somehow more related to students’ family backgrounds than are reading and science, since we do worse in the former. That’s quite a stretch, especially because of much other evidence showing that reading is more strongly linked to socioeconomic class. It’s well known that affluent toddlers hear millions more words from their parents than do their low-income peers. Initial reading gaps in Kindergarten are enormous. And in the absence of a coherent, content-rich curriculum, schools have struggled to boost reading scores for kids coming from low-income families.
So the second assumption must be that “poverty” has a bigger impact on math performance for fifteen-year-olds than for younger students. But I can’t imagine why. If anything, it should have less of an impact, because our school system has had more time to erase the initial disadvantages that students bring with them into Kindergarten.
The problem is that both of these statements are a) conceptually foolish and b) statistically ignorant.
Let’s tackle the second point conceptually first. These scores for 15 year olds are performance-level – or status – scores. Status scores reflect the cumulative effects of schooling and family background. Most notably in this case, status scores – math performance at age 15 – reflect the cumulative influences of poverty: living in poverty, growing up in poverty, lacking resources over long periods of one’s early life.
So… setting measurement issues aside here, we can logically expect gaps between lower and higher income kids to grow between earlier grade assessments and later grade assessments – if we choose to do little or nothing in policy terms about the circumstances under which these children live. Yes, we can and should leverage resources in schools to offset these gaps. But we’re not necessarily applying those resources either.
Accepting Petrilli’s second point above requires that we ignore entirely that our school system remains vastly disparate in many states and locations between rich and poor communities, and reinforces (rather than erases) the initial disadvantages that students bring with them to Kindergarten.
Now, back up to his first point, where Petrilli argues that if higher-poverty settings/contexts do worse relative to lower-poverty settings on math than on reading assessments, there must be a simple answer for the math problem/disparity – like bad math teaching, of course. In his view, there can be no logical explanation for why math scores might be more sensitive than reading scores to poverty variation. Assuming bad math teaching to be the reason for greater disparity in math than in reading is much simpler than exploring why math test scores might appear more sensitive to context/poverty than reading scores. This is true because we all know that poverty affects reading more than math – or so Mike says, without citation to any legitimate source validating his point.
This one is pretty simple. First, it may simply be the case that Mike Petrilli is wrong on all levels here – that, conceptually and statistically, economic deprivation has a stronger effect on numeracy than on literacy. But even accepting the idea that poverty affects literacy more – in a substantive way – doesn’t mean that we’d find a stronger statistical relationship between a) variations in poverty across settings and b) variations in measured outcomes across settings. The fact is that variations in math assessments are often simply more predictable. They may be more stable/consistent, and they may actually have more variation to predict.
I’m going to use state level NAEP data within the US here to provide statistical illustrations for the rather simple flat-out-wrongness of Mike Petrilli’s Hammer.
The following illustrations simply reveal how data of this type tend to play out – something anyone reasonably well versed in using assessment data alongside economic data, at various levels of aggregation, would understand. Some of these patterns reveal conceptually sound underlying hypotheses, and some may simply be artifacts of typical issues in the measurement of student outcomes at different ages and in different subjects.
So, for our first question we ask whether it can possibly be the case that there exists greater disparity in math outcomes in 8th grade than in 4th grade across US states of varying degrees of poverty (setting aside the substantive explanations for why such gaps increase).
Now, careful here, this one requires using a little algebra – slope/intercept analysis. The first figure here shows the variation in NAEP math outcomes for 8th graders and for 4th graders, both in 2013.
This figure shows us, first of all, that 8th grade math scores are more predictably disparate as a function of poverty than are 4th grade math scores. For 8th grade, poverty alone explains 63% of the cross-state variation in math scores, but marginally less (59%) for 4th grade.
The figure also shows us that by 8th grade, an additional 1% poverty is associated with a 1.13 point lower state average scale score, whereas in 4th grade, a 1% higher poverty rate is associated with only a 0.83 point lower state average scale score. That is, the negative slope is steeper for 8th grade than for 4th grade.
There can be many, many reasons for this. Among these reasons might be that as time goes on, cumulative poverty related deficits do increase. Persistent disadvantage makes gaps grow. It may also be a measurement issue, pertaining to the precision of measurement of mathematics knowledge and skill, or it may even be an issue of the stability and predictability of tests on early grade math content given to 9 year olds versus tests on stuff like algebra and pre-algebra given to older, hopefully more mature kids (who’ve also taken far more tests by that time).
But, instead of gettin’ all thoughtful about these possibilities and arming ourselves with well-conceived arguments grounded in data and knowledge of the literature, we could simply use Petrilli’s Hammer to assert that the one and only logical answer is that math teachers in high poverty states like Alabama and Mississippi suck and math teachers in low poverty states like New Jersey and Massachusetts rock! It’s bad math teaching that is making this negative slope get worse between grade 4 and grade 8 – bad math teaching exclusively in high poverty states!
Is there greater disparity in Grade 8 Math than in Grade 4 Math by Contextual Poverty?
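For readers who want to try this kind of slope/intercept analysis themselves, here’s a minimal sketch in Python. The function name and inputs are my own; the real state-level poverty rates and NAEP scale scores would come from Census and NCES data files, not the toy numbers shown in the usage example:

```python
import numpy as np

def slope_and_r2(poverty_pct, mean_score):
    """Fit mean_score = intercept + slope * poverty_pct by ordinary
    least squares and return (slope, r-squared)."""
    poverty = np.asarray(poverty_pct, dtype=float)
    score = np.asarray(mean_score, dtype=float)
    slope, intercept = np.polyfit(poverty, score, 1)
    predicted = intercept + slope * poverty
    ss_res = np.sum((score - predicted) ** 2)   # residual sum of squares
    ss_tot = np.sum((score - score.mean()) ** 2)  # total sum of squares
    return slope, 1.0 - ss_res / ss_tot

# Illustrative call with made-up numbers (NOT actual NAEP data):
# a perfectly linear relationship returns the slope and r^2 = 1.
poverty = [10.0, 15.0, 20.0, 25.0]
scores = [250.0 - 1.13 * p for p in poverty]
slope, r2 = slope_and_r2(poverty, scores)
```

Run once per grade (or per subject) and you can compare both how steep the poverty slope is and how much of the cross-state variation it explains.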
The next question then is how can it ever be that math scores might be more disparate as a function of poverty when we all know that poverty affects reading more?
The next figure shows the relationship between poverty by state, and math and reading scores in grade 4. Rather amazingly, math scores are more predictable as a function of poverty than are reading scores – note the difference in variance explained (r-squared). Now, (almost) anyone who has ever plotted reading and math “level” (status) scores, or even estimated value added scores for reading and math in relation to poverty or nearly any other covariate knows that this is common. Variation in math scores – level or value added – is often much more predictable than is variation in reading scores. As above, this may be for many, many reasons. Maybe we’re just not as good on the measurement side at teasing out differences in underlying skill on reading, with either 9 or 14 year olds?
That math scores are more predictably a function of poverty than reading scores – across states – doesn’t mean that our math teaching is better or worse than our reading teaching. Even though the math scores at 4th grade are more predictable than the reading scores, the reading slope appears slightly more disparate (steeper negative). And that doesn’t mean either that our reading teaching is more disparate, or that the 4th grade scores are picking up some differential on the baggage kids bring to school with them. It’s a statistical artifact of the data – based on how math and reading are being measured. It may mean something, but who knows what? It may mean absolutely nothing.
Are Grade 4 Math Scores more predictably a function of poverty than Grade 4 Reading Scores across contexts?
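The point that slope and predictability can move in opposite directions is worth making concrete: the slope tracks the size of the association, while r-squared tracks how tightly the points hug the line. This toy example (entirely made-up numbers, chosen only to make the pattern visible) builds a “reading” series with a steeper poverty slope but more noise than the “math” series:

```python
import numpy as np

# Hypothetical state poverty rates (%) and a noise pattern that is
# uncorrelated with poverty, so it changes r^2 but not the slope.
poverty = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
noise = np.array([1.0, -1.0, 1.0, -1.0, 1.0])

# "Math": shallower slope (-0.8) but little noise -> high r-squared.
math_scores = 260.0 - 0.8 * poverty + 0.5 * noise
# "Reading": steeper slope (-1.0) but much more noise -> lower r-squared.
reading_scores = 250.0 - 1.0 * poverty + 5.0 * noise

def fit(x, y):
    """OLS fit of y on x; returns (slope, r-squared)."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (intercept + slope * x)
    return slope, 1.0 - resid.var() / y.var()

math_slope, math_r2 = fit(poverty, math_scores)
reading_slope, reading_r2 = fit(poverty, reading_scores)
# Reading's slope is steeper, yet math is more predictable (higher r^2).
```

So a steeper reading slope alongside a higher math r-squared, as in the grade 4 figure, is not a contradiction; the two statistics answer different questions.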
Finally, here are the 8th grade math and reading results. Here, math is marginally more predictable as a function of poverty, and math outcomes are also more disparate as a function of poverty.
At least by these measures – NAEP math and reading scores aggregated to the state level, which is similar to making national comparisons – reading is NOT, as Petrilli so confidently argues above, “more strongly linked to socioeconomic class” than math.
International comparisons work much the same.
What about Grade 8 Math and Reading?
Indeed, Petrilli is attempting to assert that there exists an incongruity between the data and the underlying reality – that yes, reading scores are affected by poverty, but math not so much. Thus, if the data show that math scores are more affected by poverty than are reading scores, then something much more nefarious must be going on – Yes – the bad teacher/teaching problem!
It couldn’t possibly have anything to do with measurement issues or the significant possibility that the full range of student outcomes measured are similarly affected by economic deprivation. That would just be way too much to swallow.
But, if we want to go there… if we want to accept Petrilli’s argument that there’s simply no excuse for U.S. students to fall where they do on international math comparisons – because poverty doesn’t affect 15 year olds or math, only younger kids and reading – then we must apply Petrilli’s Hammer to state-by-state comparisons as well.
And thus we logically conclude that math teaching in DC, MS, AL, and LA stinks, and math teaching in NJ, MA, VT, and NH is great! And that poverty really has nothing to do with it?