About a week ago, I commented that charter school average performance was not much, if any, different from the average performance of the poorest urban public schools. This is admittedly an oversimplified comparison, but not one I would have made had I believed it to be deceptive, and given the available data on New Jersey schools, it is not.
Here, I will walk through a more complicated though still imperfect analysis of elementary school performance in host districts and in charter schools based on data from 2004 to 2006 (data I had already compiled for related work). First, let’s begin with some descriptive characteristics of the charter schools and schools of similar grade level (elementary in this case) in their host districts based largely on school reports data from those years.
The table below shows that the data set includes 28 charter schools per year and 173 host district schools of the same grade level. The charters serve about 1,000 tested students and the host district schools about 11,000 tested students. While the free/reduced lunch share is roughly the same between the two, the free lunch share is higher in the host district schools (these are the poorer students). These differences vary across host districts and their charters. Newark charters, for example, are on average (though not uniformly) relatively high poverty.
Note that the average free lunch share in DFG A schools, used in my previous comparison, is 63% (much higher than charters or their hosts on average).
Also higher in the host district schools are the shares of children who are LEP/ELL and who are classified as having disabilities. But the host district schools do have higher total certified salaries per pupil (compiled from the state database on personnel salaries).
Three-year average scale scores for the 2004 to 2006 period are also listed.
But the big question is what happens when you throw all of this into a statistical model to evaluate whether charters outperform host district schools, controlling for the fact that they have less needy populations but fewer resources to work with. Again, this is a simple school-level model, which does not estimate individual children's relative gains in charters (a treatment effect) compared to otherwise similar children in host district schools. It would be wonderful to be able to conduct such analyses in NJ.
This school level model includes a dummy variable for each district that is a host district, such that charter performance in the model is measured against performance of the host district of that charter. The model includes only host districts and their respective charters. The overall charter effect is essentially the average of differences between charters and hosts, across hosts (and their respective charters).
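For readers curious about the mechanics, the setup described above can be sketched as an ordinary least squares regression with one dummy variable per host district plus a charter indicator. Everything in the sketch below is hypothetical: the variable names, the synthetic data, and the effect sizes are purely illustrative of the design-matrix structure, not a reproduction of the actual NJ school report data or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: 5 host districts, 40 schools each,
# with roughly 15% flagged as charters. Illustrative only.
n_districts = 5
schools_per = 40
district = np.repeat(np.arange(n_districts), schools_per)
charter = (rng.random(district.size) < 0.15).astype(float)
free_lunch = rng.uniform(0.2, 0.8, district.size)  # student-need control

# Simulate scores so the "true" charter effect is zero, matching the
# post's finding, while student need depresses scores.
true_charter_effect = 0.0
district_means = rng.normal(200.0, 5.0, n_districts)
score = (district_means[district]
         + true_charter_effect * charter
         - 20.0 * free_lunch
         + rng.normal(0.0, 2.0, district.size))

# Design matrix: one dummy per host district (absorbing district-level
# differences, so the charter coefficient is measured against each
# charter's own host), plus the charter indicator and the need control.
D = np.eye(n_districts)[district]
X = np.column_stack([D, charter, free_lunch])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

charter_coef = beta[n_districts]  # the "charter vs. host" estimate
print(round(charter_coef, 2))     # hovers near zero in this simulation
```

Because the district dummies soak up each host's average performance, the charter coefficient here is effectively an average of charter-minus-host differences, which is the logic described above.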
What we see in this model is that charters, on average, are no different from their hosts on the combined math and language scale scores for NJASK from 2004 to 2006. While the statewide model of the same data shows a strong effect of cumulative salaries per pupil on outcomes, the model within host districts of charters does not – an interesting point to explore. But, other factors play out quite logically – with each student need factor statistically significantly depressing scale scores.
So, what does this more complicated, but still not complicated enough, analysis tell us? It tells us that average charter school performance from 2004 to 2006 on elementary assessments is no different from average performance in other poor urban schools, specifically the host districts of those charters. It just says this in a more complicated way. Sometimes simple averages, when not deceptive, can be sufficient.
One factor that could turn the findings in favor of charters (as treatment effect) would be if the average starting performance level of charter students, compared to otherwise similar host school students, is lower than that of host school students – which could occur if there is a tendency for parents to look to charters when their children are under-performing. This appears to be the case in the Missouri data in the CREDO study noted below. But, this is unlikely to create a substantial effect.
Again, this is just playing with the numbers, albeit a more rigorous play than my previous posts – leading to the same conclusions.
For more thorough discussions of charter school research, see:
Check out, specifically, the original NYC Hoxby study and the critique of it, as well as the CREDO 16-state study and the RAND 8-state study. Exercise caution in linking any specific findings to the New Jersey context.