Why do states with the “best” data systems have the worst schools?

Okay, so the title of this blog is a bit over the top and potentially inflammatory, but let's take a look at those states which, according to the Data Quality Campaign, have achieved the best possible state data systems by having all 10 elements recognized by the campaign. I should note that I appreciate the 10 data elements, especially as a data geek myself. It's good stuff, and this post is not intended to criticize the Data Quality Campaign. Rather, this post is intended to question whether our recent focus, or obsession, with rating the quality of state education systems by two criteria alone, (a) whether they have certain data linked to certain other data and (b) whether they have caps on charter schools, has created an unfortunate diversion. This obsession has caused us to take our eye off the ball: to applaud states that have, in reality, put little or no effort into improving their education systems; states that have, over time, dreadfully under-supplied public schooling; and states that have consistently produced the lowest educational outcomes (not merely as a function of the disadvantages of their student populations).

So, here's a quick run-down. First, let's begin with a look at the number of data quality elements compiled by states in relation to the percent of Gross State Product (Gross Domestic Product by State) allocated, in the form of state and local revenue per pupil, to local public schools. There's no especially tight relationship here, but as we can see, Delaware, Louisiana and Tennessee are three states that now have all 10 data elements (HOORAY!) but have very low educational effort. Utah and Washington also have low educational effort.
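The "effort" measure used here is simply K-12 state and local revenue expressed as a share of Gross State Product. A minimal sketch of that calculation, using made-up placeholder figures rather than actual state data:

```python
def education_effort(state_local_revenue: float, gross_state_product: float) -> float:
    """Return state and local K-12 revenue as a percent of GSP."""
    return 100.0 * state_local_revenue / gross_state_product

# Hypothetical example: $4.0B in state and local school revenue
# in a state with a $120B gross state product.
print(round(education_effort(4.0e9, 120.0e9), 2))  # 3.33
```

The point of normalizing by GSP is that it separates willingness to fund schools from capacity to fund them: a poor state and a rich state spending the same share of their economies are putting up the same effort.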

This might be inconsequential if it was… well… inconsequential. That is, if there was also no relationship to educational outcomes. Here's a plot of the mean NAEP Math and Reading, Grades 4 and 8, for 2007 (% Proficient) against the number of Data Quality Elements. In this case, there's actually some relationship. Yep, states with better data have lower outcomes. Maybe having better data will increase the likelihood that they figure this out. A somewhat unfair argument, given that many of these states are relatively poor, but it's not all about poverty (in fact, higher poverty would require greater effort to improve outcomes, but it doesn't play out that way for these states; see this post for a discussion of poverty variation across states). Low-effort, low-performing, but high-data-quality states include Louisiana and Tennessee. Yet, somehow, when viewed through a data quality lens alone, these states become superstars!

This next figure looks at the predicted per pupil state and local revenue in each state for a district with 10% poverty (roughly average for U.S. Census poverty rates). The point here is to compare a truly comparable state and local revenue figure, corrected for poverty variation, regional wage variation, economies of scale and population density. Here, we see that Utah and Tennessee (again) are standouts, having the lowest state and local revenue per pupil. Recall that both are also low- to very-low-effort states. Their revenue to districts is not low because they are poor, but rather because they don't put up the effort. But hey, they've got great data!!!!!

Another relevant "effort"-related point to consider is just how many children of school age in the state are actually even served by the public system. If we were discussing child health care across states, or even pre-school, we would most certainly consider the extent of "coverage." We tend to ignore "coverage" in K-12 education because we too often assume near-universal coverage. But that's not the case. And coverage varies widely across states. Here, I measure coverage as the percent of 6 to 16 year olds (from the 2007 American Community Survey) enrolled in public schools.
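The coverage measure described above can be sketched as follows. The counts here are hypothetical placeholders, not actual ACS figures for any state:

```python
def public_school_coverage(public_enrollment_6_16: int, population_6_16: int) -> float:
    """Percent of school-age (6 to 16) children enrolled in public schools,
    with charter schools counted as part of the public system."""
    return 100.0 * public_enrollment_6_16 / population_6_16

# Hypothetical state: 790,000 public enrollees out of 1,000,000
# children aged 6 to 16.
print(round(public_school_coverage(790_000, 1_000_000), 1))  # 79.0
```

A state in this position would fall below the 80% threshold flagged in the next paragraph, even before asking anything about outcomes for the children it does serve.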

Not only are Louisiana and Delaware very low in their effort for schools, and Louisiana low on outcomes, both are also very low on coverage. They don't even serve 80% of 6 to 16 year olds in their public school systems (remember, charter schools are part of the public system)!!!! Yet somehow, having good data on those who remain in the public system is treated as a substitute, making the state worthy of praise!!!!!

One might speculate that these differences are mainly about the wealth of states, especially when it comes to the ability of states to spend on their schools and the outcomes achieved in those schools. This is indeed true to a significant extent. But, as it turns out, the effort a state puts up toward public school spending is actually more strongly related than wealth (per capita gross state product) to predicted state and local revenues per pupil. That is, states that put up more effort do raise more per pupil for their schools. Yes, states like Mississippi are at a disadvantage because they lack wealth. Tennessee and Utah have much less excuse! Delaware's unique economic position allows it to raise significant revenue with little effort.

Finally, the effort –> revenue relationship would be of little consequence if it were not also the case that the predicted state and local revenue differences across states are associated with those pesky NAEP outcomes. Yes, there does exist a modest relationship (with many entangled underlying factors) between state and local revenues and NAEP outcomes.

There is indeed a lot tangled up in the various relationships presented above. But one thing is clear – DATA QUALITY ALONE PROVIDES LITTLE USEFUL INFORMATION ABOUT THE QUALITY OF A STATE'S EDUCATION SYSTEM! Our obsession with comparing states on this basis has caused us, and policymakers, to take our eyes off the ball (former tennis coach speaking here!). Applauding states and financially rewarding them (RttT) merely for collecting better data, with little attention to the actual school systems and the children served (or not served) by those systems, is, at best, disingenuous.

To quote John McEnroe – You cannot be serious!

Published by schoolfinance101

Bruce Baker is a Professor in the Graduate School of Education at Rutgers, The State University of New Jersey. From 1997 to 2008 he was a professor at the University of Kansas in Lawrence, KS. He is lead author, with Preston Green (Penn State University) and Craig Richards (Teachers College, Columbia University), of Financing Education Systems, a graduate-level textbook on school finance policy published by Merrill/Prentice-Hall. Professor Baker has written a multitude of peer-reviewed research articles on state school finance policy, teacher labor markets, school leadership labor markets, and higher education finance and policy. His recent work has focused on measuring cost variations associated with schooling contexts and student population characteristics, including ways to better design state school finance policies and local district allocation formulas (including Weighted Student Funding) to better meet the needs of students. Baker and Preston Green of Penn State University co-authored the chapter on Conceptions of Equity in the recently released Handbook of Research in Education Finance and Policy, the chapter on the Politics of Education Finance in the Handbook of Education Politics and Policy, and the chapter on School Finance in the Handbook of Education Policy of the American Educational Research Association. Professor Baker has also consulted for state legislatures, boards of education and other organizations on education policy and school finance issues, and has testified in state school finance litigation in Kansas, Missouri and Arizona. He is a member of the Think Tank Review Panel, a group of academic researchers who conduct technical reviews of publicly released think tank reports on education policy issues.
