Education Week Grading System Gets a Failing Grade

Posted on January 8, 2009



It’s that time of year again. Time for Education Week’s Quality Counts to grade the states on a range of education policy issues – from accountability systems to school finance systems. But, once again, the Quality Counts ratings of state school finance policies reflect little understanding of what today’s state school finance systems are trying to accomplish, or of better methods for evaluating them. We have been working diligently to develop an alternative set of indicators to be released in the near future. I will lay out my critique of the Ed Week indicators here without divulging too much detail about our alternatives – yet.

Here is a blurb I wrote a short while ago in which I lay out my initial critique of two popular state school finance rating systems:

====

Two existing reports are disseminated annually and highly publicized. The first is the Education Trust Funding Gap report, which focuses on characterizing the differences in average per pupil state and local revenues between high- and low-minority-concentration districts and between high- and low-poverty-concentration school districts within states. The report appears to have significant traction in policy circles but is methodologically problematic and deceptive in a number of ways. First, the report calculates its funding gaps with respect to “need adjusted” estimates of state and local revenues per pupil. To generate these need-adjusted estimates, the authors must adopt a set of weights prescribing how much more a child from an impoverished background is expected to need and how much more a child with disabilities is expected to need. That is, the method requires an a priori assumption about the magnitude of differential need for certain populations.
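To make the issue concrete, here is a minimal sketch of that kind of need-weighting, with an assumed poverty weight and hypothetical district figures – these are not Education Trust’s actual weights or data:

```python
# Sketch of a need-weighted per-pupil revenue calculation, in the spirit of a
# Funding Gap-style adjustment. The 40% poverty weight and the district data
# below are hypothetical assumptions, not Education Trust's actual figures.

POVERTY_WEIGHT = 0.40  # assumed extra "need" per child in poverty

districts = [
    # (name, state & local revenue, enrollment, children in poverty)
    ("Low-poverty suburb", 24_000_000, 2_000, 100),
    ("High-poverty city",  55_000_000, 5_000, 2_500),
]

for name, revenue, enrollment, poor in districts:
    weighted_pupils = enrollment + POVERTY_WEIGHT * poor
    raw = revenue / enrollment
    adjusted = revenue / weighted_pupils
    print(f"{name}: ${raw:,.0f} per pupil raw, ${adjusted:,.0f} per need-weighted pupil")
```

The point is simply that the “adjusted” figure – and therefore the size of the reported gap – depends entirely on the weight one assumes up front.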

Second, the Funding Gap report entirely overlooks other major factors that affect the costs of providing equal educational opportunity, leading to misinformed conclusions. For example, the report fails to account for differences in costs associated with economies of scale and the interplay between district size and poverty distributions within states across small rural and larger urban and suburban districts. In Kansas, for instance, many very small rural districts show elevated poverty rates even when compared to the state’s poor urban districts. Small districts in Kansas also have much higher revenues per pupil, a function of a state aid formula that favors small districts (but not necessarily high poverty districts). In many years of the Education Trust report, the higher revenues of small Kansas districts have produced a favorable gap in poverty-related funding, in contrast to the state’s large unfavorable gap in minority-related funding. Education Trust has acknowledged this apparent discrepancy and its cause but has not attempted to reconcile it in subsequent reports.

Education Week also publishes equity analyses of state school finance data in its annual report, Quality Counts. Education Week likewise adopts a set of a priori “cost/need” adjustment factors (for student characteristics and for regional wage variation) and then, from its cost-adjusted revenue measure, calculates a series of standard school finance equity indices – including coefficients of variation and McLoone Indices – to characterize each state’s school finance data. These analyses can also lead to misinformed conclusions. For example, a McLoone Index measures the extent to which the average resources of the lower half of the students in a system approach the median level of resources. Education Week uses this index as an Adequacy measure. States like New York and Kansas, which have very large minority funding gaps in the Education Trust report, often have among the best McLoone Indices in the Education Week report – precisely because all of those states’ poorest minority students are clustered in one or a handful of districts whose revenues fall in the lower half of the distribution and include or approach the median.
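For readers unfamiliar with the index, here is a minimal sketch of a pupil-weighted McLoone Index, with hypothetical districts chosen to show how one very large, relatively low-spending district sitting near the median pushes the index toward 1.0:

```python
# Minimal sketch of a pupil-weighted McLoone Index: total spending on pupils
# at or below the median per-pupil level, divided by what that spending would
# be if all of those pupils were funded at the median. Districts below are
# hypothetical.
import numpy as np

def mcloone_index(per_pupil_spending, enrollment):
    spend = np.asarray(per_pupil_spending, dtype=float)
    pupils = np.asarray(enrollment, dtype=float)
    order = np.argsort(spend)
    spend, pupils = spend[order], pupils[order]
    # per-pupil spending of the median pupil (pupil-weighted median)
    median = spend[np.searchsorted(np.cumsum(pupils), pupils.sum() / 2)]
    lower = spend <= median
    return (spend[lower] * pupils[lower]).sum() / (median * pupils[lower]).sum()

# One huge, relatively low-spending district near the median -> index of ~0.996,
# no matter how poor that district's students are.
spend = [9_000, 9_500, 10_000, 11_000, 12_000]
pupils = [30_000, 400_000, 50_000, 50_000, 50_000]
print(round(mcloone_index(spend, pupils), 3))
```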

To illustrate the potential negative impact of these two reports: in 2003, in the context of state school finance litigation in Kansas, attorneys for the State submitted in defense of the school funding formula both the Education Trust finding that higher poverty districts had higher revenue per pupil and the Education Week finding that Kansas showed a good McLoone Index. The state’s attorneys and local news outlets did not understand why Kansas received good ratings on these indices, nor did they care, as long as those indices came from highly publicized, publicly recognized sources. Plaintiffs pointed out that the Education Trust finding was not a function of systematic poverty-related support but rather a function of small rural school support, which left out the poorer urban and large-town districts, and that the “good” McLoone Index was a function of having nearly half of the state’s children, and nearly all of the state’s poor minority children, attending six districts with below-average revenues. These points were difficult to make in the face of media accolades for the state’s supposed achievements regarding school funding equity and adequacy. The district court, and eventually the Supreme Court of Kansas, declared the state school finance system unconstitutional, but not without at least a few vocal critics chastising judges who would give the legislature a failing grade for a school finance system that had received a grade of “B” from a leading national media outlet.

======

Education Week’s 2009 version uses pretty much the same indicators as the reports I previously critiqued, including the McLoone Index and the Coefficient of Variation. And again, states like New York, where large shares of poor and minority children are clustered in a single, relatively underfunded school district, do quite well on the McLoone Index. Meanwhile, states that have aggressively differentiated funding to meet varied needs and costs across districts, like New Jersey, fare poorly on the Coefficient of Variation – a measure of raw variation in spending levels across districts, without sufficient accounting for need and cost variation among those districts. The bottom line is that there are two types of variation in resources across public school districts – good variation (related to costs and needs) and bad variation (related to differences in wealth and/or fiscal capacity among districts). A good school finance system may, and arguably should, have a great deal of variation in resources across schools and districts, to the extent that needs and costs vary across schools and districts.
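For reference, the Coefficient of Variation is simply the (pupil-weighted) standard deviation of per pupil spending divided by the mean, so it rises whether the underlying variation is good or bad. A quick hypothetical sketch:

```python
# Sketch of a pupil-weighted coefficient of variation (std / mean) of per-pupil
# spending. The two hypothetical "states" below spend the same on average, but
# the one that deliberately targets extra aid to its high-need district shows
# the larger CV -- the index cannot tell good variation from bad.
import numpy as np

def coefficient_of_variation(per_pupil_spending, enrollment):
    spend = np.asarray(per_pupil_spending, dtype=float)
    weights = np.asarray(enrollment, dtype=float)
    mean = np.average(spend, weights=weights)
    std = np.sqrt(np.average((spend - mean) ** 2, weights=weights))
    return std / mean

flat_state     = ([10_000, 10_000, 10_000], [50_000, 50_000, 50_000])
targeted_state = ([9_000, 10_000, 13_000], [50_000, 50_000, 50_000])  # extra aid to high-need district
print(coefficient_of_variation(*flat_state))      # 0.0
print(coefficient_of_variation(*targeted_state))  # ~0.16
```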

Education Week also attempts to compare district spending in each state to national average spending – using the percent of children in districts at or above the national average as an “adequacy benchmark.” But just as districts within states vary, so too do states. The average poverty level in some states is much higher than in others, and the concentrations of limited English proficient children are also higher. Further, some states have far more children served in concentrated urban poverty settings, while others have far greater shares of children served in remote rural settings and necessarily small schools. All of these factors affect the costs of providing an equitable and adequate education.
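The benchmark itself amounts to a simple pupil-weighted share, something like the sketch below (all figures hypothetical), which is precisely why it cannot account for the differences in need and cost just described:

```python
# Sketch of an Ed Week-style "adequacy" benchmark: share of a state's pupils
# enrolled in districts spending at or above the national average per pupil.
# The national average and the district figures are hypothetical placeholders.
NATIONAL_AVG = 10_000  # assumed national average per-pupil spending

districts = [
    # (per-pupil spending, enrollment)
    (9_200, 120_000),   # large, high-poverty urban district
    (11_500, 40_000),
    (12_000, 30_000),
]

pupils_at_or_above = sum(n for spend, n in districts if spend >= NATIONAL_AVG)
total_pupils = sum(n for _, n in districts)
print(f"{100 * pupils_at_or_above / total_pupils:.0f}% of pupils at or above the national average")
# ~37%, with no adjustment for the state's poverty, wage, or scale conditions
```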

Our general strategy for rating state school finance systems is (a) to evaluate whether those systems do, in fact, target greater resources to school districts with greater costs associated with student needs such as poverty (controlling for all of the other cost factors noted above), and (b) to estimate the expected per pupil state and local revenue for a district with X, Y and Z characteristics in each state – that is, the predicted level of resources for a school district of 2,000 students in an average-wage labor market, at a specific poverty level, and with other specified student needs.
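Purely as an illustration of the general idea – made-up data and variable names, not our actual specification – the kind of within-state model involved might look like this:

```python
# Illustrative sketch only: regress log per-pupil state & local revenue on
# poverty, enrollment (scale), and a regional wage index, then predict revenue
# for a standardized district (2,000 pupils, average-wage labor market) at a
# chosen poverty rate. The data below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
districts = pd.DataFrame({
    "poverty": rng.uniform(0.02, 0.40, n),        # census poverty rate
    "enroll": rng.integers(200, 50_000, n),       # district enrollment
    "wage_index": rng.normal(1.0, 0.1, n),        # regional wage index
})
# Hypothetical revenue process: mildly regressive with respect to poverty.
districts["rev_pp"] = np.exp(
    9.2 - 0.3 * districts["poverty"]
    - 0.05 * np.log(districts["enroll"])
    + 0.5 * np.log(districts["wage_index"])
    + rng.normal(0, 0.05, n)
)

model = smf.ols(
    "np.log(rev_pp) ~ poverty + np.log(enroll) + np.log(wage_index)",
    data=districts,
).fit()

def predicted_revenue(poverty_rate, enroll=2_000, wage_index=1.0):
    """Predicted per-pupil revenue for a standardized district."""
    newdata = pd.DataFrame(
        {"poverty": [poverty_rate], "enroll": [enroll], "wage_index": [wage_index]}
    )
    return float(np.exp(np.asarray(model.predict(newdata)))[0])

print(predicted_revenue(0.20))  # "relative adequacy" at 20% census poverty
```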

Without giving you the technical details, here’s how our factors line up with Ed Week’s 2009 grades.

This first figure shows the average “poverty sensitivity,” or progressiveness, scores for states grouped by their Ed Week grades – that is, whether a state provides, on average, more or fewer resources to higher poverty districts after controlling for other factors. A progressiveness index below 1.0 indicates “regressive” funding (higher funding in lower poverty districts), and an index above 1.0 indicates “progressive” funding. Interestingly, Ed Week’s highest grades went to states that were, on average, “regressive.”
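In terms of the illustrative model sketched above, a progressiveness index of this kind is essentially a ratio of predicted revenues at a higher versus a lower poverty rate – again, the poverty cutoffs here are hypothetical, not our exact computation:

```python
# Progressiveness as a ratio of predicted per-pupil revenue at a higher versus
# a lower poverty rate, reusing the hypothetical model and predicted_revenue()
# from the sketch above. Below 1.0 is "regressive"; above 1.0 is "progressive".
progressiveness = predicted_revenue(0.30) / predicted_revenue(0.10)
print(f"progressiveness index: {progressiveness:.2f}")
# With the made-up data above (a -0.3 poverty coefficient), this is roughly 0.94.
```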

[Figure 1: Average progressiveness (poverty sensitivity) scores by Ed Week grade]

This second figure provides the average “predicted spending” for a district with 20% poverty (census poverty), holding other cost factors constant. It’s a sort of “relative adequacy” measure, focused on relatively high poverty (but comparable across states) districts. Here, too, we see that the relative adequacy of funding in the states receiving the highest grades from Ed Week is lower than in states receiving some lower grades.

[Figure 2: Average predicted spending for a district with 20% poverty by Ed Week grade]

So, if Ed Week’s grading system (a) doesn’t capture the extent to which state school finance systems target resources to those districts where they are needed most, and (b) doesn’t capture the true “relative adequacy” of resources across states (accounting for wage, scale and need differences), then what does it capture? I’m just not sure.

Cheers!
