A common assertion among critics of public education is that public schools simply spend too much on administration and too little “in the classroom.” This was the basis, for example, of a reform promoted a few years back called the 65 percent solution (which turned out to be a divisive scam, as exposed by reporters from the Austin American-Statesman in Texas). One reader of my blog recently commented that he or she thought Bob Bowdon was in fact making a reasonable argument that the problem with public schools is administrative bloat, or the “blob,” as others have called it. I do not know whether Bowdon explicitly makes this particular argument. But it again raises the question of whether the argument even makes sense in light of what the best empirical research actually says on this topic.
Here’s an excerpt of a literature review from one of my recent articles in Educational Policy (detailed citations available on request):
The core assumption of the 65% solution is that increasing the share of spending in areas labeled as “instruction” will improve student outcomes without increasing overall levels of education spending. Implicit in this argument, and highlighted by some anecdotal examples provided on the FCE web site, is the notion that schools are presently wasting too much money in areas such as administration. FCE argues that districts can reallocate that money to instruction. In the 1990s, while schools endured the aftermath of A Nation at Risk and the subsequent criticisms of rising education spending and stagnant outcomes, many policy analysts conducted studies on education spending. These studies treated it as a foregone conclusion that central administrative expenses were necessarily inefficient and therefore harmful to students, and that higher percentages of dollars allocated “to the classroom” were efficient and therefore beneficial to students. Programmers developed software for school districts to track dollars to the classroom, and studies reported instructional expenditures in New York City schools at only 21.9% in an attempt to validate the inefficiency of large urban school districts (Speakman et al., 1996). However, few methodologically strong studies were able to directly link student outcomes to the ratio of resources districts allocated to administrative and other non-instructional expenses versus classroom instructional expenses.
A significant point of confusion in the literature on instructional spending relates to the difference between instructional spending levels and instructional spending as a share of total spending. For example, proponents of the 65% Solution point to a policy brief prepared for Texas legislators (Patterson, 2005) citing the research of Wenglinsky (1997) as finding a positive relationship between instructional spending and student outcomes. Wenglinsky, however, does not evaluate tradeoffs between instructional and other spending and outcomes, but rather finds that increases in either instructional or administrative spending, both of which appear related to increased overall staffing and class-size reduction, lead to improved educational outcomes.
Like Wenglinsky (1997), Ferguson and Ladd (1996) find in Alabama that instructional spending has a positive effect on test scores. Using data from Oklahoma school districts, Jacques and Brorsen (2002) evaluate the effects of spending levels on student outcomes across a variety of categories, finding that “Test scores were positively related to expenditures on instruction and instructional support, and are negatively related to expenditures on student support, such as counseling and school administration.” (p. 997) The authors raise concerns, however, about deriving causal implications from their findings, noting: “It could be that schools with problems hire more administrators and counselors.” (p. 997) Taken together, these findings suggest that when policy makers add new money to education systems, directing that money to instructional areas while holding other areas constant may improve outcomes. In each case, however, researchers evaluated the level of resources allocated to schools, not tradeoffs or potential reallocation of existing levels of resources. A core tenet of both the 65% and 100% solutions is not that states raise the level of funding for schools, but rather that lawmakers require districts to reallocate existing funds.
Bedard and Brown (2000), in an unpublished working paper, attempt the leap from evaluating levels of spending across categories to evaluating relative proportions, and find that reallocation from administration specifically toward classroom instruction might lead to improved outcomes: “Either the reallocation of $100 from administrative to classroom spending, with no change in overall expenditures, or an $100 increase aimed directly at the classroom moves the average California high school approximately 5 percentage points higher in the state test score rankings.” (p. 1) But Taylor, Grosskopf, and Hayes (2007), also in an unpublished working paper, use data on Texas schools to test the 65% solution directly and find that “the analysis suggests that schools that spend a larger share of their budgets on instruction are significantly less efficient than other public schools.” (p. 1)
Two other published, peer-reviewed studies that specifically examine the relationship between administrative expenses and student outcomes also yield conflicting findings. In one, Brewer (1996) found little relationship between non-instructional expenses and student outcomes. Marlow (2001), contrasting with Brewer’s findings to an extent, found that: “While numbers of teachers do not influence performance measures, numbers of administrators are shown to positively affect performance — results that suggest that too many teachers, but too few administrators, are employed.”
Finally, Huang and Yu (2002) combine NAEP data with NCES Common Core expenditure data to evaluate whether current expenditures per pupil and/or the difference between an individual district’s instructional spending rate and the state average instructional spending rate (called DDR in their study) relate to student outcomes in 1990, 1992, and 1996. The authors found overall positive effects of current spending on outcomes, but “Net of relevant district factors, DDR was found unrelated to districts’ average 8th grade math performance.” This test is similar to testing whether districts above or below a 65% instructional spending threshold perform better or worse. The difference is that each district’s instructional share is benchmarked against its own state mean.
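For concreteness, the two measures discussed above can be sketched in a few lines of Python. This is purely an illustration: the function names and all dollar figures below are hypothetical, not drawn from Huang and Yu’s data or any actual district.

```python
# Hypothetical sketch contrasting the two benchmarks discussed above:
# the fixed 65% instructional-share threshold promoted by 65% Solution
# proponents, vs. Huang and Yu's DDR, which benchmarks each district
# against its own state-average instructional share.
# All names and figures here are illustrative, not actual data.

def instructional_share(instructional_spending, total_spending):
    """Instructional spending as a fraction of total spending."""
    return instructional_spending / total_spending

def meets_65_percent_threshold(share):
    """Fixed-cutoff test: does the district hit the 65% target?"""
    return share >= 0.65

def ddr(share, state_avg_share):
    """DDR: district's instructional share minus its state's average share."""
    return share - state_avg_share

# A hypothetical district spending $6.2M of a $10M budget on instruction,
# in a state whose average instructional share is 61%:
share = instructional_share(6_200_000, 10_000_000)   # 0.62
print(meets_65_percent_threshold(share))             # False: below the fixed cutoff
print(round(ddr(share, 0.61), 2))                    # 0.01: above its state's mean
```

The point of the contrast is that the same district can fail the fixed 65% test while still spending a larger instructional share than is typical in its own state, which is the comparison the DDR measure captures.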
So, as it turns out, the best empirical research on this topic (in which I would include Lori Taylor’s work) tends not to show negative effects of administrative expense, or positive effects of instructional expense, on student outcomes when these are addressed as internal shares of total budgets.
But what about New Jersey – that high-spending, heavy-administrative-blob state? And especially those Abbott districts? Well, as I have discussed in previous posts, it turns out that New Jersey administrative salaries are actually relatively non-competitive when compared with (a) private school heads within New Jersey or (b) superintendents in other states like Illinois or Texas. I have also shown that Abbott district administrative shares and administrative expenses per pupil are in line with those of other New Jersey districts. My forthcoming research also shows that private independent schools spend much larger shares on administration than public schools in the same state. (A post on this is forthcoming.)
I have not yet checked total administrative shares in NJ versus other states… but you see… the literature suggests that administrative shares of spending are not, in fact, a huge drag on student outcomes. Rather, one might counter, based on the available information, that higher administrative expenses, and specifically higher administrative salaries, might be warranted in New Jersey in order to begin attracting a stronger leadership pool – especially in the schools and districts where they are most needed.