The Offensively Defensive Ideology of Charter Schooling

There now exists a fair amount of evidence that charter schools in many locations, especially high performing charter schools in New Jersey and New York, tend to serve much smaller shares of low income, special education, and limited English proficient students (see various links that follow). And in some cases, high performing charter schools, especially charter middle schools, experience dramatic attrition between 6th and 8th grade, often the same grades over which student achievement climbs, suggesting that a “pushing out” form of attrition partly accounts for charter achievement levels.

As I’ve stated many times on this blog, the extent to which we are concerned about these issues is a matter of perspective. It is entirely possible that a school – charter, private or otherwise – can achieve not only high performance levels but also greater achievement growth by serving a selective student population, including selection of students on the front end and attrition of students along the way. After all, one of the largest “within school effects on student performance” is the composition of the peer group.

From a parent (or child) perspective, one is relatively unconcerned whether the positive school effect is a function of peer group selectivity and attrition, so long as there is a positive effect.

But, from a public policy perspective, the model is only useful if the majority of positive effects are not due to peer group selectivity and attrition, but rather to the efficacy and transferability of the educational models, programs and strategies. To put it very bluntly, charters (or magnet schools) cannot dramatically improve overall performance in low income communities by this approach, because there simply aren’t enough less poor, fluent English speaking, non-disabled children to go around. They are not a replacement for the current public system, because their successes are in many cases based on doing things they couldn’t if they actually tried to serve everyone.

Again, this is not to say that some high performing charters aren’t essentially effective magnet school programs that do provide improved opportunities for a select few. But that’s what they are.

But rather than acknowledging these issues and recognizing charters and their successes for what they are (or aren’t), charter pundits have developed a series of very intriguing (albeit largely unfounded) defensive responses (read: excuses) to the available data. These include the arguments that:

  1. Lotteries don’t discriminate and charters have to use lotteries, therefore they couldn’t possibly discriminate!
  2. Charters only appear to have fewer children with disabilities because they actually just provide better, more inclusive programming and choose not to label kids who would get labeled in the public system! In particular, charters do so much better at early grades interventions that they keep kids out of special education in later grades!
  3. While one might think charters are advantaged by having fewer low income children, in reality, charters suffer significantly from “negative selection.” That is, the parents who choose charters are invariably the parents of kids who are having the most trouble in the public system.
  4. While it appears that charter middle schools have high rates of attrition between 6th and 8th grade, all schools really do. Charters are no different.
  5. The data are always biased against charters and never in their favor on these issues.

The foundation for these arguments is flimsy in some cases, and manipulative in others.


1. Lotteries don’t discriminate

True, lotteries alone don’t, and really can’t, discriminate. They are random draws. Among those students whose parents enter them into a lottery for a specific school, those who get picked should be comparable to those who don’t get picked. But that does not by any stretch of the imagination – or by much of the available data – mean that those who end up in charter schools through the lottery system are in any way representative of students who live in the surrounding neighborhoods or attend traditional public schools in the local district.

In other words:

 Lotteried In = Lotteried Out

 Not the same as:

 Charter School Enrollment = Nearby Public School Enrollment

Why aren’t these the same? Well, those who enter the lottery to begin with are only a subset of those who might otherwise attend the local public schools. That subset can be influenced by a number of things, including quite simply, the motivation of a parent to sign up for the lottery, or parental impression regarding the “fit” of the school to the child. So, if the lottery pool is selective, then those lotteried into charters are merely a random group of the selective group.
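The logic can be made concrete with a small simulation. This is purely an illustrative sketch with invented numbers – the 80% neighborhood poverty rate and the assumed differential sign-up rates are hypothetical, not estimates from any actual charter study:

```python
import random

random.seed(0)

# Hypothetical neighborhood: 80% of children are low income (invented figure).
neighborhood = [1 if random.random() < 0.80 else 0 for _ in range(100_000)]

# Self-selection into the applicant pool: assume, purely for illustration,
# that low income families are half as likely to enter the lottery.
applicants = [x for x in neighborhood
              if random.random() < (0.10 if x == 1 else 0.20)]

# The lottery itself is a fair random draw among applicants.
random.shuffle(applicants)
half = len(applicants) // 2
winners, losers = applicants[:half], applicants[half:]

def pct_low_income(group):
    return sum(group) / len(group)

print(f"neighborhood low income share: {pct_low_income(neighborhood):.2f}")  # ~0.80
print(f"lottery winners:               {pct_low_income(winners):.2f}")       # ~0.67
print(f"lottery losers:                {pct_low_income(losers):.2f}")        # ~0.67
```

The winners and losers come out statistically indistinguishable from one another, just as the lottery studies find – yet both differ sharply from the neighborhood, because the selection happened before the lottery ever ran.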

Pundits frequently point to lottery based studies of charter school effects to make their case that lotteries don’t discriminate and that therefore charter schools serve the same students as traditional public schools.

Richard Ferris and I, in our recent study of New York City charters, note:

As one would expect, Hoxby found no differences between those who were randomly selected and those who entered the lottery but were not selected. This is not the same, however, as saying that the overall population in the charter schools is demographically similar to comparison groups of non-charter public school students. While they do compare the demographics of the charter “applicant pool” to those of the city schools as a whole (see Hoxby’s Table IIA, page II-2), they never compare charter enrollment demographics with those of the nearest similar schools or even schools citywide serving the same grade ranges.

2. Charters are just better at dealing with children with disabilities in their regular programs and therefore don’t classify them

This story takes two different forms:

Version 1: Charters simply don’t identify kids because they provide better inclusive programming

This is perhaps conceivable when addressing children with mild specific learning disabilities and/or mild behavioral problems, but much less likely to be the case where more severe disabilities are concerned. In New Jersey and in New York City, many charter schools serve few or no children with disabilities. This can only be accomplished if the only children with disabilities present to begin with were those with the mildest disabilities – making declassification reasonable. Perhaps more importantly, while charter advocates make this claim, I am aware of no rigorous large scale or even individual case study research that provides any validation of it.

Version 2: Charters provide better early intervention programs such that by third grade, children don’t need to be classified when they reach the grades where they typically would be classified.

I’ve only heard this argument on a few occasions, and it is simply a variation on the first. But it has important additional holes that make it even more suspect. Most notably, very large shares of charter schools – including those with disproportionately low shares of children with disabilities – don’t have lower grades at all, serving only upper elementary through middle grades. In fact, nationally, 44% of charters start after 3rd grade, and in New Jersey, for example, these are the schools with very low rates of children classified for special education services.

Again, while charter advocates make this claim, I am aware of no rigorous large scale or individual case study research that provides any validation of it.

3. Not only do charters not cream skim, they actually are disadvantaged by negative selection!

That is, among poor children or among non-poor children, some statistical models show the average entry performance of those choosing charters to be slightly lower. Actually, the only potential validation I can find of this is from a study of high school charter schools in Florida (and a similar study of high school voucher recipients in Florida), though some other studies speculate about the existence of a small negative selection effect without strong empirical validation.

But even if we see negative selection, as typically reported in these studies, we have to consider what it is that is being reported. Typically, what is being reported is:

Initial Performance of Non-Disadvantaged Students in Charters <= Initial Performance of Non-Disadvantaged Students in Traditional Publics


 Initial Performance of Disadvantaged Students in Charters <= Initial Performance of Disadvantaged Students in Traditional Publics

And similarly across other categories of student needs (to the extent they attend charters). This could be problematic for making statistical comparisons where one is only able to control for various disadvantages but not to capture the fact that there may be some “negative selection” within these groups (lower initial performance). That would create model bias that works to the disadvantage of charters.

But that’s not what the pundits are claiming. This punditry is rather like the punditry about lotteries not discriminating. The above comparisons do not address the simpler issue of:

% Disadvantaged in Charters < % Disadvantaged in Traditional Public Schools

Rather, they compare initial achievement only among subgroups.

If the traditional public school is 90% low income and 10% non-low income, and the charter school is only 50% low income and 50% non-low income, the populations are still different – significantly and substantially. The entry performance of the charter’s 50% low income is being compared to the entry performance of the 90% low income in the traditional public school. But this does not address the fact that the schools are, overall, very different, and that the average entry performance of the populations overall is very different. That is, cream-skimming is indeed occurring on the basis of income and of other factors, and as a result on the basis of entry performance across all groups, but charters aren’t necessarily getting the strongest students within those groups.
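A quick back-of-the-envelope calculation, using entirely invented scores, shows how both claims can be true at once: charter entrants can lag their within-group peers (“negative selection”) while the charter’s overall entering class still outscores the traditional public school’s, simply because of composition:

```python
# All numbers below are invented for illustration; scores on a 0-100 scale.
# Within each income group, charter entrants score slightly LOWER than their
# traditional public school (TPS) counterparts: "negative selection."
entry_scores = {
    "low_income":     {"tps": 40.0, "charter": 38.0},
    "non_low_income": {"tps": 60.0, "charter": 58.0},
}

# But the two schools enroll very different mixes of the two groups.
shares = {
    "tps":     {"low_income": 0.90, "non_low_income": 0.10},
    "charter": {"low_income": 0.50, "non_low_income": 0.50},
}

def overall_entry_mean(school):
    # Composition-weighted average of the group means.
    return sum(shares[school][g] * entry_scores[g][school] for g in entry_scores)

tps_mean = overall_entry_mean("tps")          # 0.9 * 40 + 0.1 * 60 = 42.0
charter_mean = overall_entry_mean("charter")  # 0.5 * 38 + 0.5 * 58 = 48.0
print(tps_mean, charter_mean)
```

Within each subgroup the charter looks “negatively selected,” yet its overall entering cohort scores six points higher – the subgroup comparison simply never sees the compositional difference.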


4. Traditional public schools have attrition too

This is largely true, but with a few qualifiers attached. In general, children residing in lower income communities tend to make more unplanned moves from school year to school year and even during school years. So, mobility is a problem in high poverty settings and it is perhaps reasonable to assume that these poverty induced – housing disruption induced – mobility patterns affect both traditional public school and charter students in some settings.  But, this is only one component of mobility and attrition in the urban schooling setting.

This has been a hot topic lately, in part because of a report released by Gary Miron, which used national school enrollment data to look at attrition patterns in KIPP middle schools. Many who immediately shot back at Miron cited the KIPP study done by Mathematica, which was able to more precisely address which students were “retained” versus which actually left. Of course, Gary Miron also cited this study and acknowledged that it had greater precision in some respects, but further explained how, in his own calculations, it was simply infeasible that all of the attrition could be explained by retention – that is, that the entire difference between the size of the 8th grade cohorts and 6th grade cohorts could be attributed to holding kids back in 6th grade. Unfortunately, while the original Mathematica KIPP study provided some additional insights, it did not provide sufficient disaggregation or precision in explaining the different types of mobility and attrition occurring across KIPP and nearby public schools.

Mathematica subsequently released a more detailed descriptive analysis of student mobility and attrition, which did largely confirm similar aggregate rates of attrition between KIPP and matched public schools. But, while this study does allay some of the concerns regarding perceptions of attrition in KIPP schools, further untangling of inter-school within district mobility is warranted, and the findings that pertain to KIPP middle schools in the Mathematica analysis do not necessarily pertain to any and all charter schools or host districts showing comparable attrition rates.

5. The Data are Always/Only Biased against Charters (never in their favor)

This is one of my favorites because I love data, but recognize their fallibility. The data are what they are. There may be explanations for why one set of schools is more or less likely to have accurate data than another, and why these differences may compromise comparisons. But the data are what they are, with all relevant caveats attached.  What is NOT reasonable is to use the existing data to make a comparison, find that the result isn’t what you wanted it to be, and then explain why the data aren’t what they are… but do so without alternative data.

For example, it is unreasonable to compare host district rates of special education classification with charter special education classification, find that charters have far fewer classified students, and then only provide reasons why the charter classification rates must be wrong… implying that despite what the data say… there really aren’t differences in classification rates… or in ELL/LEP concentrations… or in low income student concentrations. Yes, there may be problems with the data, but data-free speculation about those problems, with corrections applied only in charters’ favor, is unhelpful and dishonest.

Hoxby & Murarka spend two pages here making arguments for why the dramatically lower reported rates of special education and ELL students in New York charter schools simply must be wrong – systematically under-reported. While some of their arguments may be true and seem reasonable, there is no clear evidence to support their implied argument that in spite of the data, we should assume that charters are actually comparable to traditional public schools. Rather, the data they use show a finding they don’t like – a finding that NYC charters appear to under-serve ELL children and children with disabilities.

One example of a common data bias that does cut the other way, as I’ve shown on multiple occasions, occurs when comparing rates of low income students in charters and traditional public schools if only comparing those who qualify for “free or reduced price lunch.” When this measure is used alone, charters often do look the same as nearby traditional public schools (at least in NY and NJ). But, when a lower income threshold is used, we see that charters actually serve far fewer of the poorer students.  The “free or reduced lunch” data are insufficient for the comparison, and the bias makes charters look more comparable than they really are.
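The point is easy to illustrate with hypothetical numbers. Here two schools are identical on the combined “free or reduced” measure, yet very different once free lunch (the deeper poverty category, for families below 130% of the federal poverty level, versus 185% for reduced price) is separated out – all shares below are invented for illustration:

```python
# Hypothetical compositions: both schools report the SAME combined share of
# students qualifying for free or reduced price lunch (FRL), but the split
# between free lunch (deeper poverty) and reduced price lunch differs sharply.
composition = {
    "tps":     {"free": 0.68, "reduced": 0.12},
    "charter": {"free": 0.40, "reduced": 0.40},
}

# The blunt combined measure makes the schools look identical.
combined_frl = {s: c["free"] + c["reduced"] for s, c in composition.items()}

# Separating out free lunch reveals the gap the combined measure hides.
free_only = {s: c["free"] for s, c in composition.items()}

print(combined_frl)  # both ~0.80 on the combined measure
print(free_only)     # tps 0.68 vs charter 0.40 on free lunch alone
```

This is why the choice of poverty measure matters so much: the combined measure tops out quickly in high poverty urban settings and stops differentiating exactly where the differentiation matters.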

Oh, and finally: Charter schools are public schools!  Or are they?

Charter pundits get particularly irked when anyone expresses as a dichotomy “charter schools vs. public schools,” referring to charter schools versus “traditional” district schools. Charter pundits will often immediately interrupt to correct the speaker’s supposed error, proclaiming ever so decisively – “let’s get this straight first – CHARTER SCHOOLS ARE PUBLIC SCHOOLS!”

Well, at least in terms of liability under Section 1983 of Title 42 of the U.S. Code, in cases involving employee dismissal (and deprivation of liberty interests without due process), the 9th Circuit Court of Appeals has decided that charter schools are not state actors. That is, at least in some regards, they are not public entities, even if they provide a “public” service. Or at least the companies responsible for managing them and their boards of directors are not held to the same standards as official state actors – public officials and/or employees.

 Horizon is a private entity that contracted with the state to provide students with educational services that are funded by the state. The Arizona statute, like the Massachusetts statute in Rendell-Baker, provides that the sponsor “may contract with a public body, private person or private organization for establishing a charter school,” Ariz. Rev. Stat. § 15-183(B), to “provide additional academic choices for parents and pupils . . . [and] serve as alternatives to traditional public schools,” id. § 15-181(A). The Arizona legislature chose to provide alternative learning environments at public expense, but, as in Rendell-Baker, that “legislative policy choice in no way makes these services the exclusive province of the State.”

Merely because Horizon is “a private entity perform[ing] a function which serves the public does not make its acts state action.”



  1. It will be interesting to see how some of the charter groups who are being handed neighborhood schools perform vis-a-vis the prior public district incumbents. These are operators who are given the same building and the same student population. It’s happening a fair amount in Philadelphia. Closest thing to an apples-to-apples comparison, no?

  2. I’m not a lawyer, but the 9th Circuit opinion you cite clearly notes that Arizona law defines charter schools as public schools. It also says, as you point out, that “a private entity may be designated a state actor for some purposes but still function as a private actor in other respects.” As I read it, a charter school board may choose a private entity to manage its school, and this does not then make the private entity a state actor. Many government agencies contract out to get things done; this does not make all of those contractors state actors.

    1. The court goes to great lengths to explain that the mere declaration in the charter statute that charter schools are “public schools” does not necessarily make them public, per se, or “state actors” in every regard. Essentially, the court declares that charters are not public with respect to liabilities under federal laws governing public/government agencies. This is actually a big deal. The court acknowledges that an agency may be public in some regards but not in others. The court, however, is not asked in this case to declare whether charters are public in other regards, but rather whether they are in this particular regard – to which the answer is no. There are a handful of state level cases on other questions. In Ohio, charter boards were required to comply with open meetings laws. But they were willing to argue that they shouldn’t have to, largely on the basis that charter boards are private. Charter operators continue to argue in many states that they should be exempted from a) prevailing wage laws regarding contracts, and b) open records and open public meetings laws, especially regarding contracts. The argument charter operators and management companies are making is that they are private entities and their contractual agreements private contracts. Yet, in the next breath, they argue “but we are public schools.” Perhaps they should clarify and argue that they are private agencies providing a public service. Can’t have it both ways.

  3. This is an interesting post, full of useful data. Thank you for putting it up. I have a couple questions and a couple comments. Questions first.

    You write that the new data from Mathematica “does allay some of the concerns regarding perceptions of attrition in KIPP schools.” I believe that study also addressed the issue of lottery selection bias and found incoming KIPP populations to be statistically similar to populations of matched public school. Have I misunderstood those results? Do you agree with them? Does this allay fears regarding that issue as well?

    I think there was a study on a number of high performing Boston charters that used the lottery losers as the control group and the lottery winners as the test group, and found persistent higher outcomes for the test group– have you seen this study?

    Have you visited any of these schools? If so, I’m curious what your impressions were.

    Ok, comments.

    Having worked in and observed a lot of No-Excuses charters (the model that all or nearly all the high-performing ones follow), I can say, just anecdotally, that there is an effort in many of these schools to avoid creating IEPs (i.e. designating students as learning disabled) because they don’t want students to be stuck in special ed classrooms when they leave the charter. You’re right, of course, that this is only feasible for students with mild disabilities, but students with severe learning disabilities typically require a 12-1-1 classroom, which most of these charters don’t have the resources to provide. This gets into funding questions that I, being a pedagogy & school-culture person, don’t know much about: how does funding for 12-1-1 work? My understanding is that many district schools don’t have these kinds of classrooms either and that students with these kinds of needs have to be bussed to particular schools with those capabilities.

  4. On point 3, you should look at the RAND 2009 study of several jurisdictions. From page 12:

    “In sum, in all but one case (Chicago reading scores, which are virtually identical to the districtwide average), students switching to charter schools had prior test scores that were below districtwide or statewide averages (though usually the difference was small). Compared with their immediate peers in the TPSs they exited, students transferring to charter schools had slightly higher test scores in two of seven locations, while, in the other five locations, the scores of the transferring students were identical to or lower than those of their TPS peers.”

    1. This quote and study are actually quite consistent with my point #3 – that there is no clear systematic evidence of negative selection (but for Florida Charter High Schools). That, for example, compared with their immediate peers, the differences in initial performance were mixed. The overall comparison in Chicago, that students started w/below average scores is different than in some other contexts – Though I’ve not checked back on the overall characteristics of Chicago charter students – whether there is cream-skimming or not on other measures. Perhaps not in Chicago (as this finding would imply). But as noted, even these differences were small. I was fully aware of this study and this particular finding when I wrote the post. And, I was responding mainly to the point that some (not that many, but some who take a particularly defensive stance) repeatedly argue that a large body of research validates a consistent, substantial negative selection effect. It’s really an argument of a sub-cult of pundits who I would argue are being rather unhelpful to those actually trying to run and/or support actual charter schools.

  5. Well, I’d argue that it rather undermines the much more common claim that charter schools are only cream-skimming. That may be true in isolated cases, but not on the whole. (Actually, what’s likely the case is that a few charters are cream-skimming, many are not, and some have hugely negative selection, like all the Texas charter schools designated for “at risk” students, a category that includes former dropouts, drug addicts, homeless students, residents of psychiatric wards, etc.)

  6. Also, I’m having trouble parsing your comment — you seem to say that there was negative selection in Chicago, but nowhere else. But the RAND study was precisely opposite: no negative selection in Chicago, but everywhere else there was negative selection (albeit small) according to test scores.

    1. No, in fact that quote from RAND actually finds mixed results regarding negative selection. Negative selection pertains to the comparison of initial performance of immediate peers. There was no clear evidence of negative selection there. There was a lower average overall performance, indicating the likelihood that compared to district wide averages, there wasn’t cream-skimming. But, that’s not a very precise comparison. We don’t know from this how the kids compared to other TPS in the same neighborhood.

      Your comment on special schools is something that complicates many analyses. Clearly, many of those schools do serve very difficult populations as do alternative traditional public schools. But, those charter schools aren’t typically the schools being applauded as high achievers. You bring up a great bait-and-switch point here though. Many who so vociferously defend charters will include those schools in the mix of comparing student populations in order to show that charters serve difficult populations and special education students, and then will point to achievement findings from KIPP and other high flying charters – implying that they also serve these children- which they do not.

  7. In the majority of jurisdictions studied, students transferring to charter schools had the same or lower test scores when compared to their immediate peers, to the district, or to the state. Not huge differences, but certainly not cream-skimming either, not on average.

    As for KIPP, I’m not sure what you mean by “difficult populations,” but it must not include serving students who are poorer and more likely to be black and who have lower test scores than the district average, because that’s the typical demographic at a KIPP school. (Compared to the feeder elementary schools, however, entering KIPP students have test scores that are pretty much the same on average — which still doesn’t look like cream skimming.) See pages 8 to 14 here:

    1. I was responding specifically to your claim of severe negative selection for schools serving special, disadvantaged populations (as you note: like all the Texas charter schools designated for “at risk” students, a category that includes former dropouts, drug addicts, homeless students, residents of psychiatric wards, etc.). KIPP schools do not serve these populations, and the KIPP studies have not shown significant negative selection (though some have indicated lower than district average starting scores). You’ve pulled the bait and switch yourself here. Homeless children rates at NYC and NJ charters are very low compared to traditional publics in the same neighborhood, and homeless rates are non-existent in high performing NJ charters. I’ve got better things to do now than to continue to humor your twisted musings.

  8. I don’t see any “bait and switch.” I’m just saying that while KIPP obviously isn’t doing the same thing as the Texas charter schools designated for “at risk” students, neither is it obvious that they’re “cream skimming” in any meaningful sense. Their student bodies are: more likely to be black, more likely to be poor, often with lower test scores, less likely to be special ed or English learners. So that looks like negative selection in several ways, and positive selection in other ways.

    So anyone who suggests that KIPP is cream skimming is implying that the latter two factors outweigh the first three by making a student body much easier to educate. But who says that’s the case?

    1. First, you are mixing up a) cream-skimming (more generally) and b) negative selection (a more specific statistical issue). Let’s clarify these two, then relate them to KIPP, and then to the larger picture of charters. Cream-skimming speaks to the overall demography of charter enrollments relative to the traditional public schools that would have been attended by these students. That is, are the charter students as likely to be low income, homeless, or LEP, and, in relation to that, higher or lower performing? Most KIPP and many other charter studies find that the students are as likely to qualify for “free or reduced” lunch, but that they are much less likely to be LEP/ELL. Now, I take issue with the use of “free or reduced” because when I cut the data in NJ or NY, I find that charters serve FAR FEWER of those who qualify for free lunch. The vast majority of families in poor urban settings fall below the 185% income threshold… so using that measure doesn’t provide any meaningful differentiation. It hides cream-skimming that may exist. It’s a bogus comparison measure for catching charter cream-skimming in most poor urban contexts. In NJ and NY, charters – especially high flyers – serve far fewer very low income and homeless kids. That’s cream-skimming in general.

      Negative selection refers to the question of whether – among otherwise measurably comparable students – those who select charters are relative underperformers. This speaks to a potential statistical bias that would disadvantage charters in comparing performance levels if only controlling for demographics. That is, if I compared outcomes of charters and non-charters controlling for income status, LEP, etc., but couldn’t correct for the fact that charter entrants within each group were more likely to be underperformers. This is not a major finding of KIPP studies. But it is often used as a defense. I know you’re not using it that way. But that was my point initially – that it is used as a defense when there is little support for it.

      Now, making comparisons on the basis of “free or reduced lunch” combined introduces, at least in NY or NJ, a comparable statistical bias that favors charter schools, because it assumes that a charter’s 80% free/reduced population (of which 50% qualify for free lunch) is THE SAME as the TPS population, which may have 65% to 70% free lunch within its 80% total free or reduced price lunch. These are matters of statistical bias in the comparisons.

      See this post for examples of the extent of cream-skimming in NJ:

      See this report for the NYC demographic comparisons:

      See these posts for a discussion of the poverty measurement problem (this is a huge bias in most analyses):

    2. Hi Bruce and Stuart.

      I may have missed it in the extensive exchange above, but it seems the KIPP attrition bias issue is being framed in a way that obscures the key point. As Gary Miron has pointed out repeatedly, the evidence suggests that KIPP’s attrition does indeed mirror that of conventional schools serving comparable populations.

      However — and this is the key point — those conventional schools (i.e., neighborhood public schools) serve a revolving population of transient students, while KIPP generally does not. That is, we’re not really talking about an attrition bias issue, we’re talking about a ‘backfilling-bias issue’. Here’s how Gary Miron described the issue as regards the Mathematica report released a year ago:

      “…an initial analysis of the report by Professor Gary Miron of Western Michigan University concludes that this initial study report misrepresents the attrition data. According to Miron, ‘While it may be true that attrition rates for KIPP schools and surrounding districts are similar, there is a big difference: KIPP does not generally fill empty places with the weaker students who are moving from school to school. Traditional public schools must receive all students who wish to attend, so the lower-performing students leaving KIPP schools receive a place in those schools.’

      “In contrast, Miron explains, ‘The lower performing, transient students coming from traditional public schools are not given a place in KIPP, since those schools generally only take students in during the initial intake grade, whether this be 5th or 6th grade.'”

  9. Kevin —

    Whatever Miron said there has been trumped by the new Mathematica study on that very issue, which found as follows:

    “Pooling the results across all offered middle school grades, there is no systematic pattern of differences in the prevalence of late arrivals at KIPP schools and comparison district schools. The overall proportion of late arrivals, relative to total enrollment, varies greatly at both KIPP and district schools. The overall proportion within KIPP schools ranges from 0.03 to 0.30. There is a similar degree of variation in the overall proportion of late arrivals in the district comparison group, where the site-specific proportion of late arrivals ranges from 0.04 to 0.28. Averaging the results across sites, the overall proportion of late arrivals at KIPP (15 percent) is similar to the overall proportion in the district comparison group (14 percent).”

    1. I believe there are some important differences in the presentation of this information, but I’ll have to go back and look again. This is why, as you will see in my original post, I largely deferred to the follow up KIPP study (from this Spring), which provided more of the detail on the descriptives on types of mobility. But I think there’s still more to explore here.

    2. Hello again, Stuart.

      Thank you for pointing this out — I wasn’t aware of it.

      It will be interesting to see how this is presented by Mathematica when that working paper, presented in April at the AERA conference, is finalized. I’m glad those researchers are trying to seriously confront the issue.

      We should also note the research that Dynarski and Angrist have done looking at KIPP Lynn (MA), which seems to be doing good work.

      Just to be clear, I do hope that KIPP research shows success. If we can determine that approaches like extended learning time are effective, and then can figure out the additional resources needed and how to scale up a successful model, I think this would be a major advance. My concerns, based on research from Bruce and from Gary Miron, are that the KIPP approach is very limited in terms of scalability, is probably too expensive to get general buy-in for a scale up (among policy makers and taxpayers), and may not be showing the type of positive results that are generally attributed to it. I hope I’m wrong on all fronts.

      btw, Stuart, have you come across any information about how widespread the use of these KIPP placement tests might be? Obviously, even if the tests aren’t used as a concrete screening device, they would have the effect of generating an application bias. Was this Baltimore KIPP the only place this has been happening?

  10. I actually don’t know. I suspect that such a test might do more good than bad, though. What is a school supposed to do with disadvantaged kids who are grades behind? Blithely pass them along to classes that they can’t understand, or try to address deficiencies and get them caught up? But how is a school to learn exactly what the student does and doesn’t understand without a placement test?

    1. Richard Kahlenberg has a posting on the “Answer Sheet” WashPost blog that does a much better job than I did above in articulating both support for KIPP and skepticism about some of the empirical claims and the political/policy uses of those claims:
      Note that this Kahlenberg post follows earlier posts by him and by KIPP’s Ryan Hill. The entire exchange is worth reading.
      Regarding the ‘more good than bad’ assertion, there are two separate issues. One concerns policy, which is Stuart’s point above – that placement tests can be used to help educators understand a student’s strengths and weaknesses. I have my doubts that this KIPP test was used in a constructive way, but that’s certainly a possibility. But the other issue concerns research, which is why I brought up the tests. The concern here is simply that the use of those tests contributes to an unmeasured selection bias.
