An Illustrative Case of the Numbskullery of Evaluating Teacher Preparation by Student Growth Scores


Assumption:  A good teacher preparation program is one that produces teachers whose students achieve high test score gains

Relay Graduate School of Education is housed in North Star Academy in Newark, and its course modules are largely taught by relatively inexperienced “champion” teachers from within its own network (including from the school itself). The program is designed to train its own future teachers [and others in the network] – and to actually credential them (and grant them graduate degrees) in the specific methods used in their school(s).

Put simply, Relay GSE uses relatively inexperienced teachers to grant degrees to their own new colleagues, where those colleagues may be required by the school to gain those credentials in order to retain employment. No conflict of interest here? But I digress. Back to the point.

Their modules, as shown on the Relay website, are, in their best light, little more than mindless professional development in classroom management and discussions of inspirational books by school founders, led by “champion” teachers. Hardly the stuff of legitimate graduate work in any field. But again, I digress.

Relay GSE will likely place a significant number of its graduates in its own school (or in network).

North Star Academy has pretty good growth scores, by the (bogus) New Jersey growth metric.

Therefore, not only is North Star Academy totally awesome, but Relay GSE must be an outstanding teacher preparation institution! It’s just that simple. They must be offering that secret sauce of teaching pedagogy that we should all be looking to as a model. Right?

Setting aside that the New Jersey growth scores themselves are suspect, and that the endeavor of linking teacher preparation program effectiveness to such measures is completely invalid, what the current approach fails to recognize is that North Star Academy actually retains less than 50% of any given 5th grade cohort through 12th grade, and far fewer than that among black boys. The school loses the vast majority of its black boys, and for the few students who remain, their growth scores – likely influenced as much by the dwindling peer group composition among those left behind as by “teacher” effects – are pretty good.
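The selection effect at work here can be sketched with a toy simulation (hypothetical numbers, not actual North Star data): when attrition disproportionately removes lower-scoring students, the surviving cohort’s average rises even though no individual student’s score changed at all.

```python
# Toy illustration of survivorship bias in cohort averages.
# Hypothetical scores for a 50-student 5th grade cohort (51..100).
scores = list(range(51, 101))
full_mean = sum(scores) / len(scores)  # average before any attrition

# Suppose attrition removes the lower-scoring half of the cohort.
survivors = sorted(scores)[len(scores) // 2:]
survivor_mean = sum(survivors) / len(survivors)

print(full_mean)      # 75.5
print(survivor_mean)  # 88.0 – looks like "growth" with zero actual learning
```

The survivors’ average jumps from 75.5 to 88.0 without a single student improving – exactly the kind of artifact that attrition can bake into school-level score comparisons.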

But is a school really successful if 50 students enter 5th grade, 1/3 are gone by 8th grade, and only a handful ever graduate?

Is this any indication of the quality of the teaching or pedagogy involved? I won’t go so far as to suggest that what I personally might perceive as offensive, demeaning pedagogy is driving these attrition rates (okay… maybe I just did).

But, at the very least, I might argue that a school that loses over half its kids from grade 5 to 12 is a failing school, not an outstanding one. Whether that has any implications for labeling their teachers as “failing” and their preparation programs as “failing” is another question entirely.

It is quite simply completely and utterly ridiculous to suggest that Relay GSE is an outstanding graduate school of education as a function of measured test score gains of the few students who might stick around to take the tests in subsequent years.

No secret sauce here… just a boatload of bogus policy assumptions creating perverse incentives and taking our education system even further in the wrong direction.

Notably, this does not prove it’s a bad or awful grad school of education either (see their videos, and read the reports here, for evidence of that).

My point here is that this particular case – or what it has the potential to become – is wonderfully (in a twisted way) illustrative of the numbskullery that pervades public education policy, from K-12 school accountability metrics to proposals for “improving” teacher preparation.

This foolishness must stop.

3 thoughts on “An Illustrative Case of the Numbskullery of Evaluating Teacher Preparation by Student Growth Scores”

  1. “I might argue that a school that loses over half its kids from grade 5 to 12 is a failing school, not an outstanding one. ” This. If I were incredibly unethical and wanted high class averages, all I would have to do is make it so difficult for struggling students to succeed that they would give up. If I create a situation where the bottom 1/3 drop the course, my average goes up and if that’s what people look at they think I’m great. That’s the OPPOSITE of what teachers should be doing! This is (one of) the dangers of using student scores as a metric for teacher evaluation or teacher prep program evaluation.

  2. This is a horrible way to prepare teachers and it really disturbs me that anyone from traditional Teacher Ed would condone it. Relay and other similar teacher training programs, like Match and ABCTE, were not amongst those that NCTQ evaluated, and now you know why. They were compelled to trash Ed Schools in order to make way for the new, preferred teacher prep programs of corporate “reformers,” which train people to use ONE pedagogical approach, that of the military-style drill sergeant teacher, as implemented at KIPP et al. This is like novice TFA “teachers” being in charge of training future teachers. It sends chills up my spine.

    In EVERY case of what passes for education “reform” today – from increased high-stakes testing and data sharing to the privatization of public education, the de-professionalization of teaching, the expansion of charter schools, TFA, and these sub-par, one-dimensional teacher training programs – ALWAYS investigate the profit motive and follow the money! You are most likely to find that politicians and the super-rich, backed by corporations and organizations classified as both for-profits and non-profits, are the wizards pulling the strings and leveraging their power behind the curtain in order to increase profits. Again, always ask what kind of financial pay-off is likely to result, and FOLLOW THE MONEY! (Yes, there IS a profit motive here as well.)
