Blog: The Educated Reporter

Is John Smith the worst teacher in Los Angeles?

I feel really bad for John Smith. Smith was the subject of an extraordinary Los Angeles Times article Saturday, the first piece of a big database project that uses student test score data to rate teacher effectiveness. Smith, according to the Times, was one of the least effective elementary school teachers in LAUSD.

“Who cares about John Smith?” I can hear a lot of you saying. “I feel bad for his students.” I get that. I do too. And I am thrilled that the Times is devoting so many resources to the issue of teacher effectiveness. Value-added measures are coming to a school near you, if they have not already, and deep journalistic study of the issues involved at the classroom level is rare if not nonexistent. On a policy level, it has always seemed the height of crazy, as the piece puts it, that districts “act as though one teacher is as good as another.” Challenging the primacy of collegiality over quality—yes, this is a dichotomy in school culture, though it shouldn’t be—is overdue. The reporters on this piece are talented, and I am looking forward to their future stories on the database, as I am confident they will be telling, provocative and important.

So why do I feel bad for John Smith, and relieved that his anonymous name* probably spares him from a fair bit of e-hate in his inbox? Most of all, because this may well be the first time anyone has told him his skills are lacking. Your students perform poorly under you for years, and the first indication you get comes with your photo on the front page of the Los Angeles Times? Ouch.

My other concern is more practical. The article sums up the criticisms of value-added measures of teacher effectiveness roughly as follows: the tests are flawed; they cannot measure the intangibles of teaching; and what do statisticians know about teaching, anyway? This felt, to me, too dismissive of the nuances of value-added and standardized testing, the logistical complications, the legitimate shortcomings. The strengths, of course, were made plain.

You can read a summary of the study Richard Buddin of Rand Corp. did for the Times here. I left my undergraduate math major once my classes stopped containing actual numbers, so I do not understand all the formulas. I do know that the report does not seem to address student mobility, student absences, co-teaching, pullout interventions and other workaday factors that potentially complicate value-added estimates. It suggests that a given teacher's effectiveness is best studied over several years, yet the Times article seems to be based on one-year measures. These considerations don't negate the value of the measure, but they make for important context.
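For readers who want a concrete picture of what a value-added estimate even is, here is a bare-bones sketch of the general idea: predict each student's score from prior achievement, then credit (or debit) the teacher with the average amount by which his or her students beat that prediction. The numbers and the simple prior-score regression below are made up for illustration; this is not the Rand model the Times used.

```python
# Toy illustration of the general idea behind a "value-added" estimate.
# The data and the one-variable regression are hypothetical; the Rand study
# uses a far more elaborate model.
import numpy as np

# Hypothetical data: last year's and this year's test scores, plus teacher IDs.
prior   = np.array([310.0, 325.0, 298.0, 340.0, 305.0, 330.0, 290.0, 315.0])
current = np.array([320.0, 340.0, 300.0, 355.0, 302.0, 328.0, 285.0, 312.0])
teacher = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Predict this year's score from last year's with a simple linear fit.
slope, intercept = np.polyfit(prior, current, 1)
predicted = intercept + slope * prior

# A student's residual is how far he or she landed above or below expectation.
residual = current - predicted

# A teacher's "value-added" here is just the average residual of that teacher's students.
for t in np.unique(teacher):
    print(f"Teacher {t}: value-added estimate {residual[teacher == t].mean():+.1f} points")
```

Even this toy version hints at why the factors above matter: students who move, miss school or get pullout help will land above or below expectation for reasons that have nothing to do with their teacher.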

I assume—I hope—these nuances will be addressed in later stories. Teachers are given the chance to comment on their value-added scores in the Times database, but that is not the same as reporters using their skills and authority to explain what these numbers (the students’ scores, the teachers’ scores) do and do not show us. We need transparency all along the pipeline, not just at the end.

*A few people have asked me why I thought John Smith was a pseudonym. I didn’t mean that. I meant that such a name is so common people might have a hard time tracking him down online.


