Monday, April 18, 2011

What dead quantum mechanics cats can teach us about teacher evaluations

One of my favorite books is Moneyball by Michael Lewis. In it, Lewis profiles Oakland A's general manager Billy Beane's methods of scouting players, composing a major league roster, and drafting new players for the minor league farm system. Working in one of baseball's smallest television markets and with one of its smallest budgets, Beane has pretty consistently put together teams that are very competitive with large-market rivals like the New York Yankees. They don't win championships at the same rate, but they are one of the standards for how to run a small business in a big-business world.

The chief way Beane did this was to quantify baseball in a statistically relevant way. Ironically, though baseball produced more statistics than any other sport (statistics that geeks like me memorized as kids), most of the numbers we memorized had little to do with predicting future success or measuring a player's true, statistically valid impact on the game.
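The post doesn't name a specific statistic, but the best-known example from the book is the contrast between batting average (which ignores walks) and on-base percentage, which sabermetric work showed tracks run production much better. A minimal sketch, with two invented players who look identical by the old metric:

```python
# Illustrative sketch (players and numbers invented, not from the post):
# batting average ignores walks, so two very different hitters can look
# identical; on-base percentage (OBP) separates them.

def batting_average(hits, at_bats):
    # Traditional stat: hits per at-bat.
    return hits / at_bats

def on_base_percentage(hits, walks, hbp, at_bats, sac_flies):
    # Official OBP formula: (H + BB + HBP) / (AB + BB + HBP + SF).
    return (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)

# Two hypothetical players with identical batting averages:
patient = dict(hits=150, walks=90, hbp=5, at_bats=500, sac_flies=5)
free_swinger = dict(hits=150, walks=20, hbp=2, at_bats=500, sac_flies=3)

print(round(batting_average(150, 500), 3))             # 0.3 for both
print(round(on_base_percentage(**patient), 3))         # 0.408
print(round(on_base_percentage(**free_swinger), 3))    # 0.328
```

Same batting average, wildly different value to an offense: exactly the kind of gap the old scouting numbers hid.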

Like anyone who wants to do something new, Beane met resistance from lots of veterans who said "we've always done it this way." But his success validated his change in thinking and influenced many teams to adopt what have become known as "moneyball" strategies. Combine those strategies with actual money and you get the Boston Red Sox of the past 8ish years.

Anyway, I was so taken with Beane and his assistants' application of rigorous statistical analysis to a seemingly complex sea of numbers and variables that, like many who paid him to speak to their companies, I started thinking that we just need better analysis and that more things can certainly be quantified. Moneyball wasn't about baseball; it was about re-thinking, and about the power of math and education. In a weird way, I got so drunk on moneyball that I started half-equating climate change deniers and Creationists with those old baseball scouts who just wanted to keep doing things the way they'd always done them.

When the L.A. Times published an investigative series about using value-added analysis (my short definition: seeing how much students improved from year to year on standardized tests), I was so impressed by its apparent rigor that I started coming around to believing that this type of analysis should be PART (about 20 percent, maybe 33 percent) of teacher evaluations. Sadly for the L.A. Times, its methods have since been called into question by one of the very people it quoted in the story as backing up the Times' numbers and conclusions.

What does this have to do with dead cats and quantum mechanics?

Enter this article in The Economist, which basically says that what many now see as the MBA-ization of education has historical roots in failure. The piece, by an author The Economist doesn't name (dammit, Economist, be better than that), notes that a couple of centuries ago some Germans wanted better timber yields from their forests, so they took some rudimentary measurements and eventually planted one species exclusively, in very dense rows. At first it was a huge success, but then came the disaster of consequences that were unforeseeable at the time, given how little was then understood about soil microbiology: high susceptibility to disease and depleted soil nutrients, because a single kind of tree draws on the same resources over and over.

But the greater premise, which the above example merely illustrates, is that efforts to "read" a population consequently beget efforts to transform that population to make it more "readable."

Finally, here comes the dead quantum mechanics cat. In 1935, Austrian physicist Erwin Schrödinger developed a paradoxical thought experiment: put a cat in a box with a radioactive substance that has a 50-50 chance of decaying within an hour, and after that hour the cat could be dead or alive, or is in some sense both dead and alive. Ultimately he wasn't arguing that the cat really is both simultaneously, merely that one's conclusions depend on observation.

The Economist notes: Daniel Koretz, the Harvard education professor recognized as the country's leading expert on academic testing, writes in his book Measuring Up that Campbell's Law is especially applicable to education; there is a preponderance of evidence showing that high-stakes tests lead to a narrowed curriculum, score inflation, and even outright cheating among those tasked with scoring exams.
So by trying to measure a teacher's competence with students' standardized tests, how can we not be redefining the very nature of teaching into something that no longer measures competence at inspiring students to explore, drilling cogent facts into them, encouraging ethical intellectual behavior, and demonstrating the importance of common sense?

The Economist piece closes by citing examples from South Korea and Finland (a country lauded in Waiting for Superman, though sadly the documentary doesn't advocate Finland's most important lessons; instead it pushes testing!); these countries "[rely] more on systems of peer review and intensive comment and training from in-school 'master teachers', as well as making teachers' jobs involve much more time planning their lessons in groups with other adults."

I agree with the blogger who posted this on The Economist website. The common sense of using master teachers to do more in-service training is so obvious that it almost hurts. I haven't totally given up on a form of value-added analysis being incorporated into evaluations but I'm thinking much more like 10 percent. If we're sincere that children are our future and our most precious resource, then we need to invest in them with hard money. That doesn't mean using MBA cost-cutting and efficiency findings. It means hiring more teachers and aides and yes, perhaps even more assistant principals to serve as these "master teachers." This also would create additional middle class professional jobs, and the people who fill those jobs would spend money ... and create tax revenues and ...

My favorite thing about the Internet is the availability of information, specifically information that makes me rethink important topics. And this piece in The Economist has certainly got me doing that.
