What We Can Learn From The Celtics' Failed Trade

The Boston Celtics lost tonight. Until mid-season they were the top team in the East. What happened? The Celtics made a trade. At the time of the trade, my favorite NBA commentator, David Berri, imagined the following conversation:

Celtics General Manager Danny Ainge: Sam, what can I do for you?

Thunder General Manager Sam Presti: Danny, how about you take the two worst players on my team? And in return give me a big man that can help me contend for a title?

That's in fact what happened. The Celtics traded away a slightly above average center, Kendrick Perkins. In return, the main attraction was supposed to be Jeff Green.

Berri has long held a simple but profound notion about the NBA: people overvalue scorers and undervalue nearly everything else. And not just fans -- even famous general managers like Danny Ainge, who has spent his life playing, coaching, and now evaluating the game.

Berri is an economist and a fan. He has zero special knowledge of the game of basketball. Instead, he has complicated mathematical regressions. The formulas tell him things that experts may overlook. In this case, he knew that Jeff Green, the guy the Celtics got, was a terrible rebounder for a man of his size (6'9"). He is also a very inefficient scorer. As a result, he harms his team every minute he plays.
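The mechanics behind that kind of claim are just linear regression: estimate how much each box-score statistic contributes to winning, then score every player by those weights. Here is a minimal sketch of the idea. Every number below is invented purely for illustration -- this is not Berri's actual model or real NBA data -- but it shows how a fitted regression can reveal that raw scoring matters far less than efficiency and rebounding.

```python
import numpy as np

# Made-up per-48-minute stats for five hypothetical players:
#                  points  FG%   rebounds
stats = np.array([
    [28.0, 0.42,  4.0],   # high-volume, inefficient scorer
    [15.0, 0.55, 11.0],   # efficient big man who rebounds
    [20.0, 0.50,  8.0],
    [12.0, 0.48, 12.0],
    [25.0, 0.44,  5.0],
])

# Invented "true" weights (intercept, points, FG%, rebounds) used to
# generate the outcome, so the regression has something to recover.
# Note the tiny weight on raw points relative to FG% and rebounds.
true_beta = np.array([-0.10, 0.0005, 0.40, 0.01])

# Design matrix with an intercept column, and the win contribution
# each player "produces" under the invented weights.
X = np.column_stack([np.ones(len(stats)), stats])
wins = X @ true_beta

# Ordinary least squares recovers the weights from the data.
beta, *_ = np.linalg.lstsq(X, wins, rcond=None)
intercept, b_points, b_fgpct, b_rebounds = beta

print("points coefficient:  ", b_points)
print("FG% coefficient:     ", b_fgpct)
print("rebounds coefficient:", b_rebounds)
```

In this toy world, a player who piles up points while shooting poorly and ignoring the glass grades out as a drag on his team, no matter what the scoring column says -- which is exactly the kind of conclusion the regression reaches about a player like Green.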

I've written previously about the MET project, funded by the Gates Foundation. The early finding is that the popular teacher evaluation rubrics -- the forms filled out by many principals and other evaluators -- seem to be overlooking something. We just don't know what.

These tools, when used by trained observers, fail to predict student learning very well, as measured by student test score gains. These evaluation tools are like Danny Ainge's eye.

The problem is we don't have a David Berri yet. We do not have a much better way of evaluating teachers -- such that we can do a decent job of predicting the future (how much each teacher's kids will learn, as measured by VAM).

In fact, the early MET data, released to the public a few months ago, seemed to show that Harvard political scientist Ron Ferguson was closer to being a David Berri than any of the teaching experts. His approach is never to observe the teacher at all. Instead, he surveys the kids.

That is, if instead of giving an expert one of these teacher evaluation forms, you give the kids a survey called Tripod, you are more likely to be able to predict the learning gains of that teacher's students.
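"More likely to be able to predict" boils down to a simple comparison: which instrument's scores correlate better with measured learning gains? A sketch of that comparison, with entirely fabricated numbers for eight hypothetical teachers (this is not the MET data, only an illustration of the test being run):

```python
import numpy as np

# Invented value-added learning gains for eight hypothetical teachers.
gains  = np.array([0.10, 0.25, 0.05, 0.30, 0.15, 0.40, 0.20, 0.35])

# Invented expert-rubric observation scores (only loosely related to gains)
# and invented student-survey scores (constructed to track gains closely).
rubric = np.array([3.0, 3.2, 2.9, 3.1, 3.3, 3.2, 2.8, 3.4])
tripod = np.array([3.0, 3.6, 2.8, 3.9, 3.2, 4.3, 3.4, 4.1])

def pearson(x, y):
    """Pearson correlation coefficient between two score vectors."""
    return float(np.corrcoef(x, y)[0, 1])

print("rubric vs gains:", round(pearson(rubric, gains), 2))
print("tripod vs gains:", round(pearson(tripod, gains), 2))
```

In this made-up example the survey scores correlate far more strongly with gains than the expert rubric does -- the same shape of result the early MET release suggested for Tripod.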

My belief is we need to open up the teacher evaluation challenge to the "hive mind" -- tens of thousands of curious people around the world who have zero expertise in teaching, and therefore may be able to see things that experts cannot.

I will explain my idea in a future post. It involves putting lots of teacher video on the web -- but only with the explicit permission of the teacher and kids for each clip.

Essentially, we'd create a game that allows anyone to try to predict student learning by watching teachers in action.

My belief is some new insights would emerge that move our field forward. Whether those insights will come from a retired calc teacher in Omaha, an off-Broadway actor who knows something about performance, a psych grad student in India, or a security guard at a pork and beans cannery, I have no idea.

But I'd bet a lot that someone would move us forward if we made the raw video easy to play with and think about.