Nate Silver, 34 out of 50

Andrew Mooney is a Harvard student. He wrote a great blog post today:

For one small subcommunity of America, the man who benefited the most from the country’s decisions at the polls on Tuesday was not Barack Obama—it was Nate Silver, statistician and creator of the FiveThirtyEight blog on the New York Times' website.

Based on current election returns, Silver correctly predicted the outcomes of all 50 states, with the result in Florida still pending. Given his track record—he got 49 out of 50 right in 2008—Silver appears to have ushered in a new level of credibility for statistical analysis in politics.

Not so fast, says Mooney.

But there may be a better way of evaluating Silver’s predictions than a binary right-wrong analysis....Using this methodology (examining margin of error), Silver’s record looks a lot less clean. The actual election results in 16 states fell outside the margin of error Silver allotted himself in his projections, reducing his total to 34-for-50, or 68 percent.

He was furthest off in Mississippi, which wasn’t nearly as lopsided as he predicted, and West Virginia, which voted more Republican than expected. Of course, Silver was still within two percent on 19 states, an impressive feat in itself.
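To make Mooney's scoring rule concrete, here is a minimal sketch of it in Python. The numbers are invented for illustration; they are not Silver's actual projections or the real state results. A state counts as a "hit" only if the actual margin lands within the projected margin plus or minus the stated margin of error.

```python
# Sketch of the "margin of error" scoring Mooney describes.
# A prediction counts as a hit only if the actual margin falls inside
# predicted_margin +/- margin_of_error.
# All numbers below are made up for illustration; they are not
# Silver's 2012 projections or the real state results.

predictions = [
    # (state, predicted_margin, margin_of_error, actual_margin)
    # positive margin = Obama lead, negative = Romney lead
    ("State A", +2.5, 3.0, +0.9),   # inside the interval  -> hit
    ("State B", -7.0, 3.5, -11.8),  # outside the interval -> miss
    ("State C", +4.0, 2.0, +5.5),   # inside the interval  -> hit
]

hits = sum(
    1
    for _, predicted, moe, actual in predictions
    if abs(actual - predicted) <= moe
)

print(f"{hits} of {len(predictions)} states within the margin of error")
```

Applied to all 50 states, that stricter test is what drops Silver from 50-for-50 to 34-for-50.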

Big Picture: Geeks Versus "Experts"

Moneyball is the mostly real-life story of Geek versus Expert. A baseball general manager (Billy Beane, played by Brad Pitt in the movie) relies on experts -- tobacco-chewing scouts who've been in the game their whole lives, guys like Grady Fuson.

Beane hires a geek (Paul DePodesta, fictionalized as a chubby recent Yale grad and played by Jonah Hill). New decisions are made.

Experts get angry. They've often spent a lifetime acquiring experience. "Feel." How dare the young whippersnapper challenge their author-it-ay? Beane fires Fuson.

(FYI: the firing was a Hollywood flourish. The real-life story is here.)

The A's become a good team with ballplayers that experts didn't value, hidden gems only revealed by numbers.

* * *

That story just played out again on Tuesday, on a different ballfield. Karl Rove, George Will, Dick Morris, Michael Barone, and Peggy Noonan all predicted a comfortable Romney win -- some even a landslide -- right before the election. They were spectacularly wrong.

Silver and a bunch of other geeks, like Mark Blumenthal at Pollster.com and Sam Wang at Princeton, largely got the story right.

As I observed last week, one reason some experts err is they choose to live in a hyper-partisan bubble. They purposefully avoid data that tells a story they don't want to hear.

All the geeks did was work with public polls that were easily available on the Internet. Yet conservative media often simply played up "Romney ahead" polls and didn't mention "Obama ahead" polls.

* * *

K-12 has its own Nate Silvers. Two I know are Tom Kane and Roland Fryer. Those guys were never schoolteachers. They look at numbers. Then sometimes they say things that make the experts in our field quite angry.

For example, Tom Kane showed that expert observers -- whether principals or teachers, any veteran who watches a class and then fills out a scoring rubric -- are not very good at predicting which teachers help kids make the most growth. The eyeball test does not work well. Which is troubling, because watching classes remains most of what we do to evaluate teachers.
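To picture the kind of check Kane's work involves, here is a toy sketch: score a handful of teachers on an observation rubric, then ask how well those scores line up with the growth their students later show. Every number here is invented for illustration; it is not Kane's data or method, just the shape of the comparison.

```python
# Toy comparison of observation scores vs. later student growth.
# All numbers are invented for illustration.
import statistics  # statistics.correlation requires Python 3.10+

# rubric score from a classroom observation (1-5 scale), one per teacher
observer_scores = [4.5, 3.0, 4.0, 2.5, 3.5, 4.0, 3.0, 2.0]

# subsequent student growth for the same teachers (in test-score units)
student_growth = [0.05, 0.20, -0.10, 0.15, 0.00, 0.10, -0.05, 0.25]

# Pearson correlation: how well do the eyeball scores track the growth?
r = statistics.correlation(observer_scores, student_growth)
print(f"correlation between observation scores and growth: {r:.2f}")
```

A low correlation in a comparison like this is what "the eyeball test does not work well" means in practice.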

Meanwhile, sometimes data allows individuals to do their jobs better. Case in point: Ross Trudeau's blog post here.

Big Picture: Geeks are on the rise. They will continue to battle experts, but increasingly decision-makers will listen to geeks.

But beneath that Big Picture, there are 2 Big Caveats.

1. What Andrew Mooney writes about political geeks also applies to K-12 geeks:

The takeaway here is that, while Silver’s work the last four years has been impressive, he is not a mysterious wizard—for example, both the Huffington Post and Princeton’s Sam Wang had similarly accurate results. He is also not infallible, and he would be the first to admit it.

Forecasting is never an area where we should expect 100 percent accuracy, and though Silver’s work is bringing a lot of positive attention to statistical analysis in general, it’s important that people keep their expectations of its applications realistic.

2. The Bigger Caveat, in my opinion: Silver, Bill James, et al. got to where they are because they are the best geeks. Mooney's point applies to the limits of even the best quants.

There are many geeks who aren't very good at their work, by any definition. They crunch numbers, but they mess up some key thinking.

And I worry about that in the K-12 field. So-so and bad geeks proliferate.

* * *

To sum up:

1. Increasingly, K-12 decision-makers are open to "using numbers" and the geeks who crunch 'em. They're not reflexively anti-geek and pro-"expert." Which is good. If they hire the best geeks and understand the limits of geekery, kids will be better off.

2. However, K-12 decision-makers often hire mediocre geeks, or worse. So the result is not Moneyball, not stats-based breakthroughs that help the kiddos learn or that help teachers improve. Instead, nothing improves over the so-so or bad "expert" decision-making that has long been in place. And sometimes things get worse.