Measuring Schools

I've been really enjoying the blogging of a guy named Matt DiCarlo. He blogs for the Shanker Institute. Good one here on charter school research, and another on Apollo 20 tutoring/turnaround.

And recently on measuring schools. That's what we'll look at today.

He writes:

Roughly speaking, in addition to the inevitable measurement error, a school’s absolute performance level reflects a combination of two factors:

Students’ performance levels upon entry into the school;

Their improvement while attending the school.

Schools cannot control the former (which students they serve). It varies widely, and schools only serve students for a few years at most. They can, however, control the latter (whether students improve while enrolled).

So, why not just use the latter – growth – directly? Why would we hold schools accountable for an outcome that is largely out of their hands, when we have the option of isolating (at least approximately) the portion that they actually can control?
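A toy illustration of his decomposition, with made-up numbers (the school names and scores below are invented, not real MCAS data): a school whose kids arrive scoring high and grow a little tops any absolute-score list, while a school whose kids arrive scoring low and grow a lot tops a growth list.

```python
# Toy numbers only -- invented for illustration, not real MCAS data.
# Rough idea: a school's absolute score ~ where its kids started + how much they grew.

schools = {
    "School A (kids arrive strong)": {"entry": 85, "growth": 5},
    "School B (kids arrive behind)": {"entry": 55, "growth": 15},
}

for name, s in schools.items():
    absolute = s["entry"] + s["growth"]
    print(f"{name}: entry {s['entry']}, growth {s['growth']}, absolute {absolute}")

# School A tops the absolute-score list (90 vs. 70), even though School B did
# more of the one thing a school controls: growth while the kids were enrolled.
```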

I agree with him. In Massachusetts, for example, the newspapers publish the absolute scores of schools. They've done that since MCAS began. Suburban superintendents like the absolute score list, by and large.

No Child Left Behind -- remember that law that Ted Kennedy and George Bush passed -- also requires states to publish "subgroup data." Black and Hispanic kids; poor kids; special ed kids; English Language Learners. How are all of those groups doing?

Some suburban superintendents up here hate that. What if all your white kids do well but your black kids do badly? That used to be swept under the rug. Now it's right there in the newspaper, on an "NCLB Report Card."

So the Massachusetts Association of Superintendents began lobbying the state to calculate MCAS growth scores, too. The idea was to show that the "subgroup kids" were making good progress, even if (of course) they had not reached the level of the middle class white kids.

Also, I'm told, they wanted to show that charter schools were not succeeding: they just took the good kids, and that's why they had high absolute scores. Growth data, they thought, would tell that tale.

The state agreed to the supes' wish. So in 2010, it began to publish MCAS Growth Data.

Despite that, test score growth -- the stuff a school controls -- does not get very much news media coverage.

Why?

A few reasons.

1. Most "ranking" lists are absolute. Baseball standings. Wealthiest people.

True dat, but some lists are about growth, not absolute levels. Stock prices, for example: the papers list "Big Gainers" and "Big Losers" on any given day or year. They never rank stocks by absolute price; otherwise Berkshire Hathaway would always be #1. This morning a single share cost $122,600.

2. Who is the audience?

Newspaper readers are more likely to be interested in how their suburban school stacks up against the nearby suburban school.

3. What does the growth story tell us?

Well, once the growth data came out, the supes went silent. Crickets. Why?

a. Charters tend to rank quite high on the "MCAS Growth" list. In every grade, every subject, every year. Here's an example (this one happens to be Grade 5).

Not what supes had in mind.

Why? Some of that is actual quality.

(The Boston-area charters in particular have done well; less so the suburban Massachusetts charters, as measured by MIT economists. Remember, across the USA, the label "charter school" on average does not correlate with "high quality." But in Boston, it does.)

Some of the over-representation is just Stats 101: if your organization is small (fewer total children take the test), it will be easier to land near the top or bottom of any particular list.

So look at the "Low Growth" list -- the bottom ten. Again, over-representation of charters.
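Here's a minimal sketch of that Stats 101 point in Python. The school counts, test-taker counts, and growth distribution are all invented for illustration, not pulled from MCAS: every school has the exact same "true" growth, and the small schools still crowd both ends of the ranking.

```python
# A minimal sketch, all numbers invented: every school has the SAME true growth,
# yet small schools crowd the top and bottom of the ranking from sampling noise alone.
import random

random.seed(0)
NUM_SMALL, NUM_BIG = 60, 300   # hypothetical counts of small and big schools
SMALL_N, BIG_N = 40, 400       # hypothetical test-takers per school

def mean_growth(n):
    # Each student's growth is drawn from the same distribution (mean 0, sd 15);
    # the school's "growth score" is just the average over its n test-takers.
    return sum(random.gauss(0, 15) for _ in range(n)) / n

schools = [("small", mean_growth(SMALL_N)) for _ in range(NUM_SMALL)] + \
          [("big", mean_growth(BIG_N)) for _ in range(NUM_BIG)]

schools.sort(key=lambda s: s[1], reverse=True)
top10, bottom10 = schools[:10], schools[-10:]

print("share of all schools that are small:", NUM_SMALL / (NUM_SMALL + NUM_BIG))
print("share of TOP 10 that are small:     ", sum(1 for kind, _ in top10 if kind == "small") / 10)
print("share of BOTTOM 10 that are small:  ", sum(1 for kind, _ in bottom10 if kind == "small") / 10)
```

With a tenth as many test-takers, a small school's average growth bounces around roughly three times as much (the standard error shrinks like 1/√n), so small schools land in both tails far more often than their share of all schools, even with no real quality difference.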

b. But there's a bigger issue than charters.

Several suburban districts with high absolute scores have merely average growth scores.

That may irritate the very suburban superintendents who had been clamoring for public growth scores. Per DiCarlo, if a top suburban district's high absolute MCAS scores are heavily the result of kids simply arriving at Grade K in good shape, that would rule out "brilliant superintendent" as the cause.

When the growth scores -- the ones DiCarlo explains are what the schools actually control -- are average, sometimes a superintendent cries foul.

For example, let's examine this news article from a tony Massachusetts suburb:

The Winchester School District again shows why it is regarded as one of the top districts in the state. The school district received a very high performance rating from the state.

Two weeks ago the state released the MCAS scores, and Winchester High School was the top school in the state on the English Language Arts exam, as measured by percentage of students who scored advanced.

All good. High absolute scores. We tout that too in our school.

Now let's look at another article, this one more probing:

“Winchester students are doing very well, certainly compared to the state averages,” said School Superintendent William McAlduff. “Our scoring trends, in almost every case, are mirroring the state level, but at a much higher rate.”

Now, if the reporter knew about growth scores, there'd be a logical follow-up question, because Winchester's growth is not at "a much higher rate."

Because of NCLB, however, the reporter was able to ask at least the "subgroup" question.

At the same time, Winchester struggled in meeting expectations for “adequate yearly progress” (AYP). The federal No Child Left Behind Act establishes guidelines for AYP, which requires schools to boost test scores each year.

Three Winchester schools—Ambrose and Lynch elementary schools, and McCall Middle School—failed to meet the state’s benchmarks for AYP. McAlduff said the failure wasn’t a poor reflection on the schools.

That is: it's not our fault if the poor/minority kids don't do well; it's not our schools' job, it's the parents' job.

Hmm. What if the supe had said this instead:

We're proud of our kids' absolute scores. Our parents send us well-educated kids; the teachers don't mess anything up, and they do a good job from there.

But the growth data and the subgroup data show we need to get better. And we will work hard to do so.

The 2011 growth data across our whole district has us at the 54th percentile in English growth, and the 48th percentile in math growth. Doesn't get much more average than that. We are safely average. We don't want to be average.

The subgroup data tells us three stories.

1. We haven't done a great job of helping the kids from poor families and other subgroups. They lag our other kids, by a lot.

Yet we only have about 20 such kids in any particular grade in the whole district. I.e., 19 poor kids in Grade 4, 23 in Grade 5, etc. Out of 300+ total kids per grade.

So... if each teacher took responsibility for tutoring just ONE kid, high-dosage style, we could probably get the poor kids to make huge progress and join their peers.

Moreover, if just the elementary teachers did this, kids from all income levels would arrive at the middle schools in decent shape.

2. The achievement gap in our schools does not close as kids spend more years with us. It seems to expand a bit, actually, the more time they spend in our schools. We should figure out why.

3. Our kids make slightly above average gains -- compared to other suburbs -- in English. But we make slightly below average gains in math. To improve, we plan to do X, Y, and Z.

Well, if he said that publicly, perhaps he'd stir up a hornet's nest. But then there would be opportunity....