Today's Puzzle: See If You Can Pick The Literacy Intervention That Works

There's a wonderful study in the new issue of the Journal of Research on Educational Effectiveness. What? You don't subscribe? You read TMZ online instead? Well let me help you out.* Oops. The study is behind a paywall. Luckily the US Government paid for this study, so an older version, from 2009, is here.

The study examines 4 reading interventions for 5th graders. All four were similar: each gave kids 30 to 45 minutes per day of extra reading instruction, teaching reading strategies...like how to summarize, how to identify text structure, etc.

The kids were mostly from high-poverty schools.

Which one do you think worked best?

1. Reading For Knowledge, which was derived from Success For All. It had more group work than the other 3.

2. ReadAbout. It alone used computers for part of the intervention.

3. CRISS and Read For Real. Only these two included the strategy of teaching kids to "activate prior knowledge."

* * *

Actually, all 4 failed. None helped kids.**

A few thoughts.

I'm going to channel a few edu-thinkers that I like, and guess what their reactions might be to a study like this one.

1. Robert Pondiscio of Core Knowledge might react this way:

Of course these interventions failed. The teachers explain "reading strategies" to kids, model those strategies, and then get kids to practice the strategies.

But teaching reading strategies doesn't work. Dan Willingham has speculated that a heavy focus on teaching these strategies may even harm kids. And the authors of this study seem to find exactly that: some negative results!

I don't understand why the study's authors don't explore that narrative...that these reading strategy interventions actually cause harm, rather than simply failing to help.

Instead, the authors go in a different direction. They speculate that perhaps the programs don't work because teachers aren't implementing the curriculum with fidelity. My organization, Core Knowledge, has a different belief: reading interventions need to increase kids' underlying knowledge of "real stuff" (sometimes blandly called "content", i.e., science, history, etc.).

2. Rick Hess of American Enterprise Institute might react this way:

Big Government -- the feds -- often tells teachers (and schools) what to do. Why butt in? They insist their rules come from "research." They know best.

But often the "research" is plain ol' wrong. It's hard to really know what works. This new JREE study examines four supposedly "research-proven" literacy interventions that many schools around the USA use...and it finds that none actually helped kids.

Why do I believe the JREE study, and not the research claims behind the interventions? The JREE research design is more careful. It uses the gold standard: a randomized design.

Let's zoom out. My belief is the feds should generally steer clear of telling teachers and schools what to do. How do the feds try to exert control? Title I is one big way. The feds give money -- often $1,000+ per "poor kid" per year -- to school districts. Fine. But the feds then make a ton of rules about how to spend the money. This constrains innovation.

My complex, nuanced reaction to Title I rules is here.

But the short version is that I think DC educrats telling local schools what to do -- that's generally dumb. Decentralize.
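(Stepping out of Rick's voice for a second: if you want intuition for why a randomized design earns that "gold standard" label, here's a toy simulation I cooked up. It has nothing to do with the study's actual data; it just shows how a program with zero effect can look great when kids self-select into it, and how a coin flip fixes that.)

```python
# A toy simulation (entirely made up, nothing to do with the study's data):
# a program with ZERO true effect. Motivated kids both opt in more often
# AND score higher anyway, so the naive enrolled-vs-not comparison is
# biased; coin-flip assignment recovers the truth.
import math
import random

random.seed(1)
N = 100_000

naive_treat, naive_ctrl = [], []
rand_treat, rand_ctrl = [], []

for _ in range(N):
    motivation = random.gauss(0, 1)                    # hidden confounder
    score = 50 + 5 * motivation + random.gauss(0, 5)   # program adds nothing

    # Self-selection: the more motivated, the likelier to enroll.
    if random.random() < 1 / (1 + math.exp(-motivation)):
        naive_treat.append(score)
    else:
        naive_ctrl.append(score)

    # Random assignment: a coin flip, independent of motivation.
    if random.random() < 0.5:
        rand_treat.append(score)
    else:
        rand_ctrl.append(score)

avg = lambda xs: sum(xs) / len(xs)
print("self-selected estimate:", avg(naive_treat) - avg(naive_ctrl))  # ~ +4 points: pure bias
print("randomized estimate:  ", avg(rand_treat) - avg(rand_ctrl))     # ~ 0: the truth
```

Run it and the self-selected comparison credits a do-nothing program with about 4 points; the coin flip correctly shows roughly zero.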

3. Harvard scholar Jon Fullerton might say:

Title I has a culture of compliance. Not results. Here's a paper I wrote about that with Dalia Hochman.

The culture of compliance means that many approved programs are quite similar -- that's the best way to "follow the rules." So if one program design fails, it's likely all will fail, because they're so similar.

Imagine instead that the 4 interventions were not driven by compliance. Imagine some crazy or merely "different" intervention ideas mixed in there. It'd be far more interesting to study 4 very different interventions, and it would increase the likelihood of a big success or a big failure, which is precisely how we learn "what works."

* * *

I'm friendly with Robert, Rick, and Jon. Let me emphasize again: this is just me imagining what they might say. I totally made up each quote, because I wanted to show you, dear readers, how various thinkers might process the results of this study.

4. And what do I think?

a. Kudos to the authors. The research design is brilliant. It's not their fault these interventions didn't help kids read better. My only quibble is I wish their study had a title like "Good Intentions, Bad Results."

b. I wish we had a lot more randomized studies with negative findings running as lead articles in journals. A lot of seemingly good ideas in K-12 don't help kids. We need to know which ones they are. Medicine does a better job of surfacing its negative findings.

c. Another randomized study of Project CRISS, from 2011, also showed no effect.

d. I wonder if real-life teachers get to read this cautionary study before they're told to do these programs.

I.e., these programs still exist, or seem to, judging from some random Googling I did. Teachers are told to do them. At the very least, there should be disclosure, right? And ideally, there would be an easy opt-out for teachers.

Same question with parents. Maybe: "Dear Parent, we provide an after-school reading program. Some research found these programs didn't work in other schools. So we're giving you a heads-up. Here's why we think that those studies are wrong, and that this stuff DOES work. But if you want your kids to get literacy help in another format, we'll make it happen."

* * * * *

*As always, there's a 20+% chance I'm totally misunderstanding the study. Actually 25%: haven't had coffee today. Caveat Emptor.

**In the 2012 version of this study in JREE, which is a rewrite of the 2009 study with more data, there was a sliver of a positive finding for ReadAbout.

I won't bore you with the asterisks: but even with ReadAbout, only 1 out of the 3 different types of reading tests showed a boost in scores...and even that was only among teachers in their second year of providing the intervention. So while someone could reasonably quibble with my statement "None helped kids," I think the preponderance of the evidence still lies with "none."
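(For the statistically curious: here's a second toy simulation, entirely my own sketch with made-up numbers, of why one significant slice out of several tests and subgroups is weak evidence. Pretend a program has zero true effect, check it on 6 test-by-subgroup slices -- say, 3 reading tests crossed with 2 teacher-experience groups -- and see how often at least one slice looks "significant" by chance.)

```python
# A toy simulation (my own sketch, not the study's data): a program with
# ZERO true effect, checked on 6 test-by-subgroup slices. How often does
# at least one slice look "significant" at p < 0.05 purely by chance?
import random
import statistics

def chance_of_false_win(n=60, trials=2000, slices=6):
    hits = 0
    for _ in range(trials):
        found_one = False
        for _ in range(slices):
            treat = [random.gauss(0, 1) for _ in range(n)]
            control = [random.gauss(0, 1) for _ in range(n)]
            diff = statistics.mean(treat) - statistics.mean(control)
            se = (statistics.variance(treat) / n
                  + statistics.variance(control) / n) ** 0.5
            if abs(diff / se) > 1.96:  # roughly p < 0.05, two-sided
                found_one = True
        if found_one:
            hits += 1
    return hits / trials

random.seed(0)
print(chance_of_false_win())  # ~0.26: about 1 in 4 null programs "work" somewhere
```

With 6 independent looks at the data, a program that does nothing "works somewhere" about a quarter of the time. That's why I'm comfortable sticking with "none."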