"Deeper Learning" and Ed Tech

When I get home from Match, my wife and I review our kids' adventures and misadventures from that day. But in the old days, pre-kids, we compared notes from work. Since she treats cancer for a living, I sometimes compare her field to K-12. There are similarities.

For example, people want a "silver bullet" solution to cancer. Smart scientists often dream up such approaches. Often there is a surge of optimism. But the cures don't pan out. Sometimes, instead, they turn out to be a small part of a larger solution. Sometimes they're no good at all.

Sound like K-12?

There are differences, too.

One is that cancer has a precise vocabulary that is widely used.

Here's an example. Cancer is often described as having 5 stages (0 through 4). Pretty much everyone agrees on this terminology.

The language makes it easier for doctors and nurses and patients to talk and work together. When scientists come up with treatments, they often find them to be effective for cancers only in certain stages. So when they tell doctors: "treatment only effective for X cancer in stage two," everybody knows what that means.

Our sector talks a lot about "Deeper Learning." Or "Higher-Order Skills."

But what does that mean? There's not a commonly-accepted terminology or taxonomy. Instead, there are tons of competing terms and ladders.

In math, for example, here's the language the US Gov't uses for the NAEP test: low, moderate, and high complexity. I suppose they might characterize the "high" as "deeper learning."

Here's Costa's approach, a different set of 3 levels: text explicit, text implicit, and activating prior knowledge. Again, perhaps the last is "deeper learning."

Here's another take, general rather than math-specific, from the Hewlett Foundation.

Software like MathScore has its own complexity ratings.

And so on. You could find 10 more in 10 minutes of Googling.

This lack of common vocabulary, I believe, makes life harder for teachers. They get bombarded all the time with new products, websites, and software that all claim they can get students to "deeper learning." But without a common understanding of what actually qualifies, it's hard to know whether X even purports to get your kids where you want them to go.

* * *

Let's explore this a bit more.

Third grade is the first year that Massachusetts kids take a state exam, the MCAS.

Here is what a 3rd grade teacher is up against.

There are many third graders in Massachusetts who can’t answer questions like this:

What is perimeter?

There are actually many 3rd graders who can’t do this:

What is 4 + 5 + 3 + 6 + 8?

or

What is 7 times 8?

How do I know this? Because for years Match High School has served incoming 9th graders who arrived unable to solve 7 times 8.

Unfortunately for 3rd grade teachers in Massachusetts, the MCAS doesn't include any questions at this level of difficulty. MCAS question writers love them some "higher order" questions. Harder questions.

So -- keep in mind the underlying knowledge that kids lack -- they ask questions like this one (#8 from last year's MCAS): a rectangle with only some of its side lengths labeled, where kids must find the perimeter.

To answer it, you need to know 4 things. First, you need to know what perimeter means. Second, you need to know that you have to fill in the "missing sides." Third, you need to know what to fill in, which requires understanding "rectangle." Finally, you need to add those 4 numbers. If you only understand 3 of the 4 ideas, you'll get the question wrong.
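To make that concrete, here's a hypothetical item in the same style -- made-up numbers, not the actual MCAS question -- with the four steps sketched out in code:

```python
# Hypothetical rectangle in the style described above: only the length
# and width are labeled. Made-up numbers, not the actual MCAS item.
labeled_length = 8
labeled_width = 5

# Steps 2 and 3: "fill in the missing sides." You only know what to
# fill in if you understand that a rectangle's opposite sides are equal.
sides = [labeled_length, labeled_width, labeled_length, labeled_width]

# Steps 1 and 4: perimeter is the distance all the way around, so you
# add all four sides.
perimeter = sum(sides)
print(perimeter)  # 8 + 5 + 8 + 5 = 26
```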

Does this question probe "deeper learning" for a 3rd grader? Who the heck knows.

In Costa's framework, the question does require "activating prior knowledge."

In NAEP's framework, feels like the "moderate" category.

In Hewlett's framework, not deep -- not a novel or real-world situation.

So what’s the problem here?

From a teacher's point of view, you're often told to teach "deeper knowledge." "Raise the level of complexity." Etc.

In practical terms, since you're being held accountable for kids learning perimeter to a particular level, you probably want to address a kid’s needs “in order.”

a. Can’t add? Let’s fix that.

b. Lack underlying knowledge about shapes? Let's fix that.

c. Don’t know what a perimeter is? Let’s teach that.

d. Don’t know how to solve straightforward "find the perimeter" problems? Let’s teach that.

All of that stuff precedes a kid being able to solve that MCAS question.

For many teachers, the time it'd realistically take to cover all the topics that "should have been learned by second grade but were NOT learned by most kids," plus the "new stuff you're supposed to learn in Grade 3"... well, that long list does not line up with the 140 or so hour-long math lessons you have from September to early May, when the test is given. You can't simply "teach the needed topics in order."

Hence a teacher might want help from software.
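What might that help look like? Here's a minimal sketch of the "in order" triage from the list above, assuming a hypothetical placement check. This is illustrative only -- not how MathScore or any real product actually works:

```python
# Hypothetical "shallow to deep" skill ladder, mirroring the a/b/c/d
# list above. Illustrative only -- not any real product's logic.
SKILL_LADDER = [
    "multi-number addition",      # a. Can't add? Fix that first.
    "properties of shapes",       # b. Shaky on what a rectangle is?
    "meaning of perimeter",       # c. Don't know the word?
    "straightforward perimeter",  # d. Simple "find the perimeter" problems.
    "MCAS-style perimeter",       # the "deeper" test question itself.
]

def next_skill_to_teach(mastered):
    """Return the shallowest skill the student hasn't mastered yet,
    or None if the whole ladder is done."""
    for skill in SKILL_LADDER:
        if skill not in mastered:
            return skill
    return None

# A student who can add and knows her shapes, but not what perimeter means:
student_mastery = {"multi-number addition", "properties of shapes"}
print(next_skill_to_teach(student_mastery))  # -> "meaning of perimeter"
```

The hard part isn't the code, of course. It's agreeing on the ladder itself -- and the ladder is exactly the thing our sector has no shared vocabulary for.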

But a teacher searching for math software has no real idea how "deep" it goes, or how "shallow" it starts. No common language for "Depth" or "Complexity."

This is not the Test Kitchen issue: a gap in the information market, where it's hard to know how good a product is.

This is a language gap: a lack of clarity about what a product even tries to do.