# Khan Academy Vs. Accelerated Math, Blog 3 of 3

MG here. Below, my colleague Ray describes a positive experience with Khan Academy problem sets. Some good comments yesterday critiquing Khan's problem sets. Dan, a math teacher turned doctoral student, writes:

In short, there are lots of valuable mathematics that you can’t assess using KA’s limited input set. Of the math you CAN assess using KA’s limited input set, they make it too easy to demonstrate proficiency without actually being proficient.

Read his views here and here.

Chris, who teaches math at Normandale Community College in Bloomington, MN, argues that Khan problem sets aren't constructed with particularly good "pedagogical content knowledge." (Our friend Deborah Ball discussed the PCK concept here). Chris critiques, for example,

...Exercises that ask students to locate numbers on a number line, but these numbers all have one decimal place and are to be placed on a number line that is already subdivided into tenths. In short, the exercises offer no intellectual rigor and do not address our central concern.

At some point I'd like to return to these critiques. For today, however, I'm going to turn the floor back over to Ray. No need to mention that Manti Te'o ran a very poor 40 in the NFL Combine. Oops, I just did.

* * * *

Ray here.

Monday I guest-blogged about one component of our current math tutorial structure (Accelerated Math).

Today I'll describe how we -- 6 kids, 3 tutors, and me -- actually used Khan in a daily hour set aside for math tutorial. Below, I describe the "5 rounds" of our attempts to "hack Khan."

First, to MG's mom: he requested I clarify what I mean by hacking. "Hacking into" something is a security breach. "Bad." But when I take out the word ‘into’ and say, “I hacked Khan Academy,” it just means I’m tinkering with the product to work better in my context. "Good." Mike didn't want you to think he was condoning any nefarious activity on my part.

Second, for context, readers should know: our kids' math classes and tutorial are a mix of more abstract conceptual stuff and basic problem practice. If you're interested in the general question about the balance between the mix of math "basics" and math "concepts," or math concepts versus math procedures, this is worth a read.

Okay, off we go.

Round One:

Once we’d shown kids how to use the Khan website, we let them pick and choose which aims they wanted to work on. Kids picked the easy ones. They racked up tons of points and badges. They loved it. When they clicked on a more challenging aim, they got frustrated quickly, and would frequently downshift to an easier aim to work on. Not a good start. Product's fault or my fault? My fault. Didn't set kids up for success.

Round Two:

Tutors selected aims for kids. Tutors at first picked easy aims and had the same problem getting the kids to work hard at more challenging ones. Hmm.

Round Three:

Tutors picked aims right at the upper edge of a kid’s ability. (MG note: for more on this, see Vygotsky's "Zone of Proximal Development" -- drink, Norm!). This is the right idea generally. But after too many "right at the edge" problems in a row, our students became frustrated.

I had flashbacks to my days as a distance runner in high school. I’d go out too hard at the start and implode before the end of the race. Fly and die, coach called it. That is what was happening on Khan.

Round Four: The Khan Fartlek

Fartleks are unfortunately named interval drills for runners (apparently it’s a Swedish term for ‘speed play’). You go at peak effort for a certain period of time, then slow to a jog to recover, then speed up again. Over and over.

The Khan Fartlek starts with the goal-setting function. Every account can create goals - sets of 5 exercises or videos that the program saves. Tutors created 2 different sets of goals: an easy set and a hard set. We made kids switch back and forth – achieve mastery on an easy goal, then go achieve mastery on a hard one.

Worked for a little bit. But then kids would get stuck for 30 or 40 minutes on a hard goal and burn out. They can’t sprint forever.

Round Five:

Each kid got 5 different sets of goals at increasing levels of difficulty:

Level 1: Walk. Aims that a kid will very easily master. Highly unlikely to get one wrong. Not so easy as to be a waste of time, but something that won’t take much time to achieve proficiency on. No help required from a tutor.

Level 2: Light jog. A kid will often get a problem or two wrong at first, but then pretty quickly figure it out. No help required from a tutor.

Level 3: Moderate jog. A kid will almost certainly get some of the problems wrong at first. Very minimal tutor intervention may be necessary.

Level 4: Run. An aim a kid does not remember how to do. Will need a few minutes of tutor help before being able to work productively alone.

Level 5: Sprint. Most challenging aim. Something a kid may have never seen before. Is at their upper limit of ability. Will need 5-10 minutes of tutor help or a video to get started.

Tutors created these levels for each of their kids. An advanced kid had much more challenging aims than a struggling kid. So level 1 for one person might be another’s level 4. And it was easy for everyone to know what they needed to do. No searching around on the ‘concept map’ for the next aim. Instead, a kid would just click on the ‘goals’ button and see this:

Then a kid could pick any goal he/she liked.

With the 5 levels created, we needed to get kids to push themselves without burning out. So we created an incentive program. Every kid got a half-sheet of paper at the beginning of each class. On that sheet were a bunch of boxes with numbers inside, like this:

Each number represented a level. If kids completed an exercise at that level, they raised their hand and a tutor came over and checked off that number. Once they completed every number inside one box, they would earn a merit. Kids got to choose the aims they worked on (which they liked), but they weren’t rewarded unless they pushed themselves and completed at least some higher-level aims.

Vibe:

Once we figured all this out, the room became quiet. Kids worked productively and independently for most of the hour. Once in a while a kid would raise her hand and ask her tutor for help on a problem. Sometimes a kid would pump a fist in the air in victory, or grab his hair when he made a mistake.

Tutors seemed to have a much easier time. In the old tutorial, a tutor sat together with both tutees and constantly tracked how each kid was doing. With Khan, they stepped back and let the kids struggle much more independently.

Videos:

We wondered: should we make kids use the videos when stuck, or have tutors just jump in?

We found that, compared to a tutor, the videos were a pretty inefficient way of getting the kids un-stuck. Usually a kid had one small misunderstanding that a tutor could diagnose in 10 seconds and fix in 15. Watching a 5-minute video wasted time.

Results:

Ray’s Overall Rating of Khan Academy in this context: 9/10.

Tutor Rating: 8/10 average over the three weeks, closer to 9/10 at the end. This compared with a baseline rating of 6/10 for Accelerated Math.

My guess would be that Khan, minute for minute, was about 3 times more productive than Accelerated Math. While this is not an apples-to-apples comparison, kids mastered 8 or so Khan objectives per week, compared to 1 (more challenging) Accelerated Math objective per week. I stress again that this is purely in our context, with all our peculiarities.

Examples of Khan aims achieved:

• two-step equations
• adding decimals
• converting decimals to percents
• least common multiple
• prime factorization
• adding negative numbers
• number lines
• distributive property
• stem and leaf plots

(Note: fair question in Dan's blogs about degree to which "aim achieved" means "thoroughly achieved.")

Kid Reactions:

What they liked:

“I liked that I could choose the topic I could work on.”

“I liked that it was a challenge and you could get merits.”

“It gave me lots of different math things to choose from.”

“I liked that they gave you different ways of showing your work.”

“It gives you hints.”

What they didn’t like:

“I didn’t like that I had to keep doing types of problems until I got a full star.”

“I didn’t like that my tutorial partner and I were split up.”

“Too hard.”

“It got me mad when I didn’t fill up the star.”

“They didn’t really give you time to talk with your tutor and tutee.”

“Nothing bad. It was a 10 out of 10.”

“Ugh. I never did so much math in my life.”

“My head hurts.”

When I asked kids if they wanted to continue using Khan or go back to Accelerated Math, they were split. Half complained that it was much harder using Khan and they missed spending time with their friends in our normal tutorial. They went from having a very interactive tutorial, with little breaks of chatting or waiting for their tutor to come check their work, to a tutorial where they were working very hard for the entire time.

Of course, as an educator, I'm discounting their concern to a degree...what I thought I was observing was more efficient math practice. Khan is a huge protector against the natural digression of tutorial. It’s challenging to get kids to work really hard. It’s particularly challenging when you tutor two or three of them at a time, all needing attention at once. And when everyone’s sitting around a table, it’s much easier to take little breaks.

With Khan, there’s an unlimited bank of problems. Fewer logistical bottlenecks. Automatic feedback loops that don’t require a human tutor to be constantly correcting a kid. It creates a more challenging tutorial, and a hugely productive one.

This does imply, however, that at Match Next, if we're successful at creating 40- to 60-minute bursts of more efficient learning where kids are really racking their brains, we'll need to look hard at giving kids better/longer breaks or downtime during the day than conventional schools do.

Big thanks to all my peeps at the Match middle school, and the 6 students and 3 tutors in particular for leading this work.

-Guestblogger Ray Schleck