NSVF Boston Fund

Ah, 2010.  

The iPad debuted.  Vuvuzelas at the World Cup.  The Girl with the Dragon Tattoo did boffo box office (novelist Stieg Larsson's identity revealed here for the first time as that of Mike Larsson and Stig Leschly). 

And the Massachusetts cap on charter schools was lifted. 

So in 2011, the NewSchools Venture Fund launched its Boston Fund.  They just released a report covering the last four years. 

So what did the NSVF Fund do?  

Basically, Boston's charter sector grew from serving 5,000-ish kids in 2010 to 11,000 kids once the current growth spurt is done.  Without losing any quality!  That's what amazes me, looking back.  No dilution in results.  Boston's charter kids are still #1 compared to charter school students in all the other cities (according to Stanford's study of 41 such cities).  

Props to all the folks who led organizations that grew, and the veteran teachers who showed the way to the newbies: our friends at Brooke, COAH, Excel, KIPP, Uncommon, UP.  The work is rewarding.  But really, really draining....particularly if you're going to grow and keep quality high.

From a Match point of view, NSVF helped launch Match Community Day in collaboration with Lawrence Community Day.  And allowed us to grow Match Teacher Residency, which supplies teachers to a number of the Boston charters.  

It takes a ton of energy to raise a fund like this, and then to do the diligence to invest it well.  Props to Jim, Lauren, Graham, and the NSVF team.

Lost at School

1. Ross G

A new Mother Jones magazine article is entitled "What If Everything You Knew About Disciplining Kids Was Wrong?"

Psychologist Ross Greene, who has taught at Harvard and Virginia Tech, has developed a near cult following among parents and educators who deal with challenging children. What Richard Ferber’s sleep-training method meant to parents desperate for an easy bedtime, Greene’s disciplinary method has been for parents of kids with behavior problems, who often pass around copies of his books, The Explosive Child and Lost at School, as though they were holy writ.

His model was honed in children’s psychiatric clinics and battle-tested in state juvenile facilities, and in 2006 it formally made its way into a smattering of public and private schools. The results thus far have been dramatic, with schools reporting drops as great as 80 percent in disciplinary referrals, suspensions, and incidents of peer aggression. “We know if we keep doing what isn’t working for those kids, we lose them,” Greene told me. “Eventually there’s this whole population of kids we refer to as overcorrected, overdirected, and overpunished. Anyone who works with kids who are behaviorally challenging knows these kids: They’ve habituated to punishment.”

Under Greene’s philosophy, you’d no more punish a child for yelling out in class or jumping out of his seat repeatedly than you would if he bombed a spelling test. You’d talk with the kid to figure out the reasons for the outburst (was he worried he would forget what he wanted to say?), then brainstorm alternative strategies for the next time he felt that way. The goal is to get to the root of the problem, not to discipline a kid for the way his brain is wired.

2. Ben

I asked Ben Marcovitz for his thoughts here.  Ben is one of the most thoughtful educators I know. Here is a recent story about his New Orleans charter schools, called Collegiate Academies, and their efforts to launch programs to serve students with severe special needs.

Ben's organization recently went from a suspension rate of 50+% annually to one below 5% annually.  They did this all through restorative programs (which he says cost 'a boatload of money').  So I asked him for his thoughts on the article.  

Ben writes:

A. What the article gets right:

• I love me some Ross Greene. Lost At School is a landmark. The view that behavioral skill deficits should be treated with a similar mindset to academic skill deficits has no downside I can think of, and generally tends to benefit both the teachers who have that mindset and the kids they teach.

• Skinner-esque behaviorism does not benefit all kids. It will almost always fail as a one-size-fits-all measure for student populations with any diversity at all.

• Greene’s methods aren’t the only ones, but I generally think that using his stuff where behaviorism fails—and indeed with any kid—is typically better than any alternative I know.

B. What the article gets wrong:

Teacher time.

The school profiled in the article? It’s in Central Maine. I’m guessing there’s not a huge number of behavioral challenges. Teachers can do a better job with the few who struggle.

The article—as with so many like it—just doesn’t address how much more time-consuming these approaches are, or how this approach plays out in a school with a tougher population. Without doing that, it fails to recognize the real reason schools don’t tend to do a lot of this: to most it is inconceivable to scale these methods across a school, with the skill level most teachers have (and the time they have). The implication can be to vilify educators who don’t use these methods, painting them as retributive tyrants who just “get mad” at kids who misbehave (sometimes true), rather than teachers who wish they had time for more effective interventions (more often true).

Let’s take a typical high-poverty school. Teachers may be already working overtime to keep academics above water. The Greene approach leads to some very tough choices. You’re basically telling hard-working teachers to either put in many more hours per week, or to substitute academic work (planning lessons, tutoring strugglers after school, using data, meeting with other teachers) for 1:1 conversations with students about strategies.

Collegiate Academies were able to partially bypass this tough choice. We are fortunate in that our academic success has led to fundraising prowess, which I suspect many traditional schools don’t have. So our team includes:

• additional aides to help our kids with special needs in more specialized ways
• additional tutors to close a giant grade-level math/reading gap across the board in our schools
• additional mental health personnel to confront the extraordinary trauma our student population faces as a whole
• more robust extra-curriculars to invest kids who’ve had a terrible relationship with school for ten years before finding us
• a national recruitment program to find the best possible adults to help in all of the above

C. What the article doesn’t address

• It’s true, I think, that the typical high-poverty public schools in the USA are a mess. That is probably true of many “typical” charter schools. I’m not sure, however, that these schools are flourishing when they adopt restorative justice. Time-strapped teachers already on the brink? I’d like to see some reporting there.

• The outlier group of high-performing charters, however, often combines “traditional consequences” with lots of joy, consistency, and proactive parent outreach. For every 1 kid who gets detention and it makes their behavior worse, there are 10 who initially learned to curb their negative impulses in order to avoid detention. That is a big omission. In fact, in many schools where teachers have limited time, I wonder about restorative justice versus attending to these three things (joy, consistency, and parent outreach).

• There remains a big disconnect between teachers who see these highly volatile behaviors each day, and those outsiders who are most concerned with discipline “reform” (whether scholars, activists, parents, reporters, or educators from suburban or selective schools). The RSD in New Orleans held a community roundtable on discipline. They mediated it quite well. They asked the many advocates and community members (both pro-charter and anti-charter) to write down “what behaviors actually warrant suspension?” School folks wrote down things like “purposely urinating on school property” and “throwing a staple gun at a teacher’s head.” The activists who opposed suspension tended to say things like “verbal sexual harassment” and “refusing directions more than once.” Wow! They were “harder” on kids than we (the supposed hardliners) were! Which just goes to show, many of the activists are not sitting with a teacher, embedded for a full week or two, to see the true range of behaviors that a teacher sees. Because then I believe we’d agree on a lot more.

3. My thoughts (just speaking for myself, not Match)

It's hard to use public policy to tell teachers what to do -- that goes for everything, not just discipline.  It just seems like the Law of Unintended Consequences tends to win most of the time, when you regulate from afar.  

I tend to like the combination of: school choice for teachers, school choice for parents, transparency on the policies (stated rules and consequences), transparency on the outcomes.  

The last one seems quite problematic right now.  We're in a period where discipline outcomes are measured by suspensions and expulsions.  That drives the public narrative.  Because that data is available!  Wow, that is a huge problem.  

Measuring just on suspensions is like measuring football entirely on turnovers.  Yes, you would like to avoid them.  But at what price?  You can avoid fumbles by handing it to a sure-handed beefy runner....but maybe one who is slow and not very good at helping your team score.  You can avoid interceptions by never passing, etc.  That's not a recipe for success.  If you cut suspensions, but teacher departure rises, student enrollment falls, and achievement falls -- is that a good outcome? 

I would think most people would agree: we care about "total climate."  That would include - what's the typical class like?  How many minutes are lost to misbehavior?  How much authentic positive stuff happens?  Is there legit joy, smiling, laughter (that doesn't come from teasing other kids, or being rude to the teacher, etc)?  How often does the teacher make what Ted Sizer called the compromise -- give easy academic work in exchange for no flagrant misbehavior?  What about the hallways between class, or the lunchroom, or the bathroom -- what are they like?  Are some kids scared?  Bullied?  

I like the NYC approach to measuring school climate.  Multiple measures.  They survey parents, teachers, and kids each year.  Here, take a look at this snapshot from Bayside High, a big school in Queens.  These are student responses. 


You can read the whole Bayside High report here.   

Unfortunately, my sense is: this wonderful NYC data is too buried to drive the policy conversation.  Maybe I'm wrong.

Is anyone aware of scholars and reporters digging deep into this data set?  Is there any other data set in the USA just as good?  

I think it'd be hugely productive to identify NYC schools which have made progress in "Total Climate" -- and then study why.  Sometimes you'll just find good old-fashioned leadership and teamwork, without any fancy new policies.  

And to study the "low tail" as well -- which NYC schools have cultures that plummeted.  I suspect sometimes you'd find that a few key staff departed, and it turned out "They were the glue that held it all together."   

A Walk in the PARCC – Part 2: ELA

Guest Post by Stig Leschly, CEO of Match Education

This is the second of two posts on the move from MCAS to PARCC in Massachusetts.  This post covers the move in ELA. 

Just as we prefer the new PARCC-Math tests to the old MCAS-Math tests, we prefer the new PARCC-ELA tests to their MCAS predecessors.  

Here is why:

  1. PARCC-ELA mitigates the role of luck.
  2. PARCC-ELA, more than MCAS-ELA, requires students to distinguish nuances of word meaning.
  3. PARCC-ELA, more than MCAS-ELA, demands that students analyze authors’ uses of literary devices.
  4. PARCC-ELA, more than MCAS-ELA, tests true reading comprehension.
  5. PARCC-ELA, far more than MCAS-ELA, tests writing extensively and requires students to write in response to complex texts.

1.  PARCC-ELA mitigates the role of luck

PARCC-ELA makes guessing less profitable.  It does so in two ways.  First, it often requires students to select multiple correct answers and to complete two-step problems.  The payoff from blind guessing in problems of this sort is smaller.  Second, PARCC-ELA increases similarity among answer choices, thereby making it harder for students to rule out obviously incorrect answers.
 
Consider this 5th grade MCAS-ELA item that tests a student’s understanding of the main meaning of a paragraph:

Here is the paragraph in question:

A student who has failed entirely to understand the meaning of the paragraph obviously has a 25% chance of getting the question correct by simply guessing.

Moreover, a student with a modest understanding of the meaning of the paragraph might be able to rule out answers B and C (which have Shift as the narrator) simply by recognizing that the narrator of the paragraph is Puzzle.  If able to rule out answers B and C, a student then has a 50% chance of guessing correctly between answers A and D.

By contrast, consider this 5th grade PARCC-ELA item that similarly asks a student to select the best summary of a text:

Part A of the question above, like the MCAS-ELA question before it, offers a student who has completely failed to understand the underlying text a 25% chance of guessing correctly.  

However, Part B is effectively immune to guessing since it requires a student to have answered Part A correctly in the first place and, further, requires a student to select two correct responses from six options (the odds of blindly selecting two correct items from a set of six choices are approximately 6%).
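The guessing odds quoted above can be checked with a quick combinatorial sketch (the six-option, pick-two setup is as described in the item; the scoring assumptions are mine, and this is just the arithmetic):

```python
from math import comb

# Part B: a blind guesser must pick exactly the 2 correct responses
# out of 6 options.  Every 2-option subset is equally likely.
p_part_b = 1 / comb(6, 2)          # 1/15
print(f"Part B alone: {p_part_b:.1%}")   # roughly 7%, close to the ~6% cited

# Part B only counts if Part A (a 1-in-4 guess) was also answered correctly.
p_both = (1 / 4) * p_part_b
print(f"Parts A and B together: {p_both:.1%}")
```

So a pure guesser clears both parts well under 2% of the time, versus 25% on a standard one-of-four multiple-choice item.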

Moreover, in the PARCC-ELA item above, a student with only a superficial understanding of the underlying text will struggle to rule out answer choices as obviously wrong because the answer choices are all reasonable and similar.  For a student to rule out any of the answer choices, she has to read the underlying passage carefully for inference, point of view, and meaning.


2.  PARCC-ELA, more than MCAS-ELA, requires students to distinguish secondary, figurative, and context-specific meanings of words

MCAS-ELA, of course, tests vocabulary.  But, as you will see, its approach to vocabulary is far less challenging than that of PARCC-ELA.  

Take a look at this 5th grade MCAS-ELA vocabulary question:

The underlying passage is a poem.  The poem is essentially a fictional dialogue between a dog and a squirrel in which the dog fantasizes about catching the squirrel and in which the squirrel teases the dog.  In Line 23, the squirrel taunts the dog, “I know the precise point at which I must flee.”

The MCAS-ELA vocabulary question is asking the student to define “precise,” and the student can get the right answer in two ways.

First, a student can get the correct answer, independent of the underlying passage, simply by knowing the main dictionary definition of the word “precise.”  The main definition of “precise” is “exact.”   “Exact” is listed as answer D, and—very importantly—answers A, B, and C are not secondary definitions of the word “precise.”  So, a student who knows that “precise” means “exact” can answer the question correctly on that basis alone.  She never even has to read the text.

Second, a student who does not know the main dictionary definition of “precise” might nevertheless get the correct answer if she understands the meaning of the words offered in the answer selections A-D, if she generally understands the underlying text, and if she substitutes the answer choices into Line 23 in place of “precise” to test for coherence.   If a student does that, she might be able to rule out answers A, B, and C as either nonsense (in the case of answer B) or as inferior (in the case of A and C).

By contrast, PARCC-ELA vocabulary items virtually always require students to read the text and rarely allow students to rule out obviously wrong answers via the fill-in-the-blank process described above.  Also, PARCC-ELA vocabulary items often test secondary and figurative meanings of words.

Consider the following PARCC-ELA item, also a test of 5th grade vocabulary:

Here is the paragraph from the underlying passage:

Notice that this PARCC-ELA question, unlike the MCAS-ELA example above, is testing for one of multiple definitions of “vocal.”  Specifically, in the passage, “vocal” is being used to mean “noisy.”   That is not the exclusive or even the primary definition of “vocal” (which is mainly defined as “of the vocal cords”).  Also, to make matters more difficult, the question lists “challenging” – a secondary definition of “vocal” but not its use in this passage – as an alluring but false answer option. 

In all, to find the correct answer in the PARCC-ELA vocabulary question, a student has to read the passage for context and meaning (i.e. she has to understand, from context clues, which definition of “vocal” is being used), and she has to understand the full range of definitions for “vocal” (in order to rule out tempting but false answer options).

PARCC-ELA routinely takes this sort of approach to vocabulary.  It generally requires students to master figurative, secondary, and context-specific meanings of words.    


3.  PARCC-ELA, more than MCAS-ELA, demands that students analyze authors’ uses of literary devices

MCAS-ELA often requires students to identify literary devices where they occurred in texts.  For example, in this 5th grade MCAS-ELA question, students must identify figurative language:

This MCAS-ELA question is a sensible, but relatively easy, assessment of whether a student can identify figurative language.   

PARCC-ELA, in testing for figurative language, increases the challenge.  Consider this 5th grade PARCC-ELA item:

First of all, notice that—without a thorough reading of the underlying passage—a student cannot answer either part of the question.  The correct answer here—one that is only apparent from considering the whole underlying text—is that the song “gives the reader information about Davy’s life” (answer B).  By contrast, on the MCAS-ELA question above, a student could answer correctly without reading the underlying passage so long as the student knew the general qualities of figurative language.  

And notice again that PARCC-ELA demands that students justify their answers (see Part B).  This two-part problem structure thwarts students who might guess right on the second part of the item.  They don’t have the opportunity to earn any points on Part B unless they have answered Part A correctly.


4.  PARCC-ELA, more than MCAS-ELA, tests true reading comprehension

Here is a 4th grade MCAS-ELA question.  This question tests a student’s ability to select the main idea of a text:

This item is, at best, a modest quiz of comprehension.  The answer options are highly different from one another.  So, even a student with only a limited understanding of the underlying text will be able to rule out 1-2 clearly erroneous answer options.

Now take a look at this 4th grade PARCC-ELA question, also on main idea:

This PARCC-ELA item on main meaning is far superior to the MCAS-ELA question, for two reasons.

First, the answer options are all reasonable choices, at least on a first pass of the underlying text.  They all refer to themes or events in the passage, and none of them is obviously correct or incorrect to a student who has only a basic understanding of the underlying text.

Second, Part B requires students to identify the evidence that “best” supports the correct answer in Part A.  The answer choices, again, are all reasonable.  They vary in degree.  Some answer choices offer partial evidence for the correct answer in Part A, and some cover tangential themes.  While wrong, these answer choices are tempting.  To get the question correct, the student has to weigh all the answer choices to determine the one that “best” supports the correct answer in Part A.


5.  PARCC-ELA, far more than MCAS-ELA, tests writing extensively and requires students to write in response to texts

MCAS-ELA asks students to write a composition in grades 4, 7, and 10.  Composition is not included on MCAS-ELA in grades 3, 5, 6, and 8.  

PARCC-ELA, in contrast, has three writing exercises in every grade from 3-11.    So, the first point is simple: students write far more on PARCC-ELA than they do on MCAS-ELA.

The second point is that PARCC-ELA writing tasks are almost always in response to complex texts and dependent on a student’s understanding of those texts.  As you will see below, PARCC-ELA writing assessments are more than strong tests of general writing skill. They are also enormously demanding tests of reading comprehension and textual analysis.

Here is a typical MCAS writing prompt.  This prompt is for 7th graders, and it asks the student to write a personal statement:

This writing assessment tests a student’s general writing skills.    A student can earn a top score by crafting a well-structured essay (with an opening, a body, and a conclusion) and complying with rules of grammar and syntax. 

Now turn to PARCC-ELA and its approach to writing.  

As mentioned, students in every grade must complete three writing exercises.  These writing tasks cover literary analysis (in which students are writing in response to fiction texts), research writing (in which students are writing in response to non-fiction texts), and narrative writing (in which students create an original narrative, often in the first person).

In all of these assignments, students must not only write well.  They must also demonstrate strong reading comprehension and text analysis.

Consider this 7th grade literary analysis task from PARCC-ELA:

To do well on this literary analysis task, a student must comprehend the relevant passages from The Count of Monte Cristo and Blessings, must identify themes in each book, and must reason through the choices that each author makes in developing those themes.  This difficult work of comprehension and analysis precedes the subsequent (and also difficult) task of building a well-argued, well-organized, and well-written essay.

Consider this 7th grade PARCC-ELA narrative writing task:

Here again, textual analysis and true comprehension are prerequisites for the writing assignment.  A student must understand the story in depth in order to assume the point of view of one of its characters.  Again, PARCC-ELA is testing both strong writing (in this case, narrative writing) and reading comprehension and textual analysis (in this case, a critical understanding of the characters and plot in the piece).

Finally, consider this PARCC-ELA research writing task from 7th grade.  As mentioned, PARCC-ELA’s research writing tasks require students to write critically across multiple non-fiction texts:

This writing task, again, truly requires skill in both writing and textual analysis.
 
Notice the depth of understanding and analysis required of students in order to even outline a response.  Students must understand and compare the way in which each author uses explanations, examples, and descriptions to accomplish a purpose.  This difficult analytic work has to be completed before the subsequent task of writing a complex critical essay can unfold.

In all, PARCC-ELA requires far more writing of students than does MCAS-ELA, and, as importantly, PARCC-ELA writing always requires complex comprehension and textual analysis as a precursor to writing.


Conclusion

We like the PARCC-ELA tests, just as we like the PARCC-math tests.  

Compared to their MCAS analogs, PARCC-ELA tests and the related Common Core standards in ELA impress us as more rigorous and more closely aligned to the English and humanities challenges that our students will face in our high school and in college.  If PARCC stabilizes and becomes the new standard-bearer in Massachusetts, we look forward to challenging ourselves to meet its high bar in ELA.  It is the right bar, we think.

The following people gave input in writing this post: Anne Lyneis, Jamie Morrison, Kim Nicoll, Ray Schleck, Meredith Segal, Emily Stainer

 

A Walk in the PARCC – Part 1: Math

Guest Post by Stig Leschly, CEO of Match Education

Massachusetts is moving to the new national standards (Common Core) and related tests (PARCC). At least so it seems.  The politics of it all are complex and hard to predict.  They’ll play out over the next few years.

This post is not about the politics, though.  It’s about the substance of it all—about how the new standards and tests look, here on the ground, to our students and teachers.

This post is the first of two.  It addresses the move from MCAS to PARCC in math.   

A second post, later this week, will cover the move from MCAS to PARCC in ELA.

Here at Match, we like PARCC-Math over its predecessor, the MCAS-Math.  Here are some of the reasons why, each of which I’ll cover in detail in this post:

  1. PARCC-Math mitigates the role of luck.
  2. PARCC-Math, more than MCAS-Math, requires students to understand math conceptually.
  3. PARCC-Math, more than MCAS-Math, requires students to provide detailed explanations for their solutions.
  4. PARCC-Math open response questions, more than their MCAS analogs, demand that students identify relevant information and solve complex, multi-step problems.

1.  PARCC-Math mitigates the role of luck

Guessing is invariably part of test taking, but less so with PARCC-Math than with MCAS-Math.

A typical MCAS-Math multiple-choice question asks a student to select one of four answer choices.  For example, consider this order-of-operations question from the 5th grade MCAS-Math:

A student with no knowledge of order of operations has a 25% chance of guessing the correct answer.

PARCC-Math greatly reduces a student’s odds of lucking into the correct answer.  PARCC-Math asks fewer straight multiple-choice questions and, instead, favors open-response questions.  Where PARCC-Math seeks a simple numerical answer, it often asks students to fill out an answer grid, rather than offering a series of possible answers.

Here is an example of an order-of-operations question from 5th grade PARCC-Math, one that involves a “grid” answer key:

Even when PARCC does ask standard multiple-choice questions, it often asks the student to select “each” correct answer, without specifying how many correct answers are present in the answer line up.

Here is an example of a 7th grade PARCC-Math multiple-choice question with multiple correct answers:

The question above has two correct answers (A and E).  The odds of randomly guessing exactly A and E are minuscule (about 3%).  Students cannot easily luck into showing mastery on PARCC-Math.
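That ~3% figure is consistent with modeling the blind guess as a random non-empty subset of the five options (the guessing model here is my assumption, not anything PARCC publishes):

```python
# With 5 options (A-E) and no hint of how many are correct, a blind
# "select each correct answer" guess is a random non-empty subset:
n_guesses = 2**5 - 1               # 31 possible answer combinations
p_guess = 1 / n_guesses
print(f"{p_guess:.1%}")            # about 3%, matching the figure above

# Compare with a standard one-of-four multiple-choice guess:
print(f"{1 / 4:.0%}")              # 25%
```

The gap between ~3% and 25% is the whole point: the multi-select format makes lucky guesses roughly eight times rarer.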


2.  PARCC-Math, more than MCAS-Math, requires students to understand math conceptually

Consider the following 7th grade MCAS-Math question on volume:

To answer this question, a student needs simply to apply the formula for volume (L x W x H) but does not need to understand volume conceptually (i.e. be able to spot a volume question that is not overtly described as such).  

Moreover, in the question above, the values for the formula are obvious.  A student does not need to be selective in identifying relevant data to input into the volume formula.

For contrast, consider this 7th grade PARCC-Math question on volume:

In this PARCC-Math volume question, a student still needs to know and be able to apply the basic formula for volume of a rectangular prism.   

But notice the additional challenges involved in the PARCC-Math version.  

First, students have to arrive independently at the values for L, W, and H in the volume formula.   They are not given or obvious.  That challenge alone will stump students who have only a superficial understanding of volume.   

Second, students have to know the meaning of a “right rectangular prism” in order to downsize the block correctly by 20 units, as directed in the third bullet of the question.  

And third, students have to show their work and logic as they progress through the problem.

In short, students truly have to understand volume as a phenomenon in order to pass this PARCC-Math question.  Mastering the formula for volume alone is not enough.


3.  PARCC-Math, more than MCAS-Math, requires students to provide detailed explanations for their solutions

Historically, MCAS-Math rarely asked students to explain their choice of math algorithm.  For example, consider this 5th grade MCAS-Math problem on fractions:  

By comparison, consider this 5th grade PARCC-Math question, also on the addition and subtraction of fractions:

To get full credit on this PARCC-Math question, a student obviously needs to be able to subtract 1 3/4 from 3 2/4.  

But she also needs to understand this procedure as the one that is called for in the word problem and – for full credit – to go further and explain how the protagonist mishandled the problem.    

The question, far more than a conventional MCAS-Math question, demands fluency with fractions.


4.  PARCC-Math open response questions, more than their MCAS analogs, demand that students sort for relevant information and solve complex, multi-step problems

The most challenging MCAS-Math questions are “open response” questions.  These questions typically ask a string of 2 or 3 questions, sequenced in a way that guides a student through a problem.

Here is an example of an MCAS-Math open response question in 5th grade.  It deals mainly with multiplication and division:

This question is, in our opinion, a solid test of a student’s mastery of multiplication and division.  But, it could be a lot more demanding.  

In particular, notice how a question is inserted immediately following the information relevant to that question.  In this way, the question guides students. 

By contrast, PARCC-Math open-response questions tend to present a full and often lengthy word problem and then, at the end, ask a series of questions that require students to sort for and manipulate relevant information.

Here is a 5th grade PARCC-Math open-response question that also tests multiplication and division.  It is much harder, as you will see:

On this PARCC-Math question students have to take seven or eight steps to reach a solution, whereas the MCAS-Math analog above required only three or four.   And students have to search intensively for information relevant to each step of the solution.  The question truly tests students’ ability to parse information in context, to think conceptually, and to discern an efficient path to a solution.

This PARCC-Math question also involves numbers that are plainly more difficult to multiply and divide than the numbers on the MCAS-Math questions.   The last step on the PARCC-Math question involves dividing 1,491 by 24, finding the answer as 62 with remainder 3, and concluding as a result that at least 63 cases of water are needed.  By comparison, the most complex operation on the MCAS-Math question is to divide 180 by 18.
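The final step described above is a ceiling division, and a couple of lines confirm the arithmetic (a sketch only; the actual test item is an image not reproduced here):

```python
# 1,491 bottles of water, packed 24 to a case.
cases, leftover = divmod(1491, 24)
print(cases, leftover)             # 62 full cases, 3 bottles left over

# Partial cases still have to be bought, so round up (ceiling division):
cases_needed = -(-1491 // 24)
print(cases_needed)                # 63
```

That last "round up because a remainder still needs a case" move is exactly the conceptual step the PARCC item demands beyond raw division.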


Conclusion

In all, we like PARCC-Math here at Match.   

The new tests and the Common Core standards offer a more rigorous approach to math skills and knowledge than their predecessors, in our view.  They prepare our students more clearly and from an earlier age for the challenges of advanced math (algebra, geometry, trigonometry, and calculus) that they will face at our high school and in college.    

PARCC-Math is a challenge worthy of our students and of our teachers.

 

The following people gave input in writing this post: Ryan Holmes, Jamie Morrison, Ray Schleck, Meredith Segal, Jennifer Spencer, Kyla Spindler, Emily Stainer

Getting Better at Teaching

Guest post by Neal Teague, Director of the Teacher Launch Project

If you asked the 25 year old me at the end of a long day at Charlestown High School during my rookie year how I planned to get better at teaching (and boy did I need to get better), I would have said something like “Well, first I need to figure out which activities are the best.  I am going to try to plan different activities and see which ones keep my students’ attention the most.  Teaching is something you have to learn by doing it, and then you reflect on what worked and what didn’t.  I will make a lot of mistakes and have plenty of bad days but in 3-5 years I think I will get the hang of it.”

That would have echoed a very commonly held belief about teaching: that the job is more art than science, and the only way to learn how to do it is through years of trial and error.  Of course, if you spend a moment thinking about that comparison, you might remember an art teacher you had at some point showing you how to draw, or paint, or sculpt that clay ashtray you gave to your non-smoking mother for Mother's Day.  Of course mom loved what you made, but it wasn't exactly ready for the Pottery Barn showroom. 

Unfortunately, the stakes are a lot higher for teachers. Too often, novice teachers have little to no impact on student growth during their first year in the classroom. Below is a chart that illustrates the differences in students' math outcomes generated by Boston Public School teachers over the first 8 years of their careers. 

The good news here is that teachers tend to show rapid growth between their second and fourth years, which seems to reinforce the “learning on the job” narrative. 

But what if we could change that trajectory?  What if we could show that there is a way to get teachers ready to impact student outcomes from day one?  And most importantly, what if we could show that this method of teacher preparation works not only in high-performing charters but also in the much broader world of traditional district schools?  Imagine the impact that this new trajectory for rookie teachers would have on closing the achievement gap.

These questions are driving the Teacher Launch Project at the Sposato Graduate School of Education.  Sposato has long been in the business of preparing rookie teachers to teach in the country’s highest performing urban charter schools and has consistently achieved strong outcomes. In aggregate, Sposato-trained teachers significantly outperform their non-Sposato rookie peers, as measured by outside experts and principal evaluations.  

But it’s hard to know why these teachers are getting these results. Is it just that the program is recruiting and selecting the right people (Sposato’s admissions rate is historically <10%, with most participants coming from top-tier colleges and universities)? Or perhaps these teachers and the methods they learn through Sposato can only be successful in charter schools (which only educate 4.2% of the total student population in the U.S.)? 

To affect the broader world of teacher preparation, we need to understand the degree to which the Sposato methodology, separate from the program’s selection and placement practices, is generating unusually effective rookie teachers. Methods can scale a lot faster and wider than teacher prep programs with 10% admissions rates. 

At the heart of this methodology are two big ideas about how novices learn to teach: (1) Rookie teachers need detailed, nuanced, and prescriptive instruction on highly specific teacher skills and moves to guide their decision making in planning, classroom management, and instructional execution. (2) These skills can only be learned through intensive, deliberate practice and immediate feedback from expert coaches—feedback that sounds a lot more like what your basketball coach said when your shooting elbow was in the wrong place than the suggestive, "Hmmm, you might try doing X or Y or Z" coaching that happens in a lot of schools. 

So beginning this summer, the Teacher Launch Project will embark on a three-year randomized controlled trial of the Sposato methodology in partnership with Dr. Tom Kane and Harvard's Center for Education Policy Research. Our pilot cohort of 30 teachers—folks who are recent graduates of mainstream education school programs and who are on track to teach in traditional public school districts—will attend an intensive four-week summer institute and then receive 20 weeks of coaching during their rookie year.  The summer institute will carefully replicate the types of teaching simulations and real-time feedback used in Sposato. And the weekly coaching in the fall and winter of their rookie year will look a lot like this session with UP's very own Kelsey LeBuffe and Jesus Moore at UP Oliver:

The results of the 30 teachers in this "treatment" group will be rigorously evaluated by Dr. Kane and compared to a control group of teachers who volunteered for the training and coaching but were not selected through the randomized lottery. In the two years that follow, another 200 teachers will be recruited for the Teacher Launch Project—100 to receive the training and coaching, and 100 for the control group. The three years of data will help us learn whether the Sposato approach does indeed change the trajectory of rookie teachers who are more representative of the broader workforce in Massachusetts' public schools. 

We are confident that our model will demonstrate a clearer pathway to ensuring every teacher is effective in their rookie year.  This evidence can then inform decisions that school districts and schools of education make about how they prepare new teachers.  We are pushing for a future where new teachers across the country answer the question about how to get better with a simple two-word response: "practice and feedback."  We like that a lot better than "trial and error."