A Walk in the PARCC – Part 2: ELA

Guest Post by Stig Leschly, CEO of Match Education

This is the second of two posts on the move from MCAS to PARCC in Massachusetts.  This post covers the move in ELA. 

Just as we prefer the new PARCC-Math tests to the old MCAS-Math tests, we prefer the new PARCC-ELA tests to their MCAS predecessors.  

Here is why:

  1. PARCC-ELA mitigates the role of luck.
  2. PARCC-ELA, more than MCAS-ELA, requires students to distinguish nuances of word meaning.
  3. PARCC-ELA, more than MCAS-ELA, demands that students analyze authors’ uses of literary devices.
  4. PARCC-ELA, more than MCAS-ELA, tests true reading comprehension.
  5. PARCC-ELA, far more than MCAS-ELA, tests writing extensively and requires students to write in response to complex texts.

1.  PARCC-ELA mitigates the role of luck

PARCC-ELA makes guessing less profitable.  It does so in two ways.  First, it often requires students to select multiple correct answers and to complete two-step problems.  The payoff from blind guessing in problems of this sort is smaller.  Second, PARCC-ELA increases similarity among answer choices, thereby making it harder for students to rule out obviously incorrect answers.
 
Consider this 5th grade MCAS-ELA item that tests a student’s understanding of the main meaning of a paragraph:

Here is the paragraph in question:

A student who has failed entirely to understand the meaning of the paragraph obviously has a 25% chance of getting the question correct by simply guessing.

Moreover, a student with a modest understanding of the meaning of the paragraph might be able to rule out answers B and C (which have Shift as the narrator) simply by recognizing that the narrator of the paragraph is Puzzle.  If able to rule out answers B and C, a student then has a 50% chance of guessing correctly between answers A and D.

By contrast, consider this 5th grade PARCC-ELA item that similarly asks a student to select the best summary of a text:

Part A of the question above, like the MCAS-ELA question before it, offers a student who has completely failed to understand the underlying text a 25% chance of guessing correctly.  

However, Part B is effectively immune to guessing since it requires a student to have answered Part A correctly in the first place and, further, requires a student to select two correct responses from six options (the odds of blindly selecting the two correct items from a set of six choices are about 1 in 15, or roughly 7%).
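For readers who want to check the arithmetic, here is a quick sketch of the guessing odds, assuming a student who selects answers uniformly at random:

```python
from math import comb

# Part A: one correct answer among four choices
p_part_a = 1 / 4                  # 25%

# Part B: choose 2 of 6 options; only one pair is correct
p_part_b = 1 / comb(6, 2)         # 1/15, about 6.7%

print(f"Part A alone: {p_part_a:.1%}")             # 25.0%
print(f"Part B alone: {p_part_b:.1%}")             # 6.7%
print(f"Both parts:   {p_part_a * p_part_b:.1%}")  # 1.7%
```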

Moreover, in the PARCC-ELA item above, a student with only a superficial understanding of the underlying text will struggle to rule out answer choices as obviously wrong because the answer choices are all reasonable and similar.  For a student to rule out any of the answer choices, she has to read the underlying passage carefully for inference, point of view, and meaning.


2.  PARCC-ELA, more than MCAS-ELA, requires students to distinguish secondary, figurative, and context-specific meanings of words

MCAS-ELA, of course, tests vocabulary.  But, as you will see, its approach to vocabulary is far less challenging than that of PARCC-ELA.  

Take a look at this 5th grade MCAS-ELA vocabulary question:

The underlying passage is a poem.  The poem is essentially a fictional dialogue between a dog and a squirrel in which the dog fantasizes about catching the squirrel and in which the squirrel teases the dog.  In Line 23, the squirrel taunts the dog, “I know the precise point at which I must flee.”

The MCAS-ELA vocabulary question is asking the student to define “precise,” and the student can get the right answer in two ways.

First, a student can get the correct answer, independent of the underlying passage, simply by knowing the main dictionary definition of the word “precise.”  The main definition of “precise” is “exact.”   Exact is listed as answer D, and—very importantly—answers A, B, and C are not secondary definitions of the word “precise.”  So, a student who knows that “precise” means “exact” can answer the question correctly on that basis alone.  She never even has to read the text.

Second, a student who does not know the main dictionary definition of “precise” might nevertheless get the correct answer if she understands the meaning of the words offered in the answer selections A-D, if she generally understands the underlying text, and if she substitutes the answer choices into Line 23 in place of “precise” to test for coherence.   If a student does that, she might be able to rule out answers A, B, and C as either nonsense (in the case of answer B) or as inferior (in the case of A and C).

By contrast, PARCC-ELA vocabulary items virtually always require students to read the text and rarely allow students to rule out obviously wrong answers via the fill-in-the-blank process described above.  Also, PARCC-ELA vocabulary items often test secondary and figurative meanings of words.

Consider the following PARCC-ELA item, also a test of 5th grade vocabulary:

Here is the paragraph from the underlying passage:

Notice that this PARCC-ELA question, unlike the MCAS-ELA example above, is testing for one of multiple definitions of “vocal.”  Specifically, in the passage, “vocal” is being used to mean “noisy.”   That is not the exclusive or even primary definition of “vocal” (which is mainly defined as “of the vocal cords”).  Also, to make matters more difficult, the question lists “challenging” – a secondary definition of “vocal” but not its use in this passage – as an alluring but false answer option. 

In all, to find the correct answer in the PARCC-ELA vocabulary question, a student has to read the passage for context and meaning (i.e. she has to understand, from context clues, which definition of “vocal” is being used), and she has to understand the full range of definitions for “vocal” (in order to rule out tempting but false answer options).

PARCC-ELA routinely takes this sort of approach to vocabulary.  It generally requires students to master figurative, secondary, and context-specific meanings of words.    


3.  PARCC-ELA, more than MCAS-ELA, demands that students analyze authors’ uses of literary devices

MCAS-ELA often requires students to identify literary devices where they occur in texts.  For example, in this 5th grade MCAS-ELA question, students must identify figurative language:

This MCAS-ELA question is a sensible, but relatively easy, assessment of whether a student can identify figurative language.   

PARCC-ELA, in testing for figurative language, increases the challenge.  Consider this 5th grade PARCC-ELA item:

First of all, notice that—without a thorough reading of the underlying passage—a student cannot answer either part of the question.  The correct answer here—one that is only apparent from considering the whole underlying text—is that the song “gives the reader information about Davy’s life” (answer B).  By contrast, on the MCAS-ELA question above, a student could answer correctly without reading the underlying passage so long as the student knew the general qualities of figurative language.  

And notice again that PARCC-ELA demands that students justify their answers (see Part B).  This two-part problem structure thwarts students who might guess right on the second part of the item.  They don’t have the opportunity to earn any points on Part B unless they have answered Part A correctly.


4.  PARCC-ELA, more than MCAS-ELA, tests true reading comprehension

Here is a 4th grade MCAS-ELA question.  This question tests a student’s ability to select the main idea of a text:

This item is, at best, a modest quiz of comprehension.  The answer options are very different from one another.  So, even a student with only a limited understanding of the underlying text will be able to rule out 1-2 clearly erroneous answer options.

Now take a look at this 4th grade PARCC-ELA question, also on main idea:

This PARCC-ELA item on main idea is far superior to the MCAS-ELA question, for two reasons.

First, the answer options are all reasonable choices, at least on a first pass of the underlying text.  They all refer to themes or events in the passage, and none of them is obviously correct or incorrect to a student who has only a basic understanding of the underlying text.

Second, Part B requires students to identify the evidence that “best” supports the correct answer in Part A.  The answer choices, again, are all reasonable.  They vary in degree.  Some answer choices offer partial evidence for the correct answer in Part A, and some cover tangential themes.  While wrong, these answer choices are tempting.  To get the question correct, the student has to weigh all the answer choices to determine the one that “best” supports the correct answer in Part A.


5.  PARCC-ELA, far more than MCAS-ELA, tests writing extensively and requires students to write in response to texts

MCAS-ELA asks students to write a composition in grades 4, 7, and 10.  Composition is not included on MCAS-ELA in grades 3, 5, 6, and 8.  

PARCC-ELA, in contrast, has three writing exercises in every grade from 3 through 11.  So, the first point is simple: students write far more on PARCC-ELA than they do on MCAS-ELA.

The second point is that PARCC-ELA writing tasks are almost always in response to complex texts and dependent on a student’s understanding of those texts.  As you will see below, PARCC-ELA writing assessments are more than strong tests of general writing skill. They are also enormously demanding tests of reading comprehension and textual analysis.

Here is a typical MCAS writing prompt.  This prompt is for 7th graders, and it asks the student to write a personal statement:

This writing assessment tests a student’s general writing skills.    A student can earn a top score by crafting a well-structured essay (with an opening, a body, and a conclusion) and complying with rules of grammar and syntax. 

Now turn to PARCC-ELA and its approach to writing.  

As mentioned, students in every grade must complete three writing exercises.  These writing tasks cover literary analysis (in which students are writing in response to fiction texts), research writing (in which students are writing in response to non-fiction texts), and narrative writing (in which students create an original narrative, often in the first person).

In all of these assignments, students must not only write well.  They must also demonstrate strong reading comprehension and text analysis.

Consider this 7th grade literary analysis task from PARCC-ELA:

To do well on this literary analysis task, a student must comprehend the relevant passages from The Count of Monte Cristo and Blessings, must identify themes in each book, and must reason through the choices that each author makes in developing those themes.  This difficult work of comprehension and analysis precedes the subsequent (and also difficult) task of building a well-argued, well-organized, and well-written essay.

Consider this 7th grade PARCC-ELA narrative writing task:

Here again, textual analysis and true comprehension are prerequisites for the writing assignment.  A student must understand the story in depth in order to assume the point of view of one of its characters.  Again, PARCC-ELA is testing both strong writing (in this case, narrative writing) and reading comprehension and textual analysis (in this case, a critical understanding of the characters and plot in the piece).

Finally, consider this PARCC-ELA research writing task from 7th grade.  As mentioned, PARCC-ELA’s research writing tasks require students to write critically across multiple non-fiction texts:

This writing task, again, truly requires skill in both writing and textual analysis.
 
Notice the depth of understanding and analysis required of students in order to even outline a response.  Students must understand and compare the way in which each author uses explanations, examples, and descriptions to accomplish a purpose.  This difficult analytic work has to be completed before the subsequent task of writing a complex critical essay can unfold.

In all, PARCC-ELA requires far more writing of students than does MCAS-ELA, and, as importantly, PARCC-ELA writing always requires complex comprehension and textual analysis as a precursor to writing.


Conclusion

We like the PARCC-ELA tests, just as we like the PARCC-Math tests.  

Compared to their MCAS analogs, PARCC-ELA tests and the related Common Core standards in ELA impress us as more rigorous and more closely aligned to the English and humanities challenges that our students will face in our high school and in college.  If PARCC stabilizes and becomes the new standard-bearer in Massachusetts, we look forward to challenging ourselves to meet its high bar in ELA.  It is the right bar, we think.

The following people gave input in writing this post: Anne Lyneis, Jamie Morrison, Kim Nicoll, Ray Schleck, Meredith Segal, Emily Stainer

 

A Walk in the PARCC – Part 1: Math

Guest Post by Stig Leschly, CEO of Match Education

Massachusetts is moving to the new national standards (Common Core) and related tests (PARCC). At least so it seems.  The politics of it all are complex and hard to predict.  They’ll play out over the next few years.

This post is not about the politics, though.  It’s about the substance of it all—about how the new standards and tests look, here on the ground, to our students and teachers.

This post is the first of two.  It addresses the move from MCAS to PARCC in math.   

A second post, later this week, will cover the move from MCAS to PARCC in ELA.

Here at Match, we like PARCC-Math over its predecessor, the MCAS-Math.  Here are some of the reasons why, each of which I’ll cover in detail in this post:

  1. PARCC-Math mitigates the role of luck.
  2. PARCC-Math, more than MCAS-Math, requires students to understand math conceptually.
  3. PARCC-Math, more than MCAS-Math, requires students to provide detailed explanations for their solutions.
  4. PARCC-Math open response questions, more than their MCAS analogs, demand that students identify relevant information and solve complex, multi-step problems.

1.  PARCC-Math mitigates the role of luck

Guessing is invariably part of test taking, but less so with PARCC-Math than with MCAS-Math.

A typical MCAS-Math multiple-choice question asks a student to select the one correct answer from four choices.  For example, consider this order-of-operations question from the 5th grade MCAS-Math:

A student with no knowledge of order of operations has a 25% chance of guessing the correct answer.

PARCC-Math greatly reduces a student’s odds of lucking into the correct answer.  PARCC-Math asks fewer straight multiple-choice questions and, instead, favors open-response questions.  Where PARCC-Math seeks a simple numerical answer, it often asks students to fill out an answer grid, rather than offering a series of possible answers.

Here is an example of an order-of-operations question from 5th grade PARCC-Math, one that involves a “grid” answer key:

Even when PARCC does ask standard multiple-choice questions, it often asks the student to select “each” correct answer, without specifying how many correct answers are present in the answer lineup.

Here is an example of a 7th grade PARCC-Math multiple-choice question with multiple correct answers:

The question above has two correct answers (A and E).  The odds of randomly guessing A and E are minuscule (about 3%).  Students cannot easily luck into showing mastery on PARCC-Math.
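That 3% figure follows from a simple counting argument: a blind guesser either checks or skips each of the five options, which yields 2^5 = 32 possible response patterns, only one of which matches the key exactly.  A quick sketch that enumerates the possibilities:

```python
from itertools import product

options = "ABCDE"
correct = {"A", "E"}

# Enumerate every way a blind guesser could check or skip each option.
guesses = [
    {opt for opt, checked in zip(options, checks) if checked}
    for checks in product([False, True], repeat=len(options))
]
hits = sum(guess == correct for guess in guesses)

print(f"{hits}/{len(guesses)} = {hits / len(guesses):.1%}")  # 1/32 = 3.1%
```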


2.  PARCC-Math, more than MCAS-Math, requires students to understand math conceptually

Consider the following 7th grade MCAS-Math question on volume:

To answer this question, a student needs simply to apply the formula for volume (L x W x H) but does not need to understand volume conceptually (i.e. be able to spot a volume question that is not overtly described as such).  

Moreover, in the question above, the values for the formula are obvious.  A student does not need to be selective in identifying relevant data to input into the volume formula.

For contrast, consider this 7th grade PARCC-Math question on volume:

In this PARCC-Math volume question, a student still needs to know and be able to apply the basic formula for volume of a rectangular prism.   

But notice the additional challenges involved in the PARCC-Math version.  

First, students have to arrive independently at the values for L, W, and H in the volume formula.   They are not given or obvious.  That challenge alone will stump students who have only a superficial understanding of volume.   

Second, students have to know the meaning of a “right rectangular prism” in order to downsize the block correctly by 20 units, as directed in the third bullet of the question.  

And third, students have to show their work and logic as they progress through the problem.

In short, students truly have to understand volume as a phenomenon in order to pass this PARCC-Math question.  Mastering the formula for volume alone is not enough.


3.  PARCC-Math, more than MCAS-Math, requires students to provide detailed explanations for their solutions

Historically, MCAS-Math rarely asked students to explain their choice of math algorithm.  For example, consider this 5th grade MCAS-Math problem on fractions:  

By comparison, consider this 5th grade PARCC-Math question, also on the addition and subtraction of fractions:

To get full credit on this PARCC-Math question, a student obviously needs to be able to subtract 1 3/4 from 3 2/4.  

But she also needs to understand this procedure as the one that is called for in the word problem and – for full credit – to go further and explain how the protagonist mishandled the problem.    
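For reference, here is the arithmetic at the core of the item, worked with Python’s fractions module (the variable names are just illustrative labels):

```python
from fractions import Fraction

# The subtraction the word problem calls for: 3 2/4 minus 1 3/4
start = Fraction(3) + Fraction(2, 4)   # 3 2/4 = 7/2
taken = Fraction(1) + Fraction(3, 4)   # 1 3/4 = 7/4

print(start - taken)  # 7/4, i.e. 1 3/4
```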

The question, far more than a conventional MCAS-Math question, demands fluency with fractions.


4.  PARCC-Math open response questions, more than their MCAS analogs, demand that students sort for relevant information and solve complex, multi-step problems

The most challenging MCAS-Math questions are “open response” questions.  These questions typically ask a string of 2 or 3 questions, sequenced in a way that guides a student through a problem.

Here is an example of an MCAS-Math open response question in 5th grade.  It deals mainly with multiplication and division:

This question is, in our opinion, a solid test of a student’s mastery of multiplication and division.  But, it could be a lot more demanding.  

In particular, notice how a question is inserted immediately following the information relevant to that question.  In this way, the question guides students. 

By contrast, PARCC-Math open-response questions tend to present a full and often lengthy word problem and then, at the end, ask a series of questions that require students to sort for and manipulate relevant information.

Here is a 5th grade PARCC-Math open-response question that also tests multiplication and division.  It is much harder, as you will see:

On this PARCC-Math question students have to take seven or eight steps to reach a solution, whereas the MCAS-Math analog above required only three or four.   And students have to search intensively for information relevant to each step of the solution.  The question truly tests students’ ability to parse information in context, to think conceptually, and to discern an efficient path to a solution.

This PARCC-Math question also involves numbers that are plainly more difficult to multiply and divide than the numbers on the MCAS-Math questions.   The last step on the PARCC-Math question involves dividing 1,491 by 24, finding the answer as 62 with remainder 3, and concluding as a result that at least 63 cases of water are needed.  By comparison, the most complex operation on the MCAS-Math question is to divide 180 by 18.
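That last step rewards students who see why the remainder forces rounding up.  Here is a quick sketch of the arithmetic (the 1,491 and 24 come from the item as described above; the variable names are illustrative guesses at what they represent):

```python
import math

bottles_needed = 1491    # the quantity the item divides
bottles_per_case = 24    # capacity of each case

quotient, remainder = divmod(bottles_needed, bottles_per_case)
print(quotient, remainder)   # 62 3 -- three bottles are left over after 62 full cases

print(math.ceil(bottles_needed / bottles_per_case))  # 63 -- the leftovers force one more case
```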


Conclusion

In all, we like PARCC-Math here at Match.   

The new tests and the Common Core standards offer a more rigorous approach to math skills and knowledge than their predecessors, in our view.  They prepare our students more clearly and from an earlier age for the challenges of advanced math (algebra, geometry, trigonometry, and calculus) that they will face at our high school and in college.    

PARCC-Math is a challenge worthy of our students and of our teachers.

 

The following people gave input in writing this post: Ryan Holmes, Jamie Morrison, Ray Schleck, Meredith Segal, Jennifer Spencer, Kyla Spindler, Emily Stainer

Getting Better at Teaching

Guest post by Neal Teague, Director of the Teacher Launch Project

If you asked the 25-year-old me at the end of a long day at Charlestown High School during my rookie year how I planned to get better at teaching (and boy did I need to get better), I would have said something like “Well, first I need to figure out which activities are the best.  I am going to try to plan different activities and see which ones keep my students’ attention the most.  Teaching is something you have to learn by doing it, and then you reflect on what worked and what didn’t.  I will make a lot of mistakes and have plenty of bad days but in 3-5 years I think I will get the hang of it.”

That would have echoed a very commonly held belief about teaching: that the job is more art than science, and the only way to learn how to do it is through years of trial and error.  Of course, if you spend a moment thinking about that terrible metaphor, you might remember an art teacher you had at some point showing you how to draw, or paint, or sculpt that clay ashtray you gave to your non-smoking mother for Mother’s Day.  Of course mom loved what you made, but it wasn’t exactly ready for the Pottery Barn showroom. 

Unfortunately, the stakes are a lot higher for teachers. Too often, novice teachers have little to no impact on student growth during their first year in the classroom. Below is a chart that illustrates the differences in students’ math outcomes generated by Boston Public School teachers over the course of the first 8 years of their careers. 

The good news here is that teachers tend to show rapid growth between their second and fourth years, which seems to reinforce the “learning on the job” narrative. 

But what if we could change that trajectory?  What if we could show that there is a way to get teachers ready to impact student outcomes from day one?  And most importantly, what if we could show that this method of teacher preparation does not only work in high performing charters but in the much broader world of traditional district schools?  Imagine the impact that this new trajectory for rookie teachers would have on closing the achievement gap.

These questions are driving the Teacher Launch Project at the Sposato Graduate School of Education.  Sposato has long been in the business of preparing rookie teachers to teach in the country’s highest performing urban charter schools and has consistently achieved strong outcomes. In aggregate, Sposato-trained teachers significantly outperform their non-Sposato rookie peers, as measured by outside experts and principal evaluations.  

But it’s hard to know why these teachers are getting these results. Is it just that the program is recruiting and selecting the right people (Sposato’s admissions rate is historically <10%, with most participants coming from top-tier colleges and universities)? Or can these teachers and the methods they learn through Sposato only be successful in charter schools (which educate only 4.2% of the total student population in the U.S.)? 

To affect the broader world of teacher preparation, we need to understand the degree to which the Sposato methodology, separate from the program’s selection and placement practices, is generating unusually effective rookie teachers. Methods can scale a lot faster and wider than teacher prep programs with 10% admissions rates. 

At the heart of this methodology are two big ideas about how novices learn to teach: (1) Rookie teachers need detailed, nuanced, and prescriptive instruction on highly specific teacher skills and moves to guide their decision making with planning, classroom management, and instructional execution. (2) These skills can only be learned through intensive, deliberate practice and immediate feedback from expert coaches—feedback that sounds a lot more like what your basketball coach said when your shooting elbow was in the wrong place than the suggestive, “Hmmm, you might try doing X or Y or Z” coaching that happens in a lot of schools. 

So beginning this summer, the Teacher Launch Project will embark on a three-year randomized controlled trial of the Sposato methodology in partnership with Dr. Tom Kane and Harvard’s Center for Education Policy Research. Our pilot cohort of 30 teachers—folks who are recent graduates of mainstream education school programs and who are on track to teach in traditional public school districts—will attend an intensive four-week summer institute and then will receive 20 weeks of coaching during their rookie year.  The summer institute will carefully replicate the types of teaching simulations and real-time feedback used in Sposato. And the weekly coaching in the fall and winter of their rookie year will look a lot like this session with UP’s very own Kelsey LeBuffe and Jesus Moore at UP Oliver:

The results of the 30 teachers in this “treatment” group will be rigorously evaluated by Dr. Kane and compared to a control group of teachers, who volunteered for the training and coaching but were not selected through the randomized lottery. In the two years that follow, another 200 teachers will be recruited for the Teacher Launch Project—100 to receive the training and coaching, and 100 for the control group. The three years of data will help us learn if the Sposato approach does indeed change the trajectory of rookie teachers who are more representative of the broader workforce in Massachusetts’ public schools. 

We are confident that our model will demonstrate a clearer pathway to ensuring every teacher is effective in their rookie year.  This evidence can then inform decisions that school districts and schools of education make about how they prepare new teachers.  We are pushing for a future where new teachers across the country answer the question about how to get better with a simple two-word response: “practice and feedback”.  We like that a lot better than “trial and error.” 

Software Review: Kindle FreeTime

Guest Post by Andrew from Match Next
 

Our nightly homework includes 45 minutes of reading a book-of-choice. Checking whether this got done, though, is tricky. We end up relying on kids self-reporting how much they read and on looking at their Kindles (or paper books) to see if they reached their nightly progress goal. A kid who hasn’t done the reading can pretty easily game the system by flipping ahead in their book and lying.

We may have found a tech solution for this. It’s a feature on the new Kindles called FreeTime that lets us monitor their reading activity by recording when they turn the pages and how long they spend on the device. It completely eliminates our reliance on kid self-reporting.


So what is FreeTime and how do we rate it?

It’s a Kindle feature that lets users track the amount of reading a kid does each day. You can see information like how many minutes a student read, how many pages they got through, or how many words they looked up. 

Overall, I’d give it a 7 out of 10. It seriously eases the problem of relying on a kid having to self-report how much they read the previous night, and it helps our tutors spend less time having to figure out how much a kid actually read. 

It’s not perfect, though. It makes it much harder to load books onto a student’s Kindle by adding an extra 3 steps to the book loading process (which means it takes more time). Also, a kid can still flip ahead in their books on the Kindle. If they flip ahead slowly enough, while playing a video game for instance, a kid could theoretically trick an adult into thinking she really did her reading (we actually think this is a pretty unrealistic scenario, but it’s still possible for a kid to do this to get out of doing their homework). 

I tested FreeTime on 4 of our consistent reading non-completers, and showed 2 of our tutors how to use it and assign goals/books to kids. We had to iron out a few kinks early on (e.g. kids couldn’t access the books they had to read), but eventually we got it doing what we needed it to do. Now, one of those tutors who’s been using FreeTime with a chronic homework non-finisher says it’s “one of the most helpful tools she’s used all year - a total game changer,” because she never has to wonder whether her student actually did the work or not. It worked beautifully with these smaller groups, and we’d like to scale it up for our entire class to use, but can’t just yet - more on this later.


How we normally check homework

Right now, we check homework by looking at students’ Kindles to see how far they got, and by checking a parent signature sheet that says ‘my kid did their homework.’

The parent sheets are tricky. On one hand, we want to make a nightly check-in with parents and kids around homework a routine. On the other hand, parents are understandably unreliable reporters of whether their kid did the reading or not. They face the same challenge we do – relying on the kid to be honest about whether they did the homework. The only way to be completely sure is to sit there and watch a child read for the full 45 minutes, but few parents have the time for this, and even then, you can’t be completely sure the kid is actually reading. 

If a student brings a signed parent sheet and is at her assigned location, then we count the homework as complete. If the student forgets the parent sheet, the tutor does a little digging, like having the student summarize what happened in last night’s reading, predict what’ll happen next in the story, etc. Even then it’s still hard to tell if they really spent the full 45 minutes reading.


How we assign homework using FreeTime

Tutors assign a ‘time’ goal directly on a student’s FreeTime account. Then, FreeTime tracks the student’s page turns while they read. The next morning, the tutor can then go onto a student’s FreeTime dashboard to see how many minutes they read for, and how many pages they read. Here’s what that dashboard looks like: 


You can track other things too, like total time spent on a book or how much progress a kid has made. Here’s what that screen looks like: 


Problems with it

Two annoying things about FreeTime. 

1. Loading books onto the account is incredibly cumbersome. It takes an extra 3 steps to add a book so that the book can be tracked by FreeTime. This adds at least an extra 2-4 minutes to a process that normally takes no more than 1 minute. Multiply this by 50 kids and it turns into way too much wasted time: an extra 100-200 minutes of loading. 

2. You can only have 4 FreeTime accounts on a single Kindle account. Since we have one Kindle account that holds our entire library, we can’t currently use this with every kid’s device unless we create a bunch of individual accounts. Possible, but another time-suck. 

 

Overall, we’re very excited about this software. With a few tweaks it could literally solve the classic ‘did you really do your reading homework’ problem.

--
If you’d like to talk shop: andrew.jeong@matcheducation.org

Match Beyond Live in The Boston Globe

Guest post from Stig Leschly, CEO of Match Education

The Globe covered our very own Match Beyond this weekend.  It was a good story, and an unveiling of sorts.

My favorite quote:

Match Beyond could better serve students, move the needle on college completion rates, and give many more people a route to the middle class.

The Match Beyond crew--including Andrew Balson (who joined us in January to lead Match Beyond), Mike Larsson, and Bob Hill--are rightfully proud and get all the credit.

They just put up a slick, full website, and they're full speed ahead (hiring, adding students, etc.).

Recall the basics of Match Beyond:

  • Overview: Match Beyond is our hybrid college and jobs model.  It serves mainly graduates of Boston’s high schools who either never went to college or never finished college and who need better jobs.  It also serves some Match HS graduates who didn’t complete college.  The goals of Match Beyond are high degree completion rates (at low cost) and great jobs outcomes for students.
  • Part 1 of Match Beyond: The Degree. With our help, Match Beyond students enroll in Southern New Hampshire University’s College for America and, once enrolled there, work towards AA and BA degrees online.  These online degrees are rigorous, accredited, low-cost, eligible for financial aid, and competency-based.   Because the degrees are competency-based, students can work at their own pace.   Over the last two years, we have formed a close partnership with College for America.
  • Part 2 of Match Beyond:  The Coaching and Support.  From our Match Beyond staff, students get intensive personal coaching, academic support and jobs counseling as they work online and plan for better employment.  Our coaches form strong, authentic relationships with Match Beyond students.  Come summer, we’ll open a full “coaching” campus downtown.

Again, read more about all this in the Globe and on our new website.

And to close, a quote from one of our Match Beyond students:

I am a real person.  I have bills.  I have to take care of a child.  It’s not easy, but Match Beyond makes it possible.  You get an education, and they will help me with my career.  Nowadays, you’re not going to get any good job without an education.  I don’t see why anybody wouldn’t do Match Beyond. 

-- Sarema, Match Beyond student