MCAS Vs. PARCC: Now, Education Board Might Face A Third Option

In most grades and subjects, Massachusetts students who took PARCC this past spring were less likely to score as having “met” or “exceeded” expectations than MCAS students were to score “Proficient” or “Advanced.” Here, a sixth grader in Ohio reads through a PARCC practice test. (Ty Wright/AP/File)

At the Tuesday meeting of the state Board of Elementary and Secondary Education where the latest standardized test scores were released, those scores weren’t the main topic of the day. Instead, talk focused on a new twist in the ongoing discussion of whether to keep using the MCAS test or switch to PARCC: How about neither one?

Mitchell Chester, commissioner of elementary and secondary education, is due to make his recommendation on the tests to the board before its Nov. 17 vote. In a special meeting Monday, he told the board that he was now weighing a third possibility, or “Door No. 3,” as he put it: a so-called “MCAS 2.0,” which could use elements of the new PARCC tests to build a state-specific assessment.

“It’s not a binary decision,” Chester said Tuesday.

Public discussions of the decision, however, have largely focused on choosing between MCAS, the Massachusetts Comprehensive Assessment System test, and PARCC, the Partnership for Assessment of Readiness for College and Careers test. Massachusetts has been part of the consortium of states developing PARCC, and Chester chairs the PARCC governing board.

Now, state education secretary James Peyser said he’s willing to consider the new option.

“In thinking about MCAS 2.0, I think the question is, ‘What are the next steps in developing that assessment?'” Peyser said Tuesday.

Chester indicated that the work already done on PARCC could provide some of those steps.

“The question in my mind,” he said, “is to what extent can we take advantage of the development that’s been done in PARCC to take us down that road of a next-generation assessment?”

Laura Slover, CEO of PARCC Inc., the nonprofit organization that’s developing the new test, told the board Monday that it would be possible to use items from PARCC in a new, Massachusetts-designed test. But she noted that such a switch would require issuing a new request for proposals, which could delay any change, and that the costs of developing a new test are unknown.

The state has estimated that simply equipping all Massachusetts schools with the technology needed to administer PARCC (or any other test) on computers could cost as much as $12.3 million. Infrastructure upgrades could cost the state about $2.4 million.

Meanwhile, the statewide test results released Tuesday indicate that fewer students overall met state standards on the PARCC assessment than on MCAS.

These results are similar to the preliminary ones released last month, but Tuesday’s release includes the results for students who took the PARCC test this past spring on paper (about 41 percent), as well as those who used a computer (about 59 percent).

In most grades and subjects, the results show, students who took PARCC were less likely to score as having “met” or “exceeded” expectations than MCAS students were to score “Proficient” or “Advanced.” In fourth grade, however, the percentages were the same for math on both tests; in English language arts, a higher percentage of fourth graders passed this bar on MCAS than on PARCC.

“There’s no question that PARCC has set a higher standard for student performance,” commissioner Chester said at the Tuesday meeting.

Overall in grades 3-8, 60 percent of students met or exceeded expectations in English language arts on PARCC, vs. 68 percent scoring proficient or advanced on MCAS; in math, it was 52 percent for PARCC, 60 percent for MCAS.

“These statewide PARCC scores will help establish a baseline for comparison with other PARCC states and with our own progress over time,” Chester said in a statement, “should the board choose to adopt PARCC within our statewide assessment.”

As part of a two-year “test drive” of PARCC, districts could choose this year whether to use PARCC or MCAS. And because districts that chose PARCC were different from MCAS districts, the department said, it did not use all students’ scores in making its comparisons.

Instead, it used “representative samples” (about 75 percent of both groups) that were matched to statewide statistics on achievement variables and demographic measures. In particular, according to a presentation to the state board, “MCAS districts had 10% fewer low income students than PARCC students.”

PARCC results for individual schools are expected to be released in November, possibly after the board makes its decision on what test to use. In the special meeting Monday, the board heard presentations on two recent studies comparing PARCC and MCAS.

One, from Mathematica Policy Research, showed that the two tests were about equal in their ability to predict college performance — and, like the SAT, were only moderately successful in doing so.

The other, commissioned by the state’s Executive Office of Education, concluded that both tests are “high-quality assessments with respect to issues of validity and reliability” that could be aligned with Common Core standards, but it also found complex benefits and risks for both. As a result, it said, “There is not a simple answer to the question of ‘MCAS or PARCC?’ ”

In response to a question at Tuesday’s board meeting, Chester and Peyser agreed that MCAS and PARCC are about equal in predicting college performance.

“But that’s one piece of the puzzle we’re trying to solve here,” Peyser said. “The other is to try to make sure that we’re sending the right signals to teachers and students while they’re in high school about what the expectations are,” both for what students should have learned at each level and for what they need to do in order to prepare for college and careers.

With the possibility of an MCAS 2.0, that puzzle just acquired a few new pieces.

WBUR’s Peter Balonon-Rosen contributed to this report.

  • Monster

    It’s hilarious that DESE uses the terms “met/did not meet/exceeded expectations” on the two tests without defining those terms. What percentage correct constitutes “expectation” on MCAS vs. PARCC? It’s a totally arbitrary measure that means nothing and is easily manipulated.

    • Louise Kennedy

      They did discuss the so-called cut scores at Tuesday’s board meeting. I didn’t go into that level of detail in this post, in part because just having the numbers for the scores at each level still doesn’t provide much useful information without more context. I can tell you, though, that PARCC looks to be on an 850-point scale; the “met/exceeded expectations” (i.e. a “passing”) grade starts at 750. Anything below 699 is “did not yet meet expectations,” which in the old days I think we would have called “fail.”

      • Monster

        And what percentiles of performance do those cut points correspond to? That’s always the most interesting question. What percentile constitutes a reason for concern? 60th? 49th? 24th? Anything below the mean? One/two/three standard deviations below the mean?

        Take a kid who scores in the 30th percentile. Is this concerning? Or is he just very average, considering that his score falls within the middle 50%? Will his parents hear that he is failing, needing improvement, or doing okay? What about a test wherein the average overall score is, say, 50%? Does that mean the test is inappropriately difficult or inscrutable, or does it mean that kids persevered through a challenging exam, or does it mean that Massachusetts as a whole did very poorly?

        That’s when this data slicing and dicing becomes interesting. We take kids and describe some percentage of them as “underperforming,” which to most would indicate an abnormally low level of performance. How the state’s statistical spin room defines normal and abnormal is the critical component.

        • Louise Kennedy

          Exactly, which is why I said these numbers don’t make much sense out of context. We will be digging into the numbers more as we get more information.

  • Christine Langhoff

    “One, from Mathematica Policy Research, showed that the two tests were about equal in their ability to predict college performance — and, like the SAT, were only moderately successful in doing so.”

    Moderately successful? As in 5%-18% correlation? GPAs, assembled over 4 years of a student’s education, do a far better job. Ask the colleges and they will tell you this is so.

    These tests do not have some magical quality which allows them to see the future. They are products, and it is the marketing of those products that has made us true believers in what they purport to tell us.

  • Monty Neill

    PARCC and MCAS are both narrow, very limited indicators of student learning or of teacher or school quality, and graduation exams have been found by the National Academy of Sciences to drive up the dropout rate without improving student preparedness for college or employment. California and 3 other states have dropped their exit exams and awarded diplomas retroactively to those who completed high school except for the test; several other states ended or cut back their grad tests in recent years. Neither test is worth teaching to.

  • Nancy Novo O’Connor

    How on earth can they determine that PARCC is an equal predictor of college success when nobody who has taken it has gone to college? Is that just pure number fudging based on its similarity to MCAS questions and extrapolating? That one baffles me.