Blog: The Educated Reporter

Common Core State Standards: UCLA Report Highlights Potential, Challenges of New Assessments

As a majority of the nation’s schools prepare to adopt the new Common Core State Standards, the tests that will assess how much students actually know are coming under increased scrutiny. A new study tracking the development of those tests suggests that acing the exams could get a lot harder.

The National Center for Research on Evaluation, Standards, and Student Testing (CRESST) at the University of California, Los Angeles released a new report today concluding that the groups picked to design the assessments tied to the Common Core have the potential to create tests that are more intellectually demanding than those states currently use to gauge student knowledge.

The increased rigor of the exams could have profound consequences in the classroom, as teachers tasked with preparing their students for the tests may have to lead tougher lessons on the subjects covered by the assessments.

The report focused on two factors that the researchers say make Smarter Balanced and the Partnership for Assessment of Readiness for College and Careers (PARCC), the two consortia developing the assessments, a cut above other state tests: the level of transparency the assessment makers are providing as they build the actual tests, and the higher-order thinking that will be asked of students.

On the transparency front, report co-author Professor Joan Herman credits Smarter Balanced’s and PARCC’s use of an assessment development model called Evidence-Centered Design (ECD). In an interview, she explains ECD’s significance: “It’s a process that makes visible what’s going to be tested and gives an opportunity for feedback during the test development,” Herman says.

Not all test development strategies work this way, she says, but PARCC and Smarter Balanced are making all stages of the test development process public, including the types of test questions educators can expect and an explanation of the rationale behind the tests’ design. The process allows watchdogs and testing specialists to see how the assessments will be aligned with the Common Core State Standards. For the state and district administrators charged with rolling out the assessments in 2014-15, the ECD approach, Herman says, could help teachers tailor their instruction to reflect the more challenging tests.

“The reason the federal government invested in these consortia in the first place is that teachers and schools teach what’s tested,” Herman says. “So if we wanted rigorous teaching… the tests have to address those higher levels of rigor as well.”

In judging the extent to which the assessments will gauge higher-order thinking in math and English Language Arts, the CRESST researchers followed an evaluation model called Depth of Knowledge (DOK), which sorts test questions into four levels according to their rigor. The report finds that the assessment blueprints made public so far pave the way for many questions at DOK 3 and DOK 4, the two highest levels, meaning the testing items require “abstract thinking, reasoning, and/or more complex inferences” and prompt “extended analysis or investigation that requires synthesis and analysis across multiple contexts and non-routine applications.”

A recent RAND study analyzed the DOK levels of released items from 17 leading state tests and found that in 2009-10, just 3 to 10 percent of U.S. students were assessed on deeper learning on at least one state standardized test. By the same metric, 100 percent of students taking the consortia’s tests will be assessed on deeper learning.

The report includes a chart illustrating the distribution of DOK levels across the Smarter Balanced assessment content so far. Herman cautions that while the proportion of items at DOK 3 and 4 is much higher than what one would find on most current state tests, there is no guarantee the consortia’s assessments will end up that demanding. The projected DOK levels, she says, should be viewed as plans that have yet to come to fruition.

The consortia have a long road ahead, she says. Looping back to the transparency of the test-making process, Herman notes that those who follow the development of the assessments will be able to pinpoint when and where the test questions’ difficulty was diluted, if that were to happen.

In terms of the ready availability of supporting materials, there are differences in what the two consortia display online. Smarter Balanced has posted its test blueprints, content specifications, and item and task specifications in full. PARCC is still preparing those items for public view, but it has posted a collection of other material, such as its vendor requests and Content Frameworks, that allowed the CRESST researchers to make their evaluations.



Have a question, comment or concern for the Educated Reporter? Contact Emily Richmond. Follow her on Twitter @EWAEmily.