Word on the Beat: NAEP
What reporters need to know about the 'nation's report card'
What it means: The National Assessment of Educational Progress, sometimes referred to as "the nation's report card," is given every two years to a representative sample of the nation's students in grades 4 and 8 to gauge achievement in reading and mathematics. Twelfth graders also are tested, though less frequently. Specialized subject tests, including writing, science, and civics, are administered on a rotating basis at all three grade levels. NAEP is a low-stakes test, meaning it doesn't appear on a student's transcript or affect their ability to advance to the next grade or graduate.
Why it matters: NAEP is the only assessment system designed specifically to track student achievement nationally over the long term. The test questions are designed to measure what students know and are able to do, and they stay relatively consistent over time to allow for reliable comparisons, according to the National Assessment Governing Board (NAGB), which oversees NAEP. The data can serve as an important backdrop for educators and researchers in evaluating public schools and in examining trends by student demographics, including geographic location, socioeconomic status, race, special education status, and gender.
Who’s talking about it: This week, the results of the most recent NAEP exams in reading and math will be made public, offering a snapshot of U.S. student achievement, plus state-by-state data and outcomes for 27 large urban districts. A few years ago, I posited that the 2018 results could carry an unusual amount of weight. For supporters of reform efforts like the Common Core, an improvement in reading and math scores would likely be seen as validation; another decline, by contrast, might bolster arguments that the initiative had failed to live up to expectations. At the federal level, NAEP scores are often leveraged as either an indictment or a validation of a presidential administration’s educational priorities, even when the scores reflect time periods under different leadership.
What to remember: Correlation is not causation. As Morgan Polikoff, an education professor at the University of Southern California, put it, “friends don’t let friends misuse NAEP data.” The National Assessment Governing Board will be the first to say the test results cannot be used to gauge the impact or effectiveness of a particular educational intervention. (That, of course, won’t stop critics of specific programs and advocates of others from doing precisely that.)
That being said, because the assessments aren’t linked to any local or state learning standards, NAEP can serve as a barometer against which to compare results from states’ own tests, especially in reading and math. Several studies have compared levels of student achievement on state tests and on NAEP, often finding large gaps. Such gaps are red flags that a state’s expectations for student performance may be lower than NAEP’s.
Want to know more? Cara Jackson, a longtime assessment researcher now with Montgomery County (Maryland) Public Schools, put together a helpful Twitter thread of NAEP-related analysis and resources. EWA recently held a webinar to help reporters get ready for the new test results, and you can catch the replay. Michael Petrilli, the president of the Thomas B. Fordham Institute, a conservative-leaning think tank, offers seven stories to watch for in coverage of the NAEP scores. Sarah Sparks, writing for Education Week, breaks down the assessments and what can be gleaned from the results. The federal NAEP overseers recommend focusing on long-term trends rather than on small changes from one test year to the next. EWA’s Topics Page on Standards & Testing is another useful resource. And Chalkbeat’s Matt Barnum looked at the impact of NAEP moving from pencil-and-paper to a digital-only platform.