Which Education Research Is Worth the Hype?
Education reporters may have the power of the pen, but when it comes to navigating the complex methods of research studies, we may feel powerless. As researchers churn out report after report, how can journalists on deadline figure out which studies are worth covering?
A post in The Atlantic shines a spotlight on education studies that generated a lot of media attention but weren’t as conclusive as initially reported.
As Ashley Merryman, the co-author of NurtureShock: New Thinking About Children and Top Dog: The Science of Winning and Losing, told the Atlantic writers, “just because something is statistically significant does not mean it is meaningfully significant.”
Influential studies like the Tennessee STAR report on class size may show a policy works in one context but has a weaker effect in another. Small, idiosyncratic studies, such as one analyzing how a barren room affects young learners taking tests, may be too experimental to have any real-world impact. The writers of the article even took to task a study featured in The Atlantic’s own print issue, which found that parents helping students with homework may slow the kids down academically.
Holly Yettick, a former education reporter and current education scholar, spoke at the Education Writers Association’s National Seminar in May in Nashville on how to distinguish solid education research from bad. An EWA blog post summarizing her presentation warns that “press releases announcing new findings often come from P.R. folks with an interest in making a study sound extra-juicy.” Yettick said it’s important to look critically at what the researchers had to say and never to rely on a summary in a press release.
“If you get a press release about a study, some advocacy-oriented group has probably decided the results are worth spending money to promote. She suggested a few more common warning signs: promising a ‘silver bullet’ to a big problem, using letter grades to simplify complex findings, and suggesting a connection between two very different things. Sometimes, Yettick said, statistics-oriented researchers draw conclusions from data that they wouldn’t make if they better understood how schools work.”
Reporters can also bone up on their data literacy to avoid instances of scholars overselling an observation. International comparison assessments, mainly PISA and TIMSS, give ample ammunition to advocates from all sides to cast the test results in ways that reaffirm their pet concerns.
Betsy Hammond of The Oregonian attended an EWA 2014 National Seminar panel on misuses of international assessment findings. Her whole post is worth a read, but some examples of common data missteps include:
- Not realizing that statistical noise may be all that separates a country ranked first from one ranked tenth.
- Contrary to popular opinion, U.S. students have never performed better than mediocre on international assessments. In the 1960s, U.S. students were second from last on the first international assessment.
- Yes, the U.S. has a higher percentage of poor students than other rich countries, but our richest and smartest are outgunned by their better-off peers in leading education countries.
- Even well-established institutions bite off more than they can chew, especially when drawing a causal claim from a correlation. OECD, the organization behind PISA, was guilty of this, said former National Center for Education Statistics stats guru Jack Buckley.
Another personal tip for weeding out low-grade research? The U.S. Department of Education runs an ongoing project called the What Works Clearinghouse, which evaluates published studies for their methodological rigor. If you have time to vet reports before using them in your reporting, check out the Clearinghouse’s nearly 11,000 research reviews.