Most education reporters venture into the world of education
research from time to time, whether to gauge charter school
achievement, the impact of teacher quality, or the effects of a
reading program, among myriad possibilities. But making sense of
the research, with its often-impenetrable prose, methodological
jargon and mathematical formulas, can be daunting. Despite the
challenge, gaining some basic skills and knowledge in navigating
research makes for stronger journalism.
Research in education serves a wide variety of functions. It
can help explain the link between poverty and student
achievement, assess the likelihood that a new policy or practice
will be effective, raise questions about possible unintended side
effects of such changes, or shed light on students’ and teachers’
behaviors. Research can also muddle debates over well-intended
strategies for improvement, and sometimes unfairly cast aspersions
on them. And in this era of “fake news” and fast-moving
disinformation campaigns, research has become an important tool
for fact-checking statements made by politicians, education
officials and others. The trick for education reporters is to
understand when research findings are trustworthy and when they
are not. Reporters need to know how to assess research findings,
where to find credible research, and what the limitations and red
flags are in the studies they encounter.
Research crosses education reporters’ radar screens in at
least two ways. The most obvious is when a major study is issued
that speaks to an important question of education policy or
practice. The second is when, in the course of covering news on
the beat, the reporter wonders: “What does the research say on
this topic?” Either way, the reporter’s job is to know where to find the
relevant studies, how to understand them, how to assess their
significance and credibility, and how to clearly and accurately
communicate that information to the public.
For local reporters, a good place to get help is the education
school at a nearby college or university. Scholars there can help
translate findings and identify colleagues across the country who
are doing similar work.
At the national level, organizations such as the American
Educational Research Association, the American Sociological
Association, and the American Psychological Association can
direct reporters to academics who are studying the same topic and
may be willing to comment on the research. Indeed, a large and
growing number of education researchers have come to understand
the value of speaking with reporters, and the communications
teams at many major universities have become sophisticated in
their outreach when timely studies are issued.
Within a particular study, the footnotes and appendix provide a
useful road map to the names of other scholars who can lend a
hand in vetting the findings. They will be listed as primary
authors on the studies cited in the research article.
With help from a good librarian or free search engine such as
Google Scholar, reporters may also be able to find useful,
relatively recent meta-analyses to provide background on their
research questions. For a meta-analysis – essentially, a study of
studies – researchers may look at dozens and sometimes hundreds
of academic reports on the same question in order to calculate
the overall effect of a particular intervention or program.
Meta-analyses in education, for example, have addressed the
academic impacts on students of homework, school uniform
policies, and peer tutoring, among other topics. Some have become
touchstones for anyone looking to get the lay of the land for
those issues. Sometimes meta-analyses, which focus on numerical
evidence, also provide new estimates for such things as the
prevalence of cyberbullying and the achievement gaps between
students with and without disabilities. The authors of those
meta-analyses are arguably among the most authoritative sources
to speak on “what the research says” in their study area.
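For reporters who want to see the arithmetic behind a meta-analysis, here is a minimal sketch of the standard inverse-variance pooling step, written in Python. Every number in it is hypothetical, invented purely for illustration:

```python
# Minimal sketch of the core meta-analysis calculation: pooling
# effect sizes from several studies, weighting precise studies
# (those with small standard errors) more heavily. All values
# below are hypothetical.

studies = [
    # (effect size, standard error) -- invented for illustration
    (0.25, 0.10),
    (0.40, 0.15),
    (0.10, 0.08),
]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect size: {pooled:.2f} (standard error {pooled_se:.2f})")
```

Real meta-analyses layer on refinements such as random-effects models and publication-bias checks, but a precision-weighted average is the core idea.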
It can be comforting to think of research as the ultimate
authority on a question of educational policy or practice, but
the truth is that usually it is not. The best that research can
do is to provide clues to what works, when, and for whom,
because classrooms, schools, and communities inevitably vary. A
reading program that achieved good results in an affluent Fairfax
County, Virginia, community might not work as well in a rural
Mississippi Delta school, for instance.
In some cases, research can provide a skewed or even misleading
take on what it is examining, as sometimes happens when advocacy
groups attempt to spin a public debate by producing their own studies.
Reporters should ask some key questions to help understand the
limitations of studies and sift out credible nuggets of research
information from findings that are suspect. Also, keep in mind
that it is always best to go straight to the source and read the
full study. If there is any uncertainty, rely on reputable
scholars for guidance.
Who paid for the study? Journalism 101 teaches reporters to be
suspicious of information generated by anyone with a stake in the
results. That principle generally holds true for education research,
too. But it is also important to be aware that it
can be difficult for programs to get outside funding to evaluate
their initiatives. And not every study from an advocacy group is
necessarily biased or of poor quality. Some advocacy-oriented
think tanks produce very credible research; some do not. Others
produce a mixture of bad and good. A closer look at the study
itself (and perhaps finding another expert to take a look) should
be the deciding factor on its merit.
Where was the study published? In terms of trustworthiness,
research published in a peer-reviewed journal almost always
trumps research that is published without extensive review from
other scholars in the same field.
How were participants selected for the study? Reporters should
always be on the lookout for evidence of “creaming” – in other
words, choosing the best and brightest students for the
intervention group. But there are many other ways the
selection process can introduce biases. For example, researchers
may get a very different result if they examine the behaviors of
paid volunteers recruited from targeted neighborhoods than if
they study individuals who were randomly chosen from a larger population.
How were the results measured? It is not enough to state that
students did better in reading, math, or another subject.
Reporters need to know exactly what measuring stick was used. Was
it a standardized test or one that was developed by the
researchers? Did the test measure what researchers were actually
studying? Did students take the same test before and after the
study? (In other words, could students have done better the
second time around just because they were older, or more familiar
with the test?)
Was there a comparison group? Reporters should be wary of
conclusions based on a simple pre- and post-test conducted with a
single group of students. Whether the study uses random
assignment, a regression-discontinuity analysis, or a basic
comparison, there should always be a comparison group whose
demographic and other characteristics are similar to those of the
experimental group.
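To see why a comparison group matters, consider this small sketch, with made-up average scores, contrasting a naive pre/post gain with an estimate that nets out the comparison group’s normal growth (a calculation researchers call difference-in-differences):

```python
# Why a comparison group matters: a naive pre/post gain can mistake
# ordinary year-over-year growth for a program effect. All scores
# here are hypothetical.

program_pre, program_post = 480, 510        # group receiving the intervention
comparison_pre, comparison_post = 485, 505  # similar group without it

naive_gain = program_post - program_pre               # 30 points
background_growth = comparison_post - comparison_pre  # 20 points
adjusted_effect = naive_gain - background_growth      # 10 points

print(f"Naive pre/post gain:       {naive_gain} points")
print(f"Comparison-group growth:   {background_growth} points")
print(f"Difference-in-differences: {adjusted_effect} points")
```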
How many participants were involved in the study? Remember that
larger sample sizes generally offer more accurate results than smaller ones.
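A quick simulation shows the statistical reason: the standard error of an average shrinks with the square root of the sample size, so quadrupling a sample roughly halves the margin of error. The score distribution below is invented for illustration:

```python
import random
import statistics

# Simulated test scores (mean 500, standard deviation 100), invented
# purely to illustrate how precision improves with sample size.
random.seed(1)
population = [random.gauss(500, 100) for _ in range(100_000)]

for n in (25, 100, 400, 1600):
    sample = random.sample(population, n)
    se = statistics.stdev(sample) / n ** 0.5  # standard error of the mean
    print(f"n={n:5d}  mean={statistics.mean(sample):6.1f}  standard error={se:4.1f}")
```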
What else happened during the study period that might explain the
results? For example, were there any changes in the school’s
enrollment, teaching staff or leadership?
What are the limitations of the study? Most good researchers will
go into detail about the limitations of their study and its
findings. This information often can be found in the “data,”
“results,” “discussion” and “conclusions” sections of an academic
paper. Some papers have a “limitations” section. Reporters should
read these sections carefully.
Are the findings in sync with other research in the same area? If
conclusions are drastically different, it’s not necessarily a red
flag, but it does signal the need to dig deeper. What was
different about this study compared with previous ones that
looked at the same research question?
Other factors also may be important to keep in mind when
considering the value of research. For example, if there was an
experimental group, did it really get a different treatment? In a
school-based study, teachers may not have implemented an
intervention faithfully. What do the researchers know about
“implementation fidelity” in their study?
Likewise, reporters should learn what sorts of instructional
treatments students in the control group were getting. A federal
evaluation of the short-lived Reading First program found that it
led to no sizable reading gains, but it also determined that
students in the comparison group were not being taught in a
vacuum. The instructional approaches used in the experimental
group were widespread, and, in many cases, the students in the
control group were being taught with some of the same methods even
though they were not participating in a Reading First program.
For a time, the conventional wisdom around education research was
that only true experiments – studies in which participants were
randomly assigned to either a control or a treatment group –
could provide reliable, cause-and-effect conclusions. While they
are still considered the “gold standard” for establishing
causation, such randomized, controlled studies are not
infallible. Those experiments can be very expensive to carry out
– and even unethical in some cases. If researchers really believe
an intervention will improve learning, for example, withholding
it from a control group of students could be seen as unjustified.
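As a rough illustration of the logic behind such experiments, here is a simulated randomized trial in which the “true” effect is known in advance; the 5-point effect and the noise level are invented:

```python
import random

random.seed(3)

def test_score(treated):
    # Hypothetical score: a common baseline, a 5-point "true"
    # treatment effect, and noise unrelated to group assignment.
    return 500 + (5 if treated else 0) + random.gauss(0, 20)

# Random assignment makes the two groups comparable, so the gap in
# average scores estimates the causal effect of the treatment.
treatment = [test_score(True) for _ in range(1_000)]
control = [test_score(False) for _ in range(1_000)]

effect = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Estimated treatment effect: {effect:.1f} points")  # close to 5
```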
Regression-discontinuity designs are a type of analysis sometimes
used to evaluate interventions that involve some sort of a
test-score cutoff – for example, a grade-retention policy that
requires students to repeat a grade if their reading scores on a
standardized test fall below a specified point. The assumption is
that students who fall just above or below the cutoff are not all
that different from one another, academically speaking.
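Here is a rough sketch of that logic, comparing simulated students in narrow score bands on either side of a hypothetical cutoff (real analyses also model the score trend on each side rather than simply averaging):

```python
import random

random.seed(2)
CUTOFF, BAND = 400, 10  # hypothetical retention cutoff and comparison window

below, above = [], []   # later outcomes just below / just above the cutoff
for _ in range(20_000):
    score = random.gauss(420, 60)  # reading score that assigns treatment
    treated = score < CUTOFF       # e.g., the student repeats the grade
    # Later outcome: a mild score trend, a 0.30 "true" treatment
    # effect, and noise. All values are invented for illustration.
    outcome = 0.001 * score + (0.30 if treated else 0.0) + random.gauss(0, 0.2)
    if CUTOFF - BAND <= score < CUTOFF:
        below.append(outcome)
    elif CUTOFF <= score <= CUTOFF + BAND:
        above.append(outcome)

effect = sum(below) / len(below) - sum(above) / len(above)
print(f"Estimated effect near the cutoff: {effect:.2f}")  # close to 0.30
```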
But more typical quasi-experimental studies, in which
researchers compare a group of students who are getting an
experimental intervention with those who are not, can provide
good information as well if they are carefully done. The key
ingredient is having a comparison group that looks a lot like the
experimental group in terms of students’ academic backgrounds,
ages, and demographics.
Unlike in the medical field, which has a handful of must-read
academic journals such as the Journal of the American Medical
Association or The Lancet, education research is disseminated through
dozens of journals. Many published studies also tend to be fairly
old, due to the long lag between the time a study is performed
and actual publication of the resulting research article.
While a number of academic journals post forthcoming papers to
their websites, the underlying data often are dated because a paper
can take months to a year or more to get through the peer-review process.
Sometimes an easier way to stay on top of new and newly
published studies is to follow academic conferences and news
aggregators like Newswise and Futurity, which disseminate information about
research provided by colleges and universities. Reporters can
also get on the email lists for private research organizations,
such as Mathematica or the American Institutes for Research, that
do extensive work in education. The National Bureau of Economic
Research distributes high-quality working papers on education and
other public policy topics.
If the reporter’s aim is to find other studies on the same topic,
the What Works Clearinghouse operated by the U.S. Department of
Education is one place to look. Researchers can submit studies to
the clearinghouse, where they are vetted by department reviewers
at the Institute of Education Sciences, the department’s main
research arm. The resulting reviews are posted on the What Works
website. Be warned: Due to the rigorous methodology used by the
department, few studies get an unqualified thumbs-up.
The research agency also periodically publishes intervention
reports, which use the same tough What Works standards to assess
the body of research evidence on the effectiveness of particular
instructional approaches or commercial instructional programs (as
opposed to the quality of a specific study on that topic).
The institute also publishes best-practice reports, which take a
broader approach to assessing empirical evidence. Those reports
tend to be most useful for practitioners looking for best bets on
what might work in their own classrooms, schools, or districts.
Denise-Marie Ordway contributed to this guide. It was made
possible by grants from the W.T. Grant Foundation and the