Education Research

Overview

Making Sense of Education Research

Most education reporters tread into the world of education research from time to time, whether to gauge charter school achievement, the impact of teacher quality, or the effects of a reading program, among myriad possibilities. But making sense of the research, with its often-impenetrable prose, methodological jargon and mathematical formulas, can be daunting. Despite the challenge, gaining some basic skills and knowledge in navigating research makes for stronger journalism. 

Research in education serves a wide variety of functions. It can help explain the link between poverty and student achievement, assess the likelihood that a new policy or practice will be effective, raise questions about possible unintended side effects of such changes, or shed light on students’ and teachers’ behaviors. Research can also muddle debates – and sometimes unfairly cast aspersions – on well-intended strategies for improvement. And in this era of “fake news” and fast-moving disinformation campaigns, research has become an important tool for fact-checking statements made by politicians, education officials and others. The trick for education reporters is to understand when research findings are trustworthy and when they are not. Reporters need to know how to assess research findings, where to find credible research, and what the limitations and red flags are in the studies they encounter.

Getting Help

Research comes across education reporters’ radar screens in at least two ways. The most obvious is when a major study is issued that speaks to an important question of education policy or practice. The second is when, in the course of covering news on the beat, the reporter wonders: “What does the research say on that?” 

Either way, the reporter’s job is to know where to find the relevant studies, how to understand them, how to assess their significance and credibility, and how to clearly and accurately communicate that information to the public.

For local reporters, a good place to get help is the education school at a nearby college or university. Scholars there can help translate findings and identify colleagues across the country who are doing similar work.

At the national level, organizations such as the American Educational Research Association, the American Sociological Association, and the American Psychological Association can direct reporters to academics who are studying the same topic and may be willing to comment on the research. Indeed, a large and growing number of education researchers have come to understand the value of speaking with reporters, and the communications teams at many major universities have become sophisticated in their outreach when timely studies are issued.

Within a particular study, the footnotes and appendix provide a useful road map to the names of other scholars who can lend a hand in vetting the findings. Those scholars appear as the primary authors of the studies cited in the research article.

With help from a good librarian or free search engine such as Google Scholar, reporters may also be able to find useful, relatively recent meta-analyses to provide background on their research questions. For a meta-analysis – essentially, a study of studies – researchers may look at dozens and sometimes hundreds of academic reports on the same question in order to calculate the overall effect of a particular intervention or program. Meta-analyses in education, for example, have addressed the academic impacts on students of homework, school uniform policies, and peer tutoring, among other topics. Some have become touchstones for anyone looking to get the lay of the land for those issues. Sometimes meta-analyses, which focus on numerical evidence, also provide new estimates for such things as the prevalence of cyberbullying and the achievement gaps between students with and without disabilities. The authors of those meta-analyses are arguably among the most authoritative sources to speak on “what the research says” in their study area.
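
To make the “study of studies” idea concrete, here is a minimal, hypothetical sketch of the simplest kind of meta-analytic calculation: each study’s effect estimate is weighted by its precision (the inverse of its variance) and then averaged, so larger and more precise studies count for more. The numbers and the function are invented for illustration; real meta-analyses also grapple with study quality, differences among studies, and publication bias.

    # A minimal, hypothetical sketch of a fixed-effect meta-analysis:
    # combine several studies' effect sizes into one overall estimate,
    # weighting each study by the inverse of its variance. All numbers
    # below are made up purely for illustration.

    def combined_effect(effects, variances):
        weights = [1.0 / v for v in variances]
        weighted_sum = sum(w * e for w, e in zip(weights, effects))
        return weighted_sum / sum(weights)

    # Three hypothetical studies of the same tutoring program:
    effects = [0.25, 0.10, 0.40]    # effect sizes, in standard-deviation units
    variances = [0.01, 0.04, 0.02]  # smaller variance = larger, more precise study

    print(round(combined_effect(effects, variances), 2))  # prints 0.27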

Key Questions

It can be comforting to think of research as the ultimate authority on a question of educational policy or practice, but the truth is that usually it is not. The best that research can do is to provide clues to what works, when, and for whom, because classrooms, schools, and communities inevitably vary. A reading program that achieved good results in an affluent Fairfax County, Virginia, community might not work as well in a rural Mississippi Delta school, for instance.

In some cases, research can provide a skewed or even misleading take on what it is examining, as sometimes happens when advocacy groups attempt to spin a public debate by producing their own headline-grabbing studies. 

Reporters should ask some key questions to help understand the limitations of studies and sift out credible nuggets of research information from findings that are suspect. Also, keep in mind that it is always best to go straight to the source and read the full study. If there is any uncertainty, rely on reputable scholars for guidance.

Who paid for the study? Journalism 101 teaches reporters to be suspicious of information generated by anyone with a stake in the results. Generally speaking, that rule holds true for education research, too. But it is also important to be aware that it can be difficult for programs to get outside funding to evaluate their initiatives. And not every study from an advocacy group is necessarily biased or of poor quality. Some advocacy-oriented think tanks produce very credible research; some do not. Others produce a mixture of good and bad. A closer look at the study itself (and perhaps a second opinion from an outside expert) should be the deciding factor in judging its merit.

Where was the study published? In terms of trustworthiness, research published in a peer-reviewed journal almost always trumps research that is published without extensive review from other scholars in the same field.

How were participants selected for the study? Reporters should always be on the lookout for evidence of “creaming” – in other words, choosing the best and brightest students for the intervention group. But there are many other ways the selection process can introduce biases. For example, researchers may get a very different result if they examine the behaviors of paid volunteers recruited from targeted neighborhoods than if they study individuals who were randomly chosen from a large geographical area.

How were the results measured? It is not enough to state that students did better in reading, math, or another subject. Reporters need to know exactly what measuring stick was used. Was it a standardized test or one that was developed by the researchers? Did the test measure what researchers were actually studying? Did students take the same test before and after the study? (In other words, could students have done better the second time around just because they were older, or more familiar with the test?) 

Was there a comparison group? Reporters should be wary of conclusions based on a simple pre- and post-test conducted with a single group of students. Whether the study relies on random assignment, a regression-discontinuity analysis, or a basic comparison, there should always be a control group with demographic and other characteristics similar to those of the experimental group.

How many participants were involved in the study? Remember that larger sample sizes generally offer more accurate results than smaller ones.

What else happened during the study period that might explain the results? For example, were there any changes in the school’s enrollment, teaching staff or leadership?

What are the limitations of the study? Most good researchers will go into detail about the limitations of their study and its findings. This information often can be found in the “data,” “results,” “discussion” and “conclusions” sections of an academic paper. Some papers have a “limitations” section. Reporters should read these sections carefully.

Are the findings in sync with other research in the same area? If conclusions are drastically different, it’s not necessarily a red flag, but it does signal the need to dig deeper. What was different about this study compared with previous ones that looked at the same research question?

Other factors also may be important to keep in mind when considering the value of research. For example, if there was an experimental group, did it really get a different treatment? In a school-based study, teachers may not have implemented an intervention faithfully. What do the researchers know about “implementation fidelity” in their study?

Likewise, reporters should learn what sorts of instructional treatments students in the control group were getting. A federal evaluation of the short-lived Reading First program found that it led to no sizable reading gains, but it also determined that students in the comparison group were not being taught in a vacuum. The instructional approaches used in the experimental group were widespread, and, in many cases, students in the control group were being taught with some of the same methods even though they were not participating in a Reading First program.

Study Design

For a time, the conventional wisdom around education research was that only true experiments – studies in which participants were randomly assigned to either a control or a treatment group – could provide reliable, cause-and-effect conclusions. While they are still considered the “gold standard” for establishing causation, such randomized, controlled studies are not infallible. Those experiments can be very expensive to carry out – and even unethical in some cases. If researchers really believe an intervention will improve learning, for example, withholding it from a control group of students could be seen as unjustified.

Regression-discontinuity designs are a type of analysis sometimes used to evaluate interventions that involve some sort of a test-score cutoff – for example, a grade-retention policy that requires students to repeat a grade if their reading scores on a standardized test fall below a specified point. The assumption is that students who fall just above or just below the cutoff are not all that different from one another, academically speaking, so comparing their later outcomes comes close to comparing randomly assigned groups.
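
For readers who want to see the logic in miniature, here is a rough, hypothetical sketch of a regression-discontinuity analysis in Python, using simulated data rather than any real study: fit a line to scores on each side of the cutoff and measure how much the later outcome “jumps” at the cutoff. Everything in it – the cutoff, the scores, the size of the true effect – is invented for illustration.

    import numpy as np

    # Hypothetical regression-discontinuity sketch. Students whose reading
    # scores fall below a cutoff are retained (the "treatment"); we estimate
    # the jump in a later outcome right at the cutoff. Simulated data only.
    rng = np.random.default_rng(0)
    cutoff = 50
    reading = rng.uniform(20, 80, 500)           # running variable (test score)
    retained = (reading < cutoff).astype(float)  # treatment assigned by the cutoff
    later_outcome = 40 + 0.5 * reading + 3.0 * retained + rng.normal(0, 2, 500)

    # Local linear fit: intercept, slope of the centered score, the treatment
    # "jump" at the cutoff, and a separate slope on the treated side.
    x = reading - cutoff
    X = np.column_stack([np.ones_like(x), x, retained, retained * x])
    coef, *_ = np.linalg.lstsq(X, later_outcome, rcond=None)
    print(f"Estimated jump at the cutoff: {coef[2]:.2f}")  # close to the true 3.0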

But more typical quasi-experimental studies, in which researchers compare a group of students who are getting an experimental intervention with those who are not, can provide good information as well if they are carefully done. The key ingredient is having a comparison group that looks a lot like the experimental group in terms of students’ academic backgrounds, ages, and demographics.

Finding Research

Unlike medical research, which is concentrated in a handful of must-read journals such as the Journal of the American Medical Association and The Lancet, education research is disseminated through dozens of journals. Many published studies are also fairly dated by the time they appear, because of the long lag between when a study is conducted and when the resulting article is published.

While a number of academic journals post forthcoming papers on their websites, the underlying data are often dated because a paper can take months to a year or more to get through the peer-review process.

Sometimes an easier way to stay on top of new and newly published studies is to follow academic conferences and news aggregators such as Newswise and Futurity, which disseminate information about research provided by colleges and universities. Reporters can also get on the email lists of private research organizations, such as Mathematica or the American Institutes for Research, that do extensive work in education. The National Bureau of Economic Research distributes high-quality working papers on education and other public policy topics.

If the reporter’s aim is to find other studies on the same topic, the What Works Clearinghouse operated by the U.S. Department of Education is one place to look. Researchers can submit studies to the clearinghouse, where they are vetted by reviewers at the Institute of Education Sciences, the department’s main research arm. The resulting reviews are posted on the What Works website. Be warned: Because of the clearinghouse’s rigorous review standards, few studies get an unqualified thumbs-up.

The research agency also periodically publishes intervention reports, which use the same tough What Works standards to assess the body of research evidence on the effectiveness of particular instructional approaches or commercial instructional programs (as opposed to the quality of a specific study on that intervention). 

The institute also publishes best-practice reports, which take a broader approach to assessing empirical evidence. Those reports tend to be most useful for practitioners looking for best bets on what might work in their own classrooms, schools, or districts.

Denise-Marie Ordway contributed to this guide. It was made possible by grants from the W.T. Grant Foundation and the Spencer Foundation.

Highlight

Statistical Significance
Are research findings the result of a particular intervention or due to chance?

An important concept in education research is statistical significance. The idea is to gauge how likely it is that a particular outcome in a study, such as higher student math achievement, could have occurred by chance rather than as a result of a specific intervention. That likelihood is summarized by what is known as a “p value.” A large p value means the result could easily be explained by chance; a small p value means chance alone is an unlikely explanation for the outcome.
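
As a purely illustrative sketch – the scores below are invented, the two-sample t-test is just one common way to produce a p value, and the 0.05 threshold mentioned in the comments is a convention rather than a rule – here is how such a calculation might look in Python:

    from scipy import stats

    # Hypothetical math scores for students who did and did not receive a new
    # program. These numbers are invented solely to show how a p value is computed.
    treatment = [78, 85, 82, 90, 74, 88, 81, 79, 86, 84]
    control   = [72, 80, 75, 83, 70, 78, 76, 74, 79, 77]

    # Two-sample t-test: how surprising would a gap this large be if the
    # program actually had no effect?
    t_stat, p_value = stats.ttest_ind(treatment, control)
    print(f"p value: {p_value:.3f}")

    # A small p value (often below 0.05, by convention) suggests the difference
    # is unlikely to be explained by chance alone.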

Highlight

Effect Size
Gauging the effect size helps audiences evaluate whether a program produced meaningful results that were worth the time and energy.

An effect size measures the magnitude of a research finding. At its most basic level, the effect size is the difference in outcomes between those who received a particular intervention (the “treatment group”) and those who did not (the “control group”), often expressed in standard-deviation units. For example, let’s say a researcher is studying the effects of a new online course that prepares students for the ACT college-admissions exam. Half of the 11th graders in a school district take the course. Their average score is 22. Half the 11th graders in the district do not take the course.
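
Because the example above breaks off before giving the comparison group’s average, here is a separate, fully hypothetical sketch of one common effect-size calculation, Cohen’s d: the gap between the two groups’ average scores divided by their pooled standard deviation. The scores are invented and are not meant to complete the ACT example.

    import statistics

    # Hypothetical test scores; invented numbers, not from any real study.
    treatment = [24, 22, 25, 21, 23, 26, 22, 24]  # received the intervention
    control   = [21, 20, 22, 19, 23, 20, 21, 22]  # did not

    gap = statistics.mean(treatment) - statistics.mean(control)

    # Pooled standard deviation of the two groups.
    n1, n2 = len(treatment), len(control)
    s1, s2 = statistics.stdev(treatment), statistics.stdev(control)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5

    cohens_d = gap / pooled_sd
    print(f"Effect size (Cohen's d): {cohens_d:.2f}")

    # As a rough rule of thumb, 0.2 is often described as small, 0.5 as moderate
    # and 0.8 as large, though what counts as meaningful depends on the context.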

Highlight

About the Authors

Denise-Marie Ordway is a veteran education reporter and the managing editor of Journalist’s Resource, a project of Harvard’s Shorenstein Center on Media, Politics and Public Policy aimed at bridging the gap between journalism and academia. Debra Viadero is an assistant managing editor at Education Week who oversees the newspaper’s coverage of education research and other issues. Holly Yettick is the director of the Education Week Research Center.

Blog: The Educated Reporter

Soft Skills Training Teaches Electricians to Fix Fuses, Not Blow Them
Community colleges award budding trades workers badges in empathy

Sure, a plumber should be able to stop a leak or fix a toilet. Those job skills are essential, and easily measured.

But what about the rest of the equation — the people skills customers also want? How does an employer really know if an applicant has what it takes? Can’t there be a test or something?

Blog: The Educated Reporter

Why Tapping Education Researchers Pays Off
Reporters See Value in Teaming Up With Experts to Examine Data

From test scores to graduation rates, the education system is a world of numbers that can show how well policies and practices are serving students – if you know how to analyze the data.

“When there’s a data session here and you have to pick which category you’re in, I would be in the beginner category,” said Adam Tamburin, a higher education reporter for The Tennessean, during a panel at the Education Writers Association’s 2019 National Seminar in Baltimore.

Enter the trained scientists.

Reporter Guide

Making Sense of Research Literature Reviews and Meta-Analyses

When journalists want to learn what’s known about a certain subject, they look for research. Scholars are continually conducting studies on education topics ranging from kindergarten readiness and teacher pay to public university funding and Ivy League admissions.

One of the best ways for a reporter to get up to date quickly, though, is to read a study of studies, which comes in two forms: a literature review and a meta-analysis.