Proceed With Caution: Using Polling Data in Education Reporting
On Wednesday, Aug. 21 at 9 a.m. EDT, I’ll be chatting live with Bill Bushaw, executive director of Phi Delta Kappa, about the new Gallup poll findings. You can use this link to ask questions in advance of the conversation.
The annual PDK/Gallup education poll comes out Wednesday, and policymakers, analysts and pundits will be busy parsing the findings on perceptions of the nation’s public schools – from campus safety to high-stakes testing to the new Common Core State Standards.
We can also expect comparisons to be made to the recent Associated Press-NORC Center for Public Affairs Research poll, sponsored by the Joyce Foundation, which covered some similar territory. You can read the AP’s story on the findings, including the conclusion that the surveyed parents were in favor of high-stakes testing, here.
However, it’s important to remember that polls are snapshots, not definitive statements, and the findings are always open to interpretation. Here’s another reason to limit comparisons of the findings of these two particular polls: AP-NORC surveyed parents of school-aged children, while the PDK/Gallup poll sampled adults 18 and older with a telephone at home, about 30 percent of whom turned out to have kids in school. (Both polls had similar sample sizes of about 1,000 people.)
For advice on using polling data, take a look at the 20 Questions a Journalist Should Ask, compiled by the National Council on Public Polls. No. 14 is particularly relevant: “You must find out the exact wording of the poll questions. Why? Because the very wording of questions can make major differences in the results.”
Take one example. Robert Schaeffer, public education director for the advocacy group FairTest, has taken issue with the AP-NORC poll’s question on public attitudes toward testing. Schaeffer, whose organization is highly critical of perceived misuses of testing, contends that the question doesn’t distinguish between “teacher-and-school designed exams” and other “standardized tests,” making the assessments the poll participants were asked about “seem innocuous and potentially low-stakes.” He points out that while the words “high-stakes” don’t appear in the poll questions, respondents’ support for standardized tests seemed to decrease steadily as the potential consequences rose for schools and districts. For example, 93 percent of the parents polled supported using standardized tests to identify students who need extra help, while 60 percent were in favor of using standardized test results to evaluate teachers. By the time the poll reached what’s arguably the highest stake on the list – using standardized test results “to determine the level of funding each local school should receive” – only 40 percent of parents were supportive.
As for the PDK/Gallup poll, no one recognizes the importance of a question’s wording better than Bill Bushaw, executive director of PDK. He provided me with an interesting example from the September 2009 issue of Phi Delta Kappan magazine, explaining how the organization tested a question about teacher tenure:
“Americans’ opinions about teacher tenure have much to do with how the question is asked. In the 2009 poll, we asked half of respondents if they approved or disapproved of teacher tenure, equating it to receiving a “lifetime contract.” That group of Americans overwhelmingly disapproved of teacher tenure 73% to 26%. The other half of the sample received a similar question that equated tenure to providing a formal legal review before a teacher could be terminated. In this case, the response was reversed, 66% approving of teacher tenure, 34% disapproving.”
So what’s the message here? It’s one I’ve argued before: polls, taken in context, can provide valuable information. At the same time, journalists have to be careful when comparing results with prior years’ findings, to make sure that methodological changes haven’t influenced the outcome; you can see how that played out in last year’s MetLife teacher poll. And it’s a good idea to use caution when comparing findings across different polls, even when the questions, at least on the surface, seem similar.