
Journalists often include survey results in a story to offer a sense of public opinion. But not all surveys are created equal, and some should be avoided at all costs.
In a recent phone interview, Courtney Kennedy, vice president of survey research and innovation at the Pew Research Center, a “nonpartisan fact tank,” shared advice with me on how to judge survey quality.
Our conversation was edited for length and clarity; a longer version can be found at The Freelance Center.
What makes a good survey?
Ten years ago, that was an easy question to answer because there were certain established, codified ways of doing things that were proven to be better than other approaches. But with surveys moving largely online and with so much variation in how that is done, it’s no longer easy for me to rattle off, ‘Here are the three things for reporters to watch for.’
Previously, surveys were mostly conducted by phone?
That’s right, and you could say, ‘Only report a survey that drew a random sample of the population, where the sample source included all Americans.’ That could be a random sample of all the phone numbers in the United States, since about 98% of Americans have a phone number, or a random sample drawn from all addresses.
What can you tell reporters now that the majority of surveys are being conducted online?
I encourage people to look at transparency. Does the poll tell you how it was conducted? One red flag and, in my mind, a disqualifier is if all that is disclosed is that the poll was done online. That’s a signal that the pollster is really trying to avoid the conversation about how they drew the sample and how it was done. The better pollsters will tell you, ‘This survey was done online. And here is where the sample came from. And here is what I did to make that sample as nationally representative as possible, and here is what I did to weight the data.’
Does the sponsor, the entity paying for the survey, matter?
You are going to get better data in the long run by looking at polls done by a neutral, nonpartisan source, whether that’s a news outlet or a nonprofit like Kaiser Family Foundation, something like that. Really pay attention to who the sponsor is and whether they have a conflict of interest.
Now that surveys are conducted mostly online, are they still done with random sampling?
At Pew, and at other places like the Associated Press, we do surveys online, but we do them with rigorous panels that were actually recruited offline. A panel is a group of people who have agreed to take surveys on an ongoing basis. At Pew, we have a panel of about 10,000 people, and we interview them once or twice a month for years. We do it this way because it’s no longer practical to cold-call 1,000 Americans; people just don’t pick up the phone anymore.
Is this panel a random sample?
Yes. We draw a random national sample of home addresses because the Postal Service makes available the master list of residential addresses.
But not all online surveys use random sampling.
The other way is convenience sampling. If you go to Google and do a search, or if you are on social media, sometimes you’ll see an ad to take surveys. Or you might get an email from, for instance, Walgreens, saying, ‘You’re in our Customer Service Program, please take a survey.’ There is a whole mishmash of convenience sampling in this other bucket of online surveys: a combination of ads and emails, and it’s haphazard sampling.
Why is a random sample better than a convenience sample?
In a random sample, everybody in the public has a chance of being selected for the survey, no matter their race, religion, income or age. And so the people who respond, while not a perfect cross-section, are going to be more representative.
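To see the difference concretely, here is a minimal simulation sketch. The population makeup and response behavior below are made up for illustration, not drawn from the interview: a random sample of a hypothetical population recovers its composition, while a convenience sample that reaches one group more easily drifts away from it.

```python
import random

# Illustrative only: a made-up population where 52% of adults are women.
random.seed(1)
population = ["woman"] * 52_000 + ["man"] * 48_000

# Random sample: every person has an equal chance of being selected.
random_sample = random.sample(population, 1_000)

# Convenience sample: assume (hypothetically) that women are twice as
# likely to see and answer the recruitment ad, so they are oversampled.
weights = [2 if person == "woman" else 1 for person in population]
convenience_sample = random.choices(population, weights=weights, k=1_000)

for label, sample in (("random", random_sample), ("convenience", convenience_sample)):
    share = sample.count("woman") / len(sample)
    print(f"{label} sample: {share:.0%} women (population: 52%)")
```

The random sample lands near the true 52%, while the convenience sample overshoots badly, which is the distortion Kennedy is describing.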
You mentioned weighting the data. What is that?
All surveys need to be weighted. Weighting takes the people who actually completed your survey and makes that sample more representative of the country. As I explained, Pew recruits through the mail, and it turns out that women are more likely to open the mail than men, for whatever reason. So when we get our data in, it tends to be a little too female. If the Census Bureau tells us that 52% of U.S. adults are women, our data might come back 56% female. But we want our data to look like the nation, so we weight the women in the sample down, giving them a little less influence on the final estimates. A poll of the general public should be weighted, at a minimum, on core demographics like education, age, gender, and race and ethnicity.
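As a rough illustration of that arithmetic, here is a minimal sketch using the numbers Kennedy cites. Real pollsters weight on several demographics at once, often with techniques such as raking; this shows only the single-variable case.

```python
# The census benchmark and the raw sample shares from Kennedy's example.
population_share = {"woman": 0.52, "man": 0.48}
sample_share = {"woman": 0.56, "man": 0.44}

# Each respondent's weight is their group's population share divided by
# its sample share, so overrepresented groups count a little less.
weights = {group: population_share[group] / sample_share[group]
           for group in population_share}
print(weights)  # women: ~0.93, men: ~1.09
```

Applying those weights makes the weighted sample 52% female, matching the census benchmark.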
Should reporters look at sample size when deciding whether to quote from a survey?
Most rigorous national surveys have at least 1,000 interviews, and so that is the ideal minimum. If I saw a survey done with only something like 300 people, that would be a huge red flag.
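One reason 1,000 is a common floor, not stated in the interview but standard in survey statistics, is the margin of error for a simple random sample. A back-of-the-envelope sketch:

```python
import math

# 95% margin of error for a simple random sample, at the worst case
# p = 0.5: MOE = 1.96 * sqrt(p * (1 - p) / n).
def margin_of_error(n: int, p: float = 0.5) -> float:
    return 1.96 * math.sqrt(p * (1 - p) / n)

for n in (1_000, 300):
    print(f"n = {n:,}: about ±{margin_of_error(n):.1%}")
# n = 1,000 gives roughly ±3.1%; n = 300 gives roughly ±5.7%
```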
Is it okay for the sample size to be smaller if the pollster is trying to gauge the opinion of a subgroup, for instance cancer patients, rather than all American adults?
Yes, but you would still want a minimum of 100.