Last December, a joint survey by The Economist and the polling organization YouGov claimed to reveal a striking antisemitic streak among America’s youth. One in five young Americans thinks the Holocaust is a myth, according to the poll. And 28 percent think Jews in America have too much power.
“Our new poll makes alarming reading,” declared The Economist. The results inflamed discourse over the Israel-Hamas war on social media and made international news.
There was one problem: The survey was almost certainly wrong. The Economist/YouGov poll was a so-called opt-in poll, in which pollsters often pay people they’ve recruited online to take surveys. According to a recent analysis from the nonprofit Pew Research Center, such polls are plagued by “bogus respondents” who answer questions disingenuously for fun, or to get through the survey as quickly as possible to earn their reward.
In the case of the antisemitism poll, Pew’s analysis suggested that the Economist/YouGov team’s methods had yielded wildly inflated numbers. In a more rigorous poll posing some of the same questions, Pew found that only 3 percent of young Americans agreed with the statement “the Holocaust is a myth.”
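To see why even a modest share of bogus respondents can so badly distort estimates of a fringe belief, consider a back-of-the-envelope sketch. The Python snippet below uses invented rates — they are not Pew’s or YouGov’s actual figures — to show how a pool of bad-faith respondents who tend to agree with whatever statement is put in front of them inflates the measured prevalence of an opinion almost nobody genuinely holds.

```python
# Illustrative only: invented rates, not Pew's or YouGov's actual data.
# Shows how bogus respondents who tend to click "agree" on anything
# can inflate the measured prevalence of a rare fringe belief.

true_rate = 0.03      # genuine agreement with the fringe statement
bogus_share = 0.20    # hypothetical fraction answering in bad faith
bogus_agree = 0.80    # how often bad-faith respondents click "agree"

measured = (1 - bogus_share) * true_rate + bogus_share * bogus_agree
print(f"True rate: {true_rate:.0%}, measured rate: {measured:.1%}")
# True rate: 3%, measured rate: 18.4%
```

The arithmetic makes the underlying problem plain: when a belief is genuinely rare, even a minority of bad-faith respondents can swamp the real signal.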
These are strange times for survey science. Traditional polling, which draws responses from a randomly selected group meant to represent the entire population, remains the gold standard for gauging public opinion, said Stanford political scientist Jon Krosnick. But as it has become harder to reach people on the phone, response rates have plummeted and those surveys have grown dramatically more expensive to run. Meanwhile, cheaper, less accurate online polls have proliferated.
“Unfortunately, the world is seeing much more of the nonscientific methods that are put forth as if they’re scientific,” said Krosnick.
Some pollsters, however, defend those opt-in methods — and say traditional polling has serious problems of its own. Random sampling is a great scientific method, agreed Krosnick’s Stanford colleague Douglas Rivers, chief scientist at YouGov. But these days, he said, it suffers from the reality that almost everyone contacted refuses to participate. Pollsters systematically underestimated support for Donald Trump in 2016 and 2020, he pointed out, because they failed to hear from enough of those voters. Rivers acknowledged that lax quality controls for younger respondents, which have since been tightened, led to the misleading results on the antisemitism poll, but he said YouGov’s overall track record is good: “We’re competitive with anybody who’s doing election polls.”
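Rivers’ point about nonresponse can be illustrated with the same kind of sketch. In this hypothetical example — the response rates are invented, not real 2016 or 2020 figures — supporters of one candidate simply answer the phone a little less often, and the raw sample skews even though the initial dial-out was perfectly random.

```python
# Illustrative only: invented response rates, not real election figures.
# If one candidate's supporters respond less often, the raw sample
# skews even when the people dialed were chosen perfectly at random.

support = 0.50        # true share supporting the candidate
rr_support = 0.05     # hypothetical response rate among supporters
rr_other = 0.07       # hypothetical response rate among everyone else

respondents_support = support * rr_support
respondents_other = (1 - support) * rr_other
measured = respondents_support / (respondents_support + respondents_other)
print(f"True support: {support:.0%}, poll shows: {measured:.1%}")
# True support: 50%, poll shows: 41.7%
```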
Nonetheless, headlines as outrageous as they are implausible keep appearing: 7 percent of American adults think chocolate milk comes from brown cows; 10 percent of college graduates think Judge Judy sits on the Supreme Court; and 4 percent of American adults (about 10 million people) drank or gargled bleach to prevent COVID-19. And although YouGov is one of the more respected opt-in pollsters, some of its findings strain credulity: that one third of young millennials aren’t sure the Earth is round, for example.
Amidst a sea of surveys, it’s hard to distinguish solid findings from those that dissolve under scrutiny. And that confusion, some experts say, reflects deep-seated problems with new methods in the field—developed in response to a modern era in which a representative sample of the public no longer picks up the phone.
The fractious evolution of polling science is likely to receive fresh attention as the 2024 elections heat up, not least because the consequences of failed or misleading surveys can go well beyond social science. Such “survey clickbait” corrodes society’s confidence in itself, said Duke University political scientist Sunshine Hillygus: It “undermines people’s trust that the American public is capable of self-governance.”