As Nov. 8 dawned, it all seemed so clear: National polls predicted that Hillary Clinton would prevail. Polls conducted at the state level, meanwhile, indicated that a “blue firewall” of Democratic-leaning states would comfortably propel her to victory in the electoral college.
Now we know better. In the final reckoning, Clinton underperformed across the board, not only losing battlegrounds like Florida and North Carolina, but also sliding to defeat in Wisconsin and Pennsylvania, long seen as safely in the Democratic corner.
In their defense, some polling firms say that they weren’t too far off on the national popular vote, in which Clinton led by about 1 percentage point.
“Our final poll had Clinton plus three, with a margin of error of three,” Jeff Cartwright, director of communications with Morning Consult, told BuzzFeed News. “We’ve had her at that number for weeks.”
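As a rough illustration (not Morning Consult's actual methodology), a ±3-point margin of error is what standard sampling theory gives for a simple random sample of about 1,000 respondents at 95% confidence:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for a simple random sample
    of size n, using the worst case p = 0.5 and z = 1.96 for 95% confidence."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # ~3.1 points for n = 1,000
```

One caveat worth keeping in mind: that ±3 applies to each candidate's share separately, so the uncertainty on the *lead* (Clinton minus Trump) is roughly twice as large — meaning "Clinton plus three" was statistically consistent with a near-tied national race.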
But in states with close battles, where the presidential election is actually won and lost, there seems to have been a systematic polling bias in Clinton’s favor.
So, what went wrong? Here are the leading theories, from polling experts who are struggling to work out what just happened.
1. Don’t blame any one type of poll. They all got it wrong.
Before the election, traditional phone pollsters were sparring with the new breed of online survey firms, each arguing that their methods were superior.
Phone pollsters were sticking with the established method of random digit dialing to get a representative sample of voters — but that “gold standard” was getting increasingly expensive, given dwindling response rates to phone surveys.
Online pollsters, meanwhile, argued that they could piece together good samples by carefully selecting from the people who volunteer for online polls.
Neither camp has much to shout about right now. “Everybody got it wrong. Everybody missed this. Whatever the explanation, it is something that functions across all the methodologies,” Timothy Johnson, director of the Survey Research Laboratory at the University of Illinois at Chicago, told BuzzFeed News.
2. A late surge of “undecided” Republicans swung back to their traditional home at the last minute.
The idea here is that Trump’s unorthodox campaign alienated mainstream Republicans, who sat on the fence until the very end.
“One significant source of polling error might have been a late break of voters,” Sam Wang of the Princeton Election Consortium told BuzzFeed News. His prediction model, based on aggregated poll results, was particularly bullish about a Clinton victory, putting her chances of prevailing at 99% on the eve of the election.
As FiveThirtyEight’s Nate Silver noted in his final election forecast (which gave Clinton a 72% chance of winning), about 12% of voters were either undecided or said they’d vote for a third-party candidate in the most recent pre-election polls. That’s much more than in recent elections.
But the late-surge idea could easily be wishful thinking. “That would be the favorite of pollsters because it means there’s nothing fundamentally wrong with polling,” said Claudia Deane, vice president of research at the Pew Research Center in Washington, DC.
Deane is skeptical of the surge idea, given what people said about their voting decisions in exit polls: According to the New York Times, those who said they decided “in the last few days” went 46% for Trump, 44% for Clinton — a far smaller difference than the 51% to 37% split for those who said they'd made their minds up in October.
Also, while the Republican leadership distanced themselves from Trump at various flash points in his controversial candidacy, rank-and-file GOP voters mostly seem to have stuck with the party throughout, according to Deane. “They weren’t treating Trump differently,” she said.
3. The “likely voter” models used by pollsters were flummoxed by Trump’s movement of disaffected white people, and wrongly put them in the “not likely to vote” column.
Election polling is harder than other forms of survey research, because you must assess two things at once. Not only do you have to find out who people say they will support, you also have to estimate their likelihood of actually turning up to vote.
So Trump may have been right in claiming that he’d created a new movement of people who had previously shunned political engagement. If so, pollsters who relied on prior voting behavior to predict who would turn out this time would have systematically underestimated Trump’s support.
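To see how a turnout misjudgment skews the topline number, here is a minimal, purely hypothetical sketch of likely-voter weighting — each respondent's stated preference is weighted by an estimated probability of voting, so systematically underrating one group's turnout shifts the result even when raw support is even (the respondents and probabilities below are invented for illustration):

```python
# Hypothetical respondents: (preferred candidate, estimated turnout probability).
# Raw support is split 3-3, but candidate B's supporters are first-time voters
# whom the model rates as unlikely to show up.
respondents = [
    ("A", 0.9), ("A", 0.8), ("A", 0.9),   # habitual voters
    ("B", 0.4), ("B", 0.3), ("B", 0.5),   # newly engaged voters the model discounts
]

def weighted_share(data, candidate):
    """Candidate's share of the turnout-weighted sample."""
    total = sum(p for _, p in data)
    return sum(p for c, p in data if c == candidate) / total

print(round(weighted_share(respondents, "A"), 2))  # 0.68
print(round(weighted_share(respondents, "B"), 2))  # 0.32
```

A tied race in the raw sample reads as a 68–32 blowout once the weights are applied — which is the shape of the error pollsters would make if Trump really did mobilize people their models had written off as non-voters.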
Problems with likely voter modeling could also mean that the pollsters overestimated the extent to which the “Obama coalition” of black, Latino, and younger voters would turn out for Clinton.
One problem with this explanation, however, is that different pollsters approach likely voter modeling in different ways. Yes, some place a lot of weight on votes in previous elections, but others place more faith in other methods, such as trying to gauge respondents’ enthusiasm about their preferred candidates. And yet on Tuesday, every method got it wrong in a similar way.
“There is such incredible variability in those models. And to see such a wide diversity of models all get it wrong seems a little implausible,” said Johnson.
4. “Shy Trumpers,” who were embarrassed to admit their support for the GOP candidate, quietly delivered their verdict in the polling booths.
This theory first emerged in the run-up to the Republican primaries, as pollsters noticed that Trump was doing better in online polls than in those conducted over the phone. The idea was that some of Trump’s supporters were embarrassed to admit their choice to a real person. The idea gained traction when a polling experiment run last December by Morning Consult seemed to confirm that the effect was real.
In the match-up against Clinton, however, Trump’s advantage in online polls mostly evaporated. And when Morning Consult ran a poll with Politico in late October to specifically probe for the effect, it seemed to operate only among college-educated voters.
“Overall, it didn’t look like it massively shifted the race,” Morning Consult’s Cartwright said.
5. Trump’s anti-establishment supporters believed the polls were rigged, and so they refused to answer the phone or respond to online surveys.
For pollsters, this is a much darker possibility. The idea that the polls were rigged became a popular refrain among Trump’s supporters. So maybe these people simply refused to participate in polls, either on the phone or online.
If so, all of the pollsters may have been systematically blind to many of the disaffected, mostly white voters who drove Trump to victory, especially in the Rust Belt states of the Midwest.
“People who don’t like the government often perceive the polls as being part of the government,” said Johnson of the University of Illinois, who believes this is the most plausible explanation for the pollsters’ miss. “This does merit close investigation.”
That would be hard, because anti-establishment voters who don’t trust pollsters are unlikely to be keen on participating in research to find out what went wrong.
The bottom line is that no one knows for sure, and the postmortem will take months.
“The ‘why’ is going to take a lot of time to seriously answer,” Charles Franklin, director of the Marquette Law School Poll in Milwaukee, told BuzzFeed News.
Solving the mystery will mean poring over the voter files maintained by states that show who voted and who did not. This research could show, for example, whether polling samples systematically missed chunks of the electorate that were solidly behind Trump, or whether those people were polled and then wrongly assumed to be unlikely to cast a vote. Or it could reveal some other, novel explanation.
The American Association for Public Opinion Research has a task force that was planning to investigate the accuracy of 2016 election polls, even before last night’s upset. It is expected to take several months to deliver a verdict.
“It’s going to be very hard to develop an adequate explanation,” Michael Traugott of the Center for Political Studies at the University of Michigan in Ann Arbor told BuzzFeed News.
Peter Aldhous is a Science Reporter for BuzzFeed News and is based in San Francisco. His secure PGP fingerprint is 225F B2AF 4B8E 6E3D B1EA 7F9A B96E BF7D 9CB2 9B16