“If this exit poll is right, I will publicly eat my hat on your programme,” said Paddy Ashdown on the BBC’s election night coverage. People on all sides struggled to believe the exit polls and came up with all sorts of explanations, until it turned out it was the opinion polls that got it wrong. There are several possibilities: people lied in all the polls before the exit poll or loads of people changed their minds once they had the pencils in their hands or people voted tactically. Maybe exit polls are just better, but they’re definitely different.
What are exit polls?
Exit polls (http://www2.warwick.ac.uk/fac/sci/statistics/staff/academic-research/firth/exit-poll-explainer) are not just a proxy count of votes: they are modelled together with the results of previous elections, previous local exit polls, and swings in party support. Results of previous elections matter once some voter intransigence is assumed, that is, once people are taken as likely to vote for the same party as last time. In one sense, an exit poll is a panel survey, and inter-election exit poll comparisons are essentially trackers. Like trackers, they depend on careful sampling and on retaining the same participants; in this case the ‘participant’ is the polling station rather than the individual. The sample for an exit poll is between 100 and 200 voters at about 100 polling stations, a total of around 20,000. Most polling locations are retained from election to election, but some are changed to reflect changes in the electorate, for example. As with all opinion polls, exit polls assume some error and build it into the regression models. The models then produce probabilities for each candidate winning each seat, with confidence intervals around predictions of both final percentages and seat numbers.
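The final step — turning per-seat win probabilities into a seat forecast with an interval — can be sketched with a simple simulation. The probabilities below are invented for illustration; they are not real exit-poll output, and the real models are far richer than this:

```python
import random

random.seed(42)

# Hypothetical probabilities that Party A wins each of ten seats
# (illustrative numbers only, not real exit-poll data).
seat_probs = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]

def simulate_seats(probs, runs=10_000):
    """Simulate many elections; return the sorted seat totals for Party A."""
    return sorted(
        sum(random.random() < p for p in probs) for _ in range(runs)
    )

totals = simulate_seats(seat_probs)
expected = sum(totals) / len(totals)   # close to sum(seat_probs) = 5.05
low, high = totals[250], totals[-251]  # central 95% of simulated outcomes

print(f"Expected seats: {expected:.1f}, 95% interval: [{low}, {high}]")
```

The point of the simulation is that an expected seat count alone hides how wide the plausible range is — exactly why exit-poll forecasts come with confidence intervals.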
The important difference between exit polls and most opinion polls is in the questions posed. Exit polls can ask about actual votes for actual candidate names but national-level opinion polls tend to ask about parties. Party-prompt polls seem better suited to systems using electoral lists where votes are for parties whose members are ranked, so the higher one is up the list, the higher the probability of being elected. Some countries’ rules allow the popular vote to influence the order in which a party’s candidates are listed, so candidate-specific voting is possible but less powerful than in First-Past-the-Post (FPP) or proportional representation.
Curiously, one outlier poll got very close to the actual results, but it was so different from the preceding polls that Survation (http://survation.com/snatching-defeat-from-the-jaws-of-victory/?utm_source=dlvr.it&utm_medium=twitter) “chickened out” of publishing it. There’s nothing magic about what Survation did: it was a 1,000-respondent nationally representative telephone poll the day before the election that listed the candidates in each respondent’s constituency and asked how they planned to vote. At that juncture, there was less scope for confounding and, possibly, more social desirability pressure to keep one’s word. However, FiveThirtyEight (http://fivethirtyeight.com/liveblogs/uk-general-election-2015/?#livepress-update-21944073) modelled both candidate prompts and party prompts and concluded that generic questions about party-level voting intention were more accurate than specific ones.
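For a sense of scale, the sampling error alone on a 1,000-respondent poll can be worked out directly. This sketch uses the standard 95% margin-of-error formula for a simple random sample; it is a textbook calculation, not anything specific to Survation’s methodology:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of n."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a 1,000-respondent poll:
moe = margin_of_error(0.5, 1000)
print(f"±{moe * 100:.1f} points")  # roughly ±3.1 points
```

So even a well-designed 1,000-person poll carries around three points of pure sampling noise on each party’s share — before any of the confounders discussed here come into play.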
So, were lots of people just fibbing?
An oft-repeated explanation in the past couple of weeks has been that Conservative and UKIP voters were “shy” in their responses to opinion polls. Eric Kaufman (http://blogs.lse.ac.uk/politicsandpolicy/the-shy-english-nationalists-who-won-it-for-the-tories-and-flummoxed-the-pollsters) points to other research that indicates reluctance on the part of UKIP supporters to answer questions about class, and that shows lower levels of trust in others. Evidence from polls of Scottish voters can, at least, eliminate the simple shy-Tory explanation: estimates of Conservative votes were consistent with the election result and it was the SNP that was under-estimated.
Tactical voting is another peculiarity of the FPP system that may have had an impact on the gap between the opinion and exit polls. The process might go something like this: I’d like to vote for Party X, so that’s what I’ll tell the pollsters, but I live in a constituency where Party Y holds the seat and where only Party Z can challenge them. I don’t want Party Y to win, so I’ll vote for Z instead. (If it helps, insert UKIP, LAB, and CON for X, Y, and Z.) An attempt at mass vote-switching was orchestrated by a certain right-wing daily newspaper, which ran a headline a couple of weeks before the election instructing citizens in 50 constituencies how to vote tactically to “help keep Labour out of Number 10”. Whether it worked or not is of less concern than whether a campaign of this sort should be considered a confounding variable in analysis of polls.
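The switching logic described above can be written down as a toy rule. The function and the party labels here are invented for illustration — real tactical voting depends on voters’ beliefs about local standings, which polls themselves influence:

```python
def choose_vote(preferred: str, disliked_leader: str, main_challenger: str) -> str:
    """Toy tactical-voting rule: if my preferred party is neither the local
    leader I dislike nor the only viable challenger, switch to the challenger."""
    if preferred in (disliked_leader, main_challenger):
        return preferred
    return main_challenger  # the tactical switch

# Hypothetical constituency: Party Y leads, only Party Z can beat them.
print(choose_vote("X", "Y", "Z"))  # a Party X supporter votes Z
```

The gap this opens between stated intention (“X”) and actual vote (“Z”) is exactly the kind of discrepancy an opinion poll records but an exit poll does not.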
It’s very confusing, this exit poll business. People might have lied, or been shy, or voted tactically, but the main difference is methodological. Think about the difference between becoming engaged and getting married, each based on a different question. About 15% of engagements (http://content.time.com/time/magazine/article/0,9171,490683,00.html) are called off each year, and about 15% of those polled got cold feet, or changed their minds, or lied. If an opinion poll is “Will you marry me?”, an exit poll is “Do you take this man to be your lawful wedded prime minister?”