The Inquiry into the Failure of the 2015 Pre-election Polls

Category: NCRM news
Author(s): Patrick Sturgis, NCRM, University of Southampton

The only people who woke up on the morning of May the 8th this year feeling worse than Ed Miliband and Nick Clegg were the pollsters. Although it was clear during the campaign that the exact vote shares would be difficult to predict, the clear consensus from the polling data was that a hung parliament was a near certainty. In the end, of course, David Cameron returned to Downing Street with a 6.5% lead over Labour and a narrow but clear majority in the Commons, the first Conservative Prime Minister to achieve this since 1992. The 1992 election was also the last time the polls got the result so spectacularly wrong, a pattern which some believe may prove to be more than coincidence. The review into the 1992 polling miss concluded that the error was due to a combination of factors, notably ‘late swing’ and inaccurate population data for setting sample quotas. So what went wrong with the polls in 2015? At present it is too early to say, but there are some likely contenders which the British Polling Council/Market Research Society Inquiry will be considering.

First, every pollster knows their predictions come with a ‘margin of error’ due to sampling variability, so the result may be a few percentage points above or below the estimate from any one poll. Sampling variability can’t be dismissed entirely, but the size of the error and its consistency across polling organisations render it very unlikely as the sole, or even a notable, contributory factor.
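To see why, a minimal back-of-the-envelope calculation helps. Assuming a simple random sample of around 1,000 respondents (a typical poll size, though real polls use quotas and weighting, so their true uncertainty is somewhat larger) and a party share near 34%, the conventional 95% margin of error works out at roughly three points:

```python
import math

# Illustrative 95% margin of error for a single poll estimate.
# Assumes a simple random sample; n and p are assumed typical
# values, not figures from any specific 2015 poll.
n = 1000   # assumed sample size
p = 0.34   # assumed party vote share (as a proportion)

se = math.sqrt(p * (1 - p) / n)   # standard error of a proportion
moe = 1.96 * se                   # half-width of a 95% interval
print(f"margin of error: +/- {moe * 100:.1f} points")  # ~ +/- 2.9 points
```

Crucially, pure sampling error of this kind would scatter individual polls both above and below the true shares. It would not push nearly every poll in the same direction, which is what happened in 2015.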

A second possibility is ‘late swing’: people changing their minds about which party they will vote for late in the campaign, after the final polls have taken place. This is certainly plausible, and at least one prominent pollster has advanced it as the likely key explanation in 2015. However, polls taken on 7th May by a number of polling organisations showed no evidence of vote switching between the final polls and election day. So, while late swing may have been part of the problem, it seems unlikely to account for much of it.

There is also the well-known ‘shy Tory’ effect: people apparently too embarrassed to admit they are going to vote Conservative, who lie to pollsters about their intentions. This was a favoured explanation in 1992, anecdotally at least. On the face of it, it seems less relevant this time, as most polls these days are carried out on the internet rather than face-to-face, as they were in 1992. There is no reason to think that respondents should be embarrassed about clicking a mouse to indicate who they intend to vote for.

It may also be the case that pollsters drew their samples in ways that over-represented Labour voters at the expense of Conservatives, and over-represented likely voters at the expense of those less likely to vote. There are good reasons to expect that both factors were at work in 2015, and the pollsters took a range of measures to try to correct for them. However, it is difficult to know a priori exactly which adjustments should be made, and biases arising from sample composition, and from the adjustments themselves, are likely to have played at least some part in what went wrong.
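For readers unfamiliar with how such corrections work, here is a minimal sketch of cell-based weighting, the simplest form of the adjustment referred to above. The groups, population targets, and sample shares are invented purely for illustration; they are not the variables or figures used by any polling company:

```python
# Minimal sketch of cell-based post-stratification weighting.
# All categories and figures below are hypothetical; real pollsters
# weight on many variables (age, region, past vote, and so on).

# Assumed population shares for an age breakdown (hypothetical).
population = {"18-34": 0.28, "35-54": 0.35, "55+": 0.37}

# Assumed shares of the same groups in the raw sample (hypothetical:
# older, more politically engaged people are easier to reach).
sample = {"18-34": 0.20, "35-54": 0.34, "55+": 0.46}

# Each respondent in a group gets weight = population share / sample share.
weights = {group: population[group] / sample[group] for group in population}

for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
# 18-34 respondents are up-weighted (~1.40) and 55+ down-weighted
# (~0.80). If the assumptions behind the targets are wrong, the
# adjustment itself can introduce bias -- the point made above.
```

The difficulty is that every choice here (which variables to weight on, which targets to use, how to model turnout) is a judgement call made before the result is known.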

A new possibility this time around relates to a phenomenon referred to as ‘herding’. Herding is when poll estimates converge on a consensus estimate and, in 2015, the consensus estimate turned out to be wrong. There is some debate, much of it rather acrimonious, about how herding behaviour arises. Although some pollsters may deliberately align their estimates with the majority position out of fear of being wrong, the more likely explanation is that herding arises ‘unconsciously’, through the effect of prior beliefs about the likely result on adjustment decisions. Pollsters making important decisions about how to adjust their raw data have no way of being sure in advance that they are doing so correctly. It is feasible that these micro-decisions are influenced by prior beliefs about the likely outcome, and that these beliefs themselves derive from existing poll estimates. This induces a circularity which causes estimates to converge.
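The circularity is easy to demonstrate with a toy simulation. In this sketch, each pollster partially anchors their published figure to the running average of earlier published polls; every parameter is invented solely to illustrate the mechanism, and no claim is made that any pollster behaves exactly this way:

```python
import random

# Toy simulation of 'unconscious' herding: each pollster nudges their
# raw estimate toward the running average of previously published polls.
# Entirely illustrative -- all parameters are assumptions, not estimates.

random.seed(1)
true_share = 0.37   # assumed true vote share
noise_sd = 0.02     # assumed sampling noise per poll
anchor = 0.5        # assumed pull toward the published consensus

published = []
for poll in range(12):
    raw = random.gauss(true_share, noise_sd)   # independent raw estimate
    if published:
        consensus = sum(published) / len(published)
        estimate = (1 - anchor) * raw + anchor * consensus
    else:
        estimate = raw
    published.append(estimate)

print([round(e, 3) for e in published])
# Later polls cluster far more tightly than independent sampling error
# would allow; if the early estimates happen to be off, the whole
# sequence converges on the wrong value -- the circularity described above.
```

The unsettling implication is that close agreement between polls, which looks reassuring, is in itself no guarantee of accuracy.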

These potential explanations, as well as others which may emerge, will be considered by the inquiry which is due to report in March 2016.

NCRM Director Professor Patrick Sturgis chairs the BPC/MRS Inquiry into the 2015 Pre-election polls. Details of the Inquiry can be found at www.ncrm.ac.uk/polling.