Rebecca Quarles

2018: A Turning Point in the Accuracy of Political Polls



2018: A Largely Unsung Victory for Political Polling


Before the 2018 election, Harry Enten [1], a senior writer and analyst at CNN Politics, challenged pollsters, saying that the best way to push back against those who call the polls “fake news” is “for the polls to predict election results.” And they did exactly that. The 2018 polls were considerably more accurate than in the past.


Enten points out that the average poll in congressional districts missed the final election results by 4.9 percentage points, a full point lower than the average from 1998 to 2016. Moreover, 4.9 points is exactly the margin of sampling error for a sample of 400, a common sample size for district-level surveys like these. This means that, in many cases, the absolute error was no larger than the unavoidable sampling error. In other words, pollsters not only made a dramatic improvement; they did as well as one should expect a poll to do.
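For readers who want to check that figure, the conventional 95 percent margin of sampling error for a proportion near 50 percent is 1.96 times the square root of 0.25/n. The short Python sketch below reproduces the roughly plus-or-minus 4.9-point margin for a sample of 400; the second sample size is simply an illustrative comparison, not a figure from Enten's analysis.

import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of sampling error for a proportion, in percentage points.
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(400), 1))    # ~4.9, the district-poll sample size cited above
print(round(margin_of_error(1000), 1))   # ~3.1, a typical national sample size (illustrative)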


This is especially impressive because, in 2018, pollsters were trying to predict state- and district-level races, a formidable task given that polling budgets are much lower than in national races. Lower budgets mean smaller samples, higher sampling error, and less professional time devoted to designing the surveys and analyzing the results.


Enten also notes that the average Senate poll was off by only 4.2 points. Historically, the average Senate poll has been off by 5.2 points, which means this year's polls were a full point better than average. Likewise, the average governor's-race poll had an error of 4.4 points. That's 0.7 point more accurate than the average governor's-race poll since 1998.


Why Did the Polls’ Performance Improve in 2018?


Much of the increase in accuracy stems from the roughly 100 state and district polls conducted by The New York Times Upshot/Siena College, which had an average error of 3 percentage points. That's 3 points better than average, which, as Enten remarked, is “off the charts good.” This polling program represented a major effort and investment by the Times. And that, I think, is the answer.


Nate Cohn [2] detailed the Times’ 2018 polling methodology. The polls were based on samples of registered voters in each of the surveyed districts and states and were conducted by telephone with live interviewers rather than robocalls. Registered-voter samples have some disadvantages, but the Times, like many other pollsters, compensated for many of them through sophisticated statistical modeling. This modeling was used to design the samples, to weight the survey results to reflect the demographic and political characteristics of the state or district, and to perform the critical task of estimating turnout.
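To make the weighting step concrete, here is a minimal sketch of one common technique, iterative proportional fitting (often called raking), written in Python. The respondents, the target shares, and the choice of plain raking are my own illustrative assumptions; Cohn's article does not spell out the Times' exact weighting procedure.

import pandas as pd

def rake(df, targets, max_iter=50, tol=1e-6):
    # Iteratively adjust weights until the weighted sample matches each
    # target margin (e.g., age group, party registration).
    df = df.copy()
    df["weight"] = 1.0
    for _ in range(max_iter):
        max_shift = 0.0
        for var, shares in targets.items():
            weighted = df.groupby(var)["weight"].sum()
            current = weighted / weighted.sum()
            for category, target_share in shares.items():
                factor = target_share / current[category]
                df.loc[df[var] == category, "weight"] *= factor
                max_shift = max(max_shift, abs(factor - 1.0))
        if max_shift < tol:
            break
    return df

# Hypothetical respondents and made-up district benchmarks.
sample = pd.DataFrame({
    "age":   ["18-44", "45+", "45+", "45+", "18-44", "45+"],
    "party": ["D", "R", "D", "R", "I", "R"],
})
targets = {
    "age":   {"18-44": 0.45, "45+": 0.55},
    "party": {"D": 0.35, "R": 0.35, "I": 0.30},
}
weighted = rake(sample, targets)
print(weighted["weight"].round(2).tolist())

The same basic idea scales up to the much longer lists of demographic and political variables, drawn from the voter file, that serious pollsters weight on.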


The Times’ polls excelled largely because the pollsters paid more attention to detail in both data collection and modeling than is customary in this type of polling. For example:

· They called and attempted to interview more of the types of people who are relatively unlikely to respond to polls. This made the respondents more representative and reduced the need to weight the survey results.


· They interviewed all respondents, regardless of whether their turnout model identified them as likely voters (usually people who voted in two of the last three elections). Instead of screening anyone out, they used the turnout models only to adjust the survey results, as illustrated in the sketch after this list. This meant that they were able to make last-minute adjustments to their models based on survey data and other information resources. Cohn notes that prematurely screening out so-called “unlikely voters” is why polls sometimes miss surprising candidates like Alexandria Ocasio-Cortez in New York’s 14th District.


· They didn’t just use one turnout model for all districts and states. Instead, each district’s turnout was modeled separately, on the assumption that the voters of these districts are “unique enough to merit it.”
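As a rough illustration of what adjusting the results with a turnout model can mean in practice, the Python sketch below weights each respondent's stated vote choice by an estimated probability of voting instead of discarding "unlikely voters." The vote-history categories and probabilities are invented for illustration; they are not the Times' actual turnout estimates.

import pandas as pd

# Hypothetical respondents: stated vote choice plus past turnout history.
respondents = pd.DataFrame({
    "choice": ["D", "R", "D", "R", "D"],
    "votes_in_last_3": [3, 2, 0, 1, 3],  # elections voted in, of the last three
})

# Assumed probabilities of voting, by past participation (illustrative numbers).
# A likely-voter screen would simply drop the low-probability respondents;
# adjusting instead keeps every interview and down-weights unlikely voters.
turnout_prob = {0: 0.15, 1: 0.35, 2: 0.65, 3: 0.90}
respondents["p_vote"] = respondents["votes_in_last_3"].map(turnout_prob)

# Turnout-adjusted estimate of each candidate's share of the likely electorate.
shares = respondents.groupby("choice")["p_vote"].sum() / respondents["p_vote"].sum()
print(shares.round(3))

Modeling each district separately, as the Times did, simply means estimating these probabilities from that district's own voter file and recent turnout patterns rather than applying one national set of assumptions.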


All of us in the survey research business should thank the Times for supporting this research. It proved the naysayers and the “fake news” expounders wrong. And it showed that accurate polling results can be achieved. This achievement should be celebrated as a reminder that excellence in polling, like excellence in almost everything else, is possible given enough brain power, commitment and funding.


 [1] Harry Enten, “2018 was a very good year for polls,” CNN, November 19, 2018.

[2] Nate Cohn, “Why did we do the polls the way we did? A look at the challenges and trade-offs of conducting a political survey,” The New York Times, September 6, 2018.
