In part one of this polling report, we covered the structural elements of the Civitas Poll, including the methodology and some of the statistical factors that any poll interpretation should take into account. Chief among those factors is sampling error: the difference between the poll results and the true views of the population.
The 2016 election forced many polling experts to confront other sources of error as well, known collectively as non-sampling errors.
The Roper Center for Public Opinion Research at Cornell University has a great explanation of sampling and non-sampling errors, and I will be using their definitions in this piece to explain how they relate to Civitas Polls and other North Carolina polling.
Coverage error – the “error associated with the inability to contact portions of the population.”
This error has to do with segments of the population that are systematically unreachable for polling. One common example is the homeless population, who may be hesitant to respond to polls or may be unreachable in landline surveys. More recently, reliance on landline polling has created coverage error for younger voters, who are less likely to have landlines. For that reason, Civitas Polls have been increasing their portion of cell phone surveys; the current ratio is 35 percent cell phone to 65 percent landline.
North Carolina is also subject to coverage error during, directly before, and after a hurricane or other natural disaster. An inability to reach certain hurricane-impacted counties can skew the sample toward public opinion in other areas of the state.
Non-response error – “results from not being able to interview people who would be eligible to take the survey”
Slightly different from coverage error, non-response error refers to a systematic refusal by certain groups to take polls. This is the type of error most often blamed for the 2016 presidential polls missing the mark. The idea of the “shy Trump supporter” led many to believe that people who planned to vote for Trump refused to take public opinion polls because they did not want to admit that they were voting for the controversial candidate. There is also the stereotype of Trump supporters who are distrustful of polls and “political insiders,” but who found Trump’s candor refreshing and compelling enough to come out and vote for him.
When it comes to coverage or non-response error, a mathematical method known as “weighting” is sometimes used to align the sample’s demographic makeup with the actual demographics of the target population. Per industry standard, Civitas Polls are sometimes weighted to better reflect population demographics, in order to correct for these non-sampling errors. See the Roper Center poll page for more information.
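To illustrate, weighting can be sketched as a simple post-stratification step: each respondent’s answers count in proportion to the ratio of their group’s share of the target population to its share of the sample. The age groups and shares below are hypothetical illustrations, not actual Civitas Poll figures.

```python
# A minimal sketch of demographic weighting (post-stratification).
# All numbers here are hypothetical, not actual Civitas Poll data.

# Hypothetical age-group shares in the target population vs. the raw sample.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share = {"18-34": 0.18, "35-54": 0.34, "55+": 0.48}

# Weight for each group: population share divided by sample share.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Respondents in the under-covered 18-34 group count for more than one
# "person"; respondents in the over-covered 55+ group count for less.
for group, w in weights.items():
    print(f"{group}: weight {w:.2f}")
```

Under these made-up numbers, a younger respondent’s answers are scaled up (weight above 1) and an older respondent’s scaled down (weight below 1), pulling the weighted sample back toward the population’s actual makeup.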
Measurement error – the “error or bias that occurs when surveys do not measure what they intended to measure.”
Poll question wording has huge significance in determining the validity of the poll results. For this reason, Civitas Polls are carefully vetted by our staff in collaboration with our contracted professional pollsters. When viewing any poll, users should check for leading or confusing language that may result in measurement error within the results.
To some degree, few poll questions can be “perfectly” and universally understandable, since each poll respondent comes to the poll with different experiences and perspectives. For example, take a simple question like, “Do you have a favorable or unfavorable view of socialism?” Older North Carolinians may view this term much differently than their younger counterparts. Even something like “Do you approve or disapprove of President Donald Trump?” can be interpreted differently by different people – are they sharing their feelings about his competency on the job or about him as a person? There are things we, as poll producers, do to mitigate these problems – such as asking a follow-up question with a definition of socialism or asking about personal approval and job approval separately. But not all polls are as diligent, and measurement error is something that any poll consumer should look for in the results.
An example – discussion of August 2019 polling differences
Two Civitas Polls released in August 2019 provide an excellent example of why poll parameters are so important in interpretation. One was the typical monthly statewide Civitas Poll, conducted by Harper Polling on August 1-4 via live caller polling; it had a sample of 500 likely voters and a 4.38 percent margin of error. The other, a special Civitas Poll, was conducted by Survey USA through online polling on August 1-5, with a sample of 2,113 registered voters and a 2.7 percent margin of error.
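These margins of error track sample size. As a rough check — assuming simple random sampling and a proportion near 50 percent, the worst case — the 95 percent margin of error is about 1.96 × √(0.25/n). For the 500-voter statewide poll this gives ±4.38 points, matching the reported figure; for 2,113 respondents it gives about ±2.1 points, slightly smaller than the reported 2.7 percent, which likely reflects weighting or other design adjustments by the pollster.

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case (p = 0.5) 95% margin of error for a simple random sample of size n."""
    return z * math.sqrt(0.25 / n)

print(f"n = 500:  +/-{margin_of_error(500) * 100:.2f} points")   # ~4.38
print(f"n = 2113: +/-{margin_of_error(2113) * 100:.2f} points")  # ~2.13
```

Note that quadrupling the sample size only halves the margin of error, which is why pollsters rarely chase very large samples with expensive live-caller methods.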
The polls came back with vastly different results in some areas. Let’s use the example of the head-to-head presidential match between leading Democratic candidate Joe Biden and Republican President Donald Trump. The statewide Civitas Poll found that Trump held a one-point advantage over Biden (45 to 44) while the special Civitas Poll showed Biden with an eight-point lead over the president (49 to 41). What can we learn from these differences?
The first thing to note is the difference in the sample populations. The statewide poll screens voters based on their likelihood to vote, so that the sample better represents the electorate; the special poll included anyone who was a registered voter. Likely voters are a subset of registered voters and may be substantively different in their views, which may account for some of the discrepancy in results.
Another difference is the polling method. The special poll was conducted online, which could mean that the sample is different in some way than people who would take live caller polls. Which group is more likely to reflect the electorate? That is up to interpretation, but it is something to keep in mind.
It is also worth noting that the special poll’s larger sample size is what produces its smaller margin of error. Online polling is significantly more affordable and probably has a higher response rate than live caller polling, but online survey takers may be significantly different from the population at large (arguably a form of coverage error). The same could be true of live caller polling. Thus, considering polls with differing methodologies can help poll users get a more comprehensive picture of public opinion.
Finally, the statewide poll results show Trump with a slight advantage, but one well within the poll’s margin of error, meaning the result is essentially a statistical tie once the sample results are translated to the views of the larger population of likely voters.
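The margin of error on a candidate’s lead is actually wider than the margin on a single candidate’s share, because the two shares move against each other. One standard approximation (not necessarily the pollster’s own method) uses the multinomial variance of the difference between the two proportions:

```python
import math

def lead_margin_of_error(p1, p2, n, z=1.96):
    """Approximate 95% margin of error on the lead (p1 - p2) between two
    candidates in the same poll, using the multinomial variance of the
    difference: [p1(1-p1) + p2(1-p2) + 2*p1*p2] / n."""
    variance = (p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n
    return z * math.sqrt(variance)

# Statewide Civitas Poll: Trump 45, Biden 44, among 500 likely voters.
lead = 0.45 - 0.44
moe = lead_margin_of_error(0.45, 0.44, 500)
print(f"lead: {lead * 100:.0f} point, MOE on the lead: +/-{moe * 100:.1f} points")
print("statistical tie" if abs(lead) < moe else "statistically significant lead")
```

By this measure the one-point gap is far smaller than the roughly eight-point margin on the lead, reinforcing the “statistical tie” reading.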
No poll is perfect. Every poll comes with limitations and caveats. Understanding the full scope of what polling can and cannot do not only deepens our understanding of this extremely useful public policy tool, it also equips us all to be more responsible and confident poll users.