Polling Election Day Registration: Question Wording and Order

A Guest Post by Emily Shaw

From Amy Fried: I am enormously pleased to welcome Emily Shaw to this blog and thank her for this contribution. Shaw is a political scientist at Thomas College. Like a previous blog post, this one focuses on polling issues raised by the September 7 Rasmussen poll on election day registration in Maine. It is also a helpful guide for anyone weighing good and bad practices for poll questions.

Since survey research is an interest of mine, Amy asked me to write a little bit about my perspective on the wording of survey questions in the context of the recent Maine Heritage Policy Center/Pulse Opinion poll.

I teach survey research to students studying American politics. I feel it's important information for them to have, both so they can design their own survey projects and so they can become intelligent consumers of survey research. The first thing I teach them is how to define "a good poll." Unlike so much in politics, the question of how to define a good poll is not up for debate or compromise. A good poll is simply one that accurately estimates the opinions of the population from which it samples.

Given that accurate estimation is the goal of a good poll, it shouldn't matter whether the opinions a poll reflects are ones the pollster agrees with. The opinions themselves, and what they say about the fortunes of particular issues or candidates, are not the point. Rather, the point is to create a snapshot of public opinion accurate enough to predict other things, like voting behavior in an upcoming election. A good poll transcends the fact that it is an entirely artificial construct (a telephone interruption in the middle of your dinner, a set of intrusive questions on a subject you weren't even thinking about at the time) to tap into something real and consistent in your thinking about a specific political subject.
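To make "accurate estimation" concrete, here is a minimal sketch of the conventional 95 percent margin of error for a simple random sample. The sample size and poll result below are illustrative assumptions, not figures from any poll discussed here:

```python
import math

# Illustrative assumptions: a simple random sample of 500 likely voters,
# 55% of whom say they favor some policy. These numbers are hypothetical.
n = 500          # sample size
p_hat = 0.55     # sample proportion answering "favor"

# Standard error of a sample proportion, and the conventional
# 95% margin of error (1.96 standard errors).
se = math.sqrt(p_hat * (1 - p_hat) / n)
moe = 1.96 * se

print(f"Estimate: {p_hat:.0%} +/- {moe:.1%}")
# Estimate: 55% +/- 4.4%
```

Note that this margin of error quantifies only random sampling error. It says nothing about the systematic bias introduced by poor sampling or leading questions, which is the subject of the rest of this post.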

The flip side of this observation is that a bad poll is one that does not accurately estimate the opinions of the sampled population. Instead of giving you a clear picture of the whole population's opinion, a bad poll may tell you more about the polling organization's opinions, or about the opinions of a subset of the sampled population, or it may be entirely garbage. One way an organization can produce a bad poll is by employing poor sampling technique; Amy spoke to some of these points in her earlier posting. (Everybody reading this is probably already aware that opt-in polls, which sample opinions by having viewers call in to a number or fill out an optional online survey, are entirely useless for estimating the opinions of a larger population.) The other major way a poll can be bad is by using questions that lead respondents to answer in ways that don't reflect their "true" opinion; in other words, questions whose answers don't predict future voting behavior.
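To see why self-selection is so damaging, consider a minimal simulation sketch. All of the population numbers below are invented for illustration; the point is only that when intensity of opinion drives who responds, the opt-in estimate lands far from the population value while a random sample of the same size stays close:

```python
import random

random.seed(1)

# Hypothetical population of 100,000 people: 60% hold opinion "A".
# People who hold "A" opt in to a call-in poll 2% of the time; people
# who hold "B" feel more strongly and opt in 10% of the time.
# All of these numbers are invented for illustration.
population = ["A"] * 60_000 + ["B"] * 40_000
opt_in_rate = {"A": 0.02, "B": 0.10}

# Opt-in "sample": whoever chooses to respond.
opt_in = [p for p in population if random.random() < opt_in_rate[p]]

# Simple random sample of the same size from the full population.
srs = random.sample(population, len(opt_in))

true_share = population.count("A") / len(population)
print(f"True share holding A:   {true_share:.1%}")                      # ~60%
print(f"Opt-in estimate:        {opt_in.count('A') / len(opt_in):.1%}") # ~23%
print(f"Random-sample estimate: {srs.count('A') / len(srs):.1%}")       # ~60%
```

Collecting more opt-in responses does not fix this, because the bias comes from who chooses to answer, not from how many do.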

Bad questions take a number of forms. The American Association for Public Opinion Research has a nice review of some of the most common varieties. Questions are bad when they confuse the respondent, are too verbose, or contain a tricky word pattern that makes it hard for the respondent to know exactly what "yes" and "no" mean as responses. Answers to this kind of bad question are likely to be somewhat random rather than systematically reflecting true variation in the population.

I believe that polls about election-day registration are generally vulnerable to this problem because of the likelihood of double-negative question forms. "Do you support or oppose the elimination of same-day registration?" is simply a complicated question, because "support" means eliminating it while "oppose" means keeping it. This is a common problem for popular referendums, however, and not specific to the voter registration issue.

Other kinds of questions, however, are bad because they produce too predictable a response. Leading questions are formulated to steer a respondent toward a "correct" answer. There are different ways to make a question leading, and some are subtle. The primacy effect leads people to disproportionately agree with the first option offered, especially when presented with a list of choices. A slightly more obvious means of leading respondents is to offer answer options that poorly reflect the true range of existing opinion, artificially constraining the answers respondents can give relative to how they would answer an open-ended question. You can see the effect of leading through poor response options in question 3 of the recent MHPC/Pulse Opinion poll, which asked:

“Which is more important to you, protecting against voter fraud or increasing voter turnout?”

Since “increasing voter turnout” isn’t a framing that people have heard in the popular discussion about this issue, it doesn’t provide an easy option for people who don’t feel strongly about “protecting against voter fraud.”

You can observe the poor fit of this option in the fact that three times as many people answered "don't know" as in the previous questions. (My guess is that the response option "preventing voter exclusion" would have been more popular.)

The clearest forms of leading questions are those which provide specific cues to respondents about the “right” answer to a question. These cues can be positive adjectives that point to one of the response options. They can also be mentions of authoritative figures, which in the context of a survey question tend to lead respondents to agree with those authority figures. In the recent MHPC/Pulse Opinion poll, for example, the first question asked was:

“The Maine legislature recently discontinued the practice of same-day voter registration, in order to give municipal clerks time to verify that registrants meet residence and legal requirements to vote. Opponents claim that this change will make it more difficult to vote, while supporters say the trade-off is necessary to protect against election fraud. Do you favor or oppose the elimination of same-day registration?”

By both invoking an authority figure ("the Maine legislature") and highlighting a positive argument for its action ("to give municipal clerks time [to do their job]"), this question strongly promotes an affirmative response. The effect is compounded by the wording of the second section of the question, where "opponents" (a loaded word) merely make a "claim," suggesting something uncertain, while "supporters say" the change is "necessary," suggesting certainty.

Question order is another critical component in leading respondents to a particular response. Repeated questions in a particular domain lead respondents to answer in terms of that domain. The MHPC/Pulse Opinion poll is a little difficult to read for question order because the organizations did not release their entire set of polling questions. We know that they conducted a likely voter screen and gathered demographic information, but those questions aren't presented on the questionnaire, which makes it harder to know whether they in fact released all of their substantive questions when presenting their results. If we take the questions they did release at face value, however, they asked four questions in a row about election fraud. Moreover, the questions tapped different varieties of fraud, enhancing the broad salience of "fraud" in the respondent's mind: while the primary purpose of this poll was to discover Maine opinion on election-day registration laws, the poll's second question concerned photo ID, an entirely different kind of fraud issue. Repeated questions about fraud should make a respondent more likely to express concern about fraud than would ordinarily be the case.

Polls conducted by advocacy organizations automatically require close scrutiny. I tell my students to pay attention to who sponsored a poll, because advocacy organizations have a compelling interest in releasing information favorable to their position and in withholding information that isn't. But what kind of scrutiny should we apply? To investigate a poll's reliability, we must look closely at whether it is likely to produce accurate findings. Question wording, question order, and sampling technique all affect the likelihood that a poll will accurately reflect the sampled population. In the case of the MHPC/Pulse Opinion poll, the several problems reviewed above suggest that it is unlikely to closely match the uncued, "true" opinion of likely Maine voters.

Many thanks to Amy for the opportunity to talk about this on her blog!
