Reporting Information and Weighting

All information about the Elon University Poll that is released to the public conforms to reporting conventions recommended by the American Association for Public Opinion Research (AAPOR). The raw datasets from the Elon University Poll are owned and maintained by Elon University.

Results from the Elon University Poll are typically weighted by race, gender, age, and other relevant demographic characteristics. Decisions to weight survey results are based on how demographic characteristics are represented in the sample relative to their distribution in the population (i.e., we use post-stratification adjustments). Weighting survey results is a common statistical procedure that adjusts underrepresented elements in the sample to conform to population parameters for those elements. Information about the weighting of survey samples for each poll is provided on the Elon University Poll website (under the heading ‘demographic variables’).
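To illustrate the idea, the sketch below shows a minimal post-stratification adjustment in Python for a single demographic variable. The population shares are hypothetical placeholders, and the poll's actual weighting procedure uses more variables and its own targets.

```python
from collections import Counter

# Minimal post-stratification sketch: weight each respondent so the
# sample's demographic composition matches known population shares.
# The population shares below are illustrative, not actual parameters.
respondents = [
    {"id": 1, "gender": "female"}, {"id": 2, "gender": "female"},
    {"id": 3, "gender": "female"}, {"id": 4, "gender": "male"},
]
population_share = {"female": 0.52, "male": 0.48}  # assumed targets

n = len(respondents)
counts = Counter(r["gender"] for r in respondents)
sample_share = {g: c / n for g, c in counts.items()}

for r in respondents:
    # Weight = population share / sample share for the respondent's cell,
    # so underrepresented cells receive weights above 1.
    r["weight"] = population_share[r["gender"]] / sample_share[r["gender"]]

print([round(r["weight"], 2) for r in respondents])  # [0.69, 0.69, 0.69, 1.92]
```

Weighted estimates then use these weights, e.g., a weighted proportion is sum(w_i * y_i) / sum(w_i).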

Question Construction and Question Order

With every release, the Elon University Poll provides a detailed toplines report listing the questions as worded and the order in which they are administered to respondents. In an effort to provide neutral, unbiased questions, we observe conventional question wording and question order protocols in all of our polls. Though by no means exhaustive, examples of such protocols and practices include: avoiding jargon and ambiguous terms; avoiding priming or leading questions; wording questions succinctly and specifically; avoiding ‘double-barreled’ questions that ask about multiple topics at once; and ensuring reasonable response options that conform to the topic of the question.

Information contained within brackets ( [ ] ) denotes response options as provided in the question. These bracketed response options are rotated randomly, so that no set order of options is presented to respondents. Random rotation maintains question construction integrity by guarding against response-order effects (i.e., recency or primacy effects). For example, with a set order of candidate names, the person mentioned first or last may be selected more often than others simply because of the position of the name; alternating the order of options protects against option order influencing a person’s response, which would otherwise bias results for that question. Response options for questions about demographic (background) characteristics, e.g., age, education, income, are, however, generally presented in a set order.
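As a concrete sketch, the Python snippet below rotates a list of response options from a random starting point for each respondent. The option labels are hypothetical, and full random shuffling is an equally common variant of the same idea.

```python
import random

# Rotate response options from a random start for each respondent so that
# no fixed order is read aloud. Option labels are hypothetical placeholders.
def rotate_options(options, rng=random):
    start = rng.randrange(len(options))       # random starting position
    return options[start:] + options[:start]  # cyclic rotation

candidates = ["Candidate A", "Candidate B", "Candidate C"]
print(rotate_options(candidates))  # e.g., ['Candidate B', 'Candidate C', 'Candidate A']
```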

Some questions in our surveys use a probe maneuver to determine a respondent’s intensity of opinion. The probe techniques used in our questionnaires consist mainly of asking respondents whether their response is more intense than the simple dichotomous option initially provided. For example, in a question that elicits a satisfaction/dissatisfaction response, the respondent, upon indicating whether s/he is satisfied or dissatisfied, is asked a follow-up question that probes for intensity: “Would you say you are very [satisfied/dissatisfied]?” This technique aids respondents by simplifying the interpretation, recall, and judgment required to answer a question.
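A minimal sketch of this two-step probe logic follows, assuming a hypothetical ask() callable standing in for the interviewer prompt; coding the non-intense branch as “somewhat” is an assumption for illustration, not the poll’s documented scheme.

```python
def administer_satisfaction(ask):
    """Two-step question with an intensity probe.

    `ask` is a hypothetical callable standing in for the CATI prompt:
    it takes a question and the allowed responses and returns one response.
    """
    initial = ask("Are you satisfied or dissatisfied?",
                  ["satisfied", "dissatisfied"])
    probe = ask(f"Would you say you are very {initial}?", ["yes", "no"])
    # Coding the "no" branch as "somewhat" is illustrative only.
    return f"very {initial}" if probe == "yes" else f"somewhat {initial}"

# Simulated respondent answers make the sketch runnable:
answers = iter(["satisfied", "yes"])
print(administer_satisfaction(lambda q, opts: next(answers)))  # very satisfied
```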

Respondents oftentimes volunteer responses that were not provided to them as options. Because we typically offer only the response options presented in the question, some respondents ignore these explicit options and volunteer another response. Where such responses can be anticipated, the more common ones are noted in the toplines by a lowercase ‘v’ in parentheses: (v). Not all volunteered options can be anticipated or accommodated.

In telephone surveys, we typically do not tell respondents that ‘don’t know’ is an option for most questions; we do, however, record this response should it be offered. If a respondent indicates s/he has no opinion because s/he does not know how to respond, the interviewer codes the response as ‘don’t know’ and proceeds to the next question. For questions involving sensitive or less salient topics, we often offer an option that permits respondents to comfortably acknowledge lack of interest or attention, or little or no knowledge or awareness of a topic. As explained previously, this option is provided as part of the question presented to the respondent.

Telephone Survey Methodology

Our telephone surveys are conducted using a stratified random sample of households with telephones and wireless (cell) telephone numbers in the population of interest; in most cases this means citizens in North Carolina. We do at times survey citizens in other South Atlantic states (e.g., Florida, Georgia, South Carolina, and Virginia). Samples of telephone numbers for our surveys are purchased from Dynata.

Selection of Households

To equalize the probability of telephone selection, sample telephone numbers are systematically stratified according to subpopulation strata (e.g., a zip code, a county, a state), which yields a sample from telephone exchanges in proportion to each exchange’s share of telephone households in the population of interest. Estimates of telephone households in the population of interest are generally obtained from several databases. Samples of household telephone numbers are distributed across all eligible blocks of numbers in proportion to the density of listed households within each specified subpopulation stratum.

Upon determining the projected (or preferred) sample size, a sampling interval is calculated by summing the number of listed residential numbers in each eligible block within the population of interest and dividing that sum by the number of sampling points assigned to the population. From a random start between zero and the sampling interval, blocks are systematically selected in proportion to the density of listed household “working blocks.” A block (also known as a bank) is a set of contiguous numbers identified by the first two digits of the last four digits of a telephone number; a working block contains three or more working telephone numbers. Exchanges are assigned to a population on the basis of all eligible blocks in proportion to the density of working telephone households. Once each population’s proportion of telephone households is determined, a sampling interval based on that proportion is calculated, and specific exchanges and numbers are randomly selected.
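The sketch below implements this interval logic in Python with fabricated block counts: the interval is the total of listed residential numbers divided by the number of sampling points, and blocks are hit from a random start in proportion to their density of listed numbers.

```python
import random

# Systematic proportional selection sketch. Counts are fabricated:
# each entry maps a two-digit block to its number of listed residential
# numbers within the population of interest.
blocks = {"00": 12, "01": 40, "02": 0, "03": 25, "04": 33}

sampling_points = 4
total_listed = sum(blocks.values())
interval = total_listed / sampling_points

start = random.uniform(0, interval)  # random start within the first interval
targets = [start + k * interval for k in range(sampling_points)]

# Walk cumulative counts; a block is selected once per target falling in
# it, so denser blocks are chosen in proportion to their listed numbers.
selected, cumulative = [], 0
it = iter(sorted(blocks.items()))
block, count = next(it)
for t in targets:
    while cumulative + count <= t:
        cumulative += count
        block, count = next(it)
    selected.append(block)
print(selected)  # e.g., ['00', '01', '03', '04']
```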

The wireless component of the study sample starts with determining which area code-exchange combinations in North Carolina are included in the wireless or shared Telcordia types. As with the selection of household telephone numbers, selecting wireless numbers involves a multi-step process in which blocks of numbers are determined for each area code-exchange combination in those Telcordia types. From a random start within the first sampling interval, a systematic nth selection of each block of numbers is performed, and a two-digit random number between 00 and 99 is appended to each selected nth block stem. The intent is to provide a stratification that yields a sample that is representative both geographically and by large and small carrier. From these stems, a random sample of numbers is generated.
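A small sketch of the suffix step follows, with fabricated block stems: a two-digit random number between 00 and 99 is appended to each selected stem (area code, exchange, and the first two of the last four digits) to form a complete number.

```python
import random

# Append a random two-digit suffix (00-99) to each selected block stem.
# The stems below are fabricated for illustration only.
def complete_number(stem, rng=random):
    return stem + f"{rng.randrange(100):02d}"

selected_stems = ["33655512", "91955507", "70455588"]  # fabricated
print([complete_number(s) for s in selected_stems])
```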

Because exchanges and numbers are randomly selected, unlisted as well as listed numbers are included in the sample. Thus, the sample of telephone numbers generated for the population of interest constitutes a random sample of telephone households and wireless numbers of the population.

Procedures Used for Conducting the Poll

The Elon University Poll typically conducts surveys on a Monday through Thursday schedule. Calls are made from 6:30 p.m. to 9:00 p.m. during the week. The specific times and dates are delineated for each survey conducted.

The Elon University Poll uses CATI (Computer Assisted Telephone Interviewing) system software for the administration of surveys. Multiple attempts (up to three) are made to reach each working telephone number in the sample. Only individuals 18 years of age or older are interviewed; individuals reached at business or work numbers are not interviewed. For each number reached, one adult is generally selected on the basis of being the oldest or youngest adult at home at the time of the call. Interviews, which are conducted by paid, live interviewers, are completed with adults from the target population as specified. A survey is considered complete if a respondent answers at least 80 percent of the survey questions. Most surveys yield 500-600 completed interviews with adults from the target population (e.g., North Carolinians for a survey of North Carolina residents).

For a sample size of 500, there is a 95 percent probability that our survey results are within plus or minus 4.5 percentage points (the margin of sampling error) of the actual population value for any given question; for a sample size of 600, the margin of sampling error is 4.1 percentage points. For subsamples (subgroups selected from the overall sample), the margin of error is higher and depends on the size of the subsample. When we report results from a subsample, we identify them as such and provide the total number of respondents and the margin of error for that subsample. Because our surveys are based on probability sampling, a variety of factors prevent these results from being perfect, complete depictions of the population; the foremost is the margin of sampling error noted above. As with all probability samples, there are theoretical and practical difficulties in estimating population characteristics (or parameters). We make efforts to reduce or lessen sampling error, as well as other types of error associated with survey research; such error effects are present in all surveys derived from probability samples and, while the list is not all-inclusive, examples include non-response, question order effects, and question wording effects.
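For reference, the margin of sampling error at 95 percent confidence for a proportion can be computed as below, using the conservative p = 0.5. The simple formula gives roughly 4.4 points for n = 500 and 4.0 for n = 600; the reported 4.5 and 4.1 presumably reflect rounding conventions or small design adjustments.

```python
import math

# Margin of sampling error at 95% confidence for a proportion, using
# the conservative p = 0.5 and the normal critical value z = 1.96.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (500, 600):
    print(n, round(100 * margin_of_error(n), 1))  # 500 -> 4.4, 600 -> 4.0
```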