Lesson on Statistical Evidence in Argumentation – Evaluating Survey Accuracy

Sampling & Evaluating Survey Accuracy

Posted by Keren Wang, 2024

*Before starting this lesson, make sure you have reviewed: Statistical Evidence: Survey and Opinion Polling*

Sampling Error and Polling Accuracy Case Study: 2016 U.S. Presidential Election

During the 2016 U.S. Presidential Election, many national polls predicted a victory for Democratic candidate Hillary Clinton. While Clinton won the popular vote by around 2.1%, Republican candidate Donald Trump won the Electoral College, securing the presidency. [1]

Image: Huffington Post’s 2016 US Presidential Election prediction updated on the eve of the election day.

The challenges pollsters faced with predicting the 2016 election shed light on a common problem with using statistics in arguments: numbers can give us a false sense of certainty. Stats are powerful—they carry authority and can seem “objective”—but they’re rarely as clear-cut as they appear. In reality, they often come with hidden biases and assumptions that may not capture the full picture. [2]

In 2016, pollsters and analysts leaned heavily on polling data to forecast the election outcome, treating the numbers almost like a science. But they didn’t account for factors like sampling errors, social desirability bias, and last-minute changes in voter sentiment, all of which skewed the predictions. The result? A widespread belief that one outcome was nearly certain—until it wasn’t.

This reliance on numbers to tell a definitive story shows how easy it is to be misled by the “authority” of stats. It’s a reminder that while statistical evidence can be persuasive, it’s not infallible. To use data responsibly in arguments, we need to present it with a little humility, recognizing its limitations and the need to pair it with other forms of analysis. Instead of seeing numbers as the whole truth, we should treat them as one piece of the puzzle, open to interpretation and discussion. [3]

Evaluating the Use of Polling Evidence in 2016 US Presidential Election

1. Non-Response Bias

Impact: Non-response bias occurs when certain demographics are less likely to respond to polls, which can skew results. In 2016, many polls underrepresented rural and working-class voters, groups that tended to favor Donald Trump. These groups were harder to reach and less likely to respond to traditional polling methods. [4]

Problematic Use in Argumentation: Analysts and commentators who used these poll results often overlooked or underestimated the impact of this underrepresentation. News networks frequently relied on results from similar polling agencies, creating a feedback loop that reinforced a constructed “reality” of Clinton’s expected victory. This effect was further amplified as media outlets fed off each other’s election news stories and headlines, creating a narrative that appeared authoritative but was actually based on incomplete data. This collective overconfidence in Clinton’s chances contributed to a misleading perception that didn’t reflect the complexities and variances among voter demographics.

2. Late Shifts in Voter Preferences

Impact: Many voters made up their minds close to Election Day, influenced by last-minute campaign events, media coverage, or debates. Polling, however, generally stops a few days before the election, often missing these late shifts. In 2016, a significant portion of undecided voters shifted toward Trump in the final days, which wasn’t captured in most polling. [5] The reasons for this shift are complex, but one contributing factor may have been social desirability bias—some Trump supporters may not have honestly disclosed their preferences to pollsters, fearing negative judgment from their friends and family members. As a result, these voters remained hidden within the undecided category, skewing polling data away from an accurate portrayal of support for Trump. [6]

Problematic Use in Argumentation: This late shift was largely invisible in the polling data, leading analysts to underestimate Trump’s chances. When using this data for argumentation, commentators tended to either overlook the intrinsic time constraint of surveying, or erroneously assume that the voters who were undecided would either not vote or distribute evenly across candidates. [7] This assumption failed to account for the unpredictability of undecided voters, ultimately leading to faulty conclusions.

3. Sampling Error

Impact: Sampling error, the statistical error that occurs when a poll’s sample does not perfectly represent the population, was especially impactful in closely divided states. In 2016, even minor errors in states like Michigan, Wisconsin, and Pennsylvania, where the polls showed narrow leads for Clinton, contributed to a misleading picture. The underestimated support for Trump in these states shifted the Electoral College outcome in his favor. [8]

Studies have found that 2016 election polls across the board suffered from a range of sampling problems, often collectively referred to as “total survey error.” Reported margins of error typically capture only sampling variability and ignore systemic sampling errors, such as uncertainty in defining the target population, particularly regarding who is likely to vote. An empirical analysis of the 2016 US presidential election cycle found that average survey error was around 3.5 percentage points, approximately double what most reported margins of error implied. Polls also erred predominantly on the side of overestimating Clinton’s performance, partly because major polling organizations made similar unwarranted assumptions about voter demographics. This shared “meta-bias” compounds inaccuracies further, especially in close races such as the 2016 election. [9]

Problematic Use in Argumentation: Polling margins of error are often presented as minor uncertainties, with little impact on the overall narrative. In 2016, this assumption was problematic because the race in key states was so close that even a small sampling error could, and did, shift the predicted outcome. The statistics carried an aura of scientific objectivity, which masked underlying biases and imperfect assumptions that remained tacit and hidden behind cold numbers. News media perpetuated this misconception by over-relying on the seemingly definitive value of these numerical data, interpreting polling results as if they offered predictable and accurate insights. [10] This contributed to overconfidence in Clinton’s prospects and led commentators to misjudge the actual electoral dynamics in crucial swing states.

The Problem of Sampling

Sampling is the process of selecting a subset of individuals from a larger population to make inferences about that population. Effective sampling ensures that survey findings are representative and reliable.

The goal of sampling is to accurately reflect the larger population’s characteristics by selecting a group that is both representative and adequately sized. This section of the reading covers three primary sampling methods used to create a sample that reflects the diversity and characteristics of the population being surveyed: simple random sampling, stratified random sampling, and clustered sampling:

Simple Random Samples

A simple random sample gives each member of the population an equal chance of being selected. This method involves a straightforward, single-step selection process.

Process

Researchers assign a number to each individual in the population and use a random number generator or a table of random numbers to select individuals.

Example

In a survey of university students’ media consumption habits, a researcher may use a list of all enrolled students, assign each a number, and then use a random number generator to pick students for the survey.

Benefits

This method helps prevent bias since each individual has an equal opportunity to be included. It’s often the most representative sampling method if done correctly.

Limitations

Simple random sampling can be challenging and time-consuming when dealing with large populations, as researchers need an accurate, complete list of all members.

Stratified Random Samples

Stratified random sampling involves dividing the population into subgroups (strata) based on relevant characteristics and then randomly selecting individuals within each stratum.

Process

Researchers identify categories (e.g., age, gender, ethnicity), divide the population accordingly, and randomly sample individuals from each category. This ensures each subgroup is adequately represented.

Example

If a study examines the impact of social media on mental health among high school students, researchers might stratify the sample by grade level (e.g., freshman, sophomore) and then randomly select students within each grade level to participate.

Benefits

This method increases precision by ensuring the sample reflects key population characteristics, making it valuable for ensuring representation across specific groups.

Limitations

Stratified sampling requires additional time and resources to divide the population and select individuals within each subgroup. It assumes that researchers know which subgroups are relevant to the study.

Clustered Samples

Clustered sampling selects groups (clusters) rather than individual members, which is useful for large, widely distributed populations or when a complete list of members is impractical.

Process

Researchers divide the population into naturally occurring clusters, such as geographical locations, and then randomly select clusters. Within each cluster, they may survey all members or randomly choose individuals.

Example

In a survey on internet access across a large city, researchers might select certain neighborhoods as clusters and then survey individuals within those neighborhoods.

Benefits

Cluster sampling saves time and reduces travel costs, especially for geographically dispersed populations. It’s often more practical and economical for large-scale studies.

Limitations

Clustered sampling can lead to sampling bias if clusters are not representative of the overall population and is generally less precise than other methods due to potential similarities within clusters.

Obtaining Samples

Random sampling ensures that each member of the population has a known, non-zero chance of being selected, minimizing bias and improving the representativeness of the sample. Here’s how the process works in the three sampling methods discussed earlier:

1. Simple Random Sampling

Researchers create a list of every individual in the population and assign a sequential number to each. Using a table of random numbers or software, they randomly select numbers corresponding to individuals.

  • Population Members: Imagine 100 individuals in a line, represented as gray dots labeled from 1 to 100 as shown below.
  • Random Sample Selection: 15 individuals (most commonly selected via computer-generated random numbers) are highlighted in blue among the gray dots, showing how a simple random sample is chosen without any grouping or structure.

This method works best with smaller, manageable populations where researchers have full access to an accurate population list. [11]

Simple Random Sampling
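The selection step described above can be sketched in a few lines of Python. The 100-member frame and sample of 15 mirror the figure; in practice the frame would be a real enrollment or voter list rather than consecutive integers:

```python
import random

random.seed(42)  # fixed seed so this illustration is reproducible

# Hypothetical sampling frame: population members numbered 1 through 100.
population = list(range(1, 101))

# Draw a simple random sample of 15 without replacement -- every member
# has an equal chance of being selected.
sample = random.sample(population, 15)

print(sorted(sample))
```

Because `random.sample` draws without replacement, no individual can appear in the sample twice.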

2. Stratified Random Sampling

This method typically involves dividing the population into distinct groups or strata based on relevant characteristics (such as age, income, education level). Within each stratum, a simple random sample is then conducted.

By sampling within each group, researchers can control for potential influences that specific characteristics might have on the survey’s findings, thereby increasing the sample’s representativeness. [12]

Stratified Random Sampling
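The two-step process can be sketched as follows, using a hypothetical student roster with a grade field as the stratifying characteristic:

```python
import random

random.seed(0)  # reproducible illustration

# Hypothetical roster: 400 students spread evenly across four grade levels.
grades = ["freshman", "sophomore", "junior", "senior"]
roster = [{"id": i, "grade": grades[i % 4]} for i in range(400)]

# Step 1: divide the population into strata by grade.
strata = {g: [s for s in roster if s["grade"] == g] for g in grades}

# Step 2: draw a simple random sample within each stratum
# (proportional allocation: 10% of each grade).
sample = []
for grade, members in strata.items():
    sample.extend(random.sample(members, len(members) // 10))

print(len(sample))  # 10 students per grade, 40 total
```

Proportional allocation keeps each grade’s share of the sample equal to its share of the population; other allocation rules (e.g., equal counts per stratum) are possible when small subgroups need extra precision.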

3. Clustered Sampling

When it is impractical to list every individual in a large population, researchers divide the population into clusters, often based on geographic or organizational divisions.

They randomly select entire clusters and survey individuals within those selected clusters. This can involve surveying everyone in each cluster or using random sampling within clusters for larger groups. [13]

Clustered Sampling
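A minimal sketch of the two-stage design, assuming hypothetical neighborhoods as clusters:

```python
import random

random.seed(1)  # reproducible illustration

# Hypothetical city: 20 neighborhoods (clusters) of 50 residents each.
clusters = {f"neighborhood_{n}": [f"resident_{n}_{i}" for i in range(50)]
            for n in range(20)}

# Stage 1: randomly select whole clusters rather than individuals.
chosen = random.sample(sorted(clusters), 4)

# Stage 2: survey every member of each selected cluster (for larger
# clusters, a random sub-sample within each cluster could be used instead).
sample = [person for name in chosen for person in clusters[name]]

print(len(sample))  # 4 clusters x 50 residents = 200
```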

Evaluating Survey Accuracy

This section explores three critical factors for assessing survey accuracy: Sample Size, Margin of Error, and Confidence Level. Understanding these elements helps researchers determine the reliability of their survey results and interpret findings with appropriate caution.

1. Sample Size

Definition: The number of individuals or units chosen to represent the population in the survey.

Importance: Larger samples generally provide more accurate data. The relationship between sample size and accuracy follows the law of diminishing returns, meaning that after a certain point, increases in sample size result in only minor improvements in accuracy.

Key Concept: Sampling Error decreases as sample size increases. However, the increase in precision grows smaller as the sample size becomes very large.

Example: Imagine researchers want to understand coffee preferences across a city with 100,000 residents. They conduct a survey to find out what percentage of residents prefer iced coffee over hot coffee:

  • The researchers initially surveyed 100 residents and found that 60% prefer iced coffee. However, with only 100 people surveyed out of 100,000, this small sample has a higher margin of error, potentially making the results less representative of the entire population.
  • To get a more precise estimate, they increase the sample size to 1,000 people, which lowers the margin of error. As the sample size grows, the accuracy of the result improves, giving a clearer picture of the true percentage of residents who prefer iced coffee.

2. Margin of Error

Definition: The range within which the true population parameter is expected to fall, considering sampling variability.

Role in Surveys: The margin of error shows the possible deviation between the survey’s results and the actual population values.

Calculation: It’s derived from the standard error and sample size, reflecting how representative the sample is of the population.

Example : In the same coffee preferences survey scenario:

  • With the 100-person survey, they might have a margin of error of ±10% (at 95% confidence level), meaning the true preference for iced coffee could be anywhere between 50% and 70%.
  • With the 1,000-person survey, the margin of error decreases to ±3% (at 95% confidence level), so they can now be more confident that the true preference is between 57% and 63%.
  • With a 2,000-person survey, the margin of error further goes down to ±2% (at 95% confidence level).
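These figures follow from the standard normal-approximation formula for a proportion, MoE = z·√(p(1−p)/n). A quick sketch using the running 60% iced-coffee example (the function name is illustrative; note this captures only sampling variability, not the systemic errors discussed earlier):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a sample proportion.

    z = 1.96 corresponds to a 95% confidence level. This reflects
    sampling variability only, not non-response or coverage errors.
    """
    return z * math.sqrt(p * (1 - p) / n)

p = 0.60  # observed share preferring iced coffee
for n in (100, 1000, 2000):
    print(f"n={n}: ±{margin_of_error(p, n):.1%}")
```

The computed values (about ±9.6%, ±3.0%, and ±2.1%) round to the ±10%, ±3%, and ±2% figures quoted above.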

3. Confidence Level

Definition: The degree of certainty that the population parameter lies within the margin of error.

Common Confidence Levels: 95% confidence is standard, meaning that if the survey were repeated many times, about 95% of the resulting confidence intervals would capture the true population value.

Confidence Interval: This is the range constructed around the survey result to indicate where the true population parameter is likely to be, given the confidence level.

Example: In the coffee preferences survey scenario:

  • 95% Confidence Level (C.L.): The researchers can be 95% confident that the true percentage of iced coffee preference lies within their calculated margin of error (±3%).
  • 99% C.L.: If they want to be even more certain, they could use a 99% confidence level, increasing the margin of error to ±4% for the 1,000-person survey.
  • To maintain the same margin of error at a 99% confidence level, a larger sample size would be required, such as 2,000 people to achieve a ±3% margin of error.
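The trade-off can be checked with the same normal-approximation formula by swapping in the z-score for each confidence level (1.96 for 95%, 2.576 for 99%); the numbers below are a sketch using the running 60% iced-coffee example:

```python
import math

Z = {"95%": 1.96, "99%": 2.576}  # standard normal quantiles
p, n = 0.60, 1000

# Higher confidence level -> wider margin of error at the same sample size.
for level, z in Z.items():
    moe = z * math.sqrt(p * (1 - p) / n)
    print(f"{level} CL, n={n}: ±{moe:.1%}")

# Sample size needed to hold a ±3% margin at 99% confidence.
target = 0.03
n_needed = math.ceil((Z["99%"] / target) ** 2 * p * (1 - p))
print(f"n for ±3% at 99% CL: {n_needed}")
```

This gives roughly 1,770 respondents, in line with the text’s round figure of 2,000 (real designs typically pad the target to allow for non-response).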

Sample Size and Margin of Error Chart

Chart: The chart above illustrates the relationship between sample size and margin of error across different confidence levels (95% and 99%).

As sample size increases, the margin of error decreases, making the survey more precise. Higher confidence levels (e.g., 99%) result in a larger margin of error, meaning we can be more confident in the results but within a wider range. The diminishing effect of increasing sample size shows that the margin of error decreases rapidly with smaller samples but flattens out at higher sample sizes.


Further Reading

1. Kennedy, Courtney, et al. “An Evaluation of the 2016 Election Polls in the United States.” Public Opinion Quarterly 82, no. 1 (2018).

2. Bon, Joshua J., Timothy Ballard, and Bernard Baffour. “Polling Bias and Undecided Voter Allocations: US Presidential Elections, 2004-2016.” Journal of the Royal Statistical Society Series A: Statistics in Society 182, no. 2 (2019): 467-493.

3. Wright, Fred A., and Alec A. Wright. “How surprising was Trump’s victory? Evaluations of the 2016 US presidential election and a new poll aggregation model.” Electoral Studies 54 (2018): 81-89.

4. Battersby, Mark. “The Rhetoric of Numbers: Statistical Inference as Argumentation.” (2003).

5. Hoeken, Hans. “Anecdotal, statistical, and causal evidence: Their perceived and actual persuasiveness.” Argumentation 15 (2001): 425-437.

6. Giri, Vetti, and M. U. Paily. “Effect of scientific argumentation on the development of critical thinking.” Science & Education 29, no. 3 (2020): 673-690.

7. Gibson, James L., and Joseph L. Sutherland. “Keeping your mouth shut: Spiraling self-censorship in the United States.” Political Science Quarterly 138, no. 3 (2023): 361-376.

8. Roeh, Itzhak, and Saul Feldman. “The rhetoric of numbers in front-page journalism: how numbers contribute to the melodramatic in the popular press.” Text-Interdisciplinary Journal for the Study of Discourse 4, no. 4 (1984): 347-368.

9. Ziliak, Stephen T., and Ron Wasserstein. “One Thing About… the Rhetoric of Statistics.” CHANCE 36, no. 4 (2023): 55-56.

Lesson on Statistical Evidence – Survey and Opinion Polling


Posted by Keren Wang, 2024

This lesson covers using statistical evidence, specifically surveys and opinion polls, in argumentation.

Surveys, also known as opinion polls, are designed to represent the views of a population by posing a series of questions to a sample group and then extrapolating broader trends or conclusions in quantitative terms. [1] [2]  For example, in an election year, polling agencies might survey a diverse group of registered voters to gauge public support for different candidates, often reporting findings in percentages to indicate the overall popularity of each option. These results help forecast likely outcomes and provide insights into public sentiment on key issues. However, like all statistical evidence, opinion poll results are susceptible to inaccuracies and distortions, which can be misused or misinterpreted.

In this lesson, we will examine various types of surveys, their advantages and limitations, and their appropriate applications.

There are two basic types of surveys – descriptive and analytic:

Survey Types

Descriptive Survey
  • Purpose: Document and describe the characteristics, behaviors, or attitudes of a specific population at a given time.
  • Focus: Covers demographic data, behavior frequency, or opinions and preferences.
  • Use in Communication Research: Useful for establishing a snapshot of audience preferences or public opinion.
  • Example 1: Survey examining social media platform usage among college students.
  • Example 2: Survey by a PR firm to gauge public perceptions of a corporate brand.

Analytic Survey
  • Purpose: Goes beyond description to investigate why certain patterns or behaviors occur.
  • Focus: Tests hypotheses by examining relationships between variables.
  • Use in Communication Research: Used to analyze how communication factors affect audience attitudes and behaviors.
  • Example 1: Survey studying the relationship between crisis communication and trust in government.
  • Example 2: Survey exploring whether exposure to diverse news sources affects political polarization.

Advantages and Limitations of Surveys

Advantages

  • Anonymity: Anonymous surveys can increase comfort in sharing personal or sensitive information. Respondents may feel more at ease providing honest answers when their identities are not disclosed, leading to more accurate data. [3]
  • Broad Reach: Surveys / opinion polls allow researchers to reach large, diverse populations. For instance, online surveys can be distributed globally, allowing for data collection from a wide audience. [4]
  • Cost-Effective: Surveys are accessible across industries for large-scale data collection. Online surveys, in particular, are less expensive compared to other methods, such as face-to-face or telephone interviews. [5]
  • Quantifiable Data: Surveys allow for measurable insights into the target population. By using structured questions, researchers can collect data that is easy to analyze statistically, facilitating the identification of patterns and trends. When designed and applied correctly, surveys can increase consistency and reduce researcher bias. By administering the same set of questions to all respondents, surveys can bring uniformity in opinion collection and analysis.

Limitations

  1. Ineffective or Misleading Questions: Survey questions are sometimes written in vague, overly broad, or ambiguous ways that can lead to misinterpretation or inaccurate responses. Closed-ended questions, such as yes/no or rating scale questions, may lack depth. While these types of questions facilitate quantitative analysis, they often fail to capture the full complexity of respondents’ thoughts and feelings. [6]
    • Example: A closed-ended question like “Do you support increased sales taxes to improve education and healthcare? YES or NO” can be misleading. It combines two issues—education and healthcare—forcing respondents to address both at once, even if they feel differently about each. It also presents “increased taxes” as the only solution, ignoring other funding options. Imagine if 70% of respondents to this question answered “NO”; a news headline claiming “Poll Shows 70% of Voters Don’t Want to Improve Education & Healthcare” would also be misleading. Many may have chosen “NO” not because they oppose improvements, but because they doubt the effectiveness of using higher sales taxes to fund them.
  2. Low Response Rates: Surveys / opinion polls can lead to unrepresentative samples. A low response rate may result in a sample that does not accurately reflect the target population, potentially skewing the results. [7]
    • Example: In the 2016 U.S. presidential election, many opinion polls predicted a victory for Hillary Clinton. However, these polls often had low response rates, leading to unrepresentative samples that underestimated support for Donald Trump. This discrepancy highlighted how low participation can skew survey results. [8]
  3. Sampling Challenges: Surveys risk sampling bias if the sample lacks diversity. If certain groups are underrepresented, the findings may not be generalizable to the entire population.
    • Example: The 1936 Literary Digest poll for the U.S. presidential election predicted Alf Landon’s victory over Franklin D. Roosevelt by sampling the magazine’s readers, car owners, and telephone users—groups not representative of the general population during the Great Depression. This sampling bias led to an incorrect prediction. [9]
  4. Response Bias: Surveys may not always provide truthful answers. Respondents might give socially desirable responses or may not recall information accurately, leading to biased data. [10] Surveys may also lead to other self-reporting inaccuracies due to memory lapses, misinterpreting questions, or self-censorship. [11]
    • Example: Surveys on sensitive topics, such as dietary habits or illicit drug use, often face response bias, with respondents under-reporting due to social desirability bias. This can result in data that grossly misrepresents actual behaviors.

Writing Effective Survey Questions

Clarity and Simplicity

Use straightforward language and avoid technical jargon or complex wording.

Example: Instead of “Do you support implementing an ETS protocol that requires corporate entities to offset GHG emissions via tradable emissions permits?” try “Do you support the creation of a program that would require companies to purchase permits for their excess emissions in order to reduce greenhouse gas emissions?”

Conciseness and Single-Focus

Keep questions short to reduce respondent fatigue and prevent confusion. Ask only one piece of information per question to avoid ambiguity.

Examples:

  • Split “How satisfied are you with the quality and reliability of our customer service?” into two questions: “How satisfied are you with the quality of our customer service?” and “How satisfied are you with the reliability of our customer service?”
  • Rather than “How satisfied are you with the company’s benefits and career development opportunities?” split it into “How satisfied are you with the company’s benefits?” and “How satisfied are you with the company’s career development opportunities?”

Neutral Language

Avoid leading questions that imply a preferred answer. Use neutral phrasing to encourage honesty.

Example: Instead of “Don’t you think our product is the best on the market?” try “How would you rate our product compared to others on the market?”

Relevance and Sensitivity

Ensure questions are relevant and appropriate for the audience. Avoid overly personal questions unless essential.

Example: Instead of “How often do you feel sad while using our service?” ask, “How does using our service affect your mood?”

Choosing Between Open-Ended and Closed-Ended Questions

Researchers tend to use closed-ended questions when they need quantifiable data that is easily comparable, such as demographic details or satisfaction ratings. Open-ended questions are used when in-depth feedback is needed, to explore new topics, or to understand respondents’ reasoning. There are pros and cons to both question types.

Closed-Ended Questions

Closed-ended questions provide respondents with predefined options, such as multiple-choice, yes/no, or rating scales.

Advantages
  • Ease of Analysis: Quantifiable data is easy to code and analyze. Ensures uniform responses across participants.
  • Efficiency: Quick for respondents to answer, which improves response rates.
Limitations
  • Limited Insight: Restricts responses to preset options, missing depth.
  • Risk of Bias: Poor answer choices may bias responses. See “Limitations” section above.
Examples of Closed-Ended Questions
Multiple-Choice: “What social media platform do you use most frequently?”
Yes/No: “Do you feel our product meets your expectations?”
Likert Scale: “How satisfied are you with our service?” Options: 1. Very Dissatisfied, 2. Dissatisfied, 3. Neutral, 4. Satisfied, 5. Very Satisfied

Open-Ended Questions

These questions allow respondents to answer freely in their own words, offering more detailed insights.

Advantages
  • Nuance: Allows for detailed responses, offering deeper insights.
  • Flexibility: Enables responses that may reveal unexpected insights.
Limitations
  • Time-Consuming: Longer response times and more complex analysis.
  • Complex Analysis: Open-ended responses need qualitative coding.
Examples of Open-Ended Questions
Example 1: “What features would you like us to add to our product, and why?”
Example 2: “Describe your experience with our customer service.”
Example 3: “What motivates you to choose one news source over another?”

*Click here to continue to our lesson on Sampling & Evaluating Survey Accuracy

❖ Further Reading ❖

  • Fink, A. (2009). How to conduct surveys: A step-by-step guide (4th ed.). Thousand Oaks, CA: Sage.
  • Fowler Jr., F. J. (2013). Survey research methods. Sage Publications.
  • Mehrabi, N., et al. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys.
  • Nardi, P. M. (2018). Doing survey research: A guide to quantitative methods. Routledge.
  • Rea, L. M., & Parker, R. A. (2005). Designing and conducting survey research. San Francisco: Jossey-Bass.