[Teaching] Evaluating Online Sources

Lesson module posted by Dr. Keren Wang, updated Spring 2025

© 2025 Keren Wang — Licensed under a Creative Commons Attribution–NonCommercial–NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
Educational use permitted with attribution; all other rights reserved.

For permission requests, please contact the author directly.


1. OVERVIEW

This lesson focuses on understanding and evaluating evidence and information sources, a crucial aspect of constructing persuasive arguments. It explains how evidence interacts with values and presents general tests for assessing the quality of evidence. You will also learn how to locate and evaluate various sources of evidence, with guidance on choosing reliable information from books, periodicals, websites, and more. Throughout, the lesson emphasizes digital literacy and the critical evaluation of different types of sources.

[Image: Ad Fontes Media Bias Chart 6.0, published by Ad Fontes Media, 2020. Fact-checking always lags behind the emergence of new biased sources of information.]


2. UNDERSTANDING EVIDENCE

Evidence and Values

In public discourse, evidence is invariably filtered through the “terministic screens” of societal norms and cultural values, leading to divergent interpretations even when audiences are presented with the same set of facts. Consider, for instance, debates surrounding Artificial Intelligence (AI) and employment. For techno-optimists, as represented by some Silicon Valley entrepreneurs, rapid technological advancement is essential for societal evolution. They may interpret the emergence of Artificial General Intelligence (AGI) labor as an auspicious sign of accelerated economic growth, productivity, and innovation, contending that AI liberates human workers from repetitive labor and allows greater engagement in creative, strategic, or emotionally rewarding tasks. Conversely, many labor advocates and trade unionists may interpret the prospect of an AGI workforce less positively. As critics of unchecked technological disruption, they might perceive this development as a harbinger of livelihood displacement, expressing concerns that automation could trigger widespread unemployment, diminish workers’ bargaining power, and deepen existing economic inequalities. Such rhetorical divergence highlights how interpretations of evidence surrounding AI’s impact are strategically framed to reinforce broader narratives of either progress or caution.

This example illustrates that the interpretation of evidence is not merely a neutral or objective process but is deeply intertwined with rhetorical constructions that reflect and reinforce specific value systems. Recognizing this interplay is crucial for understanding the dynamics of public debates and the ways in which information is presented and perceived.

[Image: Robotic sculpture at MIT Media Lab. Photo by Keren Wang, 2015.]

3. DIGITAL LITERACY

Digital Literacy refers to the ability to effectively navigate, evaluate, and utilize online sources of information. Digital literacy is more than simply being able to use technology; it is about understanding how to critically evaluate the veracity and quality of digital content and its sources. Key aspects of digital literacy include:

    1. Critical Evaluation of Sources: Not all websites are created equal. Digital literacy involves determining whether an online source is credible, up-to-date, and relevant, as well as recognizing the purpose of the content—whether it aims to inform, persuade, entertain, or mislead.
    2. Understanding Bias and Intent: It is important to understand the motives behind the creation of digital content. Websites often have particular political, social, or commercial agendas, and digital literacy involves identifying these biases. For example, a blog promoting dietary supplements might not be objective if it’s sponsored by a company that sells such products.
    3. Verification of Facts: Digital literacy requires cross-referencing information found online with multiple reliable sources. This helps verify facts and avoid falling for misinformation or “fake news.” For instance, a claim about a health benefit found on social media should be verified through medical publications or government health websites.
    4. Awareness of Digital Manipulation: The internet includes not only text but also images, videos, and audio clips, many of which may be digitally altered. Digital literacy involves assessing whether visual or multimedia evidence has been manipulated to present a biased narrative.
    5. Navigating Information Overload: The sheer volume of information available online can be overwhelming. Being digitally literate means knowing how to sift through large amounts of data to find high-quality, relevant information. This involves using effective search terms, recognizing institutional domains (e.g., “.gov” or “.edu”), and understanding how search engine algorithms may prioritize certain content.
    6. Digital Security and Privacy: Digital literacy also includes understanding how to protect one’s privacy online and recognizing secure websites. For example, a digitally literate individual would know to look for “https://” at the beginning of a URL as an indicator of an encrypted connection (see the short sketch after this list).
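
As a quick illustration of points 5 and 6, the sketch below screens a URL for an encrypted connection and an institutional domain. It is a minimal Python example: the URLs and the suffix list are illustrative assumptions, and passing this screen is only a surface signal, not proof of quality.

    from urllib.parse import urlparse

    # Illustrative institutional suffixes; a fuller checklist would be longer.
    INSTITUTIONAL_SUFFIXES = (".gov", ".edu")

    def quick_url_screen(url: str) -> dict:
        """First-pass screen of a URL: HTTPS scheme and institutional domain.

        A surface heuristic only: HTTPS and a .gov/.edu domain do not
        guarantee quality, and their absence does not prove a source bad.
        """
        parsed = urlparse(url)
        host = parsed.hostname or ""
        return {
            "uses_https": parsed.scheme == "https",
            "institutional_domain": host.endswith(INSTITUTIONAL_SUFFIXES),
        }

    # Hypothetical example URLs
    for url in ["https://www.cdc.gov/flu/index.html",
                "http://supplement-blog.example.com/miracle-cure"]:
        print(url, "->", quick_url_screen(url))
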
Example of Digital Literacy in Practice: Suppose you are researching the benefits of electric vehicles (EVs). A digitally literate approach would involve consulting a mix of sources, including reputable news organizations (e.g., Associated Press, Reuters), trusted independent technical professional organizations or public agencies (e.g., IEEE, the European Alternative Fuels Observatory), and peer-reviewed journals (Energies, Transport Reviews, Journal of Power Sources). It would also involve recognizing potential biases (such as an oil-company-funded blog questioning the sustainability of EVs).

4. TESTING DIGITAL EVIDENCE

In the field of argumentation, there are several general tests of evidence that can help evaluate whether evidence used in an argument is reliable, credible, and sufficient to support a conclusion. These tests provide a comprehensive approach to assessing the quality of evidence. Here’s a detailed breakdown:

Accessibility: Is the Evidence Available?

Evidence that is accessible and open to scrutiny is generally considered more reliable.

Example: A public health official cites the number of COVID-19 cases reported by the Centers for Disease Control and Prevention (CDC). This evidence is accessible because the CDC publishes its data on a website that anyone can visit and verify.

Counterexample: Someone claims that the government has “secret documents” showing proof of extraterrestrial contact. Since these alleged documents are not accessible for review, the claim fails the test of accessibility.

Internal Consistency: Does the Evidence Contradict Itself?

Evidence should not contradict itself. If evidence is self-contradictory, it weakens the argument and creates doubt regarding its reliability.

Example: A government report on unemployment must consistently present the same statistics throughout the report. If one section states an unemployment rate of 6% and another section states 8% without clarification, the evidence lacks internal consistency.

External Consistency: Does the Evidence Contradict Other Evidence?

Evidence that sharply contradicts most other reputable evidence is often seen as unreliable.

Example: A study on climate change that finds rising global temperatures should align with the majority of climate research from other scientific bodies such as NASA, the IPCC, and NOAA.

The “CRAAP” Test (Currency, Relevance, Authority, Accuracy, Purpose):

Currency:

Is the Evidence Up to Date? Check whether the information reflects the most recent findings or developments. Citing a meta-analysis on the effectiveness of renewable energy technologies from the previous year is preferable to citing a similar study from more than a decade ago, as the newer study will have taken relevant technological advancements into account. Evidence that has been superseded by more recent findings may no longer be applicable. Avoid sources that do not indicate when their content was published or last updated.

Example: During the COVID-19 pandemic, many outdated news articles, studies, and public health guidelines from early stages of the pandemic continued to circulate on social media well into 2022, causing confusion over mask recommendations, vaccine efficacy, and treatment guidelines.
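
When a page does not display a date, the server sometimes reports one. The minimal sketch below, which assumes the third-party requests library and uses a placeholder URL, reads the Last-Modified HTTP header; many sites omit or misreport this header, so treat it only as a supplementary hint for the currency test.

    from typing import Optional

    import requests

    def last_modified(url: str) -> Optional[str]:
        """Return the server-reported Last-Modified header, if present.

        Servers may omit this header or report a template rebuild date
        rather than the content's publication date: a hint, not proof.
        """
        resp = requests.head(url, timeout=10, allow_redirects=True)
        return resp.headers.get("Last-Modified")

    # Placeholder URL for illustration
    print(last_modified("https://www.cdc.gov/"))
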

Relevance:

Does the Evidence Bear on the Conclusion? Confirm that the evidence meaningfully and directly addresses the topic at hand. Even seemingly high-quality, authoritative evidence is unhelpful if it does not directly relate to the argument.

Example: For someone researching the impact of social media on teen mental health, citing a 2023 study published in a peer-reviewed public health journal examining social media use among adolescents would offer substantial relevance. Conversely, an op-ed from The Wall Street Journal that broadly discusses a social media platform's political stance would likely be irrelevant to the specific issue at hand.

Authority:

Verify the source’s credibility, relevant expertise, and institutional affiliations. This can depend on the reputation of the author or organization providing the evidence, as well as whether the source has the appropriate credentials or expertise.

Example: Consider debates surrounding the societal impact of artificial general intelligence (AGI). A peer-reviewed article authored by recognized AI experts or computer ethicists, published in reputable journals such as the Journal of Experimental & Theoretical Artificial Intelligence, holds substantial authoritative weight. Conversely, a popular podcast hosted by a tech enthusiast, while potentially relevant and informative, would not be considered authoritative scholarly evidence.

Accuracy:

Is the evidence accurate, and is it sufficient to support its claim? Cross-reference it with multiple reputable sources.

Example: In contentious and rapidly evolving events such as Russia’s 2022 full-scale invasion of Ukraine, initial reporting often contains inaccuracies and contradictory claims. A good practice is to cross-check information across reputable outlets (e.g., Associated Press, Reuters) and to use fact-checking websites such as Snopes or Media Bias/Fact Check. The Wayback Machine (Internet Archive, https://archive.org/web/) is a helpful tool for retrieving deleted pages or spotting revisions.
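
To make the Wayback Machine step concrete, here is a minimal sketch against the Internet Archive’s public availability API (https://archive.org/wayback/available). It assumes the third-party requests library, and the example URL is a placeholder:

    from typing import Optional

    import requests

    def closest_snapshot(url: str) -> Optional[str]:
        """Ask the Internet Archive for the closest archived snapshot of a
        page, useful for retrieving deleted pages or comparing revisions."""
        resp = requests.get(
            "https://archive.org/wayback/available",
            params={"url": url},
            timeout=10,
        )
        snapshot = resp.json().get("archived_snapshots", {}).get("closest")
        return snapshot["url"] if snapshot else None

    # Placeholder example
    print(closest_snapshot("bbc.com/news"))
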

Purpose:

Assess whether the content aims to inform, persuade, mislead, or sell.

      • Recognize the potential bias of the underlying source. Websites created to sell a product, promote a political agenda, or advocate for a specific cause may present information in a skewed manner.
      • Check for an “About Us” page that details the site’s mission, authors, and background. The absence of this information can be a red flag.
      • Lobbying organizations often present information strategically to advocate for specific interests, resulting in one-sided perspectives. The OpenSecrets project (https://www.opensecrets.org/orgs/all-profiles) offers extensive data on federal lobbying activities in the U.S.

Language and Content Quality:

Credible websites typically use a moderate and professional tone, avoiding extreme or sensational language that appeals more to emotion than to facts. They support claims with references, links to original studies, or citations.

Example: Compare a “clickbait” headline like “5 Ways Coffee Will Instantly Cure All Health Problems!” with a more measured one such as “Research Shows Potential Health Benefits of Moderate Coffee Consumption.” The latter is more likely to come from a reputable source.
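
As a toy illustration of the contrast above, the following Python heuristic counts sensational signals in a headline. The trigger-term list is an assumption for demonstration, not a validated clickbait detector, and a low score never substitutes for reading the article and checking its sources.

    import re

    # Illustrative trigger terms; any real list would need careful curation.
    SENSATIONAL_TERMS = {"instantly", "cure all", "shocking", "miracle"}

    def sensationalism_score(headline: str) -> int:
        """Crudely count sensational signals in a headline."""
        text = headline.lower()
        score = headline.count("!")                           # exclamation marks
        score += len(re.findall(r"\b[A-Z]{3,}\b", headline))  # ALL-CAPS "shouting"
        score += sum(term in text for term in SENSATIONAL_TERMS)
        return score

    for h in ["5 Ways Coffee Will Instantly Cure All Health Problems!",
              "Research Shows Potential Health Benefits of Moderate Coffee Consumption"]:
        print(sensationalism_score(h), "-", h)
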


5. OPEN ACCESS TOOLS

1. AI and Bot Detection Tools

2. Fact-Checking Tools

3. Source Verification and Bias Check

4. Open Access Resources for Teaching & Research

Further Reading:

  • Baly, Ramy, Giovanni Da San Martino, James Glass, and Preslav Nakov. “We can detect your bias: Predicting the political ideology of news articles.” arXiv preprint arXiv:2010.05338 (2020).
  • Chiang, Chun-Fang, and Brian Knight. “Media bias and influence: Evidence from newspaper endorsements.” The Review of Economic Studies 78, no. 3 (2011): 795-820.
  • Epstein, Robert, and Ronald E. Robertson. “The search engine manipulation effect (SEME) and its possible impact on the outcomes of elections.” Proceedings of the National Academy of Sciences 112, no. 33 (2015): E4512-E4521.
  • Finlayson, Alan. “YouTube and political ideologies: Technology, populism and rhetorical form.” Political Studies 70, no. 1 (2022): 62-80.
  • Jahanbakhsh, Farnaz, and David R. Karger. “A Browser Extension for in-place Signaling and Assessment of Misinformation.” In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, pp. 1-21. 2024.
  • Ju, Yan, et al. “DeepFake-O-Meter v2.0: An open platform for DeepFake detection.” In 2024 IEEE 7th International Conference on Multimedia Information Processing and Retrieval (MIPR), pp. 439-445. IEEE, 2024.
  • Korinek, Anton. Economic policy challenges for the age of AI. No. w32980. National Bureau of Economic Research, 2024.
  • Kulshrestha, Juhi, et al. “Search bias quantification: investigating political bias in social media and web search.” Information Retrieval Journal 22 (2019): 188-227.
  • Li, Heidi Oi-Yee, et al. “YouTube as a source of information on COVID-19: a pandemic of misinformation?” BMJ Global Health 5, no. 5 (2020): e002604.
  • Marciano, Laura, et al. “Digital media use and adolescents’ mental health during the COVID-19 pandemic: a systematic review and meta-analysis.” Frontiers in Public Health 9 (2022): 793868.
  • McGrew, Sarah. “Learning to evaluate: An intervention in civic online reasoning.” Computers & Education 145 (2020): 103711.
  • McLean, Scott, et al. “The risks associated with Artificial General Intelligence: A systematic review.” Journal of Experimental & Theoretical Artificial Intelligence 35, no. 5 (2023): 649-663.
  • Morstatter, Fred, et al. “Identifying framing bias in online news.” ACM Transactions on Social Computing 1, no. 2 (2018): 1-18.
  • Pangrazio, Luci, and Julian Sefton-Green. “Digital rights, digital citizenship and digital literacy: What’s the difference?” Journal of New Approaches in Educational Research 10, no. 1 (2021): 15-27.
  • Sobbrio, Francesco. “Indirect lobbying and media bias.” Quarterly Journal of Political Science 6, no. 3-4 (2011).
  • Stiefenhofer, Pascal. “Artificial General Intelligence and the End of Human Employment: The Need to Renegotiate the Social Contract.” arXiv preprint arXiv:2502.07050 (2025).
  • Tinmaz, Hasan, et al. “A systematic review on digital literacy.” Smart Learning Environments 9, no. 1 (2022): 21.