At its core, democracy is the cycle of civic engagement and citizenship.
Citizenship, though it has no single clear definition, has three primary aspects: belonging (who someone is), rights (what one can do), and action (what one should do). (1) Civic engagement falls under action, referring to the relationships and interactions among community members (and non-members) that promote belonging and uphold rights. (2)
Civic engagement can happen anytime and anywhere, but participation in some democratic processes is hindered by a lack of money, experience, or both; lobbying is one such process. (3) Dedicated lobbying firms, with their vast resources, can dominate these processes to the detriment of democracy, as less-privileged audiences lose their voice. However, with the introduction of generative AI like ChatGPT, political influence in these processes might be changing for the better.
What makes generative AI so fascinating is how powerful and accessible it is. ChatGPT and its derivatives are widely available and can provide information on almost any topic within a few keystrokes, giving leverage to those with less power. A defining feature of ChatGPT is its ability to surface information and bring awareness to existing political topics. With this accessibility, citizens can get up to speed on politics faster, and the number of informed citizens grows. It may prompt some to take action to promote belonging and protect their rights; political influence can shift away from firms and into the hands of the people. However, the opposite can also happen: firms can use these same tools to accelerate and promote their own agendas.
While ChatGPT is powerful, it derives that power from its training data, drawn largely from the internet. It uses this data to generate responses, and while some are accurate, others will be inaccurate or even misleading (a phenomenon some researchers call “hallucinating”). (4) Regular citizens must know how to use these tools correctly and filter out misinformation by cross-referencing sources. Further, the underlying data can be skewed, causing some responses to contain bias or discrimination. Even when the tool presents both sides of a debate, critical information can be lost if the sources it draws on treat a topic as irrelevant or barely discuss it.
Democracy has been around for centuries, and AI is still young; it is impossible to predict what the political landscape will look like in the future. An interesting question is how much confidence to place in AI: while AI depends on humans now (for data and development), humans may progressively come to rely on AI for information. In the future, will debates be a clash of AIs, with humans serving simply as the “host”? How might restrictions be imposed on AI to limit its influence? These questions will have to be answered as humans move into the Age of AI.
References:
1. Steudeman, M., & O’Hara, J. (n.d.). Citizenship. https://sites.psu.edu/caskeywords/2022/06/24/citizenship/
2. Vivian, B. (n.d.). Civic Engagement. https://sites.psu.edu/caskeywords/2022/06/25/civic-engagement/
3. Sanders, N. E., & Schneier, B. (2023, January 17). Opinion | How ChatGPT hijacks democracy. The New York Times. https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html
4. Smith, C. S. (2023, March 29). ChatGPT’s hallucinations could keep it from succeeding. IEEE Spectrum. https://spectrum.ieee.org/ai-hallucination
Photo: Flickr user Daniel Huizinga
Firstly, I would like to compliment you on the layout of your blog and your thoroughness; it is far beyond what I have seen from anyone else. Your post was well organized and your points thoughtful but succinct. I appreciate how informative your post was as well, but I could not help but wonder what your thoughts were on the matter at hand. You seem to avoid outwardly opinionated statements, which was unfortunate, because I believe you could have great insight into the use of artificial intelligence by lobbying firms.

I thought you brought up an interesting point when you mentioned that artificial intelligence relies on datasets compiled by humans, but that one day we may rely on it for information. Do you think this could become a reality? I think it unlikely that the vast majority of humans would trust information gathered solely by artificial intelligence with no human oversight. You also mention future debates being between different AIs, with humans serving as hosts, but that seems impossible. A debate between two AIs would be impossible for humans to even attempt to follow because of the incredible speed at which they can communicate and comprehend. There would be no point in any human intervention in these debates at all, and even if there were, what would they debate? Artificial intelligence cannot form opinions, and if it became advanced enough to do so, those opinions would surely be identical unless they were formed on biased datasets.

I hope that in the future you go into more depth about some of these thoughts; I would be very interested to see where they lead.