These days, it’s well known that what we see online is determined by algorithms (or, as we now call it, “the algo”) and that digital privacy is virtually impossible. (There are entire TikTok accounts dedicated to showing just how easy it is to dig up almost anything about almost anyone.) But the past decade has been rife with examples of how algorithms harm us: They steal our attention; they perpetuate inequalities in who receives a mortgage, who is eligible for parole, who gets a job interview, and who receives an accurate medical diagnosis; and they amplify mis- and disinformation. This is the climate in which generative A.I. tools like ChatGPT arrived.
…And that’s the thing about A.I.: It heightens the societal stakes of these existing, unsolved problems with privacy and algorithms, putting even more computing power behind the systems that perpetuate privacy loss and bias. In the past, more primitive models were built on smaller datasets and required substantial training to become even halfway competent. The large language models that drive generative A.I. are much more nimble: They have absorbed a huge amount of data and can perform tasks they were never explicitly trained on. As a result, the algorithmic “black box” only grows deeper and darker, and it becomes more challenging to trace why, exactly, a model spits out a given answer, and potentially more difficult to address problems with it.
Read more:
Hu, J. C. (2023, December 28). A.I.’s Groundhog Day. Slate. https://slate.com/technology/2023/12/ai-artificial-intelligence-chatgpt-algorithms-regulation-congress-biden.html