AI’s intelligence may be artificial, but humans encode its values. OpenAI, for instance, effectively decides whether ChatGPT takes stances on the death penalty (no opinion), torture (it’s opposed), and whether a man can get pregnant (it says no). With its AI illustrator Dall-E, the organization influences what type of person the tech portrays when it draws a CEO. In each case, humans behind the scenes make the decisions. And humans are influenceable.
As with content moderation, there will be some obvious, consensus ethical decisions for generative AI (you don’t want chatbots advocating for genocide, for instance), but advocates will stake their ground in the grey areas. “It’s a very powerful tool, and people are going to want to do a broad range of things with it to meet their own interests,” said Lessin. “If you look at how the free speech stuff played out, it will play out the same way again, just faster.”
The potential conflict areas include how AI addresses race, gender, warfare, and other thorny issues. Ethical decisions for generative AI are particularly high stakes because they scale: encode values into a chatbot, and it can push those values repeatedly, in conversation after conversation. A content moderation decision, by contrast, typically involves just one individual and one piece of content.
Read more:
Kantrowitz, A. (2023, January 19). Why The AI Ethics War Will Make The Content Moderation Fight Seem Tame. Big Technology. https://www.bigtechnology.com/p/why-the-ai-ethics-war-will-make-the