“The main ethical problem [with Replika] is that it’s straight-up lying,” says Douglas*, an AI researcher I know who asked to remain anonymous. “It’s manipulating people’s emotions.” This ties into wider issues in AI safety, because it misrepresents how the technology actually works. To navigate the challenges that AI will pose to society, it’s important that people have some basic understanding of what these models actually are. “If people don’t understand that they are just mechanistic algorithms, then this might lead to incorrect assumptions about the risks they pose,” says Douglas. “If you think an AI can ‘feel’, then you may be under the impression that they have empathy for humans, or that the AI itself can understand nuanced sociological issues, which currently they can’t.”

There are already safeguards in place that prevent AI models from encouraging these misconceptions, which suggests that Replika’s failure to adopt them is a deliberate choice. This stands to reason: if you truly believe that your chatbot loves you, and means all of the syrupy things it says, you’re probably less likely to cancel your subscription. Encouraging that belief is at best a sneaky sleight of hand; at worst, an outright deception.