Artificial Intelligence (AI) is a rapidly expanding field that aims to develop machines capable of performing tasks once considered unique to humans, such as learning, reasoning, problem-solving, and decision-making. However, as AI technology progresses, concerns are growing about its potential impact on society, with many people responding to it with negative emotions and perceiving it as a threat. The recent release of ChatGPT, a cutting-edge conversational chatbot based on Large Language Model technology, has sparked debate about the potential of this technology and garnered widespread attention in the mainstream media.
Using a socio-psychological approach and drawing on Intergroup Threat Theory, our study shows that participants confronted with ChatGPT's ability to reproduce the complexity of human language and conversation reported significantly higher levels of negative emotions than those in the control group. These negative emotions in turn predicted participants' perception of the conversational chatbot as both a realistic and a symbolic threat to various aspects of human life, including safety, jobs, resources, equality, identity, uniqueness, and value.
Our findings emphasize the importance of considering emotional and societal impacts when developing and deploying advanced AI technologies like ChatGPT, and of implementing responsible guidelines to minimize negative effects. As AI technology advances, addressing public concerns and regulating its use are crucial for the benefit of society; achieving this goal will require collaboration among experts, policymakers, and the public.