Artificial intelligence (AI) is software that uses logic to make decisions and adapts its behavior based on what it has learned from prior experience. It is a rapidly growing field, with some modern programs matching or even exceeding human performance on specific tasks. There are even AI-driven conversational agents for customer service calls that are difficult to distinguish from humans. This level of advancement, however, raises ethical concerns. Should an artificial program with a mind as developed as, or more developed than, a human's have similar rights? Should an AI-controlled living being have the same kind of rights? Should humans know when they are interacting with another human rather than a robot? Should we compromise the efficiency of AI to make it less autonomous?
This month, a study was published showcasing the world's first programmable organisms, developed from frog stem cells. Called xenobots, they are about 0.04 inches long. Michael Levin, one of the study's co-authors, said, "These are entirely new lifeforms. They are living, programmable organisms." Consider what Levin is saying: this is an entity composed of living cells but controlled entirely by the code implanted by its developers. There are further plans to scale xenobots up into human-sized living robots, complete with nervous systems and blood vessels. Although this technology has many incredible uses, it is extremely concerning when considering some of the quotes published alongside the study, including "we cut the living robot almost in half, and its cells automatically zippered its body back up" and "it's almost like a wind-up toy." These quotes suggest that the scientists extend no moral consideration to the xenobot, treating it like an object even though it is composed of living tissue and muscle, and even though future versions are planned to have full nervous systems.
In addition to the question of whether the AI itself should have rights, there is the conflict of whether humans should be informed when they are interacting with AI. Studies have shown that when participants in a cooperative game were not told whether their partner was an AI or a human, they were far more cooperative with the AI; once informed that they were interacting with software, however, they became far more disruptive. Essentially, once people know they are not interacting with another human, they disregard their moral compass and are more likely to use abusive language or insults, since the AI does not possess the emotional qualities of a human. As a practical example, this means that with AI customer service, if the caller is informed that the agent is non-human, the overall quality and efficiency of the service will be degraded, whereas if the caller does not know whether they are talking to a robot or a human, efficiency is optimal. Google, for instance, developed an AI assistant capable of carrying on a conversation over the phone, but the public was outraged that the bot did not identify itself as non-human and would therefore be deceiving the person on the other end of the line. Google agreed to have the bot disclose that it is software at the start of each call, even though doing so reduces the overall quality of the conversation. We cannot have optimal operating efficiency from AI while maintaining total transparency about it; the two appear to be mutually exclusive, so we must choose between them.
A third potential issue with AI is its lack of programmed morals. AI software is coded with clear goals and incentives, and when executed it will try anything possible to reach those goals unless it is coded to avoid certain methods (the equivalent of morals). YouTube's recommendation algorithm, for example, was coded to surface videos that would generate the most traffic on the site, which seems like an uncontroversial goal. What this resulted in, however, was increasingly extreme, polarizing content being recommended. It seemed innocent at first, with videos of jogging leading to videos of ultramarathons, and videos of vegetarianism leading to videos of veganism. However, one researcher reported that after watching footage of a Trump campaign rally, she was recommended videos that included a white supremacist rant and a Holocaust denial conspiracy theory. This tendency of YouTube's algorithm to push viewers toward extremes has contributed to polarization in the country, because the algorithm sought only the recommendations that would produce the most viewership and was never coded to recognize the ethical problems with that method. Although adding such constraints compromises the optimal efficiency of the program, it is something that needs to be considered when creating AI software, as the sketch below illustrates.
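To make the point concrete, here is a minimal, purely illustrative Python sketch of a goal-maximizing recommender. It is not YouTube's actual system; the video data, the "extremeness" score, and the threshold used as a constraint are all hypothetical stand-ins. The unconstrained version ranks solely by predicted engagement, while the constrained version shows what a crude, coded-in "moral" limit might look like.

```python
# Toy sketch (hypothetical data and scores): a recommender that ranks candidate
# videos purely by predicted watch time, with an optional content filter standing
# in for the kind of "programmed morals" discussed above.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the engagement signal being maximized
    extremeness: float              # hypothetical 0-1 score from a content classifier

def recommend(candidates, top_n=3, max_extremeness=None):
    """Rank candidates by predicted engagement alone, unless a constraint is given."""
    pool = candidates
    if max_extremeness is not None:
        # The "moral" constraint: refuse to recommend content above the threshold,
        # even though doing so lowers the total predicted watch time.
        pool = [v for v in candidates if v.extremeness <= max_extremeness]
    return sorted(pool, key=lambda v: v.predicted_watch_minutes, reverse=True)[:top_n]

videos = [
    Video("Jogging tips", 4.0, 0.1),
    Video("Ultramarathon documentary", 9.0, 0.3),
    Video("Conspiracy rant", 12.0, 0.9),
]

print([v.title for v in recommend(videos)])                       # engagement only
print([v.title for v in recommend(videos, max_extremeness=0.5)])  # with constraint
```

Ranked by engagement alone, the toy recommender always surfaces the most extreme item simply because it is predicted to hold attention the longest; only the added constraint changes that outcome, and it does so at the cost of total predicted watch time, mirroring the efficiency trade-off described above.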
To sum up, AI software could help bring about a far more advanced and efficient future, but it is held back by the civic issues discussed here: the ethics of implementing it in living cells, the fact that its optimal efficiency relies on a lack of transparency with the public, and the absence of moral restraints on its pursuit of its programmed goals. If these issues can be resolved, potentially by limiting the use of AI in living creatures, compromising on the level of transparency expected of AI in order to maintain its efficiency, and implementing restrictions on how AI achieves its intended purpose, then AI could become a much more useful, controllable, and ethical tool for the future.