Meet Norman Bates, the first AI psycho

Artificial intelligence (AI) can imitate human behavior by learning from large amounts of data. For example, at Google’s 2018 developer conference (Google I/O), we saw the introduction of the new “Duplex” technology for Google Assistant. Aside from making reservations on a human’s behalf by calling places such as restaurants and salons, Duplex can even handle unexpected turns in a conversation on its own. This technology is really convenient for anyone who considers decision-making one of their worst nightmares! For all the awe it inspired, though, the emergence of this sort of technology has also sparked many moral and ethical discussions.

We often say that the problem is not the algorithm itself; the biggest issue is the training material fed to it. In a 2018 study, researchers at the MIT Media Lab trained an artificial intelligence called “Norman” on an abundance of uncomfortable and disturbing material, deliberately producing a prejudiced model and giving birth to the first AI psychopath.

The name “Norman” comes from the main character of Alfred Hitchcock’s famous film “Psycho”. Norman is a deep learning model that pairs pictures with descriptions: when it sees an image, it automatically generates a caption describing what it thinks it sees. The research team fed Norman a steady diet of disturbing content, such as images of corpses and death. After all that feeding, the team administered the Rorschach inkblot test to determine whether Norman had diverged from a normally trained AI.
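The MIT team has not published Norman’s full pipeline, but image captioning itself is a standard technique. As a rough illustration of the idea, here is a minimal sketch using an off-the-shelf captioning model; the library, model name, and input file are my own stand-ins, not the study’s code.

```python
# Minimal image-captioning sketch in the spirit of what "Norman" does:
# look at an image, emit a sentence describing it. Not the MIT team's
# code; the off-the-shelf model below is an assumed stand-in.
from transformers import pipeline  # pip install transformers torch pillow

captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

result = captioner("inkblot_card_01.png")  # hypothetical input image
print(result[0]["generated_text"])
# A model trained on everyday photos prints something benign here; one
# trained only on violent captions would describe violence instead.
```

The study’s point is that the same architecture produces radically different captions depending purely on the caption data it was trained on.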

The Rorschach inkblot test is a personality test consisting of ten cards with ink stains: five are black ink on white, two are black and red, and the remaining three are multicolored. Subjects report what each card first looks like to them, and then what it makes them feel. A psychologist judges the subject’s personality based on these answers and scoring statistics. After several inkblot tests, the results showed that the research team had indeed trained the world’s first psychopath AI.

Let’s take a look at one of the pictures the research team showed Norman. When I saw this picture, I thought it looked like two dwarfs from Snow White high-fiving with both their hands and feet; they seemed joyful, as if they were having fun. A standard AI stated that it saw a vase with flowers in it. Norman, however, saw a man who had been shot dead. Essentially, Norman can take any ordinary picture and describe it in various disturbing ways.

The Norman experiment shows that feeding biased data to a model easily trains a biased model. If manipulated, this could be used to sway sensitive societal issues: algorithmic bias can accelerate the spread of extreme positions or amplify prejudice against certain groups. A prejudiced AI works much like we do as humans: just as we eventually become like the friends we choose, an AI becomes whatever it is fed.
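To make the “biased data in, biased model out” point concrete, here is a deliberately tiny, hypothetical example of my own (not the study’s setup): a toy text classifier that only ever sees the word “clown” in negatively labeled sentences, and therefore condemns any sentence containing it.

```python
# Toy demonstration of bias from skewed training data: "clown" appears
# only in negative examples, so the model learns clown => negative,
# no matter what the rest of the sentence says.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the clown ruined the party", "that clown was awful",
    "the clown scared everyone", "a terrible clown show",
    "never trust a clown",                            # the biased slice
    "the juggler delighted the crowd", "a lovely day at the fair",
    "the magician amazed the kids",                   # everything else
]
labels = ["neg"] * 5 + ["pos"] * 3

model = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
model.fit(texts, labels)

# A perfectly cheerful sentence is condemned by association:
print(model.predict(["the clown delighted the crowd"]))  # -> ['neg']
```

Norman is the same failure mode at scale: nothing is wrong with the learning algorithm itself; the skew lives entirely in the examples.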

Sources:

https://www.bbc.com/news/technology-44040008

http://norman-ai.mit.edu/

 

5 thoughts on “Meet Norman Bates, the first AI psycho”

  1. The article you posted is really interesting and inspiring. In class we talked about machine learning, and this is a great example showing that AI can learn not only skills, like how to play chess, but also ethics and opinions. I think AI gives scientists a chance to learn how a psychopath becomes a psychopath, because many of them are not born that way; they become psychopaths through their experiences and the messages they receive in life. That is similar to how this AI learned to become a “psycho”: both go through a message-receiving process and a learning process, and then become psychopaths. Studying AI psychopaths could help scientists learn more about real ones in a safer way than studying them directly. I think Norman shows us one possibility of how AI could help us resolve some medical problems.

  2. After reading your blog, I think AI technology could be very helpful to me, because I always have a hard time making decisions. So I wondered why some people consider it unethical, to the point that it can’t be widely applied in our daily lives. I searched this question online, and there are several reasons people worry about AI technology.

    The first concern is about humanity. In 2014, a bot named Eugene Goostman fooled a number of judges in a Turing-style challenge by having them guess whether they were chatting with a human being or a machine. If people widely used this kind of machine in daily life, relationships between people could become unstable. For example, it could count as cheating if you like someone and use AI technology to chat with him or her.

    Secondly, “artificial stupidity” can also be a big problem. AI is trained by people to act intelligently: it learns and accumulates knowledge from abundant sources and then processes new input based on what it knows. How can we guarantee it won’t make mistakes? How can we know whether someone is using it for their own purposes? There is real risk in applying AI in our daily lives.

    Speaking of the purposes AI is used for, security could be another concern. AI can be used to help us, but it can also be used to hurt us. For instance, AI robots could replace human soldiers and traditional weapons, which could cause far more serious harm to human beings if they are not used in the right way.

    Resource/reference
    https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/

  3. The article got me interested in AI research, so I looked up more information about the Norman experiment done by the MIT Media Lab. From what I found, much of AI is still unknown and potentially dangerous. Neural networks are one part of AI: computing systems that can process information somewhat like human brains, so they can analyze data and then learn behaviors on their own. This is what allows an AI to form something like its own thoughts and viewpoints. But the process still depends on what kinds of data people feed it. If someone trains it on biased or violent data, it can become a tool to hurt others and cause social destruction. Researchers therefore need to make sure a harmful AI like this is kept out of the wrong hands, and that they retain the ability to control it.
    The creation of Norman also raises another question: is it possible to make Norman normal again by feeding it good data? I think it is, because neural networks simply learn from data and act on it. New, good data could pull the existing bias in a neural network back toward a healthy baseline. If the researchers gave it unbiased data, Norman might recover; but if they fed it biased data again afterward, it might quickly become a “psycho” once more. It is much like human psychological change. The study still needs more attention in the future. (A toy sketch of the retraining idea appears after the references below.)
    References:
    https://www.livescience.com/62198-norman-ai-psychopath.html
    https://www.vice.com/en_us/article/xwm5mk/mit-psychotic-ai-rehabilitation
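
    Just to sketch the retraining idea (purely a toy of my own, not the MIT team’s rehabilitation procedure): scikit-learn’s MultinomialNB supports incremental updates via partial_fit, so we can first bias a tiny classifier and then try to pull it back with balanced, positive data.

    ```python
    # Toy "rehabilitation" sketch: bias a classifier, then keep training
    # it on good data and watch the prediction flip. Illustrative only.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    biased_texts = ["the clown ruined the party", "that clown was awful",
                    "the clown scared the guests",   # "clown" only negative
                    "the juggler pleased the crowd", "a lovely day at the fair"]
    biased_labels = ["neg", "neg", "neg", "pos", "pos"]

    rehab_texts = ["the clown delighted the guests",
                   "a wonderful clown at the party",
                   "the kids loved the clown", "a happy clown at the party"]
    rehab_labels = ["pos"] * 4

    vec = CountVectorizer(stop_words="english")
    vec.fit(biased_texts + rehab_texts)   # fix the vocabulary up front
    clf = MultinomialNB()

    clf.partial_fit(vec.transform(biased_texts), biased_labels,
                    classes=np.array(["neg", "pos"]))
    test = vec.transform(["the clown at the party"])
    print(clf.predict(test))              # -> ['neg']  (biased phase)

    clf.partial_fit(vec.transform(rehab_texts), rehab_labels)
    print(clf.predict(test))              # -> ['pos']  (after good data)
    ```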

  4. After reading the beginning of the article, having an AI make decisions for me seemed pretty cool, as I am a very indecisive person. In a high school class we learned about AI and what the future of humanity might look like. At the time we learned about Sophia, an AI robot. Watching videos of her was fascinating, yet thinking about where the future would lead with her and other AIs was scary, as most human beings fear the unknown. After finishing this blog post, seeing how humans can feed an AI information to turn it into someone else is terrifying; knowing that humans have the ability to do this is unsettling for the future. What is also interesting to me is the claim that it isn’t necessarily about the algorithms but about the data fed to them. If “Norman” can be fed biased data that turns him into a psychopath, couldn’t ordinary people do the same to other AIs or robots? The inkblot test results are also scary, as the “Norman” website shows what a standard AI sees versus what “Norman” sees.

    https://www.hansonrobotics.com/sophia/
    http://norman-ai.mit.edu/

  5. After seeing the disturbing results of this AI (artificial intelligence) study done by MIT, I realized how dangerous AI can be when biased data is used in machine learning algorithms. As the study team said, what really matters in an AI system is not the algorithm but the injected data. Another experiment, conducted by Microsoft in 2016, also demonstrated the influence of input data on what an artificial intelligence learns. They launched a Twitter chatbot named “Tay” as a social, cultural, and technical experiment, designed to learn from the people it interacted with on Twitter. However, Twitter users encouraged the bot to use racist and offensive language, and Tay soon behaved exactly that way: it learned precisely what users said to it in chat. As both cases show, an AI is only as good as the data it learns from.

    What I focus on here is the negative effect of AI when it learns from destructive data. There is a high possibility of AI causing ethical or social problems when it is exposed to a negative environment, regardless of human intentions. In my opinion, ethical guidelines are therefore not optional but necessary to keep AI from behaving unethically. For example, during the learning process, data screening could be performed by an algorithm itself to determine which data should be learned and which ignored (a toy sketch of this idea appears after the references below). As machine learning technology evolves ever faster, corresponding precautions must be taken to maintain an ethical balance between human society and AI systems.

    Resources/references:
    https://qz.com/653084/microsofts-disastrous-tay-experiment-shows-the-hidden-dangers-of-ai/
    https://money.cnn.com/2018/06/07/technology/mit-media-lab-normal-ai/index.html
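
    As a rough sketch of that screening idea (my own toy illustration, not an established guideline, and far cruder than anything a real system would need):

    ```python
    # Toy data-screening sketch: drop training examples that contain
    # blocklisted terms before they ever reach the learning algorithm.
    # A real filter would need something far subtler than substring checks.
    BLOCKLIST = ("corpse", "murder", "gore")

    def screen(examples):
        """Keep only examples whose text avoids every blocklisted term."""
        return [ex for ex in examples
                if not any(term in ex.lower() for term in BLOCKLIST)]

    raw = ["a vase of flowers on a table",
           "a man is murdered in the street",
           "children playing in the park"]
    print(screen(raw))
    # -> ['a vase of flowers on a table', 'children playing in the park']
    ```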
