Alan Turing – Computer Intelligence, and the Tragedy of a Hero

As a software developer, one of my heroes for many years has been the English scientist Alan Turing.  He was one of the first people to talk about the concepts of machine intelligence and machine learning.  His insight was brilliant, and, beyond his contributions to computer science, he was a hero who came to a tragic end.

I first heard of the Turing Test when I was 13.  I’d already been programming in BASIC, and was fascinated by the idea of artificial intelligence.  The Turing Test seemed simple enough: a computer program had to convince a human that it, too, was human.  Essentially, a person would sit down at a computer and start a natural-language conversation with two entities — one a computer and the other another human.  The tester would need to determine which responses were coming from the computer, and which were coming from the human. While not a sign of actual intelligence, per se, it is still a valuable exercise in understanding communication and thought.

As a teenager, I thought this wouldn’t be that difficult, with enough lines of code.  So I started making up a Q&A Turing Test, just to see how it would work.  Let’s look at an example of some input, computer ‘deliberation’, and responses:

Human: “Hello, I’m Lee”

(simple introduction… respond in kind.)

Computer: “Hi Lee, I’m Hal.  How are you?”

Human: I’m well.  Looking forward to the Superbowl.  How about those Seahawks?

(… assuming the software knows what a Superbowl is… and the Seahawks… and can put the Seahawks into context, and not determine the human is talking about a type of bird — three pretty huge assumptions already)

Computer: Yes, it will be exciting.  The Seahawks are great this year.

Lee: Do you have any plans?

(… assuming the computer still understands that Lee is talking about the Superbowl… and that people ‘do’ things for the Superbowl – going to parties or sports bars and such…)

It quickly becomes clear just what a huge undertaking this kind of thing is. Conversation, it turns out, is incredibly complex.  So, even with a million if-then-else statements, and a computer fast enough to comb through them all and come up with an appropriate response, convincing a human that a computer program isn’t a computer program is quite difficult. It can be done, but not consistently, and in many instances not convincingly.  In fact, there’s an annual contest that’s been around since the 1990s — the Loebner Prize — that challenges programmers to write software that does what the Turing Test was getting at. The ‘winners’ so far have been not-quite-convincing ‘chatterbots’ that demonstrate not intelligence so much as good trickery.
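As a rough illustration, here is what that teenage if-then approach might look like sketched in Python. The patterns and canned replies are purely illustrative; a real Loebner-style entrant would need vastly more rules, plus tricks for steering the conversation back onto familiar ground:

```python
# A minimal sketch of a rule-based 'chatterbot' of the kind described above.
# Every pattern and reply here is made up for illustration -- the point is
# how quickly a fixed rule table runs out of things to say.
import re

RULES = [
    (re.compile(r"hello,? i'?m (\w+)", re.I),
     "Hi {0}, I'm Hal. How are you?"),
    (re.compile(r"\bseahawks\b", re.I),
     "Yes, it will be exciting. The Seahawks are great this year."),
    (re.compile(r"\bplans\b", re.I),
     "Nothing special. How about you?"),
]

def respond(utterance: str) -> str:
    for pattern, reply in RULES:
        match = pattern.search(utterance)
        if match:
            return reply.format(*match.groups())
    # No rule matched: deflect, as real chatterbots often do.
    return "That's interesting. Tell me more."

print(respond("Hello, I'm Lee"))            # Hi Lee, I'm Hal. How are you?
print(respond("How about those Seahawks?")) # Yes, it will be exciting. ...
print(respond("Do you have any plans?"))    # Nothing special. How about you?
```

Even this toy version shows the trouble: the third rule fires on the word ‘plans’ with no idea whether Lee means the Superbowl or something else entirely, and anything outside the rule table gets a generic deflection.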

It is impressive that Turing came up with his idea when computers were in their infancy. In fact, one of the most complicated machines Turing had encountered was not a computer at all. It was critically important to the future of the world, though. During World War II, Turing led a team of engineers and scientists that unlocked the secrets of Germany’s Enigma machine – a complex device the Nazis used during the war to send and receive coded messages.  The code generated by the machine was seemingly unbreakable.  Turing managed to essentially reverse engineer the Enigma machine and break the code, though, so the Allies were able to intercept and understand German transmissions.  Some historians estimate that Turing’s efforts shortened the war in Europe by one to two years (Fitzsimmons, 2013).

For all of his successes, Turing’s life ended sadly.  In the early 1950s, the British government convicted him of ‘gross indecency’ for his homosexuality, which was then a crime in Britain.  He was forcibly chemically castrated.  Two years later, before he turned 42, Turing committed suicide.  Thus ended the life of one of the greatest contributors to computer science – and, tangentially, to cognitive psychology – as well as a great hero of World War II. Just a little over a month ago, he was finally pardoned for his ‘crime’ by Queen Elizabeth II.

Fitzsimmons, E. (2013, December 24). “Alan Turing, Enigma Code-Breaker and Computer Pioneer, Wins Royal Pardon.” NYTimes.com. Retrieved February 2, 2014, from http://www.nytimes.com/2013/12/24/world/europe/alan-turing-enigma-code-breaker-and-computer-pioneer-wins-royal-pardon.html?_r=0


3 thoughts on “Alan Turing – Computer Intelligence, and the Tragedy of a Hero”

  1. kwr5262

    I am fascinated with Turing’s vision of computer AI because of its connection to perception, working memory, and long-term memory. We, the programmers, are a computer’s bottom-up process; it is up to us to program commands for the computer to execute, though what we program into it will eventually become its top-down process. Are we capable of programming into a computer something similar to our long-term memory, which does not merely repeat back facts but instead processes information into semantics? Sperling showed that our sensory store takes in about 80% of everything it is exposed to. After that, working memory, in conjunction with long-term memory, converts some of those sensations into auditory, visual, or, in the case of LTM, mostly semantic codes. Can we program our experiences and understanding of the world in order to produce an AI that understands semantics? There have been numerous analogies comparing the computer and the mind, but a computer cannot replicate how we learn through physical experience.
    Philosopher John Searle argues that computer AI is impossible, using what he calls the “Chinese Room” argument. The implication is that a person who does not understand Chinese could still follow a program’s rules well enough to produce correct Chinese responses; since a computer operates in just this way, running a program cannot by itself amount to understanding, and so there is no such thing as true AI. This philosophical approach to AI goes beyond possibility and addresses its very existence.
    Can we program a computer to be more than what we put into it? Our brains’ bottom-up and top-down processing, and our working and long-term memories, will be difficult, if not impossible, to replicate. I believe the search for AI will continue to be a fascinating one to follow.

    The Chinese Room Argument. (2009, September 22). In Stanford Encyclopedia of Philosophy. Retrieved from http://plato.stanford.edu/entries/chinese-room/

  2. Julie Hall

    Turing Test and Conversations

    Artificial intelligence to the point of not distinguishing between human and computer would be impressive.

    As you mentioned (and as shown in the video in the class notes), a conversation with a computer would be incredibly hard. We use so much slang or shorthand, it seems like the interactive response program would have to be endless. A CNET interview (Terdiman, 2014) with a top chatbot programmer (a two-time Loebner Prize winner) noted that the characters are limited to the dialogue that is created for them, but the programmers try to create a knowledge base appropriate for each character. A teen character may chatter about shopping and makeup, but will likely be lost if you wanted to talk about cognitive psychology. It would be impossible to build a program that was prepared to discuss everything, so chatbots are often built to redirect the conversation.

    Conversations include emotions: happy, mad, bored. It’s a challenge to project those from a computer, although programmers are trying to create “synthetic emotion” (Terdiman, 2014) so the characters feel more real. We also use tone, inflection, and body language to communicate. The impact of nonverbal communication is significant; this interaction is part of what makes the conversation. We use our top-down processing to interpret the meaning, which includes our relationship history with the person, the surroundings, and the topic we are discussing. We compare the words they are saying to how they are acting to gauge their truthfulness.

    I was surprised to learn of the strides that have already been made with the apps that are available now; Siri on my iPhone doesn’t seem to know what I’m talking about most of the time. She does sound bored when responding, so I guess that’s something!

    References

    Terdiman, D. (2014, March 1). ‘Talking Angela’ programmer talks hoaxes, AI mastery (Q&A). C|NET. Retrieved from http://news.cnet.com/8301-11386_3-57619752-76/talking-angela-programmer-talks-hoaxes-ai-mastery-q-a/

    Thompson, J. (2011, September 30). Beyond Words. Psychology Today. Retrieved from http://www.psychologytoday.com/blog/beyond-words/201109/is-nonverbal-communication-numbers-game

  3. Matthew Kaufmann

    I was particularly intrigued by the example of conversation. People don’t give much thought to the complexity of conversation because it comes so naturally to most of us. But if we really analyze it, there aren’t any fixed rules. When two people are talking, each turn in the conversation hinges on the previous one. And sometimes it doesn’t. Sometimes a person’s “internal dialogue” will be the impetus for a change of subject.

    For instance, if after talking about the Seahawks we followed with the question, “Do you have any plans?”, we could be referring to the Superbowl, or we could be talking about the present, as in, “Do you have any plans right now? Would you like to go somewhere with me?”

    Sometimes there are things that are implied in conversation, some of which is made clear through facial cues or gestures and others which are made clear simply by knowing somebody for a long time. Conversation is indeed complex.

    And, to take things a step further, I am a very tech savvy individual. Having worked in the IT field for over ten years, I’m pretty good at drawing parallels between the way computers and humans process information. It’s actually uncanny how similar it is. But, we still tend to feel that programming a computer to act like a human is artificial (hence artificial intelligence). So the one parallel I have not been able to draw is that of learning.
