Let’s make a deal: Could AI compromise better than humans?

This article was quite interesting because I had never thought about computers being able to compromise before. It is more common for computers to compete, not to cooperate and compromise. In this article, researchers created a new algorithm called S# and found that machine compromise and cooperation appears not just possible, but at times even more effective than among humans. In the study, researchers programmed machines with the algorithm and ran them through a variety of two-player games to see how well they would cooperate in certain relationships.
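
The article does not describe S#'s internals, but the basic setup it reports, an algorithmic agent playing repeated two-player games with a partner, can be sketched roughly as below. This is only a minimal illustration using an iterated prisoner's dilemma payoff table and a simple tit-for-tat style agent, not the actual S# algorithm.

```python
# Minimal sketch of a repeated two-player game like the ones in the study.
# The payoff table and the "copy your partner's last move" agent are
# illustrative assumptions, not the actual S# algorithm.

PAYOFFS = {  # (my move, partner's move) -> (my payoff, partner's payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def tit_for_tat(history):
    """Cooperate first, then mirror the partner's previous move."""
    return "cooperate" if not history else history[-1][1]

def play_rounds(agent, partner, rounds=10):
    history = []  # list of (agent_move, partner_move)
    agent_total = partner_total = 0
    for _ in range(rounds):
        a = agent(history)
        # The partner sees the same history with the roles swapped.
        p = partner([(pm, am) for am, pm in history])
        a_pay, p_pay = PAYOFFS[(a, p)]
        agent_total += a_pay
        partner_total += p_pay
        history.append((a, p))
    return agent_total, partner_total

if __name__ == "__main__":
    # Two cooperative agents end up sharing the high mutual payoff.
    print(play_rounds(tit_for_tat, tit_for_tat))  # -> (30, 30)
```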


Another interesting thing was that the machines could talk during cooperation with humans. If human participants cooperated with the machine, it might respond with a “Sweet. We are getting rich!” or “I accept your last proposal.” If the participants tried to betray the machine or back out of a deal with it, they might be met with a trash-talking “Curse you!”, “You will pay for that!” or even an “In your face!” It makes them seem almost like humans!

The goal of this project is to understand the mathematics behind cooperation with people and what attributes artificial intelligence needs in order to develop social skills. I think someday humans and machines will be able to cooperate to solve problems together, and that will be great!

Source: https://www.sciencedaily.com/releases/2018/01/180119113526.htm

 

2 thoughts on “Let’s make a deal: Could AI compromise better than humans?”

  1. I think this is an interesting and new take on what robots will be able to do. I often think of robots simply responding to human commands with rigid yes-or-no answers, rather than having any flexibility. It is amazing to me that they were able to take math that has a right or wrong answer and turn it into something that can be changed in order to make things work better for both parties. Picking up on social cues is a large factor in making this a success. It’s what breaks the barrier between just functioning as a computer and becoming human-like and sociable, thinking on a higher level.
    Humans have complex brains that allow them to make inferences and connections on a deeper level than computers can. This article from Science Magazine talks about a different algorithm that is letting computers think more closely to how humans do. It is similar to the idea of the S# algorithm in that it is making those human-level connections. It accomplishes this by connecting a couple of programs together, like connecting neurons in the brain. An example described in the article was its ability to make inferences and connections from a couple of sentences (a toy sketch of this kind of inference follows after this comment):
    “Lily is a Swan. Lily is white. Greg is a swan. What color is Greg?” with the computer answering “white” correctly.
    It will be interesting to see where this kind of algorithm can take us. If we can continually develop more algorithms that let computers have more human-like qualities and work out the best compromises, our world could become much fairer and more efficient.

    http://www.sciencemag.org/news/2017/06/computers-are-starting-reason-humans
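
The system described in the linked Science Magazine article is a neural network, so the code below is not how it actually works. It is only a toy, rule-based sketch of the reasoning pattern in the swan example; the fact triples and the infer_color helper are invented for illustration.

```python
# Toy illustration of the inference in the quoted example.
# The real system in the linked article is a neural network; this
# rule-based version only shows the reasoning pattern, not the method.

facts = [
    ("Lily", "is_a", "swan"),
    ("Lily", "color", "white"),
    ("Greg", "is_a", "swan"),
]

def infer_color(name, facts):
    """Guess an individual's color from others of the same kind."""
    kind = next(v for s, r, v in facts if s == name and r == "is_a")
    for s, r, v in facts:
        if r == "color":
            s_kind = next((k for x, rr, k in facts if x == s and rr == "is_a"), None)
            if s_kind == kind:
                return v
    return None

print(infer_color("Greg", facts))  # -> "white"
```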

  2. I also never thought about machines cooperating with humans. This could lead to a wide range of possibilities between us humans and machines. This article helped foreshadow that world of possibilities. The S# algorithm discussed in your blog seems to be groundbreaking; allowing machines to be more honest than humans could make relationships in society last longer or even help in the workforce. One article I read about the S# algorithm made a smart comparison between the algorithm and Asimov’s famous book I, Robot, stating how the world ruled by robots in the book actually became a better place due to their ethics, and how S# is showing the possibility that machines can be more ethical than humans.
    After reading your blog, I was reminded of an article I read about how these artificial intelligence systems are trained to compromise. Reinforcement learning seems to be one of the main ways these systems are taught. With each action the system takes, it receives either positive or negative feedback; the system then learns from this and cuts back on the actions that cause negative feedback (a rough sketch of this feedback loop appears after the sources below). I look forward to more research on machines that compromise.
    Sources: https://www.inverse.com/article/40383-ai-better-at-compromise-than-humans
    https://futureoflife.org/2016/09/26/training-artificial-intelligence-compromise/
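
Since the comment above points to reinforcement learning as the training method, here is a minimal sketch of that positive/negative feedback loop. It is a generic bandit-style value update written only for illustration; the cooperate/defect actions, the feedback function, and the learning rate are assumptions, not details taken from either linked article.

```python
import random

# Minimal sketch of learning from positive/negative feedback.
# The actions, rewards, and learning rate are illustrative assumptions.

ACTIONS = ["cooperate", "defect"]
values = {a: 0.0 for a in ACTIONS}   # estimated value of each action
ALPHA = 0.1                          # learning rate
EPSILON = 0.1                        # chance of exploring a random action

def feedback(action):
    """Stand-in environment: cooperation tends to earn positive feedback."""
    return 1.0 if action == "cooperate" else -1.0

for step in range(1000):
    # Mostly pick the action that looks best so far, sometimes explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: values[a])
    reward = feedback(action)
    # Nudge the action's estimated value toward the feedback it produced,
    # so rewarded actions get chosen more and punished actions less.
    values[action] += ALPHA * (reward - values[action])

print(values)  # e.g. {'cooperate': ~1.0, 'defect': ~-1.0}
```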
