The Dangers of Artificial Intelligence

Artificial intelligence is being used to do a lot of great things in the world, from helping create better music playlists to revolutionizing autonomous cars. As AI is used for more and more things, however, the potential for misuse increases. Elon Musk was one of many business and technology leaders who signed a letter to the UN urging it to add autonomous weapon systems (weapons that can identify and attack targets without any human intervention) to the list of banned weapons that countries cannot possess.

There is a new danger, as AI becomes more mainstream, that it will be used for harmful purposes, such as creating weapons that are far quicker and more lethal than anything we currently have. These new weapons could distinguish enemies from friendlies and choose whom to target on their own. This is a dangerous road to go down because it puts a great deal of power into the hands of computers.

The UN has begun to hold meetings on how to address the issue of AI-backed weapons and warfare. Musk himself is a big supporter of AI, but he has warned of its dangers just as much as he has praised it. He has many plans for AI given his role at Tesla, but he has been strongly opposed to giving it a place in war and weapons.

Personally, I think the UN needs to place some restrictions on AI in warfare. The effects of not doing so could be devastating as future conflicts play out. I am fascinated by all of the good that AI has brought to the world and the amazing things it has allowed us to do, like self-driving cars, but warfare is definitely a place where I think that creativity needs to be limited and closely watched.

It will be very interesting to see if and how the UN chooses to restrict or limit the use of AI in weapons. There are very real dangers that can come from AI, and this is a prominent one that needs to be addressed sooner rather than later.

Source: https://www.yahoo.com/news/elon-musk-speaking-against-artificial-145718172.html

7 thoughts on “The Dangers of Artificial Intelligence”

  1. With artificial intelligence becoming more and more relevant in today’s world, the issues that come with it will rise too. This article is well written and makes a lot of strong points. To elaborate on what was written, the possibility of AI causing more harm than good is very real. AI is great until it becomes smarter than the people who create it. An example of this is the Facebook AI project that went wrong a few weeks back. The AI creators were trying to build a chatbot that could negotiate with people over the internet without giving the impression that it was a bot. This was a success; however, the bots started to create their own, more efficient language that no one understood. This is just the beginning of stories where AI has become smarter than us. We most definitely have to be extremely careful when designing and implementing these AI systems. To come full circle: putting AI in charge of online chatting had a bad outcome, and we can only imagine what would happen if it were in charge of our weapons.

    The link below is the article about the Facebook AI.

    http://gizmodo.com/no-facebook-did-not-panic-and-shut-down-an-ai-program-1797414922

  2. I agree with this article that the techniques of artificial intelligence are becoming more advanced and successful nowadays. Some factories have already started using robots and other machines to replace human work. This is a disadvantage for people because, even though they may not be as efficient as the machines, some lower-income citizens need these jobs to earn money to support their families. If a robot takes away their job, they cannot support their family. Google created an artificial intelligence to compete against the best Chinese Go master, and the AI even defeated the best Go master in history. This shows that it can beat us in many different ways. Facebook also built AI bots and had two of them communicate with each other. However, after talking for a while, the two bots started creating their own language that we cannot even understand, and Facebook quickly shut them down to prevent anything dangerous from happening. To sum up, as the world becomes more technological, it may also become more dangerous.

    https://www.youtube.com/watch?v=e7tCq2NG-ts

  3. Your article about artificial intelligence really opened my eyes to potential dangers I hadn’t thought of before. I never saw the connection between the technology that is so prized and sought after in Teslas and potential military threats. You mentioned Elon Musk being an advocate for banning automated weapons, and I found an article, linked below, that mentions four other intelligent people who agree with Elon Musk on banning automated weapons. These four people are Stephen Hawking, Nick Bostrom, James Barrat, and Vernor Vinge. Stephen Hawking even mentions that AI could be so destructive it could end the human race. I think the topic of AI is an important thing to be aware of, and I completely agree with your stance on being extremely cautious when dealing with this topic and the military.
    http://time.com/3614349/artificial-intelligence-singularity-stephen-hawking-elon-musk/

  4. As I was reading this article, the Facebook AI that created its own language came to mind. Although it doesn’t have anything to do with war per se, it shows how uncontrolled AI use could become. Many people believe that if we begin to use AI for everything, it will eventually take over the world (ahh!!). Personally, I don’t believe this. I do believe, as said in this article, that AI could definitely malfunction and become dangerous, especially in warfare. There is no guarantee that every AI will work 100% of the time, so Elon Musk’s concerns are very valid. Hopefully we don’t fall into the hands of AI destruction in the future, as many companies are beginning to want to utilize AI.

    https://www.forbes.com/sites/tonybradley/2017/07/31/facebook-ai-creates-its-own-language-in-creepy-preview-of-our-potential-future/#2298c3c1292c

  5. In response to these comments made by Elon Musk, an article on NPR makes an interesting case about our fear of AI in today’s society. The article poses the question “Is AI more threatening than North Korean missiles?”, to which it seems to answer no. However, one interesting point the article discusses is the idea that the ideal AI would be as competent as the human mind. For this to be possible, the human mind would first have to be similar to a computer, with inputs and outputs, which it does not appear to be. We would also need to understand the human mind for this creation to be possible. In a world where we understand things we never could have imagined explaining, the way the human mind works still eludes us; we cannot explain the workings of the mind in any complete form. Therefore, extreme cases of AI are not the threat to today’s society that they are made out to be. Related to the issue in your post is the threat of the Type 2 “semi-intelligent” AIs and our reliance on them in today’s society. It is the dependence on AIs and the lack of a human component that makes them dangerous. The true danger of these autonomous weapon systems is that they have no moral grounding or human factor, and therefore cannot be fully relied upon. Yet in today’s society we consistently rely on such technology. The more immediate problem may be our dependence on AI and how we can keep it from completely taking over the daily tasks presented to us. As the NPR article suggests, the moment someone figures out how to shut off such intelligence, we could be in trouble.

    Noë, Alva. “Is AI More Threatening than North Korean Missiles?” NPR, 18 Aug. 2017, http://www.npr.org/sections/13.7/2017/08/18/544061771/is-ai-more-threatening-than-north-korean-missiles. Accessed 28 Aug. 2017.

  6. Drew, after reading your post, I realize I had not really thought about the dangers of artificial intelligence until now. I strongly agree with your point that the UN “needs to place some restrictions on AI and warfare.” There are always good and bad sides of AI: the good includes things like red-light cameras and, as you said, Drew, self-driving cars; the bad involves weapons, which is what the United Nations really has to focus on, and I couldn’t agree with you more. We are letting technology take over, it is getting way out of hand, and pretty soon the world could be a disaster if this pace keeps up.

  7. I both agree and disagree that AI can create dangers. Looking at the topic this way makes me realize that most things that are great can also be used in very negative ways. If we start to let computers take over human tasks, we may not be able to control the power they will have over society, and we do not want to create problems that are not already there. Our weapons should be 100% under our control. On the other hand, if there is a way we could identify the enemy and have our computers do the right thing, we should take that opportunity; for example, we would need to almost “train” our computers to tell right from wrong, and then we would not have this kind of issue. AI is an amazing way to start moving toward efficiency, but we should not reach the point where a computer does a job that a human really needs to have; years from now, we should still have a human president.
