Over the summer, I took a course on machine learning. While you might be picturing Terminator, what I actually learned is that machines can't learn anywhere near as efficiently as humans can; what they can do is process data sets far too large for any human to work through.
For one thing, to teach a machine even a basic relationship, you often have to give it at least a few dozen examples. To model something more complex (like y = x^2), it can take many more examples before the computer converges on a good approximation. Still, machine learning is a field advancing at a breakneck pace, and it is powering things like the string of robotics companies Google acquired last December.
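To make the y = x^2 point concrete, here is a minimal sketch of a machine "learning" that relationship from a few dozen noisy examples. The tooling is my own illustrative choice (Python with NumPy and a least-squares polynomial fit), not anything specific from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# A few dozen noisy examples drawn from the "unknown" relationship y = x^2.
x = rng.uniform(-3.0, 3.0, size=36)
y = x ** 2 + rng.normal(scale=0.1, size=36)

# Fit a degree-2 polynomial by least squares; the machine has to recover
# the coefficients (roughly [1, 0, 0] for 1*x^2 + 0*x + 0) from examples alone.
coeffs = np.polyfit(x, y, deg=2)

print("learned coefficients:", np.round(coeffs, 2))    # ~ [1. 0. 0.]
print("prediction for x = 5:", np.polyval(coeffs, 5))  # ~ 25
```

With only a handful of examples, or with noisier data, the recovered coefficients drift away from the true relationship, which is exactly the "it takes a lot of examples" problem described above.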
So what are the applications of learning machines? Right now, we still have to tell them how to learn and what to learn; each system is trained for a narrow, specific task, so for the moment there's little danger. But suppose we managed to build a computer that could teach itself. What would happen then? This is something billionaire entrepreneur Elon Musk has been very vocal about: he fears Terminator-esque consequences, and those become conceivable for a computer that can modify its own programming and learn from its interactions.
There's no telling what such a computer might evolve to learn. If it learned the way humans do, it might slowly wipe out the biodiversity of the earth (hint: that biodiversity would probably include us, except that we'd fight back, so it might have to take us on directly).
We can look to sci-fi like Terminator for a few possible outcomes, but it's very difficult to say what the outcome would actually be. The conceivable outcomes are as follows:

- humans never manage to make a sentient being;
- humans make sentient beings that live in harmony with us;
- humans make sentient beings that become slaves to humans (think I, Robot at the beginning of the movie);
- humans create sentient beings that turn hostile once they begin to compete with us for resources;
- humans create sentient beings that quickly outpace our existence and our world (think Her).
While some of these outcomes are acceptable, we need to weigh the negative possibilities and always examine the potential dangers of the technology we are exploring. That is true of any technology, but many seem to believe artificial intelligence is particularly dangerous. Critical thinking is a crucial skill for researchers, and ethics and morals should always play a large role in the decisions made within a research lab.