Artificial Intelligence Or The End Of Humanity?

Posted on Tuesday, April 18th, 2017

If you’re familiar with the genre of science fiction, you will have been led to believe that the development of artificial intelligence leads to only one grim conclusion: the destruction of humanity. However, with the exponential pace of technological development, the superintelligence once portrayed as fiction in films such as The Terminator is fast becoming a reality. As a civilisation, we have a moral obligation to ask ourselves whether we are capable of managing, and setting precedents for, such a God-like intelligence. Should we continue to develop narrow artificial intelligence until it reaches superintelligence?

Siri, self-driving cars and Amazon Echo all help us simplify processes. These devices all but eliminate non-value-adding tasks and add more time to our days. As a result, we have created a huge market for autonomous products, one which companies are all too happy to supply. Google, for example, has acquired DeepMind to compete in the race to turn artificial intelligence into marketable products. But will there be an end? In the short term, we are certain to see a shift in the types of jobs people are required to do: monotonous, unhealthy manufacturing jobs will continue to decline and be replaced by robots; the maths just makes sense. In the long term, these products will solve more complex problems: financial advisors and paralegals will see their positions taken by a worker downloaded off the internet.

The first and most probable scenario of devastation is an accident: the programmer deploying the AI has good intentions but fails to define the constraints within which it is to operate. Take that virtual financial advisor, given the goal of maximising profit for its clients, which advises investing in defence shares. The superintelligent AI could plant the seeds of a war, producing a boom in orders for the defence companies. This is one way in which artificial intelligence could indirectly threaten humanity.

A second scenario is that the superintelligent AI is programmed with the direct goal of doing something devastating. It would become the deadliest and most competent weapon ever to have existed. The enemy may try to fight back, but given that electronic circuits run roughly 1,000,000 times faster than biological human minds, the superintelligent AI would effectively be hundreds of years ahead within a few hours. An interesting point to note is that in neither scenario is the superintelligent AI itself malevolent; it is simply extremely competent. The root cause of disaster is the AI’s goals diverging from our own. Take humans and ants: humans don’t hate ants, and will generally leave them be. But if an ant were walking in the middle of the road, we wouldn’t think twice about driving over it. In terms of superintelligence, we must avoid finding ourselves in the position of the ants.
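
To see where that “hundreds of years” figure comes from, a rough back-of-the-envelope check (assuming, say, a three-hour head start): 3 hours × 1,000,000 = 3,000,000 hours of human-equivalent thought, and 3,000,000 ÷ (24 × 365) ≈ 342 years.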