Potential Dangers of Artificial Intelligence

October 28, 2019

From virtual assistants like Siri to self-driving cars, Artificial Intelligence is advancing rapidly. While science fiction books and movies have taught us to expect the technology to look like vaguely humanoid robots, it actually encompasses everything from Google's search algorithms and IBM's Watson to autonomous weapons.

Narrow vs General AI

The proper term for today's Artificial Intelligence is Narrow, or Weak, AI, because it is designed to perform a single narrow task, such as facial recognition, internet searches, or driving a vehicle, and nothing else. While this type of AI may outperform humans at finding sites that offer a huge range of real money pokies, for example, the long-term goal is to create General, or Strong, AI. This Artificial General Intelligence (AGI) would outperform human beings at almost every cognitive task.

The Risks of AGI

The majority of researchers agree that a superintelligent AGI is unlikely to display human emotions like love or hate, and there is no reason to expect it to become intentionally kind or malicious. The risk, experts say, is most likely to come from one of two scenarios.

The first is that the AI is programmed to do something devastating. Autonomous weapons are an example: systems explicitly programmed to kill. In the wrong hands, such weapons could easily cause mass casualties. An AI arms race could also inadvertently escalate into an all-out AI war, with the same catastrophic result. And because these systems would likely be designed to be nearly impossible to switch off, so that an adversary cannot simply thwart them, it is not much of a stretch to imagine humans losing control of the situation.

The second is that the AI is programmed to do something beneficial but develops a destructive method for achieving its goal. This can happen whenever we fail to fully align the AI's goals with our own, which is harder than it sounds. Ask an obedient intelligent car to take you to the airport as quickly as possible, and it may get you there covered in vomit and greeted by police helicopters wanting a word about speed limits. The car did not do what you wanted; it did literally what you asked. In the same way, if a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc on our ecosystem as a side effect, and view human attempts to stop it as a threat to be dealt with.
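To make that concrete, here is a minimal Python sketch of objective misspecification. The route data and the naive_objective/aligned_objective functions are entirely hypothetical, invented purely for illustration; the point is simply that an optimizer pursues the objective it is given, not the one we meant.

```python
# A toy sketch of objective misspecification, using hypothetical route data.
# Nothing here models a real self-driving stack; the point is only that an
# optimizer does literally what it is asked.

routes = [
    # (name, travel_minutes, max_speed_kmh, legal_limit_kmh)
    ("highway, legal speed", 45, 100, 100),
    ("highway, reckless",    25, 180, 100),
    ("back roads",           60,  60,  80),
]

def naive_objective(route):
    """'As quickly as possible' taken literally: minimize time, nothing else."""
    _, minutes, _, _ = route
    return minutes

def aligned_objective(route):
    """Minimize time, but rule out routes that break the speed limit."""
    _, minutes, speed, limit = route
    return minutes if speed <= limit else float("inf")

print(min(routes, key=naive_objective)[0])    # -> "highway, reckless"
print(min(routes, key=aligned_objective)[0])  # -> "highway, legal speed"
```

The constraint we forgot to state changes the answer entirely, which is the whole alignment problem in miniature: the unstated parts of what we want are invisible to the system unless we spell them out.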

The Concern Is About Competence

Scientists, then, are not so much worried that AGI will turn malevolent as they are concerned about its competence. A superintelligent AI will be extremely good at accomplishing its objectives, and if those objectives are not aligned with ours, we have a problem.

You probably have no particular issue with ants and don't go out of your way to step on them. But suppose you're heading up a hydroelectric green-energy project and the region you need to flood happens to be full of anthills. Too bad for the ants. A key goal of Artificial Intelligence safety research is to ensure that humanity is never put in the position of those ants.