As the parents sit down on the blanket to eat, the kids pull out the Super Soakers in the scorching heat. The group of parents open the picnic basket, content to have the kids drench each other, knowing that they will dry out and hopefully tire each other out in the process. No one notices the drone as it circles overhead in the Middle Eastern sun. The drone reviews the situation and determines that while the figures have guns, their height, along with the people sitting on the blanket, suggests that this is a family setting rather than a threat.
This is actually a real situation that is being considered by people I met at the recent South by Southwest (SXSW) Conference who are currently programming drones. The assessment is innocuous – unless the intelligence gathered informs an incorrect decision by a military installation far away, or worse, the drone itself is fitted with a weapon and makes the incorrect decision. As America attempts to protect itself from its own liberties, you can see a situation in the future where schools are patrolled by drones to protect children. The ridiculous notion of arming teachers could give way to armed drones in schools (also ridiculous).
It’s no secret that tech entrepreneur Elon Musk believes AI has the potential to open up a big scary can of apocalyptic worms. Speaking at SXSW, he proclaimed that “the danger of AI is much bigger than the danger of nuclear warheads – by a lot.” He continued: “Nobody would suggest we allow the world to just build nuclear warheads if they want, that would be insane. And mark my words: AI is far more dangerous than nukes.”
Surely the scenario of Skynet taking over and attacking the human race is just the stuff of SciFi films. Or is it?
Do machines learn?
Do computers actually make decisions, or are they really decisions based on the outcome of a predetermined set of parameters programmed by a person into the algorithm? In other words, can computers go beyond their programming to make their own choices? If an object is in front of a car and the calculation identifies that the object will be hit, then the response is to decelerate the car. In this case, did the car make the decision, or did the programming?
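The braking example above can be sketched in a few lines. This is a hypothetical illustration, not code from any real vehicle: the threshold and the function name are invented for the sake of the argument. The point is that every number the “decision” depends on was written down by a person in advance.

```python
# Illustrative only: the "decision" to brake is fully determined by
# parameters a programmer chose ahead of time.
SAFE_DISTANCE_M = 10.0  # hypothetical threshold, set by a person


def should_decelerate(object_distance_m, closing_speed_ms):
    """Return True when the programmed rule predicts a collision.

    The car never chooses anything: it evaluates a fixed rule
    against the numbers its sensors report.
    """
    return closing_speed_ms > 0 and object_distance_m < SAFE_DISTANCE_M


print(should_decelerate(5.0, 3.0))   # object close and approaching: brake
print(should_decelerate(50.0, 3.0))  # outside the threshold: carry on
```

On this view, the car did not make the decision; the programmer did, years earlier, when SAFE_DISTANCE_M was typed in.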
Essentially we built computers in order to exceed human ability – and some have. At the very least, we expect them to process something for us a lot faster, thus improving our lives by saving us time.
The device you are reading this article on has more computing power than the Apollo spacecraft that went to the moon. However, we now see it as routine technology. As computers get smarter they take on more of the things that we don’t want to do, with the promise that we will be freed up to do the things we want. However, as Musk highlights, what if they get too smart?
Intelligence is defined as ‘the ability to acquire and apply knowledge and skills’. No longer is a computer a screen attached to a circuit board acting as a large calculator pumped with knowledge (or access to it) in order to solve complex problems.
The advent of machine learning changes the paradigm. The machine goes beyond the programming and begins to learn with experience. Like a toddler, the more it learns, the more independent it becomes. The reality is that machines have been learning for decades and were able to beat humans at chess and checkers a long time ago.
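The contrast with the hard-coded rule can be made concrete with a toy sketch. The data and function below are entirely hypothetical, and real machine learning uses far more sophisticated methods; the point is only that here the rule is derived from experience rather than typed in by a programmer.

```python
# A toy sketch of "learning from experience": instead of a programmer
# hard-coding a braking distance, the machine estimates one from
# labelled past episodes. The data here is invented for illustration.
def learn_threshold(examples):
    """Derive a braking threshold from (distance_m, collided) pairs.

    The machine picks the largest distance at which a collision was
    still observed - a rule no one explicitly wrote down.
    """
    collision_distances = [d for d, collided in examples if collided]
    return max(collision_distances) if collision_distances else 0.0


experience = [(2.0, True), (5.0, True), (12.0, False), (20.0, False)]
print(learn_threshold(experience))  # threshold derived from data: 5.0
```

Feed the same code different experience and it produces a different rule, which is the sense in which the machine goes beyond its original programming.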
The advancement in recent times is their ability to understand and process what we are saying. Natural language processing is a genuine breakthrough: the advent of digital assistants has created a situation where we can now readily talk to a computer and expect a response. In time, the ability to have a conversation with a digital assistant will become natural, and conversational interfaces will allow robots to interact with us as SciFi movies predicted.
Robots driving cars
It is widely accepted that computers will exceed the ability of a human to drive a vehicle, as they can calculate the variables faster. Several speakers at SXSW mentioned that they were excited about what autonomous cars and trucks could do for safety and efficiency. Self-driving trucks have already been launched – in late 2017, delivering refrigerators from El Paso, Texas to Palm Springs, California.
As self-driving trucks become widespread (they are both cool and scary in the X-Men movie Logan), there will be a major impact on America’s three million truck drivers. Hemant Taneja from General Catalyst says “Imagine a future where a truck driver lives longer (due to advances in medical care), though is out of work due to autonomous vehicles. Can you imagine being out of work for 40-50 years?”
Are you comfortable with a truck being on the road that is driving itself? Consider that you are already comfortable sleeping while a plane flies itself on autopilot across the Pacific. The argument is that due to the number of sensors on a vehicle and the computer’s ability to process the information, it has a better chance than you of operating the machine in changing conditions. The best bit: it doesn’t get tired.
How far can AI go? And should we be scared?
The fear is that as computer power increases, and the artificial intelligence becomes too smart, we could lose control. It’s not too much of a leap to envisage a situation where insurance companies will use AI to insure only those who are healthy. Artificial intelligence will also revolutionise industries and fundamentally change society, resulting in a range of companies either adapting or going out of business. If we end up with a majority of cars driving themselves, what will happen to the car insurance industry?
As AI develops, Taneja argues that we should build products that understand the impact from the beginning: “If we knew that the combustion engine would lead to serious air pollution, we would probably have included the environmental cost of carbon into the price of the car”. But that supposes we can actually predict the ultimate outcome.
Elon Musk has warned, "The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are. This tends to plague smart people. They define themselves by their intelligence and they don't like the idea that a machine could be way smarter than them, so they discount the idea – which is fundamentally flawed." Zuckerberg countered this argument, saying Musk was being irresponsible, and adding that "in the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives".
We measure the capacity of robots and AI based upon our perception of what is possible. We cannot yet see beyond our own limited thinking. Maybe Zuckerberg is right and robots could solve major problems for society. Conversely, Musk’s view – while potentially alarmist or defeatist – may prove right. What if the artificial intelligence becomes so smart we cannot control it?
René Descartes proposed “I think therefore I am”. If a machine thinks it's alive, then it probably is, and the consequences are extraordinary. Maybe Musk is right and AI should be feared. Amara’s Law (Roy Amara, circa 1965) says that “we tend to overestimate the impact of a new technology in the short run, but we underestimate it in the long run”.
For me, the future is extremely bright, and the possibilities for tackling poverty and the serious environmental issues we face are huge. The likelihood of the robots taking over and making us their slaves is quite low.
In the short term, I am still trying to get my Alexa to talk to Harmony, so I can change the TV channel via voice. Maybe the machines are already smarter.