Blog: If AI became all-powerful like we fear, don’t worry — they’d ignore us.
Are Stephen Hawking’s and Elon Musk’s fears founded?
“Can machines think?” Let’s expand on this question, first asked by Alan Turing in the 1950s. Hollywood has told countless fictional disaster scenarios, over and over again, in which artificial intelligence (AI) takes over the world and destroys humanity.
AI has yet to progress to a stage that requires serious concern; it has definitely not taken control of humanity. However, technology has seeped into every aspect of our lives, and it controls many of those aspects even if we do not perceive it that way. We have drones that deliver our pizzas. We have cars that drive themselves. We accept AI as an innocuous part of our lives. The simplest example is our smartphones – Siri and Alexa are our robot guides.
AI applications are now so widespread that it is possible to produce solutions for almost every professional group. Medicine, education, transport, defence, farming, energy, data, natural sciences, finance, art, and law – any field that requires automation and well-organised data can be greatly simplified with AI. Is this development a good thing?
Historically, we’ve always been afraid of technology. The majority of the population are slow adopters, preferring to get the latest gadget only once it’s mainstream and popular.
The first science fiction novel ever written was Frankenstein, in 1818. It arose out of the Industrial Revolution and the idea that ‘we’ve made a horrible mistake; we’ve developed and created and invented these intelligent machines that are ridiculously powerful – what happens when we inevitably lose control of them one day?’ This fear is the driving narrative behind many pop culture films, from Ex Machina to I, Robot. The recurring message is always the same: when AI gains sentience, it will destroy humanity and the world will end in flames.
Hollywood is filled with warnings about the impending apocalypse whenever AI is concerned. I’m not too fussed; AI is much more helpful than it is detrimental. One reason I boldly claim that robots won’t take over the world and cruelly obliterate us is that if they ever do, no one can really blame me for it – we’ll all be gone at that point. I was having a discussion with one of my IT friends this morning, and he said that if we ever do develop extremely intelligent, sentient machines capable of emotion and deep learning, they wouldn’t annihilate us; they’d just ignore us. To a robot mind with all the knowledge and power in the world, we humans would be incredibly mundane and of no interest at all. The really interesting question is what these intelligent robotic minds would do amongst themselves – something far more interesting than crushing life on Earth. I find that thought both humbling and comforting.
One way of dividing what people can provide from what AI can provide is the brain versus the heart. Computers and robots can make logical functions faster and more accurate; they can simplify menial tasks and reduce the margin of human error in mechanical, input-related work. But can they do any of the emotional and compassionate tasks involved in being human? Can they make decisions that mix logic and emotion? In short, in the present day – no. Technology is not there yet. Although I agree it’s a potential concern for the future, that future is likely a century away. AI minds can probably process information as quickly or as well as we can if they are given the right parameters, and down the road they will learn how to do things from their different experiences. In terms of emotions, I don’t know whether a computer can learn or be programmed to have them. With respect to the heart, I’m not yet convinced that artificial intelligence, even if we do include emotions in it, could ever overtake humans, because that aspect of humanity is what separates us from computers.
One of the concerns is that we’re now training computers how to learn, and they will in turn teach other computers how to get smarter and smarter – but they won’t necessarily integrate the emotional aspects of humans’ lives, like love. That’s where the danger comes from. Stephen Hawking worried that once such systems are loose on the Internet, they could become vastly more intelligent than any group of humans combined.
While I don’t fear an apocalypse, I am more wary of the winner-take-all scenario: where there are technological advancements, there is a lot of money involved. Who is profiting from the growth of AI? Some people are making a large chunk of change that would otherwise have been split among many more people. Is the development of AI being put to good purposes?
As we accelerate with greater and greater advancements in technology, are we going to be displacing more and more people? Is this displacement a good thing, because it removes menial tasks, or will it have an unintended negative effect?
I think whether AI can learn to become sentient and mimic human emotions may be an issue of data. The more data and information that is uploaded, the more AI can learn and the more effectively it can mimic human emotions. Machines can learn from the data we provide. I have yet to see any technology that is that far advanced; however, I have no doubt we will see more emotive capabilities in the next 20 to 30 years.
Technology is neither good nor bad; whether it is helpful or detrimental is in our hands. We have the power to decide whether it is a tool that will help us or a weapon that will harm us, and I’m certain the cleverest among us will build in failsafe options against the small probability that robots decide to exterminate humans.