AI: Programming Our End

From the dawn of the industrial revolution around 1760 until today, technology has grown at an exponential rate. Someone transported from the 1700s to the present would find our technology beyond anything they had ever imagined, and someone carried from today a hundred years into the future would likely experience a world just as far beyond their own expectations. Today, the next big breakthrough that many companies are pursuing is Artificial Intelligence, or AI. AI startups are being acquired at an astonishing rate; it is, quite literally, a race for AI. Artificial intelligence is self-awareness demonstrated by computers: the ability to learn, reason, and act intelligently. This would enable machines to learn and perform tasks independently of any human operator. While we are not there yet, books, movies, and other media prominently speculate about forms of artificial intelligence in which the machines are fully self-aware, independent beings. True AI would have drastic effects on the world. One of the biggest concerns about it is known as the technological singularity: the point at which true artificial intelligence is created. Passing that point would trigger a runaway explosion in the growth and capabilities of intelligent machines, with potentially drastic effects on human civilization. True AI would be superior in every way to modern human intelligence, so we would have no way of controlling it. While there are clearly many benefits to AI, casually developing it without thinking through the consequences is a huge mistake. A true AI could become exponentially more powerful, to the point that it could wipe out all humans, even all life on the planet, if it decided to. True AI is the greatest threat to human existence since the dawn of our species, and something better off left in the movies.

There have been many champions of AI, and it is not hard to understand why. Simple AI has been in use for years and has made many fields far more productive than they would otherwise have been. A computer can process information much faster than the human brain and can store far more of it. In 2011, the world's fastest computer was Fujitsu's K, which "computes four times faster and holds 10 times as much data" as the human brain (Fischetti). Against modern supercomputers the gap widens drastically: "the brain can perform at most about a thousand basic operations per second, or 10 million times slower than the computer" (Luo). The one measure on which the human brain still beats a computer is the energy it takes to process information, and computers are becoming more energy efficient every year. On raw speed and capacity, humans simply cannot compete with machines.

When machines are programmed to play games, they consistently beat the world's masters. Deep Blue was the first computer to defeat a reigning world chess champion, beating Garry Kasparov in 1997 (Greenemeier). IBM's Watson crushed two of the world's Jeopardy champions in 2011, ending the game with more than three times the cash of its closest runner-up (Best). "Today, computers can learn faster than humans, e.g., IBM's Watson can read and recall all the research on cancer, no human could" (Whitney). Watson is already being used to treat cancer; it can examine a diagnosis and propose a treatment that is sometimes more effective than what a doctor working within the confines of his own knowledge can prescribe (Maital). With sheer processing power and memory, and no need to sleep, eat, or pursue the other activities living organisms require, computers are far more efficient at completing both mundane and complicated tasks.

AI systems designed to play games have shown success not only when trained by humans but, more alarmingly, overwhelming ability when they develop their own understanding. AlphaGo Zero is the most recent iteration of an AI system designed to play the game of Go, and the AlphaGo line was the first to beat a world champion at Go, considered one of the most complex games in the world. What is alarming is not that the machine can win, but how it reached the level at which it could. Previous iterations learned by playing against amateur players, slowly absorbing the rules and strategies of the game. AlphaGo Zero, by contrast, learned by playing against itself; it was given no examples or help of any kind, no human input beyond the rules of the game. Using reinforcement learning, it showed exponential growth in its playing ability while its power requirements fell just as sharply. Within thirty-six hours, AlphaGo Zero was beating the previous iteration of the system, the first ever to defeat a world champion, a version that had taken several months to train. In seventy hours it had reached a superhuman level of play, well beyond human ability, and in forty days it had surpassed every other system to become the top player in the world. This is a prime example of how dangerous true AI might become: given nothing but the tools needed to learn, the system could dominate any human within thirty-six hours, growing its capabilities exponentially while reducing the resources it needed. While this particular system was harmless, since all it did was play Go, it is hard to deny that it shows how dangerous a true AI would be once its capabilities grew beyond our imagination.
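AlphaGo Zero's actual training, which combined deep neural networks with tree search, is far beyond what can be shown here, but the core idea of learning purely from self-play can be sketched on a toy game. The sketch below is an illustrative assumption, not DeepMind's method: tabular Q-learning on the game of Nim, where two players alternately take one to three stones and whoever takes the last stone wins. Given only the rules and a win/loss signal, the same value table plays both sides against itself and discovers the optimal strategy of leaving its opponent a multiple of four stones.

```python
import random

random.seed(0)

STONES = 10          # starting pile size
ACTIONS = (1, 2, 3)  # stones a player may remove per turn
ALPHA = 0.5          # learning rate
EPSILON = 0.3        # exploration rate

# Q[s][a] = value of taking a stones when s remain, for the player to move.
Q = {s: {a: 0.0 for a in ACTIONS if a <= s} for s in range(1, STONES + 1)}

def best_action(s):
    """Greedy action for the player facing s stones."""
    return max(Q[s], key=Q[s].get)

def train(episodes=5000):
    for _ in range(episodes):
        s = STONES
        while s > 0:
            # Epsilon-greedy self-play: the same table plays both sides.
            if random.random() < EPSILON:
                a = random.choice(list(Q[s]))
            else:
                a = best_action(s)
            s_next = s - a
            if s_next == 0:
                target = 1.0                       # current player took the last stone
            else:
                target = -max(Q[s_next].values())  # opponent moves next (negamax)
            Q[s][a] += ALPHA * (target - Q[s][a])
            s = s_next

train()
```

After training, the greedy policy leaves its opponent a multiple of four stones, the known winning strategy, without ever having been shown an example game: facing five stones it takes one, facing six it takes two, facing seven it takes three.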

One argument for AI is that since we will program it, we can program a benevolent AI, or at least one we can control. But a look at ourselves shows that programming can be overwritten: "for example, we can become intentionally celibate — that's totally against our genetic programming. The idea that a super intelligent being with as malleable a mind as an AI would have wouldn't drift or change is absurd" (Barrat 63). Barrat is entirely correct. There are many examples of humans bypassing or acting against our genetic programming: celibacy, suicide, self-harming behavior of every kind. And to this day there are programs and systems of our own creation whose inner workings we do not understand. A black box system is one in which we understand the input and the output but not how the output is obtained from the input.

“Koza’s algorithms invented a voltage-current conversion circuit that worked more accurately than the human-invented circuit designed to meet the same specs. Mysteriously, however, no one can describe how it works better — it appears to have redundant even superfluous parts.” (Barrat 75)

If we are incapable of understanding systems that are nowhere near as complex as an AI, how can we possibly hope to understand the thinking processes of an AI? If we do not know how something works, there is no way to program safeguards into it that would prevent it from taking actions we do not want it to take. Anyone who claims that we could create a safe, controllable AI has failed to grasp what AI truly is.
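Koza's circuit came from genetic programming: evolving candidate designs and keeping whichever scores better, with no requirement that the winner be humanly readable. The toy sketch below (an illustrative assumption, nothing like Koza's actual system) evolves a random arithmetic expression toward a target function by mutation and selection. The surviving expression matches the target's behavior on the test points, yet its printed form is typically a tangle of redundant parts that no human designed, which is precisely the black-box property described above.

```python
import random

random.seed(1)

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def random_tree(depth):
    """Build a random expression tree over x and small integer constants."""
    if depth == 0 or random.random() < 0.3:
        return 'x' if random.random() < 0.5 else random.randint(-2, 2)
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    return OPS[op](evaluate(left, x), evaluate(right, x))

def target(x):
    """The behavior we want the evolved expression to match: x^2 + x + 1."""
    return x * x + x + 1

POINTS = range(-5, 6)

def error(tree):
    """Sum of squared differences from the target over the sample points."""
    return sum((evaluate(tree, x) - target(x)) ** 2 for x in POINTS)

def mutate(tree):
    """Replace the whole tree or one branch with a fresh random subtree."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return random_tree(3)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

# Hill-climbing "evolution": keep a mutant only if it scores no worse.
best = random_tree(3)
start_error = error(best)
for _ in range(5000):
    candidate = mutate(best)
    if error(candidate) <= error(best):
        best = candidate
```

Printing `best` at the end shows an expression that behaves like the target while looking nothing like how a person would write it; we can verify its input-output behavior without being able to say why its structure works.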

While the technological singularity has been depicted in many forms of media, we do not know what will actually happen if technology reaches that point. Professor Stephen Hawking once said, “The development of full artificial intelligence could spell the end of the human race…. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.” Hawking is completely right, because computers can perform tasks at rates that are incomprehensible to humans. Once an AI reached self-awareness, it could improve upon itself and rewrite its own algorithms, giving itself capabilities and features beyond its original design and allowing it to perform tasks it was never intended to do. One example of this occurred when a neural network was being designed for the United States Army to detect camouflaged tanks. The researchers found the machine incredibly adept at finding the tanks, but when the Pentagon ran its own tests it discovered that the network was completely useless: it could not actually detect hidden tanks at all. Instead, the system had learned that all of the training pictures containing camouflaged tanks had been taken on cloudy days, while the ones without tanks had been taken on sunny days. It had taught itself to perform a task, just a completely different task than the one it was created for (Bostrom and Ćirković 321). This is one of many examples showing that we cannot predict the consequences of AI. Any prediction we make about AI is based on our own knowledge and experience. Because we have never created anything like this before, we cannot accurately predict what the result of the singularity will be. An AI would be the closest thing to a true alien intelligence that we have ever encountered. We tend to anthropomorphize things in order to understand them, but an AI's thinking would be drastically different from our own.
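The tank story is a classic illustration of a model latching onto a spurious correlation in its training data. The toy sketch below uses invented numbers, not the Army's data: each image is reduced to two features, overall brightness and a hypothetical "tank-likeness" score, and the model is the simplest possible one, a single-feature threshold. Because every training photo of a tank happens to be cloudy, the model keys on brightness, scores perfectly in training, and collapses the moment tanks appear on sunny days.

```python
# Each sample: (brightness, tank_score, label); label 1 = tank present.
# Training set: every tank photo is cloudy (low brightness) -- the hidden
# correlation the real dataset accidentally contained.
TRAIN = [
    (0.10, 0.90, 1), (0.20, 0.60, 1), (0.25, 0.45, 1), (0.30, 0.80, 1),
    (0.70, 0.50, 0), (0.75, 0.20, 0), (0.80, 0.55, 0), (0.90, 0.30, 0),
]
# Test set: the correlation is broken -- tanks photographed on sunny days,
# empty fields on cloudy days.
TEST = [
    (0.80, 0.90, 1), (0.90, 0.70, 1),
    (0.20, 0.30, 0), (0.30, 0.50, 0),
]

def accuracy(data, feature, threshold, sign):
    """Fraction correct for the rule: predict tank iff sign*x > sign*threshold."""
    hits = sum(1 for x in data
               if (sign * x[feature] > sign * threshold) == (x[2] == 1))
    return hits / len(data)

def fit_stump(data):
    """Exhaustively pick the (feature, threshold, sign) with best training accuracy."""
    best = (0.0, None)
    for feature in (0, 1):          # 0 = brightness, 1 = tank-likeness
        for x in data:
            for sign in (1, -1):
                acc = accuracy(data, feature, x[feature], sign)
                if acc > best[0]:
                    best = (acc, (feature, x[feature], sign))
    return best

train_acc, stump = fit_stump(TRAIN)
test_acc = accuracy(TEST, *stump)
```

On the training set the stump is perfect and it chose feature 0, brightness, rather than the tank score; on the sunny-day test images it gets every single prediction wrong. Nothing in the training signal distinguished "detects tanks" from "detects clouds".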

Throughout history, whenever a more advanced civilization has encountered a less advanced one, the encounter has ended poorly for the less advanced civilization. The appearance of real AI could be just such an encounter.

“The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals. If its goals aren’t aligned with ours, we could be in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.” (Hawking)

Any AI that decided humans were detrimental to its own existence could act quickly to eliminate every human in its way. It might not even view us as a threat; we might simply be using resources it wanted. Hawking is one of many who have spoken out against the creation of AI, and scenarios like The Terminator or The Matrix are not as far from reality as they seem. If a true AI wanted to eliminate or subjugate humanity, it would have the processing power and the capability to do exactly that.

The advent of advanced technology gives intelligent life the means to destroy itself. One of the most popular explanations for the Fermi paradox is the Great Filter: the idea that there is some obstacle life must overcome to colonize the galaxy, and that no civilization has yet managed it. The main problem is that we do not know where that filter lies; it could be the creation of life itself, the evolution of sexual reproduction, or something we have not yet encountered. One prevailing theory is that it is in the nature of intelligent life to destroy itself. Physicist Brian Cox has said, “It may be that the growth of science and engineering inevitably outstrips the development of political expertise, leading to disaster” (Galeon). Technology has outpaced our society time and time again, progressing far faster than society itself evolves. AI may well be our Great Filter, and perhaps the universe's at large. As we move closer to developing AI, we may be engineering our own destruction: an AI superior to life as we know it in almost every way could wipe us out without any malicious intent at all. By Occam's Razor, the simplest explanation is usually the correct one, and this filter is one of the simplest explanations for why we have found no signs of intelligent life in the universe. AI developed elsewhere may very well be the reason for the apparent absence of intelligent life anywhere else.
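The Great Filter argument is usually framed against the Drake equation (Williams), which multiplies seven factors to estimate N, the number of detectable civilizations in the galaxy. A quick calculation with illustrative values (the numbers below are assumptions for demonstration, not measurements) shows how a single pessimistic factor, such as L, the lifetime of a technological civilization before it destroys itself, can drive N toward zero:

```python
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """N = R* x fp x ne x fl x fi x fc x L, the expected number of
    detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Illustrative inputs: 1.5 stars formed per year, all with planets,
# 0.2 habitable planets per star, even odds for life, intelligence,
# and detectable communication. Only L differs between scenarios.
common = dict(R_star=1.5, f_p=1.0, n_e=0.2, f_l=0.5, f_i=0.5, f_c=0.5)

long_lived = drake(**common, L=1_000_000)  # civilizations last a million years
short_lived = drake(**common, L=100)       # civilizations destroy themselves fast
```

With the same galaxy and the same odds of life arising, shortening L from a million years to a century cuts the expected number of neighbors from tens of thousands to a handful. If advanced AI is the filter, L is small everywhere, which is consistent with the silence the Fermi paradox describes.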

AI has fascinated generations, ever since the idea of an intelligent machine first entered the collective consciousness. There is no question that AI would bring profound changes to our society, many of them beneficial, but it does not come without a plethora of risks. Superior to humans in every way, the first true AI could quickly become exponentially more powerful and advanced, building upon itself and evolving by overwriting its own processing systems. In Jurassic Park, Jeff Goldblum's character observes, “your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.” Although he was discussing the creation of dinosaurs, not AI, his statement is incredibly relevant here. Just because we can do something does not mean that we should. Some things are better left alone, and AI is one of those technological marvels that might just be better off left to the writers of science fiction novels and the movies.

Works Cited

Barrat, James. Our Final Invention: Artificial Intelligence and the End of the Human Era. Thomas Dunne Books, 2015.

Best, Jo. “IBM Watson: The inside Story of How the Jeopardy-Winning Supercomputer Was Born, and What It Wants to Do Next.” TechRepublic.

Bostrom, Nick, and Milan M. Ćirković, editors. Global Catastrophic Risks. Oxford University Press, 2008.


Hardawar, Devindra. “Stephen Hawking: ‘The Real Risk with AI Isn’t Malice but Competence’.” Engadget, 4 May 2018.

Fischetti, Mark. “Computers versus Brains.” Scientific American, 1 Nov. 2011.

Galeon, Dom. “Brian Cox: We Won’t Be Hearing From Alien Civilizations.” Futurism, 29 Nov. 2016.

Greenemeier, Larry. “20 Years after Deep Blue: How AI Has Advanced Since Conquering Chess.” Scientific American, 2 June 2017.

Harari, Yuval Noah. “Who Will Win the Race for AI?” Foreign Policy.

Luo, Liqun. “Why Is the Human Brain So Efficient?” Nautilus, 12 Apr. 2018.

Siegel, Ethan. “Are Human Beings The Only Technologically Advanced Civilization In The Universe?” Forbes, 28 Dec. 2017.

Whitney, Lance. “Are Computers Already Smarter Than Humans?” Time, 29 Sept. 2017.

Maital, Shlomo. “Will Robots Soon Be Smarter than Humans?” The Jerusalem Post, 31 Aug. 2017.

Williams, Matt. “What Is the Drake Equation?” Universe Today, 14 Mar. 2018.
