The Road to Artificial General Intelligence

“Detective Del Spooner: Human beings have dreams. Even dogs have dreams, but not you, you are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a… canvas into a beautiful masterpiece?

[Robot] Sonny: Can *you*?”

— Exchange from the film I, Robot

This final chapter addresses how Artificial Intelligence systems might evolve into Artificial General Intelligence, using the past as an indicator of the future. It explains the difference between knowing that and knowing how. The brain is a good guide for how these systems evolve: across the animal kingdom, intelligence correlates strongly with the number of pallial and cortical neurons. The same has been true for Deep Learning: the more neurons a multilayer neural network has, the better it performs. Artificial networks are still a few orders of magnitude short of the human brain's neuron count, but we are marching toward it. At that point, we will hit the Singularity, a time when artificial intelligence might be hard to control.

The Past as an Indicator of the Future

Arthur C. Clarke famously observed that “any sufficiently advanced technology is indistinguishable from magic.” If you were to go back to the 1800s, it would be unthinkable to imagine cars traveling at 100 mph on a highway, or handheld devices that connect you with people on the other side of the planet.

Since the founding of the Artificial Intelligence field at the Dartmouth Conference, great strides have been made. The original dream many had for computers, that they would perform any intellectual task better than humans, is much closer than before, though some argue it may never happen or is still in the very distant future.

The past, however, may be a good indication of the future. Software is better than the best humans at playing Checkers, Chess, Jeopardy!, Atari, Go, and Dota 2. It already translates text between some language pairs better than the average human. Today, these systems improve the lives of millions of people in areas like transportation, e-commerce, music, and media, among many others. Adaptive systems help people drive on highways and streets, preventing accidents.

At first, it may be hard to imagine computer systems performing what were once considered cerebral tasks, like engineering systems or writing a legal brief. But at one time, it was also hard to imagine systems triumphing over the best humans at Chess. Some people claim that robots lack imagination and will never accomplish tasks that only humans can perform. Others say that computers cannot explain why something happens and never will.

Knowing That versus Knowing How

The problem is that in many tasks humans cannot explain why or how something happens, even though they know how to do it. A child knows that a bicycle has two wheels, that its tires hold air, and that you ride by pushing the pedals forward in circles. But this information is different from knowing how to ride a bicycle. The first kind of knowledge is usually called “knowing that,” while the skill of riding the bike is “knowing how.”

The two kinds of knowledge are independent of each other, though they can help each other. Knowing that you need to push the pedals forward can help a person learn to ride a bike. But “knowing how” cannot be reduced to “knowing that”: knowing how to ride a bike does not imply that you understand how it works. In the same way, computers and humans perform many tasks that require knowing how without knowing that. Many rules govern the pronunciation of English words; people know how to pronounce the words, but they cannot explain why. Conversely, a person with access to a Chinese dictionary knows that each word has a certain meaning, yet may still not understand Chinese. Computers, in the same way, perform tasks without being able to explain the details. Asking why computers do what they do may be like asking why someone swings a bat the way they do when playing baseball.

It is hard to predict how everything will play out and what will come next. But the advances of the different subfields of Artificial Intelligence, and their performance over time, may be the best predictor of what might be possible in the future. Given that, let’s look at the advances in the different fields of A.I. and how they stack up. From Natural Language Processing and Speech Recognition to Computer Vision, these systems keep improving steadily, with no signs of stopping.

A.I. advances at different benchmarks over time


Algorithms can solve problems like self-driving, winning Go games, and other such tasks only when given the correct data. For these algorithms to exist, it is essential to have properly labeled data. In research circles, significant efforts are underway to reduce the size of the datasets needed to create the appropriate algorithms, but even with this work, a need still exists for large datasets.

Dataset size comparison with the number of seconds that a human lives from birth to college graduation

Datasets are already comparable to what humans capture during their lifetime. Figure 32.2 compares, on a logarithmic scale, the size of the datasets used to train computers to the number of seconds a human lives from birth to college graduation. One of the datasets in the figure is Fei-Fei Li’s ImageNet, described earlier in this book. The last dataset in the picture is used by Google to create its model for reading street numbers on the façades of houses and buildings.
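To make the comparison concrete, here is a rough back-of-the-envelope calculation in the spirit of Figure 32.2. The 22-year lifespan and the ImageNet size below are approximate public figures I am assuming for illustration, not the exact values behind the figure:

```python
import math

# Rough, order-of-magnitude version of the comparison in Figure 32.2.
# Assumption: ~22 years from birth to college graduation.
seconds_alive = 22 * 365.25 * 24 * 3600  # seconds of lived experience

# ImageNet's labeled images (approximate public figure: ~14 million).
imagenet_images = 1.4e7

print(f"Seconds, birth to graduation: {seconds_alive:.1e} "
      f"(log10 ~ {math.log10(seconds_alive):.1f})")
print(f"ImageNet labeled images:      {imagenet_images:.1e} "
      f"(log10 ~ {math.log10(imagenet_images):.1f})")
```

On a log scale the two quantities sit within a couple of orders of magnitude of each other, which is the point the figure makes.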

In Machine Learning, an entire field of research studies how to combine models and how humans can fix and improve labeled data. But it is clear that the amount of data we can capture is already comparable to what humans take in over a lifetime.


But Machine Learning software does not depend solely on data. Another piece of the puzzle is computation. A way of analyzing the computational power of neural networks deployed today versus what human brains use is to look at the size of the neural network in these models. Figure 32.3 compares them on a logarithmic scale.

Comparison of the model size of a neural network and the number of neurons and connections of animals and humans

The neural networks shown in this figure were used for tasks such as detecting and transcribing images, and in models for self-driving cars. Figure 32.4 compares both the number of neurons and the number of connections per neuron; both are important factors in the performance of neural networks. Artificial neural networks are still orders of magnitude smaller than the human brain, but they are starting to become competitive with the brains of some mammals.
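A hedged sketch of the same kind of comparison made in Figures 32.3 and 32.4. The synapse and parameter counts below are rough published estimates chosen for illustration, not values taken from the figures, and parameters are only a loose analogue of biological connections:

```python
import math

# Very rough synapse-count estimates for biological brains.
synapse_counts = {
    "honey bee": 1e9,
    "mouse": 1e11,
    "human": 1.5e14,
}

# Parameter counts of two well-known networks, treated loosely
# as an analogue of connection count.
model_parameters = {
    "ResNet-50": 2.6e7,
    "GPT-2": 1.5e9,
}

# Print everything on a log10 scale, as the figures do.
for name, n in {**model_parameters, **synapse_counts}.items():
    print(f"{name:>10}: ~10^{math.log10(n):.0f}")
```

Even with generous assumptions, these models sit several orders of magnitude below a human brain's connection count, roughly the gap the chapter describes, while already overlapping with smaller animals.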

The world’s $1,000 computers now beat mouse brains, which are about one-thousandth of the human level

The price of computation has declined over time, while the total computing power available to society has increased. The amount of computing power one can buy per dollar has been increasing exponentially. In fact, in Chapter 16, I showed that the amount of computation used in the largest A.I. training runs has been doubling every 3.5 months. Some argue that, due to the constraints of physics, computing power cannot continue this trend, but past trends do not support this theory. Money and resources in the area have increased over time as well, and more and more people work in the field, developing better algorithms and hardware. And the human brain itself shows that this level of computation is physically achievable, since it operates within the same physical constraints.
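A 3.5-month doubling time implies startlingly fast growth. A quick sketch of the arithmetic, assuming the trend is a pure exponential with no physical or economic limits:

```python
def growth_factor(months: float, doubling_months: float = 3.5) -> float:
    """Multiplicative increase in compute after `months` months,
    given a fixed doubling time."""
    return 2 ** (months / doubling_months)

print(f"After 1 year:  ~{growth_factor(12):.0f}x")   # roughly an 11x increase
print(f"After 5 years: ~{growth_factor(60):.1e}x")   # over 100,000x
```

At that pace, a single year multiplies available training compute by about an order of magnitude, which is why the trend cannot run on indefinitely without hitting resource limits.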


With more computing power and improved software, A.I. systems may surpass human intelligence. The point at which these systems become smarter and more capable than humans is called the Singularity. From then on, these systems would be better than humans at every task. Some argue that once computers outperform humans, they can keep making themselves better: if we make them as smart as us, there is no reason to believe they cannot improve themselves, in a spiral of ever-improving machines leading to superintelligence.

Some predict that the Singularity will come as soon as 2045. Nick Bostrom and Vincent C. Müller surveyed hundreds of A.I. experts at a series of conferences, asking by what year the Singularity (or human-level machine intelligence) would arrive with a 10%, 50%, and 90% likelihood. The median responses were the following:

  • Median optimistic year (10% likelihood): 2022
  • Median realistic year (50% likelihood): 2040
  • Median pessimistic year (90% likelihood): 2075

So, by the median estimate, A.I. experts believe that in around 20 years, machines will be as smart as humans.

What does that mean for society?

If the Singularity is as near as many predict, and machines surpass human intelligence, achieving Artificial General Intelligence, the consequences for society as we know it are unthinkable. Imagine that dogs had created humans. Would dogs understand the effect such creatures would have on their lives? I doubt it. The same goes for humans creating something smarter than we are.

Optimists argue that with the arrival of the Singularity, problems previously deemed impossible will become tractable, and this superintelligence will solve many of society’s problems, perhaps even mortality. Pessimists say that as soon as we achieve superintelligence, human society as we know it will end, since there will be no role left for humans. The truth is that it is hard to predict what will come after the creation of such technology; many simply agree that the Singularity is near.

Source: Artificial Intelligence on Medium
