If you ask this question to the owner of an AI startup, the answer may be yes: yes, we're doing real AI, and our products can perform complicated tasks just like humans! If you ask an AI engineer, however, they may say: no, we're not even close.

Both answers are correct in a sense. Artificial intelligence technologies have developed amazing capabilities, accomplishing tasks just as humans or other intelligent creatures do, and in some cases doing an even better job. With the massive computing power of modern hardware (GPUs, TPUs, NNPs, etc.), AI models can identify objects and motion in pictures and videos, thanks to the invention of convolutional neural networks (CNNs) together with other deep network architectures. This can help, and is already being used, in self-service shopping, autonomous driving, violence and crime detection, and much more.

We've also witnessed significant progress in voice recognition and natural language processing, giving us voice assistants such as Alexa, Google Assistant, and Xiao Ai Tong Xue. More interestingly, AI models have been trained to play complicated strategy games and can beat humans, although training them takes a humongous amount of machine time.

Google's AlphaStar playing StarCraft. The outputs of the neural network's activation layers are visualized in the graphs.

So, given all of the above, have we actually built artificially intelligent algorithms and machines? Unfortunately, we can't say yes to this question yet.

All state-of-the-art artificial intelligence agents require enormous amounts of training data as well as computing power. Take the example above: it took "up to 200 years of real-time StarCraft play" per agent to train AlphaStar. Typical state-of-the-art image detection or NLP models likewise require millions of training inputs for the base model. At this level of training, it almost seems we don't have an intelligent agent, but rather a system with extremely high-dimensional flexibility that simply overfits a little to every scenario it has seen before.

The biggest difference between this "thing" and a human may be that it cannot think logically and creatively derive new solutions. Humans can reason and develop new solutions after only a few rounds of trial and error. These algorithms cannot. Depending on the case, it takes at least hundreds or thousands of additional training rounds for them to adapt even slightly to a new situation. They are also poor at discovering new solutions once the optimization settles into a local minimum.
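To make the local-minimum point concrete, here is a minimal toy sketch of my own (not taken from any of the systems above): plain gradient descent on a one-dimensional function with two basins. Started near the shallow basin, the optimizer settles into the worse solution and has no mechanism to "creatively" jump out.

```python
# Gradient descent on f(x) = x^4 - 3x^2 + x, which has a shallow local
# minimum near x ≈ 1.13 and a deeper global minimum near x ≈ -1.30.

def f(x):
    return x**4 - 3 * x**2 + x

def grad_f(x):
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x0, lr=0.01, steps=1000):
    """Follow the negative gradient from x0; no exploration, no restarts."""
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

x_local = gradient_descent(x0=1.0)    # starts in the shallow basin, stays there
x_global = gradient_descent(x0=-1.0)  # starts in the deep basin, finds the better optimum

print(x_local, x_global)
print(f(x_local) > f(x_global))  # the solution found depends entirely on the start
```

Whether the optimizer finds the good solution is decided by where it happened to start, which is exactly the kind of brittleness the paragraph above describes.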

For all we know, this may be a new form of intelligence that could eventually outperform human beings. But at least for the time being, the machines we've trained are far from truly intelligent. AlphaStar outperforms human players partly because it has a major advantage that makes the game unfair: it sees the whole map all the time, while a human player has to move the camera around and sees only part of the map at a time. Take another example: autonomous cars are prone to simple diversions. You can fool one by placing white tape on the road, and it will drift into the opposite lane; or by placing a slightly altered stop sign, which can completely fool the driving system.
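The altered-stop-sign trick is an instance of an adversarial perturbation. Here is a toy sketch of my own (the weights, input, and step size are all made up for illustration; this is not the actual stop-sign attack): a fast-gradient-sign-style perturbation against a fixed linear classifier. A tiny, structured change to the input, loosely analogous to a few stickers on a sign, flips the model's decision even though no feature moves by more than 0.1.

```python
import numpy as np

# Hypothetical linear classifier: class 1 if w.x + b > 0, else class 0.
w = np.array([0.5, -1.0, 2.0, 0.3])  # assumed weights, for illustration only
b = -0.2

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([1.0, 0.5, 0.2, 0.1])   # "clean" input, score = 0.23, class 1

# The gradient of the score w.r.t. the input is just w, so stepping
# each feature by eps against sign(w) lowers the score as fast as
# possible under a per-feature budget of eps.
eps = 0.1
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # prints "1 0": the decision flips
```

The perturbation is imperceptibly small in each coordinate, yet because it is aligned against the model's weights it moves the score across the decision boundary, which is the same failure mode exploited by tape on the road or stickers on a sign.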

One may still remember the restaurant reservation demo by Google Assistant, which sounded so natural that no one could tell it apart from an actual human. But remember: this was done within a very limited domain and use case, was again likely trained on an enormous amount of data, and the demo itself had been carefully tuned beforehand.

All in all, these AI agents do not yet have the ability to efficiently integrate various kinds of information and do any "reasoning". All they do is ingest every available source of information and produce an action based on the best historical practice, which is not a bad idea, but is still far from human intelligence. They are not adaptive, and more importantly, there is no automated multi-step reasoning. Efforts have been made to build reasoning into the algorithms (e.g. https://papers.nips.cc/paper/7082-a-simple-neural-network-module-for-relational-reasoning.pdf, https://openreview.net/pdf?id=rJgMlhRctm), but game-changing breakthroughs are yet to come.

This article is a personal opinion and open to any criticism or comments.

Source: Artificial Intelligence on Medium