
Blog: Deep Learning Black Box — The problem of interpretability


Opaque systems

The results provided by deep artificial neural networks — those famous deep learning algorithms — are extremely satisfying, and are at the origin of extraordinary progress in artificial intelligence. But the fact remains that the way in which the deep layers of these networks achieve those results is still opaque to their designers themselves: it is impossible for them to explain how these deep layers work. They know the inputs, they know the outputs, but what happens in between remains a mystery. This is the black box effect known from Cybernetics (and from its eldest daughter, Systems Thinking).
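To make the “known inputs, known outputs, opaque middle” point concrete, here is a minimal sketch with a made-up toy network (not any real system): every intermediate activation can be read out exactly, yet nothing in those numbers tells us what they mean.

```python
# Minimal illustrative sketch: the model and data below are invented for this post.
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network with arbitrary (here: random) "trained" weights.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

x = rng.normal(size=(1, 4))        # the input: fully known
h = np.maximum(0, x @ W1 + b1)     # the hidden layer: fully observable...
y = h @ W2 + b2                    # the output: fully known

print("hidden activations:", h)    # ...yet nothing says what each hidden unit encodes
```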

To keep it short, here are two concrete examples:

1. Olivier Bousquet, Head of Machine Learning at Google’s Zurich research lab, describes how Google Translate works:

It is a huge neural network that has taught itself to switch from one language to another. In some cases, it manages to be better than human translators. The other surprising thing is that it was only taught a few pairs of languages, and it deduced the others from those pairs. It created a kind of Esperanto of its own. But we still can’t decipher it properly.

(In Le talon d’Achille de l’intelligence artificielle — Benoît Georges, pdf document).

2. In August 2017, several media outlets published alarming articles claiming that Facebook researchers had urgently “disconnected” an AI program that had invented its own language without being trained to do so. The program, described in a research article from FAIR (Facebook AI Research), Facebook’s artificial intelligence laboratory, involved two chatbots — artificial intelligence programs designed for dialogue — capable of negotiating. To do this, the program was “trained” on many examples of human-to-human negotiations. It produced such satisfactory and effective results that it succeeded in fooling humans who thought they were talking to one of their peers. It also managed to conduct tough negotiations with humans, and in some cases it even “pretended” to be interested in an item in order to concede it later for tactical purposes. We can thus say that it passed the famous Turing test with flying colours.

However, the program gradually “invented” a modified, English-based language of its own, because it “was not rewarded for respecting the structures of English”; it was rewarded only for its ability to negotiate. And contrary to those reports, the program was not disconnected. Moreover, this case of machines inventing languages of their own is not new in the world of AI.

Personally, these two examples of “cyber-linguistics” immediately reminded me of Chomsky’s Universal Grammar with its two components: deep structure and surface structure. Chomsky’s hypothesis is that children so easily master the complex operations of language because they have an innate knowledge of certain principles that guide them in developing the grammar of their language. In other words, Chomsky’s theory is that language learning is facilitated by a predisposition of our brains for certain structures of language. But for Chomsky’s theory to hold true, all of the world’s languages must share certain structural properties. Chomsky and other linguists of the generativist school of the 1960s and 1970s managed to show that the few thousand languages of the planet, despite their very different grammars, share a set of basic syntactic rules and principles. This “universal grammar” is believed to be innate and embedded in the neural circuitry of the human brain.

(http://www.lecerveau.mcgill.ca/flash/capsules/outil_rouge06.html)

(N.B.: More recently, Chomsky has timidly walked back the existence of universal grammar.)

This concept of universal grammar dates back to the observations of Roger Bacon, a 13th-century Franciscan friar and philosopher, according to whom all the world’s languages share a common grammar. According to Chomsky, the deep structure consists of innate and universal principles of grammar on which the world’s languages are based, despite the great differences in their surface structure. Remember that the deep structure of a language is realized in surface structure through a series of transformations that give rise to comprehensible sentences.

Thus arises the problem of the interpretability of these black boxes, which would possess a deep structure that is not understandable by humans (hardly surprising, since computers’ machine language is just as incomprehensible to humans). We could consider these black boxes as the equivalent of the human subconscious which, in creativity, problem-solving or decision-making, would involve combinations of ideas that collide and interact in such a way that, without the individual’s knowledge, the best of them selectively combine and lead to the Eureka moment.

Some programs are already underway at organizations working in this field, including DARPA’s new Explainable Artificial Intelligence (XAI) program, which aims to create machine learning technologies that produce more explainable models while maintaining a high level of performance, and that enable humans to understand, genuinely trust and effectively manage the emerging generation of AI tools.

(In Le talon d’Achille de l’intelligence artificielle — Benoît Georges, pdf document).
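The XAI description above stays at the level of goals. As a concrete, purely illustrative example of what “more explainable” can mean in practice, here is a minimal sketch of one widely used post-hoc technique, input-gradient saliency, applied to a made-up PyTorch model; it is not DARPA’s method, just one member of the family of explanation tools such programs study.

```python
# Hedged sketch: input-gradient saliency on a hypothetical model and input.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A made-up classifier standing in for any trained deep network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

x = torch.randn(1, 10, requires_grad=True)   # a single (invented) input example

logits = model(x)
score = logits[0, logits.argmax()]           # score of the predicted class
score.backward()                             # gradient of that score w.r.t. the input

saliency = x.grad.abs().squeeze()            # larger value = more influential feature
print("features ranked by influence:", saliency.argsort(descending=True).tolist())
```

Techniques of this kind rank the input features that influenced a single prediction; they do not, by themselves, reveal what the hidden layers compute, which is precisely the gap discussed here.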

In view of this thin analogy between deep learning’s black boxes and the human subconscious, the question I have concerning the opacity of these networks is the following:

Wouldn’t it be possible to create a Deep Meta-Cognition process for each category of Deep Learning network — facial recognition, object recognition, Machine Translation, autonomous cars, etc. — and feed it with the methods of each of these networks in order to identify a common pattern and thus try to understand the deep functioning of these deep networks?

This is not a new idea: we can trace it back to 1979 and Donald Maudsley’s work on meta-learning, the process by which learners become aware of, and take control of, their habits of perception, learning and reflection. John Biggs said much the same in 1985. In the context of AI, meta-learning would be the machine’s ability to acquire versatility in the process of acquiring knowledge. Meta-learning methods already exist in AI. It would therefore be wise to use them to feed a Deep Digger that tries to understand how the black box of each kind of deep learning network works, and so to achieve good interpretability.
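As a simplified, hypothetical sketch of what such a Deep Digger could start from: collect the hidden-layer activations that several independently trained networks produce on the same inputs, then measure how similar their internal representations are, here with linear CKA (Centered Kernel Alignment). High similarity across networks would hint at a shared “deep structure” worth digging into; the networks and data below are invented for illustration.

```python
# Hypothetical "Deep Digger" starting point: compare internal representations
# of several networks with linear Centered Kernel Alignment (CKA).
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two activation matrices of shape (n_examples, n_features);
    1.0 means identical representations up to rotation/scaling, 0.0 means unrelated."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 16))   # the same probe inputs shown to every network

# Stand-ins for the hidden activations of three independently trained networks.
activations = [np.tanh(inputs @ rng.normal(size=(16, 32))) for _ in range(3)]

for i in range(3):
    for j in range(i + 1, 3):
        print(f"network {i} vs {j}: CKA = {linear_cka(activations[i], activations[j]):.3f}")
```

If the representations of independently trained networks for the same task turned out to be highly similar, that shared structure would be a natural first target for the Deep Meta-Cognition process described above.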

This idea of creating a Deep Meta-Cognition comes from Systems Thinking, which deals with complex systems. Systems Thinking is the daughter of Cybernetics, and it is this body of work that should be drawn on to reach one or more solutions to the interpretability of deep learning’s black boxes. There is always an advantage in referring to the Ancients to improve the present and, therefore, the future.

Source: Artificial Intelligence on Medium
