
Below is a link to an interesting interview with Noam Chomsky about why he believes current approaches to Artificial Intelligence are wrong.

Noam Chomsky on Where Artificial Intelligence Went Wrong

Unfortunately, Chomsky is more widely known for his political views than for his scientific and philosophical ones, so set those aside for now and consider Chomsky as a philosopher of science. Chomsky compares the state of mainstream AI research today to the state of linguistics in the 1960s: just as behaviorists treated an animal’s actions and thoughts as a black box, statistical associations between stimulus and response are taken as an adequate explanation.

To articulate my own thoughts, I will consider the two opposing views (Chomsky vs. Norvig) from my perspective as a scientist outside this field. In each heading’s “X vs. Y”, the left-hand terms align with one another across headings, as do the right-hand terms. In other words, my intention is roughly:

  • Chomsky = {Science, Simulation, Rockets, Theory}
  • Norvig = {Engineering, Emulation, Birds, Application}

1. Science vs. Engineering

Science seeks to understand the fundamental principles of natural phenomena, with complete understanding approached only asymptotically as effort and empirical evidence accumulate. In contrast, engineering seeks to make something useful, even if the more fundamental principles are not yet fully pinned down.

There is still much that can be done engineering-wise even with imperfect scientific understanding. For example, we can achieve grand feats such as sending rovers to Mars on Newtonian mechanics alone, without factoring in the finer points of relativity and quantum mechanics.
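To get a rough sense of why Newtonian mechanics suffices here, consider a back-of-the-envelope check (my own illustration, not from the interview): at a typical Mars-transfer speed, the relativistic correction is on the order of parts per billion.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v: float) -> float:
    """Relativistic time-dilation factor: gamma = 1 / sqrt(1 - v^2/c^2)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

# A heliocentric Mars-transfer speed is roughly 30 km/s.
v = 30_000.0
gamma = lorentz_factor(v)

# gamma - 1 is about 5e-9: the relativistic correction is
# parts-per-billion, far below a trajectory's error budget.
print(f"gamma - 1 = {gamma - 1:.1e}")
```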

2. Simulation vs. Emulation

I’m reminded in particular of The Whole Brain Emulation Roadmap put out by Oxford’s Future of Humanity Institute.

Whole Brain Emulation Roadmap

The core thrust of the roadmap concerns how we would go about modeling the function of a human brain, with the implication that we’d someday want to instantiate or transfer a consciousness onto that system. In other words, it concerns the ontological issues related to mind uploads.

The dichotomy between simulation and emulation goes as follows. In simulation, which is more closely aligned with science, a human mind is modeled using more abstract, higher-order principles. In emulation, which is more closely aligned with engineering, the brain is copied one-to-one without understanding any higher-order principles.

Chomsky’s view is that the current state of AI research more closely resembles emulation than simulation: researchers and practitioners are more concerned with statistical correlations in a black box than with truly understanding the more fundamental principles from which those correlations derive.
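As a toy illustration of this black-box-versus-principles distinction (my own example, not one from Chomsky or Norvig), the sketch below fits a statistical model to noisy free-fall data and compares it with the physical law that generated the data. Both predict well in-domain, but only the law explains why.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

# "Observed" free-fall distances with measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
d_observed = 0.5 * G * t**2 + rng.normal(0.0, 0.05, t.size)

# Black-box route: fit a quadratic with no physics attached.
coeffs = np.polyfit(t, d_observed, deg=2)
d_statistical = np.polyval(coeffs, t)

# Principled route: the kinematic law d = (1/2) g t^2.
d_physics = 0.5 * G * t**2

# Both match the data about equally well in-domain...
print("statistical RMSE:", np.sqrt(np.mean((d_statistical - d_observed) ** 2)))
print("physics     RMSE:", np.sqrt(np.mean((d_physics - d_observed) ** 2)))
# ...but only the law identifies the fitted quadratic coefficient as g/2
# and predicts how the curve changes on, say, the Moon.
```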

3. Rockets vs. Birds

In trying to create a heavier-than-air machine that flies, one can take two different approaches. The first is simply to emulate a bird without really understanding how it flies. The second is to understand higher-order principles such as Bernoulli’s principle, fluid dynamics, and Newtonian mechanics. Armed with these more generalized and abstract principles, one can then build a flying machine that goes faster than any naturally occurring flying system, as sketched below.
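To make that concrete, here is a minimal sketch (my own illustration, with assumed example numbers) of the standard lift equation from fluid dynamics, which predicts lift for any wing, natural or engineered, from a handful of parameters:

```python
def lift_force(air_density: float, speed: float,
               wing_area: float, lift_coefficient: float) -> float:
    """Lift equation: L = 1/2 * rho * v^2 * S * C_L."""
    return 0.5 * air_density * speed**2 * wing_area * lift_coefficient

# Assumed illustrative numbers: sea-level air, a small-aircraft wing,
# and a typical cruise lift coefficient.
rho = 1.225  # air density at sea level, kg/m^3
v = 60.0     # airspeed, m/s
S = 16.0     # wing area, m^2
c_l = 0.5    # lift coefficient, dimensionless

print(f"lift: {lift_force(rho, v, S, c_l):.0f} N")  # ~17,600 N
```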

The more scientific approach of understanding the principles in order to build a rocket allows for greater long-term achievement, for example, building rockets that can travel not only through air but also through space, that is, in a regime beyond the constraints of the original framing of the problem. Of course, this requires a slower and more deliberate process. The more engineering-based approach of emulating the bird can get something that flies, but it cannot really be improved upon, especially in a context beyond the original problem.
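The Tsiolkovsky rocket equation is a good example of a principle that survives outside the original problem’s constraints: derived from Newtonian momentum conservation alone, it holds just as well in vacuum, where no bird-inspired design applies. A minimal sketch (my own illustration, with assumed numbers):

```python
import math

def delta_v(exhaust_velocity: float, mass_initial: float,
            mass_final: float) -> float:
    """Tsiolkovsky rocket equation: dv = v_e * ln(m0 / mf)."""
    return exhaust_velocity * math.log(mass_initial / mass_final)

# Assumed numbers: ~3 km/s exhaust velocity (kerosene-class engine),
# with 90% of the initial mass burned as propellant.
print(f"delta-v: {delta_v(3000.0, 100.0, 10.0):.0f} m/s")  # ~6,908 m/s
```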

4. Theory vs. Application

It’s no surprise that one person who disagrees strongly with Chomsky’s position is Peter Norvig, Director of Research at Google. Norvig wrote a long and interesting article countering the criticisms Chomsky made at MIT’s Brains, Minds, and Machines symposium, entitled:

On Chomsky and the Two Cultures of Statistical Learning

Chomsky is more concerned with getting things right scientifically, and even philosophically, whereas Norvig is more concerned with building something that works well and can make a lot of money.

Personally, I don’t think the core issue is which approach is more correct absolutely, but rather which is more correct for a given purpose, especially since many important developments in Artificial Intelligence can occur under either approach. The scientist and philosopher in me wants to nail things down perfectly and close the ontological and epistemological gaps at the foundations. The “mortal running out of time and afraid of dying before the Singularity” in me wants progress to occur in whatever way possible, so that new technologies arrive to extend my life and upload my mind before I die.

Practically, I can worry more about the edges of theory and philosophy once biological immortality and/or mind uploading technologies arrive, but at the moment, time is ticking down.

Source: Artificial Intelligence on Medium