
Blog: History of the first AI Winter


AI has a long history. One can argue it started long before the term was first coined: first in stories, and later in actual mechanical devices called automata. This chapter covers only the events relevant to the periods of AI winters, without being exhaustive, in the hope of extracting knowledge that can be applied today.

Events leading to the first AI Winter

To aid understanding of the phenomenon of AI winters, the events leading up to the first one are examined.

Beginnings of the AI Field in the 1950s

Many early ideas about thinking machines appeared in the late 1940s and 1950s, from people like Turing and von Neumann. Turing tried to frame the question "Can machines think?" differently and created the imitation game, now famously called the Turing Test.

In 1955, Arthur Samuel wrote a program that played checkers remarkably well; a year later, it even appeared on television. It combined tree search with heuristics and learned weights. Samuel handcrafted the heuristics, inspired by a book by checkers experts, and used a learning scheme, an early form of what is now called temporal-difference learning, in which the weights are adjusted using the "error" between the score initially calculated for a position and the score obtained after the search was completed.
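In modern notation, the core of that scheme is easy to sketch. The snippet below is only an illustration of the idea, not Samuel's original program; the linear evaluation, the three hypothetical checkers features, and the learning rate are all assumptions made for the example:

```python
import numpy as np

def td_update(weights, features, search_score, lr=0.01):
    """One Samuel-style update: nudge the static evaluation of a position
    toward the (deeper, more reliable) score returned by tree search."""
    static_score = weights @ features        # the score "initially calculated"
    error = search_score - static_score      # the "error" revealed by the search
    return weights + lr * error * features   # adjust the weights to shrink it

# Hypothetical checkers features (e.g. piece advantage, kings, mobility)
w = np.array([1.0, 0.5, 0.2])                # current weights
f = np.array([2.0, 1.0, 4.0])                # feature values for one position
w = td_update(w, f, search_score=3.8)        # the static score here was 3.3
print(w)                                     # weights move toward the search score
```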

In 1954, one of the first experiments in machine translation was carried out. It used a 250-word dictionary combined with syntactic analysis, and translations from Russian into English were demonstrated. The New York Times commented:

“This admittedly will amount to a crude word-for-word translation … but will nevertheless be extremely valuable, the designers say, for such purposes as scientists’ translations of foreign technical papers in which vocabulary is far more of a problem than syntax.”

By then, the designers thought that most of the work was done, with only some small errors left to fix. Hutchins noted that this was the most far-reaching coverage machine translation had ever received. It generated tremendous hype and made it easier to obtain funding for subsequent work.

AI research gained much of its funding from U.S. defense organizations (the ONR and ARPA, later renamed DARPA) in the hope that the technology would prove useful to the U.S. Navy. At the time, there was substantial enthusiasm and optimism about the state of AI. Machine translation was especially important during the Cold War, as the government had a strong interest in automatic translation from Russian into English.

Early experiments served as inspiration for the Dartmouth Summer Project of 1956, where the term AI was coined. The summer project was held under the premise that "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it". Researchers from many different fields were invited, and many different ideas, papers, and concepts were put forward. Though progress was made, some were disappointed; McCarthy, for instance, said of the workshop: "[the] main reason the Workshop did not live up to my expectations is that AI is harder than we thought."

In 1957, Rosenblatt invented the perceptron, a type of neural network in which binary neural units are connected via adjustable weights. He was inspired by neuroscience work from the 1940s, which led him to create a crude replication of the neurons in the brain.

He tried many different layouts and learning algorithms. One type of perceptron was called series-coupled, which in today's terms corresponds to the standard feedforward layout of a neural network, where data flows from input to output. A prominent layout was what he called the alpha perceptron, a three-layer series-coupled network in which the count of three included the input and output layers. Computers at the time would have been too slow to run the perceptron, so Rosenblatt built a special-purpose machine with adjustable resistors (potentiometers) controlled by small motors. The apparatus was able to learn to classify different images of shapes or letters. The New York Times reported on the perceptron:

“The Navy revealed the embryo of an electronic computer that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”
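Hype aside, the learning rule itself was simple. The following is a minimal sketch of a Rosenblatt-style perceptron in modern notation (an illustration, not Rosenblatt's actual formulation or hardware; the OR task, epoch count, and learning rate are assumptions for the example):

```python
import numpy as np

def train_perceptron(X, y, epochs=10, lr=1.0):
    """Rosenblatt-style rule: adjust the weights only when the binary
    threshold unit misclassifies an example."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for x, target in zip(X, y):
            pred = 1 if w @ x + b > 0 else 0   # binary neural unit
            w += lr * (target - pred) * x      # the "adjustable weights"
            b += lr * (target - pred)
    return w, b

# OR is linearly separable, so the perceptron converges on it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if w @ x + b > 0 else 0 for x in X])  # [0, 1, 1, 1]
```

The weights change only when the unit misclassifies an example, which is why the procedure converges on linearly separable problems, a property that becomes decisive in the section on the fall of connectionism below.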

In the same year (1957), Simon summarized the current progress on AI like this:

“It is not my aim to surprise or shock you — but the simplest way I can summarize is to say that there are now in the world machines that think, that learn, and that create. Moreover, their ability to do these things is going to increase rapidly until — in a visible future — the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

The Quiet Decade

After the increases in funding and enthusiasm for machine translation in the 1950s and early 1960s, progress stalled. Hutchins called the period from 1967 to 1976 the quiet decade of machine translation. Bar-Hillel argued that machine translation was not feasible: computers would need far too much knowledge about the world to translate correctly, a goal he thought was "utterly chimerical and hardly deserves any further discussion." The Automatic Language Processing Advisory Committee (ALPAC), set up in 1964, concluded in its 1966 report that there was no immediate or predictable prospect of useful machine translation: "there has been no machine translation of general scientific text, and none is in immediate prospect." The report led to a cut in funding for all academic translation projects.

The disappointments in machine translation were especially detrimental to the field of AI because the U.S. defense establishment had hoped for usable systems to emerge, but similar patterns were noticed in other fields of AI as well. In 1965, Dreyfus drew a parallel between AI and alchemy. He studied the achievements of several subfields at the time and concluded: "An overall pattern is taking shape: an early, dramatic success based on the easy performance of simple tasks, or low-quality work on complex tasks, and then diminishing returns, disenchantment, and, in some cases, pessimism."

The quiet decade of machine translation was one of the most significant events that gave rise to the first AI winter. Another was the fall of the connectionist movement, which is discussed in the next section.

Fall of Connectionism

In 1969, Minsky and Papert's book Perceptrons was published. It was a harsh critique of Rosenblatt's perceptrons: Minsky and Papert proved that single-layer perceptrons can only be trained to solve linearly separable problems. One of the most damning examples of a problem that is not linearly separable is the exclusive OR (XOR): to solve it, the network's output must be true when exactly one of its inputs is true, but not both. This was a heavy blow for the connectionists, who believed AI could best be achieved by mimicking the brain. Minsky and Papert knew that networks with multiple layers could solve the problem, but at the time there was no algorithm to train such a network. It took 17 years until such an algorithm, now known as backpropagation, became widely known; only later was it discovered that backpropagation had in fact been invented before Perceptrons was even published.
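The contrast is easy to reproduce today: a single threshold unit like the one sketched earlier never converges on XOR, while a small two-layer network trained with backpropagation handles it. Below is a minimal modern sketch; the layer sizes, sigmoid activations, learning rate, and iteration count are assumptions for the example, not the historical formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # hidden layer (4 units)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # output layer

for _ in range(20_000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)           # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)   # gradient steps (lr = 1)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())                       # typically close to [0, 1, 1, 0]
```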

The Lighthill report and its Consequences

The Lighthill report, published in 1973, was an evaluation of the state of AI at the time, written for the British Science Research Council. It came to the conclusion that the promises of AI researchers were exaggerated: "in no part of the field have discoveries made so far produced the major impact that was then promised." It pointed out that the most disappointing area of research had been machine translation, "… where enormous sums have been spent with very little useful result…". James Lighthill, the author of the report, saw the failure to defeat the "combinatorial explosion" as the heart of the issue. The combinatorial explosion he referred to is a well-known problem in search spaces such as game trees, where the number of nodes grows exponentially with depth. For example, in a game like chess, Shannon demonstrated that the number of possible move sequences grows from 20 after the first move to 400 after the second, and by the 5th move there are already 4,865,609 possibilities: a combinatorial explosion.
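Those chess figures can be reproduced by counting move sequences (each "move" here being a half-move, or ply) from the starting position. A short sketch using the third-party python-chess library, assumed to be installed via `pip install chess`:

```python
import chess  # third-party: pip install chess

def count_sequences(board, depth):
    """Count distinct sequences of `depth` half-moves (a 'perft' count)."""
    if depth == 0:
        return 1
    total = 0
    for move in board.legal_moves:
        board.push(move)
        total += count_sequences(board, depth - 1)
        board.pop()
    return total

board = chess.Board()                    # standard starting position
for depth in range(1, 6):
    print(depth, count_sequences(board, depth))
# 1 20
# 2 400
# 3 8902
# 4 197281
# 5 4865609   <- the explosion Lighthill pointed to
```

At depth 5 the count already exceeds 4.8 million and keeps multiplying with every additional ply, which is exactly the growth that brute-force search could not overcome.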

There was a lot of criticism of the Lighthill report at the time, and even a debate filmed for the BBC that unfortunately was never televised. Comparisons with other fields of science were drawn, and critics argued that one should not expect results that fast. Though those comments were arguably correct, the report had its effect: afterwards, the UK government cut funding for all but two universities involved in research in this field, and it started a wave that swept through Europe and even had an impact on the U.S.

The first AI Winter

Several circumstances combined to create the first AI winter. In the beginning, enthusiasm about the potential of the new field grew quickly, amplified by highly optimistic press coverage. Then disappointments in machine translation ushered in a quiet era, followed by Minsky and Papert putting forward obstacles that impeded the progress of perceptrons. Finally, commissioned as a realistic evaluation of the field, the Lighthill report arrived, and with it the first winter started, around 1973. The report affected funding, and research on AI became difficult: DARPA (the Defense Advanced Research Projects Agency) started funding more applied AI projects and less fundamental work. The AI winter lasted a few years, but in the early 1980s the field of AI experienced another high.

Source: Artificial Intelligence on Medium
