During the early 1970s, enthusiasm for AI started to wane. The period would become known as the “AI winter,” which lasted through about 1980 (the term was borrowed from “nuclear winter,” the hypothesized aftermath of nuclear war in which dust and soot block the sun and temperatures plunge across the world).

Even though AI had made many strides, they were mostly academic and confined to controlled environments. Computer systems of the time were still limited. For example, a DEC PDP-11/45, a machine common in AI research, could expand its memory to only 128K.

The Lisp language was also a poor fit for the computer systems of the day. In the corporate world, the focus remained primarily on FORTRAN.

Next, there were still many unsolved complexities in understanding intelligence and reasoning. Just one is disambiguation: when a word has more than one meaning, an AI program must also understand the context in order to pick the right one.

Finally, the economic environment in the 1970s was far from robust. The economy suffered from persistent inflation, slow growth, and supply disruptions, such as the oil crisis.

Given all this, it should be no surprise that the US government became more stringent with funding. After all, for a Pentagon planner, how useful is a program that can play chess, prove a theorem, or recognize some basic images?

Not much, unfortunately.

A notable case is the Speech Understanding Research program at Carnegie Mellon University. The Defense Advanced Research Projects Agency (DARPA) thought this speech recognition system could allow fighter pilots to issue voice commands. But it proved to be unworkable. One of the programs, called Harpy, could understand 1,011 words, roughly what a typical 3-year-old knows.

Officials at DARPA came to believe they had been hoodwinked and eliminated the program’s $3 million annual budget.

But the biggest hit to AI came via a report, published in 1973, from Professor Sir James Lighthill. Commissioned by the British government, it was a full-on repudiation of the “grandiose objectives” of strong AI. A major issue he noted was “combinatorial explosion”: as a problem grows, the number of possibilities a program must search through grows exponentially, quickly making real-world tasks intractable.

The report concluded: “In no part of the field have the discoveries made so far produced the major impact that was then promised.”9 Lighthill was so pessimistic that he did not believe computers would ever be able to recognize images or beat a chess grand master.

The report also led to a public debate that was televised on the BBC (you can find the videos on YouTube). It pitted Lighthill against Donald Michie, Richard Gregory, and John McCarthy.

Even though Lighthill made valid points, and had evaluated a large body of research, he did not see the potential of weak AI. It did not seem to matter, though, as the winter took hold.

Things got so bad that many researchers changed their career paths. And those who still studied AI often referred to their work by other names, like machine learning, pattern recognition, and informatics!
