Golden Age of AI

From 1956 to 1974, the AI field was one of the hottest spots in the tech world. A major catalyst was the rapid development in computer technologies. They went from being massive systems—based on vacuum tubes—to smaller systems run on integrated circuits that were much quicker and had more storage capacity.

The federal government was also investing heavily in new technologies. Part of this was due to the ambitious goals of the Apollo space program and the heavy demands of the Cold War.

As for AI, the main funding source was the Advanced Research Projects Agency (ARPA), which was launched in the late 1950s after the shock of the Soviet Union’s Sputnik. The spending on projects usually came with few requirements; the goal was to inspire breakthrough innovation. One of the leaders of ARPA, J. C. R. Licklider, had a motto of “fund people, not projects.” Most of the funding went to Stanford, MIT, Lincoln Laboratory, and Carnegie Mellon University.

Other than IBM, the private sector had little involvement in AI development. And even IBM, by the mid-1950s, had pulled back to focus on the commercialization of its computers. Customers feared that the technology would lead to significant job losses, and IBM did not want to be blamed.

In other words, much of the innovation in AI spun out from academia. For example, in 1959, Newell, Shaw, and Simon continued to push the boundaries of the field with a program called the “General Problem Solver.” As the name implied, it was designed to solve a broad range of formalized problems, such as the Tower of Hanoi puzzle.

But there were many other programs that attempted to achieve some level of strong AI. Examples included the following:

  • SAINT, or Symbolic Automatic INTegrator (1961): This program, created by MIT researcher James Slagle, solved freshman calculus problems. It would later be extended into other programs, called SIN and MACSYMA, that did much more advanced math. SAINT was actually the first example of an expert system, a category of AI we’ll cover later in this chapter.
  • ANALOGY (1963): This program was the creation of MIT professor Thomas Evans. It demonstrated that a computer could solve the geometric analogy problems found on IQ tests.
  • STUDENT (1964): Under the supervision of Minsky at MIT, Daniel Bobrow created this AI application for his PhD thesis. The system used Natural Language Processing (NLP) to solve high school algebra word problems.
  • ELIZA (1965): MIT professor Joseph Weizenbaum designed this program, which instantly became a big hit and even got buzz in the mainstream press. Named after Eliza Doolittle from George Bernard Shaw’s play Pygmalion, it played the role of a psychotherapist: a user could type in questions, and ELIZA would provide counsel (this was the first example of a chatbot). Some users thought the program was a real person, which deeply concerned Weizenbaum, since the underlying technology was fairly basic. Working versions of ELIZA can still be found on the web.
  • Computer Vision (1966): In a legendary story, MIT’s Marvin Minsky told a student, Gerald Jay Sussman, to spend the summer linking a camera to a computer and getting the computer to describe what it saw. Sussman did just that, building a system that detected basic patterns. It was the first use of computer vision.
  • Mac Hack (1968): MIT researcher Richard D. Greenblatt created this chess-playing program. It was the first to compete in human tournaments, earning a class-C rating.
  • Hearsay I (late 1960s): Professor Raj Reddy developed a continuous speech recognition system. Some of his students would later go on to found Dragon Systems, which became a major tech company.

During this period, there was a proliferation of AI academic papers and books. Some of the topics included Bayesian methods, machine learning, and vision.

But there were generally two major schools of thought about AI. One was led by Minsky, who argued for symbolic systems. In this view, AI should be based on traditional computer logic and preprogrammed rules, that is, on approaches like If-Then-Else statements.
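To make the symbolic style concrete, here is a minimal sketch in Python. The animal-classification task and its rules are invented for illustration; they are not from any program mentioned in this chapter:

```python
def classify(has_feathers: bool, can_fly: bool) -> str:
    """A toy symbolic 'reasoner': all knowledge is hand-coded as
    explicit If-Then-Else rules rather than learned from data."""
    if has_feathers:
        if can_fly:
            return "bird"
        else:
            return "flightless bird"
    else:
        return "not a bird"

print(classify(True, False))  # flightless bird
```

The point is that every behavior is anticipated by the programmer in advance; nothing is learned from data.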

Next, there was Frank Rosenblatt, who believed that AI should use systems modeled on the brain, such as neural networks (this school was also known as connectionism). But instead of calling the basic units neurons, he referred to them as perceptrons. Such a system would learn as it ingested data over time.

In 1957, Rosenblatt created the first computer implementation of this idea, called the Mark 1 Perceptron. It used a camera to help differentiate between two images, each 20 × 20 pixels. The Mark 1 Perceptron started with random weights and then went through the following process:

  1. Take in an input and compute the perceptron’s output.
  2. If the output does not match the expected result, then
    a. If the output should have been 0 but was 1, decrease the weights.
    b. If the output should have been 1 but was 0, increase the weights.
  3. Repeat steps 1 and 2 until the results are accurate.
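The steps above can be sketched in Python. This is a modern reconstruction of the perceptron learning rule, not Rosenblatt’s original code; the learning rate, epoch count, and the logical-AND task are illustrative choices:

```python
import random

def train_perceptron(samples, epochs=20, lr=1.0):
    """Perceptron learning rule: on a false positive, lower the
    weights; on a false negative, raise them (steps 1-3 above)."""
    random.seed(0)  # reproducible random initial weights
    n_inputs = len(samples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    bias = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, target in samples:
            # Step 1: compute the perceptron's output (0 or 1).
            output = 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0
            # Step 2: on a mismatch, nudge the weights toward the target
            # (when output is correct, error is 0 and nothing changes).
            error = target - output  # +1 raises weights, -1 lowers them
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if bias + sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

# Logical AND is linearly separable, so the procedure converges.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
```

Because the AND task is linearly separable, the loop settles on weights that classify all four inputs correctly after a few epochs.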

This was definitely pathbreaking for AI. The New York Times even ran a write-up on Rosenblatt, declaring: “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.”7

But there were still nagging issues with the perceptron. One was that the neural network had only one layer (primarily because of the lack of computing power at the time). Another was that brain research was still in its nascent stages and offered little insight into cognitive ability.
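The single-layer limitation can be made concrete with a standard example: the XOR function (not named in the text, but central to later critiques of the perceptron). No choice of two weights and a bias lets a one-layer perceptron output 1 exactly when its two inputs differ:

```python
import itertools

def perceptron(w1, w2, b, x1, x2):
    # A single-layer perceptron: a weighted sum passed through a step.
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

def computes_xor(w1, w2, b):
    return all(perceptron(w1, w2, b, x1, x2) == (x1 ^ x2)
               for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)])

# Brute-force search over a grid of weights: none computes XOR.
# (Algebraically: (0,1)->1 and (1,0)->1 force w1+w2+2b > 0, while
# (0,0)->0 gives b <= 0, so w1+w2+b > -b >= 0, contradicting the
# requirement that (1,1) map to 0.)
grid = [i / 4 for i in range(-8, 9)]  # weights and bias in [-2, 2]
found = any(computes_xor(w1, w2, b)
            for w1, w2, b in itertools.product(grid, repeat=3))
print(found)  # False
```

Escaping this limit requires more than one layer, which the hardware and training methods of the era could not yet support.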

Minsky co-wrote a book with Seymour Papert called Perceptrons (1969). The authors relentlessly attacked Rosenblatt’s approach, and it quickly faded away. Note that in the early 1950s Minsky had built a crude neural net machine himself, using hundreds of vacuum tubes and spare parts from a B-24 bomber. But he saw that the technology was nowhere near workable.

Rosenblatt tried to fight back, but it was too late. The AI community quickly soured on neural networks. Rosenblatt died a couple of years later in a boating accident, at the age of 43.

Yet by the 1980s, his ideas would be revived—which would lead to a revolution in AI, primarily with the development of deep learning.

For the most part, the Golden Age of AI was freewheeling and exciting. Some of the brightest academics in the world were trying to create machines that could truly think. But the optimism often went to extremes. In 1965, Simon said that within 20 years a machine could do anything a human could. Then in 1970, in an interview with Life magazine, Minsky said this would happen in only three to eight years (he was, incidentally, an advisor on the movie 2001: A Space Odyssey).

Unfortunately, the next phase of AI would be much darker. More academics were becoming skeptical. Perhaps the most vocal was Hubert Dreyfus, a philosopher. In books such as What Computers Still Can’t Do: A Critique of Artificial Reason,8 he argued that computers were not similar to the human brain and that AI would fall woefully short of the lofty expectations.

