A Brief Review of Artificial Intelligence Research

The Forties:
    The seeds of artificial intelligence research were sown during World War II, when computing machines such as the ENIAC, the EDVAC, the Aiken Mark I, the Differential Analyzer, and a number of analog shipboard fire-control computers made their debut. It's not hard to see how these new "electronic brains", which could perform arithmetic and logical calculations orders of magnitude faster than human "computresses", would fire the imaginations of engineers and mathematicians. Could we develop thinking machines which could outperform humans in the mental arena, as labor-saving machinery had already done in the physical domain? As Dr. Hans Moravec put it,
    "One line of research, called Cybernetics, used analog circuitry to produce machines that could recognize simple patterns, and turtle-like robots that found their way to lighted recharging hutches. An entirely different approach, named Artificial Intelligence (AI) attempted to duplicate rational human thought in large computers."
    The brain was regarded as a digital computer, though perhaps with analog/digital circuits to accommodate control functions. In the late 40's and early 50's, there was great fanfare regarding these prospects, together with concerns over what "technological unemployment", "automation", and the new science of cybernetics would do to humanity in a robot-run world. What would happen when we humans were no longer the smartest people we knew? An MIT mathematical prodigy, Dr. Norbert Wiener, wrote books called "Cybernetics" and "The Human Use of Human Beings", calling for responsible application of this revolutionary technology. The science fiction films of the era ("The Day the Earth Stood Still") had the robot as the master and a specially-created human emissary as its loyal servant. (Remember Jack Williamson's "The Humanoids"?) The mind was considered to be a program which ran in the brain, and it was thought to be only a matter of a few short years before intelligent machines were running our factories. This was the era of the first-generation vacuum tube computers such as the UNIVAC I and the IBM 650 and 701. It was also the era of patch boards and analog computers. Feedback systems and servo theory were very much in vogue.
    Not to be ignored throughout the whole period from the forties to the nineties were the continuing studies of the brain by neurology researchers. These tended to proceed largely, though not completely, independently of artificial intelligence research.

The Fifties and Sixties:
     This rampant optimism persisted throughout the 50's and well into the 60's. In 1959, a Cornell Aeronautical Laboratory psychologist named Dr. Frank Rosenblatt developed the first artificial neuron-based computer, called "The Perceptron". It was a 500-neuron, single-layer neural network attached to a 400-photocell optical array. Another major milestone had come in 1956, when Newell and Simon developed a theorem-proving program called "Logic Theorist" which was able to prove a number of mathematical theorems. Checkers-playing programs, algebraic manipulation programs (including symbolic integration and differential equation solving), language translation, and natural language processing were all under development during this time. OCR-A and OCR-B typing balls were offered for IBM Selectric typewriters, and optical character recognition systems were available to read text printed in those fonts. Simple wire-following robots that any radio amateur could build were devised and described in Scientific American. As one writer has put it, this was a period of "initial intoxication with cognitive science". (As we shall see in the Section below concerning the capabilities of the brain, the computers of the 50's were ludicrously slow and small for the implementation of human-caliber intelligence, by a factor of at least 1,000,000 and perhaps closer to 1,000,000,000,000.)
    In the early sixties, the U.S. Postal Service mounted a major effort to develop optical character recognition hardware and software. (The program was oversold at the time, but it has since led to advanced optical character recognition equipment which is in daily use by the Postal Service.) Also in the early sixties, Simon and Newell created the General Problem Solver (GPS) as a generalized problem-solving system. Throughout the sixties, there was a ferment of activity in all areas of artificial intelligence. (Analog-to-digital converters probably weren't fast enough in the sixties to do much with machine vision.)
    However, by the end of the decade, the Postal Service had discovered how difficult it was to build a machine that could read addresses on letters. IBM had thrown in the towel on their Russian language translation program when it became apparent that a computer couldn't translate language without understanding it. And computers were too slow by many orders of magnitude for machine vision, virtual reality, and speech and handwriting recognition. While they could perform arithmetic and logical manipulations with great proficiency, they were light-years away from posing their own problems or understanding the real world, let alone handling the subtle nuances of interpersonal relationships.
    In 1969, Drs. Marvin Minsky and Seymour Papert of MIT published a book entitled "Perceptrons" in which they proved that single-layer perceptron networks were, among other limitations, inherently incapable of computing the exclusive-OR function, and they argued that the approach was a dead end. One might think that their arguments would not have carried such weight. After all, the human brain is a neural network of incredible complexity, containing tens of billions of neurons and hundreds of trillions of synapses. But for some reason, their arguments were sufficient to derail neural network research for 15 years. (The authors would later explain that neural networks were competitors for research money.) Such is the power of scientific snobbery.
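    To see what Minsky and Papert were pointing at, here is a minimal sketch in a few lines of Python (illustrative only, not drawn from this article; the data table, function name, and training parameters are arbitrary choices). Because exclusive-OR is not linearly separable, a single threshold unit trained with Rosenblatt's learning rule always misclassifies at least one of the four patterns, no matter how long it trains.

# Rosenblatt-style perceptron learning rule applied to the XOR truth table.
# (Illustrative sketch; not part of the original article.)

# XOR truth table: inputs (x1, x2) -> target 0 or 1
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def train_perceptron(epochs=1000, lr=0.1):
    w1 = w2 = b = 0.0                     # a single layer: two weights and a bias
    for _ in range(epochs):
        for (x1, x2), target in DATA:
            out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0   # threshold unit
            err = target - out
            w1 += lr * err * x1           # perceptron weight update
            w2 += lr * err * x2
            b  += lr * err
    return w1, w2, b

w1, w2, b = train_perceptron()
wrong = sum(1 for (x1, x2), t in DATA
            if (1 if (w1 * x1 + w2 * x2 + b) > 0 else 0) != t)
print(wrong, "of 4 XOR patterns misclassified")   # always at least 1

    Adding even one hidden layer removes this obstacle, as the back-propagation sketch later in this review shows.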

The Seventies:
     In the early 70's, researchers at Stanford and MIT began mounting TV cameras and manipulators on wheeled robotic carts and turning them loose in real-world environments. To quote Dr. Moravec again,
    "What a shock! While the pure reasoning programs did their jobs about as well and about as fast as a college freshman, the best robot control programs took hours to find and pick up a few blocks on a table, and often failed completely, a performance much worse than a six-month old child. This disparity, between programs that reason and programs that perceive and act holds to this day. At Carnegie Mellon University there are two desk-sized computers that can play chess at grandmaster level, within the top 100 players in the world, when given their moves on a keyboard. But present-day robotics could produce only a complex and unreliable machine for finding and moving normal chess pieces.
    "In hindsight, it seems that, in an absolute sense, reasoning is much easier than perceiving and acting—a position not hard to rationalize in evolutionary terms. The survival of human beings and their ancestors has depended for hundreds of millions of years on seeing and moving in the physical world, and in that competition large parts of their brains have become efficiently organized for the task. But we didn't appreciate this monumental skill because it is shared by every human being and most animals—it is commonplace. On the other hand, rational thinking, as in chess, is a newly acquired skill, perhaps less than one hundred thousand years old. The parts of our brains devoted to it are not well organized, and, in an absolute sense, we're not very good at it. But until recently, we had no competition to show us up."1
    Image enhancement was a popular topic in the 70's in support of DoD and NASA satellite image analysis and JPL's successes with Voyager photographs. Intel introduced the first microprocessor chips, the 4-bit 4004 and the 8-bit 8008.

The Eighties:
Another False Dawn for AI
    In the mid-80's, artificial intelligence enjoyed another false dawn. This time, it was rule-based expert systems, tree searches, and dedicated Lisp machines from companies like Symbolics. Expert systems proved unable to capture the intuition that so often guides human experts, intuition that depends upon an overall understanding of the world. It also took too long to enter all the rules into a computer program. Expert systems still exist, but they don't replace experts. Symbolics soon declared bankruptcy.

Slow Patient Progress Behind the Scenes
    In the meantime, slow, patient progress was underway. Machine vision systems began to be used for assembly-line inspection. Unimation's "Puma" robotic arms were installed to carry out repetitive assembly-line functions. Cheap embedded microprocessor chips were becoming faster and faster. The rapidly rising capabilities of personal computers made it practical to develop sophisticated software quickly. Caere's Omnipage Professional became an increasingly robust optical character recognition program. Video games became ever more realistic. Though initially very expensive, trail-blazing speech recognition systems were developed by Bell Labs and by many universities and small companies.

The Resurrection of Neural Networks and Fuzzy Logic
    During the 80's, a few "keepers of the flame" had devised multi-layer neural networks that circumvented the limitations described by Minsky and Papert. Fuzzy logic and genetic programming were combined with neural networks, and these techniques were embraced with great enthusiasm by the Japanese. Various kinds of multi-layer neural networks with back-propagation and, sometimes, fuzzy logic are proving to possess fascinating and highly useful capabilities in the areas of pattern recognition and control. The latest release (6.0) of the Omnipage optical character recognition package incorporates a neural network to help recognize printed text. There is a great ferment of activity in this now-highly-fashionable area of research. (A minimal back-propagation sketch appears below.)
    Neural networks and fuzzy logic are hot!!
    Genetic programming seems to be receiving less attention.
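    As a minimal sketch of the back-propagation approach just mentioned (illustrative only, not drawn from this article; the layer size, learning rate, and training length are arbitrary choices), the following small Python program trains a network with one hidden layer of sigmoid units on the XOR function that defeats any single-layer perceptron.

# A two-layer network trained by plain back-propagation on XOR.
# (Illustrative sketch; not part of the original article.)
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR truth table: inputs (x1, x2) -> target
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

random.seed(1)
N_HIDDEN = 3        # one hidden layer of three sigmoid units
LR = 0.5            # learning rate

# input->hidden weights and biases, hidden->output weights and bias
w_ih = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N_HIDDEN)]
b_h  = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]
w_ho = [random.uniform(-1, 1) for _ in range(N_HIDDEN)]
b_o  = random.uniform(-1, 1)

def forward(x1, x2):
    h = [sigmoid(w[0] * x1 + w[1] * x2 + b) for w, b in zip(w_ih, b_h)]
    o = sigmoid(sum(wo * hj for wo, hj in zip(w_ho, h)) + b_o)
    return h, o

for _ in range(20000):                            # plain on-line back-propagation
    for (x1, x2), target in DATA:
        h, o = forward(x1, x2)
        d_o = (o - target) * o * (1 - o)          # output-layer error signal
        d_h = [d_o * w_ho[j] * h[j] * (1 - h[j])  # hidden-layer error signals
               for j in range(N_HIDDEN)]
        for j in range(N_HIDDEN):
            w_ho[j]    -= LR * d_o * h[j]
            w_ih[j][0] -= LR * d_h[j] * x1
            w_ih[j][1] -= LR * d_h[j] * x2
            b_h[j]     -= LR * d_h[j]
        b_o -= LR * d_o

for (x1, x2), target in DATA:
    _, o = forward(x1, x2)
    print(x1, "XOR", x2, "-> target", target, " network output %.2f" % o)

    With most random initializations the four outputs settle near their 0/1 targets after a few thousand passes; a network this small can occasionally stall in a poor local minimum, a well-known quirk of back-propagation training.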

The Nineties:
Speech Recognition in 1990:
    Computer control by voice command became available in the early 90's: Dragon Systems for IBM-compatibles and Voice Navigator for the Macintosh. These early speech recognition systems were speaker-dependent and had vocabularies of a few hundred words, spoken one at a time. (In 1991, AT&T had a laboratory system capable of recognizing continuous speech, but it required 16 parallel, 32-bit digital signal processors.)

Speech Recognition in 1995:
    In 1995, IBM began offering a voice dictation package, "VoiceType", with its latest PCs. It is the most accurate of the speech dictation systems, examining context to distinguish among homonyms. Dragon Systems recently introduced a 120,000-word, discrete-word speech recognition system called DragonDictate, while Apple Computer is bundling a speech recognition program called Voiceprint with its high-end 8500 and 9500 computers. A small company called Speech Systems, Inc., began offering the first continuous-speech, speaker-independent voice dictation system for personal computer owners in 1995. These systems aren't yet the kind of Smith Corona "Voicewriter" that you'll be able to buy from Service Merchandise sometime within the next ten or fifteen years, but they'll get there.

Other 1995 Capabilities:
    Optical character recognition (OCR) has improved steadily with the Omnipage Professional series of OCR packages, coupled with 600 dot-per-inch and higher resolution scanners. Handwriting recognition has improved rapidly since Apple Computer introduced the first Newton Personal Digital Assistant in 1993. Voice synthesis systems are also improving steadily at AT&T. Facial recognition systems are under development, together with usable fingerprint identification packages. Machine vision and industrial robotics systems should be entering their heyday with cheap multi-gigops processors such as the Texas Instruments TMS320C80 entering the marketplace.
    These are uniquely human capabilities which are not even shared with the rest of the animal kingdom. Interestingly enough, they are being realized using conventional computers running conventional software. And these programs will only improve. The introduction of MMX processing on 80X86 processors in early 1997, coupled with ever-increasing clock rates, wider data paths, and Intel's 1998 P7 processor, should afford a 5-to-40-fold jump in implementing these higher human functions.
    However, it is important to distinguish between systems that perform functions upon command and self-organizing systems that give commands. What is missing from this picture is the self-aware, self-organizing, motivated character of the animal kingdom, so perhaps it is in this arena that we might gainfully concentrate our efforts.
    Two of the most striking areas of computer progress thus far in the 1990s are the Internet and the advances which are being made in computer graphics.

Historical Summary
    It is clear that early AI researchers hugely underestimated the computational requirements of artificial intelligence.
    AI research has been hampered by "Big-Endian" and "Little-Endian" arguments about whether to concentrate on connectionist (neural net) or purely cognitive (e.g., theorem proving) approaches to achieving artificial intelligence. In reality, the two approaches will probably turn out to be complementary. It is never a good idea in research to put all one's eggs in one basket.
