The Parallelism of Biological Nervous Systems
One of the most striking facts
about biological nervous systems is how extremely slowly neurons
operate compared to silicon-based systems. Because of the
electrochemical nature of the neuronal discharge, nervous
impulses travel at no more than 25 meters per second (1 inch per
millisecond) in the gray matter of the central nervous system,
and at about 100 meters per second down the long, myelin-sheathed
peripheral nerves. Furthermore, neurons exhibit a 1 millisecond
switching time, followed by a 4 millisecond refractory period
during which the neuron discharges only in the presence of a
strong stimulus. This means that neurons could not fire at rates
greater than 200 times a second (a 5 millisecond cycle time). In
practice, their firing rates are probably significantly less than this maximum.
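The 200-per-second ceiling follows directly from the two timing figures above; a few lines of arithmetic (using only the numbers quoted in the text) make the check explicit:

```python
# Back-of-the-envelope check of the neuronal firing-rate ceiling,
# using the timing figures quoted in the text.

switching_ms = 1.0   # switching time per firing
refractory_ms = 4.0  # refractory period after each firing

cycle_ms = switching_ms + refractory_ms  # 5 ms minimum cycle
max_rate_hz = 1000.0 / cycle_ms          # firings per second

print(cycle_ms, max_rate_hz)  # 5.0 ms cycle -> 200 Hz ceiling
```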
One of the implications of these numbers is that it would require 5 milliseconds for a signal to go from the eye to the occipital cortex, where visual interpretation takes place. It would then require a minimum of another 10 or 15 milliseconds for a motor signal to go from the brain to an arm or leg muscle. At least 1 millisecond would be added to the signal processing time for each neuron in the processing chain between the sensory input and the motor output. Our minimum response times to stimuli range from, perhaps, 25 milliseconds for an eyeblink to 150 milliseconds to step on a brake pedal. This means that there can't be very many neurons in the chains between the inputs and the outputs. This, in turn, forces biological nervous systems to operate almost entirely in parallel. Everything must happen at once. Visual data must be analyzed, edges detected, features extracted, objects identified, and appropriate suites of motor commands issued all in an instant, that is, within a few "clock cycles".
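The chain-length argument can be made concrete. Taking the brake-pedal case with the conduction delays and per-neuron processing time quoted above (the stage count itself is only an illustration, not a physiological claim):

```python
# Rough budget: how many 1 ms serial processing stages fit inside
# a 150 ms reaction (stepping on a brake pedal)?  Conduction
# figures are those quoted in the text.

eye_to_cortex_ms = 5.0      # visual signal to the occipital cortex
cortex_to_muscle_ms = 10.0  # motor signal out (10-15 ms quoted)
per_neuron_ms = 1.0         # processing time added per neuron

reaction_ms = 150.0         # brake-pedal response time
budget_ms = reaction_ms - eye_to_cortex_ms - cortex_to_muscle_ms
max_neurons = int(budget_ms / per_neuron_ms)

print(f"at most ~{max_neurons} neurons in the serial chain")  # ~135
```

Even this generous estimate allows only on the order of a hundred serial stages, which is why everything else must happen in parallel.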
Generally, when a system becomes this massively parallel, it becomes computationally inefficient. A lot of resources have to be dedicated to data transfer. Also, many useful functions, such as numerical integration, are inherently serial and don't lend themselves to parallel processing.
In addition, neurons are quite unreliable, perhaps because they are so miniaturized that quantum mechanical fluctuations permit spurious firings and misfirings.
By contrast, electricity travels down properly terminated copper wires at about 85% of the speed of light, or about 250,000,000 meters per second (compared to 25 meters per second for unsheathed nerve fibers). The fastest current computers run at a 300 MHz clock rate, with still higher clock speeds on the way. This compares with the <200 Hz cycle rates of neurons. Although comparisons are dangerous, it may be that current silicon-based computers can be considered to be potentially 1,000,000 to 10,000,000 times faster than their biological counterparts.
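The two ratios behind the 1,000,000-to-10,000,000-fold estimate can be laid out explicitly, using only the figures quoted in this section (the "comparisons are dangerous" caveat applies):

```python
# Two crude speed ratios between silicon and neurons,
# from the figures quoted in the text.

copper_mps = 250_000_000   # signal speed in terminated copper wire
gray_matter_mps = 25       # impulse speed in gray matter
clock_hz = 300e6           # fastest clock rate cited (300 MHz)
neuron_hz = 200            # neuronal cycle-rate ceiling

signal_ratio = copper_mps / gray_matter_mps   # 1e7
cycle_ratio = clock_hz / neuron_hz            # 1.5e6

print(f"propagation: {signal_ratio:.0e}x, cycling: {cycle_ratio:.0e}x")
```

Both ratios fall inside the 10^6 to 10^7 range claimed in the text.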
For all practical purposes, digital electronic computers are 100% reliable. (The unreliability of biological computers may be an unavoidable consequence of their molecular-level miniaturization. Their circuit elements are so small that quantum-mechanical fluctuations may cause unpredictability as an outgrowth of the Heisenberg Uncertainty Principle. Silicon-based systems may be subject to unreliabilities and to a need for redundancy when circuit design rules decline below about 0.1 µm (1,000 Å).) Silicon-based computer technology has focussed upon fast uniprocessors rather than upon multiple-processor systems, in part because fast silicon-based uniprocessors have been technically feasible. Thus, silicon-based computers lie at the other end of the architectural spectrum from biological computers. Neural networks in biological computers may be an inevitable implication of the almost total parallelism dictated by the slow speed of electrochemical signal propagation.
For these reasons, it may be possible to implement AI functions in silicon using far fewer circuit elements than are required by the brain, by processing serially what the brain would have to process in parallel. In a way, it may be appropriate to compare the number of neurons in the brain with the number of calculations per second which can be performed by a computer, since at least one neuron in the brain must be dedicated to every operation that is to be performed in parallel. In that case, given the 1,000,000-fold speed advantage of silicon-based computers, coupled with their higher reliability, one could imagine that 10,000 parallel microprocessors might afford the same order of computational speed as the brain. This is approximately the number (9,000) of P6 chips that Intel has contracted to incorporate in the 1.8 teraflops ASCI Red supercomputer which they are to deliver to Sandia National Laboratories next year, or in the 10 teraflops parallel processor that is projected for the year 2000.
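The 10,000-processor figure can be reproduced from the numbers in this paragraph, if one assumes a round 10^10 neurons for the brain (an assumed order-of-magnitude figure, not taken from the text):

```python
# If one neuron is dedicated per parallel operation, and silicon
# is ~1e6 times faster, how many serial processors match the brain?
# The 1e10 neuron count is an assumed round figure.

neurons = 1e10          # assumed order of magnitude for the brain
speed_advantage = 1e6   # silicon vs. neuron, from the text

processors = neurons / speed_advantage
print(f"~{processors:.0f} processors")  # ~10000
```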
We shouldn't be surprised if this were to come to pass. Human substitutes for the physical capabilities of the animal kingdom, while still unspeakably unsophisticated compared to biological systems, have given us the supersonic transport, the diesel locomotive, and a great array of ultra-fast, ultra-powerful, ultra-precise machines which far outperform their animal counterparts. It would not be too startling if the same were to happen with the brain.