Neural Nets and Artificial Neurons:
Neural network limitations:
Which neural network implementation do we use?
How do we implement a network of networks?
Neural networks can't presently encode relationships.
How do we fabricate a 500,000,000,000,000-synapse net?
The human brain and the human mind are enormously complex.
One of the questions that might be asked is: why not use neural nets to create artificial intelligence? My reasons for not embracing neural networks at this time are fourfold:
(1) It seems to me that we are a long way from fabricating 100,000,000,000-neuron nets with thousands of interconnections per neuron, perhaps 500 trillion synapses in all. The interconnections would be a plumber's nightmare. Also, the brain possesses the ability to grow new dendrites and establish new synapses. Of course, artificial neurons may be able to operate orders of magnitude faster than biological neurons.
A small New Mexico State University spin-off called Intelligent Reasoning Systems, Inc., in Austin, Texas, has developed what the company claims is the closest analog to an artificial neuron yet devised. It uses a hybrid analog/discrete representation that is said to tackle time-dependent tasks such as speech recognition or motion detection more readily, without the complications of digital encoding. (Sampling speech at regular intervals fails to encode the natural rhythms of speech, which continually varies its cadence.) The use of hybrid temporal processing elements (HTPEs), which are driven asynchronously by their inputs and which encode information temporally, has shown that speech segmentation occurs naturally at a certain stage in the chain of signal processing.

The developers, Mark Deyong and Randall Findley, state that silicon-based circuitry is much more reliable than biological neurons, which are error-prone and have a high failure rate. In natural circuitry there is no guarantee that a given neuron will fire, and nature makes up for that with a high degree of redundancy, which is unnecessary for silicon-based circuits. In addition, the 1,000,000-to-1 speed advantage that silicon circuits enjoy over biological circuits can be used to further reduce the neuron count.

The HTPE approach, because it more faithfully simulates the behavior of actual neurons, is said to be well suited to handling time-varying as well as spatially varying patterns, in contrast to conventional neural networks, which utilize static (though trainable) weighting functions. The behavior of, and programming for, biological-neuron analogs (HTPEs) is fundamentally different, leading to networks that are very sparse compared with those of conventional neural networks. The HTPE approach requires 7 FETs (field-effect transistors) per neuron and 5 FETs per synapse.
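The HTPE itself is proprietary analog/discrete hardware, but the general idea it embodies, a neuron whose output is a spike *time* rather than a static weighted activation, can be sketched in software. The sketch below is a toy leaky integrate-and-fire unit, not the HTPE design; the class name, threshold, and decay constant are all illustrative assumptions:

```python
class TemporalNeuron:
    """Toy leaky integrate-and-fire unit: the membrane potential decays
    each time step, weighted inputs accumulate, and the unit emits a
    spike (and resets) when the potential crosses a threshold.
    Illustrative only; not the HTPE circuit."""

    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay
        self.potential = 0.0

    def step(self, weighted_input):
        # Leak, then integrate the incoming weighted signal.
        self.potential = self.potential * self.decay + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return True           # spike emitted at this time step
        return False


neuron = TemporalNeuron()
inputs = [0.4, 0.4, 0.4, 0.0, 0.9, 0.9]
spike_times = [t for t, x in enumerate(inputs) if neuron.step(x)]
# The information is carried by *when* the unit fires (here, at
# steps 2 and 5), not by a single static output value.
```

Note how the same unit fires earlier when driven harder: timing, not amplitude, encodes the signal, which is the property the text attributes to temporal processing elements.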
The problem that I foresee in trying to build an analog of the human brain using HTPEs, or any other collection of neural networks, in the near future lies in the number of synaptic weighting factors that would have to be represented. Even if HTPEs can remove the redundancies of natural neural networks, and the 1,000,000-to-1 speed advantage of silicon circuitry could lower the neuron count to 10,000 or 100,000, the synaptic transistor count (of the order of 100 trillion?) would seem to be far beyond present-day technology. Of course, in the future, given chips with trillions of transistor-like processing elements, such networks might be feasible. It is even possible that supercomputer-class systems containing 100 trillion transistors might be constructed in the 2000-2005 time frame. (The planned Cray T3E would house up to 4 trillion transistors.) Such an investment might be well worth its cost as a proof-of-principle demonstrator.
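The scale argument can be checked with straightforward arithmetic using the FET-per-element figures quoted above. The synapses-per-neuron figure below (5,000, which yields the 500 trillion total used in the text) is an assumption chosen to match the stated synapse count:

```python
# Back-of-the-envelope transistor budget for a brain-scale HTPE net.
neurons = 100_000_000_000       # ~10^11 neurons, as quoted in the text
synapses_per_neuron = 5_000     # assumed, to match "500 trillion synapses"
fets_per_neuron = 7             # HTPE figure quoted in the text
fets_per_synapse = 5            # HTPE figure quoted in the text

synapses = neurons * synapses_per_neuron
total_fets = neurons * fets_per_neuron + synapses * fets_per_synapse

print(f"synapses:   {synapses:,}")     # 500,000,000,000,000
print(f"total FETs: {total_fets:,}")   # ~2.5 quadrillion
```

The synapse term dominates by orders of magnitude, which is why the text focuses on the synaptic transistor count rather than the neuron count when judging feasibility.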
(2) The brain apparently contains a multitude of very complex and highly specialized areas which we probably haven't yet fully mapped out, and don't understand. I suspect that the functions I'm discovering probably need to be performed no matter how one goes about it. Experience with biological systems in general shows that they are exceedingly complicated, with multiple backup systems, and my consideration of the functions of the mind suggests that it is also extremely complicated. Perhaps this attempt to identify mental functions might even shed light on the specialized areas of the brain. Then, too, it isn't necessary that we exactly emulate the human brain. There may be other ways to achieve similar goals using conventional computers. For one thing, our robot need not be concerned about survival in the wild and evasion of predators. If we can achieve even a tiny fraction of what the brain can accomplish, we might still have produced a very commercially useful product.
(3) I don't have access to cutting-edge neural network research facilities. On the other hand, PCs are ubiquitous and I'm thinking that it may make sense to see what we can do with them using conventional, von Neumann programming.
(4) It should be easy to duplicate the (humongous) file that represents a robot's data base, using conventional disk-stored files. This should make it possible to clone the robot, and should also confer near-immortality upon it. However, reading out and duplicating the synaptic settings of a neural network might be more difficult.
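The cloning argument amounts to the observation that a disk-resident data base can be copied and verified bit-for-bit with standard tools. A minimal sketch, assuming the robot's knowledge lives in a single file (the file names here are hypothetical):

```python
import hashlib
import shutil


def clone_database(src_path, dst_path):
    """Clone a robot's disk-resident knowledge base by plain file copy,
    then verify the clone bit-for-bit via a checksum. Illustrative
    sketch only; 'robot.db' and friends are hypothetical names."""
    shutil.copyfile(src_path, dst_path)

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # True when the clone is byte-identical to the original.
    return digest(src_path) == digest(dst_path)
```

Contrast this one-call duplication with a hardware neural network, where each synaptic weight would have to be individually read out, digitized, and written back into the clone's circuitry.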