An Expert System for Learning and Discovery:
    To define an expert system for learning and discovery, we will ask: "How would an intelligent adult, given well-trained analytical abilities, explore a totally alien existence?"
    One very important step in the robot's learning process would be "objectification"—learning that there is a world out there that is not under the robot's direct control.
    If we install an expert system for learning and discovery, we would probably want to make it modifiable by experience.

How Do We Partition a Film Strip into Objects and Events without Human Intervention?:
    Whoa! What the robot is going to experience is an action sequence—a film clip. There is nothing in this to partition the world into actions and objects—verbs and nouns.

Answer: Following only our rules for associating, discriminating, forgetting, and condensing to generic classes, this partitioning should occur automatically, because the objects remain the same over a number of different experiential sequences. The animation tracks would fade into each other as the computer slowly reduced the level of detail night after night, combining the most similar animation tracks until only one or a few generic tracks were left. Meanwhile, the objects in the animation tracks would remain the same even when the animation tracks were sizably different. Consequently, the objects have a time-independent existence. Gradually, a model of external objects, and of a world that has a time-independent reality, would emerge. Or would it? Probably so. The shell model, including the objects, would end up as an invariant part of the generic action track or tracks involving that room. If the objects and events within action sequences were stored independently of the sequences, then cross-linkages to other objects and events would develop, based, for instance, on the objects' similar silhouettes or other distinguishing features.
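    A rough sketch of this nightly condensation follows (in Python, purely illustrative; the track representation, the similarity measure, and the merge rule are all assumptions, not the mechanism itself):

        import itertools

        def similarity(track_a, track_b):
            # Hypothetical measure: the fraction of objects the two
            # animation tracks have in common.
            shared = len(track_a["objects"] & track_b["objects"])
            total = len(track_a["objects"] | track_b["objects"]) or 1
            return shared / total

        def merge(track_a, track_b):
            # Blend two tracks into one more-generic track: objects
            # common to both survive; detail that differs is dropped.
            return {"objects": track_a["objects"] & track_b["objects"],
                    "events": [e for e in track_a["events"]
                               if e in track_b["events"]]}

        def nightly_condense(tracks, target=3):
            # Night after night, combine the most similar pair of
            # tracks until only one or a few generic tracks remain.
            while len(tracks) > target:
                a, b = max(itertools.combinations(tracks, 2),
                           key=lambda pair: similarity(*pair))
                tracks = [t for t in tracks if t is not a and t is not b]
                tracks.append(merge(a, b))
            return tracks

    Note how the objects, being shared across tracks, are exactly what survives the merging; that is the sense in which they acquire a time-independent existence.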
    Also, how will the robot learn to hang on to the objects it picks up?

Answer: Grasping could be pre-programmed, or it could be learned by trial and error.
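    As a sketch of the trial-and-error alternative (Python; try_grip is a hypothetical callback and the parameter values are arbitrary assumptions):

        def learn_grip(try_grip, force=1.0, step=0.5, max_trials=50):
            # Trial and error: try_grip(force) reports "slipped",
            # "crushed", or "held" for a given grip force.
            for _ in range(max_trials):
                result = try_grip(force)
                if result == "slipped":
                    force += step      # squeeze harder next time
                elif result == "crushed":
                    force -= step
                    step /= 2          # back off; search more finely
                else:
                    return force       # found a grip that holds
            return None                # give up; pre-program it after all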
    Impenetrability of objects is another generalization to be made. The robot has to learn to associate its being stopped with the object which is stopping it. Again, the process of generalization should take place until the robot associates impenetrability with all objects. (Imagine what a shock it's going to be when the robot first encounters a liquid!) The robot's state of confidence in its assumptions about the world is going to be sorely tried for a long time. Its gut-level self-evaluation is going to be impacted by these unexpected discoveries. As a part of its modelling of the world, such unwelcome surprises should cause the robot to become more cautious and skeptical for a while, and to test its assumptions more extensively than before. This caution will soon taper off, but will rise again after each unexpected challenge to familiar assumptions.
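    This caution dynamic could be modelled very simply (an illustrative Python sketch; the decay and jump constants are arbitrary assumptions):

        class Caution:
            # Gut-level skepticism: jumps after each surprise,
            # then tapers off while the world behaves as expected.
            def __init__(self, level=0.0, decay=0.9, jump=1.0):
                self.level, self.decay, self.jump = level, decay, jump

            def tick(self, surprised):
                if surprised:
                    self.level += self.jump   # assumptions were challenged
                else:
                    self.level *= self.decay  # caution tapers off
                return self.level

            def extra_tests(self):
                # How many extra checks of its assumptions the
                # robot should run before trusting a prediction.
                return int(round(self.level * 3))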
    Does this phenomenon explain a child's uncertain grip on reality? Eventually, by the time we grow up, we learn enough about the world that we aren't so often surprised in such fundamental ways.

When the Robot Experiences Water:
    What will our baby robot make of water? It has no specific shape and no specific color at all. Its properties will be utterly unlike those of the solid objects for which we have designed the shape tables and object identification mechanisms.
    Also, how will the robot learn to use cup-shaped objects in lieu of actual cups, the way a human or a primate might improvise? How will it grasp the concept of providing a cup-shaped object to hold water?

What We Remember and What We Don't:
    Note that unique experiences or events are remembered, like the midnight hike with Mr. Drew, or Ruth and me climbing the mountain at Estes Park. On the other hand, routine action sequences in the same setting are soon forgotten, but the setting itself is well-remembered.
    We need to distill action sequences down to their highlight events in order to illuminate causal relationships.
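    One illustrative way to do that distillation (a Python sketch, assuming each event reduces to a numeric observation and that the robot has some predictor embodying its current expectations):

        def distill(sequence, predict, threshold=0.5):
            # Keep only the "highlight" events: the transitions the
            # robot's current model fails to predict. Routine
            # stretches drop out, which is what lets causal
            # relationships stand out.
            highlights = []
            for before, after in zip(sequence, sequence[1:]):
                if abs(predict(before) - after) > threshold:
                    highlights.append((before, after))
            return highlights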

Memory and Recognition:
    Whether or not experiences are remembered is determined by what's going on inside and not directly by what's happening in the external world.
    I choose not to remember the start-to-finish "video tape" of my visit to Nobie Stone. Instead, I extract excerpts from it at selected times when something special happened. There are "hot links" to Sunday School, SSL in 1965, and other Nobie events. There are links between various events and Nobie's name, Nobie's face, Nobie's voice, and all the locales where I have encountered him. Nobie's voice is stored not as actual words but as a certain pitch and a style of diction, together with images of his face while speaking (seen from various viewpoints). The most vivid image is that of him speaking in Sunday School class.
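    A minimal sketch of such a store of excerpts and hot links (Python; the cue strings and the structure are assumptions for illustration):

        from collections import defaultdict

        class MemoryStore:
            # Excerpts are stored once and cross-linked by the
            # people, places, and features they share.
            def __init__(self):
                self.excerpts = {}
                self.links = defaultdict(set)   # cue -> excerpt ids

            def store(self, excerpt_id, content, cues):
                self.excerpts[excerpt_id] = content
                for cue in cues:   # e.g. "Nobie's voice", "Sunday School"
                    self.links[cue].add(excerpt_id)

            def recall(self, cue):
                # Any single cue pulls up every linked excerpt.
                return [self.excerpts[i] for i in self.links[cue]]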
    I can remember thoughts that I have had without necessarily remembering when I have had them. Last night, when John Stephens brought up a cooking anecdote, it triggered my cake-baking anecdote. I had to understand (abstract) the meaning of his conversation before I could make the connection.
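    The triggering could be sketched like this (Python; the abstraction step shown is a crude stand-in, and real understanding would be far richer):

        def abstract(utterance):
            # Reduce an utterance to a set of topic features.
            stop = {"the", "a", "an", "when", "my", "it", "up"}
            return {w for w in utterance.lower().split() if w not in stop}

        def triggered(memories, utterance, overlap=2):
            # An anecdote fires when its abstracted meaning shares
            # enough features with what was just heard.
            heard = abstract(utterance)
            return [m for m in memories
                    if len(abstract(m) & heard) >= overlap]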

Should We Store Shell Models in Each Time-Based Animation Track? Or Should We Store Time-Independent Spatial Elements (Such as Objects and Shell Models) Separately?
    Objects are spatial and more or less time-independent, so we may need to store them differently from time tracks. The robot will need to generate a 3-D shell model of its environment. One way to do this could conceivably be to build the model in the process of recording its action sequences; the reason for considering that approach is that it may be the way a neural network would operate. (The 3-D model would be stored over and over in each action sequence or video clip. Since the 3-D shell model wouldn't change from sequence to sequence, it wouldn't be altered as the action sequences coalesced into one or a few generic sequences.) But let's face it: we live in a world of objects and events. We know that's how the storage will end up. It probably makes sense to store geometrical layouts and objects from the outset differently than we store a series of events.
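    A sketch of the separate-storage alternative (Python; the field names are hypothetical):

        class ObjectStore:
            # Time-independent spatial elements: objects and shell
            # models, stored once and shared by every action track.
            def __init__(self):
                self.objects = {}        # object id -> shape/feature data
                self.shell_models = {}   # room id -> 3-D shell model

        class ActionTrack:
            # A time-based sequence that refers to objects by id
            # instead of carrying its own copy of the 3-D model.
            def __init__(self, room_id, events):
                self.room_id = room_id   # look up the shell model here
                self.events = events     # (time, object_id, action) tuples

    With this layout, condensing action tracks into generic tracks never has to touch the shell model at all.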

We may end up in the expert system mode, storing whatever we can and availing ourselves of whatever we can. We might want to provide all the information about objects that is available to us, together with any relationships that might exist. (We could spoon-feed the robot's mind.) However, it is desirable that the robot have the capability to do this for itself.

Can Clone a Robot's Mind:
    Once one robot has experienced the world, its experiences can be transferred to other robots so that they don't have to climb the learning curve. Like amnesia victims, they might be brought into a world with which they are already familiar.
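    In the simplest sketch, the transfer is a wholesale copy of the learned structures (Python; the attribute names are assumptions):

        import copy

        def clone_mind(experienced, blank):
            # Move one robot's hard-won experience into another, so
            # the new robot wakes already familiar with the world.
            blank.object_store = copy.deepcopy(experienced.object_store)
            blank.tracks = copy.deepcopy(experienced.tracks)
            blank.memory = copy.deepcopy(experienced.memory)
            return blank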