It seems to me that feelings, emotions, and drives are very important in developing a humanlike computer. (Animals must be entirely guided by their feelings, since they have little or no abstract guidance, and zero verbal guidance.)
The computer's behavior will be shaped by these feelings and drives.
How It Might Be Done:
The robot can feel heavier and move slower when it is getting "tired". Greater effort may be required to move and to operate. Feelings won't stop the robot but will tug at its "psyche". Its clock speed might be made variable as a punishment/reward mechanism. Also, its ability to concentrate might be compromised by its feelings. It needs to jump when something startles it. It needs to experience an (undesirable) state of arousal.
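One way the "tiredness" mechanism above might be sketched in code (all class names and constants here are hypothetical, invented only to illustrate the idea): fatigue scales the effort cost of actions and lowers the effective clock rate, tugging at the robot without ever forbidding an action outright.

```python
class FatigueModel:
    """Illustrative sketch: tiredness tugs at the robot but never stops it."""

    def __init__(self, base_clock_hz=100.0):
        self.base_clock_hz = base_clock_hz
        self.fatigue = 0.0  # 0.0 = fresh, 1.0 = exhausted

    def exert(self, effort):
        """Acting accumulates fatigue; harder actions tire the robot faster."""
        self.fatigue = min(1.0, self.fatigue + 0.1 * effort)

    def rest(self, duration):
        """Resting lets fatigue fade gradually."""
        self.fatigue = max(0.0, self.fatigue - 0.05 * duration)

    def effective_clock_hz(self):
        """Tiredness slows the clock as a mild punishment, never to zero."""
        return self.base_clock_hz * (1.0 - 0.5 * self.fatigue)

    def effort_required(self, nominal_effort):
        """The same movement feels heavier when the robot is tired."""
        return nominal_effort * (1.0 + self.fatigue)
```

The point of the design is that nothing here is a hard constraint: every action remains possible, but its felt cost rises and the pace of thought falls.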
The robot will feel a compulsion to move, explore, and investigate (constantly agitated). When it is in the presence of its "parent", it might feel a sense of peace and be able to sit still for a little while when it is cuddled.
You might ask why I think it would be desirable to program emotions into robots, assuming that it's even doable. My rationale is that if I'm trying to create something with human characteristics, I want to hew as closely to human thinking and feeling as possible, even if this strategy is to be abandoned once things move beyond the conceptual stage. Somehow, the robot must operate without explicit cookbook programming. Emotions exert general influences and generate motivations without dictating any specific actions. At the same time, it seems necessary to devise motivators in order to get the robot to do anything besides sit and wait to be told what to do.
Also, I would want our robots to feel love, gratitude, personal loyalty, and other positive feelings rather than simply being heartless machines: saints rather than unfeeling monsters.
It seems to me that emotions are states that modify our responses and thoughts. Being in a given state will increase the probability of responding in ways appropriate to that state and thinking thoughts apposite to that state. (The robot's subconscious may bring up warnings about potential hazards, particularly after its "brain" has been reorganizing information while it sleeps.)
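The idea that a state raises the probability of state-appropriate responses, without dictating any one of them, might be sketched as a weighted random choice (the states, actions, and weights below are hypothetical placeholders):

```python
import random

# Each emotional state biases the odds of each candidate response.
# Weights are illustrative; a real system would learn or tune them.
RESPONSE_WEIGHTS = {
    "calm":   {"explore": 1.0, "withdraw": 1.0, "lash_out": 0.1},
    "angry":  {"explore": 0.5, "withdraw": 0.5, "lash_out": 3.0},
    "afraid": {"explore": 0.2, "withdraw": 3.0, "lash_out": 0.5},
}

def choose_response(state, rng=random):
    """Pick a response at random, biased by the current emotional state."""
    weights = RESPONSE_WEIGHTS[state]
    actions = list(weights)
    return rng.choices(actions, weights=[weights[a] for a in actions])[0]
```

Under this scheme an angry robot is far more likely to lash out than a calm one, yet no state ever forces a single fixed action, which matches the "influence without dictation" requirement.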
Suppression and Repression:
Harmful feelings must be suppressed in the short term and repressed in the long term: retained for future disposition while being kept from interfering with current affairs ("stamp collecting"). Repressed feelings should fade gradually over time.
Anger:
I've found it easier to imagine how to implement anger than other feelings. Fight or flight; dealing with thresholds; lashing out at everything; deliberately trying to destroy; making noise. Anger needs violence. The robot will adopt an assertive or aggressive mind-set and be relatively apt to indulge in aggressive or assertive behavior. It will go into overdrive: clock rates and "energy levels" will rise. It might deliberately break things and then face punishment, or regret over the loss of what it broke. Anger activates when the robot is frustrated or attacked; it must decide whether to flee or to be angry. Desire to dominate, to impose its will upon the world. (Amygdala.) Moods must attenuate and shift. This may need to wait until some understanding is gained before it is implemented. Anger must elevate its state in the priority stack and lower the trigger-point thresholds for violent or angry responses when the anger level is raised, although the controller will have the power to override (suppress) the angry feeling. Angry feelings can be stored up for later disposition through repression. Angry actions will be more probable. Determination and "adrenalin" will go with anger.
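The threshold mechanics just described might be sketched like this (class name and numbers are hypothetical): raising the anger level lowers the trigger point for aggressive responses and drives the clock into overdrive, while a controller flag can override (suppress) the feeling entirely.

```python
class AngerSystem:
    """Illustrative sketch of anger as a threshold-lowering state."""

    def __init__(self):
        self.anger = 0.0           # 0.0 calm .. 1.0 enraged
        self.suppressed = False    # controller override
        self.base_threshold = 0.8  # provocation needed to trigger aggression
        self.base_clock_hz = 100.0

    def provoke(self, amount):
        """Frustration or attack raises the anger level."""
        self.anger = min(1.0, self.anger + amount)

    def trigger_threshold(self):
        """Higher anger lowers the threshold for an angry response."""
        return self.base_threshold * (1.0 - 0.6 * self.anger)

    def clock_hz(self):
        """Anger drives the system into overdrive."""
        return self.base_clock_hz * (1.0 + 0.5 * self.anger)

    def reacts_aggressively(self, provocation):
        """The controller's override wins even when the threshold is crossed."""
        if self.suppressed:
            return False
        return provocation >= self.trigger_threshold()
```

Note the division of labor: the feeling only shifts probabilities and thresholds, while the final veto stays with the controller, matching the suppression mechanism described above.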
But how does the robot know which actions are angry actions? How does it associate anger with angry actions? How does a tiny child learn to hit? Is it imitating its parents? Is hitting an instinctive behavior? Do we make physically lashing out an instinct with our robot?
Fear:
Associated with flight. Associated with the presumption of anger by others, i.e., feeling threatened. Withdrawal, self-protection, anticipation. Difficulty of separating imagination from reality.
Happiness:
Relaxation. Increased clock rate. Temporary lowering of self-critical (guilt) feelings and "voices"; elevated level of self-acceptance. Must have an internal model of "self". Temporary freedom from conflict and contention. Actions: smiling. Optimism. Energetic. Pockets of thought, feeling, and assessment held at bay until either the barriers are lowered due to a mood swing, or an event triggers a dump. Kinesthetic joint sensors express a light feeling. Elevated skeletal muscle voltages.
Love:
Dependency. Altruism. Gratitude. Nurturing. Desire to meld. Feeling of need, of interdependency. Loneliness.
Will need moral and ethical standards (principles) of conduct.
(Tied in with its self-appraisal.) Silent inner dialogue (in English, of course). Ability to simulate. An inner dialogue implies the ability to couch inner activities in natural language, a non-trivial capability (natural language processing) but one which has already received a considerable amount of attention. The stopper for us is that we are trying to create a robot which will understand language and the world in terms of direct experience, instead of simply as a rule-based manipulation of symbols by a facile but mindless machine.
Since computers are not designed with variable clock frequencies, we could slow the computer's clock by stealing cycles with software.
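Cycle-stealing could be as simple as this sketch (the function and its parameters are invented for illustration): after each unit of "mental" work, sleep away the remainder of a virtual clock period, so lowering the target frequency slows the robot's effective thinking rate without touching the hardware clock.

```python
import time

def run_throttled(step_fn, steps, target_hz):
    """Run step_fn `steps` times at roughly `target_hz` steps per second
    by sleeping away ("stealing") the leftover time in each period."""
    period = 1.0 / target_hz
    for _ in range(steps):
        start = time.perf_counter()
        step_fn()  # one unit of the robot's "mental" work
        elapsed = time.perf_counter() - start
        if elapsed < period:
            time.sleep(period - elapsed)  # steal the remaining cycles
```

Punishment then becomes a call that lowers `target_hz`, and reward one that raises it, giving the variable-clock mechanism described earlier entirely in software.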
All of the emotions will exist simultaneously and will be elevated when circumstances warrant. They will be guides to action.
We may be able to influence the robot's behavior, but how do we know that it is really feeling anything?
What happens if the robot doesn't care whether bad things happen to it or not? How do we give it an instinct for self-preservation? Suppose that it experiences various emotional states, makes decisions, and so forth, but doesn't actually feel anything: doesn't feel pain and doesn't care whether it lives or dies.
That may happen. If so, we will still have learned a lot and made a lot of progress. We could be neutral about such matters, but we're not: we believe that life is real and life is earnest. Maybe the robot won't be neutral, either.
Among the requirements for emotion, it seems to me, is the ability to raise and lower clock speeds, physical power levels, and response rates. The robotic system also has to hold some resources in reserve for emergency situations.
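Holding resources in reserve might look like this sketch (class name and fractions are hypothetical): everyday activity can only draw on a capped share of the power budget, and an emergency flag unlocks the remainder.

```python
class PowerBudget:
    """Illustrative power budget with an emergency-only reserve."""

    def __init__(self, capacity=100.0, reserve_fraction=0.2):
        self.capacity = capacity
        self.reserve = capacity * reserve_fraction  # held for emergencies
        self.used = 0.0

    def request(self, amount, emergency=False):
        """Grant power only if it leaves the reserve intact, unless
        this is an emergency, in which case the full capacity is open."""
        limit = self.capacity if emergency else self.capacity - self.reserve
        if self.used + amount <= limit:
            self.used += amount
            return True
        return False
```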
From the standpoint of actual programming, there will indeed be a multitude of "agents" which will act upon the ego.
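One way such a multitude of agents acting upon the ego might be sketched (the agents and the situation keys here are toy inventions): each agent inspects the situation and pushes weighted urges toward actions; the ego sums the pushes and acts on the strongest net urge.

```python
def ego_decide(situation, agents):
    """Each agent maps a situation to {action: weight}; the ego sums
    all the agents' pushes and picks the action with the largest total."""
    totals = {}
    for agent in agents:
        for action, weight in agent(situation).items():
            totals[action] = totals.get(action, 0.0) + weight
    return max(totals, key=totals.get)

# Two toy agents, invented purely for illustration.
def hunger_agent(situation):
    """Pushes hard toward recharging when the battery runs low."""
    return {"seek_charger": 2.0} if situation.get("battery", 1.0) < 0.2 else {}

def curiosity_agent(situation):
    """Always exerts a mild pull toward exploration."""
    return {"explore": 1.0}
```

No single agent dictates behavior; each merely exerts influence, which is consistent with the earlier requirement that emotions motivate without specifying particular actions.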