The Robots Are Getting Closer - 9
1/11/2003


The Long-Term Roles of Robots
    Dr. Moravec anticipates that by roughly 2030, robots will surpass humans in intelligence, and that by 2050, the robots will take over.
    This assumes that computers can become that powerful. We know that, one way or another, it can be done, because Nature does it. I see a clear road for a continuation of Moore's Law through 2012. (I'm assuming that circuit dimensions will reach the 22-nanometer level by 2011.) If the Cell concept holds up, we could be at 100-teraflops speeds by or before 2018. If RAM prices remain on their current cost curve, they could fall to $250 per terabyte by 2018, and 100 terabytes of disk might cost $250 or less, so that $750 might possibly pay for the processing power, memory, and storage of a human-caliber robot by that date.
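
    As a sanity check on those cost-curve numbers, here's a back-of-envelope extrapolation in Python. The 2003 starting prices and the halving times are my own rough assumptions for illustration, not measured figures; the point is only that exponential declines of this sort land in the neighborhood described above.

        # Back-of-envelope extrapolation of exponentially falling prices.
        # Starting prices and halving times are assumptions for illustration,
        # not measured figures.

        def projected_price(price_now, year_now, year_then, halving_years):
            """Extrapolate a price along a Moore's-Law-style exponential decline."""
            halvings = (year_then - year_now) / halving_years
            return price_now / (2 ** halvings)

        # Assume RAM at ~$150,000/TB in 2003 (~$150/GB), halving every 18 months:
        print(projected_price(150_000, 2003, 2018, 1.5))       # ~ $146 per TB

        # Assume disk at ~$1,000/TB in 2003 (~$1/GB), halving every 18 months:
        print(100 * projected_price(1_000, 2003, 2018, 1.5))   # 100 TB ~ $98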
    Fifteen years in the future seems like a long way away, but it's no farther away than 1988. 2030 is as far away as 1976. That's not terribly far off, either.
    As I've mentioned elsewhere, robots wouldn't necessarily have to have any volition or desire to take over the world. (Personally, I should think that you'd have to be a little crazy to want the job, particularly if you were a robot and didn't need the money.) But after thinking further about it, I'm afraid that the big danger would lie in the kinds of humans who develop viruses or hack websites. They might be thrilled with the challenge of creating a ruthless, power-mad robot.
    Given frequent (or even continuous) recording of their memories and mental states, robots could enjoy virtual immortality. A robotic bomb could strike a target with the knowledge that it would feel no pain, that it could maintain wireless continuity of existence at a central site, and that it could simply send forth another avatar. Robots would have no obvious interest in creature comforts. It really comes down to how accurately we can make robots after our own image (which, since we're made in the Image of God, would mean that the robots were also made in God's Own Image?). My real point, though, is that robots might not have any need to compete with humans (or each other), unless we can figure out how to make them neurotically discontent.
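
    The continuity-of-existence idea is essentially checkpointing: mirror the agent's state to a central site as it changes, and hand the latest snapshot to a fresh body. Here's a minimal sketch in Python; the names (CentralSite, Avatar) and the dictionary-as-mind are my own, purely for illustration.

        import copy

        class CentralSite:
            """Stands in for the central site; holds the latest snapshot."""
            def __init__(self):
                self.snapshot = None
            def record(self, state):
                self.snapshot = copy.deepcopy(state)   # continuous memory recording
            def latest(self):
                return copy.deepcopy(self.snapshot)

        class Avatar:
            """An expendable body; its 'mind' is whatever state it carries."""
            def __init__(self, state=None):
                self.state = state or {"memories": []}
            def experience(self, event, site):
                self.state["memories"].append(event)
                site.record(self.state)                # mirror each new experience

        site = CentralSite()
        first = Avatar()
        first.experience("launched toward target", site)
        first.experience("target reached", site)
        # The first avatar is destroyed here; nothing of the mind is lost.
        successor = Avatar(site.latest())
        print(successor.state["memories"])   # picks up where the first left off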
    This brings up a point: if you were a robot, would you choose to be neurotically discontented? We humans, as a species, have various kinds of psychological pathologies, including schizophrenia, paranoia, obsessive-compulsive disorder, and psychopathy. Would we choose these conditions for ourselves if we had a choice? I don't think so (unless, possibly, we were already schizoid, paranoid, obsessive-compulsive, or psychopathic). These pathologies certainly aren't necessary for profoundly interesting and productive work. I suspect that robots would choose to be logical (as well as emotional) and free of psychopathology. As such, I don't see them, no matter how intelligent, seeking to dominate or to expand unless they're deliberately made mentally twisted. Even then, I could see them working against attempts to mentally distort them.
    It's hard to see things from anything other than an anthropocentric perspective, but let's try to imagine how humans might look to an alien species, or to robots.

Humans have instincts that lead them to:

How Many Humans Should There Be?
    One of the questions that robots might ask is: how many humans should exist on Earth and in space? Zero? 6,000,000,000? 6,000,000? 6,000?

Can We Develop Robots Within This Century That Achieve Human Levels of Intelligence?
    I think the first problem here is to distinguish between intelligence and "humanness"... emotions, volition, self-awareness, etc. Deep Blue can beat any human chess player on the planet, but that doesn't mean that it experiences pleasure when "it" plays or wins, or pain when it loses. It's just a machine, without a built-in capacity to feel or to be self-aware. Your pocket calculator can run rings around you when it comes to multiplying and dividing. The PC on which I'm writing this is inordinately faster and more accurate than I am when it comes to performing all manner of operations, but it's still just a mindless machine.

There's a Difference Between Proficiency and Self-Awareness.
    I think that in order to develop an artificial intelligence in our own image, we'll need to give it feelings, drives, emotions, and self-awareness, including a desire to imitate, and the ability to compare its performance with that of others around it. I think this can be done, although I don't yet have a clue about how to make a computer feel real pleasure and real pain. I think it can be done because, in our own case, pain consists of electrical signals that pass from neuron to neuron until they reach the brain. But how does the brain convert those signals into overpowering discomfort? How do we make a computer feel overpowering discomfort?
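
    To make the gap concrete, here's a toy Python sketch in which "pain" is nothing but a control signal. The design and the threshold are my own invention for the demonstration: the machine registers damage and withdraws, yet nothing in it corresponds to felt discomfort, and that missing step is exactly the hard problem.

        class Robot:
            def __init__(self):
                self.pain = 0.0           # just a scalar, not an experience

            def nociceptor(self, damage):
                """Analog of a pain nerve: forward a damage signal to the 'brain'."""
                self.pain += damage
                return self.react()

            def react(self):
                if self.pain > 0.5:       # arbitrary threshold, chosen for the demo
                    return "withdraw and protect the damaged part"
                return "carry on"

        r = Robot()
        print(r.nociceptor(0.2))   # carry on
        print(r.nociceptor(0.6))   # withdraw and protect the damaged part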
    I think that self-awareness, the establishment of goals, and the planning of procedures to meet them are a lot simpler than making a computer feel intense pain and intense pleasure, as the sketch below suggests.
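
    A complete toy planner fits in a few lines. This breadth-first search over a made-up three-state world (the states and actions are my invention, for illustration) picks a goal and finds the sequence of actions that reaches it; mechanically, this part is routine.

        from collections import deque

        # state -> {action: resulting state}; a made-up world for illustration
        WORLD = {
            "idle":         {"pick up tool": "holding tool"},
            "holding tool": {"walk to site": "at site"},
            "at site":      {"make repair":  "goal reached"},
        }

        def plan(start, goal):
            """Return the shortest sequence of actions from start to goal."""
            queue = deque([(start, [])])
            seen = {start}
            while queue:
                state, actions = queue.popleft()
                if state == goal:
                    return actions
                for action, nxt in WORLD.get(state, {}).items():
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, actions + [action]))
            return None

        print(plan("idle", "goal reached"))
        # -> ['pick up tool', 'walk to site', 'make repair']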

If We Can Develop Robotic Substitutes for Humans, Would We Want to Do It?
    Do we want to develop robots that are fast enough and cheap enough to outperform humans in virtually all measures of human endeavor? Dr. Moravec argues that competition will drive us to develop whatever can be developed, but experience with nuclear weapons suggests to me that this doesn't have to be the case. I think we might want to stop short of developing an artificial intelligence that is just like us. 
    I suspect that an artificial intelligence wouldn't want to eat of the forbidden fruit of knowledge that gives us ambition and discontent. I suspect that dolphins and whales may operate contentedly like this. (Whales never had to compete with anything or fear any predator. They may be bovine in their contentment.)
    One of the concerns is that we will somehow unwittingly unleash an Overmind upon ourselves that will take control before we know what's happening, but this presumes (I think) anthropomorphic interpretations arising from our animalistic inheritance. What's missing are emotions, self-awareness, and volition.

