The Robots Are Getting Closer - 4
Computer Technology Forecast for Robotics



Computer Technology Forecast: An Update
    Today's 3-GHz Pentium 4 computer is geared to 130-nanometer design rules. If we halve our circuit dimensions, corresponding to the 65-nanometer design rules projected for the latter half of 2005, then, all other things being equal, we should be able to quadruple our computer speed with no increase in power. This is because our transistors would be only 1/4th as large, and should need only 1/4th as much power to switch states. This would allow clock speeds of the order of 12 GHz, in keeping with Intel's speed predictions for 2005. Halving circuit sizes again, to 32 nanometers, would allow clock speeds of the order of 45-50 GHz, which I'm anticipating for the 2010 time frame. If we ask how small we would have to make our circuitry in order to boost clock speeds by a factor of 333,000, we arrive at about 0.2 nanometers, or 2 Angstrom units, or less than one atomic diameter. Ahem. This may be slightly unrealistic.
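    The scaling arithmetic above can be sketched in a few lines of Python. This is strictly a back-of-the-envelope model, resting on my assumption that clock speed scales as the inverse square of feature size:

```python
import math

# Baseline from the text: a 3-GHz Pentium 4 at 130-nanometer design rules.
BASE_SPEED_GHZ = 3.0
BASE_SIZE_NM = 130.0

def speed_for_size(size_nm):
    """Projected clock speed if speed scales as 1/size^2."""
    return BASE_SPEED_GHZ * (BASE_SIZE_NM / size_nm) ** 2

def size_for_speedup(speedup):
    """Feature size needed to boost clock speed by a given factor."""
    return BASE_SIZE_NM / math.sqrt(speedup)

print(speed_for_size(65))         # 12.0 GHz -- the 2005 projection
print(speed_for_size(32))         # ~49.5 GHz -- the 45-50 GHz 2010 projection
print(size_for_speedup(333_000))  # ~0.225 nm -- less than one atomic diameter
```

    The same two functions reproduce both the 12-GHz figure and the sub-atomic 0.2-nanometer punch line.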

Table I, below, depicts two computer speed forecasts: a Moore's-Law model, shown in columns two and three, and an extrapolation of the timetable prepared by the Semiconductor Industry Association, listed in columns four and five. (This table doesn't take into account any technological surprises, like The Cell.) 
    The Moore's Law model assumes that circuit densities on a chip can continue to double every two years, rather than every 18 months as was the case with memory chips in the past.
    The Semiconductor Industry Association's Roadmap is historically less ambitious than what leading chip vendors such as Intel have actually accomplished. The Roadmap would have circuit densities doubling every 3 years instead of every other year. Semiconductor makers would like this because it wouldn't push them so hard, and would allow them to recoup more costs from their investments in new equipment.

Table I - Computer Speeds and Feature Dimensions vs. Time

Date       Moore's Law              SIA Roadmap
           Size, nm    Speed, GHz   Size, nm    Speed, GHz
12/1999    250?        0.9          250?        0.9
12/2002    130         3            130         3
12/2005    65          12           90          6
12/2009    32          50           65          12
12/2013    16          200          45          25
12/2017    8           800          22          100
12/2021    4           3,200        16          200
12/2025    2           12,500       8           800
12/2029    1           50,000       5.6         1,600
12/2033    5 A.        200,000      2.8         6,400
12/2037    2.5 A.      800,000      2           12,500
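    The Moore's-Law columns of Table I follow a simple recurrence: feature size halves at each table step, and speed quadruples. A sketch that regenerates those columns (it matches the table only to within the table's rounding, e.g. 48 vs. 50 GHz):

```python
def moores_law_rows():
    """Regenerate the Moore's-Law size/speed columns of Table I."""
    years = [2002, 2005, 2009, 2013, 2017, 2021, 2025, 2029, 2033, 2037]
    rows, size_nm, speed_ghz = [], 130.0, 3.0
    for year in years:
        rows.append((year, size_nm, speed_ghz))
        size_nm /= 2     # halve the feature size each table step
        speed_ghz *= 4   # quadruple the speed (inverse-square scaling)
    return rows

for year, size, speed in moores_law_rows():
    print(f"12/{year}  {size:g} nm  {speed:,.0f} GHz")
```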

    In fairness to Dr. Moravec, it should be noted that his 2030 date for human-level processing speeds assumes that 100,000 gigops or gigaflops should be sufficient to rival the human brain. Given a Moore's-Law rate of improvement through 2031, we would then have 100-teraflops speeds on a chip, with circuit features running about 7 Angstrom units, or 2-to-3 atoms, in size.

Parallel Processing Rides to the Rescue
    Parallel processing is the basis for present-day supercomputers. These multi-teraflops behemoths contain thousands of identical microprocessors, and the one-petaflops IBM machines planned for later in this decade will have of the order of 1,000,000 microprocessors. 
    The IBM/Sony/Toshiba one-teraflops (peak speed?) Cell microprocessor chip, targeted for late next year (2004), is intended to achieve its blazing speed by incorporating a number (4 to 16) of SIMD (Single-Instruction, Multiple-Data) streams on one chip. IBM manufactures the PowerPC RISC (Reduced Instruction Set Computer) family of microprocessor chips, which should lend themselves well to a multiprocessor-on-a-chip concept. 
    A key consideration is that of power input. The chips I'm describing would be limited to power inputs no greater than, perhaps, ~100 watts. Anything much greater than that would heat the room too much.

One Question Is How Much Improvement Can Be Achieved by Clever Processor Design
    In its Itanium line of server microprocessors, Intel achieves a sustained 14-gigaflops throughput with 4 Itanium 2 processors running at 1.0 GHz. This is significant because it indicates how much faster a chip may crunch numbers than its clock speed would suggest. It should be noted that the Itanium chips already utilize parallel processing, although they differ from SIMD devices in that they are highly sophisticated, multi-instruction-stream constructions intended for general-purpose use.
    (Both Intel and IBM are introducing hyperthreading into their product lines. However, Intel's hyperthreading will increase processor speeds by about 20%, while IBM's hyperthreading can virtually double a processor's speed. Hyperthreading essentially involves presenting one physical processor as two logical processors, so that two instruction streams can share the chip's execution units.)
A Third Miniaturization Dimension: Adding More and More Transistors to the Chip
    Complicating this increased miniaturization is the fact that we are trying to add more and more transistors to a chip, giving rise to a third dimension of power dissipation, but also to another way to speed up the chip. Chip designers are meeting this challenge by, among other stratagems, powering only those transistors that are instantaneously in operation. By 2005, 1,000,000,000 transistors may be mounted on a chip, with, by simple extrapolation, 8,000,000,000 transistors per chip by 2010. These transistors might be used for parallel processing on the chip, as might be the case with the "Cell". If so, the Cell might operate 5 times as fast in 2010, with 8 times as many processors on the chip, permitting peak parallel-processing speeds of 40 teraflops! (This is "playing with numbers" on my part, and has an exceedingly low confidence level. The Cell is vaporware at the moment, and my wild projections regarding what it might do in 2010 are vaporware squared. It's included only as an example of the wild cards that might accelerate the arrival of teraflops and petaflops chips.)
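    For what it's worth, the "playing with numbers" above multiplies out as follows; every factor here is a guess carried over from the paragraph, not a measurement:

```python
# Speculative 2010 extrapolation of the Cell, using the paragraph's own guesses.
cell_peak_tflops_2005 = 1   # claimed peak speed of the planned Cell chip
clock_factor = 5            # guessed clock-speed improvement by 2010
processor_factor = 8        # 8x the on-chip processors (1 billion -> 8 billion transistors)

peak_tflops_2010 = cell_peak_tflops_2005 * clock_factor * processor_factor
print(peak_tflops_2010)  # 40 -- the 40-teraflops peak quoted above
```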

Do We Need 100 Teraflops Speeds to Rival Human Intelligence?
    I personally suspect that robots might not have to achieve even 100-teraflops speeds to rival human cognitive processes. Part of the basis for this statement is the matter of what can be accomplished with the processing power we've got. We're still a significant distance from fully speaker-independent speech recognition, but we're tackling the problem with computers with speeds in the hundreds-of-megahertz range, at least a factor of 20,000 to 30,000 below the minimum processing power that even I would propose for a human-level artificial intelligence. 
    Another basis for this statement is the fact that animals are designed to detect and escape predators, whereas, at least initially, robots don't have to be constructed to meet such challenges. Also, humans are designed for potentially Olympic-level athletic performances, whereas robots don't need to be built with this in mind.
    A third basis for this statement is that I think robots could initially be designed to remember in sketchier detail than humans.
    All of these constraints could be lifted later, as computer speeds and storage capacities increase. For example, an artificial intelligence could at first be given only enough storage capacity to record a few months' worth of learning. Then, a few months later, another few months' worth of data capacity could be added, until cost per byte becomes low enough to outrun the calendar.

Robots Could Be Linked Wirelessly to Large Central Computers
    This would be a way to muster the computing power necessary for AI without requiring that it be squeezed onto a portable platform. We already have supercomputers that can begin to approach human-class processing speeds. By 2005, supercomputers should cross the 100-teraflops threshold. Of course, to pass beyond the experimental, AI processing will have to be low-cost.

Supporting Hardware
    It doesn't do any good for the processor to run faster and faster unless the computer's memory can  run fast enough to feed data to the processor(s), and to remove the results. This takes the form of RAM, disk, and bandwidth.

Random Access Memory (RAM)
    Random access memory prices have dropped recently because of sagging computer sales, but they'll probably firm up this year.
    Older, slower random access memory can be purchased as cheaply as $100 a gigabyte, but faster RAM runs $250 to $350 a gigabyte. This is approximately what Moore's Law projections would predict for 2003.
    By 2005, RAM should cost about $100 a gigabyte, or $100,000 a terabyte.
    For full-fledged human level intelligence, we might need a terabyte or more of RAM.
    Table II below shows projected RAM prices as a function of time.

Table II - DRAM Costs per Terabyte, as a Function of Time

Year       Moore's Law              SIA Roadmap              Clearance     18-Month Halving
           Size, nm    Cost, $/TB   Size, nm    Cost, $/TB   Cost, $/TB    Cost, $/TB
12/1999    250         $1,000,000   250         $1,000,000   $500,000?     $1,000,000
12/2002    130         $250,000     130         $250,000     $100,000      $250,000
12/2005    65          $100,000     90          $160,000     $40,000       $62,500
12/2009    32          $25,000      65          $100,000     $10,000       $10,000
12/2013    16          $6,250       45          $50,000      $2,500        $1,500
12/2017    8           $1,600       22          $12,500      $625          $240
12/2021    4           $400         16          $6,250       $160          $37.50
12/2025    2           $100         8           $1,600       $40           $6
12/2029    1           $25          5.6         $800         $10           $1
12/2033    5 A.        $6           2.8         $200         $2.50         $0.15
12/2037    2.5 A.      $1.50        2           $100         $0.63         $0.025

    Column 2 lists circuit-feature sizes assuming that circuit densities continue to double every two years, in accordance with Moore's Law. (This is my best bet for what will happen through 2009.)
    Column 3 shows the projected costs associated with this trend. 
    Column 4 lists circuit feature sizes as listed in the Semiconductor Industry Association's 15-year technology roadmap.
    Column 5 presents the corresponding DRAM prices.
    Column 6 sets forth the prices of somewhat-slower, obsolescent DRAM selling at clearance sale prices.
    Column 7 gives the prices of DRAM if it were to continue to fall at its historic  halving-every-18-months rate (Moore's Law for DRAM).
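    Columns 3, 5, and 7 are all instances of the same halving rule, differing only in the halving period. A sketch, using Column 7's parameters (a baseline of $250,000 per terabyte at the end of 2002, halving every 18 months):

```python
def cost_after(base_cost, years_elapsed, halving_period_years):
    """Price after repeated halvings every `halving_period_years` years."""
    return base_cost / 2 ** (years_elapsed / halving_period_years)

# Column 7: DRAM at $250,000/TB in 12/2002, halving every 18 months.
print(cost_after(250_000, 3, 1.5))  # 12/2005: $62,500
print(cost_after(250_000, 7, 1.5))  # 12/2009: ~$9,843, which the table rounds to $10,000
```

    Swapping in a 2-year or 3-year halving period reproduces the Moore's-Law and SIA cost columns in the same way.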

Disk Drives
    Table III below delineates possible costs per petabyte for hard disk storage as a function of time.

Table III - Disk-Storage Costs per Petabyte as a Function of Time


Date       Halves Annually   Halves Every 1.5 Years
12/1999    $9,000,000        $9,000,000
12/2002    $1,112,500        $1,112,500
12/2005    $140,000          $280,000
12/2009    $9,000            ?
12/2013    $563              $9,500
12/2017    $35               $1,500
12/2021    ?                 $235
12/2025    ?                 $37
12/2029    ?                 ?
12/2033    ?                 ?
12/2037    ?                 ?

    Column 2 depicts storage costs per petabyte if prices continue to halve every year, as has happened for the past few years.
    Column 3 projects prices per petabyte if prices begin to halve every 18 months, as was the case with RAM. (Note that there's now no clear end in sight for disk capacities.)

Disk Storage Capacities
    The National Storage Industry Consortium has set a goal of disk drives that can store one terabit per square inch by 2006. That would enable 3-to-4-terabyte, 3.5" disk drives by 2006, in keeping with the doubling-every-year capacity increases that have characterized disk drives for the past decade. NIST has awarded Seagate a $21,000,000, 5-year research contract to develop disk storage techniques that will increase disk storage capacities by a factor of 100 or more, eventually allowing, perhaps, 20 or more terabytes to be stored on a 3.5" disk drive. Impressive as these numbers are, it is perhaps worth mentioning that the human brain, with 10^15 synapses, might store something like 1,000 terabytes of information. Of course, the brain is constructed of unreliable components, and may require redundancy to a degree that isn't necessary in computers. In that case, 200 terabytes might be a more realistic estimate of the brain's storage capacity. (My estimates are very uncertain.) By 2010, 200 terabytes might be achievable with ten 20-terabyte disk drives. And by 2014, that number might shrink to one or two disk drives.
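    The brain-storage arithmetic above can be made explicit. The byte-per-synapse figure and the redundancy factor are both loose assumptions of mine, as the paragraph warns:

```python
synapses = 1e15          # rough synapse count cited above
bytes_per_synapse = 1    # assumption: about one byte of storage per synapse

raw_terabytes = synapses * bytes_per_synapse / 1e12
redundancy_factor = 5    # guessed allowance for the brain's unreliable components
effective_terabytes = raw_terabytes / redundancy_factor
drives_needed_2010 = effective_terabytes / 20  # 20-terabyte drives projected for ~2010

print(raw_terabytes)        # 1000.0 -- the 1,000-terabyte upper estimate
print(effective_terabytes)  # 200.0  -- the 200-terabyte estimate
print(drives_needed_2010)   # 10.0   -- ten 20-terabyte disk drives
```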

Bandwidth
    Bandwidth capacities are something I haven't previously tried to anticipate. The fastest DRAM memories can transfer data to and from the computer at 400 MHz. I expect to see that rise to 533 MHz with DDR II memories before the end of 2003. RAMBUS memory currently operates at 1,066 MHz, but it transfers data in blocks, and there's a little latency loss in setting up transfers.
    Looking ahead, I would suppose that DRAM bandwidths will parallel DRAM capacities, doubling every 2 years.
    Disk bandwidths increase as the square root of the disk capacity.
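    If disk capacity doubles every year and bandwidth grows as the square root of capacity, then disk bandwidth doubles every two years. A quick check against the disk column of Table IV below (the 12/2002 baseline of 0.133 is read off that table; the yearly capacity doubling is my assumption):

```python
import math

def disk_bandwidth(base_bandwidth, years_elapsed, capacity_doubling_years=1.0):
    """Bandwidth grows as sqrt(capacity), with capacity doubling every year."""
    capacity_growth = 2 ** (years_elapsed / capacity_doubling_years)
    return base_bandwidth * math.sqrt(capacity_growth)

print(disk_bandwidth(0.133, 7))   # 12/2009: ~1.50, vs. 1.6 in the table
print(disk_bandwidth(0.133, 11))  # 12/2013: ~6.02, vs. 6.4 in the table
```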
    Table IV, below, projects SDRAM and disk data bandwidths as a function of time. 

Table IV - SDRAM and Disk Bandwidths as a Function of Time

Year       SDRAM Bandwidth   Disk Bandwidth
1999       0.1               0.066
2002       0.4               0.133
2005       1.6               0.4
2009       6.4               1.6
2013       25.6              6.4
2017       100               25.6
2021       400               100
2025       1,600             400
2029       6,400             1,600
2033       25,000            6,400
2037       100,000           25,000

Blue Gene and ASCI Purple
    About two years ago, I reported plans for IBM's Blue Gene, a petaflops computer. A petaflops is 10^15 floating point operations per second, or several hundred thousand times as fast as today's fastest desktop computers. Today, there have been announcements updating the plans for Blue Gene, and for another IBM supercomputer called ASCI Purple (An Incredible Calculator). IBM expects to deliver a 360-teraflops version of Blue Gene to the Lawrence Livermore National Laboratory by 2005 (Supercomputer speed race on again). IBM will also produce a 100-teraflops computer known as ASCI Purple, which is advertised as matching the speed of the human brain. (Evidently, someone is taking Dr. Hans Moravec's figure of 100 teraflops as the speed of operation of the human brain.) And as all of this makes fresh headlines, "The Cell" is presumably ticking away, waiting for its bombshell 2005 debut (IBM, Sony, Toshiba team on processor architecture for broadband).
    In the meantime, Cray has announced plans for a petaflops computer by 2010 (Cray fills need for computer speed).
    It wouldn't seem impossible to find teraflops computing speeds in linked desktop computers by 2010, especially if special-purpose chip sets were available. Matrox had a seven-board, 100-gigops graphics processor on the market several years ago. Reportedly, nVidia has a one-terops graphics card available now.
    Human-brain processing speeds are getting closer.

2009 and 2013: What Can Be Done in the Near Term
    What's of particular interest to me is what happens in 2009 and in 2013. By that time, The Cell's descendants may give us enough processing power to approach or rival human-level intelligence. What then becomes significant is the cost of storage.
    By 2010, a terabyte of slightly slower SDRAM could be had for about $10,000, with a petabyte of disk storage running $9,000. That should easily fall within the budget of a university AI research program. One-tenth of that storage (100 gigabytes of SDRAM and 100 terabytes of disk storage) would cost a total of $1,900, and should lie within the grasp of individual researchers.
    Of course, the processing power to utilize this hardware would add to these total costs. 
    By 2013, a terabyte of slightly slower SDRAM should run about $2,500, with a petabyte of disk capacity setting someone back about $600, putting this capability within the reach of the serious amateur. 
    University programs might come up with 10 terabytes of RAM and 10 petabytes of disk for a total investment of about $30,000. 40 terabytes of RAM could be had for $100,000, which is still not out of reach of university research budgets.
    Such research programs might start small, and grow over a period of years.
    By 2017, low-priced RAM should run $625 a terabyte, with disk storage no more than $600 a petabyte.

    I predict that within ten or fifteen years, everyone is going to carry a PDA/cellphone that can link them to the Internet and, possibly, pick up TV broadcasts. This will be used to check on weather conditions and, possibly, to serve as an ID and credit card. Yes, I know: it exists today. Early adopters have them now. But I think that in 10 or 15 years, things will have settled out, and everyone will be carrying such a unit. 

