Robots Are Getting Closer - 13
Ahead: Final Conclusions
The final conclusion from all the discussion below is that IBM has promised a one-teraflops microprocessor chip by 2005, and Intel has promised a one-teraops chip by 2010. We already have a 37-teraflops supercomputer, with two one-petaflops supercomputers (from IBM and Cray Research) promised for 2010.
It's a "peak of wild surmise", but I believe that we'll see a low-cost 100 teraflops microcomputer system... fast enough to meet Dr. Hans Moravec's requirements for a human-level artificial intelligence... before the computer revolution stops revolving, and probably, on a Moore's Law time schedule: i. e., by or before 2025.
Updated Computer Technology Forecast Table
The tables below show some computer technology predictions I made in 1997, compare them with what has actually happened between 1997 and 2003, and update the original predictions through 2012 based upon what I now know.
Why Is This Important?
Computer speeds and storage capacities have increased by a factor of about ten billion over the past half-century, and computers are now a cornerstone of our civilization. How much longer can these Moore's Law increases... these doublings of circuit densities every two years... continue? And if we're to reach Hans Moravec's hundred-trillion-calculations-per-second-and-hundred-trillion-bytes-of-storage threshold for human-level artificial intelligence, we still need to improve price/performance levels by a factor of about 10,000.
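For scale, here's a quick back-of-the-envelope sketch of how long a 10,000-fold improvement takes at Moore's Law rates. The 10,000 factor and the two doubling periods are the essay's premises; the 2003 baseline year is an assumption I've added for illustration.

```python
import math

# How long does a 10,000-fold price/performance gain take at Moore's Law
# rates? (The factor and the doubling periods are the essay's premises;
# the 2003 baseline is assumed for illustration.)
gap = 10_000
doublings = math.log2(gap)               # ~13.3 doublings needed

years_at_18_months = doublings * 1.5     # classic 18-month doubling
years_at_24_months = doublings * 2.0     # doubling every two years

print(round(doublings, 1))               # 13.3
print(round(2003 + years_at_18_months))  # 2023
print(round(2003 + years_at_24_months))  # 2030
```

The two rates bracket the "by or before 2025" guess above.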
Will we make it?
We're Probably Good Through 2010
We're probably good through 2010, since Intel thinks it can maintain Moore's Law over that time period. By 2009, circuit features will have gotten as small as 32 nanometers. The Semiconductor Industry Association has carried its predictions down to 22-nanometer circuit features, so that's probably also going to happen... probably in 2011, although the Semiconductor Industry Association anticipates it happening in 2016.
Of course, there's no guarantee that this will take place, but the semiconductor industry is clearly planning for such an eventuality.
Beyond 2012, it's a leap of faith.
But then, it's always been a leap of faith. Twenty-two nanometers would be smaller by a factor of 20 than was thought, in the 1980's, to be theoretically possible.
Some Researchers Think That Moore's Law Can Continue Beyond 2012
Some researchers, including Dr. Hans Moravec and Dr. Carver Mead, think that Moore's Law can continue beyond 2012--not forever, but for awhile.
I also subscribe to that white hope.
By 2021, circuit features would be about 5 nanometers, falling to about 1 nanometer by 2031. One nanometer would be about three or four atoms across.
Moore's Law Has Applied to Microprocessor Speeds As Well As to Circuit Densities
Not only are we cramming more and more transistors onto a chip: we're also running them faster and faster, boosting chip power levels. Computer clock speeds can increase as transistor sizes fall. However, power levels are becoming an ever-thornier problem. The 1.25 GHz IBM POWER4 microprocessor dissipates 125 watts. That's a lot of power for an air-cooled chip.
We may be able to continue increasing chip densities without necessarily increasing computer clock speeds.
The Importance of Clock Speeds
Generally, it takes several clock cycles to execute an instruction. Computer designers have circumvented this by "pipelining"... setting up a kind of "assembly line" for computer instructions. This works well as long as the assembly line doesn't have to be stopped and refilled... when a branch goes the wrong way, for example. But clock speeds are a measure of computer speeds when instructions are executed in sequence. Problems in which the next step to be taken depends upon the results of a previous calculation profit most from increased clock speeds. Increasing clock speeds is the traditional way in which the speeds of "uniprocessors" have been increased.
You might enjoy re-reading my HATS/TABES forecast through (tongue-in-cheek) the year 2030.
Table 1 - Computer Technology Forecast Through 2012 (Reasonably Likely)
Computer Technology Forecast Table: 1997 - 2012

| 1997 Date | 1997 Prediction (GHz) | Updated Date | Actual/Projected Chip (GHz) | Clock, 30-GHz Track (GHz) | RAM, 1997 ($/GB) | RAM, Updated ($/GB) | Disk, 1997 ($/TB) | Disk, Updated ($/TB) | Feature Size (nm) |
|---|---|---|---|---|---|---|---|---|---|
| — | — | 1/97 | Pentium Pro 0.2 | 0.2 | $3,000 | $3,000 | $60,000 | $60,000 | 350 |
| 4/98 | Pentium II 0.233 | 1/98 | Pentium II 0.266 | 0.27 | $2,000 | $2,000 | $36,000 | $36,000 | 250 |
| 4/99 | Deschutes 0.4 | 1/99 | Pentium II 0.45 | 0.45 | $1,250 | $1,250 | $22,500 | $22,500 | 250 |
| 4/00 | Katmai 0.6 | 1/00 | Pentium III 0.9 | 0.9 | $750 | $1,000 | $15,000 | $10,000 | 180 |
| 4/01 | Willamette 0.9 | 1/01 | Pentium 4 1.5 | 1.5 | $500 | $500 | — | — | — |
| 4/02 | Merced II 1.2 | 1/02 | Pentium 4 2.2 | 2.2 | $300 | $116 | $6,000 | $2,100 | 130 |
| 4/03 | Merced II 1.6 | 1/03 | Pentium 4 3 | 3.0 | $188 | $84 | $4,000 | $1,250 | 130 |
| 4/04 | P8 2.2 | 1/04 | ? 4.5 | 4.0 | $125 | $60 | $2,700 | $800 | 90 |
| 4/05 | P8 3 | 1/05 | ? 7 | 5.3 | $80 | $42 | $1,800 | $566 | 90 |
| 4/06 | P8 II 4 | 1/06 | ? 10 | 7.0 | $47 | $30 | $1,200 | $364 | 65 |
| 4/07 | P9? 5 | 1/07 | ? 14 | 9.4 | $32 | $21 | $800 | $242 | 65 |
| 4/08 | P9 II? 6.5 | 1/08 | ? 20 | 12.5 | $20 | $15 | $567 | $162 | 45 |
| 4/09 | P9 II? 8 | 1/09 | ? 28 | 16.6 | $12 | $11 | $400 | $108 | 45 |
| 4/10 | P10? 9.5 | 1/10 | ? 40 | 22.0 | $8 | $8 | $260 | $72 | 32 |
| 4/11 | P10 11 | 1/11 | ? 56 | 30.0 | $5 | $6 | $180 | $48 | 32 |
| 4/12 | P10 13.3 | 1/12 | ? 80 | 40.0 | $3 | $4 | $120 | $32 | 22 |
Table 2 - Computer Technology Forecast From 1/1/2013 Through 12/31/2030 (Problematical)
It seems plausible that there will be major technological changes between 2012 and 2030.
Computer Technology Forecast Table: 2013 - 2030
The numbers in the tables should probably be regarded as food for thought rather than specific forecasts, particularly after 1/1/2013. Beyond 1/1/2013, they become tongue-in-cheek.
The tables show the results of two Moore's-Law types of price decline: one formula in which quantities improve by about 1/3rd a year (so that prices fall to roughly 10/18ths every other year), and another in which prices simply halve (fall to 9/18ths) every other year.
I believe that we will ultimately get at least to 16-gigabit RAM chips, since these would be compatible with 22-nanometer features and memory cells 60 to 90 nanometers on a side. This would permit SDRAM memories costing $1,600 a terabyte or less by 2015. 32-gigabit chips may be possible with 22-nanometer feature dimensions. That might permit RAM at $800 a terabyte by 2017.
We may very well shift to other memory approaches that admit of greater price performance ratios than 16 gigabytes for $25.
I believe that we will ultimately see at least 50 GHz clock speeds, or equivalent speeds in asynchronous chips.
I believe that we will ultimately see multi-teraflops chips or modules.
I believe that we will ultimately see multi-terabyte disk drives.
Beyond these goals, which I expect to see realized between now and 2013, I feel very antsy about the rest of the numbers set forth in the table above. But that's nothing new. I've always been very antsy forecasting more than about 10 years out.
Computer Clock Speeds
Columns 1 and 2: What I Predicted Back in 1997
Column 1 gives the month (April) and year for which I made the predictions back in 1997 that are given in Column 2. They were made for a low-priced computer, and don't represent the state-of-the-art (top-of-the-line) at the end of the year.
As you can see, this 1997 projection predicts that clock speeds would increase 100-fold every 15 years.
Columns 3 and 4: What Actually Happened Between 1997 and 2003, and My Updated Predictions for 2003 to 2012
As Column 3 indicates, I have now shifted from April to January 1, since most semiconductor-technology roadmaps presumably apply to the end of the year (equivalent to the beginning of the next year), and I have listed the cutting-edge processors that have been, or that I think may become, available on December 31 (actually on January 1 of the following year).
Column 4 delineates what actually happened over the years 1997 through 2003, and gives my best guesses concerning what will become available by January 1 of the years 2004 through 2012.
In the 2003 update, clock speeds would increase 180-fold every 15 years.
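The implied annual growth rates are easy to back out; note that the updated 180-fold figure is essentially just the original Moore's Law doubling every two years:

```python
# Implied annual clock-speed growth for the two 15-year multipliers.
annual_1997 = 100 ** (1 / 15)    # 1997 projection: 100-fold in 15 years
annual_2003 = 180 ** (1 / 15)    # 2003 update: 180-fold in 15 years

print(round(annual_1997, 3))  # 1.359  (~36% per year)
print(round(annual_2003, 3))  # 1.414  (~41% per year, i.e. ~2x every 2 years)
print(round(2 ** 7.5))        # 181 -- doubling every 2 years ~ 180-fold/15 yr
```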
Comparing Apples and Oranges
In comparing the numbers in column 2 with the numbers in column 4, it must be remembered that the parameters cited in Column 4 are for cutting-edge processors, whereas those in Column 2 are for more-affordable processors. Even so, it may be seen that the actual and projected microprocessor clock speeds, as seen today, are pulling away from the clock speeds that I predicted in 1997. There's an interesting story behind this, as discussed below.
The 1997 Semiconductor Industry Association's 15-Year Technology Roadmap
In 1997, I had no idea how fast desktop computers would get. Computer microprocessors had come from very slow and simple 8-bit microprocessors back in 1977 to highly sophisticated 32-bit microprocessors in 1997. Clock speeds had risen by a factor of 40 to 80, but effective speeds had climbed far more rapidly, from a few thousand instructions per second in 1977, in the Tandy/Radio Shack TRS-80, to 300 million instructions per second in January, 1997, in the 200 MHz Pentium Pro. So what I used as a guide was the 1997 Semiconductor Industry Association's (SIA's) 15-year technology roadmap. It called for the microprocessors of 2012 to operate at 3 GHz across the chip, with 13-GHz "hot spots" on the chip. Accordingly, I spaced my clock projections on the basis of this roadmap. After all, what more authoritative source could you find than the SIA's 15-year roadmap?
The answer (as I now read the augurs) is that the SIA's roadmaps are wishful thinking on the part of the semiconductor industry, whose members want the largest companies... Intel, IBM, AMD, etc.... to slow down. Part of the reason might be "Moore's Lament"... the multi-billion-dollar costs of semiconductor "fabs". Stretching the timetable would allow additional time to amortize the wallet-busting costs of these facilities. So far, though, large semiconductor companies are still planning a new round of shrinkage every two years, driving the technology to smaller and smaller dimensions. As a result, Intel's latest microprocessor, the 3.06 GHz Pentium 4, already affords 3.06 GHz across the chip, with 6 GHz "hot spots" (compared to the SIA's 1997 roadmap plan for 2012, which called for 3 GHz across the chip, with 13-GHz "hot spots").
The 2002 Semiconductor Industry Association's 15-Year Technology Roadmap vs. Intel and IBM
Now the SIA has drafted a new 15-year roadmap through the year 2016 that calls for a doubling of circuit densities every three years instead of every two years, in accordance with a stretched-out version of Moore's Law. But Intel, IBM, etc., are continuing to follow the original Moore's Law, doubling circuit densities every two years, and Intel has just reiterated its intention to continue on this schedule at least through the 32-nanometer node, which Intel will reach in 2009, and the SIA roadmap has scheduled for 2013.
Clock Speed Slowdown Just Ahead?
Cahners In-Stat's Microprocessor Report states that Intel will produce a Pentium 4 chip with a clock speed above 4 GHz in 2004, but that "Intel processor clock speed ramp will slow in 2004 as power dissipation concerns in desktop PCs begin to create barrier to higher clock frequencies. Intel will use Hyper-Threading to keep processor performance on the Moore’s Law ramp." (Intel Definitely in Desktop PC Processor Driver's Seat Until 2003).
Cahners' Microprocessor Report is issued by a company that lives or dies on the accuracy of its forecasts.
On the other hand, Intel has announced that it will achieve 10 GHz speeds by 2005, leading me to project 4.5 GHz speeds by the end of this year, 7 GHz speeds by the end of next year, 10-GHz speeds by 2005, and ~50 GHz speeds by 2010. IBM/Sony/Toshiba have announced that they will introduce the Cell microprocessor in 2004 with a clock speed of 4 GHz. Intel is working on the kind of on-chip power management schemes that allow it to operate an upcoming 2.2 GHz Banias laptop chip at 20-watt power levels, rising "above 3.3 GHz" in 2004. If Intel can produce a 20-watt, 3.3+ GHz chip in 2004, it might be able to produce a 100-watt, 7 GHz chip for desktops in 2004.
Andrew Grove, the Chairman of Intel's Board, has said that power problems will become dominant around the end of this decade--but not in 2004.
An Intel Forecast for 2010
Meanwhile, at the Intel Developer's Forum last spring, Intel's Chief Technology Officer, Pat Gelsinger, said, "We're on track, by 2010, to build 30-GHz devices of 10 nanometers or less, delivering a tera-instruction of performance."
Column 5 shows a possible progression in speed corresponding to an endpoint of 30 GHz in 2010.
Based upon this presumed endpoint, I have forecast a clock speed of 5.3 GHz by the end of 2004 (1/2005), and a clock speed of 7.0 GHz by the end of 2005.
Column 6 shows my 1997 prediction of RAM prices. This was generated using the traditional Moore's Law halving-every-18-months rate of decline for RAM prices.
Column 7 was created assuming that RAM prices halve every 24 months rather than every 18 months, in keeping with Intel's current rates of doubling circuit densities every two years. I suppose it's possible that RAM prices could fall faster than I've shown here, but prices in the $3,000-to-$4,000-per-terabyte range seem to be in the offing for 2012.
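Starting from the table's roughly $84-per-gigabyte figure for 1/2003 (my reading of Column 7's units), the two halving rates give:

```python
def ram_price(start, months_elapsed, halving_months):
    """Price after `months_elapsed`, halving every `halving_months`."""
    return start * 0.5 ** (months_elapsed / halving_months)

# Nine years (108 months) from 1/2003 to 1/2012:
print(round(ram_price(84, 108, 24), 1))  # 3.7 -> ~$4/GB, i.e. ~$4,000/TB
print(round(ram_price(84, 108, 18), 1))  # 1.3 -> the older, faster rate
```

The 24-month rate reproduces the table's $4 figure and the $3,000-to-$4,000-per-terabyte range cited above.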
Disk prices are difficult to call. For years, disk prices halved every 2.25 years. Then beginning in the mid-nineties, disk prices began to halve every year.
In the meantime, IBM had declared back in 1991 that once disk densities reached about 62.5 gigabits per square inch, we'd have reached the end of the line. At higher recording densities, magnetic flux would leak across magnetic domain boundary walls, and magnetic recording would become spontaneously incoherent.
This limitation was widely cited throughout the latter nineties, and into the year 2000. Now, it seems, magnetic recording densities hundreds of times greater than IBM's limit are feasible! Multi-terabyte disks are in the pipeline.
Computer technology forecasting isn't always easy, flying, as it does, in the face of repeated announcements that Moore's Law is finally about to end.
In the updated price list in Column 9, I've assumed that disk prices will drop by a third every year. However, an alternate guess would place the cost of disk at something like $10 a terabyte by the end of 2012, down by a factor of the order of 125 from the end of 2002.
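A sketch of both disk guesses, starting from the table's roughly $1,250-per-terabyte figure for 1/2003 (my reading of Column 9's units):

```python
# Disk $/TB, dropping by a third per year from ~$1,250 at 1/2003:
price = 1250
for year in range(9):        # nine years: 1/2003 -> 1/2012
    price *= 2 / 3
print(round(price, 1))       # 32.5 -> the table's ~$32

# The alternate guess: down by a factor of ~125 from end-2002:
print(round(1250 / 125))     # 10 -> ~$10/TB
```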
Beyond 2012: The Out Years
Will both clock speeds and densities double every other year through 2012?
We have a reasonable expectation of seeing a continuation of the current level of pell-mell Moore's Law progress through 2012 (1/1/2013). These numbers have been incorporated in Intel's publicly revealed plans out to the 32-nanometer insertion node (through 1/1/2011). The 22-nanometer design rule represents the end point of the SIA's current 15-year technology roadmap (albeit for 2016 in the SIA plan, rather than the 2011 date in Intel's roadmap). Beyond that, "there be dragonnes". It's amazing that bulk silicon has gotten as far as it has. Whether it can continue to function with features down to, and below, 22 nanometers is, I suspect, unknown at this time.
Will transistor densities (but not clock speeds) increase beyond 2012?
Beyond the issue of feature size is the question of clock speeds. Increasing the number of transistors on a chip by 100 while reducing their areas by 100 would allow us to maintain the same clock speed for a given power input. However, it would also allow us to increase the clock speed by a factor of the order of 100, but only at the cost of increasing the power dissipation of the chip by a factor of 10,000. So far, chipmakers have somehow managed to let us have our cake and eat it, too, but that can't go on forever. We might find ourselves in 2030 with 1-nanometer features and ten trillion transistors on a "chip", but with the same clock speeds we have in 2012.
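One way to make that arithmetic concrete is a toy dynamic-power model (my own simplification, not a circuit-level claim): power scales with transistor count × energy per switch × clock, and the energy saving from shrinking a transistor comes from running it at lower voltage... a saving that chasing a higher clock forfeits.

```python
def power_ratio(n_factor, e_factor, f_factor):
    """Toy model: relative power ~ transistors x energy/switch x clock."""
    return n_factor * e_factor * f_factor

# Shrink features so 100x the transistors fit; at reduced voltage the
# energy per switch drops ~100x, so the same clock costs the same power:
print(round(power_ratio(100, 1 / 100, 1)))   # 1 -> same power budget

# Chase a 100x clock instead: the voltage (hence energy) saving is
# forfeited, and power rises by ~100 x 100 = 10,000, the figure above:
print(round(power_ratio(100, 1, 100)))       # 10000
```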
These constraints apply whether we're using silicon chips or whether we're employing carbon nanotubes.
Progress will taper off before it quits
Of course, the first thing that will happen is that rates of progress will taper off. We won't hit a "brick wall". Part of what's so striking about computer progress is that we've already had 10 orders of magnitude improvement in computer speeds and storage capacities, with 12 orders of magnitude in the offing, and all of this without skipping a beat. (The petaflops supercomputers that IBM and Cray are promising for 2010 will be nearly a trillion times faster than UNIVAC I.) It's certainly understandable that progress might slow down as we switch from current approaches to something else. Progress might even slow, as the transition is made, and then speed up again.
We think in terms of what's going to happen in a few years. If we're lucky, we have eons stretching out ahead of us. What will computers be like a century from now? A millennium from now?
I'm pessimistic about increasing both clock speeds and chip densities beyond 2012
Personally, I'm pessimistic about increasing computer clock speeds beyond 2012 while simultaneously increasing chip densities, at least at the pavement-pounding rates that Moore's Law implies. However, there are other ways to speed up computing, and the unveiling of "The Cell" in 2004-2005 may showcase these.
Shrinking feature sizes without increasing clock speeds would still benefit RAM chips
Improvements in feature size would have benefits well beyond increased microprocessor speeds, by permitting greater storage densities and capacities. The 1997 SIA 15-year roadmap included 256-gigabit RAM chips. These would have sufficient capacity, if they could eventually be manufactured as cheaply as earlier RAM chips (falling ultimately to about $25 for 256 gigabytes), to permit multi-terabyte RAM memories. Speed is important for them, also, but they might dissipate much less power than would corresponding microprocessor chips. For example, they might use magnetic or micro-mechanical storage techniques that might not require as much power. (Using a one-centimeter by one-centimeter chip, the memory cells of a 256-gigabit SDRAM would have to be 20 nanometers on a side!)
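The closing parenthetical's cell-size arithmetic checks out (assuming, as it does, a 1 cm × 1 cm die devoted entirely to memory cells):

```python
import math

# 256 gigabits packed onto a 1 cm x 1 cm chip:
chip_area_nm2 = (1e7) ** 2            # 1 cm = 1e7 nm -> 1e14 nm^2 of area
bits = 256e9
cell_side = math.sqrt(chip_area_nm2 / bits)
print(round(cell_side))               # 20 -> ~20 nm on a side
```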
Can we go all the way down to the molecular or atomic level to store and process information?
I don't know enough to even guess at this. One problem would be that of addressing an individual atom or molecule. Another would be that of ensuring that the state of an individual atom or molecule remains stable.
If we could address individual atoms, there are of the order of 10^15 (one quadrillion) atoms per square centimeter. If we could store eight bits per atom, a square centimeter would hold a quadrillion bytes... something like what the synapses of the human brain might encode.
It's hard to imagine going smaller than an atom.
Electronic signals propagate about 1,000,000 times as fast as neural signals. At first blush, it might seem that an artificial intelligence could operate at 1,000,000 times the speed of the human brain, but there are limitations other than neural propagation rates. Surprisingly, as discussed below, we might be hard-pressed to exceed by many orders of magnitude the processing capacities of the human brain within the same volume and 100-watt power limitation. Of course, machine intelligence may be able to improve enormously upon the brain's reaction times, and upon its memory capacity for data, and its computational abilities... functions for which the brain isn't designed.
Can we go three-dimensional?
The answer is yes, but the problem is power dissipation. Random thermal noise would seem to set a lower limit on the differences in energy states that can be used for computational purposes. At room temperature, the average thermal energy per degree of freedom is about 1/25th of an electron-volt. (One electron-volt represents an equilibrium temperature of about 7,736 degrees Kelvin, and 1/25th of that is about 309 degrees Kelvin, or 36 degrees Celsius. One electron-volt is 1.602 × 10^-19 joules of energy.) This means that if the difference in energy levels between one switching state of a transistor and its alternate switching state were reduced to 1/25th of an electron-volt, the transistor would spontaneously flip between binary states, spending half its time in one state and half its time in the other. If we were to require an error probability of no more than e^-25, or roughly 1 in 10 billion, that a transistor be in the wrong state because of random thermal switching, then we might use one electron-volt, or 1.6 × 10^-19 joules, as the switching energy threshold for our transistor. In that case, 6.25 × 10^18 state switches per second would generate one watt of power, and 6.25 × 10^20 switches per second would generate 100 watts of power. (With error-correcting circuitry, we might approach 6.25 × 10^19 state switches per second per watt, a factor-of-ten improvement, although a tradeoff emerges: error-checking redundancy must be increased as the switching threshold is decreased.)
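A few lines verify the bookkeeping (the 1/25-eV figure corresponds to (3/2)kT at 309 K; the 6.25 × 10^18 figure comes from rounding the electron-volt to 1.6 × 10^-19 joules):

```python
K_B = 1.381e-23      # Boltzmann's constant, J/K
EV = 1.602e-19       # joules per electron-volt

# (3/2) kT at 309 K, in electron-volts -- the "1/25th of an eV" above:
thermal_ev = 1.5 * K_B * 309 / EV
print(round(1 / thermal_ev))          # 25

# At a 1-eV switching threshold, state switches per second per watt:
print(f"{1 / EV:.2e}")                # 6.24e+18 (rounded above to 6.25e18)
```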
Note that this is independent of whether this circuitry is barn-sized, or whether it fits on a thumbnail.
To put this in the context of present-day computers, suppose that we are operating at a clock speed of 3.125 GHz. In that case, two billion transistors all switching at that 3.125 GHz speed and running at the thermal limit would generate one watt of power. Right now, we're packing about a billion transistors on a chip using 90-nanometer design rules. My computer is running at a 3.2 GHz clock speed, and dissipating, perhaps, about 50 watts of power. In other words, we're already at about 1% of that thermal speed-power product limit.
If, in the future, we upped our clock frequency to about 10 GHz, and our transistor count to 60 billion, our chip would generate 100 watts of power running at that thermal-noise limit. That would give us a processing speed of the order of, perhaps, 200 times those of today's microprocessors.
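Those figures check out under the 1-eV-per-switch assumption from the thermal-limit discussion:

```python
EV = 1.6e-19   # assumed switching energy: 1 eV per switch, in joules

future = 60e9 * 10e9     # 60 billion transistors at 10 GHz: switches/sec
today = 1e9 * 3.2e9      # ~1 billion transistors at 3.2 GHz

print(round(future * EV))     # 96 -> ~100 watts at the thermal limit
print(round(future / today))  # 188 -> "of the order of 200 times" today
```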
A count of 60 billion transistors per chip would be expected at the 11-nanometer design node, currently scheduled to appear around 2015 if the industry's two-year density-doubling cadence holds.
A power budget of one hundred watts for the main microprocessor might constitute a practical upper power limit for desktop PCs.
This isn't terribly far away from where we are right now.
Could we beat this limit by running at cryogenic temperatures? The answer is "Yes, but... " The problem lies in the power that we would have to expend in order to pump 100 watts up from, say, 30º Kelvin. And anyway, operating at 30º K would only give us an improvement in the speed-power product by a factor of about 10. We'd probably spend something like 900 watts pumping 100 watts from 30º K up to room temperature. We might be better off running at room temperature.
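The 900-watt estimate matches the ideal (Carnot) work needed to pump heat uphill; a minimal sketch:

```python
def carnot_pump_work(q_cold, t_cold, t_hot):
    """Minimum (ideal Carnot) work, in watts, to pump q_cold watts of
    heat from t_cold up to t_hot (temperatures in kelvins)."""
    return q_cold * (t_hot - t_cold) / t_cold

print(carnot_pump_work(100, 30, 300))  # 900.0 -> the figure above
```

A real refrigerator would do worse than this ideal, which only strengthens the point.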
Could we operate in space? Again, it's "Yes, but... " In space, we have to dissipate heat through radiation. Radiative transfer of heat is governed by the Stefan-Boltzmann Law that says that the amount of power radiated per unit area varies as the fourth power of the absolute temperature. Radiating 100 watts at 30º K would require 10,000 times the area required to radiate the same 100 watts at room temperature (300º K).
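And the 10,000× radiator-area penalty follows directly from the Stefan-Boltzmann Law (an ideal black-body radiator is assumed):

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area(power, temp):
    """Black-body radiator area (m^2) needed to shed `power` watts at `temp` K."""
    return power / (SIGMA * temp ** 4)

a_cold = radiator_area(100, 30)    # radiating 100 W at 30 K
a_warm = radiator_area(100, 300)   # radiating 100 W at 300 K
print(round(a_cold / a_warm))      # 10000 -- the fourth-power penalty
```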
What about the brain? Where does it stand with respect to this speed-power limit?
If Ray Kurzweil is correct that the brain runs at 2 × 10^16 (20 quadrillion) operations per second, this would still fall well below the thermal speed-power limit of about 6 × 10^18 switches per second for one watt of power dissipation, and the human brain generates about 100 watts of power.
Could automation cut chip costs?
I think that the answer is yes, and that may be one way forward.
Could a slower development pace cut chip costs?
Again, I think that the answer is yes. Slower development would allow more time to pay for semiconductor fabs, permitting lower prices.
Is parallel processing a white hope for future computing?
For certain types of computation, yes. Parallel processing is analogous to running a factory with a great many employees: It's great as long as you can keep them all busy.
Let's Give Three Cheers for Our Chip Engineers!
I think that the 20th-century semiconductor engineers who brought about the incredible 10,000,000,000-to-1 improvement in computer speeds and capacities deserve more recognition than they've gotten. This has truly been historically unprecedented, and has inaugurated a technological revolution. (Communications still have a long way to go, in terms of implementation.) We pay a great deal of attention to "celebrities"... athletes, actors and actresses, and media figures... and virtually none to the real heroes and heroines of our age: the scientists and engineers who make it all possible.