The Mega Foundation

Computer Technology Update

March 23, 2007

Where We Started:    
    Progress in computer technology over the past half century beggars belief.
1951: UNIVAC I
    The first commercially available digital computer, the multimillion-dollar UNIVAC I, performed about a thousand floating-point calculations a second and stored about a thousand numbers in a mercury delay line. In 1957, it took up the ground floor of our building at Case, consumed 125 kilowatts of electrical power, and required 5,200 vacuum tubes. Today's $1,000 laptop can perform billions of calculations a second and can store a hundred billion numbers while drawing 15 watts of power.
Where We're Going Next: 
2011: Intel's Teraflops Supercomputer-on-a-Chip
    In 2011, Intel expects to market its Polaris chip, which will execute a trillion floating-point calculations per second and store, perhaps, a trillion numbers. Its experimental version requires only 62 watts of power and only 100 million transistors (fewer than today's Pentium chips).
    If the same kind of advances had been made in the automotive marketplace that have occurred in computers, a Ferrari costing $100,000 in 1951 would cost $1 today and $0.01 in 2011.
2011: Supercomputers
    The 2011 supercomputers that would be equivalent to the UNIVAC I in cost, size, and power requirements would be IBM's Blue Gene/Q and its rivals, which will crank out 3-10 petaflops--3-10 quadrillion floating-point calculations per second! That represents a several-trillion-fold speed increase over the 60-year period, or about a 10-fold speed increase every five years. (If we could project this ahead, Intel would be fielding a 10-teraflops microprocessor chip in 2016, a 100-teraflops chip in 2021, and a 1,000-teraflops (one-petaflops) chip in 2026. But we'll see. It all depends upon when the music stops.)
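    As a sanity check on that arithmetic, here's a minimal Python sketch; the kiloflops UNIVAC figure and the 3-petaflops figure are the ones quoted above:

        # Sanity check: UNIVAC I (1951, ~1,000 flops) vs. a 2011 petaflops machine.
        univac_flops = 1e3          # ~1,000 floating-point calculations per second
        super_2011_flops = 3e15     # 3 petaflops: the low end of the 3-10 range

        years = 2011 - 1951
        speedup = super_2011_flops / univac_flops    # total fold increase
        per_five_years = speedup ** (5 / years)      # equivalent gain per five years

        print(f"{speedup:.0e}-fold over {years} years")              # 3e+12-fold
        print(f"about {per_five_years:.0f}-fold every five years")   # about 11-fold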
Moore's Law: Chip Transistor Densities Double Every Two Years 
    From the middle 1960's to 2000, the number of transistors that could be manufactured on a fingernail-sized chip doubled every 18 months--a phenomenon known as "Moore's Law", named for the co-founder of Intel Corporation, Gordon Moore, who first enunciated it. Beginning in or around 2000, the rate of doubling stretched out to every two years (still a blistering pace), and Intel has promised that this rate of progress will continue until at least 2018 (and most probably, after that date).
    Transistor counts on (highly uniform) production memory chips recently reached the billion-transistor-per-chip level, while the highest transistor count on a microprocessor chip will reach 790 million on the IBM Power6 chip, due out later this year. The leading-edge chips arriving this year are characterized by 45-nanometer circuit features. (By comparison, a living cell is about 10,000-to-20,000 nanometers in size.)
    This relentless progression toward ever-higher transistor counts over the past half-century is an historical watershed that has transformed, and continues to transform, electronics and, by extension, our everyday world. When I began my professional career at Case in 1956, we had to individually test each transistor we bought from Texas Instruments or Philco because there was so much variability in their electrical characteristics. Now we buy them by the billion for less than we paid for a single transistor in 1956. This is a "Golden Age" for digital electronics, just as it's a "Golden Age" for astronomy and for biology. 
    Some day, the Moore's Law march will end, but in the meantime, history is in the making.  ("Around and around she goes, and where she stops, nobody knows!")

Where We Are Today (March, 2007):
    Intel recently introduced its quad-core Core 2 microprocessor running at a 2.66-GHz clock speed. I don't have benchmark speeds for this chip, but I would suppose that it would crank out somewhere between 10 and 20 gigaflops (billion floating-point operations per second) on a clear day.
    IBM, Sony, and Toshiba began a joint development effort in 2001 on the Cell, aimed at developing a one-teraflops (one trillion floating-point operations per second) chip that could be used in general-purpose computing as well as in Sony's PlayStation 3. The Cell consists of one or more cores mounted on a chip. Each core consists of a "Power Processing Element" that controls 8 "Synergistic Processing Elements", and each "Synergistic Processing Element" can perform 8 floating-point operations per clock cycle. Given a clock speed of 4 GHz, this provides 256 gigaflops per core. By mounting four cores on the Cell chip, it's possible for a single Cell chip to reach a teraflops. 
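    To make that arithmetic explicit, here's a small Python sketch of the flops budget; the SPE count, the flops-per-cycle figure, and the 4-GHz clock all come from the paragraph above:

        # Cell flops budget, using the figures quoted above.
        spes_per_core = 8        # Synergistic Processing Elements per core
        flops_per_cycle = 8      # floating-point operations per SPE per cycle
        clock_hz = 4e9           # 4-GHz clock
        cores_per_chip = 4       # four cores mounted on one Cell chip

        gflops_per_core = spes_per_core * flops_per_cycle * clock_hz / 1e9
        print(gflops_per_core)                    # 256 gigaflops per core
        print(gflops_per_core * cores_per_chip)   # 1,024 gigaflops: a teraflops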
    The first silicon appeared in 2005, running at 3.2 GHz and using 90-nanometer design rules, and delivered about 200 gigaflops. In late 2006, a 4-GHz, three-quarter-teraflops version debuted in Sony's PlayStation 3.
    It might be worth mentioning that Sony's PlayStation 3 can perform two trillion floating-point calculations a second counting its graphics hardware, while the Xbox 360 maxes out at one trillion floating-point calculations a second.
    At the moment (February, 2007), DRAM memory is advertised for $80 a gigabyte, and disk storage runs $400 a terabyte, with the first 3.5-inch terabyte disk drives having been introduced in January. Right now, 8-gigabyte flash memory costs $96 at MacWorld. 

What We'll See in the Rest of 2007 and 2008:
    A second-generation Cell, implemented in 2008(?) in 65-nanometer logic, will run at 6 GHz and will deliver proportionately higher performance (1.5 teraflops?).
    How fast will the pending upgrade of these gaming machines be? Presumably, the second-generation Cell processor, with its clock rate bumped from 4 GHz to 6 GHz, will push total system performance to about 3 teraflops.  
    IBM's Power6 chip will become available this summer with a clock speed in excess of 5 GHz (as high as 6 GHz). It will be a dual-core microprocessor, with each of the two cores harboring two integer number-crunchers, two floating-point processors, a decimal floating-point unit, and an AltiVec vector processor, giving it, I would suppose, a top speed in at least the tens-of-gigaflops range. It will be fabricated with 65-nanometer design features with a chip size of 341 mm².


Moore's Law Marches On

    Intel has just reaffirmed its commitment to a Moore's-Law pace of progress, doubling chip transistor counts every two years (Table 1, below). The earlier entries represent Intel's planned capabilities and time frames, while the entries toward the bottom of the table are simply an "if this goes on" extrapolation showing where Moore's Law would lead if it continued on through 2030. Of course, there might well be changes in strategy and, perhaps, in technology even before 2014.

Table 1 - Single-Chip Transistor Counts, Assuming that Transistor Densities Double Every Other Year

    Year        Size, nm.   Transistors, Billions*
    2005-2006   65          1
    2007-2008   45          2
    2009-2010   32          4
    2011-2012   22          8
    2013-2014   16          16
    2015-2016   11          32
    2017-2018   8           64
    2019-2020   5.5         128
    2021-2022   4           256
    2023-2024   2.75        512
    2025-2026   2           1,024
    2027-2028   1.4         2,048
    2029-2030   1           4,096

* - for memory chips. Microprocessor chips tend to lag behind. For example, the Intel Penryn chip, scheduled to debut in the 2007-2008 time frame, will boast 0.4 billion transistors, whereas two-billion transistor (highly uniform) dynamic RAM chips may be in production. 
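
    Table 1 is mechanical enough to regenerate. Here's a minimal Python sketch, assuming transistor counts double, and linear feature sizes shrink by a factor of the square root of two, with each two-year process generation:

        # Regenerate Table 1 from the 65-nm, one-billion-transistor starting node.
        size_nm, transistors_billions = 65.0, 1
        for year in range(2005, 2030, 2):
            print(f"{year}-{year + 1}   {size_nm:5.1f} nm   {transistors_billions:,} billion")
            size_nm /= 2 ** 0.5        # halving area shrinks features by sqrt(2)
            transistors_billions *= 2  # Moore's Law doubling every two years

    The computed feature sizes come out a shade above the table's rounded-down values (45.96 nm versus 45, and so on), but the transistor column matches exactly.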


    To sum it up, we have a roadmap through 2016. Beyond 2016 lies terra incognita.
    Although a few select chips with 45-nanometer features will be manufactured this year (mostly by Intel, Texas Instruments and Advanced Micro Devices), volume production won't begin until 2008.
    We'll be able to pack four times as many transistors on a 22-nanometer chip in 2011 as we can on the 45-nanometer chips that will enter pilot production in 2007. By 2015, we'll boost transistor densities by another factor of four, leading to an overall transistor count that is 16 times what it will be in 2007--implying 20-25 billion transistors on a chip.
     Hewlett-Packard has just demonstrated a new technique for making memory chips with 100 times the density of today's chips, affording 64-to-128-gigabit memory chips when it becomes necessary to produce them. Such chips should enable a terabyte of RAM for $250, debuting in your computer in, or a little after, 2020.

Intel's Tera-Scale Initiative

    Computer makers are making a tectonic shift in their architectural strategies, from the single-processor approach that they have championed for the past 35 years to multicore, parallel-processing chips. With the product of speed and transistor count becoming more difficult to enhance (having topped out at a level of about 3.8 GHz and 200,000,000 transistors on a chip), multiprocessor chips are the newly favored way to continue to increase computer throughput... a step that has already been taken by Intel's rivals, IBM and Advanced Micro Devices. Intel has mounted a Tera-Scale Computing initiative, employing 80 processor cores, coupled with more specialized cores for performing special functions, and arriving by 2011 as a teraflops chip. This chip will be able to perform about one trillion floating-point calculations per second... something that, in 1996, required 10,000 Intel Pentium Pro processors, 500 kilowatts of electricity, and more than 2,000 square feet of floor space (the ASCI Red supercomputer delivered to the Sandia National Laboratories in 1996). By contrast, Intel's new teraflops (trillion floating-point operations per second) chip, code-named Polaris, dissipates 62 watts of power and can fit on your fingernail.
    This would imply a speed of about 12.5 gigaflops per core. Each core consists of two independent 32-bit floating-point processors with single-cycle instruction execution. The chip sports 100,000,000 transistors, compared to 291,000,000 transistors on the Intel Core 2 Duo chip, and requires 2 square centimeters of silicon, compared to 1.43 cm² for the Core 2 Duo chip. It was built using 65-nanometer lithography, but will presumably be implemented for production in 32-nanometer silicon. It isn't x86-compatible. It uses a 3.2-GHz clock, so each processor performs about two floating-point operations per clock cycle.
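    Working backward from the teraflops total, here's a quick Python check of the per-core numbers; the 80-core count, the two floating-point units per core, and the 3.2-GHz clock are all from the paragraph above:

        # Polaris per-core arithmetic, using the figures quoted above.
        total_flops = 1e12       # one teraflops for the whole chip
        cores = 80
        fp_units_per_core = 2    # two independent 32-bit floating-point units
        clock_hz = 3.2e9         # 3.2-GHz clock

        flops_per_core = total_flops / cores
        print(flops_per_core / 1e9)    # 12.5 gigaflops per core
        print(flops_per_core / (fp_units_per_core * clock_hz))
        # ~1.95: about two floating-point operations per processor per cycle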
    By cranking the clock speed up to 5.6 GHz, the chip exceeded 1.8 teraflops, at the expense of 265 watts of power input (requiring liquid cooling).
    Within a few more years (2012-2014), Intel could be packing billions of transistors on a chip, and could support a sizable number of independent processors. (Is this a response to IBM's, Sony's, and Toshiba's development of the Cell? Of course, the original Cell chip can perform about a quarter of a trillion floating-point computations per second, so the Intel chip is four times faster than a Cell chip.) 
    In any case, teraflops microprocessors are on their way.
    Remember, we need something like 100 teraflops to support human-like levels of artificial intelligence, but all sorts of other applications exist for such computing power, including holographic displays and virtual-reality computations.
    There are many, many applications for which parallel processing is a fully effective way to go. "Possible uses include artificial intelligence, instant video communications, photo-realistic games and real-time speech recognition," said the firm. However, for overall computing, including situations like numerical integration, in which the next number depends upon the number that preceded it, or in which the computer has to make a decision based upon the results of its current computations, the computer can't exploit parallel processing. Generally speaking, computing speeds for general-purpose computations go something like the logarithm of the number of parallel processors.
    This, I think, could be why Intel has waited so long to embrace parallel processing.
    At the same time, the only way to materially increase the speeds of processor cores is to boost their clock speeds. Intel's top clock rate in its Pentium 4 line of processors is, as mentioned above, 3.8 GHz. IBM has just announced that it will introduce a 5-gigahertz Power6 this year, and will up the clock rate of its Cell processor to 6 GHz. This is a hopeful step, pointing toward a renewal of the "clock wars". Still, clock rates in high-density microprocessor chips may rise painfully slowly. 

The Speed-Transistor-Count Product
    I have speculated that thermal noise might set a lower limit upon the minimum amount of energy required to alter a transistor's state. 
    The thermal noise level at room temperature is about 1/40th of an electron-volt, or about 1.6/40 X 10^-19 joules = 4 X 10^-21 joules. If the energy required to switch a transistor from an off state to an on state were the same as this random-noise energy level of 4 X 10^-21 joules, then the transistors on a chip would be spontaneously switching on and off because of random noise. If it does work this way, we would probably want to use transistors with switching energies that are many times this random-noise energy level. If we set the transistor switching energy equal to 10^-19 joules, it would be 25 times the noise level. In that case, by way of example, a chip with one billion (10^9) transistors and a 10-GHz (10^10 Hz) clock, with all the transistors switching on or off every cycle, would generate 10^9 X 10^10 X 10^-19 joules/second = 1 watt. If we set 100 watts as a maximum level of power dissipation for a microprocessor, then a chip with one hundred billion transistors, all of which are switching ten billion times every second, would generate this 100-watt maximum.
    By contrast, Intel's announced Penryn processor will have 400 million transistors and might operate at 4 gigahertz, giving it a speed-transistor-count product of 0.16 X 10^19. If it used our hypothetical minimum-energy transistors with 10^-19-joule switching thresholds, it would generate about 0.16 watts. In practice, the actual Penryn chip might dissipate 100 watts, meaning that it would be using 100/0.16 = 625 times the minimum switching energy. So right now, chip technology would be something like 625-fold removed from this hypothetical random-noise limit.
    Intel's new teraflops Polaris chip described above operates at 3.2 GHz and has 100 million transistors, so its speed-transistor-count product is 0.032 X 10^19. It dissipates 62 watts, so it's 62/0.032 = 1,937.5-fold removed from the noise-energy limit. It's implemented using 65-nanometer design rules; for production purposes, it might become cheaply manufacturable at 45 nanometers. If we suppose that its future descendants' clock speeds might ultimately reach 20 GHz (leading to a 6.25-teraflops computing speed), then it would be running at 310 times the power dissipation that minimum-energy transistors would generate. In that case, a circuit-feature-size reduction of 17.6-to-1 (call it 16-to-1) might be possible before it hit the thermal-noise-driven minimum-energy limit. This would call for 2.75-nanometer circuit features, and might lead to 31 billion transistors on a chip. The chip ought to be able to generate 6.25 X 310 = 1,937.5 teraflops.
    Of course, this is all very crude scaling. However, the point is that if this thermal-noise limitation is an ultimate barrier to ever-increasing chip speeds, it will be a long time before it rears its ugly head (assuming that 20-GHz clock speeds and 2.75-nanometer design rules are ever feasible). 
    It might be of interest to recapitulate the current value for the speed-transistor-count product in relation to what I think might be a theoretical limit for this number.
    Intel's peak clock speed today is 3.8 GHz, or 3.8 X 10^9 Hz. One billion transistors on a chip would be 10^9 transistors, and if all of those transistors were switching continuously, that would imply a speed-transistor-count product of 3.8 X 10^9 Hz X 10^9 transistors = 3.8 X 10^18 transistor switchings per second. In order to deal with round numbers, I'll assume that Intel's 3.8-GHz Pentium 4 is overclocked to 4 GHz, making this number 4 X 10^18 transitions per second. I've previously speculated that the minimum energy required to switch a transistor can't be less than the random thermal noise energy at room temperature, which is about 1/40th of an electron-volt, or about 4 X 10^-21 joules. If I set my transistor switching energy at 4 X 10^-21 joules, then at any given time, half my transistors would spontaneously be in the wrong state because of thermal noise. To avoid random thermal errors, I would want my transistor switching energy to be quite a bit larger than 4 X 10^-21 joules. If, for example, I required a transistor switching energy 25 times larger than 4 X 10^-21 joules, the switching energy would be 100 X 10^-21 joules, or 10^-19 joules. Then 4 X 10^18 transistor switchings a second, multiplied by 10^-19 joules per switch, would imply a minimum power dissipation of 0.4 watts. If I considered 100 watts to be the maximum acceptable power dissipation for a desktop computer, then the maximum transistor count at a 4-GHz clock speed would be 100 watts/0.4 watts times one billion, or 250 billion transistors on a chip. That's still many years away, arriving, according to Table 1, in the 2021-2022 time frame.
    If, between now and 2021, chip makers were to up their chips' clock speeds to 10 gigahertz, then the maximum transistor count before thermal noise became a possible limiting factor would be 100 billion transistors on a chip--a number that might be expected late in 2018.
    This assumes that my speculation is correct about thermal noise setting this kind of upper bound upon the speed-transistor-count product, and that all the transistors on the chip are switching continuously... assumptions that may not be true.
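    Since the same back-of-the-envelope calculation appears twice above, here it is in one place as a minimal Python sketch; the 25-fold noise margin and the 100-watt budget are the assumptions stated above:

        # Thermal-noise floor on the speed-transistor-count product.
        kT = 4e-21               # room-temperature noise energy, joules (~1/40 eV)
        margin = 25              # assumed safety factor over the noise floor
        e_switch = margin * kT   # 1e-19 joules per transistor switching event

        def noise_floor_watts(transistors, clock_hz):
            """Minimum dissipation if every transistor switches once per cycle."""
            return transistors * clock_hz * e_switch

        print(noise_floor_watts(4e8, 4e9))          # Penryn-like chip: 0.16 W
        print(100 / noise_floor_watts(4e8, 4e9))    # a 100-W chip is 625x the floor

        budget_watts = 100.0
        print(budget_watts / (4e9 * e_switch) / 1e9)   # 250 billion transistors
                                                       # maximum at 4 GHz and 100 W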

IBM's 500-GHz Transistor
    IBM has just announced a transistor that runs at 500 gigahertz when cryogenically cooled, and at about 300 gigahertz at room temperature. But it's important to note that what counts in microprocessor speeds is the speed-transistor-count product. Power dissipation is what limits microprocessor clock rates. As transistor size shrinks, so does its power dissipation. We can either mount more transistors on a chip or we can raise our clock speeds, but we can't do both merely by shrinking transistor dimensions.
    Clock speed improvements may eventually occur, but, perhaps, not in the immediate future.

RAM Memory Sizes and Costs
    Table 1 is also important in that the right-hand column may be taken as a proxy for the memory complement (adding about $100 to the price) that will be standard on new computers. We are being told that Microsoft's new Vista version of Windows, released early this year, is configured for one gigabyte of memory. Any new computer anyone buys now should sport at least a gigabyte of RAM.
    Table 1 shows that by 2010, new computers will probably come with 4 gigabytes of RAM for $100, and by 2016, that will have risen to 32 gigabytes for $100.
    One terabyte of RAM for $100 wouldn't be expected before 2026, assuming that it arrives at all.
    Table 2, below, is copied from my 1991 computer technology forecast paper, together with a paragraph describing how much each level of RAM storage would hold.
    The left-hand number in the "MBYTES IN A PC" column might represent the amount of DRAM you could buy for $100.
    The parenthesized megahertz figures in Table 2 show RAM speeds converted to synchronous-RAM clock rates. Synchronous RAM reached a 133-MHz clock rate in 2000, a 266-MHz (Double Data Rate) level in 2001, and a 533-MHz (Double Data Rate 2) speed in 2005, so we're running well ahead of the numbers that I felt were required back in 1991.
    Starting in 2007, DRAM price/performance may fall a bit behind my 1991 forecast (reproduced below) because of the slowing of Moore's Law from a doubling of transistor densities every 18 months to a doubling every two years. For example, the new forecast for one-terabyte memory chips becomes 2025-2026 (see Table 1 above) rather than the 2019-2022 time period cited in Table 2 below, although whether or not computer technology can ever deliver a one-terabit chip is still unknown.

"Table 2 provides a more detailed DRAM forecast, including speed requirements."
YEAR   Mb/CHIP   MBYTES IN A PC   REQUIRED SPEED
1989   4         0.256-1          80 nsec. (12.5 MHz)
1992   16        1-4              60 nsec. (16.7 MHz)
1995   64        4-16             40 nsec. (25 MHz)
1998   256       16-64            25 nsec. (40 MHz)

YEAR   Gb/CHIP   GBYTES IN A PC   REQUIRED SPEED
2001   1         0.064-0.256      10 nsec. (100 MHz)
2004   4         0.256-1          8 nsec. (125 MHz)
2007   16        1-4              6 nsec. (167 MHz)
2010   64        4-16             4 nsec. (250 MHz)
2013   256       16-64            2 nsec. (500 MHz)
2016   1,024     64-256           1 nsec. (1,000 MHz)
2019   4,096     256-1,024        0.8 nsec. (1,250 MHz)

YEAR   Tb/CHIP   TERABYTES IN A PC   REQUIRED SPEED
2022   16        1-4                 0.5 nsec. (2 GHz)
2025   64        4-16                0.3 nsec. (3.33 GHz)
2028   256       16-64               0.2 nsec. (5 GHz)
2031   1,024     64-256              0.1 nsec. (10 GHz)

    "To give an idea what this kind of storage would mean, a typical PC's RAM in the year 2000 will be able to store several minutes of compressed video, several thousand compressed color images, or 100,000 pages of text. (Their hard drives should be able to store 20 times this much data.) Nine years from now, the Random House encyclopedia, an unabridged dictionary, and the full Roget's Thesaurus may be a part of everyone's standard PC library. By the year 2010, your PC RAM, and the non-volatile memory that supports it, should be able to store, perhaps, two HDTV 2-hour movies, or several hundred thousand books, including illustrations. By 2020, these numbers will rise to two hundred hours of recorded video, or every document that has ever been printed. By 2030, your PC's RAM, plus hard drive (or equivalent), should be able to store aerial photography of the world, or about a 40-mile-by-50-mile "virtual world" recorded at 600-dpi accuracy."

    Of course, what really counts is disk capacity, which has typically run at least 100 times the storage capacity of DRAM. By 2010, your 3-terabyte drive should be able to store, perhaps, 100 two-hour HDTV movies, or 1,000,000 books.
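    Those implied sizes are easy to check. Here's a tiny Python sketch, where the roughly 30 gigabytes per two-hour HDTV movie and 3 megabytes per book are my assumed figures, not numbers from the 1991 paper:

        # Rough capacity check for a 2010-era 3-terabyte drive.
        drive_bytes = 3e12
        hdtv_movie_bytes = 30e9   # assumed ~30 GB per two-hour HDTV movie
        book_bytes = 3e6          # assumed ~3 MB per illustrated book

        print(drive_bytes / hdtv_movie_bytes)   # 100 movies
        print(drive_bytes / book_bytes)         # 1,000,000 books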

Flash Memory
    This is the only off-the-beaten-track concept that I discussed in my 1991 forecast that has survived--and thrived--15 years later. In 1991, I projected 40-megabyte flash cards in 1994 for $40--$1,000 a gigabyte. Now, MacMall has an 8-gigabyte flash drive for $85... $10.63 a gigabyte! (I just bought a 4-gigabyte jump drive for $60; it's $46 at MacMall.)
    Flash drives would seem to have a terrific future. They're very light and small, and they consume almost no power. They're the practical choice for cell phones, personal digital assistants, and music players.
    I use them in lieu of Iomega cartridge drives and floppy drives, and at 8 gigabytes, I could just about use them as external hard drives. Eight gigabytes would store two 4-gigabyte movies, or all of my data. I store my files on flash drives that I can switch from desktop to laptop and back again, and that let me take my files with me when I travel. Also, my files are secure even if something happens to my hard drive. (I periodically back up my flash drive contents on my hard drive.)
    Of equal value are small flash drives that sell for $10 after rebates. These could be used to store archival data that would otherwise be saved on CDs or DVDs.
    Where will they go from here? http://news.com.com/Bye-bye+hard+drive,+hello+flash/2100-1006_3-6005849.html 
    Table 3 shows a low-confidence forecast of flash drive capacities. It's low-confidence because flash memory prices have been dropping faster than Moore's Law would predict. That can't continue very long, because flash memories rely upon the same technology as dynamic RAM (DRAM) memories, and must soon follow the same capacity-doubling-every-two-years curve that characterizes DRAM memory.

Table 3 - Flash Drive Capacity Forecast

Year        Size, nm.   Transistors, Billions   Capacity, Gigabytes
2006        65          2                       4
2007        45          8                       8
2008        45          16                      16
2009        32          32                      32
2010        32          64                      64
2011-2012   22          128                     128
2013-2014   16          256                     256
2015-2016   11          512                     512
2017-2018   8           1,024                   1,024
2019-2020   5.5         2,048                   2,048
2021-2022   4           4,096                   4,096
2023-2024   2.75        8,192                   8,192
2025-2026   2           16,384                  16,384
2027-2028   1.4         32,768                  32,768
2029-2030   1           65,536                  65,536

Hard Drive Storage
    Disk storage capacities have followed a Moore's Law of their own.
    In 1966, the Burroughs 5500 had a state-of-the-art, closet-sized 15-megabyte hard drive that added hundreds of thousands of dollars to the price of the B5500. The IBM 1130 came with a large-pizza-sized, 1-megabyte removable disk drive.
    In 1982, IBM introduced the PC-XT with a 10-megabyte hard drive.
    By 1987, my Macintosh II had a (very expensive) 80-megabyte hard drive.
    In 1991, IBM pointed out that the theoretical limit for disk storage capacity was around 10 gigabits (10^10 bits) per square centimeter. At that point, magnetic domains would be 100 nanometers on a side, and magnetic flux leakage would begin magnetizing neighboring domains. That led me, in 1991, to project a theoretical limit of about 500 gigabytes on a 3.5" hard drive.
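    That 500-gigabyte figure follows from the areal density plus some plausible drive geometry. Here's a Python sketch; the platter radii and the three-platter, two-sided layout are my assumptions, not numbers from the 1991 projection:

        import math

        # IBM's 1991 limit: ~10 gigabits per square centimeter of magnetic media.
        bits_per_cm2 = 1e10
        outer_r_cm, hub_r_cm = 4.75, 1.5   # assumed usable radii of a 3.5" platter
        platters, sides = 3, 2             # assumed drive geometry

        area_per_side = math.pi * (outer_r_cm**2 - hub_r_cm**2)   # ~64 cm^2 per side
        drive_bytes = bits_per_cm2 * area_per_side * sides * platters / 8
        print(drive_bytes / 1e9)   # ~479 GB: close to the projected 500-GB limit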
    In 1992, I paid $800 for a 200-megabyte external hard drive.
    Today I got an ad for a 160-gigabyte external hard drive for $60, with free shipping.
    Hard drives did reach a domain-size limit of 100 nanometers on a side, just about where IBM said they would stall out. Now the industry is switching from longitudinal recording, for which IBM's theoretical limit held true, to perpendicular recording, for which the new theoretical limit is about 150 gigabits per square centimeter.
    Hitachi has just (January 5, 2007) announced the first terabyte (1,000-gigabyte) 3.5" hard drive. Ten of these would store the entire printed text in the Library of Congress.
    The new upper limit on hard drive capacities predicted by hard disk manufacturers, using perpendicular and patterned recording, is something like 1,000 terabytes (one petabyte). 
    What will we do for an encore? I don't know, but one of these years, we'll find out.
    Incidentally, it seems to me that disk technology has preempted exotic competing technologies such as laser-holographic memories, which were advertised as holding the potential for one-terabyte drives.
    Ha!
    Here's the disk capacity forecast I made in 1991 (Table 4, below). We're already (January, 2007) at the 1,000-gigabyte capacity level (which I wouldn't have predicted before 2015), and we'll probably hit 8 terabytes by 2012-2014. Petabyte storage will probably arrive sometime in the 2020-2030 time frame, driven by consumer demand and Darwinian competition in the marketplace. (A petabyte is what I've supposed for the capacity of the human brain, assuming that a synapse stores one byte, and that there are a thousand trillion--10^15--synapses in the human brain.) Beyond that, who knows?

TABLE 4 - DISK CAPACITIES REQUIRED TO BACK UP RAM, 1990-2030

YEAR   REQUIRED DISK SIZE   REMARKS
1990   8 megabytes          1,200-megabyte disks are available. No problem now.
2000   1 gigabyte           20-40 GB disks, with 125 MB/sec. xfer rates. Still OK.
2010   64 gigabytes         500 GB disks, 500 MB/sec. xfer rate. About eight times the size of maximum RAM, and pushing IBM's flux-leakage limit for magnetic media.
2020   8 terabytes          Barely possible using foreseeable technology.
2030   1 petabyte           Not possible using foreseeable technology.

Previous Computer Technology Predictions: A Stroll Down Memory Lane
    Back in the 1980's, chip miniaturization was predicted to "hit a brick wall" when circuit features were shrunk to about one micron, or roughly the wavelength of light (which varies from 0.4 microns to 0.8 microns). A great deal of research was expended investigating far-ultraviolet, X-ray, and electron-beam techniques for the production of silicon chips with ever-smaller circuit dimensions. However, these techniques all had problems when it came to cheaply mass-producing semiconductor chips. In the meantime, the Japanese found a way to manipulate light sources so that circuits could be produced with features below the normal diffraction limits, and it became possible to extend existing manufacturing processes without resorting to more exotic ultraviolet, X-ray, or electron-beam photolithography systems. 
    In 1991, IBM pointed out that the minimum size for circuit features would be about 0.2 µm (200 nanometers), and that we could expect to hit that limit around 2000. That was as low as optical lithography could go in chip manufacture, not to mention other fundamental problems.
    In 1997, Scientific American ran an article on the future of computer chips and of Moore's Law. The article concluded that because of fundamental quantum mechanical limits, we'd gone about as far as we could go, and 0.2 microns would be a brick wall.
    In 1999, respected Intel researcher Paul Packan published an article in Science in which he observed that we'd just about hit that brick wall. In addition to all the other problems, there was the fundamental deterrent of quantum mechanical leakage through gate structures, and of too few dopant atoms in a transistor for it to function as a transistor. As if that weren't enough, the father of Moore's Law, Gordon Moore himself, added "Moore's Lament" as a codicil to his famous law. Moore's Lament was that the cost of a semiconductor fab was becoming so high that pretty soon, it would match the Gross Domestic Product.
    I watched in fascination as 2000, with its "brick wall", rolled around. There seemed to be no slowing-down in the works. What was going to happen?
    Now we know. The semiconductor industry smashed through that 200-nanometer "brick wall" as though it were tissue paper. Today's cutting-edge chips are manufactured with 65-nanometer features, and Intel is talking about 11-nanometer features by 2015-2016 (36-folding today's transistor counts).

Exotic Computer Technologies That Never Went Anywhere But Were Great For Funding Your Pet Projects
Laser-Holographic Storage
    This certainly has to come at the top of the list. It sounded awesome forty years ago, and it was great for funding anyone who wanted to work on lasers or holography. It probably still works on the new kid on the block at DARPA or NASA Headquarters. My first exposure to it came in 1956, when there was discussion of perovskite structures that might store a bit per lattice site in transparent crystals. It came up again in 1967, when I was invited to a display in the next building of a laser-holographic computer memory system that could store 100 petabytes in a cubic centimeter of potassium iodide. A couple of hours before the demonstration was to take place, a secretary called me and told me that the demonstration had been postponed, and that she would call me when it was rescheduled. That was 39 years ago, and I'm still waiting for her phone call. Since then, talk about laser-holographic systems that could some day store a terabyte on an optical disk has re-emerged. Some day, it will probably happen in the form of a terabyte DVD, but I'm no longer holding my breath.
Optical Computing
    Back in the mid-seventies, a guy in our Data Systems Lab at the Marshall Space Flight Center had an optical computing lab. It had the usual optical bench, with light sources, lenses, and optical filters. It had the potential, supposedly, to thousand-fold computing speeds. The guy who ran the lab took a medical retirement for a bad back, which improved magnificently once he retired. His lab left with him.
    My next exposure to optical computers came in the latter 80's, when I visited Oak Ridge National Laboratory and was shown the same kind of optical bench setup, with the light sources, lenses, and filters. They were working on an optical computer that would be a thousand times as fast as current computers.
    In 1991, when I wrote the 40-year computer technology forecast that I've been quoting in this update, I included this paragraph about the ongoing promises for optical computing:
   
"These forecasts don't include the possible impact of optoelectronic computing techniques that could purportedly provide 10- to 1,000-fold improvements in computing speeds. Also, both the number of parallel processors on a chip and the assumption of a 64 picosecond (16 Gigahertz) instruction time for each processor may be too conservative for the 2020-2030 time frame."
   
Now and then, I still see references to optical computing that will be 10 to 1,000 times as fast as current computers.
Quantum Computers
    "Quantum"... hey, that's an awe-inspiring term that reaches out and grabs you just like "laser-holographic". That's like, "These results were generated by a computer, so don't even think about questioning them!"
    I first heard about quantum computers back in the latter 80's, when the trade magazines began to talk about "High Electron-Mobility Transistors" (HEMTs) that could operate at frequencies as high as 500 MHz. Circuit dimensions were getting so small back in the late 80's that quantum mechanical effects were beginning to interfere with normal circuit operations(!). (Yeah, that's what they said.) So why not design a computer that would take advantage of these quantum mechanical anomalies and compute with them? There were a few problems, such as the fact that the circuitry had to operate close to absolute zero, and the fact that these circuits couldn't do anything but encrypt and decrypt code, but they were what you funded if you wanted to sound really hip. Think how it would sell in Congress!
    Twenty years later, you still hear now and then about quantum computing. Maybe in twenty more years, someone will show that it can't work, and then it can be given a decent burial.
Charge-Coupled Memories
    These had a run back in the 80's as a flash memory surrogate, but they eventually ended up in video cameras until they were replaced recently by CMOS image sensors.
Magnetic Bubble Memories
    Magnetic bubble memories had a major play back in the 80's. They eventually fell by the wayside, but not before I included them in my 1991 computer technology forecast. Big mistake! They went south shortly after I published in June, 1991.


Printers
    I laid eyes on my first laser printer in the summer of 1985. It was black-and-white and cost $10,000. It was the wonder of its day.
    In 1991, I predicted that black-and-white laser printers might be available for $500 by 2000. I recently saw a color laser printer advertised for $200.
    A black-and-white ink-jet ("bubble-jet") printer with 360 dots-per-inch resolution was available from Canon in 1991 (I believe for $400).
    Now, ink-jet printers are so cheap they're often given away.
    The new frontier for printers may lie in manufacturing. Otherwise, I don't know of any printer improvements I would propose except for lower-priced ink cartridges.

Displays
Following up on My 1991 Display Predictions
    In June, 1991, I wrote that flat-panel LCD displays might be expected to edge out CRTs between 2000 and 2010. That seems to be what's happening. LCD screens as large as 45" are available. Some of the more expensive LCD displays can handle the full 1,920 X 1,080 resolution of 1080p HDTV, which may keep them in the running. Gateway sells such a display for $799.
    Four-megapixel, 2,560 X 1,600-pixel computer displays are the current state of the art, costing about $2,000 at the present time.
    We might see 8-megapixel computer displays by 2010-2012, although that's just a wild guess.
Flat-Screen TV Display Technologies
    Electroluminescent panels never made it out of the starting gate.
    Light-emitting diode displays might become a possibility some day, but so far, they haven't made the cut.
    Color plasma panels are hot, but limitations on their resolution, currently at 1,366 X 768, may doom them in the long run. (A 1,920 X 1,080 plasma TV is available now.)
    Cold cathode displays, particularly those using carbon nanotube emitters, are becoming available, although what advantages they might have are unclear.
    Laser holographic displays are "always a bridesmaid, never a bride". They're not yet in the lineup of HDTV displays at Circuit City.
    Texas Instruments' "novel display idea" of Digital Light Projection (DLP) is very much alive and well. MacMall advertises a 51", 1,920 X 1,080 DLP HDTV for $3,400.
    The near future will probably see more and more 1,920 X 1,080 TV displays at ever-lower prices.
    Several future features that I forecast in 1991 that still haven't happened are 3-D TV, wall-sized screens, virtual reality/telepresence wraparound displays, and high-grade videoconferencing. This is all coming; it's just a question of when. Major efforts are underway by several major TV manufacturers to popularize 3-D displays. Wall-sized 100" screens may develop organically from today's 50"-to-60" displays, or they may await printed displays that can turn a wall into one huge display. Virtual reality/telepresence displays presuppose low-cost, 3-D, wall-sized displays, and the bandwidths to support them. They may not appear until the second decade of the 21st century. Microsoft is entering the videoconferencing fray, with a 1,300 X 1,000 videoconferencing video camera and display. (Videoconferencing has been held hostage to the availability of bandwidths high enough to allow it.)
Predictions Regarding Future Displays
    I'd expect 1080p flat-screen TVs to be common by 2010. Wall-sized displays may be more common for conference rooms by then. There may be some expansion of 3-D displays, although that's so dependent upon marketing that it's hard to foretell. I don't expect printed displays or organic TV displays by then, although organic light-emitting-diode displays are currently being used in cellphones.
    Ultrahigh-Definition TVs are also appearing at electronics shows, and a 32-megapixel display has even made its debut. Because of the investment in two-megapixel HDTV, I would be surprised to see higher resolution TV appear any time soon. HDTV recorders and cameras are only now starting to show up. UHDTV may first appear in Internet-based and local recording and playback.
    By 2020, I would expect to see wall-sized, 3-D displays, including, maybe, printed, flexible displays.

Scanners
    In 1991, I called for 300-dpi flatbed color scanners selling for no more than $500 including a page feeder. I expected them to be part of all-in-one copiers/printers/scanners/fax peripherals. Of course, that has happened. By now, these are available for $100, which is a price I could hardly have imagined back in 1991.

Video Camera Input
    In 1991, I forecast 8-megapixel resolutions surfacing between 2005 and 2010 for scanners, video cameras, LCD projection equipment, fiber-optic cable service, and storage capabilities. That was predicated upon the conversion to HDTV becoming mandatory by 1998. Instead, the new due date is 2008... ten years later. The Bell systems dragged their feet on fiber-to-the-curb, and now, cable companies are beating them to the punch. Most TVs don't yet afford 2-megapixel resolution, much less 8 megapixels. So I now expect it will be in the 2015-2020 time frame before we start to see much 8-megapixel TV.
    In the meantime, scanners are certainly up to such high total pixel-count requirements, and displays are about halfway there, at 4 megapixels.
    I would put 8-megapixel video cameras beyond 2015, and 32-megapixel video cameras beyond 2025.
    Cameras that recognize the objects they're examining are still in the science-fiction category.


Year-by-Year PC Forecasts

2007:  Dual- or quad-core processor, one gigabyte of RAM, a 1,280 X 1,024, 1,400 X 900, 1,680 X 1,050, or 1,920 X 1,200 display, 160-1,500 GB hard drive, wireless keyboard and mouse, DVD reader/writer, a/b/g wireless router.
YEAR   CPU   RAM (GB)   DISK (TB)
2007         1          0.16-1.5
2008         1          0.25-2
2009         2          0.32-2
2010         2          0.5-3
2011         4
2012         4
2013         8
2014         8
2015         16
2016         16
2017         32
2018         32
2019         64
2020         64
2021         128
2022         128
2023         256
2024         256
2025         512
2026         512



2008:  Quad-core processor

2009:  8-core processor, 2 GB of RAM

2010:  16-core processor, 2 GB of RAM

2011:  32-core processor, 4 GB of RAM

2012:  80-core processor, 4 GB of RAM

2013:  80-core processor, 8 GB of RAM

2014:  n-core processor, 8 GB of RAM

2015:  n-core processor, 16 GB of RAM

2016:  n-core processor, 16 GB of RAM

2017:  n-core processor, 32 GB of RAM