1/16/2001:

SAT-Practice Word of the Day: assuage
Intermediate Word of the Day: adjure
Difficult Word of the Day: blepharitis
 
Most of O. Henry's short stories are, to my way of thinking, dated and a bit on the sententious side. But his humorous short stories (of which the best-known is "The Ransom of Red Chief") are the wittiest I've ever seen. One of his books, "The Gentle Grafter", tells of the exploits of those two con men extraordinaire, Jefferson Peters and Andy Tucker. I enjoyed them so much that in undergraduate school, I labored at producing my own Chinese copy of an O. Henry "Gentle Grafter" story, akin to "The Seven-Per-Cent Solution" or Benford's, Bear's, and Brin's sequels to Asimov's "Foundation" series. I wasn't very jolly at the time; I was perfectionistic, and there were no word processors, so everything had to be retyped every time I made a change. I wasn't able to complete it. But in the 1980's, I took up the cudgels again, and this time, I finished the O. Henry knock-off that you're about to read: "The Salting of the Earth".

   
Sixty years ago, as a child, I eagerly anticipated the day when humanity would gain the power to boost its intelligence and to extend its youth-span. These are both (I thought) empowerments that would enable us homo-sapients to advance faster toward our ultimate destiny. It now seems to me that both of these capabilities are coming into hailing range.
Extending the youth span

Smart Drugs
1/8/2000 Update:
My computer failure pushed the "Mind Booster" review toward the back burner. However, the little spalpeen has quit kicking and scratching, and tomorrow, I may be able to make good on my promise.
1/8/2000 Update: Sorry. I spent today's disposable time data-mining Banner News items. I'll try again tomorrow. (In the meantime, I've been taking Huperzine-A for a few days, and so far, I've experienced no ill effects from it, said the window washer as he fell past the 40th floor.)

Words for the Techno-Weenie: As stated in a just-completed computer technology forecast in the latest issue of Ubiquity Magazine, IBM is displaying technology that might conceivably support the present rate of Moore's-Law computer technology improvement through the year 2013*, leading to computers with 400 times the speeds and 400 times the RAM capacities of present-day computers (500,000 MIPS, 100,000 megabytes). Today's Banner News discusses some of the intermediate technological "nodes" or "insertion points" that we are scheduled to pass on our way to the computers of 2013. What gives this march of progress suspense and immediacy is the fact that by 2010, circuit features would be about 80 atoms wide; by 2020, they would be only 8 atoms wide; and by 2030, they would be 0.8 atoms wide! (Ideas like stacking chips in three dimensions don't really help much, because our two-dimensional chips give off so much heat that they would have to be aggressively liquid-cooled if we were to try to stack them on top of each other.) Of course, even after we reach whatever limits of miniaturization are realizable, it's always possible to increase the degree of parallelism in the processors themselves, and to employ multiple microprocessors, conceivably mounting them on the same chip. However, microprocessors typically dissipate 20 to 40 watts of power. Eight to sixteen such processors might start to dump too much heat into the computer room.
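For the arithmetically inclined, here's a minimal Python sketch of where those atom counts come from, assuming a silicon atomic spacing of roughly 0.25 nanometers and a ten-fold linear shrink per decade from 0.18-micron features in 2000 (the spacing figure is my round-number assumption, so the counts land near, rather than exactly at, the 80/8/0.8 quoted above):

    # Feature width in silicon-atom diameters under a Moore's-Law
    # shrink of 10X in linear dimensions per decade.
    ATOM_SPACING_NM = 0.25   # assumed silicon atomic spacing, ~0.25 nm
    feature_nm = 180.0       # 0.18-micron features in the year 2000

    for year in (2000, 2010, 2020, 2030):
        atoms_wide = feature_nm / ATOM_SPACING_NM
        print(f"{year}: {feature_nm:10.3f} nm ~ {atoms_wide:7.1f} atoms wide")
        feature_nm /= 10.0   # ten-fold linear shrink per decade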



* - We really only have assurances of continued Moore's-Law progress through 2005.


    An alternative strategy could be to network processors through fiber-optic cables. One would have access to remote computational power on demand. Fiber-optic cable capacities are 1,000-folding every decade, while semiconductors, in accordance with Moore's Law, are 100-folding every decade. In 20 years, communications rates could be 1,000,000 times what they are today, with data swishing back and forth faster than it now sloshes between a microprocessor and its cache memory. (What happens if a super-intelligent, artificial intelligence--an Overmind--arises within, or is released into the network? Whoo-oo-oo-oo-oo! Happy Halloween!)
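As a quick check on that 1,000,000-times figure, here's a minimal sketch of the compounding, assuming the two decade-multipliers hold steady:

    # Compounding the stated growth rates over 20 years:
    # fiber capacity 1,000-folds per decade; chips 100-fold per decade.
    years = 20
    fiber_growth = 1000 ** (years / 10)   # -> 1,000,000X
    chip_growth = 100 ** (years / 10)     # -> 10,000X
    print(f"Fiber capacity: {fiber_growth:,.0f}X in {years} years")
    print(f"Chip capability: {chip_growth:,.0f}X in {years} years")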
   One Banner News article deals with the fact that as the number of atoms in a transistor declines, the point is reached where there aren't enough silicon atoms for the few-parts-per-million trace elements (dopants) that convert the silicon into a semiconductor. Once we have only 250,000 silicon atoms in our transistor, there will only be one trace atom present. And if we get below that threshold.... Carbon nanotube transistors have been suggested as a far-out alternative.
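To see where that one-atom threshold comes from, here's a minimal sketch, assuming a representative doping level of 4 parts per million (my illustrative figure; actual doping levels vary):

    # Average number of dopant atoms in a transistor at 4 ppm doping.
    DOPANTS_PER_MILLION = 4.0
    for silicon_atoms in (8_000_000, 250_000, 25_000):
        dopants = silicon_atoms * DOPANTS_PER_MILLION / 1_000_000
        print(f"{silicon_atoms:>11,} Si atoms -> {dopants:5.2f} dopant atoms on average")

At 250,000 silicon atoms, the average is exactly one dopant atom, and below that threshold, many "transistors" would contain no dopant atoms at all.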
    The theoretically-smallest carbon nanotubes are about 0.4 nanometers (4 Angstroms) in diameter. That's pretty small. That would correspond to the design features we would require in 2025 if we were to stay on a Moore's-Law shrinkage curve. Of course, right now, such a number has no practical significance except to suggest that, perhaps, carbon nanotubes needn't be ruled out as very tiny circuit elements.
    It also seems to me that the days of conventional silicon circuits must surely be numbered. We have shrunk transistor dimensions five orders of magnitude in five decades (from 18 millimeters in 1950 to 0.18 micrometers in 2000). Three more orders of magnitude would take us slightly below atomic dimensions. Conventional silicon depends, for its fortuitously convenient properties, upon bulk-silicon effects. Although other materials might be mounted on silicon substrates, bulk-silicon effects must surely be reaching their miniaturization limits. Other technologies may take us down to the atomic level, but it won't (I think) be done with bulk silicon.
   Another vital factor in these deliberations is that of manufacturing costs.
   It's interesting to compare some of these sizes with the sizes of organic entities. Cells are of the order of 10-to-20 microns in length. Our memory cells are currently of the order of 0.4 microns in length, with 0.18-micron circuit features. We are approaching production of 1,000,000,000-transistor memory chips, using memory cells 0.05 microns across. Small organic molecules are of the order of 0.001 microns or less across, or less than 1/10,000th the length of a cell. The simple molecules that comprise a cell thus stand in relation to the cell roughly the way our individual cells stand in relation to us: each about 1/10,000th the size of the whole. A cell is composed of about 1,000,000,000,000 molecules.
    The human brain harbors 100,000,000,000 neurons. By 2010, if we remain on a Moore's-Law curve, we should be able to place 100,000,000,000 transistors on a chip. One artificial-neuron design uses 7 transistors to emulate a neuron. At that rate, we would need 7 chips to emulate the brain. However, the same artificial-neuron design that uses 7 transistors to emulate a neuron requires 5 transistors to replace a synapse, and there are, perhaps, 10,000 synapses for every neuron (5,000 synaptic pairs). So it would take fifty thousand 100,000,000,000-transistor chips to emulate the brain. On the other hand, computer chips are a billion times faster than neurons, and probably a million times faster than synapses. Also, transistors are relatively reliable devices compared to neurons and synapses. If we're willing to slow our electronic neurons and synapses to 1/50,000th of their top speeds, we might be able to place 50,000 of them together in a small package without generating too much heat to permit such a concentration.
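The chip-count arithmetic above is easy to mechanize; here's a minimal sketch using the figures just quoted:

    # Transistor budget for emulating the brain with the 7-transistor
    # neuron / 5-transistor synapse design mentioned above.
    NEURONS = 100_000_000_000                # 10^11 neurons
    SYNAPSES_PER_NEURON = 10_000
    TRANSISTORS_PER_NEURON = 7
    TRANSISTORS_PER_SYNAPSE = 5
    TRANSISTORS_PER_CHIP = 100_000_000_000   # one hoped-for 2010 chip

    total = NEURONS * (TRANSISTORS_PER_NEURON
                       + SYNAPSES_PER_NEURON * TRANSISTORS_PER_SYNAPSE)
    print(f"Transistors needed: {total:.2e}")                    # ~5 x 10^15
    print(f"Chips needed: {total / TRANSISTORS_PER_CHIP:,.0f}")  # ~50,000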

   The reason this is so important is that rapidly-rising computing power forms one of the cornerstones of our economy. It will also form the cornerstone for robotics and artificial intelligence, so limitations upon its continued improvement are very important in casting humanity's technological horoscope. In evaluating what's happening, it's important to note that not only are costs, sizes, and power requirements dropping exponentially, but computer speed can rise exponentially because of the exponential reductions in size and power requirements. However, exponential growth can't continue through terribly many ten-foldings before hitting some wall. The table below shows some of the characteristics of future chips. The second column, ATOMS, gives the sizes of features--e.g., the widths of transistors--measured in atomic diameters. Column 3, ATOMS/BIT, shows very roughly how many atoms would be available per 1-bit memory cell. This includes conductors to access the memory cells and insulators around the electrically-charged elements of the cells. At 8,000,000 atoms, there would still be enough atoms to dope the silicon with trace elements to render it semi-conducting. Column 4, BITS/CHIP, specifies the number of bits that can be stored on a state-of-the-art RAM chip at each point in time (10^9 bits today). Column 5, SPEED, GHz, is a very simplistic set of extrapolations that assume that smaller dimensions would support the blistering clock speeds tabulated there. (They probably won't.) Column 6, TRANS/CHIP, is a fanciful tabulation of the number of transistors that might be mounted on future microprocessor chips.
YEAR   ATOMS   ATOMS/BIT       BITS/CHIP   SPEED, GHz   TRANS/CHIP
2000   800     8,000,000,000   10^9        1            42,000,000
2010   80      8,000,000       10^11       100          1,000,000,000
2020   8       8,000           10^13       10,000       25,000,000,000
2025   3       320             10^14       100,000      125,000,000,000
2030   1       8               10^15       1,000,000    500,000,000,000

     I don't expect the numbers in this table to be realizable. I merely present them to show what would be required if computer technology were to continue in the same channel in which it has traditionally run over the last 50 years.
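For what it's worth, the table's decade-to-decade pattern follows from three simple scaling rules, as this minimal sketch shows (the 2025 row above is an interpolation, and the tabulated values are rounded):

    # Generating the decade rows: feature widths divide by 10 per
    # decade, so atoms/bit (a three-dimensional cell) divides by
    # ~1,000, while bits/chip multiplies by 100.
    import math

    atoms_wide, atoms_per_bit, bits_per_chip = 800.0, 8e9, 1e9
    for year in range(2000, 2031, 10):
        print(f"{year}: {atoms_wide:6.1f} atoms, "
              f"{atoms_per_bit:14,.0f} atoms/bit, "
              f"10^{math.log10(bits_per_chip):.0f} bits/chip")
        atoms_wide /= 10.0
        atoms_per_bit /= 1000.0
        bits_per_chip *= 100.0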
    Years ago, 300 GHz was the highest frequency at which electronic devices had been made to run. 10,000 GHz goes with wavelengths in the far infrared, corresponding to a color temperature of the order of 100° K. 100,000 GHz is in the middle of the infrared spectrum, corresponding to a wavelength of 3 µm. 1,000,000 GHz is in the near-ultraviolet, representing a wavelength of 3,000 Angstroms.
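Those wavelength figures follow directly from wavelength = c/f; a minimal sketch:

    # Free-space wavelength corresponding to each clock frequency.
    C = 3.0e8   # speed of light, meters/second
    for ghz in (300, 10_000, 100_000, 1_000_000):
        wavelength_nm = C / (ghz * 1e9) * 1e9
        print(f"{ghz:>9,} GHz -> {wavelength_nm:12,.0f} nm")

which gives 1 millimeter at 300 GHz, 30 µm at 10,000 GHz, 3 µm at 100,000 GHz, and 3,000 Angstroms (300 nm) at 1,000,000 GHz.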
    UNIVAC 1 used a mercury delay-line memory that stored 1,024 36-bit numbers that passed through the computer about 1,000 times a second. Could one future possibility be a fiber-optic delay line that would transmit 10^12 to 10^15 bits through the computer 1,000,000,000 to 30,000,000,000 times a second? That would be the modern equivalent of the old mercury delay line. We should soon have digital circuitry that could accommodate those speeds.
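To put the UNIVAC analogy on a common footing, note that every stored bit passes through the machine once per recirculation, so the implied traffic is (bits stored) × (recirculations per second). A minimal sketch, using the low ends of the ranges above:

    # Recirculating-memory traffic: bits stored x passes per second.
    univac_bits = 1024 * 36            # 1,024 36-bit words
    print(f"UNIVAC 1: {univac_bits * 1000:.1e} bit-passes/second")

    fiber_bits = 1e12                  # low end of the range above
    recirculations = 1e9               # low end of the range above
    print(f"Fiber loop: {fiber_bits * recirculations:.1e} bit-passes/second")

Presumably such a torrent would have to be spread across many parallel wavelengths and fibers.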
   Intel is on record as hoping that new approaches to microprocessor technology will come to the fore after 2010. I can imagine other memory concepts, such as optical storage of one kind or another, having both the speed and the capacity to carry on when semiconductor memories reach their elastic limits.

   To get across the mind-expanding changes that would be required to follow a Moore's-Law curve through the year 2030: if we imagine our Year-2000 transistors (which are already too small to be seen by the most powerful optical microscopes) to be one meter across, then we would have to shrink our transistors to one millimeter (1/25th of an inch) across by 2030. One million 2030 transistors would have to fit in the invisibly-small space currently occupied by one transistor. Even to remain on a Moore's-Law curve through 2010, we'll have to shrink our dimensions until we can pack 100 transistors into the space occupied by one transistor today. Most of the discussions about this that I'm seeing say that extreme-ultraviolet lithography should carry us through the rest of the lifetime of silicon semiconductor technology "as we know it".
   One bright spot on this horizon is IBM's announcement that they've found a way to reduce transistor surface widths to 0.01 µm. That would carry us through a Moore's-Law progression to 2013. And, as I've mentioned, there should be at least 5 more years of improvements after that, as prices fall and multiprocessor systems become cheaper and more popular.
    Another way to look at the challenges that lie ahead is in terms of heat and power dissipation. Suppose that in ten years, we hundred-fold the number of transistors we can mount upon a single chip, by making them 1/10th as wide as they are today (requiring 1/100th the chip real estate). If we assume that the energy required to switch a transistor is proportional to its area, then each transistor will dissipate only 1/100th as much energy when it switches states as today's transistors. But since there are 100 times as many of them on the chip, the total energy dissipated by the chip during each clock cycle will be the same as it is today. But we hope to run them 100 times as fast as today's chips, whizzing along at a 100 GHz clock speed. That means that they would require 100 times as much power as today's chips, and would dissipate 100 times as much heat. In reality, transistors probably shrink in depth as well as in length and width, so, since their volumes may be 1/1,000th what they are today, they might dissipate only 10 times the power. Of course, that's still not permissible. Today's fastest microprocessors have to be aggressively cooled. One of the ploys used to lower heat dissipation is to run the chips at lower voltages. But now, there's another barrier. Random thermal energy at room temperature is about 1/40th of an electron-volt. If a transistor voltage is to act as a barrier to electrons, it must be many times that 1/40th of an electron-volt, or too many electrons will climb over the voltage "hill". Also, the change in energy involved in a transistor energy transition must be many times the 1/40th-of-an-electron-volt background noise. Now consider a chip that has 10,000,000,000 transistors and is running at 10 GHz. That's about where we expect to be in four more years. Suppose we say that the maximum amount of power (and heat) that it can be allowed to dissipate into our computer room is 10 watts. If we equate 10 watts to 10 billion transistors switching 10 billion times a second, the amount of energy in each transistor's transition each time it switches must be 10/10^20 joules, or about 10^-19 joules. But 1/40th of an electron-volt is about 1/25th × 10^-19 joules, or about 1/25th of the maximum switching energy that's compatible with 10 watts of power dissipation if that many transistors are switching 10,000,000,000 times a second. Of course, in a microprocessor, not every transistor is going to switch on every cycle.
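Here's that energy budget as a minimal sketch, using the 10-watt, 10-billion-transistor, 10-GHz figures above:

    # Maximum switching energy compatible with a 10-watt budget if
    # every transistor switched on every clock cycle.
    TRANSISTORS = 1e10        # 10 billion transistors
    CLOCK_HZ = 1e10           # 10 GHz
    POWER_W = 10.0            # allowed dissipation

    max_switch_energy_j = POWER_W / (TRANSISTORS * CLOCK_HZ)
    kT_j = 1.6e-19 / 40.0     # ~1/40 electron-volt, in joules
    print(f"Max energy per switch: {max_switch_energy_j:.1e} J")        # 1e-19 J
    print(f"Thermal energy kT:     {kT_j:.1e} J")                       # 4e-21 J
    print(f"Headroom over kT:      {max_switch_energy_j / kT_j:.0f}X")  # ~25X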
    If we were to ramp this up to the 2010 era and were to envision 100,000,000,000 transistors running at 100 GHz, we'd be faced with a "speed-power" product that's 100 times as great as it will be in 2005, or 10,000 times greater than it is now. To continue in this vein to 2020, the number of state transitions per second would have to 100,000,000-fold. And to make it to the projected 2030 level, this number would have to rise by a factor of 1,000,000,000,000! Of course, in going from a 3.6-megahertz clock speed, with a transistor count of 235,000, in the latter 80's, to a 1,500-megahertz clock speed, with a transistor count of 42,000,000, we've already increased transistor switchings per chip per second by a factor of about 75,000. On the other hand, those switching energies weren't remotely close to 1/40th of an electron-volt.
    It's going to be a challenge.
    We're still far above the speed-power products that would be subject to the Heisenberg Uncertainty Principle.
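A minimal sketch of that check, assuming the 10^-19-joule switching energy and the one 10-GHz clock period from the power-budget discussion above:

    # The uncertainty principle requires (energy) x (time) >~ hbar.
    HBAR = 1.05e-34           # Planck's constant over 2*pi, joule-seconds
    switch_energy_j = 1e-19   # from the power-budget discussion above
    switch_time_s = 1e-10     # one clock period at 10 GHz
    product = switch_energy_j * switch_time_s
    print(f"E x t = {product:.1e} J-s, ~{product / HBAR:.0e} x hbar")

so at these scales we'd still be roughly five orders of magnitude above the quantum limit.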






Mike Hess Comments Anent the "Severely Gifted"
Patrick Wahl's Discussion of the Word "Nonce"

Patrick's Question About Plans for the Website

      AFTER THE FIRE

Upon this hill laid bare to sun
The bee attends the flowers,
And by the toppled chimney stone
The shadow tells the hours.

Domestic as the kitchen clock
That measured out our day
Before the foe bypassed the lock
And found a secret way.

The spider who had claimed the wall
Now claims the sapling pine,
And angles for his evening meal
With silver reel and line.

The mouse finds other store of grain,
The sparrow other eaves,
The cricket shakes his tambourine
Beneath the maple leaves,

As heedless of his brother's shout
As we were heedless then
Before the enemy without
Became the foe within.

        --"Window to the South."
                  Vivian Smallwood