A Paper Presented at the


Von Braun Civic Center

Huntsville, Alabama

May 15, 1991


Dr. Robert N. Seitz

Huntsville Research Laboratory,

Georgia Tech Research Institute

June 12, 1991


    It should be emphasized that any opinions or conclusions expressed in this paper are my own and do not reflect those of either the Georgia Tech Research Institute or the U. S. Army Missile Command. This paper has been prepared without input from, or review by, either agency. Forecasting the future is a very uncertain business, and only one thing can be said for certain: the future will not evolve exactly as I have forecast it. Perhaps the most useful function a technology forecast serves is to stimulate ideas and discussion concerning the future, and to suggest what may happen, given present trends.

1.0 Introduction
In 1979, I gave a talk at the IEEE Rocket City Seminar called "The Future of Personal Computing" (reproduced as Appendix A) in which I attempted to make specific personal computer predictions for the years 1989 and 2000. How well did I do? Let's find out.
Table 1 presents, in the left-hand column, the state of the art in personal computers in 1979. The middle column provides the forecasts I made back in 1979 for 1989, and the right-hand column summarizes the predictions I made for the year 2000.

YEAR      1979                       1989 (as forecast in 1979)   2000 (as forecast in 1979)
RAM       64,000 bytes               256,000 bytes                16,000,000 bytes
BUBBLE    0                          1,000,000 bytes              64,000,000 bytes
STORAGE   Audio cassette recorder    Diskette                     One gigabyte (optical?)
DISPLAY   32-column black & white    512 X 512 3-bit color        1024 X 1024 color
PRINTER   None                       Dot matrix                   Color. Letter-quality?


Table 2 shows the 1989 forecast, while the 1991 postscript compares Table 2 with what actually happened in 1989. (Obviously, magnetic bubble memories would not have been a good investment.)

TABLE 2 - 1989 VIEWGRAPH (FORECAST) WHICH I MADE IN 1979
Table 3 lists some applications for personal computers that I predicted back in 1979. As is the case with Table 2, Table 3 looks like yesterday's news by now.
    All right. Where do we go from here?

2.0 Technology Forecast
   First, I think that the computer revolution still has a long way to go, and that some major developments lie just ahead, such as the marriage of computer and video technologies as video systems go digital, the impending revolution in home telephone service as telephones go wide-band and digital, the inauguration of low-cost videophone and teleconferencing services, the advent of speech recognition and "voicewriters", and other major developments described below.

2.1 How Accurately Forecasts Can Be Made; Dynamic Random Access Memory Forecasts
Before updating the forecasts for the year 2000 and beyond, it may be in order to discuss whether such forecasts can accurately be made, particularly in such an explosively growing discipline as electronics technology. I think they can. Computer technology has improved in a precisely predictable way for the last twenty-five years. Dynamic random access memory (DRAM) chip densities and number-of-bits/$ have doubled every eighteen months, 100-folding every decade. Computer circuit densities have doubled every two years (Moore's Law), 32-folding every decade. Magnetic densities have doubled every two-and-a-quarter years, 20-folding every decade. Why do these disparate technologies advance at such constant rates? I don't know. One would expect that, with a world full of competing players and with breakthroughs occurring from time to time, progress would be more fitful. However, although efforts have been made by companies and countries to leapfrog over their competition, in the end, the exponential growth curves of chip and magnetic media densities have remained unperturbed. By now, semiconductor makers have institutionalized these constant growth rate expectations in their ten-year corporate plans. For example, 16-megabit (Mb) DRAM chips have recently been introduced by the major semiconductor purveyors (Texas Instruments, NEC, Hitachi, Toshiba, and Samsung, to name a few) for sampling by their favored customers. The planned sampling date for 64-megabit DRAM chips is 1994, right on schedule. Similarly, the planned introduction date for the sampling of 256-Mb chips is early 1997, and the planned debut of 1,000-Mb (one-gigabit) chips is the year 2000. The National Advisory Committee on Semiconductor Technology is proposing that the U. S. try to steal a march on the rest of the world by developing a one-gigabit static RAM (SRAM) chip by the year 2000.
(A one-gigabit static RAM chip represents a significant escalation in technology goals because it would require 4 times as many transistors on a chip as a one-gigabit dynamic RAM chip.) Beyond this level of packing density, further reductions in memory cell size are uncertain because they depend upon an electronics "breakthrough" to sidestep quantum-mechanical "short-circuiting" when circuit feature sizes fall below about 0.2 µ. (The one-gigabit static RAM chip will require 0.12 µ line widths.) However, even if this quantum-tunneling phenomenon proves to be an impenetrable barrier to the further shrinkage of microcircuitry, prices for the 1-gigabit chips should continue to decline through at least 2006 as these chips go into mass production. The price of RAM should drop from about $50,000 a gigabyte today to about $1,000 a gigabyte in the year 2000, and then to about $50/gigabyte in the year 2006, guaranteeing price/performance improvements of at least 1000:1 over the $50/megabyte that is available today. I personally believe that, even if the 0.2 µ barrier proves to be insurmountable, prices will drop below $50/gigabyte by 2010 using such innovations as multi-chip packaging or wafer-scale integration.
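The compounding arithmetic behind these figures is easy to check. The sketch below is my own illustration, not part of any cited industry plan: it converts the doubling periods quoted above into per-decade growth factors and confirms the quoted price ratio.

```python
def fold_per_decade(doubling_years):
    """Growth factor accumulated over ten years for a given doubling period."""
    return 2 ** (10.0 / doubling_years)

# Doubling periods quoted in the text:
print(round(fold_per_decade(1.5)))    # DRAM bits and bits/$: ~100-folding per decade
print(round(fold_per_decade(2.0)))    # circuit density (Moore's Law): 32-folding
print(round(fold_per_decade(2.25)))   # magnetic density: ~20-folding

# Quoted RAM prices: $50,000/GB (today) -> $50/GB (2006)
print(50_000 / 50)                    # the quoted 1000:1 improvement
```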
At the same time, there are promising efforts underway to harness the quantum-tunneling effect in a way that converts a barrier into an opportunity. For example, Caltech is working on a Stark-effect transistor which would permit circuit features as small as 0.01-0.001 µ. Hitachi is working on a 0.1 µ metal oxide semiconductor (MOS) transistor, on a new 0.05 µ device that takes advantage of quantum noise, and also on atomic scale devices. Quantum field effect transistors would permit 0.02 µ design rules which would also be 10 times faster than current devices and would permit 100,000,000,000-bit RAM chips. If one or another of these efforts is successful and progress in computer technology can continue untrammeled through the year 2030, DRAM densities would reach the levels shown in Table 4 below.

YEAR   CHIP CAPACITY       TYPICAL PC RAM      CELL SIZE / NOTES
1990   4 megabits (Mb)     1-4 megabytes       0.8 µ cell size
2000   256 megabits (Mb)   64-256 megabytes    0.3 µ cell size
2010   64 gigabits (Gb)    16-64 gigabytes     Might use multi-layered chips
2020   4 terabits (Tb)     1-4 terabytes       0.005 µ cell size
2030   256 terabits (Tb)   64-256 terabytes    0.1 µ cells throughout 1 cm³

Table 5 provides a more detailed DRAM forecast, including speed requirements.
YEAR   CHIP CAPACITY   TYPICAL PC RAM     ACCESS TIME
1989   4 Mb            0.256-1 MB         80 nsec.
1992   16 Mb           1-4 MB             60 nsec.
1995   64 Mb           4-16 MB            40 nsec.
1998   256 Mb          16-64 MB           25 nsec.
2001   1 Gb            0.064-0.256 GB     10 nsec.
2004   4 Gb            0.256-1 GB         8 nsec.
2007   16 Gb           1-4 GB             6 nsec.
2010   64 Gb           4-16 GB            4 nsec.
2013   256 Gb          16-64 GB           2 nsec.
2016   1,024 Gb        64-256 GB          1 nsec.
2019   4,096 Gb        256-1,024 GB       0.8 nsec.
2022   16 Tb           64 TB              0.5 nsec.
2025   64 Tb           256 TB             0.3 nsec.
2028   256 Tb          1,024 TB           0.2 nsec.
2031   1,024 Tb        4,096 TB           0.1 nsec.

    To give an idea of what this kind of storage would mean: a typical PC's RAM in the year 2000 will be able to store several minutes of compressed video, several thousand compressed color images, or 100,000 pages of text. (Its hard drive should be able to store 20 times this much data.) Nine years from now, the Random House encyclopedia, an unabridged dictionary, and the full Roget's Thesaurus may be part of everyone's standard PC library. By the year 2010, your PC's RAM, and the non-volatile memory that supports it, should be able to store perhaps two 2-hour HDTV movies, or several hundred thousand books, including illustrations. By 2020, these numbers will rise to two hundred hours of recorded video, or every document that has ever been printed. By 2030, your PC's RAM, plus hard drive (or equivalent), should be able to store aerial photography of the world, or about a 40-mile-by-50-mile "virtual world" recorded at 600-dpi accuracy.
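These capacity claims are easy to sanity-check. The sketch below uses a 256-megabyte RAM complement (the upper end of the year-2000 range in Table 5) together with item sizes that are my own rough assumptions, not figures from the text:

```python
# Back-of-the-envelope check of the year-2000 figures, assuming 256 MB of RAM
# and some round per-item sizes of my own.
RAM = 256 * 10**6             # bytes

page = 2_000                  # ~2,000 characters of plain text per page (assumed)
image = 64 * 10**3            # one compressed color image, assumed ~64 KB
video = 0.5 * 10**6           # compressed video, assumed ~0.5 MB per second

print(RAM // page)            # pages of text: on the order of 100,000
print(RAM // image)           # images: several thousand
print(RAM / video / 60)       # minutes of video: several minutes
```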
How will we use all this storage capacity? We'll soak it up and call for more. By 2030, your computer may be a nearly sentient entity, recording, abstracting and correlating everything that its video imaging devices and microphones see and hear. This will require a lot of speed and storage. It will probably be tied into databases during the night, using "knowbots" working in collaboration with intelligent databases to search for information of potential interest to you. For example, your PC might be combing the country for the technical specifications and best prices of items that you might want to buy. Your pocket computer may store everything you see and hear, filing it so that it can be recalled using any one of several "key word" or "key event" relationships. Your computer may become an intelligence amplifier, searching out relationships, running simulations, and perhaps eventually generating hypotheses, like the computer on the Starship Enterprise. Even if human-caliber reasoning is forever beyond the span of computer capabilities, computers can trade speed and brute-force techniques for finesse, as is done with chess-playing automata.

2.2 Flash Memory
Electrically erasable programmable read-only memories (EEPROMs) have been used in computers for two decades. However, they are now being pressed into service as non-volatile backup for dynamic RAM. They currently cost at least as much as DRAM and much more than disk storage. Also, unlike DRAM, their lives are limited to something like 500,000 rewrites. However, there are plans afoot for wafer-scale implementation of these EEPROM "flashcards" to reduce their costs below those of DRAM. Beginning in 1992, they should be available from Japanese semiconductor makers in 16-megabit chips, and they should become cost-competitive with 16 Mb DRAMs in 1993. By 1994, 40-megabyte flash cards are predicted to cost about $75. Their principal application, at least at first, is expected to be in laptop and notepad PCs.
In the event that disk and optical storage devices can't keep pace with the capacity and data rate demands of DRAM over the next 40 years, flash cards may afford another way to provide non-volatile backup for DRAM.

2.3 Microprocessors
In the area of microprocessors, Intel has announced its plans through the year 2000. An 80486 microprocessor with a 50 MHz clock rate running at about 35 million instructions per second (MIPS) will be introduced during 1991. (Fujitsu has announced a 100 MHz, 100 MIPS, 64-bit super-scalar 80486 which incorporates two arithmetic units and one floating point unit, as well as a digital signal processor, although no plans to manufacture it were discussed in this reference.) Intel's 80586 chip is scheduled to make its debut in 1993, and is to be fabricated in BiCMOS (bipolar complementary metal oxide semiconductor) using 0.6 µ design rules. It will contain 4,000,000 transistors and will operate at a 50 MHz clock speed, which will then be upgraded to a 100 MHz clock rate to yield speeds in excess of 100 MIPS, or about 6 times faster than 1990's 80486. In 1995, Intel plans to introduce the 80686, which will incorporate 16,000,000 BiCMOS transistors using 0.4 µ design rules, and will run at 100 MHz. In the year 2000, Intel plans a 100,000,000-transistor chip (the 80886?) which will house 4 arithmetic units (AUs) and 2 floating point units (FPUs). The individual AUs will run at 750 MIPS, using a 250 MHz clock with wide-bus, triple-super-scalar architecture. The chip may include fast digital signal processors for graphics, voice recognition and data compression. Motorola has leaked plans for a 4,000,000,000-operations-per-second (4 Gigops) reduced instruction set computer (RISC) for the year 2000 which uses a "symmetric super-scalar" concept. Meantime, in the supercomputer arena, the next target of opportunity has become the "terops machine"—a computer capable of performing one trillion operations per second. Such a machine is predicted for sometime in the '90s.

    Table 6 offers an extrapolation of computer speeds through the year 2030 assuming that technology improves as it has in the past.

YEAR   SPEED            NOTES
1990   20 MIPS          Assumes the 25 MHz Motorola 68040 chip.
2000   2,000 MIPS       Intel's planned 80886(?) chip.
2010   64,000 MIPS      1 GHz clock, 16 processors @ 4,000 MIPS/processor
2020   2,000,000 MIPS   256 processors @ 8,000 MIPS/processor
2030   64 Terops        4,096 processors @ 16 Gigops/processor (64 psec. instr. time)

    These forecasts don't include the possible impact of optoelectronic computing techniques that could purportedly provide 10- to 1,000-fold improvements in computing speeds. Also, both the number of parallel processors on a chip and the assumption of a 64-picosecond instruction time for each processor may be too conservative for the 2020-2030 time frame.
    What will we do with all this speed? We currently need orders-of-magnitude increases in computer speeds to support intelligent vision, speech recognition, graphics manipulation and artificial intelligence. For example, a full-blown laser holographic display could require about a billion operations per frame. At 24 frames a second, that equates to about 24 billion operations (24 gigops) per second.
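The round numbers in Table 6 and the holography estimate multiply out as claimed. The quick check below is my own and uses only figures quoted above:

```python
# Consistency check on Table 6: processors per chip times per-processor speed
# should reproduce the aggregate figures (all speeds in MIPS).
table6 = [
    (2010, 16, 4_000),       # -> 64,000 MIPS
    (2020, 256, 8_000),      # -> ~2,000,000 MIPS (table rounds)
    (2030, 4_096, 16_000),   # 16 Gigops = 16,000 MIPS -> ~64 Terops
]
for year, processors, mips_each in table6:
    print(year, processors * mips_each, "MIPS")

# A 64-picosecond instruction time likewise implies:
print(1 / 64e-12 / 1e9, "Gigops/processor")   # ~15.6, rounded to 16 in Table 6

# Holography: a billion operations per frame at 24 frames per second:
print(1e9 * 24 / 1e9, "Gigops")               # 24 gigops
```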

2.4 Digital Signal Processors
    Special-purpose microprocessor chips known as digital signal processors (DSPs) will also play important roles in future PCs. They are already in use in almost every consumer device built today: in modems, printers, graphics accelerators, and, in the form of floating point chip sets, in central processing units (CPUs). By customizing them for various applications, they can be made to crunch numbers up to 1,000 times faster than conventional, general-purpose microprocessors. Among the DSPs that may appear over the next ten years are processors optimized for speech recognition, machine vision, graphics output processing (including rendering), and data compression: 2:1 for general files, 4:1 for text, 25:1 for still images, and 200:1 for live video. Much faster processors than are available today are also needed for color printing and for artificial intelligence functions. If artificial intelligence (AI) expands as promised over the coming decade, specialized chips for neural networking, fuzzy logic, pattern recognition, and genetic learning are a few of the foreseeable DSPs which will debut in our consumer world.
Data compression is already wired into fax machines and modems. By the year 2000, digital signal processor chips will probably become an integral part of disk controllers, graphics cards, video cameras, and speech recognition boards.
What digital signal processors will do for us beyond the year 2000 is something I can't properly imagine, other than more of the same. However, whatever can ultimately be done with computers will hinge to a large extent upon the capabilities of low-cost DSPs. It is estimated that computer vision systems are currently at least 1,000 times slower at recognizing visual objects than a human being. Consequently, we need a 1,000-fold improvement in visual-image-processing DSPs to achieve real-time machine vision. However, the payoff for this investment could be immense. Accurate real-time machine vision will have momentous implications for factory automation and the price of manufactured goods, to say nothing about its impact on household robots, computer-assisted driving, and all its other applications.

2.5 Magnetic Disk Technology

Floppy Disks

Computer disk forecasts call for 3.5" floppy disks to reach 20 megabyte capacity by 1994, and to attain capacities of 300 to 500 megabytes by 2000. This is being achieved by employing the kind of laser tracking used for optical disks to greatly increase the number of tracks on floppy ("floptical") disks.

Non-Removable Disks

Non-removable disks, which have just become available in a 2.4-gigabyte size for PCs, should reach 20-to-40 gigabyte capacities by 2000.

There is a mismatch developing between magnetic disks and tape drives, on the one hand, and DRAM on the other. DRAM capacities are 100-folding every ten years while magnetic-recording densities are increasing by a factor of only 20. Furthermore, disk read-and-write speeds depend upon the linear magnetic density, which only increases as the square root of the magnetic areal density—i.e., by a factor of √20, or about 4.5, every decade. As a result, the time required to fill RAM from a disk will tend to increase by a factor of about 22 (100/4.5) every decade. The current ploy for offsetting this developing disparity between disk transfer rates and RAM capacities is to transfer data in parallel. Seagate has recently announced a disk drive for supercomputers which transfers data in an 8-bit-parallel mode at a 27 megabyte/second rate. Such a disk drive could support PC RAM complements through the year 2000, particularly if magnetic linear densities increase by a factor of four or five. However, by 2010, if RAM were to increase to 500 gigabytes, the outlook for disk drives would become rather murky. Even at 500 megabytes a second, 1,000 seconds, or about 17 minutes, would be required to fill the 2010 PC RAM complement of 500 gigabytes. And beyond the year 2010, matters become even less favorable. As previously mentioned, flash cards may provide a way to circumvent this problem.
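The fill-time squeeze can be made concrete with the RAM sizes from Table 7 and the transfer rates quoted above (my own arithmetic, no new data):

```python
# Time to fill the PC RAM complement from disk, at the quoted transfer rates.
ram_gb = {2000: 5, 2010: 500}          # PC RAM complements (Table 7)
rate_mb_s = {2000: 125, 2010: 500}     # quoted disk transfer rates

for year in (2000, 2010):
    seconds = ram_gb[year] * 1_000 / rate_mb_s[year]
    print(year, seconds)               # 2000: 40 s;  2010: 1,000 s (~17 min.)
```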

IBM has recently announced that flux leakage between magnetic domains may pose a lower limit of about 0.1 µ upon magnetic domain sizes. This would lead to a maximum storage density of about 7 gigabytes/square inch, or about a terabyte (1,000,000 megabytes) on a 9-platter 5.25" disk drive.
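IBM's density limit follows directly from the 0.1 µ domain size. The quick check below assumes (my assumption) one bit per 0.1 µ X 0.1 µ domain, and lands within the same order as the quoted 7 gigabytes/square inch:

```python
# Areal density implied by a 0.1 µ minimum magnetic domain size.
MICRONS_PER_INCH = 25_400
domains_per_linear_inch = MICRONS_PER_INCH / 0.1   # 254,000 per linear inch
bits_per_square_inch = domains_per_linear_inch ** 2
print(bits_per_square_inch / 8 / 1e9)              # ~8 GB/sq. in. (quoted: ~7)
```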

    Table 7 illustrates these problems.

    YEAR   PC RAM            DISK OUTLOOK
    1990   80 megabytes      1,200-megabyte disks are available. No problem now.
    2000   5 gigabytes       20-40 GB disks, with 125 MB/sec. xfer rates. Still OK.
    2010   500 gigabytes     500 megabyte/sec. xfer rate. About the same size as RAM and pushing IBM's flux leakage limit for magnetic media.
    2020   50 terabytes      Not possible using foreseeable technology.
    2030   5,000 terabytes   Not possible using foreseeable technology.


    One development which may be expected (and which is already here in the IBM world) is the intelligent disk controller that attempts to optimize disk access and buffering. Another development will be data compression chips built into the disk controller to implement different compression algorithms for different kinds of data.

    2.6 Magnetic Tape

    At 2 gigabytes of storage, magnetic tape capacities are no longer significantly greater than the capacities of small disks, and access is much slower. However, the individual tape cassettes are inexpensive and removable, which is why tape drives are still used. Tape data transfer rates are even lower than disk data transfer rates. By 1995, tape capacities are supposed to reach 12 gigabytes, or about the same as non-removable disks. Magnetic tape drives face an uncertain future and may well become obsolete.

    2.7 Optical Storage

    Optical Paper Tape

    A write-once-read-many (WORM) optical plasticized-paper tape drive capable of storing 1 terabyte (1 trillion bytes) is in the offing. However, over the long haul, paper tape suffers from the same data transfer rate squeeze as other magnetic and optical media.

    Optical Disk Drives

    By recording on both sides of the disk, optical drives are expected to reach capacities of 5 gigabytes by 1995, and by switching from red to green or blue light, and by encoding more than one bit per position, they may store 20 gigabytes by the end of the decade. Optical drives are also appearing which employ a phase-change technique that renders them erasable and archival at the same time.

    Optical drives are slower than magnetic disks and offer no obvious prospects for improved data transfer rates. Also, the wavelength of light (about 0.5 µ) would appear to be a fundamental limitation to optical storage densities. It is hard to see how the spot size in an optical drive can become much smaller than the wavelength of the light that is used to read from and write to it. It might be expected that holographic techniques would offer greater storage capacity than present-day spot-storage laser drives, but laser-holographic devices were evaluated in the 1960s and 1970s and found to store no more per square micron than spot-storage techniques. Solid-crystal laser storage techniques utilizing potassium iodide and potassium bromide crystals were also explored and discarded.

    For these reasons, I am hesitant about projecting optical storage capacities beyond the year 2000.

    2.8 Printers

    One of the printer trends which is presently emerging is the consolidation of fax, copier, printer and typewriter functions into one printer, one scanner and one modem. After all, there is no reason to pay for these expensive components more than once. Eastman Kodak has just announced a copier/laser printer retailing for $7,000.

    Among printers, the laserwriter, ink jet and dot matrix printer are the front runners. Laser printers, which cost $10,000 in 1985, are now available for <$1,000. Canon's 360-dot-per-inch (dpi) Bubblejet can give laser printers a run for the money in terms of image quality but is much slower (2 minutes per page) than a laser printer. Improvements in laser printers will probably include higher resolution (600-1,200 dpi), faster operation (16 pages per minute), or lower prices (<$500) by the year 2000. Another potential area of improvement lies in providing the full range of Pantone colors for color printer outputs. Right now, their color ranges are good, but not good enough to span the full Pantone scale. Color calibration from the scanner to the printer may be another desirable feature. The ability to print dull or shiny might represent another possible dimension in color printing. Coupling a printer to a scanner to adapt it to copying and facsimile functions is probably more a matter of hardware and software integration than anything else, and may represent a good marketing opportunity for a small company. Interfacing a fax machine with a PC might also afford entrepreneurial opportunities.

    Another class of printer might be portable printers to accompany notepad and pocket computers—e.g., a printer that would print your checks.

    Photographic-quality, continuous-tone color printers should soon be affordable. They exist today but at a cost of tens of thousands of dollars. Color printing has been slow in coming because it has been difficult to get continuous shading of colors from one hue to another. Also, color printers have been a prohibitively expensive specialty item, particularly since most text is printed in black and white. However, several affordable color printers have recently been introduced which provide continuous color. Mitsubishi and Lasertechnics offer 300-dpi, continuous-tone color printers at prices ranging from $13,000 to $15,000. Honeywell and 3M have collaborated on a <$10,000 dry-silver paper color printer but it isn't usable yet because its blue dye is unstable. 3M hopes to have corrected this problem by the end of 1991. Longer-term, a low cost (<$10,000) color laser printer is rumored to be in the pipeline for 1992. The major advantage to a color laser printer is that color prints should cost 30¢ rather than $3 a page. By the year 2000, color printers may be sufficiently cheap (~$1,000) that they can be utilized for printing black-and-white text as well as color. Table 8 sets forth some forecasts for prices and performances of color printers.

    The inauguration of low cost color printers should lead to a much wider use of color, and may eventually give high-resolution (greater than 300 dot-per-inch) electronic still cameras a viable role in the marketplace.


    YEAR   PRICE   DPI     NOTES
    1991   $10 K   300     Uses Honeywell/3M's dry silver process.
    1992   $10 K   300     Color laser printer (source to be determined)
    1994   $5 K    300     Printer/color scanner/copier/fax
    1997   $2 K    300     Color with black & white capability
    2000   $1 K    300     Color with black & white capability
    2010   $500    1,200   Full Pantone color spectrum, B&W capability, cheap prints, possibly 3-D prints, possibly holographic prints.


    2.9 Display Devices

    Color cathode ray tube (CRT) displays are the products of a mature technology. The maximum pixel count available for computer-aided design is 1,600 X 1,280, or about 2,000,000 pixels. This is available in 19" displays, yielding a resolution of about 100 dpi. The largest color display of which I'm aware is a new $40,000, 2,000 X 2,000-pixel, 20" X 20" CRT for air traffic controllers. Slow improvements may be expected in color CRTs and by the year 2000, I would expect 2,000,000-pixel HDTV monitors, 4,000,000-pixel, computer-aided design CRTs and perhaps 8,000,000-pixel, air-traffic-control displays.

    Low cost 1,600 X 1,280, 100-dpi color CRTs ought to be fairly cheap (<$1,000) by 2000.

    The great hope for the future lies in various kinds of flat-screen displays, and particularly, color LCDs. Small color LCDs with a total pixel count of about 92,000 (about 300 X 300 resolution) have been available for the past few years in pocket TVs. Hitachi is marketing a 10", 1,120 X 780, 512-color LCD in a new $8,000 laptop computer, and Nippon Electric Corporation (NEC) and Toshiba are offering similar products. Sharp feels that it will be able to sell a 14" color-TV panel for $120 by 1995. Japan has set a goal of developing a 40" color LCD capable of displaying 2,000,000 pixels by 1996. This is an ambitious goal, since a 2-megapixel, 40" direct-view color-TV LCD requires 6,000,000 subpixels (2,000,000 for each color). Also, TV displays must operate faster than CRT displays, and are more difficult to build. Earlier color LCDs used passive matrices, but current efforts are directed toward thin-film-transistor, active-matrix LCDs that can switch much faster and can give sharp, high-contrast displays. The problems are:

    (a) yields are low, since every transistor must be perfectly bonded to its corresponding pixel;

    (b) amorphous silicon has been used to date but it is too slow for TV. Polycrystalline silicon does 100 times better but it must be bonded to a glass panel at temperatures high enough to soften the glass.

    These considerations make LCDs a different ball game from other semiconductor devices. Flat-screen color LCD TVs may be as much as 10 years away, although color LCDs for computers should become common over the next few years. If we assume that 2-megapixel color video screens are available at a reasonable cost by 2000, then 8-megapixel color TV screens could possibly be available around 2005 or 2010. They might appear in 2005 if LCD pixel densities can be quadrupled every 5 years; if not, surely we ought to be able to quadruple them in 10 years once we make the sea change to solid-state displays. 32-megapixel color screens might arrive sometime around or after 2010 (2015? 2020?). 128-megapixel displays might become available after that (2015? 2020? 2030?). Of course, "available" doesn't necessarily mean cheap. (I need to warn you that I am making these forecasts without any technology history as a guide, and they should be assigned a low confidence level.) Also, for home use, 128,000,000-pixel displays may not be worth their additional cost.
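These generation-by-generation guesses amount to a geometric progression. The sketch below restates my own quadrupling assumption explicitly; it carries the same low confidence as the prose:

```python
# Pixel counts assuming they quadruple each "generation", starting from a
# 2-megapixel screen in 2000, with a generation lasting 5 or 10 years.
def megapixels(start_year, year, generation_years, start_mp=2):
    generations = (year - start_year) // generation_years
    return start_mp * 4 ** generations

print(megapixels(2000, 2005, 5))    # 8 MP if a generation takes 5 years
print(megapixels(2000, 2010, 5))    # 32 MP
print(megapixels(2000, 2020, 10))   # 32 MP on the slower 10-year cadence
```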

    The larger screen displays will probably be projection systems, perhaps using a viewgraph projector, simply because of the expense and difficulty of building very large (>40") direct-view displays. A page-sized, 8,000,000-pixel projection plate would require a pixel density of about 300-dpi (about twice the pixel density of current pocket color TVs); a 32,000,000-pixel projection plate would require 600-dpi resolution; and a 128,000,000-pixel plate would call for 1,200-dpi resolution—values beyond our current capabilities.
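The projection-plate resolutions quoted above follow from spreading the pixel counts over a page-sized plate. The check below assumes (my assumption) an 8.5" X 11" plate:

```python
import math

def plate_dpi(pixels, width_in=8.5, height_in=11.0):
    """Dots per inch for a pixel count spread uniformly over the plate."""
    return math.sqrt(pixels / (width_in * height_in))

print(round(plate_dpi(8e6)))     # ~293 dpi (quoted as about 300)
print(round(plate_dpi(32e6)))    # ~585 dpi (quoted as 600)
print(round(plate_dpi(128e6)))   # ~1,170 dpi (quoted as 1,200)
```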

    It seems reasonable to suppose that color LCDs will edge out CRTs between 2000 and 2010. Of course, laser-holographic systems or some other type of display may displace existing display concepts in the 21st century. The ideal display would probably be a three-dimensional image in mid-air.

    One interesting question is that of how much resolution is needed before we reach the point of diminishing returns. A 32-megapixel display would give resolution over a 60° X 80° field of view that would approach photographic quality. On the other hand, total pixel counts as great as several billion pixels would be required to counterfeit reality over all eight octants of a sphere. Other problems, such as brightness approaching the kilowatt-per-square-meter intensity of sunlight, the 32-bit range of colors which the eye can distinguish rather than today's emerging standard of 24-bit color, and the reproduction of dazzling reflections off reflecting surfaces are among the challenges facing the designers of display systems that attempt to simulate the real world.

    Other types of solid state displays include electroluminescent panels, light emitting diodes, plasma panels, cold-cathode, flat-panel CRTs, and laser holographic displays.

    Electroluminescent (EL) panels have two problems that hamper their use: the lack of a sufficiently bright-blue phosphor, and a half-life of about 1,500 hours. The jury is still out regarding whether a satisfactory blue phosphor can be found or developed. EL displays work best when excited by frequencies of 400 to 1,000 Hz, which is inconvenient for 60 Hz power sources. The 1,500-hour limitation could cause them to lose half their brightness in a year or so of typical TV use.

    A suitable blue light-emitting diode has only recently been developed but it may open the door to full color LED displays.

    Color plasma panels have recently become a possibility.

    Cold cathode CRTs use electric field emission from a large number of etched micro-tips to illuminate a CRT screen. The problem with them has been sputtering and erosion of the micro-tips by ion bombardment arising because of the high electric field strengths. This leads to very uneven emission patterns. A French company, Leti, claims to have overcome this problem and promises to ship displays by the end of 1991.

    Computer generated laser holographic displays demand mind-boggling computational capabilities. Holographic displays require that the appropriate views be provided over a range of angles, so that there is really a large number of slightly-different images which have to be generated. MIT's Media Lab has devised techniques for greatly reducing the laser holographic computational load from about 25,000,000,000 computations per second to about 25,000,000 computations per second. Even so, it seems doubtful that laser holographic TV displays will be feasible much before the end of this century. However, laser-holographic still images may be computer-generated well before the year 2000.

    Another novel display idea is one developed by Texas Instruments under DARPA sponsorship. It uses a set of microscopic electrically deflectable mirrors on a silicon substrate.

    One of the missing ingredients in present-day CRTs is suitability for wraparound installations. Wraparound displays could be valuable for wall-mounted displays, and are important for such applications as virtual reality, teleoperation, wide-screen TV and cockpit displays. In principle, CRTs can be mosaicked together, but in practice, this is a difficult and expensive thing to do. Direct-view LCDs might be mosaicked more effectively, and this technique may constitute a way to implement large screen, concave displays. However, the most likely way to generate wraparound displays is probably the use of projection systems. Correction for the planar-to-spherical optical distortion produced when a planar screen projects upon a spherical surface might be effected electronically by appropriately distorting the image on the planar projection LCD.

    Stereo display technology is another attractive possibility. Although there are many ways to implement stereo display systems, one of the leading approaches is based upon the active-eyewear, liquid-crystal shutter glasses which StereoGraphics Corp. markets. Liquid-crystal shutters in these 6 oz. glasses are activated by infrared signals at a 120 Hz switching rate. Right- and left-hand images are presented on a display at 120 frames per second, together with a synchronized infrared signal which alternately closes the right and then the left shutter on the stereo glasses. Another approach is a stereo display marketed by Dimension Technologies (Rochester, NY) which doesn't require glasses. It uses an array of tiny lenses mounted over a strip-lit color LCD so that the left and right eyes see different views. The viewer has to be in an "allowed" location to catch the 3-D effect; otherwise, the depth is inverted. In practice, however, this is not the problem it might seem to be, because the mind refuses to register common objects when they appear to be inside out. Also, there is some perception of seeing behind objects when the viewer's head is moved, and objects in the "foreground" move relative to objects in the "background" in a way that heightens the 3-D effect. Horizontal resolution is halved when this approach is used, so a high-resolution display is desirable. The third approach to 3-D displays is the laser holographic approach. Holographic displays' unique advantage resides in the fact that you see what you would see from various angles, so the viewer can see behind objects in the foreground.

    Laser holographic displays will probably arrive in the 21st century when computers are fast enough to drive them and to support model-based encoding. (See Section 3.3 for a definition of model-based encoding.)

    (June postscript: During the month that has passed since I wrote this, a few of the developments that I foresaw years down the road have already been announced. In the area of laser-holographic displays, I have been advised that it is putatively possible to generate a 3-dimensional image in mid-air using

    2.10 Scanners

    Nine years from now, in the year 2000, I would certainly expect to see 300-dpi, flatbed color scanners selling for no more than $500 including a page feeder. However, I expect them to be a part of the all-in-one copier/printer/scanner/fax peripheral.

    Between data compression hardware and rapidly growing storage capacities, there should be a growing market for 600-dpi and 1,200-dpi scanners, particularly if they can be sold for $500. Higher resolution would be particularly valuable for optical character recognition, where 300-dpi resolution is marginal.

    Alternatively, scanning might be performed by charge-coupled device (CCD) color video cameras capable of resolving at least 300 dpi. A video-camera-based scanner will probably be capable of 3-D scanning of the environment, with registration and reconstruction of the 3-D model of the environment performed electronically as described in Section 3.3. The camera might be hand-held or mounted on a scanning frame for studio use. Page scanning may require a manually or electrically operated scanning table, or a scanning-mirror arrangement. By the year 2005, if 8,000,000-pixel cameras become available, pages could be scanned in a camera-wink at 300 dpi. Thirty-two-million-pixel cameras, if and when they arrive, would afford 600-dpi page resolution.
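The pixel counts quoted here can be checked with a little arithmetic. A short illustrative sketch (assuming a standard 8.5" X 11" page; the function name is my own):

```python
# Total pixels needed to scan one page at a given resolution
# (illustrative check; assumes a standard 8.5" x 11" page).
def pixels_for_page(dpi, width_in=8.5, height_in=11.0):
    return int(dpi * width_in) * int(dpi * height_in)

print(pixels_for_page(300))  # 8,415,000 -- roughly the 8,000,000 cited
print(pixels_for_page(600))  # 33,660,000 -- roughly the 32,000,000 cited
```

The 300-dpi figure works out to a 2,550 X 3,300 pixel array, which is why an 8,000,000-pixel camera is about the minimum for single-snapshot page capture.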

    2.11 Video Camera Input

    I believe that video cameras are going to become a more common component of our PCs as we begin to use them as videophones, HDTV video recorders, and windows into "cyberspace". The highest-resolution, low-cost CCD color camera of which I'm aware employs three 410,000-pixel chips—one for each primary color—to provide a 550-line by 700-line picture. However, two-million-pixel cameras are required for HDTV broadcasts and are undoubtedly high on the priority lists of semiconductor developers for the high-definition camcorder market. Hopefully, such HDTV cameras will be available by the end of this decade at reasonable prices. Stereo camera pairs, or stereo provisions for single cameras, may also be popular.

    Eight-million-pixel resolution is needed to scan a full page at 300 dpi, as well as to fill a large (8' X 5.5') screen with near-photographic-quality video imagery. My guess would be that such extra-high-definition video might appear during the period between 2005 and 2010, reflecting the states of the art not only in video cameras but also in LCD projection equipment, fiber-optic cable service, and storage capacities. As discussed in Section 3.3, we may be utilizing model-based video by that time and using video cameras in different ways than we do now. Also, between now and 2010, video cameras are probably going to become smarter, incorporating object recognition capabilities. Early in the next century, the video camera interface may be able to recognize most or all of the objects that it photographs.

    Thirty-two million pixel resolution video cameras and associated display equipment probably lie somewhere near or beyond 2010.

    The video cameras of the year 2030 are beyond my realm of conjecture.

    2.12 Non-Visual Input

    Model-based encoding (see Section 3.3) carries with it a requirement for kinematic inputs describing what the models are doing—recordings of running, walking, smiling, rolling along a highway, etc. These motion sequences will presumably be recorded in order to realistically animate computer-based models. Recordings of this sort are apparently already being made to operate the life-size animated figures at Disney World and at Chuck E. Cheese pizza parlors. For virtual reality displays, additional inputs will need to be gathered, such as temperature, tactile stimuli (rough or smooth, wet or dry, hard or soft), auditory stimuli, and kinetic inputs (shaking, bouncing, rolling). For virtual reality, some sort of "cyberglove" (i.e., a "data glove") or exoskeletal arm may be employed to provide hand input to the computer.

    Silicon Graphics Corporation is currently selling a sonic digitizer that uses sonic pulses to digitize in three dimensions. Such a device might also be pressed into service to digitize motion.

    2.13 Non-Visual Feedback

    A certain degree of motion feedback can be provided with the visual techniques employed in TV broadcasts today, augmented by low-frequency rumblings and 3-dimensional, compact-disk-quality "surround sound". For virtual reality, force and torque reflection through the exoskeletal arm may eventually be provided, as is done today with remote manipulators. Tactile feedback, including hot or cold and wet or dry, may be provided. You will then be able to "feel" the objects and shapes that appear to be suspended between you and the screen. Shaking of controls may be a way to provide kinesthetic feedback. Air-powered motion seats, consisting of a segmented plastic or rubber cushion activated by an air tank/air compressor and electrically operated valves, may be a low-priced way to provide kinematic feedback to the operator. Eye and head trackers may be a part of the repertoire. Tactile-feedback suits exist and, if virtual reality becomes sufficiently attractive, might eventually become available to individuals. Will we ever add odors (smell-a-vision)?

    3.0 Potential Computer Applications

    Looking ahead 10, 20, 30 or 40 years into the future, tomorrow seems like science fiction. But as progress creeps upon us day by day, we adapt to it day by day and clamor for more. And then, looking back, it seems mellow and prosaic. Each new gadget or clever toy stirs its ounce of wonder but then becomes accepted for what it is—just another gadget. And in fact, technology doesn't change the vital issues in our lives that much, anyway. Forty years from now, the interstate highways will probably look much as they do now. Many of the buildings we occupy now we'll still be occupying forty years from now. By the same token, though, if past is prologue, important alterations will occur, even though they may seem natural, even inevitable, in retrospect. The past forty years have seen such important changes as the forty-hour work week (down from forty-four), flexitime, the shift from manufacturing to the service sector, the sexual revolution, and, unfortunately, a rise in the divorce rate and drug use. The next forty years may see some of us working and shopping from home, as amplified in Section 3.6. Artificial intelligence and specialized robotics applications are bound to improve productivity in both manufacturing and service industries. Near-sentient, anthropomorphic robots, if they are feasible and if they should appear during the next forty years, would surely have a profound impact upon every aspect of our society. Videophones and telepresence promise to revolutionize the way we do business. Automobiles may change markedly over the next forty years, not only because of computer improvements but also because of environmental constraints.

    To me, it seems easier to forecast technological developments than to predict when or whether those developments will be applied. Just because something can be done doesn't mean that it should or will be done. For example, a remote thermostat-adjuster could probably be devised for the incorrigible couch potato, but the cost probably isn't worth the benefit. Railroad rights-of-way could have been used for automatically driven trailer-trucks, but that hasn't been done. Cultural inertia can play a role, too, as can practical considerations which aren't obvious but which can assume overriding importance. For example, wireless links to portable computers could become unpopular if low-power RF is found to be a cancer-inducer. Cost is a crucial factor: Learjets are wonderful, but most of us can't afford them. Then, too, new technology generally arrives gradually, with many stages of improvement.

    The essence of all this is that I don't feel nearly as comfortable predicting these applications and their due dates as I do the technology schedules which undergird them. Some of the developments cited below will probably come earlier than I am forecasting, and some may come later, or not at all. Undoubtedly, unforeseen developments will also occur to surprise us all.

    3.1 The Notepad Computer

    It will accompany you everywhere. The size and shape of a tablet, it will be about 1/2" thick, and will weigh a pound or so. It will have a "pen" or stylus and will function like a magic slate. It will be your notepad computer, and I believe that it will serve as the Swiss Army knife of the computing world. Notepad computers are just now becoming available. For example, Fujitsu has recently introduced a 2-pound, 8 MHz, 80286 notepad computer which is 1/2" thick. Nippon Electric Corporation (NEC) is selling a 3-pound "Carryword" for $700, and Hewlett Packard's recent announcement of a $695 folding pocket PC with a QWERTY keyboard is another harbinger of what is soon to come. Notepad computers are generally equipped with 1 MB of RAM and with very small (2") floppy or hard disks. Current models utilize a black-and-white liquid-crystal display (LCD) for output and a stylus or pen for input. The LCD is generally characterized by at least a 640 X 480 pixel count, providing 72-dot-per-inch (dpi) resolution over the 6.5" X 9" viewing area that corresponds to the printed part of a printed page. A keyboard is desirable, if not necessary, and may either be carried as a part of the notepad PC or be attachable for editing and text-entry functions. The notepad computer may dock with desktop peripherals and with its keyboard, perhaps using an infrared or radio-frequency link. The appearance of the PC-on-a-chip projected for the 1994-95 time frame should encourage the creation of low-cost, lightweight, PC-compatible notepad and pocket computers. Notepad computers are already capable of recognizing hand-printed characters and will probably be capable of reading cursive handwriting, and of reading it progressively better as we go through the decade. By the end of the decade, they may be capable of speech recognition and of taking limited dictation.
Links (RF and IR) to peripherals, to wireless local area networks (WLANs), to in-building cellular telephone networks, and perhaps to the Iridium or Alcatel satellite nets will also become available, although cost of use and accessibility are uncertain elements in predicting their market potential.

    By the year 2000, notepad computers will probably incorporate a color LCD and 64 to 256 MB of non-volatile RAM, and may use flash cards or floppy disks for external storage. One pundit has forecast that, by the year 2000, the notepad computer will provide a 2,048 X 2,048 color LCD display, will be capable of recognizing four people, will recognize the spoken word, and will tie into larger mainframes using the aforementioned WLAN, cellular phone and low-cost satellite links. It's worth noting, though, that a 1,280 X 960 pixel count would yield 150-dpi resolution over a printed page. Is higher resolution than this worth what it would cost? For some applications, the answer is probably "yes", and notepad computers will probably eventually be equipped with 2,600 X 2,000 pixel color displays supporting 300-dpi resolution. For example, one future possibility is that of a three-dimensional display which can be viewed without special glasses. Dimension Technologies makes such a display, which requires an LCD with the highest available resolution. Going beyond 300-dpi resolution may pose problems because the transparent electrical conductors which switch the LCD pixels on and off become too thin to remain good conductors.

    This "geewhiz" technology is all very well, but what does it have to do with everyday life? Why should notepad computers be considered the "Swiss Army knives" of the computing world? The answer is that either notepads or folding pocket PCs are expected to become the "assistant in your pocket". You will probably use them for nearly everything administrative. Before the year 2000, you may be using a notepad computer as a calendar alarm, entering appointments, birthdays, anniversaries, medication schedules, dental and medical checkup reminders, magazine and license expiration dates, car servicing milestones, and periodically-paid bill reminders. (I use such a program now on my Macintosh. When I first turn on my computer, or at any specified times during the day, Smart Alarms warns me of impending appointments or birthdays.) You may keep your checkbook and your budget in them, possibly using them to print out checks at the store. They may serve as, or as adjuncts to, credit cards, giving you a clear picture of your financial status at all times as measured against your budget and pending income—for example, when you're deciding whether or not you can afford to add something to your credit account. They may generate form letters for you, together with income tax forms, legal forms, and expert-system-based legal and medical advice. You may use them as expert-system investment guides and investment performance monitors. They may generate shopping lists, based upon home inventory lists. (I do this today with my Macintosh.) They may contain several types of specialized calculators. You may be able to plug them into your car, boat, or RV.

    Meanwhile, the forerunners of the "assistant in your pocket" are probably here today in the form of personal organizers such as Sharp's Wizard and Casio's Digital Diary. These smaller pocket computers are not as well suited to word processing and other keyboard-intensive functions as notepad and laptop computers but they do fit better in your pocket (or your purse).

    Given a CCD color video camera (probably after the year 2000), notepad computers may be used to scan in and, where applicable, read documents. They will contain a road atlas and, perhaps, maps of major cities, together with listings of motels, hotels, campgrounds and restaurants which may be located with the aid of the atlas or the city maps. They will contain an address and telephone book, will be able to dial for you, and may serve as a cellular telephone and perhaps as a cellular videophone over "micro-cellular" nets or in-building WLANs. Sometime during the next 20 years, they are expected to be able to carry on simple conversations with you, asking for clarification where necessary. They may provide a telephone answering and screening service, eventually carrying on a simple conversation with whoever calls. They will collect relevant news, weather reports, stock quotations, electronic mail, voice mail, and color facsimile transmissions for you. They may double as your personal TV. They may record TV shows and movies for you, based on your known preferences. They will contain voice recognition circuitry that, sometime between 2000 and 2010, should permit dictation of continuous speech, ignoring other speakers and recognizing those proper names and acronyms that are a part of your environment. As storage capacities expand, notepad or pocket computers may be used as VCRs. Given the proper software and "courseware", notepad and pocket PCs may revolutionize education. Intelligent tutoring systems have shown themselves to be significantly superior to human pedagogues in the narrow areas for which they are programmed. You will play a game and learn as you go. They will undoubtedly be used for homework. They may be used as "living" maintenance manuals or for assembly instructions. You will probably use them for library and commercial access. You may use them as shopping aids, displaying prices or other information concerning merchandise as you walk through a store.
For example, you might query Consumer Reports while looking at a particular piece of merchandise at a store. You may call up books from the library (probably after the year 2000) and read them off your notepad. You may receive magazines on them in electronic form. They will eventually serve as a real-time language translator for you, converting foreigners' spoken remarks to English and your spoken comments into foreign languages. In short, they are expected to become your indispensable companion.

    One possible feature that may appear in notepads is active noise cancellation, used in concert with "whisper-mikes".

    Notepads will presumably be used on a flat surface, like a tablet, unlike desktop computers where the display behaves like a window. Notepad computers may dock with desktop computers, using the notepad as notepad-and-pen, and the desk-top display as a window into cyberspace.

    Notepad and pocket PCs should help reduce paper consumption. If we can carry a magic slate on which we can whistle up any printed page we please, including color photographs and illustrations, we probably won't feel such a need to commit everything to paper.

    I once read a science fiction story by Frederik Pohl, set in the 25th century, in which everyone owned a "joystick". The "joystick" was linked by RF to a central computer and performed the functions of the notepad computer which I've described above. The "joystick" is about to arrive, 400 years earlier than Frederik Pohl predicted.

    Looking ahead, as notepad storage capacities rise into the gigabyte and perhaps the terabyte range, whole libraries may be stored in them. Books, magazines and video programs may become part of your temporary or permanent files.

    During the period from 2000 to 2010, I'm going to guess that notepad and pocket computers will double as portable telephones and videophones, accessing wireless local area networks inside buildings, and using your car-phone channel or cellular communications channels outside buildings. By that time, software packages should be seamlessly integrated, and notepad computers may be serving as windows into a virtual-reality (cyberspace) environment, simulating the office, the classroom, store counters, or the showroom. They will probably utilize 3-D displays by that time. (See Section 3.6, "Virtual Reality", for additional detail.) The period from 2010 to 2020 may be the era during which pocket and notepad PCs become sufficiently cheap and rugged that every child carries a Fisher-Price or Playskool pocket computer.

    The most compelling deterrents to this rosy scenario that I see are, first, the lack of an integrated set of low-cost software packages, and second (and it arises from the first deterrent), the difficulty of learning the ins and outs of all those expensive, independent software packages. Today, many of the functions that I've described above can be performed more or less well by existing software packages, but the cost of buying all of them would be astronomical. Not only that but, having once bought them, the user would face the challenge of learning how to use them all. Since each package today is stand-alone, each developer adds a multiplicity of features and options which require a lot of time and effort to master.

    Another problem, one which already exists with today's laptops, is the danger that an expensive notepad computer might be lost or stolen. Declining prices will help, but it may also be possible to program the notepad to operate only for someone who knows the proper password. Someday, if in-building cellular phones become cheap and ubiquitous, it might be possible to program your notepad to telephone you if an attempt is made to use it without the correct password. Can we imagine a time when GPS receivers are so cheap that your notepad computer can tell you where it is?

    3.2 Desk-top Computers

    For at least the next 20 years, I would expect desk-top computers to play the role they play today. They will have larger displays, more RAM and larger disk memories, as well as the printer/copier/scanner/fax. Notepad computers will probably dock with desktop computers, as they do today. In the future, the docked notepad might function as an intelligent desk-tablet while the desktop display might serve as a window-wall into "cyberspace", as mentioned in the previous section.

    Although the line between workstations and PCs is blurring, there will probably be a continuing industrial market for high-end workstations, whose capabilities will diffuse to desktop PCs five to ten years after they appear in workstations, and to laptops and notepads five to ten years after they surface in desktop computers.

    The fate of minicomputers and large central computers seems to me to be difficult to call. Networks of personal computers and workstations may supplant large computers if operating systems can handle the complexity.

    I would guess that supercomputers will probably continue to push the state-of-the-art and play a role in research, simulations, and weather calculations.

    3.3 Computers and High Definition TV

    One of the major computer technology applications of the 1990s is expected to be the conversion of TV to an all-digital format. All five of the U.S. consortia which are competing before the Federal Communications Commission (FCC) for approval of a U.S. high definition television (HDTV) standard are proposing purely digital designs. High definition television is touted to provide a resolution of either 787.5 X 1,280 or 1,050 X 1,920. This has momentous implications for the TV industry, not only because it will permit HDTV broadcasts over existing TV channels with sharper, steadier, and higher-resolution imagery than is available today, but also because it will open the door to further improvements as computer technology advances. Until now, television stations have had to broadcast, at 30 frames a second (or 1,800 frames a minute, or 108,000 frames an hour), a slowly changing sequence of frames which contain essentially the same information. Usually, even those objects within the frame which are moving change only slightly from one 30th-second frame to the next, and the rest of the scenery may not change at all. As a result, the information transmitted by a TV transmitter is highly redundant. In the future, TV stations, instead of re-transmitting essentially the same scene over and over, will transmit a still picture and then transmit only compressed updates. The same data-compaction strategy will be used for video recorders and for multimedia recordings on computers.
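The principle of transmitting updates rather than whole frames can be illustrated with a toy frame-differencing sketch (this is not any actual HDTV codec, merely the redundancy argument in miniature; the function names are my own):

```python
# Toy interframe compression: transmit only the pixels that changed.
def frame_delta(prev, curr):
    """Return (index, new_value) pairs for pixels that differ between frames."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Receiver side: rebuild the new frame from the old frame plus updates."""
    frame = list(prev)
    for i, value in delta:
        frame[i] = value
    return frame

prev = [10, 10, 10, 10, 10, 10]
curr = [10, 10, 99, 10, 10, 10]     # only one "pixel" changed
delta = frame_delta(prev, curr)     # far smaller than a whole frame
assert apply_delta(prev, delta) == curr
```

When successive frames are nearly identical, as they are in most TV footage, the delta is a tiny fraction of the frame, which is where the compression comes from.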

    Beyond these approaches lies a revolutionary switch from today's scene-based encoding to model-based encoding. With model-based encoding, a computer, drawing upon video camera scenes taken at different angles and, perhaps, upon manual identification of objects within the fields of view, constructs a 3-dimensional model of the broadcast studio set imagery to be stored or transmitted. The video camera also takes snapshots of the objects in the scene from different angles. Both the 3-dimensional model and its "texture-map" snapshots are transmitted to the TV receiver, together with lighting information and information concerning where the objects in the scene can bend or rotate. Then the receiver reconstructs the scene at the studio by animating and rendering its computer-based model in accordance with relatively concise instructions from the broadcast studio. Some of the more common of these objects, such as a kinematic model of a generic human being or desks and chairs, might already be stored in the receiver's model library and might need only a set of parameters to custom-tailor them to approximately reproduce a particular individual or object. This approach has the advantage not only of great reductions in data storage and transmission requirements but also of providing a 3-D model which could be viewed from all angles and which would be suitable for laser-holographic displays. It should also lend itself to arbitrarily high levels of resolution. The reason this isn't available today is that the computational requirements for such a process are enormous—probably beyond the reach of the fastest of today's supercomputers.
However, between the orders-of-magnitude improvements anticipated for microprocessors, possible improvements in rendering and animation algorithms, and the possibility of realizing these algorithms in silicon in the form of application-specific integrated circuits, model-based encoding may become a practical reality within the next 10 to 20 years.

    The present FCC schedule calls for selection of an HDTV format in 1993. By 1994 or 1995, the first HDTV receivers should begin to appear, at an estimated retail price of about $3,500. In the meantime, HDTV camcorders and VCRs may enter the marketplace. By 2000, hopefully, a significant fraction of TV broadcasts may be available in HDTV format. By 2010, if HDTV succeeds, the majority of all TV broadcasts will probably be made in HDTV format.

    In the meantime, Japanese companies have committed to the development of a 40", high resolution, flat screen color TV by 1996. This will be a tall order (see Section 3.8). Flat screen color TVs, probably LCD-based, but conceivably utilizing some other technology such as light emitting diodes, plasma panel displays, electroluminescent displays, or cold cathode devices, will become prevalent after the turn of the century.

    At the present time, TV sets appear to be splitting into two types: small bedroom or personal TVs, and large projection or direct-view family room displays. If you have visited Circuit City or Video Concepts lately, you may have noticed that about half of their inventories now consist of large screen (>30") TV sets. Large screen TVs have begun to sell like hotcakes. Large-screen, HDTV-resolution displays will probably first appear in the mid-to-latter 90s as teleconferencing screens used in conjunction with the telephone companies' Integrated Services Digital Network (ISDN) and later with their Synchronous Optical Network (SONET). They will probably employ either projection equipment or a mosaic of flat screens. This development might also be expected to reduce business travel requirements. At the same time, between now and 1995, you will probably be able to buy a video data compression/codec card which, together with a low-cost, charge-coupled device (CCD) color video camera, will convert your PC to a somewhat-jerky videophone which can function using ISDN service over your existing telephone lines. The big question is the cost of ISDN service, but inasmuch as AT&T and the regional telephone companies are planning to switch to all-digital telephone service over the next few years, hopefully this will become available to us at a competitive rate.

    Looking ahead, I could foresee concave, wraparound family room screens illuminated by TV projectors and measuring anywhere from 3 feet by 5.5 feet to 7.5 feet by 13 feet, with the latter built into one end of a family room. Given these kinds of dimensions, HDTV would provide near-photographic resolution at a distance of about 14 feet from a 3 foot by 5.5 foot screen in the year 2000. If HDTV resolution could be doubled to 2,100 X 3,840 (requiring 4 times as many pixels), near-photographic resolution could be achieved at a distance of 12 feet from a 5 X 9 foot screen in 2010. If it could be doubled again, to 4,200 X 7,680, photographic resolution could be achieved using an 8 X 13 foot screen at a distance of 10 feet, which is probably as close as one would want to sit to a wall-sized screen, anyway.
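This screen-size arithmetic can be sanity-checked against visual acuity. A rough sketch, assuming the common one-arcminute-per-pixel rule of thumb (the distances in the text will differ somewhat depending on the acuity figure one assumes):

```python
import math

# Viewing distance at which one pixel subtends one arcminute of visual angle
# (a common approximation of the eye's resolving limit; illustrative only).
def min_viewing_distance_ft(screen_width_in, horiz_pixels):
    pixel_pitch_in = screen_width_in / horiz_pixels
    one_arcmin_rad = math.radians(1.0 / 60.0)
    return pixel_pitch_in / one_arcmin_rad / 12.0  # convert inches to feet

# A 5.5-foot-wide (66") screen at HDTV's 1,920 horizontal pixels:
print(round(min_viewing_distance_ft(66, 1920), 1))  # about 9.8 feet
```

Beyond roughly that distance the individual pixels merge, which is the sense in which HDTV on such a screen approaches photographic quality for a seated viewer.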

    The above scenario assumes that we'll be using something like LCD-based projection systems. However, if laser-holographic displays come to fruition, then some type of screen-less, virtual-image projection system might constitute a cheaper and better alternative to large-screen displays.

    3.4 The Impending Revolution in Telephone Service

    After a century of slow progress, telephone service is about to make a quantum leap by going wideband and all-digital. The infrastructure for this transition has been constructed over the past thirty years as AT&T has converted its long lines from analog to digital transmission modes and, more recently, from copper to fiber-optic cables. Now these "communications superhighways" are complete, and there remains only the task of completing the "access roads". The first wideband service to become available will use your existing copper telephone wires and will offer you a bandwidth of 144 kilobits/second. Costs haven't been established but are expected to approximate $34 a month. Costs will be crucial, particularly for residential subscribers. Wideband service is available today if one is willing to pay for it. Telephone companies are in business to make money and may be expected to charge all that the trade will bear for this wider bandwidth. However, if it isn't cheap enough, it won't be used. Companies may be expected to use ISDN to send computer data and fax transmissions between offices before it percolates into the home. Perhaps the most exciting feature that ISDN will support is low-data-rate videophone service. Low-data-rate videophones are available today but cost about $50,000 a station. However, prices are expected to drop precipitously as data compression algorithms and codecs are redesigned around specialized chip sets that then go into mass production. The quality of the video transmissions is high except for the fact that motion is jerky and images appear to be frozen between major scene changes. (The most recent videophone offered by PictureTel is said to circumvent this jerky-image problem.) Also, there may be much better video data compression algorithms, using fractal or wavelet techniques, that can give full-motion video over ISDN circuits.

    ISDN is now available in certain parts of Huntsville, with connections to other areas expected within a year.

    Beyond ISDN will come the Synchronous Optical Network (SONET). SONET is a fiber-optic (FO) based communications system which will offer first megabit-per-second and then gigabit-per-second data transmission rates. SONET is commercially available now in selected cities such as NYC. At the moment, FO cable is still a little more expensive than old-fashioned telephone cable, but by the mid-90s, FO costs are expected to be competitive. One would expect offices to be rewired with FO cable before homes are restrung. Initially, FO circuits will probably run at AT&T's T1 rate of 1.5 megabits/second, but by the end of the century, 1-gigabaud transmission rates are scheduled to become available. Again, costs will certainly be an issue, and business will undoubtedly be the first to use such services. T1 lines would permit the two-way transmission of full-motion color video and might be expected to find rapid acceptance in teleconferencing. One-gigabaud lines would permit the two-way transmission of very-high-resolution stereo color video.

    In short, by the year 2000, we should be enjoying wider telephone bandwidths than at any time since telephone service first began.

    Another important communications development of the 1990s is the anticipated establishment, in 1997, of Motorola's Iridium global cellular-satellite telephone system. The 77-satellite Iridium system is expected to provide reasonable-cost ($3.00 per minute), point-to-point, pocket-telephone communication between any two points on the globe (except for points far inside the Arctic Circle). There will be 80 Iridium channels, each of which will support 2400-baud data rates. Communications will be time-division-multiplexed so that more than one voice conversation can be squeezed into a 2400-baud channel. Other companies, such as Alcatel, are also planning similar systems. One of the questions surrounding these global cellular systems will be that of capacity: eighty channels doesn't sound like much capacity for the whole world.

    One important dividend which is accruing now that computers are becoming fast enough to mediate speech and images in real time is that visual and audio data can be compressed and multiplexed, thereby using available bandwidth more efficiently.

    The telephone revolution will probably continue throughout our forecast period as wider and wider bandwidth SONET links become commonplace and cheap. (Oddly enough, we are devising successively better data compression schemes while at the same time widening available bandwidths.) We are probably heading toward a day when actual physical presence will often be replaced by "telepresence"—a you-are-there simulation of a remote environment and of your surrogate presence in that environment, with whatever impact that may have upon travel. Of course, much of this impact upon travel may already have occurred as a result of the invention of the telephone. Adding vision is merely another step in a continuing process.

    3.5 Voicewriters

    One staple of science fiction (see Isaac Asimov's "Second Foundation") has been the "voicewriter", which can transcribe the spoken into the printed word. There are problems with voice recognition, however. How is a computer to distinguish between homonyms such as "here" and "hear", or "weight" and "wait"? This requires contextual understanding rather than simple transliteration. Also, extraneous speakers may be a problem. How can you dictate in a crowded office? What happens if someone wanders up and speaks to you? Other problems are acronyms, proper names, and "made-up" words or words that you mispronounce. Also, what about "uhs" and "ahs"? Still another problem is that people may find it awkward to dictate using the kind of phraseology they would write. Finally, editing can probably best be done with a stylus and a keyboard.

    If these problems can be overcome, then voicewriters should become a reality during the latter 90s or in the early years of the next century. Voice recognition requires a fast digital signal processor and a lot of storage, but these should soon be available. There is a voice recognition program available today for a PC that can handle several hundred words. IBM is reportedly evaluating a 20,000-word speech recognition unit within the company. Burr-Brown is providing 100-to-600-word speech recognition vocabularies for its factory automation equipment. The voice recognition unit needs to be trained to recognize the speaker's voice but can then recognize words with >99% accuracy. There is an obvious telephone-company market here for operator-assistance automation, for hands-free control systems, and for office typing. The PC-based ability to recognize continuous speech spoken by multiple speakers with a 5,000-word vocabulary is projected for the mid-90s. The ability to take dictation with a large (20,000-word or greater) vocabulary is forecast for the year 2000.
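    The homonym problem can be illustrated with a toy sketch. The word pairs and scores below are invented purely for illustration; real dictation systems use far richer language models, but the principle is the same: score each candidate spelling against its context rather than transliterating sound alone.

```python
# Toy illustration of homonym disambiguation by context (not a real
# speech-recognition algorithm; the word pairs and scores are invented).
CONTEXT_SCORES = {
    ("can", "hear"): 3, ("you", "hear"): 2,
    ("over", "here"): 3, ("come", "here"): 2,
    ("your", "weight"): 3, ("please", "wait"): 3,
}

def disambiguate(prev_word, candidates):
    """Choose the candidate spelling best supported by the preceding word."""
    return max(candidates, key=lambda w: CONTEXT_SCORES.get((prev_word, w), 0))

# "I can hear you" vs. "come over here": same sound, different spellings,
# resolved only by the words around them.
```

    Even this crude rule picks "hear" after "can" and "here" after "over", which is the kernel of the contextual understanding a practical voicewriter would need.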

    Sometime within the next ten to fifteen years, you may find Smith Corona and Brother marketing voicewriters for $300 or $400 as successors to present-day word processors.

    3.6 Virtual Reality

    The term "virtual reality" refers to efforts to generate a user environment that is virtually indistinguishable from reality itself. Virtual reality technology has been developed and is being steadily enhanced for training simulators and cockpit displays where realistic reconstructions of hypothetical scenarios are essential. Its ultimate embodiment would be that portrayed on the Holideck of the Starship Enterprise. Unfortunately, in the 20th century, that level of capability lies beyond even the foreseeable.

    What could we do with virtual reality? One obvious application is entertainment. A system which allows you to enter "cyberspace" and, for example, to fly or to revisit historical events or to participate in dramas and romances would seem to be the ultimate dramatic experience, particularly if the computer could respond intelligently and realistically to situations and could heighten the illusion of reality.

    The technology of virtual reality is also closely linked to the teleoperation of remote devices*. For example, you might vicariously experience a walk upon the Moon or drill for oil under the ocean. Within 20 or 30 years, you might rent a robot from Budget Rent-a-Robot and visit the Smithsonian in Washington without ever leaving town. You might send your autocar to Krogers and then teleoperate your Waldo to pick up the week's groceries from your home.

    Another important application could be the networking of virtual reality stations. You might, for instance, enter a "cyberspace" in which a computer model of your place of business is reproduced (including scanned images of your office). You might seem to be looking into your real-world office (based upon actual video photography of it), with your desk, telephone (and later, your videophone), your file cabinets, bookcases, or whatever is in your real office. You would be able to "sit down" at your simulated desk, open your files and extract whatever you wanted from them, and begin working with them through your notepad computer. You would be able to "pick up" your telephone and call anyone you wanted by telling your computer the name of the person you wanted to call. If you received a call at work, it would be re-routed to your computer so that you could answer phone calls that came to your place of business (call forwarding). You might open the "door" of your "office" and walk down the "hall" outside your office (represented by scanned photographs of the real hall) to someone else's office and talk with them "face-to-face". (Scanned images of your co-workers who were on the net could be projected into the scene, doing and saying whatever they were doing and saying.) They would see you enter their simulated office, sit down in their simulated chair, and carry on a discussion with them as though you were in their office, even though, in reality, the two of you might actually be at home. Then both of you might go to a conference room to join other staff members in a meeting*. In that way, you could all go to work without ever leaving home.

    Will such things actually happen? Probably so, to one degree or another. It can probably be simulated today; these kinds of teleconferences are carried out routinely on TV. The principal missing ingredient is low-cost graphics processors fast enough to carry out the simulations, and their appearance should be only a matter of time. Right now, these systems require supercomputers and expensive, low-production-volume user-interface hardware. However, as technology advances, as computer networking increases, and as communications bandwidths improve, your PC will more and more become a window looking into a commonly shared "cyberspace". (This will be particularly true if 3-dimensional displays become common and if it becomes common to attach one or two video cameras to a PC so that it can be used as a videophone.)

    The reason I think that this kind of office simulation might allow workers to remain physically at home is that, for practical purposes, they would be at the office. A manager could see that they're not off fishing. When the Lisa/Macintosh displays were first introduced, high-resolution displays were too expensive to permit such an approach, but in the future, it will probably happen, and perhaps sooner rather than later. In any case, virtual reality software packages will probably appear throughout the 90s and will whet our appetites for more.

    Virtual-reality programs are becoming available for PCs and Macintoshes. For instance, there is a Macintosh program called "Virtus Walkthrough" which allows the user to generate rooms or a building and then "move" through them, viewing them at different locations and from different angles. A virtual-office or virtual-world operating system could well be the successor to the Macintosh/Windows graphical user interface paradigm*.

    The most-important and perhaps most-demanding technical challenge facing virtual-reality simulations is probably the graphics display. Each of your eyes has a field of view of about 135° by 135° and, moving only your eyes, you can see throughout a range of about 180° by 270°. Furthermore, you are capable of 20-to-40 arc-second resolution, which translates into 1,000,000,000 pixels or more over your entire field of view. By comparison, the current state of the art in color graphics displays is the 4,000,000-pixel Sony air traffic display mentioned in Section 2.9. Clearly, we have a long way to go.
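    The pixel estimate above follows from simple arithmetic, sketched below. The 30 arc-second figure is one representative value within the 20-to-40 arc-second range quoted above:

```python
ARCSEC_PER_DEGREE = 3600

def pixel_count(h_deg, v_deg, res_arcsec):
    """Pixels needed to cover a field of view at a uniform angular resolution."""
    horizontal = h_deg * ARCSEC_PER_DEGREE / res_arcsec
    vertical = v_deg * ARCSEC_PER_DEGREE / res_arcsec
    return horizontal * vertical

# The 180-by-270-degree eyes-only field at 30 arc-second resolution works
# out to roughly 7 x 10^8 pixels -- on the order of the billion cited above.
full_field = pixel_count(180, 270, 30)
```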

    Some of the virtual reality displays that have been used to date employ helmet-mounted displays. These offer the advantages of portability and the elimination of distractions as you maneuver through your virtual reality. Some problems with helmet-mounted displays are their weight and bulk and the extremely high pixel densities required to give high resolution in such small displays. 1,000-line helmet-mounted displays have been tested at the Wright-Patterson Air Development Center, but they are currently very expensive and their resolutions are far below the highest that the eye can resolve. Nor is there a known way to generate the 5,000-line-per-inch displays we need for the true counterfeiting of visual reality. Perhaps an as-yet-uninvented semiconductor device can provide the very high pixel densities needed for this application.

    The more likely developmental scenario is one in which conventional displays on conventional PCs gradually evolve. Virtual reality will probably be implemented with software, steadily improving conventional displays, and graphics accelerators, and may eventually become available in wraparound format. This in turn will spur emphasis upon virtual reality displays.

    As mentioned in Section 2.9, wraparound displays will probably be created using a projection system. One way this might be accomplished is through the use of a foveal display. The human eye has low resolution over almost all of its angular range, with high resolution only in the very narrow angular range subtended by the fovea. In principle, we can apply the same strategy to a display system. Such a system might be constructed using two projection displays mounted above one's head, pointing down. One display would function as a wide-angle, low-resolution display, while the other served as a very-narrow-angle, very-high-resolution display. The low-resolution beam would be deflected by a mirror on the viewer's head toward a hemispherical screen mounted in front of the viewer. The high-resolution beam would also be reflected toward the hemispherical screen but, using an eye-tracker, would be slaved to the direction of gaze of the viewer's eye. This is being attempted today, and two of the problems are the cost and idiosyncrasies of eye-trackers. (Settling time for the servoed mirror would seem to me to be a possible problem, though I'm unaware of whether it is.) The great advantage of a foveal display is that we could match the resolution of the human eye using displays with resolutions three orders of magnitude less than the 1,000,000,000 pixels required for a brute-force maximum-resolution display.
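    The savings from the foveal strategy can be checked with rough numbers. The 20-arc-minute peripheral resolution and 2-degree fovea below are my own illustrative assumptions, not measured values, but they show how the arithmetic delivers roughly three orders of magnitude:

```python
ARCSEC_PER_DEGREE = 3600

def pixels(h_deg, v_deg, res_arcsec):
    """Pixel count for a field of view at a uniform angular resolution."""
    return (h_deg * ARCSEC_PER_DEGREE / res_arcsec) * \
           (v_deg * ARCSEC_PER_DEGREE / res_arcsec)

brute_force = pixels(180, 270, 30)   # uniform full-acuity display, ~7e8 px
peripheral = pixels(180, 270, 1200)  # assumed ~20 arc-minute peripheral field
fovea = pixels(2, 2, 30)             # assumed 2-degree fovea at full acuity

# The two-projector foveal scheme needs only the sum of the two displays.
savings = brute_force / (peripheral + fovea)   # roughly 1000:1
```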

    3.7 Computers in the Office

    The last 12 years have certainly seen changes in commercial computer use. In 1979, a few small-business and professional users were just beginning to explore the uses of a personal computer. Today, there is a personal computer on almost every desk. Word processors have virtually replaced electric typewriters. Facsimile transceivers have become ubiquitous. Microprocessors are found in every major office device, in household appliances and in many toys. Supermarkets are now semi-automated. Electronic mail is becoming very popular because it obviates the requirement for time-consuming "nose-rubbing" that attends person-to-person communications. Voice mail is becoming popular and will undoubtedly become more so as computers come equipped with self-contained speakers and microphones, and local area networks (LANs) become widespread.

    I, and those around me, now prepare and "desk-top-publish" our reports—i.e., we no longer depend upon a secretary/typist. It is quicker and simpler to prepare a report on the computer from start to finish than it is to write out the report longhand, give it to a typist, and then proofread what the typist has typed. I even type ideas, notes, outlines, office paperwork, and telephone numbers into the computer—i.e., I think into the computer. This will probably become common during the next decade or so, particularly as speech-recognition equipment becomes effective, with secretaries assuming the role of administrative assistants.

    So what comes next?

    I think the 90s will be the decade of office networking and the automation of paperwork. It may also be the decade when videoconferencing and video-mail begin to replace personal presence. The technology for this basically exists. Within the next few years, you will probably be able to direct your computer to connect you with someone else sitting down the hall, and perhaps across the country. The computer will place the phone call over your LAN or over an ISDN circuit, dialing again if their line is busy (as fax machines do now). Then you will see them on your computer screen as well as hear them.

    Working from home may become more common, as we have described in Section 3.6.

    Want to make some money? Using state-of-the-art, off-the-shelf hardware, assemble a local video-conferencing/video-mail system using Ethernet, low-cost camcorder video cameras, video data compression boards from C-Cube or UVC, and multimedia boards which allow you to display TV on your PC. Market this as a "get-your-feet-wet" system to universities and companies who want to explore cutting-edge office technology. Meanwhile, try to assemble a lower-cost system designed for video communication over wide-band LANs. By 1995, digital HDTV equipment ought to be available which will help support video-telepresence technology.

    Want to make more money? Then help organizations set up automated office networks. Appointments and meetings will be set up through the net, as will project goals and reports. Paperwork will flow electronically through the net, perhaps collecting signatures through pen-based notepad PCs. Commonly shared documents will be available on the net, reducing the need for multiple copies.

    Still another way to make money is to establish a bulletin-board catalog of merchandise. Vendors would be charged an advertising fee to list their goods or services on the bulletin board. The list should include descriptions of the equipment and prices. It would save printing and mailing costs for the marketeers, and should allow the inclusion of much more information than a catalog can provide.

    One of the greatest problems in buying computer equipment is determining which piece of equipment to buy and where to buy it at the best price. Computer magazines try to review hardware and software for users, but locating and then reading the articles is a time-consuming proposition, after which the user has to make a number of phone calls to determine the best price. If this were set up on a bulletin board, with bottom-line, tabular-type information regarding the best choices in each category, it should attract a lot of customer interest. (The bulletin-board operator might even seek out the results of independent tests by large organizations that test hardware and software for their in-house clientele.)

    Beyond the year 2000, much will probably depend upon progress in artificial intelligence research. Speech recognition should be a common concomitant of word processors and other voice-command applications by 2010. Intelligent telephone dialers and answering machines should be available by or before that time. Context-sensitive, voice-input/output portable language translators should be common by 2010. Telepresence, and perhaps teleoperation of remote equipment, should be common by then.

    Unfortunately, there is probably no Rx for attendance at time-wasting meetings or the writing of justifications until sentient robots can replace humans in these sordid matters. (And then we would run the risk of discriminating against robots.)

    3.8 Computers in the Home

    "The more things change, the more they remain the same."

    When I look around our house, I see very little that has changed over the past 30 years. The microwave and the VCR are new, but everything else would seem "old hat" to a time traveler from 1961. Go back 50 years and the television set converts to a radio, and the air conditioner disappears from the home (but not from the movie theater). They both existed but weren't common in the household.

    My picture of the future is that visible change will probably be similarly subdued over the next 40 years. Most changes will probably occur subliminally, in the form of smarter appliances. My principal candidates for visible change will be the home-office-terminal-cum-virtual-reality station, the large-screen TV, the videophone, the automower/scrubber/waxer/sweeper/vacuum, and perhaps, anthropomorphic robots. General-purpose robots capable of replacing a househusband or housewife will be a tall order, and if they ever come to be, will open a Pandora's Box of long-range complications. (Equal rights for robots? Do robots eventually take over? Shades of "R.U.R."!) However, if they ever arrive, the changes should be profound, not only in the home but throughout society as humans are displaced by robots, which brings us to the subject of robots and artificial intelligence.

    3.9 Robots and Artificial Intelligence

    In the fifties and sixties, there was much talk about robotics and artificial intelligence (AI). Many grants were received and many papers were written that had such words in their titles as "theorem-proving", "heuristic programming", "automatic programming", "automata theory", and "decision trees". Unfortunately, the problem of developing self-organizing machines turned out to be a lot tougher than it had first seemed. Eventually, funding dried up and the paper titles became much more modest. To the best of my knowledge, we are still waiting for the breakthrough that will allow computers to learn like human infants. However, though rates of progress are difficult to quantify, it appears that solid progress has been made toward less ambitious goals, and that useful AI applications are appearing in the marketplace. Much depends upon software innovation, which is difficult to project. However, it is also clear in retrospect that over the last thirty years, progress has been restrained by hardware limitations, irrespective of whatever software gains have been made. As these hardware limitations are lifted, AI and robotics should benefit in major ways.

    Expert systems are already proving to be highly useful in the marketplace. They can often outperform human experts within their narrow areas of expertise.

    Neural network chip sets are being developed by most major semiconductor vendors for various kinds of applications that require adaptive behavior. Kanji character recognition and intelligent vision are two applications which may profit from neural networks.

    Fuzzy logic chips, which can make qualitative as well as quantitative decisions, are appearing in TV sets, camcorders and microwave ovens. The Japanese are reputedly using fuzzy logic to aid in the parallel parking of cars.

    Genetic learning, in which optimal behavior evolves in response to environmental challenges, is in its early stages.

    During the 1990s, expert systems should become widespread and perhaps invisible components of control systems and various types of advisory computer programs. Neural networks and fuzzy logic chips will also probably appear as components in other devices. Intelligent databases will probably play an important role in concert with "knowbots" in extracting information that you might want from a sea of data.

    Between now and the year 2000, intelligent tutoring systems will probably become available to teach a variety of subjects better than a human tutor can present them. Today's Reader Rabbit will probably give way to interactive games that will entertain and teach at the same time. Algebra, trigonometry, analytic geometry and calculus may be taught this way, allowing students to learn and play at the same time.

    MIT has developed a robotic mail delivery system. Specialized factory robots are common. During the summer of 1990, Radio-Electronics published plans for building an automatic lawn-mowing robot called "The Lawn Ranger". "The Lawn Ranger" uses a simple scheme to guide it around the lawn. The user mows an outline strip around the edge of the section of grass to be mowed. "The Lawn Ranger" then continues in a spiral pattern until it reaches the center of the section. There, it shuts itself off. Its simple guidance system utilizes infrared light-emitting diodes and photocells mounted between the teeth of a comb at the front of the mower to generate a signal which steers it in whichever direction keeps grass between the teeth.
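    The grass-comb guidance idea can be sketched in a few lines. This is a hypothetical proportional rule of my own, not the actual Lawn Ranger circuitry: steer toward the centroid of the comb positions where grass is detected, which keeps the uncut edge centered under the mower.

```python
def steer(grass_hits):
    """Steering command from the comb's infrared grass detectors.

    grass_hits -- booleans, one per gap between comb teeth (True = grass
    detected there), ordered left to right across the comb.  Returns a
    value in [-1, +1] (negative = steer left, positive = steer right),
    or None if no grass is seen, in which case the mower should stop.
    """
    if not any(grass_hits):
        return None
    n = len(grass_hits)
    # Centroid of the detections, mapped from tooth index to [-1, +1].
    centroid = sum(i for i, hit in enumerate(grass_hits) if hit) / sum(grass_hits)
    return 2.0 * centroid / (n - 1) - 1.0
```

    With grass only under the right half of the comb the command is positive (steer right); with grass across the whole comb it is zero, and the mower tracks straight along the uncut strip.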

    Between now and 2000, automated lawn mowers will probably appear, first in commercial applications, where labor costs make lawn-mowing very expensive and warrant high automower price tags, and later as consumer appliances. The key to a successful automower probably lies in setting up sensors at the corners of the yard that is to be mowed, with a rotating infrared or radio-frequency beacon on top of the mower. The automower's position could then be determined by triangulating on these beacons, using times of detection to calculate angles. The automower's computer would be trained once, at which time it would store a terrain map of the yard, including locations of trees, flower beds, and other fixed obstructions.

    Unpredicted obstructions in a yard, like a child, a rock, a tree branch, a bike or a parked car, would be sensed using onboard tactile, acoustic or machine-vision sensors, and appropriate halt or mow-to-the-right-around-it maneuvers would be triggered. When a tree or other object interrupted one or another of the beacon signals, the mower would keep going until it picked up the missing signal, perhaps using the simple grass detector used by "The Lawn Ranger". The automower would probably profit from a rain detector, so that it would have enough sense to come in out of the rain. One problem would be dealing with a piece of nylon rope or wire lying on the lawn. Someone can probably earn a lot of money, not to mention the undying gratitude of every homeowner in the civilized world, by developing and marketing such an automower.

    Another gadget that has been announced in the month between the time this paper was first written and this first update is the "Scout-About". This is a little R2D2-shaped plastic watchdog that roams about your house, moving every 20 minutes, listening for burglars. It is equipped with heat- and motion-detectors, and apparently houses a microphone which can detect the sound of breaking glass 150 feet away. It will be made by Samsung and will retail for less than $1,000. If it detects an intruder, it will silently call the police (or whomever you want it to notify). If you have been planning on a career in burglary, you have probably waited too long.

    One of the problems with robotics is the question of what we define to be a robot. Is an automatic washer a robot? Is a dishwasher a robot? Simple robots are very much with us in the home, the car, and the factory in the form of specialized machinery or smart appliances. On the other hand, anthropomorphic robots are still no more than a curiosity.

    One promising commercial area in the robotics field might be to market modular kits that contain low-cost components needed to assemble a remotely- or computer-controlled automaton. The pieces and parts must exist, since they are manufactured for radio-controlled models and other toys. What is needed is to pull them together so that hobbyists and small companies can obtain kits that will allow them to easily assemble robotic devices. The Heathkit subsidiary of Zenith (in Benton Harbor, MI) has pioneered in this vein with their Hero series of robots, but sale of low-cost parts and modules would probably stimulate more robotics interest and activity.

    3.10 Intelligent Vision

    The present state of machine vision is not very impressive. Computer-vision systems are primarily successful when dealing with highly structured environments, examining familiar patterns of horizontal and vertical lines—e.g., semiconductor chips. One reviewer has estimated that machine vision speeds would have to improve by a factor of about 1000:1 to rival the human eye. Other estimates put the requisite speed improvements even higher. However, more is required for vision than raw speed. Context sensitivity and rudimentary intelligence are important to vision, and the term "intelligent vision" recognizes this requirement. Motorola has developed a line of intelligent vision systems called Applied Intelligent Systems as a part of its suite of Flexible Factory Automation Tools (FFAT). At the same time, machine vision is already an important constituent of automation systems worldwide. The electronics inspection market invested $386,000,000 in automated inspection equipment in 1990.

    Based upon our forecasts for microprocessor and digital signal processor speed enhancements, I'm going to guess that intelligent vision systems will begin to approach the recognition capabilities of the human eye by 2000, and will surpass the eye in specialized situations by 2010. Automatic target recognition circuitry will be universal in "brilliant weapons" and smart sensors by 2010. By 2020, excellent intelligent vision systems should be available at a low enough price to permit incorporating them into multi-purpose robots, automobiles, aircraft, and boats. By 2030, they should be cheap enough to be found in toys.

    3.11 Automotive Impact

    Computer technology has already had a significant impact upon cars and trucks, controlling engines, transmissions, adaptive suspensions, and anti-skid brakes. There are cruise controls, headlight-on warnings, key interlocks, auto-locking doors, four-door locking and unlocking systems that won't lock the car if the keys are in the ignition, remote keyless entry systems, maintenance diagnostic systems, air bags, and "smart" instrumentation.

    Among the near-term automotive innovations are additional air bags to protect passengers from the sides and the rear. Active suspensions, adaptive braking, and active noise cancellation are among the innovations under development. Alternative-fuel and ceramic engines are receiving justifiable attention from automotive manufacturers. Computer-coupled radar, sonar, or infrared collision-avoidance systems (including protection from rear and side collisions) and computer-assisted driving aids are in an experimental stage. Intelligent highways are appearing which monitor traffic flow and warn motorists of potential traffic snarls, road-construction bottlenecks and traffic hazards. Onboard maps, road atlases and navigation aids will become more probable as low-cost color LCDs, PCs-on-a-chip, and non-volatile memory continue to drop in price. Local weather forecasts might be useful. Voice-activated controls might become cheap enough and useful enough to be worthwhile. (During the past month, Sanyo has announced a voice-controlled compact disk changer for cars. You'll have to want it pretty badly to pay the more-than-$1,000 price that it will initially cost, but it is undoubtedly a harbinger of things to come.) Audio alarms for low oil level, overheating, or alternator malfunction may be present. All these innovations should become prevalent within the current decade.

    Down the road, automatic highways or slot-car-like guideways would seem to be inevitable. From a consumer's standpoint, they should render self-contained vans or motorhomes quite popular, since a family could sleep while the computer drove them through the night. Gas stations might be modified to permit the automatic refueling of vehicles. Protective guideways which would prevent cars and trucks from skidding sideways (though not from sliding forward) if they came upon a patch of slippery pavement would seem to be the best approach to automatic highways. Even if automatic highways don't appear, expert driving systems may replace human operators on conventional roads, particularly if coupled with intelligent highways which can warn of hazardous driving conditions. Their greatest impact might be felt by the trucking industry as driving became entirely computer-controlled (with manual override for secondary roads). Over a period of 20 or 30 years, truck drivers might gradually disappear. It is clear that freight transport will continue to be a ground-based function because of the weights involved, whereas passengers might eventually travel by air.

    The period between 2000 and 2010 is my guess as a plausible time frame for the appearance of automated highway operation, at least on an experimental basis. One of the problems with this scenario is the discomfort one might feel driving one's car on a highway containing first-generation driverless trucks. It might be somewhat less anxiety-producing if there were a guideway which restricted the truck to operation near the center of its lane. An alternative to mixing car and truck traffic on the same "autoway" might be to use old railroad rights-of-way for truck guideways.

    3.12 Personal Aircraft

    One of the spinoffs of the computer revolution might sooner or later be the practical, low-cost, family aircraft. It is clear that for short to moderate trips, ground travel times can't be reduced much below what they are right now. High speed transportation systems, including airlines and maglev trains, are constrained by ground-based delays to transit times which render them marginal for trips shorter than 200 miles. It appears to me that only a portal-to-portal personal air transportation system—the air-car—can break these bottlenecks.
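    The trip-time argument above can be made concrete with a rough door-to-door comparison. All of the numbers below (cruise speeds, ground-overhead delays, a hypothetical 200-mile trip) are illustrative assumptions of mine, not figures from any study:

```python
# Illustrative door-to-door trip-time comparison for a 200-mile trip.
# All speeds and overhead delays are assumptions, not measured data.

def door_to_door_hours(distance_miles, cruise_mph, ground_overhead_hours):
    """Total trip time: fixed ground-based delays plus time spent at cruise speed."""
    return ground_overhead_hours + distance_miles / cruise_mph

trip = 200  # miles
car     = door_to_door_hours(trip, cruise_mph=55,  ground_overhead_hours=0.0)
airline = door_to_door_hours(trip, cruise_mph=500, ground_overhead_hours=3.0)
aircar  = door_to_door_hours(trip, cruise_mph=150, ground_overhead_hours=0.25)

print(f"car: {car:.1f} h, airline: {airline:.1f} h, air-car: {aircar:.1f} h")
# → car: 3.6 h, airline: 3.4 h, air-car: 1.6 h
```

    Under these assumptions the airliner barely beats the automobile at 200 miles, because its fixed ground delays swamp its speed advantage, while a portal-to-portal air-car roughly halves the trip time even at a modest cruise speed.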

    Some of the problems that have thwarted the development of personal aerial transportation are:

    a. the inability to fly under all weather conditions,

    b. the danger of crashing because of pilot error or equipment malfunction,

    c. the need for expensive, periodic inspection and overhaul,

    d. air traffic congestion and control problems,

    e. the need for a runway,

    f. the high cost of avionics, and

    g. noise in a quiet neighborhood.

    These are formidable problems and they may or may not be solvable within the next 40 years. The contributions that onrushing computer technology might make to low-cost aircraft seem to me to be:

    a. Expert autopilots and active ("fly by wire") control systems which can outperform human pilots.

    Such autopilots and active control systems exist for military aircraft such as the B-2 and for forthcoming commercial aircraft such as the Airbus A330, but they will need to become orders of magnitude cheaper, and perhaps more capable, if they are to replace human pilots under all weather conditions and in tight quarters. They would need to be able to take off and land with precision from a backyard or a neighborhood runway under gusty conditions, and in the presence of wind shear and perhaps even microbursts, if this is possible. They would also need to be fail-safe and able to monitor flight-hardware status during flight.

    b. Built-in diagnostic equipment and/or robot inspection systems that might someday be able to perform today's labor-intensive aircraft inspections, thereby reducing the cost of this process.

    c. The high cost of avionics

    This is a problem that I would expect to see diminish. LORAN receivers for boats have declined in price from several thousand dollars to several hundred dollars over the past ten years. A low-cost (under $250) GPS satellite receiver of the type under development for trucks and automobiles could provide accurate position information both horizontally and vertically. Other light-aircraft instrumentation should drop in price as solid-state sensors and digital electronics become cheap, cheap, cheap!

    d. Air traffic congestion

    With skies full of personal aircraft, air traffic congestion would be orders of magnitude greater than it is today. Air traffic control is currently being semi-automated. The kind of air traffic control which personal aircraft would necessitate would have to be fully automated, and would have to depend upon autonomous autopilots to eliminate potentially fatal pilot error in such crowded skies.

    Other problems, such as noise abatement, could be severe. Imagine the noise levels if the sky were full of personal aircraft!

    I'm not saying it will be easy!

    3.13 Teleoperation and Unmanned Vehicles

    In my opinion, teleoperated unmanned vehicles are an idea whose time has come. Teleoperated unmanned ground vehicles have very exciting possibilities. Today, we are constrained by the very-limited availability and high cost of wide-band communications channels. However, between the developing availability of wide-band telecommunications links (e.g., SONET and additional satellite links) and foreseeable improvements in data compression, teleoperation may become a major industry. The example of Budget Rent-a-Robot may be a bit far out, but teleoperated equipment is already in service for hazardous tasks such as deep-sea diving. As virtual reality concepts become more prevalent, a partial substitution of telepresence for travel may be accepted as a cost-saving measure akin to the telephone or the (hypothetical) videophone.

    In the military arena, we have already developed unmanned weapons systems in the form of smart weapons and the cruise missile. "Brilliant" weapons are currently under development, and their impressive role in the U. N.-Iraq war should do nothing but underscore their military value.

    There appear to be no technological barriers prohibiting the development of teleoperated unmanned ground vehicles. The Unmanned Ground Vehicle (UGV) Joint Project Office, located at Redstone Arsenal, Alabama, is currently sponsoring the integration, product engineering, and field testing of 14 surrogate (manned or unmanned) teleoperated vehicles (STVs) that will permit the determination of the kinds of problems and issues which always arise when a new type of vehicle is fielded for the first time. These STVs transmit wideband video data through fiber-optic cables paid out by the vehicles as they move away from their control stations. This may sound cumbersome and vulnerable, but in practice the fiber-optic cables are fairly rugged, jam-proof, and reasonably well suited to short-range UGV operations. A radio-frequency backup link is available and will be used for training purposes, though not for operational use because of the difficulty of getting a worldwide bandwidth allocation. However, improvements in data compression hardware, coupled with semi-autonomous operation, should alleviate this problem by the year 2000. Of course, in wartime, unmanned vehicles might be assigned the bandwidths that would otherwise be utilized by manned vehicles. Looking farther ahead to the year 2010, teleoperated vehicles may be dropped behind enemy lines, with a narrow-beam airborne link to a parachuted control center tied to the teleoperated vehicles through fiber-optic cables. These teleoperated devices would not only keep soldiers out of harm's way but would allow them to take risks, and to intimidate an enemy, in ways that manned vehicles cannot.
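    To see why data compression matters so much for the RF backup link, a rough bit-rate calculation helps. The resolution, bit depth, frame rate, and 50:1 compression ratio below are my own illustrative assumptions, not figures from the UGV program:

```python
# Rough link budget for teleoperation video (illustrative assumptions:
# 512 x 480 monochrome imagery, 8 bits/pixel, 30 frames/s, 50:1 compression).

pixels_per_frame = 512 * 480
raw_bps = pixels_per_frame * 8 * 30   # uncompressed bit rate, bits/second
compressed_bps = raw_bps / 50         # assumed 50:1 compression ratio

print(f"raw: {raw_bps / 1e6:.0f} Mbit/s, compressed: {compressed_bps / 1e6:.2f} Mbit/s")
# → raw: 59 Mbit/s, compressed: 1.18 Mbit/s
```

    A raw video stream of tens of megabits per second is hard to fit into a contested radio allocation, but a compressed stream of roughly a megabit per second is a far more tractable request, which is why compression hardware is the enabling technology here.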

    One can imagine retrofit kits that can convert manned military vehicles to unmanned operation.

    There is probably a commercial market for low-cost unmanned ground vehicles for police work, offshore patrolling, guard duty, or the fighting of fires—e.g., forest fires.

    The Unmanned Aerial Vehicle (UAV) Joint Program Office, located in Crystal City, Virginia, is developing a family of UAVs for use in the 1990s and the early 21st century. These UAVs are excellent candidates for semi-autonomous, and perhaps fully autonomous, operation, which steadily improving artificial-intelligence capabilities can be expected to facilitate. Since they operate from the air, the situations UAVs face tend to change more slowly during a mission than those facing ground-based vehicles. By or shortly after the year 2000, automatic target recognition (ATR) should contribute to all classes of military sensors as ATR becomes faster and lower-priced. The anticipated progress in aircraft electronics discussed in the previous section, coupled with improvements in design and materials such as high-strength-to-weight-ratio joined-wing concepts, low-drag flying wings, and efficient, lighter-weight two-cycle engines, should support numerous improvements in UAVs as block upgrades are made. Between now and 2010, the expected revolution in video data compression techniques should help get video imagery home to ground control stations, as well as reducing vulnerability to jamming.

    Perhaps the best-known example of an unmanned aerial vehicle is the aforementioned cruise missile.

    There would appear to be a wider market for UAVs than the military market alone. Drug interdiction, border patrol, forest surveillance, and traffic monitoring are some of the proposed civilian applications. Another application I could imagine would be the testing of advanced aircraft concepts. UAVs can be built small to reduce the ground hazard from falling debris, and they might be used to test the personal-aircraft concept (Section 3.12) without risking human lives.

    4.0 Summary

    How likely is it that these fantastic-seeming developments will come to pass?

    I feel comfortable about most of the predictions for the year 2000. They are based upon plans and programs that are already in the corporate pipelines. I would expect that, perhaps, 80% of them would be realized during the next nine years. I am less sanguine about predictions for 2010, but the inertia of the programmed developments between now and the year 2000 may lend them some credence. Let's give them a 60% probability of occurrence. For the year 2020, the outlook is still more nebulous, and I would expect, perhaps, 40% of my predictions (give or take 20%) to reach fruition. Beyond 2020, things become awfully "iffy". For 2030, I would expect, perhaps, a 20% hit ratio. Much depends upon whether some of the technological barriers that are looming in front of us 10 or 20 years hence can be surmounted.
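    The hit ratios above translate directly into an expected score for the forecast as a whole. The probabilities come from the text; the per-horizon prediction counts below are hypothetical placeholders, since I have not tallied the predictions:

```python
# Expected number of predictions realized per forecast horizon.
# Hit probabilities are from the text; the prediction counts per
# horizon are hypothetical, for illustration only.

hit_prob = {2000: 0.80, 2010: 0.60, 2020: 0.40, 2030: 0.20}
n_predictions = {2000: 10, 2010: 10, 2020: 10, 2030: 10}  # assumed counts

expected_hits = {year: hit_prob[year] * n_predictions[year]
                 for year in hit_prob}
print(expected_hits)  # → {2000: 8.0, 2010: 6.0, 2020: 4.0, 2030: 2.0}
```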

    My greatest concern, now that a month has passed and a few of the devices that I forecast years down the road have already been announced, is that my applications forecasts are too conservative. There are clever ways to deploy the technology existing at any given time to accelerate the introduction of avant-garde devices; the Samsung "Scout-About" is a case in point.

    Between now and the year 2000, it would seem reasonable to me to expect to see: