Technology: Fast and Smart

Designers race to build the supercomputers of the future

The two fields of computer research are at different stages in their life cycles. Artificial intelligence is just getting started: the first commercial projects appeared less than five years ago, and are now finding widespread application (see following story). The supercomputer manufacturers, on the other hand, having supplied high-speed processors to government labs and intelligence agencies for a quarter-century, are now experiencing a growth so explosive that it has taken even the most optimistic industry leaders by surprise. Sales of the machines, which cost $5 million to $25 million each, have increased 25% a year or more over the past decade, and in 1988 will pass the $1 billion-a-year mark for the first time.

Some 300 supercomputers now work at tasks as diverse as ferreting out oil deposits, analyzing muscle structures and creating special effects for Hollywood films. With the spread of supercomputer networks, high-speed computing power is available to anyone with a personal computer and a telephone hookup. "The world will never be the same," says Doyle Knight, director of the John von Neumann National Computer Center in Princeton, N.J. "Soon every industry, every science, every walk of life will in some way be touched by supercomputing."

Speed and power are what distinguish supercomputers from their humbler cousins. In the early days of the industry, speed was measured in thousands of FLOPS, an acronym for floating-point operations per second -- calculations on numbers whose decimal point can shift to represent very large or very small values. Today's largest machines are measured in gigaFLOPS, or billions of operations a second. Tomorrow's will be measured in teraFLOPS, trillions of operations a second. A single supercomputer going at teraFLOPS speed will have the power of 10 million personal computers working at full throttle.
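The arithmetic behind that comparison can be sketched in a few lines of Python. The figures below come straight from the passage; the per-PC speed is simply what the 10-million-to-one ratio implies, not a measured benchmark.

```python
# Unit prefixes from the article, in floating-point operations per second (FLOPS)
KILO = 1_000               # early machines: thousands of FLOPS
GIGA = 1_000_000_000       # "today's" largest machines: gigaFLOPS
TERA = 1_000_000_000_000   # "tomorrow's" machines: teraFLOPS

tera_machine = 1 * TERA    # one supercomputer running at teraFLOPS speed
pc_count = 10_000_000      # "10 million personal computers" per the article

# Implied speed of a single personal computer at full throttle
flops_per_pc = tera_machine / pc_count
print(flops_per_pc)  # 100000.0 -- about 100 kiloFLOPS per PC
```

In other words, the article's comparison assumes a late-1980s personal computer manages on the order of 100,000 floating-point operations a second, which is also why the earliest supercomputers were rated merely in "thousands of FLOPS."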

The most powerful supercomputers are surprisingly small and sleek, some not much bigger than a California hot tub. But looks can be deceiving. Supercomputers often squeeze out the last bit of processing speed by shrinking the distances electrons have to travel within their wiring. They are tightly packed workhorses that require a whole array of supporting equipment. Some employ full-size mainframe computers just to shuttle programs in and out of their processing units. The machines may be connected, by cable or satellite, to hundreds of remote terminals that can transform raw numerical output into stunning 3-D graphics. They often need industrial-size refrigeration units to keep the rush of electronic signals within them from melting down their circuitry. The thermal output of the University of Minnesota's supercomputers is used to heat a garage.

For most of the supercomputer era, the market for the most powerful machines has been dominated by one firm, Cray Research of Minneapolis. With 178 of its distinctive C-shaped models installed around the world, Cray accounts for 60% of all the supercomputers sold. The closest competitor, located directly across the Mississippi River in St. Paul, is the company from which Cray split off in 1972: Control Data Corp. CDC, which in 1983 created a supercomputer subsidiary called ETA Systems, is holding steady with a 12.7% market share. Coming up quickly is a trio of Japanese manufacturers -- NEC, Hitachi and Fujitsu -- that entered the supercomputer race in 1983 and has since captured 23% of the world market.
