IBM is not relying solely on Chen, however. As the supercomputer market reaches the magic $1 billion-a-year figure that has traditionally been the company's threshold of interest, IBM has at least six different supercomputer efforts under way, although some are primarily research projects. One experiment involves a special-purpose computer called GF-11 that fills an entire 500-sq.-ft. room. Another computer, called RP-3, will consist of eight 8-ft. cubes arranged like a giant merry-go-round in a 35-ft. ring. But even these machines will be dwarfed by IBM's most ambitious supercomputer, the TF-1, a behemoth whose specifications include 4,000 miles of internal wiring, 33,000 high-speed processing units and a single switching device measuring 80 ft. in diameter. When completed, the TF-1 should be capable of top speeds 2,000 times as fast as today's supercomputers.
IBM's real concern in the supercomputer market may be not Cray Research but Hitachi, Fujitsu and NEC. With their first generation of supercomputers, the Japanese made clear their intention to wipe out America's 25-year lead. Today their fastest machines compare favorably with any supercomputer made in the U.S. In some applications they outperform the most advanced U.S. models. During a test comparing the newest single-processor Hitachi S-820/80 and a two-processor Cray X-MP, the Hitachi machine beat the Cray by about 10 to 1. Says Yukihiko Karaki, a professor at Senshu University in Tokyo: "Looking at these figures, one might say that Japanese users can do without Cray supercomputers."
To date the Japanese have concentrated on speeding up the performance of their fastest processing chips. As a result, they now make the world's most powerful single-processor supercomputers. But they have not, so far, begun linking large numbers of individual processors together. It is there, in parallel processing, that the U.S. still has the edge over the Japanese. A handful of small American manufacturers, including Bolt Beranek and Newman, NCUBE and Ametek Computer Research, have already started marketing parallel machines that can zip through equations at such blistering speeds that they threaten to put conventional supercomputers on the endangered list.
The sticking point with parallelism, however, is the software. Tens of thousands of man-years have been put into writing programs for traditional supercomputers. "Going parallel means starting over," says Thomas Nash at the Fermi National Accelerator Laboratory. That is why the news from Sandia last week was so important. It confirmed that there are dramatic increases in speed to be achieved by breaking large problems into small pieces and solving them simultaneously. Says David Kuck, Chen's former professor at the University of Illinois: "What's going to happen in the next decade is that we'll figure out how to make parallelism work."
