A computer at the University of Illinois is simulating something no one has ever seen: the evolution of the universe in the aftermath of the Big Bang. Re-creating conditions that may have prevailed billions of years ago, the computer reveals on a remote screen how massive clouds of subatomic particles, tugged by their own gravity, might have coalesced into filaments and flattened disks. The vivid reds, greens and blues of the shapes are not merely decorative but represent the various densities of the first large structures as they emerged from primordial chaos in the near vacuum of space.
At the Massachusetts Institute of Technology, another computer is struggling to learn what any three-year-old child already knows: the difference between a cup and a saucer. What the youngster sees at a glance, the computer must be taught, painstakingly, one step at a time. First it must comprehend the concept of an object, a physical thing distinguished from the space around it by edges and surfaces. Then it must grasp the essential attributes of cupness: the handle, the leakproof central cavity, the stable base. Finally, it must deal with the exceptions, like the foam-plastic cup whose heat-insulating properties are so good that it does not need a handle.
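The step-by-step instruction described here maps naturally onto a rule-based program. The sketch below is a hypothetical illustration of that style of teaching, not the MIT system itself; all names and attributes are invented for the example. It encodes the three stages in order: an object distinguished from its surroundings, the essential attributes of cupness, and the exception for the handleless foam cup.

    # A minimal, hypothetical sketch of rule-based concept definition,
    # assuming the three stages described in the article.
    from dataclasses import dataclass

    @dataclass
    class PhysicalObject:
        """Stage 1: a thing set apart from the space around it."""
        has_edges_and_surfaces: bool

    @dataclass
    class Vessel(PhysicalObject):
        """A candidate object with the attributes a cup rule might test."""
        has_handle: bool
        leakproof_cavity: bool
        stable_base: bool
        heat_insulating: bool = False

    def is_cup(obj: Vessel) -> bool:
        # Stage 2: the essential attributes of cupness.
        essentials = (obj.has_edges_and_surfaces
                      and obj.leakproof_cavity
                      and obj.stable_base)
        if not essentials:
            return False
        # Stage 3: the exception -- a foam-plastic cup insulates so
        # well that it needs no handle.
        return obj.has_handle or obj.heat_insulating

    # A china cup and the handleless foam cup both pass; a saucer,
    # lacking a leakproof cavity, does not.
    china = Vessel(True, has_handle=True, leakproof_cavity=True,
                   stable_base=True)
    foam = Vessel(True, has_handle=False, leakproof_cavity=True,
                  stable_base=True, heat_insulating=True)
    saucer = Vessel(True, has_handle=False, leakproof_cavity=False,
                    stable_base=True)
    print(is_cup(china), is_cup(foam), is_cup(saucer))  # True True False

Even this toy version hints at why the task is painstaking: every attribute, and every exception to every attribute, must be spelled out explicitly.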
These experiments illustrate the paradox at the heart of today's computer science. The most powerful computing machines -- giant number crunchers possessed of speed and storage capacities beyond human comprehension -- are essentially dumb brutes with no more intellectual depth than a light bulb. At the other extreme are computers that have begun to exhibit the first glimmers of human-like reasoning, but only within the confines of narrowly defined tasks.
For 40 years scientists have labored to make headway at these two frontiers of computer research. One group, working with the lightning-fast machines known as supercomputers, pushes relentlessly for more raw power, more blazing speed. The other group, writing programs that show the rudiments of artificial intelligence, explores the mysteries of human thought. Each of these two grand scientific enterprises, backed by billions of research dollars and blessed with some of the century's best minds, has proceeded as if the other did not exist.
But there are signs that the two broad avenues of computer research may be starting to converge, that today's most advanced machines may someday evolve into electronic brains that are not just incredibly fast but smart as well. The quest has been taken up by almost every major nation. And no wonder: the potential rewards -- in industrial productivity, scientific research and national security -- are staggering. Grown men glow with childlike excitement when they describe robots that will see their way around a factory, typewriters that will take dictation, defense systems that will make the world safe from nuclear arms.
