Though chess has lately been the best-publicized measure of a machine's humanity, it is not the standard gauge. That was invented by the great British computer scientist Alan Turing in a 1950 essay in the journal Mind. Turing set out to address the question "Can machines think?" and proposed what is now called the Turing test. Suppose an interrogator is communicating by keyboard with a series of entities that are concealed from view. Some entities are people, some are computers, and the interrogator has to guess which is which. To the extent that a computer fools interrogators, it can be said to think.
At least that's the way the meaning of the Turing test is usually put. In truth, midway through his famous essay, Turing wrote, "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." His test wasn't supposed to answer this murky question but to replace it. Still, he did add, "I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."
Guess again. With the century's end in sight, no machine has consistently passed the Turing test. And on those few occasions when interrogators have been fooled by computers, the transcripts reveal a less-than-penetrating interrogation. (Hence one problem with the Turing test: Is it measuring the thinking power of the machines or of the humans?)
The lesson here--now dogma among researchers in artificial intelligence, or AI--is that the hardest thing for computers is the "simple" stuff. Sure, they can play great chess, a game of mechanical rules and finite options. But making small talk--or, indeed, playing Trivial Pursuit--is another matter. So too with recognizing a face or recognizing a joke. As Marvin Minsky of the Massachusetts Institute of Technology likes to say, the biggest challenge is giving machines common sense. To pass the Turing test, you need some of that.
Besides, judging by the hubbub over the Kasparov match, even if computers could pass the test, debate would still rage over whether they think. No one doubted Deep Blue's chess skills, but many doubted whether it was a thinking machine. It uses "brute force"--zillions of trivial calculations, rather than a few strokes of strategic Big Think. ("You don't invite forklifts to weight-lifting competitions," an organizer of exclusively human chess tournaments said about the idea of man-vs.-machine matches.) On the other hand, there are chess programs that work somewhat like humans. They size up the state of play and reason strategically from there. And though they aren't good enough to beat Kasparov, they're good enough to leave the average Homo sapiens writhing in humiliation.
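To see what "brute force" means here, consider a minimal sketch--not chess, but the far simpler game of Nim (take one to three stones per turn; whoever takes the last stone wins). The program below is a toy illustration of my own, not Deep Blue's actual method: it wins not by strategic insight but by mechanically trying every legal move and every reply, all the way to the end of the game.

```python
def can_force_win(stones):
    """True if the player to move can force a win in simple Nim
    (take 1, 2, or 3 stones per turn; taking the last stone wins).

    Brute force: no strategy, no insight--just exhaustively search
    every line of play down to the final stone.
    """
    if stones == 0:
        return False  # the previous player just took the last stone
    # Try every legal move; a move wins if it leaves the opponent
    # in a position from which *they* cannot force a win.
    return any(not can_force_win(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

A human quickly spots the pattern (you lose exactly when the pile is a multiple of four); the program never notices it, yet plays perfectly anyway. Scale that tree up by many orders of magnitude and you have the spirit of the forklift objection.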
Further, much of the progress made lately on the difficult "simple" problems--like recognizing faces--has come via parallel computers, which mirror the diffuse data-processing architecture of the brain. Though progress in AI hasn't matched the high hopes of its founders, the field is making computers more like us, not just in what they do but in how they do it--more like us on the inside.