According to the model, our brain subconsciously generates competing theories about the world, and only the "winning" theory becomes part of consciousness. Is that a nearby fly or a distant airplane on the edge of your vision? Is that a baby crying or a cat meowing? By the time we become aware of such images and sounds, these debates have usually been resolved via a winner-take-all struggle. The winning theory--the one that best matches the data--has wrested control of our neurons and thus of our perceptual field.
As a scientific model, pandemonium has virtues. First, it works; you can run the model successfully on a computer. Second, it works best on massively parallel computers, whose structure resembles the brain's structure. So it's a plausible theory of data flow in the human brain, and of the criteria by which the brain admits some data, but not other data, to consciousness.
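That winner-take-all contest is simple enough to sketch in a few lines of code. What follows is a minimal Python illustration, not the model itself: the hypothesis names, the feature templates, and the matching rule are all hypothetical, invented for the example. Each "demon" champions one interpretation of the incoming evidence, and only the best match is reported.

    # A minimal, illustrative sketch of a pandemonium-style winner-take-all
    # competition. Demon names, features, and the scoring rule are invented
    # for this example, not drawn from any published model.

    def score(template, evidence):
        """Count how many observed features match a demon's template."""
        return sum(1 for feature in template if feature in evidence)

    # Each "demon" champions one hypothesis about the incoming sound.
    demons = {
        "baby crying": {"rising pitch", "irregular rhythm", "vocal timbre"},
        "cat meowing": {"rising pitch", "short bursts", "feline timbre"},
        "siren":       {"rising pitch", "steady rhythm", "mechanical timbre"},
    }

    # Hypothetical sensory evidence arriving from the ears.
    evidence = {"rising pitch", "irregular rhythm", "vocal timbre"}

    # Every demon shouts; only the loudest -- the hypothesis that best
    # matches the data -- reaches "consciousness." The losers are discarded.
    winner = max(demons, key=lambda name: score(demons[name], evidence))
    print("Perceived:", winner)  # -> Perceived: baby crying

On a massively parallel machine, every demon would score the evidence simultaneously; the serial max() here merely stands in for that competition.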
Still, says Chalmers, once we know which kinds of data become part of consciousness, and how they earned that privilege, the question remains, "How do those data become conscious experience?" Suppose that the physical information representing the "baby crying" hypothesis has carried the day and vanquished the information representing the rival "cat meowing" hypothesis. How exactly--by what physical or metaphysical alchemy--is that physical information transformed into the subjective experience of hearing a baby cry? As McGinn puts the question, "How does the brain 'turn the water into wine'?"
McGinn doesn't mean that subjective experience is literally a miracle. He considers himself a materialist, if only in a "thin" sense. He presumes there is some physical explanation for subjective experience, even though he doubts that the human brain--or mind, or whatever--can ever grasp it. Nevertheless, McGinn doesn't laugh at people who take the water-into-wine metaphor more literally. "I think in a way it's legitimate to take the mystery of consciousness and convert it into a theological system. I don't do that myself, but I think in a sense it's more rational than strict materialism, because it respects the data." That is, it respects the lack of data, the yawning and perhaps eternal gap in scientific understanding.
These two "hard" questions about consciousness--the extraness question and the water-into-wine question--don't depend on artificial intelligence. They could occur (and have occurred) to people who simply take the mind-as-machine idea seriously and ponder its implications. But the actual construction of a robot like Cog, or of a pandemonium machine, makes the hard questions more vivid. Materialist dismissals of the mind-body problem may seem forceful on paper, but, says McGinn, "you start to see the limits of a concept once it gets realized." With AI, the tenets of strict materialism are being realized--and found, by some at least, incapable of explaining certain parts of human experience. Namely, the experience part.
Dennett has answers to these critiques. As for the extraness problem--the question of what function consciousness serves--if you're a strict materialist and believe "the mind is the brain," then consciousness must have a function: the brain has a function, and consciousness is the brain. Similarly, turning the water into wine seems a less acute problem if the wine is water.