Lie Detection
Monday, Jul. 20, 2009

It would seem that honesty is an absolute, all-or-nothing state. A person is either truthful or he's not. Right?

Consider this scenario: a shopkeeper mistakenly returns an extra $10 in change to a customer. In one outcome, the customer returns the money promptly, without pause. In another, he hesitates for just a second, thinks about pocketing the 10 bucks, then decides to give it back.

Which is true honesty?

That is the question that Joshua Greene, 35, an assistant professor of psychology at Harvard University, is trying to answer. More specifically, Greene is trying to identify the particular pattern of brain activity that distinguishes people who are simply telling the truth from those who are resisting the temptation to lie. His findings, which are based on functional-magnetic-resonance-imaging (fMRI) data, shed light not only on the workings of the human mind but also on the controversy over using fMRI technology outside the lab to detect lies.

In a cleverly designed experiment, recently published in the Proceedings of the National Academy of Sciences, Greene recruited 35 volunteers and told them they would be participating in a study to find out whether people are good at predicting the future when they are paid for it. The real purpose, of course, was to get people to lie without asking them to lie — and to image their brains in the act of deception.

While inside an fMRI scanner, each participant was asked to predict the outcome — heads or tails — of about 210 coin tosses. The participants made their predictions privately, but after each toss, researchers asked them to reveal whether or not they had guessed correctly. A display mounted inside the scanner flashed the questions, and participants pressed a button in response. Each correct prediction earned up to $7; incorrect predictions earned nothing, but there was ample opportunity to lie and still win the money.

The researchers then divided the volunteers into groups on the basis of their answers. Those who reported an improbably high number of correct predictions were labeled dishonest. Most of the others were classified as honest. Researchers then averaged each group's fMRI data — scans that track blood flow and, therefore, activity inside the brain in real time — to try to establish one neural signature that represented truth-telling and another that characterized lying.
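The study's exact cutoff for "improbably high" isn't given here, but the underlying logic is a one-sided binomial test: an honest guesser on a fair coin should be right about half the time, so a reported hit count far above 105 out of 210 is overwhelmingly unlikely to be luck. Here is a minimal sketch in Python; the ALPHA threshold and the improbably_accurate helper are illustrative assumptions, not the study's actual criterion.

    from scipy.stats import binom

    N_TOSSES = 210   # approximate number of predictions per participant
    P_CHANCE = 0.5   # expected hit rate for honest guesses on a fair coin
    ALPHA = 0.001    # illustrative cutoff, not taken from the study

    def improbably_accurate(reported_correct):
        """Flag a self-reported hit count too high to credit to chance.

        Under honest reporting, correct predictions follow
        Binomial(N_TOSSES, 0.5); a report is flagged when the one-sided
        tail probability P(X >= reported_correct) falls below ALPHA.
        """
        # binom.sf(k - 1, n, p) gives P(X >= k) for a discrete distribution
        p_tail = binom.sf(reported_correct - 1, N_TOSSES, P_CHANCE)
        return p_tail < ALPHA

    # An honest guesser averages about 105 correct; 135 (roughly 64%) is
    # far enough into the tail to be flagged.
    for k in (105, 120, 135):
        print(k, improbably_accurate(k))

Anyone whose reported accuracy clears a bar like this is almost certainly lying on at least some trials, which is how the researchers could assemble a "dishonest" group without ever instructing anyone to lie.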

Compared with the lying group, honest volunteers had relatively quiet minds — that is, they showed no distinctive activity in the prefrontal cortex, an area of the brain responsible for planning and decision-making. In the dishonest group, however, areas within the volunteers' prefrontal cortices registered vigorous activity — and the activity persisted whether they were lying or not.

What does this mean? Greene suggests that in some circumstances, real honesty is not about overcoming the temptation to lie but about not having to deal with that temptation in the first place. On an fMRI image, at least, the lying brain may look no different from one that's simply contemplating whether to lie. "Within the dishonest group, we saw no basis for distinguishing lies from honest reports," says Greene.

That's the kind of statement that probably most irks the folks at companies like No Lie MRI in California and Cephos in Massachusetts, both of which claim to offer some form of lie-detection ability based on fMRI technology. No Lie MRI says it uses "unbiased methods for the detection of deception and other information stored in the brain," according to a statement on its website, although the site does not point to any specific scientific evidence to support the claims.

Most researchers would agree, however, that while fMRI may be able to suss out certain brain activity associated with deception in study volunteers, its ability to do so in the larger population would be exceedingly limited — if not nonexistent. For one thing, the evidence for fMRI-based lie detection is still conflicting: Although past studies have associated prefrontal-cortex activity with lying, researchers have yet to reach a consensus, and Greene's latest findings suggest that activity in the prefrontal cortex may in fact represent truth-telling in some people. "There is a great deal of variation between the findings described, and, crucially, there is an absence of replication by investigators of their own findings," wrote Sean Spence, a well-respected deception researcher at the University of Sheffield, in a 2008 critique of 16 peer-reviewed articles describing the neural correlates of lying.

What's more, detecting lies using fMRI in highly controlled experimental conditions with button-pushing volunteers bears little resemblance to identifying deception in the real world, where no single lie is identical to the next and most are too elaborately constructed to pin down on a brain scan. Although fMRI allows us to "track the thought process in real time — and that's a huge advance over the polygraph," says Ruben Gur at the University of Pennsylvania, people should not have the "naive view that whenever someone lies, there will be the same [kind of] response that will then be picked up by the fMRI."

A real-world lie detector would have to be "reliable for a specific answer for a specific question from a specific person." And that is something that fMRI may never achieve, says Gur.

Yet companies like No Lie MRI continue to advertise that they can detect lies with "90% accuracy" and charge close to $5,000 for their services. "There are 30 different peer-reviewed studies out there that prove that we can detect lies with fMRI," says Joel Huizenga, the CEO of No Lie MRI, who declined to provide citations for those studies. (Neither of the two scientists on the company's scientific board responded to requests for comment for this article.) Huizenga says he has worked with cases involving "arson, murder and incest" but did not give further details.

It is unclear what purpose reports from No Lie MRI or similar companies serve in such cases, since they have not been found reliable enough to be used in court. In March, an attorney for the defendant in a San Diego child-custody case attempted to introduce a polygraph test and a report from No Lie MRI to prove his client's innocence. It might have been the first time fMRI lie detection was allowed in a court proceeding, had the county prosecutor's office not objected to it and sought the assistance of Hank Greely, director of the Stanford Center for Law and the Biosciences.

Media attention followed, and the defense eventually decided not to present the fMRI data. As it was a civil case, the judge ordered the data to be sealed. But a motion to unseal some of the proceedings will be heard on July 24, when the judge could decide to release, among other things, No Lie MRI's report.

It is unlikely that No Lie MRI will give up anytime soon — the company claims that the potential market for its technology could exceed $3.6 billion. While that figure seems exaggerated given legal safeguards against using polygraphs, Greely estimates that if fMRI lie detection became admissible in court, the industry could easily be worth more than a billion dollars per year.

"It's a big country, there are lots of judges out there and I think they are hoping to find one who will allow the evidence, particularly if the other side doesn’t know much," says Greely. "To be able to use [fMRI lie detection] in court would be the blue ribbon, the license to print money."

By Adi Narayan

Commercially available lie-detection technology may sound exciting, but a new study suggests that lies and truth may sometimes be indistinguishable on brain scans.