By Jonah Lehrer, The New Yorker
Here’s a simple arithmetic question: A bat and ball cost a dollar and ten cents. The bat costs a dollar more than the ball. How much does the ball cost?
The vast majority of people
respond quickly and confidently, insisting the ball costs ten cents. This
answer is both obvious and wrong. (The correct answer is five cents for the
ball and a dollar and five cents for the bat.)
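For anyone who wants the arithmetic spelled out rather than taken on faith, here is a brief worked version of the puzzle, with the ball's price written as a variable:

```latex
% Let b be the price of the ball in dollars; the bat costs one dollar more, b + 1.00.
% Together they cost $1.10:
\begin{align*}
  b + (b + 1.00) &= 1.10 \\
  2b &= 0.10 \\
  b &= 0.05
\end{align*}
% So the ball costs five cents and the bat $1.05. The intuitive answer fails the
% check: a ten-cent ball and a $1.10 bat would total $1.20, not $1.10.
```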
For more than five decades,
Daniel Kahneman, a Nobel Laureate and professor of psychology at Princeton, has
been asking questions like this and analyzing our answers. His disarmingly
simple experiments have profoundly changed the way we think about thinking.
While philosophers, economists, and social scientists had assumed for centuries
that human beings are rational agents, Kahneman, the late Amos Tversky, and
others, including Shane Frederick (who developed the bat-and-ball question),
demonstrated that we’re not nearly as rational as we like to believe.
When people face an
uncertain situation, they don’t carefully evaluate the information or look up
relevant statistics. Instead, their decisions depend on a long list of mental
shortcuts, which often lead them to make foolish decisions. These shortcuts
aren’t a faster way of doing the math; they’re a way of skipping the math
altogether. Asked about the bat and the ball, we forget our arithmetic lessons
and instead default to the answer that requires the least mental effort.
Although Kahneman is now
widely recognized as one of the most influential psychologists of the twentieth
century, his work was dismissed for years. Kahneman recounts how one eminent
American philosopher, after hearing about his research, quickly turned away,
saying, “I am not interested in the psychology of stupidity.”
The philosopher, it turns
out, got it backward. A new study in the Journal of Personality and Social
Psychology led by Richard West at James Madison University and Keith Stanovich
at the University of Toronto suggests that, in many instances, smarter people
are more vulnerable to these thinking errors. Although we assume that
intelligence is a buffer against bias—that’s why those with higher S.A.T.
scores think they are less prone to these universal thinking mistakes—it can
actually be a subtle curse.
West and his colleagues
began by giving four hundred and eighty-two undergraduates a questionnaire
featuring a variety of classic bias problems. Here’s an example:
In a lake, there is a patch
of lily pads. Every day, the patch doubles in size. If it takes 48 days for the
patch to cover the entire lake, how long would it take for the patch to cover
half of the lake?
Your first response is
probably to take a shortcut and divide the final answer in half. That leads
you to twenty-four days. But that’s wrong. The correct solution is forty-seven
days.
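The answer is easier to see once the doubling is written out: if the patch doubles every day, then on the day before it fills the lake it must cover exactly half of it. A short sketch of that reasoning:

```latex
% Let A(d) be the area covered on day d, with daily doubling: A(d+1) = 2A(d).
% Full coverage on day 48 means
\[
  A(47) = \frac{A(48)}{2} = \tfrac{1}{2}\,\text{lake},
\]
% so the patch covers half the lake on day 47. The shortcut answer of day 24
% is far off: A(24) = A(48)/2^{24}, a vanishingly small fraction of the lake.
```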
West also gave a puzzle
that measured subjects’ vulnerability to something called “anchoring bias,”
which Kahneman and Tversky had demonstrated in the nineteen-seventies. Subjects
were first asked if the tallest redwood tree in the world was more than X feet,
with X ranging from eighty-five to a thousand feet. Then the students were
asked to estimate the height of the tallest redwood tree in the world. Students
exposed to a small “anchor”—like eighty-five feet—guessed, on average, that the
tallest tree in the world was only a hundred and eighteen feet. Given an anchor
of a thousand feet, their estimates increased seven-fold.
But West and colleagues
weren’t simply interested in reconfirming the known biases of the human mind.
Rather, they wanted to understand how these biases correlated with human
intelligence. As a result, they interspersed their tests of bias with various
cognitive measurements, including the S.A.T. and the Need for Cognition Scale,
which measures “the tendency for an individual to engage in and enjoy
thinking.”
The results were quite
disturbing. For one thing, self-awareness was not particularly useful: as the
scientists note, “people who were aware of their own biases were not better
able to overcome them.” This finding wouldn’t surprise Kahneman, who admits in
“Thinking, Fast and Slow” that his decades of groundbreaking research have
failed to significantly improve his own mental performance. “My intuitive
thinking is just as prone to overconfidence, extreme predictions, and the
planning fallacy”—a tendency to underestimate how long it will take to complete
a task—“as it was before I made a study of these issues,” he writes.
Perhaps our most dangerous
bias is that we naturally assume that everyone else is more susceptible to
thinking errors, a tendency known as the “bias blind spot.” This “meta-bias” is
rooted in our ability to spot systematic mistakes in the decisions of others—we
excel at noticing the flaws of friends—and in our inability to spot those same
mistakes in ourselves. Although the bias blind spot itself isn’t a new concept,
West’s latest paper demonstrates that it applies to every single bias under
consideration, from anchoring to so-called “framing effects.” In each instance,
we readily forgive our own minds but look harshly upon the minds of other
people.
And here’s the upsetting
punch line: intelligence seems to make things worse. Education also isn’t a
savior; as Kahneman and Shane Frederick first noted many years ago, more than
fifty per cent of students at Harvard, Princeton, and M.I.T. gave the incorrect
answer to the bat-and-ball question.