Saturday, November 27, 2010

When smart does stupid!

The meaning of wrong
Professor Marc Hauser is now on leave, and his courses at Harvard have been cancelled. (Gaye Gerard/Getty Images)
Joseph Brean, National Post · Saturday, November 27, 2010

A recent blockbuster scandal at Harvard University, in which a top researcher in animal cognition was found to have committed scientific misconduct, did far-reaching damage in Massachusetts and beyond.

It sullied the reputation of Marc Hauser, once a rock star scientist, now on leave with his courses cancelled, research retracted, and his bustling laboratory on America's most elite campus all but shuttered.

The whole mess -- ongoing for three years but publicly revealed by the Boston Globe in August and now under investigation by the National Science Foundation -- even seemed to cast a shadow of doubt on the entire field of evolutionary psychology, a newish discipline that seeks to explain human behaviour through the history of our genes, and occasionally succeeds.

One of the scandal's lesser-known casualties, though, is a ground-breaking online experiment that marks a key moment in a major historical trend -- the slow reconciliation of psychology and philosophy.

Professor Hauser's Moral Sense Test, linked from his Harvard webpage, tests people's intuitions about right and wrong by asking them to suggest punishments for people behaving immorally.

It is aimed at one of his main interests, the evolution of human morality, and it reflects a newly scientific view of right and wrong, in which data are replacing pure, abstract philosophizing.

Right and wrong are not revealed in abstract reflection, according to this theory. They are not based in universal principles, or utilitarian benefit, or religious doctrine, like just about every other moral theory ever conceived. Rather, true insight into human morality comes from asking people what they think, and pointing out how their moral judgments can be reliably and repeatedly skewed in the laboratory.

As it turns out, what they think is quite peculiar.

New research out of the University of Toronto Scarborough nicely illustrates this idea. It shows that people are more likely to behave immorally when doing so requires no deliberate action, such as cheating on a test when the answers are readily visible.

The difference, the researchers suggest, is intention. To do wrong, you must mean to, regardless of what actually happens. This is why a doctor can morally withhold lifesaving treatment from a dying patient, but he cannot do so in order to save someone else with their organs, even though the outcome is the same.

What Professor Hauser was doing, by putting similar scenarios to his online participants, was a variation on the Trolley Problem, which is not so much a problem as a thought experiment -- an imaginary scenario in a rich philosophical tradition of zombie doppelgangers, swampmen created in bolts of lightning, brains in vats, twin earths and deceptive demons, all invented to argue some point or other.

The Trolley Problem was more or less invented by Philippa Foot, a British philosopher who died last month, and whose illustrious career at Oxford was overshadowed in her memorials by this funny little brainteaser that is not complicated, but very deep.

A powerful authority in the postwar upheaval in moral philosophy, Foot distilled her thinking about the principle of double effect (that is, a single action can have simultaneous good and bad outcomes) to the problem of the "trolley," in which a runaway train is heading for five people working on the track, and you can save them by diverting the train onto a spur where a single man is working, killing him but saving the five.

Should you divert it? Most people say yes, because you do not intend to kill the man. He is just collateral damage to the greater good of saving the five, and his death is morally neutral.

But what if the same train is approaching, and you are standing on a bridge over the tracks next to a fat man? If you push him off onto the tracks, his weight will stop the train and he will be killed, but the five workers will be saved. Do you push him? Most people say no. Deliberate killing, even with the same utilitarian costs and benefits, is a step too far.

That is the classical formulation of the Trolley Problem, but there are many variations, including more complicated ones involving rotating Lazy Susans and looping tracks. At root, it is about intention, and the assignment of blame in a messy world.

Casually known as Trolleyology, this field of experimental philosophy (or X-Phi, as the uber-nerdy shorthand has it) has been around for a while but is enjoying a resurgence partly because of its use in American soldier training, where "double effect" is staggeringly relevant, and measured in real civilian lives.

Trolleyology is one tool offered to prepare soldiers for the battlefield. It allows them to understand the real-world difference between NATO killing civilians by accident, and al-Qaeda killing them on purpose.

By teasing out the differences between scenarios that seem morally identical, Trolleyology reveals the quirks of cognition that can skew our judgment this way or that. As such, it is pure psychology.

But it also purports to reveal something like a universal moral instinct, with its own peculiar logic, and this begins to tread on philosophy.

It makes morality seem rule-based and consistent, like language, and it offers the alluring opportunity to test philosophical ideas by scientific standards, with repeatable data.

As the cognitive scientist Steven Pinker, Professor Hauser's even more illustrious colleague at Harvard, once put it, "Far from debunking morality, then, the science of the moral sense can advance it, by allowing us to see through the illusions that evolution and culture have saddled us with and to focus on goals we can share and defend."

For modern psychology, which is more comfortable in the objective pose of medicine than in the ethereal mists of philosophy, this experimentalism is standard operating procedure.

But for moral philosophers, it is new and foreign. Using data to resolve philosophical disputes is especially weird, and should not make sense.

Philosophers generally do not care about data, and they do not care what you think, or if they do it is only to show why you are wrong. In many cases, what you think is irrelevant to the question of what is right or true. The rules of morality are traditionally deduced from first principles, not discovered in the survey answers of a few people on the Internet.

But if what is right or true is, by its very nature, dependent on what people say in surveys, then philosophy has had its nose buried in the books for far too long.

If morality is as Professor Hauser and others envision it -- a natural system with biological origins and an evolutionary history -- then philosophy, the love of wisdom, starts to blur with psychology, the science of mind.

The question of morality's roots is not a new one. Plato and Socrates pondered the eternal Good. Nietzsche's masterwork is On The Genealogy of Morals. But something happened in the middle of the twentieth century that set in motion a trend that is still evident today.

Not only had the Second World War made these questions of morality seem desperately important, but also, in universities, psychology had split from philosophy to become a field of its own.

Questions of the mind were divided into the practical and the abstract, but the cut was not entirely clean. With Trolleyology and X-Phi, it is starting to heal.

Universal moral rules are problematic from the get-go. Rule-based behaviour can be misleading, as illustrated in the best-known of modern philosophical thought experiments, John Searle's Chinese Room, in which a man who does not speak Chinese sits in a locked room as Chinese writing is passed to him under the door. He uses a rule book to compose replies, which he slips back out, appearing to conduct a conversation. The lesson is that intelligent behaviour does not require true understanding.

Trolleyology is vulnerable to the same criticism as the Chinese Room, best formulated by the cognitive scientist and arch-humanist Daniel Dennett, who derisively called it an "intuition pump," meaning it sets you up to draw a certain kind of conclusion and ignore other possibilities, which is a pretty good description of the Moral Sense Test.

The MST remains active, to a degree, in Prof. Hauser's absence. Fiery Cushman, a psychologist in Harvard's Moral Cognition Laboratory who is not affiliated with Prof. Hauser but once studied under him, continues to monitor much of the traffic through the webpage, and is using it for various experiments. He said it is still an active research tool and has been used in a dozen research papers.

There is "something of a rapprochement" going on between philosophy and psychology, he said, as they increasingly share investigatory techniques like the MST.

"What's certainly happening, especially in moral philosophy, is there's a renewed excitement about making moral theories responsive to what we know about human minds and behaviour," he said. "But at the end of the day, psychologists and philosophers are still after answers to different questions."

As ever, those questions boil down to what is right, what is wrong, and what to do about it.

For his part, Professor Hauser has decided, like so many big-name scientists, to write a book for a general audience. It is to be called Evilicious: Why We Evolved A Taste For Being Bad. One imagines he has plenty to say.
