Saturday, September 05, 2009

 

Dance With Dissonance


Massimo Pigliucci has an interesting post at his blog, Rationally Speaking, about the differences between "skepticism," "cynicism," "gullibility" and (although Professor Pigliucci doesn't mention the term) "personal incredulity" of the sort so often appealed to by creationists.

Essentially, he maintains that:

Whenever confronted with a new claim, it's reasonable to think that the null hypothesis is that the claim is not true. That is, the default position is one of skepticism.

The question then is what are the risks of such a position, which turn on:

... the difference between type I and type II error. A type I error is the one you make if you reject a null hypothesis when it is in fact true. In medicine this is called a false positive: for instance, you are tested for HIV and your doctor, based on the results of the test, rejects the default (null) hypothesis that you are healthy; if you are in fact healthy, the good doctor has committed a type I error. It happens (and you will spend many sleepless nights as a consequence).

A type II error is the converse: it takes place when one accepts a null hypothesis which is in fact not true. In our example above, the doctor concludes that you are healthy, but in reality you do have the disease. You can imagine the dire consequences of making a type II error, also known as a false negative, in that sort of situation. (The smart asses among us usually add that there is also a type III error: not remembering which one is type I and which type II...)
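To make the two error types concrete, here is a minimal simulation sketch in Python. The prevalence, sensitivity, and specificity figures are illustrative assumptions of mine, not numbers from Pigliucci's post:

    import random

    # Illustrative parameters (my assumptions, not Pigliucci's figures):
    PREVALENCE = 0.01   # 1% of patients actually have the disease
    SENSITIVITY = 0.99  # P(test positive | disease)
    SPECIFICITY = 0.95  # P(test negative | healthy)

    random.seed(42)
    type_i = type_ii = 0  # false positives, false negatives

    for _ in range(100_000):
        has_disease = random.random() < PREVALENCE
        if has_disease:
            tests_positive = random.random() < SENSITIVITY
        else:
            tests_positive = random.random() > SPECIFICITY
        # The null hypothesis is "this patient is healthy."
        if tests_positive and not has_disease:
            type_i += 1   # null rejected though true: false positive
        elif not tests_positive and has_disease:
            type_ii += 1  # null retained though false: false negative

    print(f"Type I errors (false positives):  {type_i}")
    print(f"Type II errors (false negatives): {type_ii}")

Notice that when the condition is rare, even a fairly specific test produces far more false positives than false negatives in absolute numbers, which is one reason the choice of default position matters.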

As Pigliucci notes:

Human beings are thus bound to navigate the treacherous waters between Scylla and Charybdis, between being too skeptical and too gullible. And yet, the two monsters are not of equal strength: if we accept the assumption that there is only one reality out there, then the number of false hypotheses must be inordinately higher than the number of correct ones. In other words, there must be many more ways of being wrong than right.

Pigliucci points out (but does not delve into) the fact that there are problems with the very concept of a "null hypothesis" (discussed at some length in Elliott Sober's Evidence and Evolution: The Logic Behind the Science). Not least of the problems is how you frame the hypothesis and at what level of generality the proposition is pitched. For example, what would a skeptic deem the proper null hypothesis to the proposition "science can accurately describe 'reality'"? Maintaining that science has delivered sufficient evidence that it accurately describes 'reality' to overcome that null hypothesis suffers from the same circularity as Hume's Problem of Induction.

Still, for questions that are clearly answerable by science (such as whether vaccines cause autism), Pigliucci's formulation of a proper skepticism (with its acknowledged dangers) works well enough.

And speaking of the anti-vaccination crowd, Orac at Respectful Insolence has a very interesting post about a study into the reasons why people hold so tenaciously to beliefs that are clearly and unequivocally not supported by evidence. Rather than using "Bayesian updating," where a decision maker incrementally and rationally changes his or her opinions in accordance with new information, a large majority of the people studied used "motivated reasoning," where the decision maker responded to information "defensively, accepting and seeking out confirming information, while ignoring, discrediting the source of, or arguing against the substance of contrary information." This phenomenon has long been known. As Francis Bacon (1561-1626) said: "The general root of superstition is that men observe when things hit, and not when they miss; and commit to memory the one, and forget and pass over the other." The motivation of such reasoning is believed to be an attempt to reduce "cognitive dissonance."
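For concreteness, here is a minimal sketch of the kind of Bayesian updating the study contrasts with motivated reasoning. The hypothesis and the likelihood figures are illustrative assumptions of mine, not taken from the study:

    def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
        """Return the posterior probability of a hypothesis via Bayes' rule."""
        numerator = p_evidence_if_true * prior
        marginal = numerator + p_evidence_if_false * (1 - prior)
        return numerator / marginal

    # Hypothesis H: "vaccines cause autism" (purely illustrative numbers).
    belief = 0.50  # start from an agnostic prior
    # Suppose each large study finding no link is ten times more probable
    # if H is false (0.50) than if H is true (0.05).
    for study in range(1, 6):
        belief = bayes_update(belief, 0.05, 0.50)
        print(f"After study {study}: P(H) = {belief:.4f}")

A rational updater's confidence collapses after a handful of such studies; the motivated reasoner, by contrast, simply declines to feed the contrary evidence into the calculation.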

As the authors of the study (Prasad, M., Perrin, A., Bezila, K., Hoffman, S., Kindleberger, K., Manturuk, K., & Powers, A. (2009). "'There Must Be a Reason': Osama, Saddam, and Inferred Justification," Sociological Inquiry, 79(2), 142-162. DOI: 10.1111/j.1475-682X.2009.00280.x) describe it:

[O]ur interviews revealed an interesting and creative reasoning style that we call inferred justification: recursively inventing the causal links necessary to justify a favored politician's action. Inferred justification operates as a backward chain of reasoning that justifies the favored opinion by assuming the causal evidence that would support it. As with the situational heuristics described above, respondents begin with the situation and then ask themselves what must be true about the world for the situation to hold.

The term "cognitive dissonance" is overused, in that not everyone whose reasoning someone else finds inexplicable or fallacious (as, say, "New Atheists" find the reasoning of "theistic evolutionists") is suffering from internal dissonance. But it could easily apply to those who maintain, in the face of all contrary evidence, that vaccines cause autism, given the highly emotional setting of a parent, perhaps subconsciously feeling guilt, observing the progression of a child's illness. Indeed, the authors of the study conclude: "... motivated reasoning may be strongest when the stakes are highest."

Therefore, Orac cautions:

I'm just as human as any of the participants in this study. Indeed, any skeptic who thinks he or she is not just as prone to such errors in thinking is not a skeptic but suffering from self-delusion. The only difference between skeptics and non-skeptics, scientists and nonscientists, in this regard is that skeptics try to make themselves aware of how human thinking can go wrong and then act preemptively to try to keep those normal human cognitive quirks from leading them astray. Indeed, guarding against these normal human failings when it comes to making conclusions about the natural world is the very reason we need science and why we need to base our medicine on science.

I'd just add that, if you have a deep emotional attachment to science and think its use in determining the facts of the natural world is very important, you will have something of a built-in defense against motivated reasoning. The collective nature of science guarantees that not everyone engaged in it will share your own foibles and motivations. As the philosopher of science David L. Hull put it: "Scientists rarely refute their own pet hypotheses ... but that's all right. Their fellow scientists will be happy to ..." Thus, the very thing you find most motivating will simultaneously tend to counteract your own biases.

At least then your cognitive dissonance will be with science itself and you may be more likely to notice it.

Comments:
Cross-posted from Orac's pad:

I've been reading Timothy D. Wilson's book "Strangers to Ourselves," in which he proposes that most of our thinking, emotions, attitudes and behaviours are carried out by an 'adaptive unconscious'. The adaptive unconscious is where the quick and dirty responses to life's situations occur. The adaptive unconscious 'communicates' with the conscious through feelings. Our conscious thoughts are very much the junior partner and have little or no introspective access to the adaptive unconscious.

The adaptive unconscious is the result of long-term evolutionary pressures, and its function has evolved to respond quickly and correctly (enough) to environmental and social events such that the individual survives and passes on their genes. The unconscious is adaptive in the sense that it 'learns' and incorporates fresh data into self-narratives (or world views). I believe that most of this incorporation is based not on unconscious deductive reasoning but on unconscious abductive reasoning. The adaptive unconscious takes on an effect (the alpha male is angry) and produces a hypothesis to explain the effect (I've stolen his food). This forms a handy extension to the self-narrative (don't steal the boss's food) - even though the hypothesis was never tested. I did say that the adaptive unconscious is quick and dirty - it only needs to be effective for the most threatening or emotive situations. The unconscious belief that 'all snakes are deadly' may not be logically true, but it is a better unconscious quick and dirty rule than the slower conscious thought 'let's take our time to look up in a book whether this snake is dangerous or not'.

Wilson goes on to explore how the self-narratives built up in the adaptive unconscious may be out of step with our deliberate conscious self-narratives, and we don't even realise.

So it is quite possible for an individual to unconsciously hold two logically contradictory self-narratives (e.g., I believe in a God who created the world in 6 days, and I also believe in the scientific method), and as long as the two self-narratives don't conflict in the quick and dirty adaptive unconscious, there will be no feelings of dissonance passed forward to the conscious brain to worry about.

Similarly, if a belief (vaccination harmed my child) is held strongly in the adaptive unconscious, then conscious contrary evidence is unlikely to carry enough emotive clout to undo the anti-vax belief. People do put a lot of unconscious effort into defending their unconscious self-narratives. A challenge to their unconscious self-narrative is a challenge to their self-autonomy and generates strong defensive feelings - which the conscious mind then tries to elaborate into a logical argument.

All of the above ties in with the idea of motivated reasoning - it is just that most of the motivation is hidden from our conscious thoughts.
 
Sounds like an interesting account, much of which makes sense. What sort of empirical evidence does he have for it?
 
The book is full of references to experiments in cognitive and social psychology. There are 14 pages of notes supporting the statements in the chapters, and the notes refer to 20 pages of bibliography (including some of his own work).

Essentially, he summarizes many findings to conclude that people cannot verbalize many of the cognitive processes that psychologists assumed were occurring inside their heads. Indeed, people are not aware of these processes occurring.

He uses the proposed adaptive unconscious as a way of explaining the differences between what we feel in the first split second, and what we say later. He also goes on to show how the adaptive unconscious shapes our conscious reactions even though we have no insight into the reasons for why we say or do particular things.

One example he gives (within an American context) is how many white people will consciously declare (truly) that they are not racially prejudiced, but their first unconscious behavioural response is prejudiced - and how African Americans pick up on the unconscious behaviour.

What impressed me was the potential explanatory power of the concept: why people believe things they 'ought' to believe but behave in a different way; why self-improvement is so difficult; why self-assessment correlates so poorly with actual behaviour.

Lots to think about. The key message for me was that we are not the rational, logical beings we think we are.
 
The key message for me was that we are not the rational, logical beings we think we are.

That's something I've been saying for a long time. It's nice to know there is empirical evidence for it. Wilson's book just went into my Amazon wish list. (Someday, when I win the lottery, that list will get shorter.)
 