If there is one theme that illustrates the need for formal research over "simple" common sense, it is the theory of probability. It is ubiquitous, but difficult at best and counterintuitive at worst.
We like to see ourselves as basically rational beings, and the homo economicus that forms the basis of most economic theories is supposed to be capable of making rational choices. Yet humans are mainly emotional, not rational, beings. We use tools for our rational and logical needs, while we do most of our thinking and choosing with emotions.
The dichotomy between sense and sensibility, reason and emotion, is mostly a false one. It might work on paper and in folk psychology, but if you want to understand the human mind, it is a big mistake to pry the two apart. Emotions are quick and automatic reactions to changes in our environment and as such serve our survival. Sudden noises startle us, infringements of our territory make us angry, and so on. Basic emotions (happiness, sadness, anger, fear) are physical states and tend to colour our perceptions. So-called secondary emotions (pride, jealousy...) are often called social emotions, as they have to do with our relationships and attitudes towards others in our social environment. The ways in which we seek or avoid these emotions, and the tendencies we have in feeling them, form the basis of our personality. And of our cognition, for that matter.
So, where is logic, where is reason, where is the "information processing" that cognitive psychologists say we do all the time? Well, it's there of course, in that same mind-body. Even the most basic perceptual processes are about spotting patterns, regularities, similarities and structures - the building blocks of a "rational", logical and structured world view. Our perception is based on expectations, which we can only form in the presence of regular, predictable structures. We get emotional satisfaction and pleasure out of symmetry, structure and regularity (and out of their violation, within limits).
So if our emotions are logical and our reasoning emotional, where does that leave us? And what does probability have to do with it? Well, it means we are complicated and not easy to study, especially if you start by leaving one half out of the equation, as psychologists often do. One example of this is the research on our rationality, which often involves making choices. The premise is that there is a "best choice" and that we arrive at our choice after deliberation. The best choice is often defined as the one most likely to be beneficial, which means you need to understand probabilities to make that rational, correct choice.
I'm muddling things up a bit; I didn't mean to write about emotions but couldn't help it, really. The original idea was to write about probabilities, since people generally fail at estimating the probabilities of different outcomes in even the simplest cases. Emotions crept in as I was trying to think what would explain these failures. And then I wanted to avoid the reason vs. emotion thing, and it all got out of control.
Why do people take part in the lottery, even though the probability of winning is close to zero? The answer: they don't care about the probability as such. They know someone wins every week, and it could be them. And if they won, it would change their life completely, and since the price of a ticket is low, why not take that chance? It's just a game...
OK. Although, when you start adding up the cost of lottery tickets over, say, 15 years, playing the game has actually prevented you from making some of the life improvements you've dreamed about. The casino always wins in the end, but the possibility of an individual player winning a life-changing sum of money draws people in. Fairgrounds and markets have always been full of games where the odds are against the player, and the internet has brought a new surge of these systems, some of them legal, some illegal.
They share two things: first, they appeal to you emotionally to get you involved; second, they rely on your poor sense of probabilities to dampen the voice of reason telling you not to play.
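To put some rough numbers on that 15-year example, here's a back-of-the-envelope sketch. The ticket price and playing frequency below are made-up assumptions, and the jackpot odds are simply those of a generic 6-out-of-49 draw, not of any particular lottery.

```python
# Back-of-the-envelope sums; the price and frequency are assumptions, not real lottery data.
ticket_price = 2.0          # assumed cost of one ticket
tickets_per_week = 1        # assumed playing habit
years = 15
weeks = 52 * years

total_spent = ticket_price * tickets_per_week * weeks
print(f"Spent on tickets over {years} years: {total_spent:.2f}")              # 1560.00

# Odds of hitting the jackpot in a 6-out-of-49 draw: 1 in C(49, 6) = 13,983,816.
jackpot_odds = 1 / 13_983_816
expected_jackpots = jackpot_odds * tickets_per_week * weeks
print(f"Expected number of jackpots in that time: {expected_jackpots:.6f}")   # ~0.000056
```

Under those assumptions the spending is certain, while the expected number of jackpots over the whole 15 years is about one in eighteen thousand.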
Probabilities are hairy. Imagine you are flipping a coin. Nine times in a row you get heads. What's the probability that the tenth flip produces heads as well? Alarmingly many people would answer something other than 50%. There are two factors at play here. First, people confuse the probability of getting ten heads in a row (0.5^10 = 0.0009765625) with the probability of getting heads in any one flip (0.5). The second problem is the idea that in the long run there should be an equal number of heads and tails. This is true, and you can test it by flipping a coin a couple of million times (or getting a computer to do that for you): the counts should come out relatively even. But this doesn't change the odds for any one throw, as the history of flips isn't stored in the coin. The coin itself doesn't know which way to land, and there's no natural law forcing it to start producing tails after a long run of heads.
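If you'd rather not flip a coin a couple of million times by hand, a small simulation makes both points at once: the overall proportion of heads settles near 50%, and the flips that come right after nine heads in a row are still heads only about half the time. This is just an illustrative sketch; the two-million-flip figure is an arbitrary choice.

```python
import random

def simulate(n_flips=2_000_000, seed=42):
    """Flip a fair coin n_flips times and report two things:
    the overall proportion of heads, and the proportion of heads
    on flips that immediately follow nine heads in a row."""
    rng = random.Random(seed)
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True = heads

    overall = sum(flips) / n_flips

    # Collect every flip that comes right after a run of nine heads.
    after_nine_heads = [
        flips[i]
        for i in range(9, n_flips)
        if all(flips[i - 9:i])
    ]
    conditional = sum(after_nine_heads) / len(after_nine_heads)

    print(f"overall proportion of heads:            {overall:.4f}")    # ~0.5
    print(f"heads right after nine heads in a row:  {conditional:.4f}") # ~0.5

if __name__ == "__main__":
    simulate()
```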
The other problem is that very few things in "real life" are actually random in the way a coin flip or a roll of dice is. Also, we are rarely as attentive in keeping track of events as we are in the coin-flipping experiment. In fact, we tend to remember events that have emotional significance (ha! emotions!). So, for instance, anyone building paranoid theories about the traffic lights always turning red for them might suffer from both problems. First, since the lights are not random, it might be that they always arrive at a certain crossroads at the same point in its cycle, depending on some other set of lights somewhere else. Second, they are probably more likely to remember the times they had to stop than the times they could just drive through the crossroads without thinking about it.
This is still the easy stuff, but now we get to the really weird things that throw even maths professors off. Have you heard about the Monty Hall problem? The Wikipedia article on it gives the problem and a number of solutions, and explains what all the fuss was about. In short, the game goes like this: you have to choose one of three doors; there is a car behind one of them and a goat behind each of the other two. If you guess correctly, you get the car. Here's the twist: after you make your choice, the game show host opens one of the two remaining doors and reveals a goat. He then asks whether you want to stick with your original choice or switch to the other closed door. The question is: should you switch or keep your original choice?
Counter-intuitively, the answer is that you should switch, and by a very clear margin: switching wins two times out of three. This problem featured in a recent NY Times article, and there's a game you can play to actually see that switching is the better strategy.
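If you don't have the game at hand, the same thing is easy to verify with a few lines of simulation. This is a quick sketch of the standard rules, not the article's game, and the trial count is arbitrary.

```python
import random

def monty_hall(trials=100_000, seed=1):
    """Simulate the Monty Hall game and compare staying with switching."""
    rng = random.Random(seed)
    stay_wins = switch_wins = 0

    for _ in range(trials):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        first_pick = rng.choice(doors)

        # The host opens a door that is neither the player's pick nor the car.
        host_opens = rng.choice([d for d in doors if d != first_pick and d != car])

        # Switching means taking the one remaining closed door.
        switched_pick = next(d for d in doors if d not in (first_pick, host_opens))

        stay_wins += (first_pick == car)
        switch_wins += (switched_pick == car)

    print(f"stay wins:   {stay_wins / trials:.3f}")    # ~1/3
    print(f"switch wins: {switch_wins / trials:.3f}")  # ~2/3

if __name__ == "__main__":
    monty_hall()
```

Run it and staying wins about a third of the time, switching about two thirds.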
I'd think most people would say they wouldn't want to switch. This is partly because they have miscalculated the odds (thinking it is a 50-50 chance), but also because they abhor the potential situation of having undone a correct choice by changing later (and having shown a lack of faith in their initial choice, and so on). As the NYT article tells, this Monty Hall fallacy (when the problem was first presented, even maths professors wrote in to the paper claiming that a mistake had been made and that switching makes no difference) has now been spotted in the experimental paradigms used to study decision-making.
The research on cognitive dissonance is an attempt to look at all the factors involved in making simple choices, and one of the paradigms is to have subjects (or monkeys, as in the NYT's example) choose from three initially equally favoured items: first between two of them, and then between the remaining two. The researchers have shown that the item that loses in the first duel is likely to lose in the second as well. The explanation has been essentially the same as in the Monty Hall case where people don't want to switch - there's a need to justify the first decision, or, as some say, second-guessing isn't evolutionarily a good idea.
However, if there are even slight differences in the initial preferences for the three items, then the odds in the second choice aren't 50-50 but 1/3 - 2/3, just like in the Monty Hall problem. Let's take the example used in the NYT: there are three Smarties - red, green and blue. Let's assume the first choice is between red and blue, and red wins. The second choice is then between blue and green, and green often wins. Now, if there is even the slightest order of preference, this is not just a case of wanting to justify the first choice; it is also statistically the more likely outcome. Look at all the possible preference orders of these three in which red ranks higher than blue. There are three: R G B, R B G, and G R B. In two out of three of these, green ranks higher than blue, making the odds of green being picked in the second choice 2/3 rather than 1/2.
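That counting step is easy to check by brute force: enumerate all strict preference orders and keep the ones where red beats blue. A small sketch, using the Smarties colours from the example:

```python
from itertools import permutations

colours = ("red", "green", "blue")

# All strict preference orders in which red is preferred to blue,
# i.e. the orders consistent with red beating blue in the first choice.
consistent = [
    order for order in permutations(colours)
    if order.index("red") < order.index("blue")
]

# In how many of those is green preferred to blue?
green_beats_blue = [
    order for order in consistent
    if order.index("green") < order.index("blue")
]

print(consistent)                                    # R G B, R B G, G R B
print(len(green_beats_blue), "/", len(consistent))   # 2 / 3
```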
This is all fascinating, and the fact that even trained scientists get these things wrong is on one hand very relieving and on the other hand very scary. It shows, however, that no matter what you think about sense or sensibility, on some issues there's no substitute for diligent research and learning.
Wednesday, April 09, 2008
Probably too difficult