Thinking and Deciding, 4th Edition - by Jonathan Baron

The logical theory of probability

The logical theory of probability is the theory usually assumed in introductory treatments of probability, especially with respect to card games and dice. The theory is useful when we can define a set of logically equivalent, or exchangeable, propositions. The evidence for each of these propositions is the same, and any two of them can be exchanged without affecting our beliefs in their truth, so their probabilities must be the same. Typically, when we make calculations in card games, we regard every card remaining in a deck as equally likely to be drawn, because the only evidence we have for any particular card is simply that it is somewhere in the deck and we have exactly this evidence for each of the cards. The cards are logically equivalent.

A disadvantage is that it is usually impossible to find exchangeable events. I cannot sensibly enumerate equally likely possibilities in order to calculate the probability of rain tomorrow, in the way that I would enumerate the possible poker hands in order to calculate the probability of a flush. Therefore the logical theory renders many everyday uses of probability nonsensical. The logical view is ordinarily useless as a justification for making probability judgements, except in textbook (or casino) cases of "fair" (perfectly unbiased) coins and roulette wheels (what has been called the "ludic fallacy").
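
To make the contrast concrete, here is a minimal Python sketch (purely illustrative, not from the text) of the kind of enumeration the logical theory supports: every five-card hand is exchangeable, so the probability of a flush is a matter of simple counting. The count below includes straight flushes.

```python
from itertools import combinations
from math import comb

# A 52-card deck as (rank, suit) pairs.  Under the logical theory, each of
# the C(52, 5) possible five-card hands is exchangeable and so equally likely.
deck = [(rank, suit) for rank in range(13) for suit in "SHDC"]

# Count the hands in which all five cards share a suit
# (straight flushes are included in this count).
flush_hands = sum(
    1 for hand in combinations(deck, 5)
    if len({suit for _, suit in hand}) == 1
)

total_hands = comb(52, 5)            # 2,598,960 equally likely hands
print(flush_hands, total_hands)      # 5148 2598960
print(flush_hands / total_hands)     # about 0.00198
```

There is no analogous set of exchangeable "weather hands" to enumerate for tomorrow's rain, which is exactly the limitation described above.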

Inappropriate extreme confidence

One reason for inappropriately high confidence is failure to think of reasons why one might be wrong. Such inappropriate confidence could, in turn, cause a person to stop searching for alternative possibilities, leading to insufficient thinking. Another reason for inappropriate extreme confidence is that people may base their confidence on the apparent implication of the evidence they have, without first asking how credible that evidence is. More overconfidence is found when subjects know less about a topic. Subjects were more overconfident in predicting the behavior of someone else in a game than in predicting their own behavior. They seemed to ignore the fact that their evidence about the other person was weaker than the evidence about themselves. In general, the direction of the evidence is more salient than its weight or credibility, and subjects may not search beyond the direction.

The psychology of hypothesis testing

The congruence heuristic: "To test a hypothesis, think of a result that would be found if the hypothesis were true and then look for that result (and do not worry about other hypotheses that might yield the same result)." It appears that subjects in experiments asked questions designed to yield yes answers if their favored hypothesis were true, even when these were not the best questions.

Two prescriptive heuristics can be used to avoid the negative effects of the congruence heuristic. The first asks, "How likely is a yes answer, if I assume that my hypothesis is false?" A good test is one that has a high probability of some answer if the hypothesis is true and a low probability of that answer if it is false. The second heuristic advises, "Try to think of alternative hypotheses; then choose a test most likely to distinguish them -- a test that will probably give different results depending on which is true." For example, in medical diagnosis, before thinking about what test to do, we would think about what diseases ought to be considered. We would not just consider the most likely or the most dangerous but would look for alternatives as well. The advantage of this heuristic over the first is that, if we fail to consider alternatives, we may end up asking a question or doing a test that would have the same answer for several different hypotheses. For example, the white blood cell count may be high for other infections besides our favored hypothesis of appendicitis, so this test may not distinguish appendicitis from some of the major alternative hypotheses. By thinking of specific alternative hypotheses, we may get a better idea of the probability of a result if the favored hypothesis is false. This second heuristic may even be useful when we do not have a favored hypothesis. Both heuristics are candidates for prescriptive models for any situation in which one is evaluating hypotheses.
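
A rough numerical sketch of the first heuristic, with probabilities invented purely for illustration: a test is informative only to the extent that the probability of a positive result under the favored hypothesis differs from its probability under the alternatives.

```python
# Hypothetical diseases and prior probabilities (illustrative numbers only).
priors = {"appendicitis": 0.5, "gastroenteritis": 0.3, "other infection": 0.2}

# Hypothetical probability of a positive result for each test, given each disease.
p_positive = {
    "white blood cell count": {"appendicitis": 0.9, "gastroenteritis": 0.7,
                               "other infection": 0.8},
    "localized pain":         {"appendicitis": 0.8, "gastroenteritis": 0.2,
                               "other infection": 0.2},
}

for test, likelihood in p_positive.items():
    p_if_true = likelihood["appendicitis"]
    # P(positive | appendicitis is false): average over the alternatives,
    # weighted by their renormalized priors.
    alt_weight = sum(priors[d] for d in priors if d != "appendicitis")
    p_if_false = sum(priors[d] * likelihood[d]
                     for d in priors if d != "appendicitis") / alt_weight
    print(f"{test}: P(+|H) = {p_if_true:.2f}, "
          f"P(+|not H) = {p_if_false:.2f}, ratio = {p_if_true / p_if_false:.1f}")
```

On these made-up numbers the white cell count barely separates the hypotheses (a ratio near 1), while the second test separates them well, which is just what the second heuristic is designed to reveal by forcing the alternatives into view.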

Distortion of beliefs by desires

In general, people do not like to think of themselves as liars or bad decision makers, and they manipulate their own beliefs so as to convince themselves that they are not, and were not in the past. This appears to be a kind of wishful thinking, possibly also involving self-deception. When a person runs into evidence against this belief, evidence that a bad decision may have been made, the person changes his beliefs about his own desires ("I must really have wanted it, or I wouldn't have done it for so little money," or "put in so much effort," and so forth). These beliefs about desires, in turn, may influence the desires themselves. Just as we want to think of ourselves as good decision makers, we want to think of ourselves as good belief formers. When a belief is challenged, our first impulse is often to bolster it, in order to maintain our belief in our earlier intelligence. We want to have been right all along -- whereas it would be more reasonable to want to be right in the present (even if that means admitting error). This is what makes us into lawyers, hired by our own earlier views to defend them against all accusations, rather than detectives seeking the truth itself.

The role of regret in decisions

Individuals differ in their tendency to experience regret. Schwartz and his colleagues (2002) made up a five-item self-report questionnaire to measure this tendency. The items included: "Whenever I make a choice, I'm curious about what would have happened if I had chosen differently"; "If I make a choice and it turns out well, I still feel like something of a failure if I find out that another choice would have turned out better"; and "Once I make a decision, I don't look back" (reverse scored). This test correlated with a test of "maximization," the desire to get the best option in all choices. People who tended to maximize were also generally less satisfied with their consumer purchases. The opposite of a maximizer is a "satisficer," someone who is satisfied with what is good enough, even if it isn't the best. Maximizers tended to have a hard time making choices, and they often considered other options even after they had made a choice, e.g., checking radio stations other than the one they were listening to while in the car.
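
A minimal scoring sketch makes "reverse scored" concrete: agreement with the reversed item counts against the overall regret score. The 1-to-7 response format and the item keys below are assumptions for illustration, not details of the actual questionnaire.

```python
# Paraphrased item keys; True marks an item that is reverse scored,
# i.e., agreeing with it indicates LESS of a tendency to feel regret.
ITEMS = {
    "curious_about_alternatives": False,
    "feel_failure_if_other_choice_better": False,
    "once_decided_dont_look_back": True,      # reverse scored
}

def regret_score(responses, scale_max=7):
    """Average 1..scale_max agreement ratings, flipping reverse-scored items."""
    total = 0.0
    for item, is_reversed in ITEMS.items():
        r = responses[item]
        total += (scale_max + 1 - r) if is_reversed else r
    return total / len(ITEMS)

# Someone who dwells on foregone alternatives ends up with a high score.
print(regret_score({"curious_about_alternatives": 6,
                    "feel_failure_if_other_choice_better": 5,
                    "once_decided_dont_look_back": 2}))    # about 5.67
```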

Regret may also explain why people are more likely to settle for the default, which is what they get if they take no action (often nothing), when they are confronted with too many options. The difficulty of making a decision, the fear of not picking the best option, and the fear of the resulting regret, may lead people "not to choose" at all. (Of course, they do choose. They choose the default.)

The ambiguity effect

Subjects in experiments will pay money to avoid making choices in which the probabilities seem to be "unknown." A situation in which the "probability is unknown" is called ambiguous. Effects of ambiguity can influence our responses to real risks in the world. Some risks, such as those of nuclear power and DNA technology ("genetic engineering"), are perceived as ambiguous, and others, such as the risk of accidents in coal mining (which is very high), are perceived as known. If ambiguity affects our social choices, we will opt for the known risks over the unknown ones -- perhaps wrongly.

Ambiguity effects violate the sure-thing principle. In Chapter 10, I argued that this principle is normative, since outcomes that do not depend on our choice and that do not occur should not affect our utilities for outcomes that do occur. The personal theory of probability implies that the idea of "unknown" probabilities makes little sense. Because probabilities are properties of the person, not the world, the only way in which a probability could be "unknown" is for a person not to have reflected enough about the situation. To say that a probability is "unknown" is to assume that probabilities can be known only if relative frequencies have been observed or if the possibilities can be analyzed logically into exchangeable alternatives.

Ultimately, I would argue, the ambiguity effect is a kind of framing effect, dependent on the way a problem is described. Consider an apparently unambiguous case, in which an urn has fifty red balls and fifty white ones. It would seem that the probability of a red ball is .5, but think about the top layer of balls, from which the ball will actually be drawn. We have no idea what the proportion of red balls is in that layer; it could be anywhere from 100% to 0%. By thinking about the situation in this way, we have turned an unambiguous situation into an ambiguous one. In sum, ambiguity may be a result of our perception that important information is missing from the description of the decision. In the balls-and-urn example, we brought out the missing information by focusing attention on the top layer of balls. Information is always missing in any situation of uncertainty, though, and so we can make any situation ambiguous by attending to the missing information. Conversely, we can make any ambiguous situation into an unambiguous one by imagining it as one of a sequence of repeated trials.
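
A small simulation (a sketch of the argument, not anything from the original text beyond the urn itself) makes the point about the top layer concrete: the composition of the reachable layer really is unknown from draw to draw, yet averaging over that missing information leaves the probability of red at one half.

```python
import random

def draw_from_top_layer(layer_size=10):
    """Shuffle 50 red and 50 white balls, then draw only from the top layer."""
    urn = ["red"] * 50 + ["white"] * 50
    random.shuffle(urn)
    top_layer = urn[:layer_size]     # the only balls we could actually reach
    return random.choice(top_layer)

trials = 100_000
reds = sum(draw_from_top_layer() == "red" for _ in range(trials))
print(reds / trials)   # about 0.5, even though each layer's makeup is "unknown"
```

Treating the draw as one of a long series of such trials is precisely the move, described above, that turns an ambiguous description back into an unambiguous one.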

Emotional effects of the reference point

Our emotional reactions are stronger to events produced by action than to those produced by inaction. Kahneman and Miller (1986) suggested that "the affective response to an event is enhanced if its causes are abnormal" and that actions are considered abnormal because "it is usually easier to imagine abstaining from actions that one has carried out than carrying out actions that were not in fact performed." We have a tendency to see the omission as the reference point. Actions leading to the worse outcome are seen as more blameworthy than any omission. Conversely, actions leading to the better outcome are seen as better than any omission. In another study, it was found that anticipated joy in response to positive outcomes was stronger when the outcomes were the result of action rather than inaction. The emotional effects of outcomes are amplified when the outcomes result from actions. When we evaluate outcomes, we often look for points of comparison, or "norms," even when no decision was involved. Mental simulations of alternative outcomes function as reference points to which the real outcome is compared.

Choice under certainty

If our decisions were made by trying to maximize utility, they would not depend on the way we are asked about the decision or on the presence of options that we would reject. We may in fact try to maximize utility much of the time, but some of the time we use various heuristics instead. Our decision utility, what would be inferred from our choices, is thus different from our true (experienced) utility. For example, we ignore small differences among options, integrate outcomes when doing so increases the attractiveness of an option, make decisions according to the most "important" dimension, weigh losses more heavily than gains, look for simple reasons for choosing (such as dominance), and compare outcomes to convenient reference points such as the status quo or the default. Because we do not attend to all relevant attributes, our attention can be manipulated.

Many of these heuristics are reasonable ones. They save the time required for calculation or more elaborate analysis. If we use them knowingly for that purpose, we can avoid real trouble. When decisions are important, we can check to see that they are not misleading us. For example, we can try framing the decision in a different way. (Would I still refuse the second job offer, if I weren't using the earlier offer as a reference point?) Or we can try to carry out a rudimentary decision analysis. Or we can develop heuristics to counter the ones that get us into trouble. (To avoid the endowment effect for goods, imagine a choice between the good in question and the money, assuming you had neither. To counter the certainty effect, think about _un_certainties lying behind apparent certainties. For the effects of framing, compare everything to the status quo. For mental accounting, integrate everything possible.)

Heuristics become especially perilous when we become overconfident of their value, so that we reject superior procedures, or when we become committed to them as methods in themselves, without understanding that they are tools of convenience and not normative standards in their own right.

Quantitative judgment

Why are people so resistant to the making of decisions by formula? Universities and colleges typically pay an admissions staff substantial amounts to read each application and weigh all of the variables. Business managers devote great time and effort to such matters as the setting of sales and production quotas. In both of these cases, there is a need for human judgment, to be sure. Somebody has to read the letters of recommendation and (at least) assign numbers to them, but there is no apparent need for the human judge when it comes to combining all of the various pieces into a single overall judgment. This is done better by formula. One objection to formulas is the argument that individuals cannot be (or should not be) reduced to a single number. When we put individuals in rank order for some purpose, however, we are already reducing them to a single scale. The only issue is whether we do it well or badly, given all the goals we ought to have. Many of the people who object to the use of formulas are unaware of a hidden cause of their belief -- overconfidence in their own powers of judgment. We are not always, or even usually, good judges of our own ability as judges. As a result, we waste effort making judgments that could be made more accurately by formula. The use of a formula guarantees that errors will be made, but we hope, however vainly, that human judgments will be perfect. We need to "accept error to make less error."
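
The kind of formula at issue is not exotic. In a minimal sketch (with attributes and weights invented only for illustration), the human judges still read the file and rate each piece; the formula merely combines those ratings into the single scale by a weighted sum instead of a holistic impression.

```python
# Invented attributes and weights, for illustration only.
WEIGHTS = {"grades": 0.4, "test_score": 0.3, "letters": 0.2, "essay": 0.1}

def overall_score(ratings):
    """Combine 0-10 ratings on each attribute into one overall number."""
    return sum(WEIGHTS[attr] * ratings[attr] for attr in WEIGHTS)

applicants = {
    "A": {"grades": 8, "test_score": 6, "letters": 9, "essay": 7},
    "B": {"grades": 7, "test_score": 9, "letters": 6, "essay": 9},
}

# Rank applicants on the resulting single scale.
for name, ratings in sorted(applicants.items(),
                            key=lambda kv: overall_score(kv[1]),
                            reverse=True):
    print(name, round(overall_score(ratings), 2))
```

The formula accepts that it will sometimes misrank people; the claim in the text is only that holistic human combination misranks them more often, which is the sense of "accept error to make less error."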

The choice of personal goals

The choice of goals is a decision problem in which we evaluate the consequences of our decision just as we would evaluate the consequences of any other decision, except that the goals we apply to this decision include our goals for our future goals. We must evaluate the total consequence of having a new goal and being able to achieve it to whatever degree we expect. When we make career choices, the relevant question is not how much we now want to do the work of a scientist or teacher but rather how much we want the goals of a scientist and the level of achievement of them that we could expect. We may find that we do not want ourselves to have certain goals in the future, and we can even try to bind ourselves so that we do not develop them, as when a scientist refuses support from the military for fear that accepting it would change their political goals.

The creation of life plans and personal goals is often taken to be a kind of discovery, as though the answer were there all along and our task is only to find it, but what does it mean to say that the answer was "there all along"? The decision to get married, to have a child, to get involved in politics, to learn to play tennis, is usually more like taking on a new set of goals than it is like pursuing goals already present. It is as though we permit another person -- the "spouse" in us, the "parent," the "activist" -- to share our life. We cannot find our goals simply by asking what they have been or what other people would like them to be, for the goals we would choose in these ways could differ from the goals we would want to have for our own futures. Many popular ideas about good personal goals are simply attempts to hold up certain ways of living -- the fashionable life, competitive success, peace of mind, piety -- as better than others. They are not very helpful for planning our personal lives, because they do not provide reasons or evidence for choosing these personal goals instead of many other possibilities we could choose.

Risk

We can avoid the deleterious effects of intuitive thinking if we learn to think quantitatively. This does not mean that we must have numbers. It does mean that we realize that a well-made decision requires comparisons of quantities. If we have a feeling for the quantities, we can make a good guess at what the decision would be even if we do not know the numbers themselves. Quantitative thinking of this sort is not widespread. People do not even notice its absence. Many people, for example, say that decisions are "difficult" because there are costs as well as benefits. Many of these people do not seem to consider the possibility of looking at some overall measure, such as death rates or life expectancy, or doing something like a decision analysis. When we think quantitatively, we can make everyday decisions in a sensible way, and we will also know when some public decision, such as whether to build more nuclear power plants, requires a more detailed quantitative analysis by experts. We will also understand that it does not matter whether risks arise through action or omission, or whether the cause of a risk is human or natural. We will understand why we should not pay more to reduce a risk to zero, if we can use the money better in other ways by reducing some other risk imperfectly.
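
A toy comparison, with numbers invented purely for illustration, shows the arithmetic behind that last point: spending to drive a small risk to zero can avert far fewer expected deaths per dollar than spending the same amount to reduce a larger risk only partially.

```python
budget = 10_000_000   # the same hypothetical safety budget for each option

# Option 1: eliminate a small risk entirely (2 expected deaths averted per year).
option1_deaths_averted = 2

# Option 2: cut a larger risk in half (40 expected deaths per year -> 20 averted).
option2_deaths_averted = 40 * 0.5

for name, averted in [("eliminate the small risk", option1_deaths_averted),
                      ("halve the larger risk", option2_deaths_averted)]:
    print(f"{name}: about ${budget / averted:,.0f} per expected death averted")
```

On these made-up figures the partial reduction saves ten times as many expected lives per dollar, even though it leaves some risk in place.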