elephant in the room, n. An important issue that people are reluctant to acknowledge or address; a social taboo.
elephant in the brain, n. An important but unacknowledged feature of how our minds work; an introspective taboo.
Here is the thesis we’ll be exploring in this book: We, human beings, are a species that’s not only capable of acting on hidden motives—we’re designed to do it. Our brains are built to act in our self-interest while at the same time trying hard not to appear selfish in front of other people. And in order to throw them off the trail, our brains often keep “us,” our conscious minds, in the dark. The less we know of our own ugly motives, the easier it is to hide them from others. Self-deception is therefore strategic, a ploy our brains use to look good while behaving badly. Understandably, few people are eager to confess to this kind of duplicity. But as long as we continue to tiptoe around it, we’ll be unable to think clearly about human behavior. We’ll be forced to distort or deny any explanation that harks back to our hidden motives. Key facts will remain taboo, and we’ll forever be mystified by our own thoughts and actions. It’s only by confronting the elephant, then, that we can begin to see what’s really going on.
So what, exactly, is the elephant in the brain, this thing we’re reluctant to talk and think about? In a word, it’s selfishness—the selfish parts of our psyches. But it’s actually broader than that. Selfishness is just the heart, if you will, and an elephant has many other parts, all interconnected. So throughout the book, we’ll be using “the elephant” to refer not just to human selfishness, but to a whole cluster of related concepts: the fact that we’re competitive social animals fighting for power, status, and sex; the fact that we’re sometimes willing to lie and cheat to get ahead; the fact that we hide some of our motives—and that we do so in order to mislead others. We’ll also occasionally use “the elephant” to refer to our hidden motives themselves. To acknowledge any of these concepts is to hint at the rest of them. They’re all part of the same package, subject to the same taboo.
At least four strands of research all lead to the same conclusion—that we are, as the psychologist Timothy Wilson puts it, “strangers to ourselves”:
1. Microsociology. When we study how people interact with each other on the small scale—in real time and face to face—we quickly learn to appreciate the depth and complexity of our social behaviors and how little we’re consciously aware of what’s going on. These behaviors include laughter, blushing, tears, eye contact, and body language. In fact, we have so little introspective access into these behaviors, or voluntary control over them, that it’s fair to say “we” aren’t really in charge. Our brains choreograph these interactions on our behalf, and with surprising skill. While “we” anguish over what to say next, our brains manage to laugh at just the right moments, flash the right facial expressions, hold or break eye contact as appropriate, negotiate territory and social status with our posture, and interpret and react to all these behaviors in our interaction partners.
2. Cognitive and social psychology. The study of cognitive biases and self-deception has matured considerably in recent years. We now realize that our brains aren’t just hapless and quirky—they’re devious. They intentionally hide information from us, helping us fabricate plausible prosocial motives to act as cover stories for our less savory agendas. As the evolutionary biologist Robert Trivers puts it: “At every single stage [of processing information]—from its biased arrival, to its biased encoding, to organizing it around false logic, to misremembering and then misrepresenting it to others—the mind continually acts to distort information flow in favor of the usual goal of appearing better than one really is.” The psychologist Emily Pronin calls this the introspection illusion: the fact that we don’t know our own minds nearly as well as we pretend to. For the price of a little self-deception, we get to have our cake and eat it too: act in our own best interests without having to reveal ourselves as the self-interested schemers we often are.
3. Primatology. Humans are primates, specifically apes. Human nature is therefore a modified form of ape nature. And when we study primate groups, we notice a lot of Machiavellian behavior—sexual displays, dominance and submission, fitness displays (showing off), and political maneuvering. But when asked to describe our own behavior—why we bought that new car, say, or why we broke off a relationship—we mostly portray our motives as cooperative and prosocial. We don’t admit to nearly as much showing off and political jockeying as we’d expect from a competitive social animal. Something just doesn’t add up.
4. Economic puzzles. When we study specific social institutions—medicine, education, politics, charity, religion, news, and so forth—we notice that they frequently fall short of their stated goals. In many cases, this is due to simple execution failures. But in other cases, the institutions behave as though they were designed to achieve other, unacknowledged goals. Take school, for instance. We say that the function of school is to teach valuable skills and knowledge. Yet students don’t remember most of what they’re taught, and most of what they do remember isn’t very useful. Furthermore, our best research says that schools are structured in ways that actively interfere with the learning process, such as early wake-up times and frequent testing. Again, something doesn’t add up.
Our Thesis in Plain English
1: People are judging us all the time. They want to know whether we’ll make good friends, allies, lovers, or leaders. And one of the important things they’re judging is our motives. Why do we behave the way we do? Do we have others’ best interests at heart, or are we entirely selfish?
2: Because others are judging us, we’re eager to look good. So we emphasize our pretty motives and downplay our ugly ones. It’s not lying, exactly, but neither is it perfectly honest.
3: This applies not just to our words, but also to our thoughts, which might seem odd. Why can’t we be honest with ourselves? The answer is that our thoughts aren’t as private as we imagine. In many ways, conscious thought is a rehearsal of what we’re ready to say to others. As Trivers puts it, “We deceive ourselves the better to deceive others.”
4: In some areas of life, especially polarized ones like politics, we’re quick to point out when others’ motives are more selfish than they claim. But in other areas, like medicine, we prefer to believe that almost all of us have pretty motives. In such cases, we can all be quite wrong, together, about what drives our behavior.
PART I Why We Hide Our Motives
Social Grooming
The primatologist Robin Dunbar has spent much of his career studying social grooming, and his conclusion has since become the consensus among primatologists. Social grooming, he says, isn’t just about hygiene—it’s also about politics. By grooming each other, primates help forge alliances that help them in other situations. Note that these primates don’t need to be conscious of their political motivations. As far as natural selection is concerned, all that matters is that primates who do more social grooming fare better than primates who do less. Primates are thereby endowed with instincts that make them feel good when they groom each other, without necessarily understanding why they feel good.
There are facets of our evolutionary past that we spend less time poring over because we don’t like how they make us look. Like the proverbial drunk searching for his keys under the streetlight, we tend to look for answers where the looking is easy and pleasant. In this sense, our problem isn’t that the light is too dim, but that it’s too harsh. Consider these two broad “lights” where the keys to our big brains might be found:
Ecological challenges, such as warding off predators, hunting big game, domesticating fire, finding new food sources, and adapting rapidly to new climates. These activities pit humans against their environment and are therefore opportunities for cooperation.
Social challenges, such as competition for mates, jockeying for social status, coalition politics (alliances, betrayals, etc.), intra-group violence, cheating, and deception. These activities pit humans against other humans and are therefore competitive and potentially destructive.
Many of us would prefer the keys to our intelligence to be found somewhere in the pleasing light of ecological challenges, implying that our extra gray matter evolved in service of cooperation. “We grew smarter,” the story would go, “so we could learn more, collaborate better against the harsh external world, and improve outcomes for everyone”: win-win-win. But many signs suggest that the keys to our intelligence lie in the harsh, unflattering light of social challenges, the arena of zero-sum games in which one person’s gain is another’s loss. It’s not that we’re completely unaware of these competitive, zero-sum instincts—we just tend to give them less prominence when explaining our behavior.
The earliest Homo sapiens lived in small, tight-knit bands of 20 to 50 individuals. These bands were our “groves” or “forests,” in which we competed not for sunlight, as trees do, but for resources more befitting a primate: food, sex, territory, social status. And we had to earn these things, in part, by outwitting and outshining our rivals. This is what’s known in the literature as the social brain hypothesis, or sometimes the Machiavellian intelligence hypothesis. It’s the idea that our ancestors got smart primarily in order to compete against each other in a variety of social and political scenarios.
Signals and Signaling
A signal, in evolutionary biology, is anything used to communicate or convey information. Unblemished skin or fur, for example, is a signal of a healthy organism; compare a prize-winning beagle to a mangy mutt. A growl is a signal of aggression—and the growl’s depth is a signal of the creature’s size. Signals are said to be honest when they reliably correspond to an underlying trait or fact about the sender. Otherwise they are dishonest or deceptive.
The temptation to deceive is ubiquitous. Deception allows an agent to reap benefits without incurring costs. That’s why the best signals—the most honest ones—are expensive. More precisely, they are differentially expensive: costly to produce, but even more costly to fake.
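To make the logic concrete, here is a minimal sketch of the standard separating condition from costly-signaling theory; the notation is ours, not the book’s:

```latex
% Costly-signaling condition (our notation; an illustrative sketch).
% B   = benefit of being treated as a high-quality sender
% c_H = cost of producing the signal for a genuinely high-quality sender
% c_F = cost of faking the signal for a low-quality sender
\[
  c_H \;<\; B \;<\; c_F
\]
% When this inequality holds, honest senders come out ahead by
% signaling, while would-be fakers lose more than they gain, so the
% signal remains reliable.
```

The growl example fits this template: a deep growl is cheap for a genuinely large animal to produce but prohibitively hard for a small one to fake.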
Sometimes it’s even necessary to do something risky or wasteful in order to prove that you have a desirable trait. This is known as the handicap principle. It explains why species with good defense mechanisms, like skunks and poison dart frogs, evolve high-contrast colors: unless it can defend itself, an animal that stands out quickly becomes another animal’s lunch.
In the human social realm, honest signaling and the handicap principle are best reflected in the dictum, “Actions speak louder than words.” The problem with words is that they cost almost nothing; talk is usually too cheap. Which is a more honest signal of your value to a company: being told “great job!” or getting a raise?
One thing that makes signaling hard to analyze, in practice, is the phenomenon of countersignaling. For example, consider how someone can be either an enemy, a casual friend, or a close friend. Casual friends want to distinguish themselves from enemies, and they might use signals of warmth and friendliness—things like smiles, hugs, and remembering small details about each other. Meanwhile, close friends want to distinguish themselves from casual friends, and one of the ways they can do it is by being unfriendly, at least on the surface. When a close friend forgets his wallet and can’t pay for lunch, you might call him an idiot. This works only when you’re so confident of your friendship that you can (playfully) insult him, without worrying that it will jeopardize your friendship. This isn’t something a casual friend can get away with as easily, and it may even serve to bring close friends closer together. Thus signals are often arranged into a hierarchy, from non-signals to signals to countersignals. Outsiders to an interaction may not always be able to distinguish non-signals from countersignals. But insiders usually know how to interpret them, if only on an intuitive level.
Norms
Human groups develop norms because they (typically) benefit the majority of people in the group. Now, some norms, especially top-down laws, can be oppressive or extractive and an overall detriment to the societies that enforce them. But most norms—especially of the bottom-up, grassroots variety—are beneficial; they’re one of the main ways we suppress competition and promote cooperation. In other words, we hold ourselves back, collectively, for our own good.
But the threat of some kind of punishment must always be present, or a “norm” is little more than hot air. “Covenants,” says Thomas Hobbes, “without the sword, are but words.” Similarly, you can’t have enforcement without creating a de facto norm, regardless of whether you’re willing to admit that it’s a norm or not. In cults of personality, for example, such as those that formed around Mao Zedong or Steve Jobs, criticizing the leader is often frowned upon and punished, even by people other than the leaders themselves, and even when “criticizing the leader” isn’t officially forbidden. The essence of a norm, then, lies not in the words we use to describe it, but in which behaviors get punished and what form the punishment takes.
If you refrain from hitting people because you’re afraid they’ll hit you back, that’s not a norm. If you’re afraid of speaking out against a dangerous regime because you’re worried about retaliation from the regime itself, that’s not a norm. But if you’re worried that your neighbors might disapprove and even coordinate to punish you, then you’re most likely dealing with a norm. It’s this third-party, collective enforcement that’s unique to humans.
Among laypeople, gossip gets a pretty bad rap. But anthropologists see it differently. Gossip—talking about people behind their backs, often focusing on their flaws or misdeeds—is a feature of every society ever studied. And while it can often be mean-spirited and hurtful, gossip is also an important process for curtailing bad behavior, especially among powerful people.
Reputation is also important for incentivizing people to help enforce norms. Standing up to norm violators can be risky, especially when they’re powerful. It’s rarely in people’s best interests to stick out their necks to punish transgressors. But throw some reputation into the mix and it can suddenly become profitable. Someone who helps evict a cheater will be celebrated for her leadership. Who would you rather team up with: someone who stands by while rules are flouted, or someone who stands up for what’s right?
Cheating
Why do we cheat? It’s simple: cheating lets us reap benefits without incurring the typical costs.
For a piece of information to be “common knowledge” within a group of people, it’s not enough simply for everyone to know it. Everyone must also know that everyone else knows it, and know that everyone knows that everyone knows it, and so on. It could as easily be called “open” or “conspicuous” knowledge.
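The same idea has a compact formulation in epistemic logic; here is a minimal sketch (the notation is ours, not the book’s):

```latex
% Common knowledge, written in epistemic-logic style (our notation).
% E(p) = "everyone in the group knows p"
\[
  C(p) \;=\; p \,\wedge\, E(p) \,\wedge\, E(E(p)) \,\wedge\, E(E(E(p))) \,\wedge\, \cdots
\]
% p is common knowledge only if every level of this infinite hierarchy
% holds, which is why "open" or "conspicuous" knowledge is apt: the
% fact must be out in the open for all the higher levels to follow.
```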
Cheating is largely an exercise in discretion; in order to get away with something, you need to keep others from finding out about it. Sometimes only one dimension of secrecy is relevant. When you cheat on a test, for example, all that matters is whether one particular person—the professor—finds out. Conversely, when you drink alcohol on the street, it matters very little which particular people, or even how many of them, realize what you’re doing; what matters more is how openly it’s known. And this is where a thin brown bag can make all the difference. If you brazenly flaunt an open beer bottle, the police are likely to give you trouble. This is because when you drink openly, it’s clear not only to the police that you’re breaking the law, but also to every passing citizen, including the most prudish members of the morality brigade (as well as impressionable children and their concerned parents). A police officer who turns a blind eye to conspicuous public drinking is open to a lot more criticism, from everyone involved, than an officer who ignores discreet public drinking. In this case, the brown paper bag doesn’t fool the police officers themselves, but it provides them with just enough cover to avoid taking flak from their constituents.
The takeaway for the would-be cheater is that anything that hampers enforcement (or prosecution) will improve the odds of getting away with a crime. This is where discretion comes in. Such discretion can take many forms:
Pretexts. These function as ready-made excuses or alibis.
Discreet communication. Keeping things on the down-low.
Skirting. Evading a norm instead of violating it outright.
Subtlety. In honor cultures, an open insult is considered ample provocation for violence. In contrast, an insult that’s subtle enough not to land “on the record” will often get a pass.
All of these techniques work by the same mechanism: they prevent a norm violation from becoming full common knowledge, which makes it more difficult to prosecute.
Self-Deception
Here’s the puzzle: we don’t just deceive others; we also deceive ourselves. Our minds habitually distort or ignore critical information in ways that seem, on the face of it, counterproductive. Our mental processes act in bad faith, perverting or degrading our picture of the world. In common speech, we might say that someone is engaged in “wishful thinking” or is “burying her head in the sand”—or, to use a more colorful phrase, that she’s “drinking her own Kool-Aid.”
Self-deception is useful only when you’re playing against an opponent who can take your mental state into account. You can’t bluff the blind forces of Nature, for example. When a hurricane is roaring toward you, it’s no use trying to ignore it; the hurricane couldn’t care less whether or not you know it’s coming. Sabotaging yourself works only when you’re playing against an opponent with a theory of mind. Typically these opponents will be other humans, but it could theoretically extend to some of the smarter animals, as well as hypothetical future robots or aliens. Corporations and nation-states also use some of these self-sabotaging tactics vis-à-vis each other and the public at large. Self-deception, then, is a tactic that’s useful only to social creatures in social situations.
It’s hard to overstate the impact of what Thomas Schelling, Robert Trivers, Robert Kurzban, and others are arguing. Their conclusion is that we, humans, must self-deceive. Those who refuse to play such mind games will be at a game-theoretic disadvantage relative to others who play along. Thus we are often wise to ignore seemingly critical information and to believe easily refuted falsehoods—and then to prominently advertise our distorted thinking—because these are winning moves. As Trivers puts it, “We deceive ourselves the better to deceive others.”
Still, there’s an important lingering question. If the goal of self-deception is to create a certain impression in others, why do we distort the truth to ourselves? What’s the benefit of self-deception over a simple, deliberate lie? There are many ways to answer this question, but they mostly boil down to the fact that lying is hard to pull off. For one thing, it’s cognitively demanding. It’s also difficult because we have to overcome our fear of getting caught.
In light of this, often the best way to get others to believe something is to make it a reality. When you’re playing chicken, it won’t do much good to yell at your opponent, “Hey, I’ve torn off my steering wheel!” He won’t believe you until he sees that you’ve actually done it. Similarly, often the best way to convince others that we believe something is to actually believe it. Other people aren’t stupid. They’re aware that we often have an incentive to lie to them, so they’re watching us, eagle-eyed, for any signs of deception. They’re analyzing our words (often comparing them to things we said days, weeks, or months ago), scrutinizing our facial expressions, and observing our behaviors to make sure they conform to our stated motives. The point is, our minds aren’t as private as we like to imagine. Other people have partial visibility into what we’re thinking. Faced with the translucency of our own minds, then, self-deception is often the most robust way to mislead others. It’s not technically a lie (because it’s not conscious or deliberate), but it has a similar effect. “We hide reality from our conscious minds,” says Trivers, “the better to hide it from onlookers.”
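Schelling’s commitment logic can be sketched with a toy game of chicken. The payoff numbers below are our own illustrative assumptions, not the book’s; the point is only that visibly deleting your own “swerve” option flips your opponent’s best response:

```python
# Toy game of chicken illustrating Schelling-style commitment.
# The payoff numbers are illustrative assumptions, not from the book.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("swerve", "swerve"): (0, 0),
    ("swerve", "straight"): (-1, 1),       # I chicken out, they win
    ("straight", "swerve"): (1, -1),       # they chicken out, I win
    ("straight", "straight"): (-10, -10),  # head-on crash: worst for both
}

def best_response(my_visible_move):
    """The opponent's payoff-maximizing reply to the move I'm committed to."""
    return max(("swerve", "straight"),
               key=lambda theirs: PAYOFFS[(my_visible_move, theirs)][1])

# If they believe I'll swerve, their best response is to drive straight.
print(best_response("swerve"))    # -> "straight"

# Tearing off my steering wheel visibly removes "swerve" from my option
# set: I'm locked into "straight". Their best response flips, because
# swerving (-1) now beats crashing (-10).
print(best_response("straight"))  # -> "swerve"
```

The commitment helps only because the opponent can verify it, which is the same reason genuinely believing something beats merely asserting it.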
Wear a mask long enough and it becomes your face. Play a role long enough and it becomes who you are. Spend enough time pretending something is true and you might as well believe it. Incidentally, this is why politicians make a great case study for self-deception. The social pressure on their beliefs is enormous. Psychologically, then, politicians don’t so much “lie” as regurgitate their own self-deceptions. Both are ways of misleading others, but self-deceptions are a lot harder to catch and prosecute.
Instead of a single monolithic process or small committee, modern psychologists see the brain as a patchwork of hundreds or thousands of different parts or “modules,” each responsible for a slightly different information-processing task. Some modules take care of low-level tasks like detecting edges in the visual field or flexing a muscle. Others are responsible for medium-sized operations like walking and conjugating verbs. Still higher-level modules (which are themselves composed of many lower-level modules) are responsible for things like detecting cheaters and managing our social impressions. The point is that there are many different systems in the brain, each connected to other systems but also partially isolated from each other. The artificial intelligence researcher Marvin Minsky famously described this arrangement as the “society of mind.” And like a society, the mind can be carved up in different ways for different purposes. And crucially, as the psychologist Jonathan Haidt stressed, the different parts don’t always agree. A fact might be known to one system and yet be completely concealed or cut off from other systems. Or different systems might contain mutually inconsistent models of the world.
What this means for self-deception is that it’s possible for our brains to maintain a relatively accurate set of beliefs in systems tasked with evaluating potential actions, while keeping those accurate beliefs hidden from the systems (like consciousness) involved in managing social impressions. In other words, we can act on information that isn’t available to our verbal, conscious egos. And conversely, we can believe something with our conscious egos without necessarily making that information available to the systems charged with coordinating our behavior.
This sort of division of mental labor is simply good mind design. As psychologists Douglas Kenrick and Vladas Griskevicius put it, “Although we’re aware of some of the surface motives for our actions, the deep-seated evolutionary motives often remain inaccessible, buried behind the scenes in the subconscious workings of our brains’ ancient mechanisms.” Thus the very architecture of our brains makes it possible for us to behave hypocritically—to believe one set of things while acting on another. We can know and remain ignorant, as long as it’s in separate parts of the brain.
Self-discretion is perhaps the most important and subtle mind game that we play with ourselves in the service of manipulating others. This is our mental habit of giving less psychological prominence to potentially damaging information. It differs from the most blatant forms of self-deception, in which we actively lie to ourselves (and believe our own lies). It also differs from strategic ignorance, in which we try our best not to learn potentially dangerous information.

Picture the mind as a society of little modules, systems, and subselves chattering away among themselves. This chatter is largely what constitutes our inner mental life, both conscious and unconscious. Self-discretion, then, consists of discretion among different brain parts. When part of the brain has to process a sensitive piece of information—wanting to get the upper hand in a particular interaction, for example—it doesn’t necessarily make a big conscious fuss about it. Instead, we might just feel vaguely uneasy until we’ve gained the upper hand, whereupon we’ll feel comfortable ending the conversation. At no point does the motive “Get the upper hand” rise to full conscious attention, but the same result is accomplished discreetly.

Information is sensitive in part because it can threaten our self-image and therefore our social image. So the rest of the brain conspires—whispers—to keep such information from becoming too prominent, especially in consciousness. In this sense, the Freuds were right: the conscious ego needs to be protected, not because we are fragile, but to keep damaging information from leaking out of our brains and into the minds of our associates.

Self-discretion can be very subtle. When we push a thought “deep down” or to the “back of our minds,” it’s a way of being discreet with potentially damaging information. When we spend more time and attention dwelling on positive, self-flattering information, and less time and attention dwelling on shameful information, that’s self-discretion.
Counterfeit Reasons
Rationalization, sometimes known to neuroscientists as confabulation, is the production of fabricated stories without any conscious intention to deceive. They’re not lies, exactly, but neither are they the honest truth. Humans rationalize about all sorts of things: beliefs, memories, statements of “fact” about the outside world. But few things seem as easy for us to rationalize as our own motives. When we make up stories about things outside our minds, we open ourselves up to fact-checking. People can argue with us: “Actually, that’s not what happened.” But when we make up stories about our own motives, it’s much harder for others to question us—outside of a psychology lab, at least. And we have strong incentives to portray our motives in a flattering light, especially when they’re the subject of norm enforcement.
Rationalization is a kind of epistemic forgery, if you will. When others ask us to give reasons for our behavior, they’re asking about our true, underlying motives. So when we rationalize or confabulate, we’re handing out counterfeit reasons. We’re presenting them as an honest account of our mental machinations, when in fact they’re made up from scratch. When we use the term “motives,” we’re referring to the underlying causes of our behavior, whether we’re conscious of them or not. “Reasons” are the verbal explanations we give to account for our behavior. Reasons can be true, false, or somewhere in between (e.g., cherry-picked).
Above all, it’s the job of our brain’s Press Secretary to avoid acknowledging our darker motives—to tiptoe around the elephant in the brain. Just as a president’s press secretary should never acknowledge that the president is pursuing a policy in order to get reelected or to appease his financial backers, our brain’s Press Secretary will be reluctant to admit that we’re doing things for purely personal gain, especially when that gain may come at the expense of others. To the extent that we have such motives, the Press Secretary would be wise to remain strategically ignorant of them.

What’s more—and this is where things might start to get uncomfortable—there’s a very real sense in which we are the Press Secretaries within our minds. In other words, the parts of the mind that we identify with, the parts we think of as our conscious selves (“I,” “myself,” “my conscious ego”), are the ones responsible for strategically spinning the truth for an external audience.

This realization flies in the face of common sense. In everyday life, there’s a strong bias toward treating the self as the mind’s ultimate decision-maker—the iron-fisted monarch, or what the philosopher Daniel Dennett calls the mind’s Boss or Central Executive. As Harry Truman said about his presidency, “The buck stops here”—and we often imagine the same is true of the self. But the conclusion from the past 40 years of social psychology is that the self acts less like an autocrat and more like a press secretary. In many ways, its job—our job—isn’t to make decisions, but simply to defend them. “You are not the king of your brain,” says Steven Kaas. “You are the creepy guy standing next to the king going, ‘A most judicious choice, sire.’” In other words, even we don’t have particularly privileged access to the information and decision-making that goes on inside our minds. We think we’re pretty good at introspection, but that’s largely an illusion. In a way we’re almost like outsiders within our own minds.
Perhaps no one understands this conclusion better than Timothy Wilson, a social psychologist who’s made a long career studying the perils of introspection. Starting with an influential paper published in 1977 and culminating in his book Strangers to Ourselves, Wilson has meticulously documented how shockingly little we understand about our own minds. Wilson writes about the “adaptive unconscious,” the parts of the mind which lie outside the scope of conscious awareness, but which nevertheless give rise to many of our judgments, emotions, thoughts, and even behaviors. “To the extent that people’s responses are caused by the adaptive unconscious,” writes Wilson, “they do not have privileged access to the causes and must infer them.”
This, then, is the key sleight-of-hand at the heart of our psychosocial problems: We pretend we’re in charge, both to others and even to ourselves, but we’re less in charge than we think. We pose as privileged insiders, when in fact we’re often making the same kind of educated guesses that any informed outsider could make. We claim to know our own minds, when, as Wilson says, we’re more like “strangers to ourselves.” The upshot is that every time we give a reason, there’s a risk we’re just making things up. Every “because” clause, every answer to a “Why?” question, every justification or explanation of a motive—every one of these is suspect. Not all will turn out to be rationalizations, but any of them could be, and a great many are.
We have many reasons for our behaviors, but we habitually accentuate and exaggerate our pretty, prosocial motives and downplay our ugly, selfish ones.
PART II Hidden Motives in Everyday Life
Body Language
Humans are strategically blind to body language because it often betrays our ugly, selfish, competitive motives. To acknowledge the signals sent by our bodies “feels dangerous to some people,” say Alex Pentland and Tracy Heibeck, “as if we were admitting that we are ruled by some base animal nature.”
The point is, body “language” isn’t just a way to communicate. It’s also functional; it has material consequences. If we lunge aggressively toward another person, for instance, we’d better be prepared to fight. And owing to these consequences, body language is inherently more honest than verbal language. It’s easy to talk the talk, but harder to walk the walk. This is the principle of honest signaling. Signals need to be expensive so they’re hard to fake. More precisely, they need to be differentially expensive—more difficult to fake than to produce by honest means. In the human realm, we find honest signals underlying much of our body language. An open posture makes a person vulnerable, for example, which is more dangerous (i.e., costly) for people in tense situations than for people in calm situations. An open posture is therefore an honest signal of comfort. Similarly, it’s dangerous to hug someone when you feel threatened by them, ensuring that a hug remains an honest signal of trust and friendship. And so it’s this quality—honesty—that makes body language an ideal medium for coordinating some of our most important activities. It’s simply too easy, too tempting, to lie with words. So in matters of life, death, and finding mates, we’re often wise to shut up and let our bodies do the talking.
Of all the signals sent and received by our bodies, the ones we seem least aware of are those related to social status. And yet, we’re all downright obsessed with our status, taking great pains to earn it, gauge it, guard it, and flaunt it. This is a source of great dramatic irony in human life.

Because of their privileged position, high-status individuals have less to worry about in social situations. They’re less likely to be attacked, for example, and if they are attacked, others are likely to come to their aid. This allows them to maintain more relaxed body language. They speak clearly, move smoothly, and are willing to adopt a more open posture. Lower-status individuals, however, must constantly monitor the environment for threats and be prepared to defer to higher-status individuals. As a result, they glance around, speak hesitantly, move warily, and maintain a more defensive posture.

High-status individuals are also willing to call more attention to themselves. When you’re feeling meek, you generally want to be a wallflower. But when you’re feeling confident, you want the whole world to notice. In the animal kingdom, this “Look at me!” strategy is known as aposematism. It’s a quintessentially honest signal. Those who call attention to themselves are more likely to get attacked—unless they’re strong enough to defend themselves. If you’re the biggest male lion on the savanna, go ahead, roar your heart out. The same principle explains why poisonous animals, like coral snakes and poison dart frogs, wear bright warning colors. They may not look too tough, but they’re packing heat. In the human realm, aposematism underlies a wide variety of behaviors, such as wearing bright clothes, sparkling jewelry, or shoes that clack loudly on the pavement. Wearing prominent collars, headdresses, and elaborate up-dos and swaggering down the street with a blaring boom box all imply the same thing: “I’m not afraid of calling attention to myself, because I’m powerful.”
But status is more than just an individual attribute or attitude—it’s fundamentally an act of coordination. When two people differ in status, both have to modify their behavior. Typically the higher-status person will take up more space, hold eye contact for longer periods of time (more on this in just a moment), speak with fewer pauses, interrupt more frequently, and generally set the pace and tenor of interaction. The lower-status person, meanwhile, will typically defer to the higher-status person in each of these areas, granting him or her more leeway, both physically and socially. In order to walk together, for example, the lower-status person must adjust to match the gait of the higher-status person. Most of the time, these unconscious status negotiations proceed smoothly. But when people disagree about their relative status, nonverbal coordination breaks down—a result we perceive as social awkwardness (and sometimes physical awkwardness as well).
Depending on the type of status at play in a given interaction—dominance or prestige—the participants will adopt different patterns of body language. This becomes especially clear when we consider eye contact. In contexts governed by dominance, eye contact is considered an act of aggression. It’s therefore the prerogative of the dominant to stare at whomever he or she pleases, while submissives must refrain from staring directly at the dominant. When a dominant and a submissive make eye contact, the submissive must look away first. To continue staring would be a direct challenge. Now, submissives can’t avoid looking at dominants entirely. They need to monitor them to see what they’re up to (e.g., in order to move out of their space). So instead, submissives resort to “stealing” quick, furtive glances. You can think of personal information as the key resource that dominant individuals try to monopolize for themselves. They use their eyes to soak up personal info about the other members of the group, but try to prevent others from learning about them. In contexts governed by prestige, however, eye contact is considered a gift: to look at someone is to elevate that person. In prestige situations, lower-status individuals are ignored, while higher-status individuals bask in the limelight. In this case, attention (rather than information) is the key resource, which lower-status admirers freely grant to higher-status celebrities.
Social status influences how we make eye contact, not just while we listen, but also when we speak. In fact, one of the best predictors of dominance is the ratio of “eye contact while speaking” to “eye contact while listening.” Psychologists call this the visual dominance ratio. Imagine yourself out to lunch with a coworker. When it’s your turn to talk, you spend some fraction of the time looking into your coworker’s eyes (and the rest of the time looking away). Similarly, when it’s your turn to listen, you spend some fraction of the time making eye contact. If you make eye contact for the same fraction of time while speaking and listening, your visual dominance ratio will be 1.0, indicative of high dominance. If you make less eye contact while speaking, however, your ratio will be less than 1.0 (typically hovering around 0.6), indicative of low dominance.
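As a worked example of that arithmetic (the durations below are invented for illustration; only the 0.6-versus-1.0 interpretation comes from the passage):

```python
# Visual dominance ratio: the fraction of time spent making eye contact
# while speaking, divided by the fraction while listening. The example
# durations are invented; the 0.6 and 1.0 benchmarks follow the passage.

def visual_dominance_ratio(eye_speaking, speaking_time,
                           eye_listening, listening_time):
    speaking_fraction = eye_speaking / speaking_time
    listening_fraction = eye_listening / listening_time
    return speaking_fraction / listening_fraction

# Low dominance: eye contact 48% of the time while speaking (144s of
# 300s) but 80% of the time while listening (240s of 300s).
print(round(visual_dominance_ratio(144, 300, 240, 300), 2))  # 0.6

# High dominance: equal eye-contact fractions while speaking and listening.
print(round(visual_dominance_ratio(240, 300, 240, 300), 2))  # 1.0
```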
The three areas of social life we’ve examined in this chapter—sex, politics, and status—are laced with norms governing our behavior. What we may hope to accomplish in each area is often at odds with the interests of others, which can easily lead to conflict. That’s why societies have so many norms to regulate behavior in these areas, and why we (as individuals) must take pains to conduct ourselves discreetly. As a medium of communication, body language gives us just the cover we need. Relative to spoken language, it’s considerably more ambiguous. While the overall patterns of body language may be consistent, any isolated behavior will have many interpretations. Such ambiguity, as we’ve seen in earlier chapters, can be a feature rather than a bug—especially when we’re trying to hide our intentions from others.
Consider how we use our bodies to “say” a lot of things we’d get in trouble for saying out loud. It would be appallingly crass to announce, “I’m the most important person in the room”—but we can convey the same message, discreetly, simply by splaying out on a couch or staring at people while talking to them. Similarly, “I’m attracted to you” is too direct to state out loud to someone you just met—but a smile, a lingering glance, or a friendly touch on the wrist can accomplish the same thing, with just enough plausible deniability to avoid ruffling feathers. The point is, relative to spoken messages, nonverbal messages are much harder to pin down precisely, making it easier to avoid accusations of impropriety.
This is the magic of nonverbal communication. It allows us to pursue illicit agendas, even ones that require coordinating with other people, while minimizing the risk of being attacked, accused, gossiped about, and censured for norm violations. This is one of the reasons we’re strategically unaware of our own body language, and it helps explain why we’re reluctant to teach it to our children.
Laughter
Laughter is an involuntary behavior. It’s not something we actively decide to do; our brains simply do it, naturally and spontaneously. In this way, laughter is similar to other involuntary behaviors like breathing, blinking, flinching, hiccuping, shivering, and vomiting. But whereas these are merely physiological, laughter is an involuntary social behavior. We use laughter to flirt, bond with friends, mock our enemies, probe social norms, and mark the boundaries of our social groups. It’s a response to social cues, laced with interpersonal significance, and yet “we”—the conscious, deliberate, willful parts of our minds—don’t get to decide when we do it.
Laughter is necessarily coupled with play. If the mood is serious, a terrible face will elicit a scream, but if the mood is playful, the very same stimulus will elicit a laugh. We can actually distinguish two closely related meanings of laughter, depending on context.

When we laugh at our own actions, it’s a signal to our playmates that our intentions are ultimately playful (although we may seem aggressive). This is the kind of laugh a young child might give after play hitting an adult or other child, or that adults give when they’re gently poking fun at someone. It’s the behavioral equivalent of “Just kidding!” or a winking emoji at the end of a text message.

When we laugh in response to someone else’s actions, however, it’s a statement not about intentions but about perceptions. It says, “I perceive your actions as playful; I know you’re only kidding around.” This is reactive laughter, the kind elicited in response to an external stimulus. Jokes and other forms of humor are one such stimulus, but being tickled, chased, or surprised in a game of peekaboo all work the same way.

Both uses of laughter function as reassurances: “In spite of what might seem serious or dangerous, I’m still feeling playful.” And the “in spite of” clause is important. We don’t laugh continuously throughout a play session, only when there’s something potentially unpleasant to react to. Like all acts of communication, laughter must strive to be relevant. When it’s obvious that everyone is safe and happy—while quietly playing Monopoly, for example—there’s no need to belabor the obvious. We need to reinforce that “We’re just playing” only when circumstances arise that might, if not for the laughter, be mistaken for something too serious or dangerous.
In the broadest sense, there are at least two ways to use the danger of norms for comedic effect. The first is to feint across the norm boundary, but then retreat back to safety without actually violating it. The second way is to step across the boundary, violating the norm, and then to realize, like a child jumping into snow for the first time, “It’s safe over here! Wheee!”
A real danger of laughter, then, is the fact that we don’t all share the same norms to the same degree. What’s sacred to one person can be an object of mere play to another. And so when we laugh at norm violations, it often serves to weaken the norms that others may wish to uphold. This helps explain why people charged with maintaining the highest standards of propriety—schoolmarms, religious leaders, the guardians in Plato’s Republic, the Chinese officials who banned puns—have an interest in tamping down on laughter and humor.
Laughter may not be nearly as expressive as language, but it has two properties that make it ideal for navigating sensitive topics. First, it’s relatively honest. With words, it’s too easy to pay lip service to rules we don’t really care about, or values that we don’t genuinely feel in our gut. But laughter, because it’s involuntary, doesn’t lie—at least not as much. “In risu veritas,” said James Joyce; “In laughter, there is truth.” Second, laughter is deniable. In this way, it gives us safe harbor, an easy out. When someone accuses us of laughing inappropriately, it’s easy to brush off. “Oh, I didn’t really understand what she meant,” we might demur. Or, “Come on, lighten up! It was only a joke!” And we can deliver these denials with great conviction because we really don’t have a clear understanding of what our laughter means or why we find funny things funny. Our brains just figure it out, without burdening “us” with too many damning details.
Conversation
Every remark made by a speaker contains two messages for the listener: text and subtext. The text says, “Here’s a new piece of information,” while the subtext says, “By the way, I’m the kind of person who knows such things.” Sometimes the text is more important than the subtext, as when a friend gives you a valuable stock tip. But frequently, it’s the other way around. When you’re interviewing someone for a job, for example, you aren’t trying to learn new domain knowledge from the job applicant, but you might discuss a topic in order to gauge the applicant as a potential coworker. You want to know whether the applicant is sharp or dull, plugged-in or out of the loop. You want to know the size and utility of the applicant’s “backpack,” the stock of tools and knowledge he or she carries around. In casual conversation, listeners have a mixture of these two motives. To some extent we care about the text, the information itself, but we also care about the subtext, the speaker’s value as a potential ally. In this way, every conversation is like a (mutual) job interview, where each of us is “applying” for the role of friend, lover, or leader. Conversation, therefore, looks on the surface like an exercise in sharing information, but subtextually, it’s a way for speakers to show off their wit, perception, status, and intelligence, and (at the same time) for listeners to find speakers they want to team up with. These are two of our biggest hidden motives in conversation.
This view of talking—as a way of showing off one’s “backpack”—explains the puzzles we encountered earlier, the ones that the reciprocal-exchange theory had trouble with. For example, it explains why we see people jockeying to speak rather than sitting back and “selfishly” listening—because the spoils of conversation don’t lie primarily in the information being exchanged, but rather in the subtextual value of finding good allies and advertising oneself as an ally. And in order to get credit in this game, you have to speak up; you have to show off your “tools.” It also explains why people don’t keep track of conversational debts—because there is no debt. The act of speaking is a reward unto itself, at least insofar as your remarks are appreciated. You can share information with 10 or 100 people at once, confident that if you speak well, you’ll be rewarded at the subtextual level. But why do speakers need to be relevant in conversation? If speakers deliver high-quality information, why should listeners care whether the information is related to the current topic? A plausible answer is that it’s simply too easy to rattle off memorized trivia. You can recite random facts from the encyclopedia until you’re blue in the face, but that does little to advertise your generic facility with information. Similarly, when you meet someone for the first time, you’re more eager to sniff each other out for this generic skill, rather than to exchange the most important information each of you has gathered to this point in your lives. In other words, listeners generally prefer speakers who can impress them wherever a conversation happens to lead, rather than speakers who steer conversations to specific topics where they already know what to say.
Consumption
The idea that we use purchases to flaunt our wealth is known as conspicuous consumption. It’s an accusation that we buy things not so much for purely personal enjoyment as for showing off or “keeping up with the Joneses.”
Peer pressure is a powerful force, and advertisers know how to harness it to their advantage. Lifestyle or image-based advertising influences us by way of the third-person effect, rather than (or in addition to) Pavlovian training. The next time you see a brand advertisement for a popular consumer product, try asking yourself how the ad might be preying on your signaling instincts.
To sum up, we are conspicuous consumers in more varied and subtle ways than most of us realize. Advertisers understand this part of human nature and use it to their advantage. But ads aren’t necessarily preying on our irrational emotions, brainwashing us into buying things that aren’t useful to us. Instead, by creating associations that exist out in the broader culture—not just in our own heads, but in the heads of third parties—ads turn products into a vocabulary that we use to express ourselves and signal our good traits.
Art
Art poses a challenge for evolutionary thinkers. It’s a costly behavior, both in time and energy, but at the same time it’s impractical. Art doesn’t put food on your table, look after your children, or keep you warm at night—at least not directly. So art, on its face, seems like a waste of time and energy. And natural selection doesn’t look kindly on waste. How, then, did our instincts for art evolve?
In his book The Mating Mind, the evolutionary psychologist Geoffrey Miller gives a promising answer. Miller argues that while ecological selection (the pressure to survive) abhors waste, sexual selection often favors it. The logic is that we prefer mates who can afford to waste time, energy, and other resources. What’s valuable isn’t the waste itself, but what the waste says about the survival surplus—health, wealth, energy levels, and so forth—of a potential mate.
Human art is more than just a courtship display, that is, an advertisement of the artist’s value as a potential mate. It also functions as a general-purpose fitness display, that is, an advertisement of the artist’s health, energy, vigor, coordination, and overall fitness. Fitness displays can be used to woo mates, of course, but they also serve other purposes like attracting allies or intimidating rivals. And humans use art for all of these things.
Importantly, human artists don’t need to be conscious of this motive. Humans, as we’ve seen many times throughout the book, are adept at acting on unconscious motives, especially when the motive in question (e.g., showing off) is antisocial and norm-violating. What’s important isn’t whether we’re aware that we’re using art as a fitness display, but rather the fact that art works as a fitness display. It serves a useful and important purpose, both to artists and consumers, so we shouldn’t be surprised to find ourselves endowed with instincts both to make and enjoy art.
This way of approaching art—of looking beyond the object’s intrinsic properties in order to evaluate the effort and skill of the artist—is endemic to our experience of art. In everything that we treat as a work of “art,” we care about more than the perceptual experience it affords. In particular, we care about how it was constructed and what its construction says about the virtuosity of the artist. Consider our emphasis on originality in works of art. We prize originality and spurn works that are too derivative, however pleasing they might otherwise be to our senses or intellect. Here again, we betray our concern for using art to evaluate the artist. Insofar as art is a perceptual experience, it shouldn’t matter whether the artist copied another artist in producing the work, but it makes a world of difference in gauging the artist’s skill, effort, and creativity. “We find attractive,” says Miller, “those things that could have been produced only by people with attractive, high-fitness qualities such as health, energy, endurance, hand–eye coordination, fine motor control, intelligence, creativity, access to rare materials, the ability to learn difficult skills, and lots of free time.”
Artists, in turn, often respond to this incentive by using techniques that are more difficult or demanding, but which don’t improve the intrinsic properties of the final product. “From an evolutionary point of view,” writes Miller, “the fundamental challenge facing artists is to demonstrate their fitness by making something that lower-fitness competitors could not make, thus proving themselves more socially and sexually attractive.” Artists routinely sacrifice expressive power and manufacturing precision in order to make something more “impressive” as a fitness display.
Charity
The economist James Andreoni proposed a different model for why we donate to charity. Instead of acting strictly to improve the well-being of others, Andreoni theorized, we do charity in part because of a selfish psychological motive: it makes us happy. Part of the reason we give to homeless people on the street, for example, is that the act of donating makes us feel good, regardless of the results. Andreoni calls this the “warm glow” theory. It helps explain why so few of us behave like effective altruists. Consider these two strategies for giving to charity: (1) setting up an automatic monthly payment to the Against Malaria Foundation, or (2) giving a small amount to every panhandler, collection plate, and Girl Scout. Making automatic payments to a single charity may be more efficient at improving the lives of others, but the other strategy—giving more widely, opportunistically, and in smaller amounts—is more efficient at generating those warm fuzzy feelings. When we “diversify” our donations, we get more opportunities to feel good.
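A toy calculation shows why diversifying can win. Assume, purely for illustration, that the warm glow from a single gift grows with diminishing returns, say as the square root of its size; nothing in the book commits to this particular function:

```python
# Toy "warm glow" arithmetic with diminishing returns per gift. The
# square-root glow function is our illustrative assumption, not the book's.
import math

def total_glow(budget, num_gifts):
    per_gift = budget / num_gifts
    return num_gifts * math.sqrt(per_gift)

print(total_glow(100, 1))   # one $100 donation  -> 10.0 units of glow
print(total_glow(100, 10))  # ten $10 donations  -> ~31.6 units of glow
# The same budget, spread across many small gifts, buys far more
# feel-good moments, even if it does less good per dollar.
```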
Five factors that influence our charitable behavior:
1: Visibility. We give more when we’re being watched.
2: Peer pressure. Our giving responds strongly to social influences.
3: Proximity. We prefer to help people locally rather than globally.
4: Relatability. We give more when the people we help are identifiable (via faces and/or stories) and give less in response to numbers and facts.
5: Mating motive. We’re more generous when primed with a mating motive.
The conclusion is pretty clear. We may get psychological rewards for anonymous donations, but for most people, the “warm fuzzies” just aren’t enough. We also want to be seen as charitable. Griskevicius calls this phenomenon “blatant benevolence.” Patrick West calls it “conspicuous compassion.” The idea is that we’re motivated to appear generous, not simply to be generous, because we get social rewards only for what others notice. In other words, charity is an advertisement, a way of showing off.
Why does charity make us attractive to mates, teammates, and social gatekeepers? In other words, which qualities are we demonstrating when we donate, volunteer, or otherwise act selflessly? Here again there are a few different answers.

The most obvious thing we advertise is wealth, or in the case of volunteer work, spare time. In effect, charitable behavior “says” to our audiences, “I have more resources than I need to survive; I can give them away without worry. Thus I am a hearty, productive human specimen.” This is the same logic that underlies our tendency toward conspicuous consumption, conspicuous athleticism, and other fitness displays. All else being equal, we prefer our associates—whether friends, lovers, or leaders—to be well off. Not only does some of their status “rub off” on us, but it means they have more resources and energy to focus on our mutual interests. Those who are struggling to survive don’t make ideal allies.

Charity also helps us advertise our prosocial orientation, that is, the degree to which we’re aligned with others. (We might also call it “good-neighborliness.”) Contrast charity with conspicuous consumption, for example. Both are great ways to show off surplus wealth, but consumption is largely selfish, whereas charity is the opposite. When we donate to a good cause, it “says” to our associates, “Look, I’m willing to spend my resources for the benefit of others. I’m playing a positive-sum, cooperative game with society.” This helps explain why generosity is so important for those who aspire to leadership. No one wants leaders who play zero-sum, competitive games with the rest of society.
There’s one final quality that charity allows us to advertise: the spontaneous, almost involuntary concern for the welfare of others. Variations on this trait go by various names—empathy, sympathy, pity, compassion. When we notice someone suffering and immediately decide to help them, it “says” to our associates, “See how easily I’m moved to help others? When people near me are suffering, I can’t help wanting to make their situation better; it’s just who I am.” This is a profoundly useful trait to advertise; it means you’ll make a great ally. The more time other people spend around you, the more they’ll get to partake of your spontaneous good will.
Education
In the signaling model, each student has a hidden quality—future work productivity—that prospective employers are eager to know. But this quality isn’t something that can be observed easily over a short period, for example, by giving job applicants a simple test. So instead, employers use school performance as a proxy. This works because students who do better in school, over the long run, tend to have greater work potential. It’s not a perfect correlation, of course, and there are many exceptions, but by and large, school performance predicts future work performance (and therefore earnings). In other words, educated workers are generally better workers, but not necessarily because school made them better. Instead, a lot of the value of education lies in giving students a chance to advertise the attractive qualities they already have.
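A minimal simulation makes the proxy logic concrete. All the numbers here are our own toy assumptions, not the book’s data; the point is only that grades can carry real information about productivity even if school itself adds nothing:

```python
# Toy simulation of the signaling model: productivity is innate, grades
# are a noisy readout of it, and school adds nothing. All numbers are
# our illustrative assumptions, not the book's data.
import random

random.seed(0)
N = 10_000
productivity = [random.gauss(100, 15) for _ in range(N)]
grades = [p + random.gauss(0, 10) for p in productivity]  # noisy proxy

# Employers hire the top 20% of students, ranked purely by grades.
ranked = sorted(range(N), key=lambda i: grades[i], reverse=True)
hired = ranked[: N // 5]

print(sum(productivity) / N)                             # ~100: everyone
print(sum(productivity[i] for i in hired) / len(hired))  # well above 100
# Hiring by grades selects noticeably better workers even though school
# "improved" no one: certification alone carries real information.
```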
The traditional view of education is that it raises a student’s value via improvement—by taking in rough, raw material and making it more attractive by reshaping and polishing it. The signaling model says that education raises a student’s value via certification—by taking an unknown specimen, subjecting it to tests and measurements, and then issuing a grade that makes its value clear to buyers. Of course, these two processes aren’t mutually exclusive. Labor economists tend to downplay the signaling model, but it is well known as an explanation and popular among sociologists of education. No one claims that signaling explains the entire value of education. Some learning and improvement certainly does take place in the classroom, and some of it is critical to employers. This is especially true for technical and professional fields like engineering, medicine, and law. But even in those fields signaling is important, and in many other fields it may completely eclipse the learning function.
So it’s a mixed bag. Schools help prepare us for the modern workplace and perhaps for society at large. But in order to do that, they have to break our forager spirits and train us to submit to our place in a modern hierarchy. And while there are many social and economic benefits to this enterprise, one of the first casualties is learning.
Medicine
We’ve now arrived at the unpalatable conclusion that people in the United States currently consume too much medicine. We could probably cut back our medical consumption by a third without suffering a large adverse effect on our health. This conclusion is more or less a consensus among health policy experts, but it isn’t nearly as well-known or well-received by the general public. Many people find the conclusion hard to reconcile with the extraordinary health gains we have achieved over the past century or two. Relative to our great-great-grandparents, today we live longer, healthier lives—and most of those gains are due to medicine, right? Actually, no. Most scholars don’t see medicine as responsible for most improvements in health and longevity in developed countries. Yes, vaccines, penicillin, anesthesia, antiseptic techniques, and emergency medicine are all great, but their overall impact is actually quite modest. Other factors often cited as plausibly more important include better nutrition, improvements in public sanitation, and safer and easier jobs. Since 1600, for example, people have gotten a lot taller, owing mainly to better nutrition. More to the point, however, the big historical improvements in medical technology don’t tell us much about the value of the marginal medicine we consume in developed countries. Remember, we’re not asking whether some medicine is better than no medicine, but whether spending $7,000 in a year is better for our health than spending $5,000. It’s perfectly consistent to believe that modern medicine performs miracles and that we frequently overtreat ourselves.
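The distinction between total and marginal value is easy to see with a toy diminishing-returns calculation. In this Python sketch, the concave benefit function and every dollar figure are invented purely for illustration: the first dollars of medicine buy a lot of health, while the step from $5,000 to $7,000 buys very little.

```python
import math

# Toy diminishing-returns model: health benefit as a concave function of
# annual medical spending. The functional form and constants are invented
# purely to illustrate the marginal-vs-total distinction.
def health_gain(spending):
    """Benefit (arbitrary units) from `spending` dollars of medicine."""
    return 10 * math.log1p(spending / 500)

total_at_5k = health_gain(5_000)
total_at_7k = health_gain(7_000)

print(f"benefit of the first $5,000: {total_at_5k:5.1f} units")
print(f"benefit of the next  $2,000: {total_at_7k - total_at_5k:5.1f} units")
```

Under any such concave curve, "medicine performs miracles" (the first units) and "the marginal unit is nearly worthless" (the last units) are both true at once, which is exactly the consistency claimed above.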
We are less interested in “health at any cost,” and more interested in treatments that third parties will appreciate. Like King Charles II, we want the very best medicine for ourselves (especially when others can see that it’s the best). Like the woman bringing food to a sick friend, we want to help people in need (and maximize the credit we get for it). And because there are two reasons to consume and provide medicine—health and conspicuous care—we end up overtreated.
Religion
As we’ve seen throughout the book, beliefs aren’t always in the driver’s seat. Instead, they’re often better modeled as symptoms of the underlying incentives, which are frequently social rather than psychological. This is the religious elephant in the brain: We don’t worship simply because we believe. Instead, we worship (and believe) because it helps us as social creatures. We’ll examine various features of religion, including (but by no means limited to) supernatural beliefs. We’ll be approaching them as social technologies designed to discourage cheating and facilitate cooperation within a community. It’s in light of these goals that the stranger facets of religion begin to make sense.
As we’ve seen many times, sacrifice is socially attractive. Who makes a better ally: someone who’s only looking out for number one, or someone who shows loyalty, a willingness to sacrifice for others’ benefit? Clearly it’s the latter. And the greater the sacrifice, the more trust it engenders. Friends and family make sacrifices for each other all the time. But we can’t make sacrifices for every stranger we meet in an ephemeral, one-off interaction. The solution religions have hit upon is for members to make ritual sacrifices in the name of the group. In nominal terms, many sacrifices are made to a god, but following Durkheim, we should note that God often functions as a symbol for society. So whenever people make a sacrifice to your god, they’re implicitly showing loyalty to you—and to everyone else who worships at the same altar. Crucially, rituals of sacrifice are honest signals whose cost makes them hard to fake. It’s easy to say, “I’m a Muslim,” but to get full credit, you also have to act like a Muslim—by answering the daily calls to prayer, for example, or undertaking the Hajj. Actions speak louder than words, and expensive actions speak the loudest. Personal sacrifices, then, are a way of “paying one’s dues” to a social group. Some groups require a large upfront payment in the form of an initiation ritual, like a fraternity hazing or military boot camp. By setting up barriers to entry and forcing initiates to pay a high cost, groups ensure that only the most devoted and committed are admitted as members. Regular religious rituals work the same way, but rather than (or in addition to) one large upfront cost, they impose smaller ongoing costs—a way of paying dues on a weekly or yearly basis.
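The logic of costly, hard-to-fake signals can be captured in a few lines. This Python sketch uses invented payoff numbers: a ritual "separates" the devoted from free-riders precisely when its cost falls between what membership is worth to a faker and what it is worth to a true member.

```python
# Toy costly-signaling check: a ritual separates devoted members from
# fakers when its cost exceeds what membership is worth to a faker but
# not what it's worth to a devotee. All payoff numbers are invented.

def pays_to_join(membership_value, ritual_cost):
    """Does paying the ritual cost leave this person better off?"""
    return membership_value - ritual_cost > 0

devotee_value = 100  # long-run benefit of community membership to a devotee
faker_value = 20     # short-run benefit a free-rider could extract

for cost in (10, 50, 120):
    d = pays_to_join(devotee_value, cost)
    f = pays_to_join(faker_value, cost)
    kind = ("separating: only devotees join" if d and not f else
            "pooling: both join" if d and f else
            "too costly: nobody joins")
    print(f"ritual cost {cost:3d} -> {kind}")
```

Too cheap a ritual lets everyone in and carries no information; too expensive a ritual deters even the devoted; the useful range in between is what initiation rites and ongoing observances appear to target.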
But note, crucially, that sacrifice isn’t a zero-sum game; there are big benefits that accrue to the entire community. All these sacrifices work to maintain high levels of commitment and trust among community members, which ultimately reduces the need to monitor everyone’s behavior. The net result is the ability to sustain cooperative groups at larger scales and over longer periods of time. Today, we facilitate trust between strangers using contracts, credit scores, and letters of reference. But before these institutions were invented, weekly worship and other costly sacrifices were a vital social technology. In A.D. 1000, church attendance was a pretty good (though imperfect) way to gauge whether someone was trustworthy. You’d be understandably wary of a neighbor who didn’t come to church, for example, because he wasn’t “paying his dues” to the community. Society can’t trust you unless you put some skin in the game.
Sermons generate common knowledge of the community’s norms. And everyone who attends the sermon is tacitly agreeing to be held to those standards in their future behavior. If an individual congregant later fails to show compassion, ignorance won’t be an excuse, and everyone else will hold that person accountable. This mutual accountability is what keeps religious communities so cohesive and cooperative.
As we’ve pointed out in previous chapters, the value of holding certain beliefs comes not from acting on them, but from convincing others that you believe them. This is especially true of religious beliefs. They aren’t particularly useful or practical for individuals in isolation, and yet we reap large social rewards for adopting them, and suffer punishment for rejecting them. This is what it means for a belief to be an orthodoxy. Whether you accept it can make the difference between the warm embrace of fellowship and the cold shoulder of ostracism. Faced with such powerful incentives to believe, is it any wonder our brains fall in line?
But why do communities care what we believe? Why do our peers reward or punish us? Consider the belief in an all-powerful moralizing deity—an authoritarian god, perhaps cast as a stern father, who promises to reward us for good behavior and punish us for bad behavior. An analysis of this kind of belief should proceed in three steps:
1. People who believe they risk punishment for disobeying God are more likely to behave well, relative to nonbelievers.
2. It’s therefore in everyone’s interests to convince others that they believe in God and in the dangers of disobedience.
3. One of the best ways to convince others of one’s belief is to actually believe it.
This is how it ends up being in our best interests to believe in a god that we may not have good evidence for. For similar reasons, it’s also useful to believe that God is always watching—and that He knows everything, even our “private” deeds and innermost thoughts, and will judge us with perfect justice. The more fervently we profess belief in such a god, the more we’ll develop a reputation for doing right at all times, even when other people aren’t watching. This kind of reputation is especially attractive in those we seek as leaders, since they have a lot of room to behave badly behind closed doors. At the margin, these beliefs cause believers to behave more morally than they would otherwise. And from the point of view of a perfectly selfish organism, this extra “good” behavior is an unfortunate cost. The ideal situation would be for the brain to have its cake (convincing others that it fears God’s wrath) and eat it too (going on behaving as if it didn’t fear God at all). But human brains aren’t powerful enough to pull off such perfect hypocrisy, especially when others are constantly probing our beliefs. So the next best thing is often to internalize the belief, while remaining inconsistent enough to occasionally give in to temptation.
The craziness of religious beliefs can function as a barometer for how strong the community is—how tightly it’s able to circle around its sacred center, how strongly it rewards members for showing loyalty by suppressing good taste and common sense. The particular strangeness of Mormon beliefs, for example, testifies to the exceptional strength of the Mormon moral community. To maintain such stigmatizing beliefs in the modern era, in the face of science, the news media, and the Internet, is quite the feat of solidarity. And while many people (perhaps even many of our readers) would enjoy being part of such a community, how many are willing to “pay their dues” by adopting a worldview that conflicts with so many of their other beliefs, and which nonbelievers are apt to ridicule? These high costs are exactly the point. Joining a religious community isn’t like signing up for a website; you can’t just hop in on a lark. You have to get socialized into it, coaxed in through social ties and slowly acculturated to the belief system. And when this process plays out naturally, it won’t even feel like a painful sacrifice because you’ll be getting more out of it than you give up.
Politics
The fact that we attach strong emotions to our political beliefs is another clue that we’re being less than fully honest intellectually. When we take a pragmatic, outcome-oriented stance toward a given domain, we tend to react more dispassionately to new information. We do this every day in most areas of our lives, as when we buy groceries, pack for a vacation, or plan a birthday party. In these practical domains, we feel much less pride in what we believe, anger when our beliefs are challenged, or shame in changing our minds in response to new information. When our beliefs serve non-pragmatic functions, however, emotions become useful for protecting those beliefs from criticism.
Our hypothesis is that the political behavior of ordinary, individual citizens is often better explained as an attempt to signal loyalty to “our side” (whatever side that happens to be in a particular situation), rather than as a good-faith attempt to improve outcomes.
The desire to signal loyalty helps explain why we don’t always vote our self-interest (i.e., for the candidates and policies that would bring us, as individuals, the greatest benefit). Rather, we tend to vote for our groups’ interests. Naturally, on many issues, our group and self-interests align. But when they don’t, we often choose to side with our groups.
Conclusion: Putting the Elephant to Use
Better Situational Awareness: The first benefit is situational awareness—a better, deeper understanding of the human social world.
Physician, Heal Thyself: If we’re going to second-guess our coworkers and friends, we shouldn’t give ourselves an easy pass. In fact, knowing about our own blind spots should make us even more careful when pointing fingers at others. After all, many of our perceptions are colored by self-interest, including our perceptions of what other people are up to. So let’s set aside the speck in their eyes, and attend to the log in our own. The next time you butt heads with a coworker or fight with your spouse, keep in mind that both sides are self-deceived, at least a little bit. What feels, to each of you, overwhelmingly “right” and undeniably “true” is often suspiciously self-serving, and if nothing else, it can be useful to take a step back and reflect on your brain’s willingness to distort things for your benefit. There’s common ground in almost every conflict, though it may take a little digging to unearth it beneath all the bullshit. Above all, what the elephant teaches us is humility. It’s a call for more thoughtful interactions with our fellow self-deceivers, a spur to step outside our own conniving minds. There’s a second side to every story, if only we can quiet our egos enough to hear it.
Showing Off: While it may not suit everyone, an ability to talk candidly about common human motives can signal some attractive qualities. People who are able to acknowledge uncomfortable truths and discuss them dispassionately can show a combination of honesty, intellectual ability, and perhaps even courage (or at least a thick skin). And those who can do so tactfully, without seeming to brag, accuse, or complain, may seem especially impressive.
Choosing to Behave Better: Another benefit to confronting our hidden motives is that, if we choose, we can take steps to mitigate or counteract them. For example, if we notice that our charitable giving is motivated by the desire to look good and that this leads us to donate to less-helpful (but more-visible) causes, we can deliberately decide to subvert our now-not-so-hidden agenda.
Enlightened Self-Interest: This is the notion that we can do well for ourselves by doing good for others. It’s the philosophy described by Alexis de Tocqueville, preached by Adam Smith, and practiced by Benjamin Franklin. In the biological literature, it’s known as “indirect reciprocity” or “competitive altruism.” Remember the Arabian babblers we met in Chapter 1? Each bird works its tail feathers off to provide food and protection for the group, not from the goodness of its heart but largely out of self-interest. And so too in our species. In light of this, we absolutely need ideals—not just as personal goals to strive for, but also as yardsticks by which to judge others and to let ourselves be judged in return. There’s real value to be had in promising to behave well (and in staking our reputation on that promise), in large part because it makes us more attractive as an ally. Such a pledge can’t guarantee our good behavior, of course. We may still cut corners here and there, or cheat when no one’s looking. But it nevertheless incentivizes us to behave better than if we refused to be held to any standard. And yes, if we profess high ideals but then fail to live up to them, that may make us hypocrites. But the alternative—having no ideals—seems worse. “Hypocrisy,” writes La Rochefoucauld, “is the tribute that vice pays to virtue.” In other words, it’s taxing to be a hypocrite, but that very tax is a key disincentive to bad behavior.
Designing Institutions: A common problem plagues people who try to design institutions without accounting for hidden motives. First they identify the key goals that the institution “should” achieve. Then they search for a design that best achieves these goals, given all the constraints that the institution must deal with. This task can be challenging enough, but even when the designers apparently succeed, they’re frequently puzzled and frustrated when others show little interest in adopting their solution. Often this is because they mistook professed motives for real motives, and thus solved the wrong problems. Savvy institution designers must therefore identify both the surface goals to which people give lip service and the hidden goals that people are also trying to achieve. They can then search for arrangements that actually achieve the deeper goals while also serving the surface goals—or at least giving the appearance of doing so. Unsurprisingly, this is a much harder design problem. But if we can learn to do it well, our solutions will less often meet with puzzling indifference.
We should take a similar approach when reforming a preexisting institution, by first asking ourselves, “What are this institution’s hidden functions, and how important are they?” Take education, for example. We may wish for schools that focus more on teaching than on testing. And yet, some amount of testing is vital to the economy, since employers need to know which workers to hire. So if we tried to cut too much from school’s testing function, we could be blindsided by resistance we don’t understand—because those who resist may not tell us the real reasons for their opposition. It’s only by understanding where the resistance is coming from that we have any hope of overcoming it.
Not all hidden institutional functions are worth facilitating, however. Some involve quite wasteful signaling expenditures, and we might be better off if these institutions performed only their official, stated functions. Take medicine, for example. To the extent that we use medical spending to show how much we care (and are cared for), there are very few positive externalities. The caring function is mostly competitive and zero-sum, and—perhaps surprisingly—we could therefore improve collective welfare by taxing extraneous medical spending, or at least refusing to subsidize it. Don’t expect any politician to start pushing for healthcare taxes or cutbacks, of course, because for lawmakers, as for laypeople, the caring signals are what make medicine so attractive. These kinds of hidden incentives, alongside traditional vested interests, are what often make large institutions so hard to reform. Thus there’s an element of hubris in any reform effort, but at least by taking accurate stock of an institution’s purposes, both overt and covert, we can hope to avoid common mistakes. “The curious task of economics,” wrote Friedrich Hayek, “is to demonstrate to men how little they really know about what they imagine they can design.”