The Origin of Wealth: Evolution, Complexity, And the Radical Remaking of Economics - by Eric D. Beinhocker


  • How can something as complex and highly structured as the economy be created and work in a self-organized and bottom-up way?
  • Why has the complexity and diversity of the economy grown over time? And, why does there appear to be a correlation between the complexity of an economy and its wealth?
  • Why has the growth in wealth and complexity been sudden and explosive rather than smooth?

Any theory that seeks to explain what wealth is and how it is created must answer these questions. Modern science provides just such a theory. This book will argue that wealth creation is the product of a simple, but profoundly powerful, three-step formula—differentiate, select, and amplify—the formula of evolution. The same process that has driven the growing order and complexity of the biosphere has driven the growing order and complexity of the "econosphere."

We are accustomed to thinking of evolution in a biological context, but modern evolutionary theory views evolution as something much more general. Evolution is an algorithm; it is an all-purpose formula for innovation, a formula that, through its special brand of trial and error, creates new designs and solves difficult problems. Evolution can perform its tricks not just in the "substrate" of DNA, but in any system that has the right information-processing and information-storage characteristics. In short, evolution's simple recipe of "differentiate, select, and amplify" is a type of computer program—a program for creating novelty, knowledge, and growth. Because evolution is a form of information processing, it can do its order-creating work in realms ranging from computer software to the mind, to human culture, and to the economy.

From a scientific standpoint, the distinction between a metaphorical versus a literal understanding of the global economy as an evolutionary system is critical. Saying that economic systems are like biological systems does not tell us much that is scientifically useful. But saying that both economic and biological systems are subclasses of a more general and universal class of evolutionary systems tells us a lot. This is because researchers believe that there are general laws of evolutionary systems. Scientists consider certain features of nature universal. For example, gravity works the same way on the earth as it does in the farthest reaches of the universe, and it works the same way on atoms, apples, and galaxies. Modern evolutionary theorists believe that, like gravity, evolution is a universal phenomenon, meaning that no matter whether the algorithm is running in the substrate of biological DNA, a computer program, the economy, or in the substrate of an alien biology on a distant planet, evolution will follow certain general laws in its behavior.

If the economy is truly an evolutionary system, and there are general laws of evolutionary systems, then it follows that there are general laws of economics—a controversial notion for many.

Just what kind of an algorithm is evolution? What does it do? The evolutionary philosopher Daniel Dennett calls evolution a general-purpose algorithm for creating "design without a designer."

Evolution creates designs, or more appropriately, discovers designs, through a process of trial and error. A variety of candidate designs are created and tried out in the environment; designs that are successful are retained, replicated, and built upon, while those that are unsuccessful are discarded. Through repetition, the process creates designs that are fit for their particular purpose and environment. If the conditions are right, competition between designs for finite resources drives the emergence of greater structure and complexity over time, as evolution builds on the successes of the past to create novel designs for the future. Then as the world changes, so too do the designs that evolution creates, often in brilliant and sometimes surprising ways. Evolution is a method for searching enormous, almost infinitely large spaces of possible designs for the almost infinitesimally small fraction of designs that are "fit" according to their particular purpose and environment.
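The differentiate-select-amplify recipe can be sketched as a short program. This is a minimal illustration, not a model from the book; the bitstring "designs," the count-the-ones fitness function, and all parameter values are arbitrary assumptions chosen for clarity:

```python
import random

def evolve(fitness, length=20, pop_size=30, generations=200):
    """Search a space of bitstring 'designs' by differentiate, select, amplify."""
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Select: keep the fitter half of the candidate designs.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Amplify + differentiate: replicate survivors with occasional mutation.
        children = [[bit ^ (random.random() < 0.02) for bit in parent]
                    for parent in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: designs "fit" for this environment have many 1-bits.
best = evolve(fitness=sum)
print(sum(best))  # typically reaches the maximum of 20
```

Even this crude trial-and-error loop reliably finds near-optimal designs while examining only a vanishing fraction of the 2^20 possibilities, which is the point of the passage above.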

Economic evolution is not a single process, but rather the result of three interlinked processes. The first is the evolution of technology, a critical factor in economic growth throughout history. The evolutionary economist Richard Nelson of Columbia University has pointed out that there are in fact two types of technology that play a major role in economic growth. The first is Physical Technology; this is what we are accustomed to thinking of as technology: things such as bronze-making techniques, steam engines, and microchips. Social Technologies, on the other hand, are ways of organizing people to do things. Examples include settled agriculture, the rule of law, money, joint stock companies, and venture capital.

In order for technologies to have an impact on the world, someone, or some group of people, needs to turn the Physical and Social Technologies from concepts into reality. In the economic realm, that role is played by business. Businesses fuse Physical and Social Technologies together and express them into the environment in the form of products and services.

Businesses are themselves a form of design. The design of a business encompasses its strategy, organizational structure, management processes, culture, and a host of other factors. One of the major themes of this book is that it is the three-way coevolution of Physical Technologies, Social Technologies, and business designs that accounts for the patterns of change and growth we see in the economy.

Since the late nineteenth century, the organizing paradigm of economics has been the idea that the economy is an equilibrium system, essentially a system at rest. As we will see, the primary inspiration for economists from the late nineteenth through the mid-twentieth centuries was not biology, but physics, in particular the physics of motion and energy. The mainstream paradigm of economics over the past hundred years has portrayed the economy as a system that moves from equilibrium point to equilibrium point over time, propelled along by shocks from technology, politics, changes in consumer tastes, and other external factors.

In the 1980s and early 1990s, researchers began to experiment with models of economic phenomena that were radically different from traditional models. Rather than portraying the economy as a static equilibrium system, these models presented the economy as a buzzing hive of dynamic activity, with no equilibrium in sight. Just as the pattern of a whirlpool arises from interacting water molecules, these models showed complex patterns of boom and bust and waves of innovation emerging from the interactions of simulated agents, just as they do in the real economy. I will refer to this body of work as Complexity Economics. If the economy is indeed a complex adaptive system, then this has four important implications.

  1. First, it means that for the past century, economists have fundamentally misclassified the economy and that the mainstream economic theory reflected in textbooks, management thinking, and government policies today is either wrong or, at best, only approximately right.

  2. Second, viewing the economy as a complex adaptive system provides us with a new set of tools, techniques, and theories for explaining economic phenomena.

  3. Third, it means that wealth must be a product of evolutionary processes. Just as biological evolution summoned complex organisms and ecosystems out of the primordial soup, economic evolution has taken humankind from a state of nature to the modern global economy, filling the world with order, complexity, and diversity along the way.

  4. Fourth and finally, history shows that each time there has been a major shift in the paradigm of economic theory, the tremors have been felt far beyond the academic world. Adam Smith's ideas had an important influence on the growth of free trade in the nineteenth century; Karl Marx's vision inspired revolutions and the rise of socialism in the early to mid-twentieth century; and the intellectual dominance of Anglo-American Neoclassical economics coincided with the ascendancy of global capitalism in the latter decades of the twentieth century. It will probably be several decades before the full socio-political implications of Complexity Economics become clear.


The Classical period of economics ended with a compelling framework in place for describing how markets balance the needs of consumers with the economics of production, and naturally progress to a point that satisfies both. But an important question remained unanswered: For a given commodity, a given set of utilities, and a given production process, what exactly would the price be? Could we calculate it? Could we predict it?

According to the theories of Walras, Jevons, Pareto, and the other Marginalists, in a market economy the participants freely trade their way to a state where they are as satisfied as possible, given the resources available. Through this trading the economy glides to an equilibrium, a natural resting point, where supply equals demand, where resources are put to their most efficient use, and where the welfare of society is Pareto optimal. As Voltaire's Dr. Pangloss put it, "In this best of possible worlds... all is for the best." What was perhaps most remarkable about the Marginalists' achievement was that economics now had a mathematical theory that showed how, left to its own devices, a free-market economy would reach this Panglossian state with the inevitability of a ball rolling to the bottom of a bowl. Walras declared that his "pure theory of economics is a science which resembles the physico-mathematical sciences in every respect." Jevons believed that he had created a "calculus of moral effects." And Pareto proclaimed, "The theory of economic science thus acquires the rigor of rational mechanics." In their view, the Marginalists had succeeded in their dream of turning economics into a true mathematical science.

The Neoclassicals tended to view innovation as an external, or exogenous, factor: a random variable that affected the economy—like the weather—but was outside the bounds of economic study. Schumpeter, however, believed that innovation had to be viewed as internal, or endogenous, to the economy and central to its understanding. He insisted that for growth to occur, there must be "a source of energy within the economic system which would of itself disrupt any equilibrium that might be attained." For Schumpeter, that source of energy was the figure of the entrepreneur, whom he wrote about in almost heroic terms. According to Schumpeter, technological progress occurred in a random stream of discoveries. The commercialization of new technologies, however, faced numerous barriers, ranging from the need for financing to the intransigence of old habits and mind-sets. Thus, like water behind a dam, the random rain of discoveries built up over time. In Schumpeter's theory, entrepreneurs played the role of dam breakers, unleashing a flood of innovation into the marketplace. In this way, growth comes to the economy not in a steady stream, but as Schumpeter famously put it, in "gales of creative destruction." The origin of wealth, according to Schumpeter, lies in the heroic efforts of individual entrepreneurs.

According to Robert Solow, the key to productivity is technology. Solow's model implied that the United States and other Western countries did not become rich because of a lucky endowment of natural resources or because of capital falling like manna from heaven. Rather, they became rich through a virtuous cycle in which technology improvements led to capital's becoming more productive, which in turn led to more savings, which in turn led to more capital investment. Without technology growth, capital would only grow in proportion to population, and wealth per capita would simply level off. Back in 1956, long before the term became fashionable, Robert Solow had discovered the knowledge economy.

By the end of the twentieth century, Traditional Economics was thoroughly dominated by the Neoclassical paradigm with its foundational notions of rational, optimizing consumers and producers making choices in a world of finite resources, and (with the exception of investments in technology) those choices being bounded by decreasing returns. This combination of self-interest and constraints then drives the economy to the Pareto optimal point of equilibrium. The methodology of economic analysis was also dominated by the use of mathematical proofs that began with a set of assumptions and then built logically up to a set of conclusions. The Neoclassical growth theory pioneered by Solow claimed to answer the great question of wealth creation, while the Neoclassical general equilibrium theory of Arrow and Debreu ostensibly answered the great question of wealth allocation.

The twentieth-century economists had thus realized their ambition to create a set of rigorous, well-defined mathematical models for describing the workings of the economy. Although the dream of completely integrating the micro and macro perspectives under the Neoclassical paradigm had not been fully realized, one could nonetheless travel from the atomistic world of individual decision makers to the sweep of national economies within a logically consistent framework and set of assumptions.


When Walras imported the concept of equilibrium from physics into economics, he gained mathematical precision and scientific predictability. But he paid a high price for that gain—realism. The mathematics of equilibrium required Walras and later economists to make a set of highly restrictive assumptions that have increasingly detached theoretical economics from the real world. Traditional Economics has what computer programmers call a "garbage in, garbage out" problem. If you feed a computer bad inputs, it will with absolute precision and flawless logic grind out bad outputs. Likewise, most Traditional Economic models begin with unrealistic assumptions and then, with mathematical inevitability, work their way to equally unrealistic conclusions.

To many critics, the assumptions of Traditional Economics do not look like a legitimate case of coarse graining. Instead, it appears that beginning with Walras and Jevons, economists began arbitrarily making up assumptions about perfect rationality, godlike auctioneers, and so on, with the sole purpose of making the equilibrium math work.

Of all the assumptions in Traditional Economics, perhaps the strongest and most obviously unrealistic is its model of human behavior. The standard model, often referred to as perfect rationality, is built on two fundamental assumptions. The first is that people pursue their self-interest in economic matters. Economists recognize that in the real world, people occasionally do engage in acts of genuine altruism (though this is notoriously tricky to define), but argue that as a simplification, it is reasonable to assume that people will generally do whatever is in their economic self-interest. The second part of the assumption is that people pursue their self-interest in fantastically complex and calculating ways. Economists regularly assume that we take into account factors such as inflation rates, estimates of future government spending, and the trade deficit in our daily decision making. Economists also assume that we process all this information using equations and calculations that they themselves find difficult to solve.

Furthermore, in order to make human behavior predictable, economists traditionally assume that these superhuman robots live in theoretical worlds that are much simpler than the real world people actually inhabit. Traditional models typically assume that all the information needed to make decisions is completely and instantly available for free. The reality, of course, is that we often have to make decisions with incomplete or ambiguous information, or if we wanted more information, it would cost us time and money to get it. Other typical assumptions about the world we live in include:

  • There are no transaction costs (e.g., no fees, taxes, legal restrictions, or other costs or barriers to buying and selling)
  • All products are pure commodities sold only on price (e.g., no brands or differences in product quality)
  • Companies are always working as efficiently as possible

This combination of assumptions has caused Axel Leijonhufvud, a macroeconomist at the University of California, Los Angeles, to comment that Traditional Economics models "incredibly smart people in unbelievably simple situations," while the real world is more accurately described by "believably simple people coping with incredibly complex situations."

Positive feedback is an accelerating, amplifying, self-reinforcing cycle. Negative feedback is the opposite: a decelerating, dampening, self-regulating cycle. A classic example of negative feedback is a thermostat. If your house gets cold, the thermostat switches on the heat. As the heat rises past a set point, the thermostat switches off, until the house cools back down below the set point. The thermostat dampens the fluctuations of heat in the house, keeping the temperature close to the set point.
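The thermostat's dampening loop can be sketched in a few lines of simulation; the heating and heat-loss rates used here are arbitrary illustrative values, not physical constants:

```python
# Negative feedback: a thermostat holds temperature near a set point.
def simulate(hours=48, set_point=20.0, outside=5.0):
    temp, history = 15.0, []
    for _ in range(hours):
        heater_on = temp < set_point       # the feedback rule: compare to set point
        temp += 2.0 if heater_on else 0.0  # heating raises the temperature (assumed rate)
        temp -= 0.1 * (temp - outside)     # heat leaks out toward the cold outside
        history.append(temp)
    return history

history = simulate()
# After a short transient, the temperature oscillates in a narrow band near 20.
print(min(history[10:]), max(history[10:]))
```

The interesting feature is that nothing in the loop computes the equilibrium directly; the narrow band around the set point emerges from the on/off rule fighting the leak, which is exactly what "self-regulating" means here.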

In chapter 1, I defined a complex adaptive system as a system of interacting agents that adapt to each other and their environment. Complex adaptive systems are a subcategory of open systems. It takes energy to process information, sustain order, and create complex patterns. For example, an ant colony takes in energy and matter through the food and material it brings into the nest; it uses that energy and material to fight entropy as the colony builds its nest and organizes its activities. The presence of free energy is what enables a complex adaptive system such as an ant colony to stay away from equilibrium, create order, and be dynamic over time. If you remove that energy, then entropy takes over and the system decays and eventually reaches a state of stasis or equilibrium. As one of the participants at the SFI meeting, University of Michigan theorist John Holland, once put it, "in fact, if the system ever does reach equilibrium, it isn't just stable. It's dead."

If the economy were a closed equilibrium system, its defining characteristic would be a trend toward less order, complexity, and structure over time, as entropy sends any closed equilibrium system inevitably toward a featureless stasis. Closed equilibrium systems do not spontaneously self-organize; they do not generate patterns, structures, and complexity; and above all, they do not create novelty over time. All the movement, buzz, organization, and activity of the economy outside your window cannot be the product of a closed equilibrium system. The growth of economic activity from the Stone Age until now has been one long story of fighting entropy on a grand scale—something that could only happen in an open disequilibrium system.

The Neoclassical model that lies at the heart of Traditional theory was built on a misused metaphor. Without realizing it and with the best intentions, the late-nineteenth-century economists borrowed from physics a set of ideas that fundamentally misclassified the economy as a closed equilibrium system. This approach set the framework for the Traditional Economics we see today. Unfortunately, this misclassification has acted as a straitjacket, forcing economists to make highly unrealistic assumptions and limiting the field's empirical success.


As discussed, Traditional Economics has tended to focus on how the economic pie is allocated rather than how the pie got here in the first place. Even Traditional growth theory implicitly begins with "assume an economy." The process of economy formation presents us with a first-class scientific puzzle and one of the sharpest distinctions between Traditional Economics and what I will describe as Complexity Economics.

When scientists compare two theories, they do so according to the correspondence principle. According to the correspondence principle, a new theory should reproduce the successes of the old theory, explain the failures of the old theory, and offer new insights that the old theory does not.

Five "Big Ideas" That Distinguish Complexity Economics from Traditional Economics

  • DYNAMICS. Complexity Economics: open, dynamic, nonlinear systems, far from equilibrium. Traditional Economics: closed, static, linear systems in equilibrium.
  • AGENTS. Complexity Economics: modeled individually; use inductive rules of thumb to make decisions; have incomplete information; are subject to errors and biases; learn and adapt over time. Traditional Economics: modeled collectively; use complex deductive calculations to make decisions; have complete information; make no errors and have no biases; have no need for learning or adaptation (are already perfect).
  • NETWORKS. Complexity Economics: explicitly model interactions between individual agents; networks of relationships change over time. Traditional Economics: assume agents only interact indirectly through market mechanisms (e.g., auctions).
  • EMERGENCE. Complexity Economics: no distinction between micro- and macroeconomics; macro patterns are the emergent result of micro-level behaviors and interactions. Traditional Economics: micro- and macroeconomics remain separate disciplines.
  • EVOLUTION. Complexity Economics: the evolutionary process of differentiation, selection, and amplification provides the system with novelty and is responsible for its growth in order and complexity. Traditional Economics: no mechanism for endogenously creating novelty, or growth in order and complexity.


Dynamics: The Delights of Disequilibrium

Systems that are either perfectly deterministic or truly random are quite rare. Most phenomena in the universe are somewhere in the middle; they mix determinism and randomness in complex and unpredictable ways. In the twentieth cen­tury, science came to accept the messy and the indeterminate.

The economy is a dynamic system. All this means is that the economy changes over time. A convenient way to describe a dynamic system is in terms of stocks and flows. A stock is an accumulation of something, such as the balance in a bank account or water in a bathtub. The rate at which a stock changes over time is known as a flow, for example, the rate of money flowing into or out of a bank account, or water flowing into and out of a bathtub. The economy is full of different stocks that change over time, for example, the total supply of money or the number of people employed. Each of these stocks has corresponding flows, or rates of change over time.

When one starts thinking of the economy as a collection of stocks and their related flows, it quickly becomes apparent that the various stocks and flows are connected to each other in complex ways. Such chains of relationships between stocks and flows in a dynamic system are known as feedback loops. It is not difficult to see how dynamic systems can quickly become quite complex if one has multiple stocks and flows interacting via both positive and negative feedback loops. The positive feedbacks drive the system, accelerating it, but at the same time the negative feedbacks are fighting back to dampen and control it. When time delays are thrown in, the driving and damping can get out of balance, and out of synch, causing the system to oscillate in highly elaborate ways.
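The effect of a time delay on a feedback loop can be sketched with one stock and two flows. The inventory setting, the ordering rule, and every number below are hypothetical, chosen only to make the out-of-synch oscillation visible:

```python
from collections import deque

# One stock (inventory), an inflow (orders arriving after a shipping delay),
# and a constant outflow (sales). A corrective negative-feedback rule tries to
# hold the stock at a target, but the delay throws the loop out of synch.
def simulate(steps=60, target=100.0, delay=4):
    inventory = 50.0
    pipeline = deque([0.0] * delay, maxlen=delay)  # orders still in transit
    history = []
    for _ in range(steps):
        arriving = pipeline[0]                          # order placed `delay` steps ago
        ordered = max(0.0, 0.5 * (target - inventory))  # negative feedback toward target
        pipeline.append(ordered)                        # evicts the arriving order
        inventory += arriving - 10.0                    # inflow minus steady sales outflow
        history.append(inventory)
    return history

history = simulate()
print(history[:10])  # the stock swings above and below the target, not straight to it
```

With no delay the same rule would glide smoothly to the target; the four-step lag makes the correction arrive late, so the system overshoots and undershoots, which is the oscillation the paragraph describes.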

Our second observation about the economy is that it is a nonlinear system. Nonlinear dynamic systems are not always intuitive, and as we will see later, this can create problems when people try to make decisions in them. The fact that we can get such widely varying behaviors simply from tweaking one variable demonstrates an important characteristic of nonlinear dynamic systems: sensitivity to initial conditions. A related characteristic is that nonlinear dynamic systems are path dependent, or in other words, history matters.
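Sensitivity to initial conditions is easy to demonstrate with the logistic map, a standard one-variable nonlinear system (a textbook example, not one taken from this book):

```python
# The logistic map x' = r * x * (1 - x): one variable, one nonlinearity.
def trajectory(x, r=4.0, steps=50):
    out = []
    for _ in range(steps):
        x = r * x * (1 - x)
        out.append(x)
    return out

a = trajectory(0.400000)
b = trajectory(0.400001)  # identical system; starting point differs by one millionth
print(abs(a[9] - b[9]))   # the two paths are still close after 10 steps
print(abs(a[49] - b[49])) # by step 50 they have diverged completely
```

The tiny initial gap roughly doubles each step, so two histories that start almost identically end up unrelated; this is why precise long-range prediction in such systems is so hard, and why "history matters."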

The Economy Is Complex but Not Chaotic

Because nonlinear dynamic systems were not well understood until relatively recently, Traditional economists historically made do by either using nonlinear relationships in static models or using linear relationships in dynamic models, where in each case the equations could be solved. In contrast, the Complexity Economics approach recognizes that the economy is both nonlinear and dynamic and as a result uses mathematical tools and computer simulations that have been more recently developed.

Most researchers would say that calling the economy chaotic would be too simple and narrow a categorization. Truly chaotic systems tend to have relatively few variables and few degrees of freedom. It is more appropriate to call the economy complex. As we have noted, the economy has a massive number of stocks and flows dynamically connected in an elaborate web of positive and negative feedback relationships; those feedback relationships have delays and operate at different timescales, and the system is riddled with nonlinearities. Thus it is probably fair to say that at some times, and in some dimensions, the economy may exhibit chaotic behavior, but the economy also exhibits growth, decay, periodic limit cycles, quasi-periodic limit cycles, and a whole panoply of other behaviors.

Agents: Mind Games

It is important to note that the key behavioral assumptions of Traditional Economics were not developed because anyone thought they were a good description of real human behavior; they were adopted to make the math work in the equilibrium framework.

Herbert Simon put forward a competing theory of decision making—a theory he called bounded rationality. Basically, it said that humans are economically self-interested and smart—but not that smart. Simon's theory took into account our lack of perfect information and the large but still finite processing power of our brains. As we mentioned before, Simon claims that instead of following perfect rationality, we satisfice—basically, we take the information we have, and we do the best we can.

The Traditional model assumes that its Spock-like agents are perfect and never make mistakes, or if they do, their mistakes are randomly distributed noise around the correct answer. In other words, in the Traditional model people share no common errors or biases. The behavioral economics view, on the other hand, is that "to err is human." Some of the common errors and biases that researchers have uncovered in normal, nondepressed people include these:

  • FRAMING BIASES. Exactly how an issue is framed can affect how we think about it. Compare, for example, the two questions "Should Britain adopt the euro?" and "Should Britain abolish the pound?" Under perfect rationality, this framing should not matter.

  • REPRESENTATIVENESS. People have a bad habit of drawing big conclusions from very small and biased samples.

  • AVAILABILITY BIASES. People tend to make decisions based on data that is easily available as opposed to finding the data that is really needed to make a good decision. This is, in effect, "looking for your lost keys under the lamppost" because that is where the light is best.

  • DIFFICULTIES JUDGING RISK. Most people have a tough time reasoning with probabilities and assessing risks.

  • SUPERSTITIOUS REASONING. We tend to only look for the most proximate causes of things and often confuse random chance with cause and effect. Examples range from sports stars wearing their "lucky socks" to governments trying to reduce unemployment by simply making it more difficult to fire people.

  • MENTAL ACCOUNTING. Traditional Economics treats all money the same. However, people tend to put money into different mental compartments. For example, many people make a monthly contribution to a retirement plan even if they have outstanding credit-card balances. This is not economically rational, because the return on investment in the retirement account (even after tax savings) will likely be less than the credit-card interest. Nevertheless, people often view their retirement contribution as sacred and wall it off from current spending.
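The arithmetic behind the retirement-versus-credit-card example is simple to make concrete. The rates below are illustrative assumptions, not figures from the book:

```python
# Mental accounting: with an assumed 18% credit-card rate and a 7% retirement
# return, directing $1,000 to the retirement "compartment" instead of the debt
# loses money, yet many people do exactly that.
contribution = 1000.0
card_rate, retirement_rate = 0.18, 0.07

interest_paid = contribution * card_rate          # cost of leaving the debt in place
investment_gain = contribution * retirement_rate  # benefit of the retirement contribution
print(interest_paid > investment_gain)            # True: the sacred compartment costs more
```

Money is fungible, so the economically rational rule compares the two rates directly; mental accounting is precisely the refusal to make that comparison across compartments.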

Inductive Rationality

Cognitive science is the label given to the field that studies the "software" of the human mind (as opposed to the "hardware" of the brain). This field draws on many areas, including neuroscience, psychology, artificial intelligence, linguistics, evolutionary theory, anthropology, and philosophy.

The picture that is emerging from cognitive science research is that the human mind is capable of incredible feats of information processing and learning—just in ways that are very different from the picture portrayed by perfect rationality. For example, humans may not be brilliant at calculating long equations, but they are amazing storytellers and story listeners.

Stories are vital to us because the primary way we process information is through induction. Induction is essentially reasoning by pattern recognition. It is drawing conclusions from a preponderance of evidence. We like stories because they feed our inductive thinking machine, they give us material to find patterns in—stories are a way in which we learn.

Humans particularly excel at two aspects of inductive pattern recognition. The first is relating new experiences to old patterns through metaphor and analogy making. Second, we are not just good pattern recognizers, but also very good pattern-completers. Our minds are experts at filling in the gaps of missing information. The ability to complete patterns and draw conclusions from highly incomplete information enables us to make quick decisions in fast-moving and ambiguous environments.

Pattern recognition and storytelling are so integral to our cognition that we will even find patterns and construct narratives out of perfectly random data. In essence, people just make up stories to explain what they think is a pattern.

The opposite of induction is deduction. Deduction is a process of reasoning in which the conclusions must logically follow from a set of premises, for example, "Socrates is a man, and all men are mortal. Therefore, Socrates is mortal." Humans use deduction as well as induction, but we are not as good at deduction. Interestingly, while humans are relatively good at induction and relatively poor at deduction, computers are the opposite.

Deduction only works on very well-defined problems such as chess moves; for deduction to work, the problem cannot have any information missing or ambiguity. Deduction is thus a powerful method of reasoning, but inherently brittle. While induction is more error prone, it is also more flexible and better suited for the incomplete and ambiguous information that the world throws at us. It thus makes evolutionary sense that we would be built this way.

The modern cognitive science view of human behavior is thus almost the exact inverse of the Traditional Economic view of perfect rationality. Perfect rationality assumes we are 100 percent deductive and, like Deep Blue, always work on unambiguous, well-defined problems. It also assumes that we don't learn; after all, there is no need to learn if we are already perfect. The cognitive science view also helps provide an explanation for the anomalous results that experimental economists have come up with. The framing problems, availability biases, anchoring, and other effects fit with the picture of humans as fast pattern-recognizers and pattern-completers. Sometimes in our inductive haste, we make mistakes and miss logical connections. Evolution has made us fast, flexible, and usually right, as compared with being slow, brittle, but perfectly logical.

The Mind of an Agent

John Holland, a computer scientist at the University of Michigan, Keith Holyoak, a psychologist at UCLA, Richard Nisbett, a psychologist at Michigan, and Paul Thagard of Princeton's Cognitive Science Laboratory have devised a general model of induction that is an example of what a new Homo economicus might look like. The basic structure of Holland and company's model is as follows:

  • AGENT. There is an agent interacting with other agents and its environment.

  • GOALS. The agent has some goal or goals it is trying to achieve, and thus the agent can perceive gaps between its current state and its desired state, for example, "I'm hungry" or "I'm in danger." The agent's job is to make decisions that bring it closer to its goals.

  • RULES OF THUMB. The agent has rules of thumb that map the current state of the world to actions. These are called condition-action rules, better known as IF-THEN rules. An agent's collection of rules of thumb at any point in time is referred to as the agent's mental model.

  • FEEDBACK AND LEARNING. The agent's mental model keeps track of which rules have helped it achieve its goals and which rules have moved the agent farther from its goals. Historically successful rules are used more often than unsuccessful rules. Feedback from the environment thus causes the agent to learn over time.
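A minimal sketch of this agent architecture in code, with hypothetical goals and rules (the states, actions, and reward values are invented for illustration, not taken from Holland and company's model):

```python
import random

random.seed(0)  # make the illustrative run deterministic

class Agent:
    def __init__(self, rules):
        # Each rule pairs an IF-condition with a THEN-action, plus a
        # strength score updated by feedback.
        self.rules = [{"if": cond, "then": act, "strength": 1.0}
                      for cond, act in rules]

    def act(self, state):
        # Pattern-match: collect the rules whose condition fits the state.
        matches = [r for r in self.rules if r["if"](state)]
        if not matches:
            return None, None
        # Historically successful rules fire more often: choose with
        # probability proportional to strength.
        weights = [r["strength"] for r in matches]
        rule = random.choices(matches, weights=weights)[0]
        return rule, rule["then"]

    def learn(self, rule, reward):
        # Feedback: strengthen rules that closed the gap to a goal,
        # weaken rules that widened it (floored so no rule dies outright).
        rule["strength"] = max(0.1, rule["strength"] + reward)

agent = Agent([
    (lambda s: s["hungry"], "seek food"),
    (lambda s: s["hungry"], "keep working"),
    (lambda s: s["danger"], "flee"),
])

# Repeated feedback teaches the agent which rule serves the "I'm hungry" goal.
for _ in range(100):
    rule, action = agent.act({"hungry": True, "danger": False})
    agent.learn(rule, +0.5 if action == "seek food" else -0.5)
```

After a hundred rounds of feedback, the "seek food" rule dominates the agent's mental model whenever the hungry condition holds, while the unrewarded rule survives only at its floor strength.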

Networks: Oh What a Tangled Web We Weave

Research has shown that networks have a number of very general properties that apply whether one is talking about a network of particle interactions, a web of neurons in the brain, or people in an organization.

In random networks, the phase transition from small clusters to giant clusters happens at a specific point, when the ratio of edges to nodes exceeds the value of 1 (i.e., on average, one edge for every node). One can think of the ratio of one edge to one node as the "tipping point" where a random network suddenly goes from being sparsely connected to densely connected.
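This phase transition is easy to see in a toy simulation: scatter random edges among a set of nodes and measure the largest connected cluster (a sketch using union-find; the node count and ratios are arbitrary choices, and the exact location of the transition depends on how edges and nodes are counted):

```python
import random
from collections import Counter

def largest_cluster(n, num_edges, seed=0):
    """Add num_edges random edges among n nodes; return the biggest cluster."""
    rng = random.Random(seed)
    parent = list(range(n))
    def find(x):                      # union-find with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for _ in range(num_edges):
        a, b = rng.randrange(n), rng.randrange(n)
        parent[find(a)] = find(b)     # merge the two clusters
    return max(Counter(find(i) for i in range(n)).values())

n = 10_000
for ratio in (0.2, 1.0, 3.0):
    frac = largest_cluster(n, int(ratio * n)) / n
    print(f"edges/nodes = {ratio}: largest cluster holds {frac:.1%} of nodes")
```

Well below the tipping point the largest cluster is a negligible sliver of the network; well above it, a single giant cluster swallows most of the nodes.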

The Value of Random Friends

Social networks are like lattice graphs because they have order and structure in them. Your collection of friends most likely includes people you grew up with, people you went to school with, colleagues from work, people in your profession, and your current neighbors. Because we tend to draw friends from such social pools, your friends have a greater-than-average likelihood of knowing each other. All social networks have very identifiable clusters, or cliques.

Although our social networks are structured, we all have a few random friends as well, people who are not in our normal social circle or are just outside it, but whom we have somehow met and become friendly with. These people who don't fit in our normal cluster are bridges out of our social networks and connect us to other social networks. It turns out that if you take a nicely structured lattice graph and then throw in a few random connections scattered around it, you get the best of both worlds. You get identifiable clusters, but with short degrees of separation: for instance, "most of my friends might be dentists in St. Louis, but I know this guy at my gym who used to work in Hollywood..." and pretty soon you are talking to Madonna. Thus, our random friends are like the express flight from Charlotte to San Diego.
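This is the small-world effect described by Watts and Strogatz, and a short simulation shows it: a ring lattice where everyone knows only their four nearest neighbors has long chains of hops between strangers, while a handful of random shortcut friendships collapses the average separation (the node count and shortcut numbers are illustrative):

```python
import random
from collections import deque

def avg_separation(n, shortcuts, seed=0):
    rng = random.Random(seed)
    # A ring lattice: each node knows its four nearest neighbors.
    nbrs = {i: {(i - 2) % n, (i - 1) % n, (i + 1) % n, (i + 2) % n}
            for i in range(n)}
    # Add a few "random friends": shortcut edges between arbitrary nodes.
    for _ in range(shortcuts):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            nbrs[a].add(b)
            nbrs[b].add(a)
    # Breadth-first search from node 0 gives the degrees of separation.
    dist = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in nbrs[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / (n - 1)

print(avg_separation(1000, 0))    # pure lattice: long chains of hops
print(avg_separation(1000, 20))   # 20 random friendships: far shorter
```

The clusters are untouched, yet twenty shortcuts among a thousand people cut the average separation by well over half, which is exactly the "express flight" role random friends play.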

The structure of social networks is not only important for us as individuals, but also makes a big difference in the functioning of large organizations. If an organization keeps people in strict career ladders and has silo-like business units and divisions, then the social network will be overly structured, with insufficient randomness. This, in turn, means long chains of hops for information to be transmitted around, resulting in poor communications and slow decision making. In contrast, some organizations quite deliberately move people across functions and businesses in their careers, thus creating within the company social networks that have a greater diversity of connections. Too much churning of people can turn a social network into a random mush, but a moderate amount can dramatically improve its functioning.

"The Network Is The Computer"

Boolean networks range from transistors on computer chips to chemical reaction networks (researchers have even been able to build "chemical computers"). Even though our brains don't work using the same 0 and 1 logic of computers, individual neurons can be mathematically portrayed that way, and thus most researchers believe that the brain, a mass of neurons, is a form of Boolean network (albeit a staggeringly complex one). If we think of the economy as a massive network of brains, then in effect the economy is a Boolean network, too (an even more staggeringly complex one). Admittedly, this is a bit of a conceptual jump, but we can speculate about what this might mean for the economy if it is true.

After more than thirty years of research on Boolean networks, their properties are fairly well understood. While Boolean networks can do amazing things such as form the World Wide Web, build your body, and create your mind, they are in fact simple creatures at heart. Basically, three variables guide the behavior of such networks. The first is the number of nodes in the network. The second is a measure of how much everything is connected to everything else. And the third is a measure of "bias" in the rules guiding the behavior of the nodes.

The first important fact about Boolean networks is that the number of states a network can be in scales exponentially with the number of nodes. The exponential growth in possible states creates a very powerful kind of economy of scale in any network of information-processing entities. As the size of a Boolean network grows, the potential for novelty increases exponentially.
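A toy random Boolean network makes this scaling concrete: with n on/off nodes there are 2**n possible network states, so each added node doubles the repertoire. The wiring and truth tables below are randomly generated purely for illustration:

```python
import itertools
import random

def random_boolean_network(n, seed=0):
    """Build a random n-node Boolean network; each node reads two others."""
    rng = random.Random(seed)
    inputs = [(rng.randrange(n), rng.randrange(n)) for _ in range(n)]
    # Each node's rule is a random truth table over its two inputs.
    tables = [{bits: rng.randrange(2)
               for bits in itertools.product((0, 1), repeat=2)}
              for _ in range(n)]
    def step(state):
        return tuple(tables[i][(state[a], state[b])]
                     for i, (a, b) in enumerate(inputs))
    return step

step = random_boolean_network(8)
state = (1, 0, 1, 1, 0, 0, 1, 0)
visited = set()
# The network is deterministic and finite, so it must eventually revisit
# a state and fall onto an attractor (a repeating cycle of states).
while state not in visited:
    visited.add(state)
    state = step(state)
print(f"possible states: {2 ** 8}, visited before cycling: {len(visited)}")
```

Even this eight-node network has 256 possible states; at eighty nodes the count exceeds 10**24, which is the exponential economy of scale the text describes.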

There are important diseconomies of scale driven by the second control variable of Boolean networks: the degree of connectedness. As the number of interdependencies grows, changes in one part of the network are more likely to have ripple effects on other parts of the network. As the potential for these knock-on effects grows, the probability that a positive change in one part of the network will have a negative effect somewhere else also increases. This in turn means that densely connected networks become less adaptable as they grow.

We thus have two opposing forces at work in organizations: the informational economies of scale from node growth, and the diseconomies of scale from the buildup of conflicting constraints. Taken together, these opposing forces help us understand why big is both beautiful and bad: as an organization grows, its degrees of possibility increase exponentially while its degrees of freedom collapse exponentially. Put simply, large organizations inherently have more attractive opportunities before them than small organizations do (the large can theoretically do everything the small can do, plus more). But reaching those future opportunities involves trade-offs, and the more densely connected the organizational network, the more painful those trade-offs will be. The politics of organizations are such that local pain in particular groups or departments is often sufficient to prevent the organization from moving to a new state, even if that state is more globally fit.

Boring Is Better

The more regularity there is in the behavior of the nodes, the more density in connections the network can tolerate. In an organizational context, we can think of bias as a measure of predictability. If there is predictability in the decision making of an organization, then the organization can function effectively with a more densely connected network. If, however, decision making is less predictable, then less-dense connections, more hierarchy, and smaller spans of control are needed. Thus, for example, in an army, where regular, predictable behavior of troops is highly valued, it might be possible to get away with larger unit sizes than, say, in a creative advertising agency. It also means that factors that make behavior less predictable, such as office politics and emotions, can limit the size an organization can grow to before being overwhelmed by complexity. One can see a recipe for creating a dysfunctional organization: just mix unpredictable behavior, a flat hierarchy, and lots of dense interconnections, and the chances of getting anything done will be roughly zero.


At the beginning of the twenty-first century, Traditional Economics offers us two competing hypotheses to explain the oscillating patterns we see in the economy. On one side we have the microeconomics-based real business cycle theory, which holds on to the rational-equilibrium view and sees the economy as merely propagating external shocks. Under this theory, the key causes of economic oscillations are exogenous political events, changes in technology, and other factors. But such models cannot tell us why the cycles have been so persistent throughout history, despite enormous changes in the exogenous factors posited as causes. On the other side of the tracks, we have the macroeconomics-based New Keynesianism. This body of work has backed away from Traditional orthodoxy and incorporated less-than-perfect rationality, dynamics, and time delays in order to find endogenous explanations. In many ways, New Keynesianism is a step in the Complexity Economics direction, but the New Keynesians have not been prepared to abandon equilibrium, and as a result, the empirical success of the theory has thus far been limited.

The ultimate accomplishment of Complexity Economics would be to develop a theory that takes us from theories of agents, networks, and evolution, all the way up to the macro patterns we see in real-world economies. Such a comprehensive theory does not yet exist, but we can begin to see glimmers of what it might look like. Such a theory would view macroeconomic patterns as emergent phenomena, that is, characteristics of the system as a whole that arise endogenously out of the interactions of agents and their environment.

Complex adaptive systems tend to have signature emergent patterns that are common across many types of systems. These patterns help us better understand the workings of those systems. We will look at three such signature patterns: oscillations, punctuated equilibrium, and power laws.

Patterns of punctuated equilibrium show up not just in biological evolution, but in other complex systems ranging from the slides of avalanches to the crashes of stock markets.

Power laws have been discovered in a wide variety of phenomena, including the sizes of biological extinction events, the intensity of solar flares, the ranking of cities by size, traffic jams, cotton prices, the number of fatalities in warfare, and even the distribution of sex partners in social networks. Power laws, along with oscillations and punctuated equilibrium, are another signature characteristic of complex adaptive systems.

Gaussian random walks almost never have fluctuations greater than five standard deviations, yet in real economic data, such as stock market crashes, five-standard-deviation events, and even larger ones, do in fact occur. Researchers found that the fluctuations in stock prices follow clear power laws in the tails of the distribution. One consequence of this result is that financial markets are far more volatile than Traditional Economics leads us to believe.
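The gap between the two views can be made concrete with a back-of-the-envelope calculation, assuming 252 trading days per year and a tail exponent of 3 (a value reported in the empirical finance literature; normalizing the power law at one standard deviation is an illustrative choice, not a fitted one):

```python
import math

# If daily stock returns really followed a Gaussian random walk, a
# five-standard-deviation day would be vanishingly rare:
p_gauss = math.erfc(5 / math.sqrt(2))   # P(|Z| > 5) for a standard normal
print(f"Gaussian: P(|move| > 5 sigma) ~ {p_gauss:.1e}")
print(f"...roughly one such day per {1 / p_gauss / 252:,.0f} years of trading")

# Under a power-law tail P(|move| > x) proportional to x**(-alpha) with
# alpha = 3, the same event is orders of magnitude more likely:
p_power = 5.0 ** -3
print(f"Power law: P(|move| > 5 sigma) ~ {p_power:.1e}")
```

Under the Gaussian assumption a five-sigma crash should occur perhaps once in thousands of years of trading; under a cubic power-law tail it is an unremarkable, merely uncommon event, which is what the historical record actually shows.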

Professional traders have always known that even in big stocks, there is less liquidity than one might think, and that order books are quite patchy. That is why they have to dribble out large trades over time: a big trade hitting the market all at once would move the price too much. But until Farmer and company's work, it wasn't appreciated just how much this patchiness of order books contributed to stock volatility. The team built a mathematical model of the order book and showed that with just random trading (again, no real news), the structure of the order book on its own was a significant source of volatility. They also showed that, as one might expect, smaller, more thinly traded stocks were more volatile than large, more liquid stocks.


Evolution is an algorithm that is substrate-neutral. It takes information about designs for things and mindlessly grinds that information through a process. Evolution is also recursive: its output from one cycle is the input for the next round. This circularity means that it will keep cranking along indefinitely until something stops it.

Complex designs are inherently modular. Our bodies have a bewildering array of systems, subsystems, and other components, from the cardiovascular system, to the heart, to an individual red blood cell, which in turn has its own systems and subsystems. Complex human-made designs have the same characteristic, such as a car's braking system, the brakes themselves, and an individual brake pad. A complex design can be viewed as a hierarchical collection of modules and submodules. In an evolutionary system, each of these systems, subsystems, and component parts has corresponding pieces of code for it in the schema. Thus, the schema for an evolutionary construction is full of such building blocks combined into higher-level building blocks that are combined into still higher-level building blocks.

Evolution is a process of sifting from an enormous space of possibilities. It tries a bunch of designs, sees what works, and does more of what works and less of what doesn't, repeated over and over again. There is no foresight, no planning, no rationality, and no conscious design. There is just the mindless, mechanical grinding of the algorithm.

Evolution ultimately selects for building blocks that support replication. This is the essence of Dawkins's famous selfish-gene theory. By selfish, Dawkins did not mean, as is sometimes misunderstood, that our genes make us (or other creatures) inherently selfish in our quest to survive; in fact, cooperation is a critical survival technique for many species, including our own. What he was referring to was the logic of replication: genes (or more generally, building blocks) that are good at supporting their own replication (i.e., "selfish" ones) will be replicated. Any other strategy will not survive in a world of competition.

The setup for evolution boils down to information processing. In order for evolution to get a foothold, the algorithm needs an information-processing medium: something to store, modify, and copy schemata. Evolution could not get started in the biological world until a combination of thermodynamics and random chance created the first self-copying molecules to store, modify, and replicate molecular designs. The information-processing medium that gave evolution a foothold in the economy was spoken language and, later, writing. Once the information-processing medium is established, the processes of differentiation, selection, and replication can begin. Good replicators replicate, and what is fit is defined by the environment, which includes competition with other replicators. Evolution then starts its march through design space, seeking out designs that are better and better replicators.

Holland has shown that evolution automatically strikes the right balance between exploration and exploitation. When things are good, when evolution has found a high plateau, evolution will devote proportionally more population resources to exploiting. But when things are bad, when the population is down in the valley, proportionally more resources will be devoted to exploring. Every time evolution occupies a new part of the fitness landscape, it is placing bets to sample the unknown. But like any bettor, as evolution gets more information, it wants to double up on the bets that look most promising. Holland has worked out the optimal formula for balancing exploitation and exploration and has shown that evolution comes very close to achieving the optimal balance. Evolution is a gambler, but one that plays the odds very well.
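Holland analyzed this trade-off with the two-armed bandit problem. The sketch below is not his formula, but a simple illustration of the same logic: a population of bets is reallocated in proportion to observed payoffs (exploitation), while a floor keeps a few bets on the apparently worse arm (exploration); all payoff probabilities and parameters are invented for the example:

```python
import random

random.seed(0)
payoff = {"A": 0.6, "B": 0.4}     # arm A is truly better, but unknown
counts = {"A": 50, "B": 50}       # start with an even split of 100 bets
observed = {"A": 1.0, "B": 1.0}   # cumulative observed successes

for _ in range(200):
    # Sample the unknown: each bet on an arm either pays off or doesn't.
    for arm in ("A", "B"):
        observed[arm] += sum(random.random() < payoff[arm]
                             for _ in range(counts[arm]))
    # Amplify: double up on the arm that looks most promising...
    share_a = observed["A"] / (observed["A"] + observed["B"])
    counts["A"] = min(95, max(5, round(100 * share_a)))
    # ...but never stop exploring the other arm entirely.
    counts["B"] = 100 - counts["A"]

print(counts)   # most bets end up on arm A; a few keep exploring arm B
```

The allocation shifts exponentially toward the observed winner while a residual trickle of exploration protects the bettor in case the landscape, and the true payoffs, shift.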

Good Tricks, Forced Moves, and Path Dependence

Good Tricks are moves on the fitness landscape that are not required on pain of extinction, but rather are so attractive that there is a strong probability they will be repeatedly and independently discovered by evolutionary searches of the landscape. For example, imagine in our landscape a great, high mountain region called "creatures with eyes," for which all the DNA books have instructions for making sensors that detect light.

Dennett also refers to another factor: Forced Moves. In the game of chess, players sometimes find themselves in a position in which they have no choice about what to do next; they are boxed into a position in which any other move is suicidal. Likewise, in a fitness landscape, the movements of evolutionary search are constrained by what is fit in a given environment. That is, one can run along a ridge that is above the poisonous fog of low fitness, but if one deviates from the ridgeline, one is dead. Some Forced Moves are created by the constraints imposed by the laws of physics and chemistry. For example, the laws of thermodynamics dictate that all biological creatures must have some kind of boundary between themselves and the rest of the world to provide a barrier between their low-entropy interior and the high-entropy outside world. Thus, all living things have some kind of skin, membrane, exoskeleton, protein shell, or other container.

A third and final consequence of fitness landscape topography is path dependence. In evolutionary systems, history matters; where you can go in the future depends on where you have been in the past. Differentiation can spread populations out in a neighborhood of the landscape, but you can't just go instantly from one part of the landscape to another at will.

Mathematicians and evolutionary theorists have explored a variety of alternative search algorithms on different landscape shapes. Some are better for searching perfectly random landscapes, and some are better for searching highly ordered and regular landscapes. But for landscapes that are in between, are rough-correlated, and have complex features such as plateaus, holes, and portals, evolution is hard to beat. And when the landscape is constantly changing, when the search problem is a dynamic one, when one must balance the tension between exploring and exploiting, evolution truly is the grand champion.

Stripping Evolution Back to Its Basics

To recap the substrate-neutral version of evolution we have been building, here are the necessary conditions for evolution to do its work:

  • There is a design space of possible designs.
  • It is possible to reliably code those designs into a schema.
  • There is some form of schema reader that can reliably decode schemata and render them into interactors. In endogenous evolution, schemata code for the building of their own readers.
  • Interactors are made up of modules and systems of modules that are coded for by building blocks in the schemata.
  • The interactors are rendered into an environment. The environment places constraints on the interactors (e.g., the laws of physics, climate, or the LEGO Judge), any of which can change over time. A particularly important constraining factor is competition among interactors for finite resources.
  • Collectively, the constraints in an environment create a fitness function whereby some interactors are fitter than others.

The process of evolution can then be thought of as an algorithm that searches the design space for designs that are fit, given the fitness constraints of the environment. The algorithm conducts its search of the design space as follows:

  • There is a process of variation of schemata over time. Schemata can be varied by any number of operators, for example, crossover and mutation.
  • Schemata are rendered into interactors creating a population.
  • Acting on the interactors is a process of selection, whereby some designs are deemed by the fitness function to be fitter than others. Less fit interactors have a higher probability of being removed from the population.
  • There is a process of replication. Fit interactors have on average a greater probability of replicating, and more variants are made of them than of less-fit designs.
  • Thus over time, building blocks that contribute to interactor fitness are replicated more frequently and become more common in the population.
  • Finally, the algorithmic process of variation, selection, and replication is conducted recursively on the population, with output from one round acting as the input for the next round.
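The search steps just listed can be sketched as a minimal genetic algorithm. The design space here is all 20-bit strings, the schema is the bit string itself, and a simple fitness function (the count of 1-bits) stands in for the environment's constraints; the population size, mutation rate, and every other parameter are illustrative choices:

```python
import random

random.seed(0)
BITS, POP, GENERATIONS = 20, 30, 60

def fitness(schema):
    # A stand-in fitness function: count the 1-bits in the schema.
    return sum(schema)

def mutate(schema, rate=0.02):
    # Variation operator 1: occasionally flip a bit.
    return [bit ^ (random.random() < rate) for bit in schema]

def crossover(a, b):
    # Variation operator 2: splice two parent schemata at a random cut.
    cut = random.randrange(1, BITS)
    return a[:cut] + b[cut:]

# Render an initial random population of interactors.
population = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    # Selection: fitter interactors have a higher chance of replicating
    # (weights offset by 1 so no schema's chance is exactly zero).
    weights = [1 + fitness(s) for s in population]
    parents = random.choices(population, weights=weights, k=2 * POP)
    # Replication with variation, run recursively on the population.
    population = [mutate(crossover(parents[2 * i], parents[2 * i + 1]))
                  for i in range(POP)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{BITS}")
```

From a random starting population, the loop climbs steadily toward high-fitness designs, with successful building blocks (runs of 1-bits) spreading through the population much as the steps above describe.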

When the algorithm is running in an appropriately set up information-processing substrate with the right parameters, we can then expect to see the following results:

  • The creation of order from randomness. From simple random beginnings, the algorithm creates complex designs that are "ordered" from the point of view of the fitness function. All evolutionary processes operate in open systems, so in effect the algorithm harnesses energy to decrease local entropy and turn randomness into order.
  • The discovery of fit designs. The algorithm provides a fast and efficient way of searching the enormity of design space for fit designs. In endogenous evolution, designs are fit if they survive and replicate under the constraints of their environment ("good replicators get replicated").
  • Continuous adaptation. The algorithm "learns" what the fitness function wants and seeks out designs that meet those criteria. If the fitness function changes, evolution produces designs that reflect the new selection pressures.
  • The accumulation of knowledge. The evolutionary process accumulates knowledge over time. DNA contains immense amounts of information about which biological designs have worked in the past. If you were an alien from another planet and had never seen the earth, but somehow obtained a piece of DNA from an earthly organism, you could learn much about the earth's environment just from that piece of code (assuming you had a DNA reader). Schemata are like the hard drives of the evolutionary process; they fill up with information over time.
  • The emergence of novelty. During the evolutionary process, the algorithm continuously creates new variants of designs. In a theoretical sense, all possible designs already exist in the design space, but by discovering and rendering them, evolution introduces "new" designs into the real world. Experiments using computer-simulated evolution to design things from jet-engine fan blades to computer chips have also resulted in novel designs.
  • Growth in resources devoted to successful designs. Populations of successful designs grow, and populations of unsuccessful designs shrink as successful designs win in the competition for resources. The larger populations mean that successful schemata "control" more resources in terms of matter, energy, and information than do unsuccessful schemata. Growth, however, may not follow a smooth pattern, but may follow a pattern of punctuated equilibrium due to a combination of network effects from coevolution and the shape of the fitness landscape itself.

Evolution is highly effective at finding fit designs in massive design spaces with rough-correlated fitness landscapes because:

  • Evolution employs parallel search. In effect, each member of the population is an individual experiment in design, so there are many hikers out looking for high peaks.
  • Evolution creates a spectrum of jumps on the landscape. It doesn't pursue just short, incremental jumps that could get stuck on local optima; nor does it pursue too many crazy long jumps that have a greater chance of failing than succeeding.
  • Finally, evolution is a process of continuous innovation. The recursive nature of the algorithm never stops. This is essential, given the constantly changing nature of the landscape. There may be periods of more or less active search as evolution balances exploration and exploitation, but the search is never complete. The system has no equilibrium—in evolutionary systems, stasis is a recipe for extinction.

In effect, evolution says, "I will try lots of things and see what works and do more of what works and less of what doesn't." But in this process of sifting, remarkable things happen. The algorithm learns what the fitness function "wants," knowledge of that learning accumulates in the population of schemata, and the evolutionary process generates novelty as it searches for fitter and fitter designs.

Evolution is like a play. The cast of characters and the plot are fixed, but the particular actors, the setting, and many of the details are not. The evolutionary process can be set in biology, computer simulations, or human culture, technology, and the economy. The universal evolutionary process produces the general results just described as long as the conditions are in place, no matter what the substrate.



The Prisoner's Dilemma is particularly interesting because it is an example of a non-zero-sum game. In a non-zero-sum game, cooperation between two or more people leads to some greater gain for the group. The Prisoner's Dilemma presents us with a conundrum. The economy depends on cooperative activity—people need to work together to produce things, to trade with each other, and so on—yet, when people look at their narrow, short-term self-interest, they have an incentive to slack off at work, to cheat in trading, and generally not to cooperate. Even when people don't actually cheat, being the first to extend the hand of cooperation for the greater good might have risks when one doesn't know what others will do. Such dilemmas are called coordination problems. When people do not cooperate, game theorists refer to them as defecting.
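The dilemma can be made concrete with a few lines of code, using the standard textbook payoff values (3 each for mutual cooperation, 1 each for mutual defection, 5 and 0 when one side defects on a cooperator); the two strategies shown are illustrative:

```python
# (my move, their move) -> my payoff; "C" = cooperate, "D" = defect
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated Prisoner's Dilemma and return both total scores."""
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each sees the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Tit for tat: extend the hand of cooperation, then mirror the other player.
tit_for_tat = lambda their_moves: their_moves[-1] if their_moves else "C"
always_defect = lambda their_moves: "D"

print(play(always_defect, tit_for_tat))  # defector outscores this partner...
print(play(tit_for_tat, tit_for_tat))    # ...but mutual cooperation pays more
```

The defector beats the cooperator head-to-head, yet two cooperators each earn far more than a defector extracts from anyone, which is precisely the non-zero-sum gain, and the temptation to defect, that the text describes.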

Just as the churning of the evolutionary algorithm through the Library of All Prisoner's Dilemma Strategies created patterns of innovation, growth, and creative destruction, so too does the churning of evolution through the Library of Smith lead to these patterns in the real economy. Variation occurs as people continually experiment with, tinker with, and invent new business strategies and organizational designs. Selection works at multiple levels in the economy, causing some Business Plans to succeed and others to fail. Likewise, replication occurs in economic systems as successful designs are rewarded with more resources and are widely copied.

The model I will outline views economic evolution as the joint product of evolution in Physical Technology space, Social Technology space, and Business Plan space. We can think of these as three distinct, but interlinked and coevolving, design spaces. In each space, evolution is at work, churning through possible designs, finding and amplifying ones that work, discarding those that don't, and thereby creating the order that we see in our technological, social, and economic worlds.


Physical Technologies (PTs) are methods and designs for transforming matter, energy, and information from one state into another in pursuit of a goal or goals.

Some PTs result in the production of an artifact (e.g., a hand ax or a computer program), but others result in the provision of a service. For example, one could imagine a set of PTs for making a bank loan or providing a Shiatsu massage. A PT is not the thing itself (the hand ax, the software, or the massage); rather, it is both the design for the thing and the instructions and techniques for making it. One way of envisioning a PT is to think about what an instruction manual for the artifact or service would look like.

One of the most remarkable things about human Physical Technology is how each new invention creates both the possibility of, and the need for, more inventions. Each invention opens up new niches for future inventions, and components from one invention are often recycled into new forms.

As one considers the possibility that PT innovation is an evolutionary process, a natural reaction is, "But evolution is a blind, random process, while technology innovation is directed by human rationality and intention. How do we square that circle?" The answer is that there is nothing fundamental in the nature of the evolutionary algorithm that says intentionality and rationality cannot play a role, nor does anything say the process must be completely random. At its core, evolution is an iterative process of experimentation, selection, and then amplification of things that work. The random part of the process in biological evolution is the creation of variety for selection to act on. But even this is far from completely random. Mutations may be random, but recombination in sexual creatures is not. Competition for mates ensures that fit organisms have a higher probability of pairing off with other fit organisms.

The only requirement is that the algorithm be fed a sufficient variety of experiments for selection to act on. The experiments must cover a broad enough swath of the fitness landscape to give the algorithm a fighting chance of finding high peaks. From the algorithm's point of view, it doesn't really matter how the process achieves that diversity. In the case of humans searching PT space, the evolutionary algorithm gets its diversity of experiments through what I will call deductive-tinkering.

Selection on the Physical Technology Landscape

Physical Technologies are selected according to their fitness for their purpose, whatever that purpose is. We thus look for PTs that meet their purposes better than alternative PTs do. By "better," I mean that they are more effective in meeting their purpose and are more economically efficient at it than the alternatives.

Physical Technologies are replicated as humans spread them from one individual's head to another, as artifacts embodying the PTs are copied, and as the PTs are written down on stone tablets, printed in books, and put on Web pages. We can think of PTs as having a "market share" that fluctuates over time as successful PTs spread and unsuccessful ones die out.

Technology S-curves are the natural result of the geometry of rough-correlated fitness landscapes.

Disruptive Technologies: It is very difficult for successful, incumbent companies to make long jumps in the PT landscape. When you are on top of a local peak, there are far more ways to go down in fitness than up, and leaping to a new architecture appears highly risky. From the perspective of an entrepreneur or a new entrant starting in the low valley of a new architecture, there are lots of ways up and many new, untried peaks to explore. Most attempts up from the entrepreneurial valley will wind up in dead-end canyons or on disappointing short peaks. But with enough explorers working away, someone will eventually find an attractive route up.


Social Technologies (STs) are methods and designs for organizing people in pursuit of a goal or goals.

The term Social Technologies is a close cousin of a term used by economists: institutions. Nobel Prize winner Douglass North defines institutions as "the rules of the game in a society." Institutions are one ingredient in organizing, but I intend for my definition of STs to be somewhat broader and include other ingredients such as structures, roles, processes, and cultural norms. Social Technologies include all the elements necessary for organizing.

Like the other design spaces we have discussed, the library of STs has three important attributes. First, like its PT cousin, the ST design space is self-feeding and exponentially unfolding. Each ST breakthrough creates more headroom for the next set of breakthroughs—the invention of money enabled the invention of accounting, which enabled the invention of the joint stock company, which enabled the invention of stock markets, and so on.

Second, STs have a modular, building-block quality to them. For example, the organizational design of a large multinational corporation is a collection of modules that includes designs for organizing its business units, designs for its accounting and control systems, designs for its committee structure, and designs for its cultural norms of behavior.

Third, the fitness landscape associated with ST design space is highly likely to be rough-correlated. Small differences in ST designs tend to yield small differences in relative fitness, but occasionally, small changes will make an ST either unworkable or much better. Thus, the ST fitness landscape, like our other fitness landscapes, has an alpine, rough-correlated shape, with flat spots, Swiss cheese holes, and the occasional portal to higher ground. A prediction from this assumption is that just as we observed S-curves and disruptive technologies in PT space, we would expect to see the equivalent in ST space. History seems to bear this out.

Deductive-Tinkering in Social Technology Space

In STs, the ratio of deduction to tinkering is more weighted toward the latter than is the case with PTs. Despite advances in economics and organization theory, there is still far more art than science in activities such as redesigning a company's organization or creating a central banking system, as compared with building a jet aircraft or designing a new heart drug. One of the promises of Complexity Economics is that, over time, it will push the art-science boundary in ST a step further toward science. Despite the lesser role of deduction in ST space, the process of searching for fit STs using deductive-tinkering is nonetheless an evolutionary process. People conduct experiments with various STs, and then over time, successful designs tend to persist, while less successful ones fade away. Successful designs tend to be amplified as they are copied, attract more resources, and spread.
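The differentiate-select-amplify loop described above can be sketched as a minimal evolutionary search. Everything here is an illustrative assumption, not from the book: designs are bit strings, fitness is simply the count of 1s, and "tinkering" is a single random bit flip.

```python
import random

# Toy sketch of the differentiate-select-amplify loop.
# A "design" is a list of 10 bits; fitness is the number of 1s (an
# invented stand-in for how fit a Social Technology design is).

def fitness(design):
    return sum(design)

def differentiate(design):
    """Copy a design with one small random tweak (tinkering stand-in)."""
    child = design[:]
    i = random.randrange(len(child))
    child[i] = 1 - child[i]  # flip one bit
    return child

random.seed(0)
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]

for generation in range(50):
    # Differentiate: every design proposes a variant.
    variants = [differentiate(d) for d in population]
    # Select: keep the fitter half of the combined pool.
    pool = sorted(population + variants, key=fitness, reverse=True)
    survivors = pool[: len(population) // 2]
    # Amplify: successful designs are copied (with variation) to refill
    # the population, so fit designs spread.
    population = survivors + [differentiate(d) for d in survivors]

best = max(fitness(d) for d in population)
print(best)  # fitness climbs toward the maximum of 10
```

Even this crude loop reliably climbs the landscape, which is the point of the "evolution is an algorithm" argument: no step requires foresight, only variation, a fitness test, and copying of what works.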

What drives humanity's deductive-tinkering search through Social Technology space? What spurs us to constantly seek out new and better ways of organizing ourselves? The answer lies in the magic of non-zero-sum games. Cooperation in non-zero-sum games has a 1 + 1 = 3 logic, whereby if you scratch my back, I'll scratch yours, and together we can do something neither can do as well on our own and we both benefit. Non-zero-sum cooperation is one of those Good Tricks of survival that has been widely employed by biological evolution.

We can view the deductive-tinkering search through the ST fitness landscape as a quest for STs that enable people to play and capture the benefits of non-zero-sum games. Social Technology fitness will therefore depend on three factors. First, the ST must provide the potential for non-zero-sum payoffs. Second, it must provide methods for allocating the payoffs in such a way that people have an incentive to play the game. And third, the ST must have mechanisms for managing the problem of defection.

Non-Zero Magic

There are four basic sources of 1 + 1 = 3 magic in non-zero-sum games. All four have been well known to Traditional Economic theory for a long time.

  1. First is the division of labor.
  2. Second is the heterogeneity of people. Their different needs and tastes create opportunities to trade for mutual benefit.
  3. Third are the benefits of increasing returns to scale.
  4. Fourth, and finally, cooperation helps smooth out uncertainties over time. If one hunting band has a successful day and another does not, the successful band can share its bounty with the unsuccessful group under the proviso that the others do the same when the situations are reversed (and perhaps the payback will include some interest).

Allocating the Payoffs

Nash equilibrium: How two or more bargainers split up the gains from exchange depends on how much each values the benefits of the deal, and what the parties' alternatives are. Each looks for his or her best deal assuming everyone else is looking for the best deal, too, and the trade is made at the point at which no one has any incentive to change position, given the actions of the other. The existence of a Nash equilibrium does not guarantee a happy, cooperative result (e.g., the Prisoner's Dilemma).
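The "no incentive to change position" condition can be checked mechanically. A minimal sketch, using standard textbook Prisoner's Dilemma payoffs (the numbers are a conventional choice, not taken from the book):

```python
from itertools import product

# Find pure-strategy Nash equilibria of a 2x2 game.
C, D = "cooperate", "defect"
payoffs = {            # (row player's payoff, column player's payoff)
    (C, C): (3, 3),
    (C, D): (0, 5),
    (D, C): (5, 0),
    (D, D): (1, 1),
}

def is_nash(row, col):
    """Neither player can gain by unilaterally switching strategies."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(r, col)][0] for r in (C, D))
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, c)][1] for c in (C, D))
    return row_ok and col_ok

equilibria = [cell for cell in product((C, D), repeat=2) if is_nash(*cell)]
print(equilibria)  # prints [('defect', 'defect')]
```

The only equilibrium is mutual defection, even though mutual cooperation pays both players more, which is exactly why a Nash equilibrium need not be a happy, cooperative result.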

Managing Defection: Evolution has steered us in a direction whereby we are naturally inclined to be cooperative to capture the riches of non-zero-sum gains. Nevertheless, it has also equipped us with a sensitivity to cheating, expectations of fairness, and a willingness to mete out punishment to those we believe have crossed the line. In effect, evolution has programmed into our mental software sophisticated, intuitive "Nash equilibrium finders" and "fairness detectors" that enable groups of humans to form coalitions that are at least reasonably stable and resistant to attack by free riders and cheaters.

Building Computers Out of People: Once the evolution of STs reached the stage at which large numbers of people could form cooperative networks and had the means for communicating and storing significant amounts of data, human organizations took on a different character—they became capable of emergent computation. Organizations of people have the ability to process information and solve complex problems that individuals cannot process or solve on their own.


The way I have defined things, it follows naturally that if Business Plans are the schemata in economic systems, and Business Plans code for the construction of businesses, then the interactors in economies must be businesses—it is businesses that do the "living and the dying."

A business is a person, or an organized group of people, who transforms matter, energy, and information from one state into another with the goal of making a profit.

A firm is one or more businesses controlled in common by a person or group of people.

Businesses are thus the interactors that struggle in survival-of-the-fittest competition in economic evolution. But it would be a mistake to assume that this means that businesses are the unit of evolutionary selection, just as it would be a mistake to assume that organisms, rather than genes, are the unit of selection in biological systems.

A module is a component of a Business Plan that has provided in the past, or could provide in the future, a basis for differential selection between businesses in a competitive environment.

One way to think of modules is to ask, "If I were a manager of a business, what kinds of things might I try varying to improve the performance of that business?" For example, one might launch a new sales campaign, redesign the customer service process, introduce new cost control disciplines, or enhance products. Any of the activities that provided a basis for differentiating the performance of the business would be a module (or more precisely, the bits of Business Plan that coded for those activities would be the module).

How Selection Works in Market Systems: There is an iterative loop of option generation, testing, and selection. The loop begins in the mental models of the individual agents, eventually winds its way through groups of agents, and then leads to actions in the real world. Thus, option selection works at multiple levels: in the mental models of individual agents, within the organizational hierarchy, and ultimately in the marketplace itself.

Replication: Amplifying Success: We will measure replication success by the percentage of resources whose fitness is influenced by a particular Business Plan module. These resources might include money, people, plant and equipment, or even intangible assets such as brand awareness, technical knowledge, and customer relationships—any resources that are in finite supply and over which businesses compete. And since we are now talking about a continuous rather than discrete measure, we will say that a successful Business Plan module is amplified, rather than replicated. Thus, a Business Plan module is amplified in Business Plan space if its influence over resources grows over time.
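Amplification as a growing resource share can be sketched with a replicator-style update, in which each module's share of resources grows in proportion to its fitness relative to the average. The module names and fitness numbers are invented for illustration:

```python
# Sketch of amplification as growing resource share.
# Shares grow when a module's fitness beats the resource-weighted average.
modules = {"module_A": 1.2, "module_B": 1.0, "module_C": 0.8}  # relative fitness
share = {name: 1 / 3 for name in modules}                      # equal initial shares

for _ in range(25):
    # Economy-wide average fitness, weighted by current resource shares.
    avg = sum(share[m] * modules[m] for m in modules)
    # Resources flow toward above-average modules, away from below-average ones.
    share = {m: share[m] * modules[m] / avg for m in modules}

print({m: round(s, 3) for m, s in share.items()})
# module_A's share approaches 1 over time; the shares always sum to 1
```

Note the winnowing is continuous rather than all-or-nothing: the below-average modules are never "killed," their influence over resources simply shrinks toward zero, which matches the continuous measure of replication success described above.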

Thus, we can see a variety of mechanisms at work for transmitting and amplifying Business Plan modules. They include the mental simulation and decision making of a single person, the problem solving and decision making of a group, imitation within and outside an organization, the direction of resources toward successful modules by the market, and the takeover of one business by another.

Economic Evolution in a Nutshell

We now have all the elements of an evolutionary model for searching the Library of Smith for fit Business Plans. We have seen how Business Plans are instructions for creating businesses that can be implemented by qualified Business Plan readers. These instructions bind Physical Technologies and Social Technologies together into modules under a strategy. Business Plans are differentiated through the deductive-tinkering of agents as they search for potentially profitable plans. While the distribution of experiments created by this process differs from the purely random differentiation of biological evolution, it nonetheless feeds the evolutionary algorithm with a superfecundity of Business Plans for selection to act on.

The process of selection is nested and occurs at several levels, ranging from the mental simulations of individuals to the problem-solving activities of groups. Further selection occurs as Business Plans percolate up and down the hierarchies of organizations, but then at some point the plans are implemented and the market renders its judgment.

Finally, successful modules are rewarded by gaining influence over more resources. Success for a module comes at two levels. The first level is within an organization, when a Business Plan is implemented and is given resources for its execution, for example, when people and money are invested in executing a plan. The second level is when modules are expressed in the marketplace and rewarded with growth and more capital by customers and financial markets. This is a winnowing process; the superfecundity of Business Plans means that far more plan options are considered within an organization than can be implemented, and far more plans are tried in the market than can succeed. As selection does its work, fit modules eventually gain influence over a larger and larger percentage of the total resource base of the economy. This is a highly dynamic process; what is fit today may or may not be fit tomorrow. Thus, the evolutionary process never stops, as modules come and go and as businesses rise and fall, adapting to the needs of the marketplace.

In Praise of Markets—for Different Reasons

Following the framework I have just outlined, we can reinterpret markets as an evolutionary search mechanism. Markets provide incentives for the deductive-tinkering process of differentiation. They then critically provide a fitness function and selection process that represents the broad needs of the population (and not just the needs of a few Big Men). Finally, they provide a means of shifting resources toward fit modules and away from unfit ones, thus amplifying the fit modules' influence.

In short, the reason that markets work so well comes down to what evolutionary theorists refer to as Orgel's Second Rule (named after biochemist Leslie Orgel), which says, "Evolution is cleverer than you are." Even a highly rational, intelligent, benevolent Big Man would not be able to beat an evolutionary algorithm in finding peaks in the economic fitness landscape. Markets win over command and control, not because of their efficiency at resource allocation in equilibrium, but because of their effectiveness at innovation in disequilibrium.


The Second Law of Thermodynamics provides a basic constraint on all life: over time, energy inputs must be greater than energy expenditures. All organisms must make a thermodynamic "profit" to survive and reproduce. The design for an organism can be thought of as a strategy for making thermodynamic profits long enough to reproduce, before the Second Law eventually catches up.

A pattern of matter, energy, and/or information has economic value if the following three conditions are jointly met:

  1. IRREVERSIBILITY. All value-creating economic transformations and transactions are thermodynamically irreversible.
  2. ENTROPY. All value-creating economic transformations and transactions reduce entropy locally within the economic system, while increasing entropy globally.
  3. FITNESS. All value-creating economic transformations and transactions produce artifacts and/or actions that are fit for human purposes.

Taken together, the three conditions say that economic activity is fundamentally about order creation. Faced with the disorder and randomness of the world, humans spend most of their waking hours ordering their environment in various ways to make it a more hospitable and enjoyable place. We order our world by transforming energy, matter, and information into the goods and services we want, and we have discovered the evolutionary Good Trick that by cooperating, specializing, and trading, we can create even more order than we otherwise could on our own. But what is the purpose of all our scurrying, buzzing, furtive order creation? Why do we do it?

In biological evolutionary theory, there is just a single, universal utility function—the replication of genes. From the gene's-eye view, our genes built our bodies as a strategy for replicating themselves. A part of that strategy was to give our bodies large brains that enabled humans to live in complex, cooperative social environments and build tools. Those brains were built to serve the purpose of replicating the genes that built them. As a result, the brain developed goals, preferences, and drives that were consistent with surviving, mating, and bringing up young in the ancestral environment. We have used our large brains to order our environment to meet the needs of our goals, preferences, and drives.

All wealth is created by thermodynamically irreversible, entropy-lowering processes. The act of creating wealth is an act of creating order, but not all order is wealth creating. Individual minds, organizations, and markets churn through Business Plans seeking different forms of economic order, and where the markets propose, consumers dispose. They select forms of order that meet their needs, fulfilling drives and preferences that may seem modern, but have deep historical roots in the universal utility function of our genes. Wealth is thus a form of anti-entropy. It is a form of order, but not just any order—it is fit order. Patterns of economic order, in the form of products and services, compete with each other to be needed, desired, and even craved by consumers. We can retrospectively say that patterns of economic order that are successful in the competition to meet our preferences are fit, and the Business Plan modules that contribute to the creation of fit economic order are then amplified over time. And just as species and their environments coevolve, the competitive ecosystem of Business Plans and the preferences of consumers coevolve, making fitness a contingent concept, whereby what is fit today may or may not be fit tomorrow.

Economic wealth and biological wealth are thermodynamically the same sort of phenomena, and not just metaphorically. Both are systems of locally low entropy, patterns of order that evolved over time under the constraint of fitness functions. Both are forms of fit order. And the fitness function of the economy—our tastes and preferences—is fundamentally linked to the fitness function of the biological world—the replication of genes. The economy is ultimately a genetic replication strategy. It is yet another evolutionary Good Trick, along with leopard camouflage, bat radar, and fruit-fly eyes. The economy is a massively complex Good Trick built on the complex Good Tricks of big brains, nimble toolmaking hands, cooperative instincts, language, and culture.

If wealth is indeed fit order, then we can use another more familiar word to describe it. In physics, order is the same thing as information, and thus we can also think of wealth as fit information; in other words, knowledge. Information on its own can be worthless. Knowledge, on the other hand, is information that is useful, that we can do something with, that is fit for some purpose. We have found the answer to our quest. Wealth is knowledge and its origin is evolution.



While Complexity Economics strips away our illusions of control over our economic fate, it also hands us a lever—a lever that we have always possessed but never fully appreciated. We may not be able to predict or direct economic evolution, but we can design our institutions and societies to be better or worse evolvers. The message of Complexity Economics is that evolution may indeed be cleverer than we are, but rather than trying to outsmart it, we can understand it and harness its power to serve human purposes.

The standard approach to strategy hinges on two fundamental assumptions: first, that one can make confident predictions about what strategies will be successful in the future, and second, that one can make strategic commitments that will result in sustainable competitive advantage. Companies invest billions of dollars on the back of these assumptions every day. Unfortunately, both assumptions are wrong.

The combination of punctuated equilibrium and power laws in complex adaptive systems is almost fiendishly designed to lull people into a sense of self-confidence, and then pull a nasty surprise.

In evolutionary systems, sustainable competitive advantage does not exist; there is only a never-ending race to create new sources of temporary advantage. This then changes our definition of an excellent company from one that has continuous high performance for very long periods (an achievement that is almost nonexistent) to one that can string together a series of temporary advantages over time—in other words, a strong runner in the Red Queen race.

Companies Don't Innovate; Markets Do

Companies are Big Man (today we should say, Big Person) hierarchies with all their human foibles and distortions, while markets are almost pure evolutionary machines. Companies have an inherent disadvantage in that they can never have the same diversity of Business Plans as contained in the market as a whole. Nor can they ever perfectly mirror the selection pressures of actual markets or have the nearly infinite resources of the capital markets to invest in amplifying and scaling up Business Plans that succeed. From the unsentimental perspective of the evolutionary algorithm, businesses are just experimental grist for the evolutionary mill.

Strategy as a Portfolio of Experiments

The key to doing better is to "bring evolution inside" and get the wheels of differentiation, selection, and amplification spinning within a company's four walls. Rather than thinking of strategy as a single plan built on predictions of the future, we should think of strategy as a portfolio of experiments, a population of competing Business Plans that evolves over time.

There are some general lessons that can be learned from a portfolio-of-experiments approach to strategy. First, management needs to create a context for strategy. Constructing a portfolio of experiments requires a collective understanding of the current situation and shared aspirations among the management team. Second, management needs a process for differentiating Business Plans that results in a portfolio of diverse Plans. Third, the organization needs to create a selection environment that mirrors the environment in the market. Fourth and finally, processes need to be established that enable the amplification of successful Business Plans and the elimination of unsuccessful Plans.

The conventional wisdom is that an innovative mind-set is one that tolerates risk, ambiguity, and a lack of control. The approach that I have outlined does not require such a mind-set—rather, it requires a redefinition of these concepts. Instead of taking big, risky bets to innovate, take many small ones and only bet big on stuff that works. Pursuing multiple Business Plans at the same time does not necessarily mean ambiguity if there is clarity in the selection pressures for good plans and bad plans. And while senior executives cannot necessarily control the outcomes of their strategic experiments, they can control the process of creating, selecting, and amplifying experiments to increase their odds of success.
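The "many small bets, then bet big on what works" logic can be sketched as an explore-then-amplify allocation. The plan names and success rates below are invented for illustration; the firm observes only trial outcomes, not the hidden rates:

```python
import random

# Illustrative sketch of "take many small bets and only bet big on what works."
random.seed(1)
true_success = {"plan_A": 0.2, "plan_B": 0.5, "plan_C": 0.8}  # hidden from the firm

# Small bets: run a batch of cheap trials of every plan in parallel.
results = {plan: [] for plan in true_success}
for _ in range(30):
    for plan, rate in true_success.items():
        results[plan].append(random.random() < rate)

# Selection: compare observed success rates across the portfolio.
observed = {plan: sum(r) / len(r) for plan, r in results.items()}
# Amplification: the big bet goes only to the observed winner.
winner = max(observed, key=observed.get)
print(winner, round(observed[winner], 2))
```

The point is that no single bet has to be right in advance: the portfolio generates the information, and the selection step, not a prediction, decides where the large commitment goes.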


The design of a firm's social architecture plays a major role in determining how adaptable that organization will be. We will define a social architecture as having three components:

  • The behaviors of the individual people in the organization.
  • The structures and processes that align people and resources in pursuit of an organization's goals.
  • The culture that emerges from the interactions of people in the organization with each other and their environment.

Organizations are complex adaptive systems. Organizations are made up of individual agents who dynamically interact with each other; agents' rules of behavior and networks of interactions change in response to changes in the environment; and agents' interactions produce emergent macro-level patterns of behavior. Organizations are goal directed, boundary-maintaining, and socially constructed systems of human activity.

Why firms exist

Coase's answer was simple but insightful—people form organizations to minimize "transaction costs." If a house builder and a carpenter are only going to work together once on a specific job, it might make sense for them simply to write a contract for that job and get on with it. If, however, they are going to work together repeatedly on multiple jobs, and perhaps in multiple roles, then it would be very expensive to write new contracts for each new job and role. In that case, it might be cheaper for them to form some sort of enduring relationship in an organization, perhaps as partners, or one might hire the other as an employee. In short, Coase said, if it is cheaper to freelance, people will freelance, but if it is cheaper to pull people together into an organization, they will.
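Coase's trade-off is just arithmetic over repeated jobs. A toy sketch with invented cost numbers (the structure, not the figures, is the point):

```python
# Toy arithmetic for Coase's transaction-cost argument (numbers are invented).
contract_cost = 500    # cost of negotiating one freelance contract
org_setup_cost = 2000  # one-off cost of forming an enduring organization
org_overhead = 100     # per-job coordination cost inside the organization

def cheaper_to_organize(num_jobs):
    """Compare total transaction costs of freelancing vs. organizing."""
    freelance_total = contract_cost * num_jobs
    organize_total = org_setup_cost + org_overhead * num_jobs
    return organize_total < freelance_total

print(cheaper_to_organize(1))   # prints False: one-off job, freelancing wins
print(cheaper_to_organize(10))  # prints True: repeated work, organizing wins
```

With these numbers the organization pays off once there are more than five jobs: the fixed setup cost is amortized, while the per-contract cost of freelancing keeps accumulating.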

Culture is an emergent characteristic of a group of agents and is determined by the agents' rules of behavior (or norms) for acting in their social environment and for interacting with each other. Cultural rules are socially transmitted and learned. Cultural norms are, in short, the rules of thumb for behaving in a social environment. Norms are "should" or "ought" statements about what is considered the right, appropriate, or expected thing to do, by the society or organization, in a given situation.

Culture: The Ten Commandments

Performing norms

  1. PERFORMANCE ORIENTATION. Always do your best, go the extra mile, take initiative, and continuously improve yourself.
  2. HONESTY. Be honest with others, be honest with yourself, be transparent and face reality.
  3. MERITOCRACY. Reward people on the basis of merit.

Cooperating norms
  4. MUTUAL TRUST. Trust your colleagues' motivation, and trust in their skills to get the job done.
  5. RECIPROCITY. Live the golden rule; do unto others as you would have them do unto you.
  6. SHARED PURPOSE. Put the organization's interests ahead of your own, and behave as if everyone is in it together.

Innovating norms
  7. NONHIERARCHICAL. Junior people are expected to challenge senior people, and what matters is the quality of an idea, not the title of the person saying it.
  8. OPENNESS. Be curious, open to outside thinking, and willing to experiment; seek the best, wherever it is.
  9. FACT-BASED. Find out the facts; it is facts, not opinions, that ultimately count.
  10. CHALLENGE. Feel a sense of competitive urgency; it is a race without a finish line.

Cognitive science tells us that most corporate change programs are 180 degrees backward. Humans are fairly stubborn creatures and don't just immediately change their mental models and behaviors in response to speeches from their bosses, or because of PowerPoint presentations or plastic cubes with inspirational messages. Instead of trying to appeal to our fact-based, deductive sides, corporate change programs need to address the story-loving, pattern-recognizing, more emotional, inductive side of human cognition. People need to be jolted from their existing mental models and see an urgent and personal gap between the way things are and the way things need to be. A change program needs to have a fact-based argument underlying it, but the emphasis in communications should be on stories, analogies, and patterns to help people see the issues. Most change programs are also very passive, with lots of communications cascading down from on high. But learning is interactive, and thus a change program needs to get people to personally grapple with the issues.

Many executives find the idea of sound bites mildly distasteful and associate them with politicians. But the reality of human cognition is that we remember short, catchy, rhythmic phrases that use alliteration, metaphor, humorous twists, or other tricks. Jack Welch was a master of carefully engineered sound bites and used them with great effect to drive norms of behavior into the GE organization. Finally, intense repetition is essential. There is a reason why religious leaders see their flocks weekly, and why politicians give the same stump speech until they are blue in the face—our minds tend to weight messages by frequency as much as quality.


In the world of Traditional finance theory, at a given moment, the price of a stock reflects all the information available on the stock, and everyone's expectations based on that information.

Traditional finance theory is at best a rough approximation that works under certain circumstances; at worst, it is plainly wrong. To cut to the heart of the matter, we need to establish three salient facts:

  • First, a substantial body of empirical and experimental evidence shows that real-world investors look nothing like their theoretical, perfectly rational counterparts. Investors do not discount in the way Traditional theory assumes; they have various biases regarding risk, are subject to framing errors in processing information, and use heuristics to make decisions. This does not mean that investors are irrational or make decisions purely on emotion. Rather, they are "boundedly" and inductively rational, instead of perfectly and deductively rational. Suffice it to say that three decades of work leaves little doubt that perfect rationality is a poor approximation of economic reality.
  • Second, markets do not follow a random walk. Market data has considerable structure, which has all the signature characteristics of a complex adaptive system.
  • Third, financial markets are not efficient in the Traditional Economics sense of the word, but they are highly effective in an evolutionary sense.

The efficient-markets hypothesis is a myth born of nineteenth-century theories of equilibrium and Bachelier's random walk. Although Traditional market efficiency may not be a very meaningful concept, financial markets are highly effective evolutionary systems. Markets are the best Social Technology devised yet for integrating the views of large numbers of people to put prices on complex assets, and to allocate capital. Furthermore, the competitive intensity of markets ensures that they are fast at processing information, and that there is pressure on their participants to continuously innovate. Andrew Lo of MIT calls this evolutionary effectiveness of markets the "Adaptive Markets Hypothesis."

Implications for managers

Complexity finance affects the world of business in three areas. First, widely used methods for calculating a company's cost of capital may be wrong. Second, Complexity finance raises questions about the appropriateness of granting stock options in executive compensation. Third, and perhaps most importantly, it challenges us to think in new ways about the fundamental purpose of corporations and the nature of shareholder capitalism.

Embracing the dual objectives of endurance and growth forces management to fully confront the inherent tensions in executing and adapting and creates pressure to strike a more equal balance between the two. In a competitive evolutionary environment, "endure and grow" is the what, and "adapt and execute" is the how. Enduring and growing are the timeless demands placed on designs in an evolutionary system.


Human nature

If one digs deeply into the Left-Right divide, down to its philosophical and historical core, one finds two conflicting views of human nature. On the Left is the view that human beings are inherently altruistic; that greed and selfishness stem not from human nature, but from the construction of the social order; and that humans can be made better through a more just society. The lineage of this view descends from Jean-Jacques Rousseau and Karl Marx.

On the Right is the view that human beings are inherently self-regarding and that the pursuit of self-interest is an inalienable right. The most effective system of government is one that accommodates rather than attempts to change this aspect of human nature. As the eighteenth-century Scottish philosopher David Hume put it, "in contriving any system of government... every man ought to be supposed to be a knave and to have no other end, in all his actions, than his private interests." The Right claims, however, that if people pursue their self-interest through the mechanism of markets, then the general interests of society will be served as well. The lineage of this view descends from Hume, John Locke, and Thomas Hobbes.

One might be surprised not to see Adam Smith's name on this list. But Smith actually took a more nuanced view. In his Wealth of Nations, Smith indeed showed how self-interest, mediated by markets, can lead to social benefit. But in his other great work, The Theory of Moral Sentiments, Smith also said, "How selfish soever man may be supposed, there are evidently some principles in his nature, which interest him in the fortunes of others." In other words, Smith took a more rounded view of human behavior, one that acknowledged the coexistence of both the self-interested and altruistic sides of human nature.

For centuries, the question of the self-regarding versus the altruistic nature of humankind was a philosophical question and ultimately a matter of opinion. Since the 1980s, however, it has become a scientific question. A substantial body of evidence from controlled experiments, empirical studies, anthropological field work, and the application of game theory has now yielded an answer: Smith was basically right. Human beings are neither inherently altruistic nor selfish; instead they are what researchers call conditional cooperators and altruistic punishers. Gintis and his colleagues refer to this type of behavior as strong reciprocity and define it as "a predisposition to cooperate with others, and to punish (even at personal cost if necessary) those who violate the norms of cooperation, even when it is implausible to expect these costs will be recovered at a later date."

Markets versus states

From a Complexity Economics perspective, markets play a critical role in collecting and processing information, as well as keeping power hierarchies in check by providing a fitness function for Business Plan selection. The economy is simply too complex for the central planning required by socialism to work effectively.

A Complexity perspective would distinguish between two types of government action. Policies that get the government involved in differentiating, selecting, and amplifying Business Plans would be seen as interfering in economic evolution and have all of the problems discussed in the critique of socialist economies. Examples would include Japanese industrial policy that subsidizes and protects favored industries, French intervention in European banking mergers to favor its own national banks, or the U.S. government's energy policy favoring corn-based ethanol due to the political importance of Iowa. In contrast, policies that shape the fitness environment, while leaving Business Plan selection and amplification to market mechanisms, are a different matter. As we have seen, notions of efficiency in evolutionary systems are ephemeral. A Complexity perspective would say that government regulations form part of the fitness environment that companies compete in. As long as markets provide the mechanism for selecting and amplifying Business Plans, then the economic evolutionary process will innovate and adapt in response to those regulations.

For example, if voters tell their elected representatives that protecting the environment is a priority for society, then it is well within the government's purview to shape the economic fitness function to favor environmentally friendly Business Plans over environmentally unfriendly plans. Examples would include a carbon tax, emissions trading, or mandatory industry recycling requirements. In these cases the government is not selecting plans (e.g., whether fuel cells, ethanol, wind power, or some other Physical Technology is the best way to reduce emissions), but rather shaping the fitness environment in which plans will succeed or fail (e.g., in a world with a carbon tax a low-emissions plan will do better than a high-emissions plan).

The economic role of the state is to create an institutional framework that supports the evolutionary workings of markets, strikes an effective balance between cooperation and competition, and shapes the economic fitness function to best serve the needs of society. Consistent with norms of strong reciprocity, the state also has an obligation to ensure that all its citizens have an equal opportunity to participate in the economic system, and to provide a basic level of support for those who do not succeed in that system. The economic role of markets is to provide incentives for the discovery and differentiation of Business Plans, apply the fitness function shaped by consumers, technology, and the state in selection, and channel resources to selected plans for amplification. The question is not states versus markets; it is how to combine states and markets to create an effective evolutionary system.

Culture matters

Which norms support economic development, and which norms don't? Much research remains to be done in this area, but various researchers have proposed typologies of cultural rules.

  • In the first category are norms related to individual behavior. These include norms that support a strong work ethic, individual accountability, and a belief that you are the protagonist of your own life and not at the whim of gods or Big Men. Fatalism greatly reduces personal incentives. It is also important to believe that there is a payoff to hard work and a moral life in this world, and not just in the next. Finally, economically successful cultures appear to strike a balance between optimism that improvement is possible, and realism about one's current situation.

  • In the second category are norms related to cooperative behavior. Foremost is a belief that life is a non-zero-sum game and that there are payoffs to cooperation. Societies that believe in a fixed pie of wealth have a difficult time engendering cooperation and tend to be low in mutual trust. Consistent with our discussion of strong reciprocity, it is important that the culture have norms that value generosity and fairness, but also sanction those who free ride and cheat.

  • The third category contains norms related to innovation. Deductive-tinkering is much more effective if the deductive part is strong, and thus cultures that look to rational scientific explanations of the world rather than religious or magical explanations tend to be more innovative. Likewise, a culture needs to be tolerant of heresy and experimentation, as strict orthodoxy stifles innovation. Finally, it is important that the culture be supportive of competition and celebrate achievement, since overly egalitarian cultures reduce the incentives for risk taking.

  • One final norm is important to all three categories: how people view time. Cultures that live for today (or, conversely, are mired in the past) have problems across the board, ranging from a low work ethic, to an inability to engage in complex cooperation, to low levels of investment in innovation. Why work hard, or invest in cooperation and innovation, if tomorrow doesn't matter? In contrast, cultures that have an ethic of investing for tomorrow tend to value work, have high intergenerational savings rates, demonstrate a willingness to sacrifice short-term pleasures for long-term gain, and enjoy high levels of cooperation.