Arthur was able to give his audiences any number of lock-in examples. There were Beta-versus-VHS and QWERTY, of course. But there was also the strange case of the internal combustion engine. In the 1890s, Arthur discovered, when the automotive industry was still in its infancy, gasoline was considered the least promising power source. Its chief rival, steam, was well developed, familiar, and safe; gasoline was expensive, noisy, dangerously explosive, and hard to obtain in the right grade, and it required a new kind of engine containing complicated new parts. Gasoline engines were also inherently less fuel-efficient. If things had gone differently, and steam engines had benefited from the same ninety years of development lavished on gasoline engines, we might now be living with considerably less air pollution and considerably less dependence on foreign oil. But gasoline did win out—largely, Arthur found, because of a series of historical accidents.
Increasing returns isn’t an isolated phenomenon at all: the principle applies to everything in high technology. Look at a software product like Microsoft’s Windows, he says. The company spent $50 million in research and development to get the first copy out the door. The second copy cost it—what, $10 in materials? It’s the same story in electronics, computers, pharmaceuticals, even aerospace. (Cost for the first B-2 bomber: $21 billion. Cost per copy: $500 million.) High technology could almost be defined as “congealed knowledge,” says Arthur. “The marginal cost is next to zilch, which means that every copy you produce makes the product cheaper and cheaper.” More than that, every copy offers a chance for learning: getting the yield up on microprocessor chips, and so on. So there’s a tremendous reward for increasing production—in short, the system is governed by increasing returns. Among high-tech customers, meanwhile, there’s an equally large reward for flocking to a standard. “If I’m an airline buying a Boeing jet,” says Arthur, “I want to make sure I buy a lot of them so that my pilots don’t have to switch.” By the same token, if you’re an office manager, you try to buy all the same kind of personal computer so that everyone in the office can run the same software. The result is that high technologies very quickly tend to lock in to a relatively few standards: IBM and Macintosh in the personal computer world, or Boeing, McDonnell Douglas, and Lockheed in commercial passenger aircraft. Now compare that with standard bulk commodities such as grain, fertilizer, or cement, where most of the know-how was acquired generations ago. Today the real costs are for labor, land, and raw materials, areas where diminishing returns can set in easily. (Producing more grain, for example, may require that farmers start to open up less productive land.) So these tend to be stable, mature industries that are described reasonably well by standard neoclassical economics. “In that sense, increasing returns isn’t displacing the standard theory at all,” says Arthur. “It’s helping complete the standard theory. It just applies in a different domain.”
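The arithmetic behind that claim is easy to check. A few lines of Python, using only the figures quoted above (roughly $50 million for the first copy of Windows, $10 in materials for each copy after that), show how the average cost per copy collapses as volume grows:

```python
# Average cost per copy of a high-tech product, using the figures
# quoted above: ~$50 million to produce the first copy, ~$10 in
# materials for each additional one.
FIRST_COPY_COST = 50_000_000
MARGINAL_COST = 10

for copies in [1, 1_000, 1_000_000, 50_000_000]:
    total = FIRST_COPY_COST + MARGINAL_COST * (copies - 1)
    print(f"{copies:>12,} copies -> ${total / copies:,.2f} average per copy")
```

At one copy the average cost is $50 million; at fifty million copies it is about $11. Every copy produced really does make the product cheaper.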
“In the real world, outcomes don’t just happen,” he says. “They build up gradually as small chance events become magnified by positive feedbacks.”
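Arthur and his collaborators studied exactly this kind of magnification with urn models. The sketch below is a minimal illustration in that spirit, not Arthur's actual model; the feedback exponent and seed counts are invented for the demonstration. Two identical technologies compete, each new adopter favors the current leader, and tiny accidents among the earliest adopters decide which one locks in:

```python
import random

def compete(steps=10_000, feedback=2.0):
    """Two equally good technologies. Each new adopter picks A with a
    probability that grows with A's installed base; feedback > 1 means
    the leader's advantage compounds (increasing returns)."""
    a = b = 1.0  # one seed adopter each
    for _ in range(steps):
        p_a = a**feedback / (a**feedback + b**feedback)
        if random.random() < p_a:
            a += 1
        else:
            b += 1
    return a / (a + b)

# Same rules every run, yet chance events among the first few adopters
# get amplified: each run ends near all-A or all-B, unpredictably.
for run in range(5):
    print(f"run {run}: A's final market share = {compete():.3f}")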
Physicists had begun to realize by the early 1980s that a lot of messy, complicated systems could be described by a powerful theory known as “nonlinear dynamics.” And in the process, they had been forced to face up to a disconcerting fact: the whole really can be greater than the sum of its parts. Now, for most people that fact sounds pretty obvious. It was disconcerting for the physicists only because they had spent the past 300 years having a love affair with linear systems—in which the whole is precisely equal to the sum of its parts. In fairness, they had had plenty of reason to feel this way. If a system is precisely equal to the sum of its parts, then each component is free to do its own thing regardless of what’s happening elsewhere. And that tends to make the mathematics relatively easy to analyze. (The name “linear” refers to the fact that if you plot such an equation on graph paper, the plot is a straight line.) Besides, an awful lot of nature does seem to work that way. Sound is a linear system, which is why we can hear an oboe playing over its string accompaniment and recognize them both. The sound waves intermingle and yet retain their separate identities. Light is also a linear system, which is why you can still see the Walk/Don’t Walk sign across the street even on a sunny day: the light rays bouncing from the sign to your eyes are not smashed to the ground by sunlight streaming down from above. The various light rays operate independently, passing right through each other as if nothing were there. In some ways even the economy is a linear system, in the sense that small economic agents can act independently. When someone buys a newspaper at the corner drugstore, for example, it has no effect on your decision to buy a tube of toothpaste at the supermarket. However, it’s also true that a lot of nature is not linear—including most of what’s really interesting in the world. Our brains certainly aren’t linear: even though the sound of an oboe and the sound of a string section may be independent when they enter your ear, the emotional impact of both sounds together may be very much greater than either one alone. (This is what keeps symphony orchestras in business.) Nor is the economy really linear. Millions of individual decisions to buy or not to buy can reinforce each other, creating a boom or a recession. And that economic climate can then feed back to shape the very buying decisions that produced it. Indeed, except for the very simplest physical systems, virtually everything and everybody in the world is caught up in a vast, nonlinear web of incentives and constraints and connections. The slightest change in one place causes tremors everywhere else. We can’t help but disturb the universe, as T. S. Eliot almost said. The whole is almost always equal to a good deal more than the sum of its parts. And the mathematical expression of that property—to the extent that such systems can be described by mathematics at all—is a nonlinear equation: one whose graph is curvy.
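The "sum of its parts" test is easy to state in code. A system is linear when its response to two inputs together is exactly the sum of its responses to each input alone; the two functions below are invented purely to illustrate the contrast:

```python
def linear(x):
    return 2.0 * x          # plotted, this is a straight line

def nonlinear(x):
    return x * (1.0 - x)    # plotted, this is a curve

a, b = 0.3, 0.4
for f in (linear, nonlinear):
    whole = f(a + b)        # response to both inputs together
    parts = f(a) + f(b)     # sum of the responses to each alone
    print(f"{f.__name__:>9}: whole = {whole:.3f}, sum of parts = {parts:.3f}")
```

For the linear function the two numbers agree exactly (1.400 and 1.400); for the nonlinear one they do not (0.210 versus 0.450). The whole has parted company with the sum of its parts.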
As intriguing as molecular biology and computer simulation and nonlinear science were separately, Cowan had a suspicion that they were only the beginning. It was more a gut feeling than anything else. But he sensed that there was an underlying unity here, one that would ultimately encompass not just physics and chemistry, but biology, information processing, economics, political science, and every other aspect of human affairs. What he had in mind was a concept of scholarship that was almost medieval. If this unity were real, he thought, it would be a way of knowing the world that made little distinction between biological sciences and physical sciences—or between either of those sciences and history or philosophy. Once, says Cowan, “The whole intellectual fabric was seamless.” And maybe it could be that way again.
Anderson was intrigued with this Santa Fe Institute in spite of himself. Reversing the tide of reductionism—now that was his kind of language. He had personally been fighting a guerrilla war against reductionism for decades. What first incited him to action, he recalls, was reading a lecture back in 1965 by the particle physicist Victor Weisskopf. In it, Weisskopf seemed to imply that “fundamental” science—that is, particle physics and some parts of cosmology—was somehow different from and better than more applied disciplines such as condensed-matter physics. Deeply annoyed, and scathing as only an insulted condensed-matter physicist can be, Anderson had immediately prepared a lecture of his own in rebuttal. In 1972 he had published it as an article in Science magazine entitled “More Is Different.” And he had been pushing the argument at every opportunity since then. To begin with, he says, he is the first to admit that there is a “philosophically correct” form of reductionism: namely, a belief that the universe is governed by natural law. The vast majority of working scientists accept that assertion wholeheartedly, says Anderson. Indeed, it’s hard to imagine how science could exist if they didn’t. To believe in natural law is to believe that the universe is ultimately comprehensible—that the same forces that determine the destiny of a galaxy can also determine the fall of an apple here on Earth; that the same atoms that refract the light passing through a diamond can also form the stuff of a living cell; that the same electrons, neutrons, and protons that emerged from the Big Bang can now give rise to the human brain, mind, and soul. To believe in natural law is to believe in the unity of nature at the deepest possible level. However, says Anderson, this belief does not imply that the fundamental laws and the fundamental particles are the only things worth studying—and that everything else could be predicted if you only had a big enough computer. A lot of scientists certainly do seem to think that way, he says. Back in 1932, the physicist who discovered the positron—the antimatter version of the electron—declared, “The rest is chemistry!” More recently, Murray Gell-Mann himself had been known to dismiss condensed-matter theory as “dirt physics.” But that was precisely the kind of arrogance that Anderson found so infuriating. As he wrote in his 1972 article, “The ability to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less society.”
This everything-else-is-chemistry nonsense breaks apart on the twin shoals of scale and complexity, he explains. Take water, for example. There’s nothing very complicated about a water molecule: it’s just one big oxygen atom with two little hydrogen atoms stuck to it like Mickey Mouse ears. Its behavior is governed by well-understood equations of atomic physics. But now put a few zillion of those molecules together in the same pot. Suddenly you’ve got a substance that shimmers and gurgles and sloshes. Those zillions of molecules have collectively acquired a property, liquidity, that none of them possesses alone. In fact, unless you know precisely where and how to look for it, there’s nothing in those well-understood equations of atomic physics that even hints at such a property. The liquidity is “emergent.” In much the same way, says Anderson, emergent properties often produce emergent behaviors. Cool those liquid water molecules down a bit, for example, and at 32°F they will suddenly quit tumbling over one another at random. Instead they will undergo a “phase transition,” locking themselves into the orderly crystalline array known as ice. Or go the other direction and heat the liquid, and those same tumbling water molecules will suddenly fly apart, undergoing a phase transition into water vapor. Neither phase transition would have any meaning for one molecule alone. And so it goes, says Anderson. Weather is an emergent property: take your water vapor out over the Gulf of Mexico and let it interact with sunlight and wind, and it can organize itself into an emergent structure known as a hurricane. Life is an emergent property, the product of DNA molecules and protein molecules and myriad other kinds of molecules, all obeying the laws of chemistry. The mind is an emergent property, the product of several billion neurons obeying the biological laws of the living cell. In fact, as Anderson pointed out in the 1972 paper, you can think of the universe as forming a kind of hierarchy: “At each level of complexity, entirely new properties appear. And at each stage, entirely new laws, concepts, and generalizations are necessary, requiring inspiration and creativity to just as great a degree as in the previous one. Psychology is not applied biology, nor is biology applied chemistry.”
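You can watch a phase transition of exactly this kind in a standard physics toy, the two-dimensional Ising model. The sketch below is a textbook exercise, not anything from Anderson's paper: every site follows one dumb local rule, imitating or defying its four neighbors depending on the temperature, yet below a critical temperature the lattice as a whole locks into an ordered state that no single site possesses:

```python
import math, random

def magnetization(T, size=20, sweeps=300):
    """Tiny 2-D Ising model with Metropolis dynamics: each site flips by
    weighing agreement with its four neighbors against temperature T."""
    s = [[1] * size for _ in range(size)]          # start fully ordered
    for _ in range(sweeps * size * size):
        i, j = random.randrange(size), random.randrange(size)
        nb = (s[(i + 1) % size][j] + s[(i - 1) % size][j] +
              s[i][(j + 1) % size] + s[i][(j - 1) % size])
        dE = 2 * s[i][j] * nb                      # energy cost of flipping
        if dE <= 0 or random.random() < math.exp(-dE / T):
            s[i][j] *= -1
    return abs(sum(map(sum, s)) / size**2)

# Below the critical temperature (about 2.27 in these units) collective
# order survives; above it, the very same local rule yields only disorder.
for T in (1.5, 2.27, 3.5):
    print(f"T = {T}: |magnetization| = {magnetization(T):.2f}")
```

The order parameter, magnetization, is meaningless for one site alone, just as freezing is meaningless for one water molecule.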
The founding workshops made it clear that every topic of interest had at its heart a system composed of many, many “agents.” These agents might be molecules or neurons or species or consumers or even corporations. But whatever their nature, the agents were constantly organizing and reorganizing themselves into larger structures through the clash of mutual accommodation and mutual rivalry. Thus, molecules would form cells, neurons would form brains, species would form ecosystems, consumers and corporations would form economies, and so on. At each level, new emergent structures would form and engage in new emergent behaviors. Complexity, in other words, was really a science of emergence. And the challenge that Cowan had been trying to articulate was to find the fundamental laws of emergence. By no coincidence, it was also about this time that the new, unified science acquired a name: the sciences of complexity.
Holland started by pointing out that the economy is an example par excellence of what the Santa Fe Institute had come to call “complex adaptive systems.” In the natural world such systems included brains, immune systems, ecologies, cells, developing embryos, and ant colonies. In the human world they included cultural and social systems such as political parties or scientific communities. Once you learned how to recognize them, in fact, these systems were everywhere. But wherever you found them, said Holland, they all seemed to share certain crucial properties. First, he said, each of these systems is a network of many “agents” acting in parallel. In a brain the agents are nerve cells, in an ecology the agents are species, in a cell the agents are organelles such as the nucleus and the mitochondria, in an embryo the agents are cells, and so on. In an economy, the agents might be individuals or households. Or if you were looking at business cycles, the agents might be firms. And if you were looking at international trade, the agents might even be whole nations. But regardless of how you define them, each agent finds itself in an environment produced by its interactions with the other agents in the system. It is constantly acting and reacting to what the other agents are doing. And because of that, essentially nothing in its environment is fixed. Furthermore, said Holland, the control of a complex adaptive system tends to be highly dispersed. There is no master neuron in the brain, for example, nor is there any master cell within a developing embryo. If there is to be any coherent behavior in the system, it has to arise from competition and cooperation among the agents themselves. This is true even in an economy. Ask any president trying to cope with a stubborn recession: no matter what Washington does to fiddle with interest rates and tax policy and the money supply, the overall behavior of the economy is still the result of myriad economic decisions made every day by millions of individual people.
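A toy illustration of that dispersed control (invented here, not one of Holland's own examples): give every agent the same purely local rule and give no one a global view, and coherent structure appears anyway.

```python
import random

def local_consensus(n_agents=30, rounds=8):
    """Each agent repeatedly adopts the majority choice among itself and
    its two neighbors on a ring. No agent is in charge, yet coherent
    blocks of agreement emerge from purely local give-and-take."""
    state = [random.choice("AB") for _ in range(n_agents)]
    print("".join(state))
    for _ in range(rounds):
        nxt = []
        for i, s in enumerate(state):
            trio = [state[i - 1], s, state[(i + 1) % n_agents]]
            nxt.append(max("AB", key=trio.count))
        state = nxt
        print("".join(state))

local_consensus()
```

Each agent sees only its immediate neighbors, and each agent's environment is nothing but the other agents; the orderly blocks in the output belong to the system, not to any master agent.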
Second, said Holland, a complex adaptive system has many levels of organization, with agents at any one level serving as the building blocks for agents at a higher level. A group of proteins, lipids, and nucleic acids will form a cell, a group of cells will form a tissue, a collection of tissues will form an organ, an association of organs will form a whole organism, and a group of organisms will form an ecosystem. In the brain, one group of neurons will form the speech centers, another the motor cortex, and still another the visual cortex. And in precisely the same way, a group of individual workers will compose a department, a group of departments will compose a division, and so on through companies, economic sectors, national economies, and finally the world economy. Furthermore, said Holland—and this was something he considered very important—complex adaptive systems are constantly revising and rearranging their building blocks as they gain experience. Succeeding generations of organisms will modify and rearrange their tissues through the process of evolution. The brain will continually strengthen or weaken myriad connections between its neurons as an individual learns from his or her encounters with the world. A firm will promote individuals who do well and (more rarely) will reshuffle its organizational chart for greater efficiency. Countries will make new trading agreements or realign themselves into whole new alliances. At some deep, fundamental level, said Holland, all these processes of learning, evolution, and adaptation are the same. And one of the fundamental mechanisms of adaptation in any given system is this revision and recombination of the building blocks.
Third, he said, all complex adaptive systems anticipate the future. Obviously, this is no surprise to the economists. The anticipation of an extended recession, for example, may lead individuals to defer buying a new car or taking an expensive vacation—thereby helping guarantee that the recession will be extended. The anticipation of an oil shortage can likewise send shock waves of buying and selling through the oil markets—whether or not the shortage ever comes to pass. But in fact, said Holland, this business of anticipation and prediction goes far beyond issues of human foresight, or even consciousness. From bacteria on up, every living creature has an implicit prediction encoded in its genes: “In such and such an environment, the organism specified by this genetic blueprint is likely to do well.” Likewise, every creature with a brain has myriad implicit predictions encoded in what it has learned: “In situation ABC, action XYZ is likely to pay off.” More generally, said Holland, every complex adaptive system is constantly making predictions based on its various internal models of the world—its implicit or explicit assumptions about the way things are out there. Furthermore, these models are much more than passive blueprints. They are active. Like subroutines in a computer program, they can come to life in a given situation and “execute,” producing behavior in the system. In fact, you can think of internal models as the building blocks of behavior. And like any other building blocks, they can be tested, refined, and rearranged as the system gains experience.
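A minimal flavor of such executable internal models, loosely in the spirit of Holland's classifier systems (the market situations, actions, and payoffs below are invented for illustration): each rule is a little prediction of the form "in situation ABC, action XYZ is likely to pay off," and its strength is revised as the system gains experience.

```python
import random

# Condition/action rules, each carrying a strength that records how
# well its implicit prediction has paid off so far.
rules = [
    {"condition": "price_rising",  "action": "buy",  "strength": 1.0},
    {"condition": "price_rising",  "action": "sell", "strength": 1.0},
    {"condition": "price_falling", "action": "buy",  "strength": 1.0},
    {"condition": "price_falling", "action": "sell", "strength": 1.0},
]

def act(situation):
    """Pick among matching rules, with probability weighted by strength."""
    matches = [r for r in rules if r["condition"] == situation]
    pick = random.uniform(0, sum(r["strength"] for r in matches))
    for r in matches:
        pick -= r["strength"]
        if pick <= 0:
            return r

def reward(rule, payoff, rate=0.2):
    rule["strength"] += rate * (payoff - rule["strength"])

# Toy environment: buying into a rising market (or selling into a
# falling one) pays off; the opposite moves pay nothing.
for _ in range(200):
    situation = random.choice(["price_rising", "price_falling"])
    rule = act(situation)
    payoff = 1.0 if (situation, rule["action"]) in {
        ("price_rising", "buy"), ("price_falling", "sell")} else 0.0
    reward(rule, payoff)

for r in rules:
    print(r)
```

After a couple of hundred trials the profitable rules have grown strong and the losing ones have withered: the building blocks of behavior have been tested and refined by experience.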
Finally, said Holland, complex adaptive systems typically have many niches, each one of which can be exploited by an agent adapted to fill that niche. Thus, the economic world has a place for computer programmers, plumbers, steel mills, and pet stores, just as the rain forest has a place for tree sloths and butterflies. Moreover, the very act of filling one niche opens up more niches—for new parasites, for new predators and prey, for new symbiotic partners. So new opportunities are always being created by the system. And that, in turn, means that it’s essentially meaningless to talk about a complex adaptive system being in equilibrium: the system can never get there. It is always unfolding, always in transition. In fact, if the system ever does reach equilibrium, it isn’t just stable. It’s dead. And by the same token, said Holland, there’s no point in imagining that the agents in the system can ever “optimize” their fitness, or their utility, or whatever. The space of possibilities is too vast; they have no practical way of finding the optimum. The most they can ever do is to change and improve themselves relative to what the other agents are doing. In short, complex adaptive systems are characterized by perpetual novelty.
“But I didn’t come out of the anesthesia for a day longer than I should have. They were worried about a coma.” He did wake up, eventually. But it was a long time before he was coherent. “I had this weird experience of watching my mind come back,” he says. “I could see myself as this passive observer back there somewhere. And there were all these things happening in my mind that were disconnected from my consciousness. It was very reminiscent of virtual machines, or like watching the Game of Life. I could see these disconnected patterns self-organize, come together, and merge with me in some way. I don’t know how to describe it in any objectively verifiable way, and maybe it was just a figment of all these funny drugs they were giving me, but it was as if you took an ant colony and tore it up, and then watched the ants come back together, reorganize, and rebuild the colony. “So my mind was rebuilding itself in this absolutely remarkable way. And yet, still, there were a number of points along the way when I could tell I wasn’t what I used to be, mentally. There were things missing—though I couldn’t say what was missing. It was like a computer booting up: I could feel different levels of my operating system building up, each one with more capability than the last. I’d wake up one morning, and like an electric shock almost, I’d sort of shake my head and suddenly I’d be on some higher plateau. I’d think, ‘Boy, I’m back!’ Then I’d realize I wasn’t really quite back. And then at some random point in the future, I’d go through another one of those, and—am I back yet or not? I still don’t know to this day. A couple of years ago I went through another one of those episodes, a fairly major one. So who knows? When you’re at one level, you don’t know what’s at a higher level.”
Could a machine be programmed to make a copy of itself? Von Neumann didn’t have any doubt that the answer was yes, at least in principle. After all, plants and animals have been reproducing themselves for several billion years, and at the biochemical level they are just “machines” following the same natural laws as the stars and planets. But that fact didn’t help him very much. Biological self-reproduction is immensely complicated, involving genetics, sex, the union of sperm and egg, cell divisions, and embryo development—to say nothing of the detailed molecular chemistry of proteins and DNA, which was still almost totally unknown in the 1940s. Machines obviously had none of that. So before von Neumann could answer the question about machine self-reproduction, he had to reduce that process to its essence, its abstract logical form. In effect, he had to operate in the same spirit that programmers would years later when they started to build virtual machines: he had to find out what was important about self-reproduction, independent of the detailed biochemical machinery. To get a feel for the issues, von Neumann started out with a thought experiment. Imagine a machine that floats around on the surface of a pond, he said, together with lots of machine parts. Furthermore, imagine that this machine is a universal constructor: given a description of any machine, it will paddle around the pond until it locates the proper parts, and then construct that machine. In particular, given a description of itself, it will construct a copy of itself. Now that sounds like self-reproduction, said von Neumann. But it isn’t—at least, not quite. The newly created copy of the first machine will have all the right parts. But it won’t have a description of itself, which means that it won’t be able to make any further copies of itself. So von Neumann also postulated that the original machine should have a description copier: a device that will take the original description, duplicate it, and then attach the duplicate description to the offspring machine. Once that happens, he said, the offspring will have everything it needs to continue reproducing indefinitely. And then that will be self-reproduction. As a thought experiment, von Neumann’s analysis of self-reproduction was simplicity itself. To restate it in a slightly more formal way, he was saying that the genetic material of any self-reproducing system, natural or artificial, has to play two fundamentally different roles. On the one hand, it has to serve as a program, a kind of algorithm that can be executed during the construction of the offspring. On the other hand, it has to serve as passive data, a description that can be duplicated and given to the offspring. But as a scientific prediction, that analysis turned out to be breathtaking: when Watson and Crick finally unraveled the molecular structure of DNA a few years later, in 1953, they discovered that it fulfilled von Neumann’s two requirements precisely. As a genetic program, DNA encodes the instructions for making all the enzymes and structural proteins that the cell needs to function. And as a repository of genetic data, the DNA double helix unwinds and makes a copy of itself every time the cell divides in two. With admirable economy, evolution has built the dual nature of the genetic material into the structure of the DNA molecule itself.
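Programmers later rediscovered von Neumann's dual role in miniature: the quine, a program that prints its own source code. The sketch below is a standard Python version, offered only as an illustration. Its one string is used in both of von Neumann's senses, executed as instructions and copied as passive data, just as the description aboard his pond machine is both interpreted by the constructor and duplicated verbatim for the offspring:

```python
# The string s plays both of von Neumann's roles: it is *executed*
# (the % operator builds the program text from it as a template), and
# it is *copied as passive data* (its own repr is substituted into
# itself). Running this program prints its own source exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```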
Langton now had four very detailed analogies:

Cellular automata classes:  I & II → “IV” → III
Dynamical systems:          Order → “Complexity” → Chaos
Matter:                     Solid → “Phase transition” → Fluid
Computation:                Halting → “Undecidable” → Nonhalting

along with a fifth and far more hypothetical one:

Life:                       Too static → “Life/Intelligence” → Too noisy

But what did they all add up to? Just this, Langton decided: “solid” and “fluid” are not just two fundamental phases of matter, as in water versus ice. They are two fundamental classes of dynamical behavior in general—including dynamical behavior in such utterly nonmaterial realms as the space of cellular automaton rules or the space of abstract algorithms. Furthermore, he realized, the existence of these two fundamental classes of dynamical behavior implies the existence of a third fundamental class: “phase transition” behavior at the edge of chaos, where you would encounter complex computation and quite possibly life itself.
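Langton eventually made this spectrum quantitative with a tunable parameter he called λ (lambda): roughly, the fraction of entries in a cellular automaton's rule table that map to "active," non-quiescent states. Sweeping λ upward carries the rules from frozen order toward chaos. The sketch below applies the idea to the simplest one-dimensional binary automata, where λ is only a crude indicator (Langton worked with richer state sets), just to make the bookkeeping concrete:

```python
def run_ca(rule, width=64, steps=12):
    """One-dimensional binary cellular automaton, radius 1. The 8-entry
    rule table is given as an integer in Wolfram's numbering."""
    table = [(rule >> i) & 1 for i in range(8)]
    lam = sum(table) / 8        # lambda: fraction of rule-table entries
    cells = [0] * width         # mapping to the non-quiescent state 1
    cells[width // 2] = 1
    print(f"rule {rule} (lambda = {lam:.3f})")
    for _ in range(steps):
        print("".join(".#"[c] for c in cells))
        cells = [table[4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % width]]
                 for i in range(width)]
    print()

# Roughly: simple/ordered, fully chaotic, and complex "edge" behavior.
for rule in (254, 30, 110):
    run_ca(rule)
```

Rule 254 freezes into a uniform block, rule 30 boils with randomness, and rule 110, later proved capable of universal computation, sits in between, with propagating structures that neither die out nor dissolve.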
Langton had written his clearest and most articulate manifesto yet for what artificial life was all about. Artificial life, he wrote, is essentially just the inverse of conventional biology. Instead of being an effort to understand life by analysis—dissecting living communities into species, organisms, organs, tissues, cells, organelles, membranes, and finally molecules—artificial life is an effort to understand life by synthesis: putting simple pieces together to generate lifelike behavior in man-made systems. Its credo is that life is not a property of matter per se, but the organization of that matter. Its operating principle is that the laws of life must be laws of dynamical form, independent of the details of a particular carbon-based chemistry that happened to arise here on Earth four billion years ago. Its promise is that by exploring other possible biologies in a new medium—computers and perhaps robots—artificial life researchers can achieve what space scientists have achieved by sending probes to other planets: a new understanding of our own world through a cosmic perspective on what happened on other worlds. “Only when we are able to view life-as-we-know-it in the context of life-as-it-could-be will we really understand the nature of the beast,” Langton declared.
The idea of viewing life in terms of its abstract organization is perhaps the single most compelling vision to come out of the workshop, he said. And it’s no accident that this vision is so closely associated with computers: they share many of the same intellectual roots.
A second great insight, which could be heard at the workshop again and again: living systems are machines, all right, but machines with a very different kind of organization from the ones we’re used to. Instead of being designed from the top down, the way a human engineer would do it, living systems always seem to emerge from the bottom up, from a population of much simpler systems. A cell consists of proteins, DNA, and other biomolecules. A brain consists of neurons. An embryo consists of interacting cells. An ant colony consists of ants. And for that matter, an economy consists of firms and individuals.
The theme was heard over and over again at the workshop, said Langton: the way to achieve lifelike behavior is to simulate populations of simple units instead of one big complex unit. Use local control instead of global control. Let the behavior emerge from the bottom up, instead of being specified from the top down. And while you’re at it, focus on ongoing behavior instead of the final result. As Holland loved to point out, living systems never really settle down. Indeed, said Langton, by taking this bottom-up idea to its logical conclusion, you could see it as a new and thoroughly scientific version of vitalism: the ancient idea that life involves some kind of energy, or force, or spirit that transcends mere matter. The fact is that life does transcend mere matter, he said—not because living systems are animated by some vital essence operating outside the laws of physics and chemistry, but because a population of simple things following simple rules of interaction can behave in eternally surprising ways. Life may indeed be a kind of biochemical machine, he said. But to animate such a machine “is not to bring life to a machine; rather, it is to organize a population of machines in such a way that their interacting dynamics are ‘alive.’”
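The canonical workshop example of this bottom-up recipe was Craig Reynolds' "boids," in which lifelike flocking emerges from each simulated bird following a few purely local rules. Below is a stripped-down sketch in that spirit (only alignment and cohesion, with no separation rule, and all coefficients chosen arbitrarily): no boid commands the flock, yet global alignment emerges anyway.

```python
import math, random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        angle = random.uniform(0, 2 * math.pi)
        self.vx, self.vy = math.cos(angle), math.sin(angle)

def step(boids, radius=15.0):
    for b in boids:
        nbrs = [o for o in boids
                if o is not b and math.hypot(o.x - b.x, o.y - b.y) < radius]
        if nbrs:
            # alignment: steer toward the neighbors' average heading
            b.vx += 0.1 * (sum(o.vx for o in nbrs) / len(nbrs) - b.vx)
            b.vy += 0.1 * (sum(o.vy for o in nbrs) / len(nbrs) - b.vy)
            # cohesion: drift toward the neighbors' center of mass
            b.vx += 0.01 * (sum(o.x for o in nbrs) / len(nbrs) - b.x)
            b.vy += 0.01 * (sum(o.y for o in nbrs) / len(nbrs) - b.y)
        speed = math.hypot(b.vx, b.vy) or 1.0
        b.vx, b.vy = b.vx / speed, b.vy / speed   # keep speed constant
    for b in boids:
        b.x, b.y = (b.x + b.vx) % 100, (b.y + b.vy) % 100

def alignment(boids):
    vx = sum(b.vx for b in boids) / len(boids)
    vy = sum(b.vy for b in boids) / len(boids)
    return math.hypot(vx, vy)   # 1.0 means a perfectly aligned flock

flock = [Boid() for _ in range(50)]
for t in range(201):
    if t % 50 == 0:
        print(f"step {t:3}: global alignment = {alignment(flock):.2f}")
    step(flock)
```

Watch the alignment number climb from near zero toward one: the flocking is specified nowhere in the code, and there is no final result, only ongoing behavior.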
Finally, said Langton, there was a third great idea to be distilled from the workshop presentations: the possibility that life isn’t just like a computation, in the sense of being a property of the organization rather than the molecules. Life literally is a computation. To see why, said Langton, start with conventional, carbon-based biology. As biologists have been pointing out for more than a century, one of the most striking characteristics of any living organism is the distinction between its genotype—the genetic blueprint encoded in its DNA—and its phenotype—the structure that is created from those instructions. In practice, of course, the actual operation of a living cell is incredibly complicated, with each gene serving as a blueprint for a single type of protein molecule, and with myriad proteins interacting in the body of the cell in myriad ways. But in effect, said Langton, you can think of the genotype as a collection of little computer programs executing in parallel, one program per gene. When activated, each of these programs enters into the logical fray by competing and cooperating with all the other active programs. And collectively, these interacting programs carry out an overall computation that is the phenotype: the structure that unfolds during an organism’s development.
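A cartoon of that picture, with the gene names and switching rules below pure invention: each "gene" is a little program that fires when its condition is met, and the phenotype is whatever pattern their collective, parallel execution unfolds into. Here the tiny network settles into a repeating cycle, the way real regulatory circuits can form oscillators.

```python
# Each "gene" is a rule that looks at which genes were active on the
# previous step and decides whether to fire; all genes update in
# parallel. The unfolding pattern of activity is the "phenotype."
genes = {
    "A": lambda active: True,                         # always on
    "B": lambda active: "A" in active,                # switched on by A
    "C": lambda active: "B" in active and "D" not in active,
    "D": lambda active: "C" in active,                # D later represses C
}

active = set()
for step in range(8):
    active = {name for name, rule in genes.items() if rule(active)}
    print(f"step {step}: active genes = {sorted(active)}")
```

Four one-line programs, competing and cooperating, and already the collective computation does something none of them specifies alone.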
Now, what’s beautiful about all this, said Langton, is that once you’ve made the link between life and computation, you can bring an immense amount of theory to bear. For example: why is life quite literally full of surprises? Because, in general, it is impossible to start from a given set of GTYPE rules (Langton’s shorthand for the generalized genotype) and predict what their PTYPE behavior (the generalized phenotype) will be—even in principle. This is the undecidability theorem, one of the deepest results of computer science: unless a computer program is utterly trivial, the fastest way to find out what it will do is to run it and see. There is no general-purpose procedure that can scan the code and the input and give you the answer any faster than that. That’s why the old saw about computers only doing what their programmers tell them to do is both perfectly true and virtually irrelevant; any piece of code that’s complex enough to be interesting will always surprise its programmers. That’s why any decent software package has to be endlessly tested and debugged before it is released—and that’s why the users always discover very quickly that the debugging was never quite perfect. And, most important for artificial life purposes, that’s why a living system can be a biochemical machine that is completely under the control of a program, a GTYPE, and yet still have a surprising, spontaneous behavior in the PTYPE.
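For a concrete taste of "run it and see" (the example is ours, not Langton's), consider the Collatz rule: a three-line GTYPE whose PTYPE, the trajectory each starting number traces, resists any shortcut prediction. Whether every starting number eventually reaches 1 remains a famous open problem; short of running the rule, nothing in its text hints at the numbers below.

```python
def collatz_steps(n):
    """GTYPE: halve n if even, else triple it and add one, until n = 1.
    PTYPE: the trajectory (here, just its length) that n traces."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Neighboring inputs, wildly different behavior: 26 takes 10 steps,
# 27 takes 111 steps, 28 takes 18.
for n in (26, 27, 28):
    print(n, "->", collatz_steps(n), "steps")
```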
The connectionist idea shows how the capacity for learning and evolution can emerge even if the nodes, the individual agents, are brainless and dead. More generally, by putting the power in the connections and not the nodes, it points the way toward a very precise theory of what Langton and the artificial lifers mean when they say that the essence of life is in the organization and not the molecules. And it likewise points the way toward a deeper understanding of how life and mind could have gotten started in a universe that began with neither.
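A concrete miniature of "the power is in the connections" is the Hopfield-style associative memory, a standard connectionist toy offered here only as an illustration. The nodes below are trivial threshold units, as brainless as promised; the stored pattern lives entirely in the weight matrix, set by the Hebbian rule, and the dumb collective dynamics recover it from a corrupted cue:

```python
# Eight brainless threshold units. All the "knowledge" is in the
# weights, set by the Hebbian rule: units that fire together in the
# stored pattern get a positive connection, units that disagree get a
# negative one.
pattern = [1, -1, 1, -1, 1, -1, 1, -1]
n = len(pattern)
w = [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
     for i in range(n)]

state = pattern[:]
state[2], state[5] = -state[2], -state[5]        # corrupt two units
print("corrupted:", state)

for _ in range(3):                               # let the dynamics settle
    for i in range(n):
        total = sum(w[i][j] * state[j] for j in range(n))
        state[i] = 1 if total >= 0 else -1

print("recalled: ", state)
print("stored:   ", pattern)
```

Delete any single node and the memory degrades only slightly; the organization, not any individual unit, is what remembers.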
A lot of people around Los Alamos and Santa Fe have found the edge-of-chaos idea awfully compelling. Langton is basically saying that the mysterious “something” that makes life and mind possible is a certain kind of balance between the forces of order and the forces of disorder. More precisely, he’s saying that you should look at systems in terms of how they behave instead of how they’re made. And when you do, he says, then what you find are the two extremes of order and chaos. It’s a lot like the difference between solids, where the atoms are locked into place, and fluids, where the atoms tumble over one another at random. But right in between the two extremes, he says, at a kind of abstract phase transition called “the edge of chaos,” you also find complexity: a class of behaviors in which the components of the system never quite lock into place, yet never quite dissolve into turbulence, either. These are the systems that are both stable enough to store information, and yet evanescent enough to transmit it. These are the systems that can be organized to perform complex computations, to react to the world, to be spontaneous, adaptive, and alive.
Like other Santa Fe folk, Arthur is hesitant when it comes to speculating about the larger meaning of all this. The results are still so—embryonic. And it’s entirely too easy to come off sounding New Age and flaky. But like everyone else, he can’t help thinking about the larger meaning. You can look at the complexity revolution in almost theological terms, he says. “The Newtonian clockwork metaphor is akin to standard Protestantism. Basically there’s order in the universe. It’s not that we rely on God for order. That’s a little too Catholic. It’s that God has arranged the world so that the order is naturally there if we behave ourselves. If we act as individuals in our own right, if we pursue our own righteous self-interest and work hard, and don’t bother other people, then the natural equilibrium of the world will assert itself. Then we get the best of all possible worlds—the one we deserve. That’s probably not quite theological, but it’s the impression I have of one brand of Christianity. “The alternative—the complex approach—is total Taoist. In Taoism there is no inherent order. ‘The world started with one, and the one became two, and the two became many, and the many led to myriad things.’ The universe in Taoism is perceived as vast, amorphous, and ever-changing. You can never nail it down. The elements always stay the same, yet they’re always rearranging themselves. So it’s like a kaleidoscope: the world is a matter of patterns that change, that partly repeat, but never quite repeat, that are always new and different. “What is our relation to a world like that? Well, we are made of the same elemental compositions. So we are a part of this thing that is never changing and always changing. If you think that you’re a steamboat and can go up the river, you’re kidding yourself. Actually, you’re just the captain of a paper boat drifting down the river. If you try to resist, you’re not going to get anywhere. On the other hand, if you quietly observe the flow, realizing that you’re part of it, realizing that the flow is ever-changing and always leading to new complexities, then every so often you can stick an oar into the river and punt yourself from one eddy to another. “So what’s the connection with economic and political policy? Well, in a policy context, it means that you observe, and observe, and observe, and occasionally stick your oar in and improve something for the better. It means that you try to see reality for what it is, and realize that the game you are in keeps changing, so that it’s up to you to figure out the current rules of the game as it’s being played. It means that you observe the Japanese like hawks, you stop being naive, you stop appealing for them to play fair, you stop adhering to standard theories that are built on outmoded assumptions about the rules of play, you stop saying, ‘Well, if only we could reach this equilibrium we’d be in fat city.’ You just observe. And where you can make an effective move, you make a move.” Notice that this is not a recipe for passivity, or fatalism, says Arthur. “This is a powerful approach that makes use of the natural nonlinear dynamics of the system. You apply available force to the maximum effect. You don’t waste it. This is exactly the difference between Westmoreland’s approach in South Vietnam versus the North Vietnamese approach. Westmoreland would go in with heavy forces and artillery and barbed wire and burn the villages. And the North Vietnamese would just recede like a tide. 
Then three days later they’d be back, and no one knew where they came from. It’s also the principle that lies behind all of Oriental martial arts. You don’t try to stop your opponent, you let him come at you—and then give him a tap in just the right direction as he rushes by. The idea is to observe, to act courageously, and to pick your timing extremely well.”