Evolution has been seen as a billion-year drama that led inexorably to its grandest creation: human intelligence. The emergence in the early twenty-first century of a new form of intelligence on Earth that can compete with, and ultimately significantly exceed, human intelligence will be a development of greater import than any of the events that have shaped human history. It will be no less important than the creation of the intelligence that created it, and will have profound implications for all aspects of human endeavor, including the nature of work, human learning, government, warfare, the arts, and our concept of ourselves.
A variety of philosophical issues will emerge. Are computers thinking, or are they just calculating? Conversely, are human beings thinking, or are they just calculating? The human brain presumably follows the laws of physics, so it must be a machine, albeit a very complex one. Is there an inherent difference between human thinking and machine thinking? To pose the question another way, once computers are as complex as the human brain, and can match the human brain in subtlety and complexity of thought, are we to consider them conscious? This is a difficult question even to pose, and some philosophers believe it is not a meaningful question; others believe it is the only meaningful question in philosophy. This question actually goes back to Plato’s time, but with the emergence of machines that genuinely appear to possess volition and emotion, the issue will become increasingly compelling.
PART ONE: PROBING THE PAST
Let’s notice a striking feature of the passage of time. Events moved quickly at the beginning of the Universe’s history. We had three paradigm shifts in just the first billionth of a second. Later on, events of cosmological significance took billions of years. The nature of time is that it inherently moves in an exponential fashion—either geometrically gaining in speed, or, as in the history of our Universe, geometrically slowing down. Time only seems to be linear during those eons in which not much happens. Thus, for most of the time, linearity is a reasonable approximation of time’s passage. But that is not the inherent nature of time. Why is this significant? It’s not when you’re stuck in the eons in which not much happens. But it is of great significance when you find yourself in the “knee of the curve,” those periods in which the exponential nature of the curve of time explodes either inwardly or outwardly.
It is in the nature of exponential growth that events develop extremely slowly for extremely long periods of time, but as one glides through the knee of the curve, events erupt at an increasingly furious pace. And that is what we will experience as we enter the twenty-first century.
The Law of Time and Chaos: In a process, the time interval between salient events (that is, events that change the nature of the process, or significantly affect the future of the process) expands or contracts along with the amount of chaos. When there is a lot of chaos in a process, it takes more time for significant events to occur. Conversely, as order increases, the time periods between salient events decrease.
We have to be careful here in our definition of chaos. It refers to the quantity of disordered (that is, random) events that are relevant to the process. If we’re dealing with the random movement of atoms and molecules in a gas or liquid, then heat is an appropriate measure. If we’re dealing with the process of evolution of life-forms, then chaos represents the unpredictable events encountered by organisms, and the random mutations that are introduced in the genetic code.
Let’s see how the Law of Time and Chaos applies to our examples. If chaos is increasing, the Law of Time and Chaos implies the following sublaw: The Law of Increasing Chaos: As chaos exponentially increases, time exponentially slows down (that is, the time interval between salient events grows longer as time passes).
This fits the Universe rather well. When the entire Universe was just a “naked” singularity—a perfectly orderly single point in space and time—there was no chaos, and conspicuous events took almost no time at all. As the Universe grew in size, chaos increased exponentially, and so did the timescale for epochal changes. Now, with billions of galaxies sprawled out over trillions of light-years of space, the Universe contains vast reaches of chaos, and indeed requires billions of years to get everything organized for a paradigm shift to take place.
We see a similar phenomenon in the progression of an organism’s life. We start out as a single fertilized cell, so there is only rather limited chaos there. By the time we comprise trillions of cells, chaos has greatly expanded. Finally, at the end of our lives, our designs deteriorate, engendering even greater randomness. So the time period between salient biological events grows longer as we grow older. And that is indeed what we experience.
But it is the opposite spiral of the Law of Time and Chaos that is the most important and relevant for our purposes. Consider the inverse sublaw, which I call the Law of Accelerating Returns: As order exponentially increases, time exponentially speeds up (that is, the time interval between salient events grows shorter as time passes).
The Law of Accelerating Returns (to distinguish it from a better-known law in which returns diminish) applies specifically to evolutionary processes. In an evolutionary process, it is order—the opposite of chaos—that is increasing. And, as we have seen, time speeds up.
The Law of Accelerating Returns as Applied to an Evolutionary Process:
An evolutionary process is not a closed system; therefore, evolution draws upon the chaos in the larger system in which it takes place for its options for diversity; and
Evolution builds on its own increasing order. Therefore:
In an evolutionary process, order increases exponentially. Therefore:
Time exponentially speeds up. Therefore:
The returns (that is, the valuable products of the process) accelerate.
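The two sublaws can be given a compact formal reading. Writing $O(t)$ for the order in a process and $\tau(t)$ for the time interval between salient events (my own shorthand, not notation from the text), the Law of Time and Chaos says the interval varies with chaos and hence inversely with order:

$$\tau(t) \;\propto\; \frac{1}{O(t)}, \qquad O(t) = O_0\,e^{kt} \;\Longrightarrow\; \tau(t) = \tau_0\,e^{-kt}, \quad k > 0.$$

On this reading, exponentially increasing order makes the interval between salient events shrink exponentially, so the rate of salient events, $1/\tau(t)$, grows exponentially, which is exactly the acceleration the law describes.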
Consider the sophistication of our creations over a period of only a few thousand years. Ultimately, our machines will match and exceed human intelligence, no matter how one cares to define or measure this elusive term. Even if my time frames are off, few serious observers who have studied the issue claim that computers will never achieve and surpass human intelligence. Humans, therefore, will have vastly outpaced evolution, achieving in a matter of mere thousands of years as much as or more than evolution achieved in billions of years. So human intelligence, a product of evolution, is far more intelligent than its creator. And so, too, will the intelligence that we are creating come to exceed the intelligence of its creator.
At what point do we consider the computer to be a conscious agent with its own free will? These have been the most vexing problems in philosophy since the Platonic dialogues illuminated the inherent contradictions in our conception of these terms.
We can argue that consciousness and identity are not a function of the specific particles at all, because our own particles are constantly changing. On a cellular basis, we change most of our cells (although not our brain cells) over a period of several years. On an atomic level, the change is much faster than that, and does include our brain cells. We are not at all permanent collections of particles. It is the patterns of matter and energy that are semipermanent (that is, changing only gradually), but our actual material content is changing constantly, and very quickly. We are rather like the patterns that water makes in a stream. The rushing water around a formation of rocks makes a particular, unique pattern. This pattern may remain relatively unchanged for hours, even years. Of course, the actual material constituting the pattern—the water—is totally replaced within milliseconds. This argues that we should not associate our fundamental identity with a specific set of particles, but rather with the pattern of matter and energy that we represent.
The “Consciousness Is Just a Machine Reflecting on Itself” School: A common approach is to deny the issue exists: Consciousness and free will are just illusions induced by the ambiguities of language. A slight variation is that consciousness is not exactly an illusion, but just another logical process. It is a process responding and reacting to itself. We can build that in a machine: just build a procedure that has a model of itself, and that examines and responds to its own methods. Allow the process to reflect on itself. There, now you have consciousness. It is a set of abilities that evolved because self-reflective ways of thinking are inherently more powerful.
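To make the “machine reflecting on itself” recipe concrete, here is a toy sketch in Python. It is my own illustration, under the obvious caveat that nothing this small approaches consciousness: a procedure that holds a model of its own methods and folds that model into its response.

```python
import inspect

class ReflectiveAgent:
    def act(self, stimulus):
        # First-order behavior: respond to the outside world.
        return f"reacting to {stimulus!r}"

    def introspect(self):
        # Second-order behavior: examine this agent's own methods,
        # building a crude model of itself.
        return [name for name, _ in inspect.getmembers(self, inspect.ismethod)]

    def reflect(self, stimulus):
        # Respond to the stimulus and to the self-model together,
        # closing the loop the passage describes.
        return f"{self.act(stimulus)}; I observe that I have methods {self.introspect()}"

agent = ReflectiveAgent()
print(agent.reflect("a question"))
```

The design point is only that self-reference is cheap to build; whether such a loop amounts to consciousness is the very question this school dissolves by definition.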
We may note here a similarity of quantum mechanics to the computer simulation of a virtual world. In today’s software games that display images of a virtual world, the portions of the environment not currently being interacted with by the user (that is, those offscreen) are usually not computed in detail, if at all. The limited resources of the computer are directed toward rendering the portion of the world that the user is currently viewing. As the user focuses in on some other aspect, the computational resources are then immediately directed toward creating and displaying that new perspective. It thus seems as if the portions of the virtual world that are offscreen are nonetheless still “there,” but the software designers figure there is no point wasting valuable computer cycles on regions of their simulated world that no one is watching. I would say that quantum theory implies a similar efficiency in the physical world. Particles appear not to decide where they have been until forced to do so by being observed. The implication is that portions of the world we live in are not actually “rendered” until some conscious observer turns her attention toward them. After all, there’s no point wasting valuable “computes” of the celestial computer that renders our Universe. This gives new meaning to the question about the unheard tree that falls in the forest.
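The rendering analogy is easy to sketch in code. The following is a hypothetical illustration of lazy evaluation, not a description of any real game engine or of physics: a region of the simulated world has no computed state until an observer first looks at it, after which its state is fixed.

```python
import random

class LazyWorld:
    """Regions are not computed until observed; once observed,
    they stay consistent on every later look."""

    def __init__(self):
        self._rendered = {}  # cache of regions already "decided"

    def observe(self, region):
        # Only observation forces the region to take on a definite state;
        # until then, no computation is spent on it at all.
        if region not in self._rendered:
            self._rendered[region] = random.choice(["forest", "ocean", "desert"])
        return self._rendered[region]

world = LazyWorld()
print(world.observe((3, 7)))  # first observation fixes the region's state
print(world.observe((3, 7)))  # later observations agree with the first
```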
Turing offered an explanation of why we would fail to acknowledge intelligence in our machines. In 1947, he wrote: “The extent to which we regard something as behaving in an intelligent manner is determined as much by our own state of mind and training as by the properties of the object under consideration. If we are able to explain and predict its behavior we have little temptation to imagine intelligence. With the same object, therefore, it is possible that one man would consider it as intelligent and another would not; the second man would have found out the rules of its behavior.” I am also reminded of Elaine Rich’s definition of artificial intelligence, as the “study of how to make computers do things at which, at the moment, people are better.” It is our fate as artificial intelligence researchers never to reach the carrot dangling in front of us. Artificial intelligence is inherently defined as the pursuit of difficult computer-science problems that have not yet been solved.
Clearly, no simple formula can emulate the most powerful phenomenon in the Universe: the complex and mysterious process of intelligence. Actually, that’s wrong. All that is needed to solve a surprisingly wide range of intelligent problems is exactly this: simple methods combined with heavy doses of computation (itself a simple process, as Alan Turing demonstrated in 1936 with his conception of the Turing Machine, an elegant model of computation) and examples of the problem. In some cases, we don’t even need the latter; just one well-defined statement of the problem will do. How far can we go with simple paradigms? Is there a class of intelligent problems amenable to simple approaches, with another, more penetrating class that lies beyond their grasp? It turns out that the class of problems solvable with simple approaches is extensive. Ultimately, with sufficient computational brute force (which will be ample in the twenty-first century) and the right formulas in the right combination, there are few definable problems that fail to yield.
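As a reminder of just how simple the underlying model of computation is, here is a minimal Turing machine simulator in Python. The rule-table format and the little bit-flipping program are my own assumptions for illustration, not anything from the text.

```python
def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    """program maps (state, symbol) -> (symbol to write, move L/R, next state).
    The tape is sparse; unwritten cells read as the blank symbol '_'."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# A three-rule machine that flips every bit on the tape, then halts.
flipper = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(flipper, "10110"))  # -> 01001_
```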
The irreversibility of computation is often cited as a reason that computation is useful: It transforms information in a unidirectional, “purposeful” manner. Yet the reason that computation is irreversible is its ability to destroy information, not to create it. The value of computation is precisely in its ability to destroy information selectively. For example, in a pattern-recognition task such as recognizing faces or speech sounds, preserving the information-bearing features of a pattern while “destroying” the enormous flow of data in the original image or sound is essential to the process. Intelligence is precisely this process of selecting relevant information carefully so that it can skillfully and purposefully destroy the rest.
That is exactly what the neural net paradigm accomplishes. A neuron—human or machine—receives hundreds or thousands of continuous signals representing a great deal of information. In response, the neuron either fires or does not fire, thereby reducing the babble of its input to a single bit of information. Once the neural net has been well trained, this reduction of information is purposeful, useful, and necessary.
We see this paradigm—reducing enormous streams of complex information into a single response of yes or no—at many levels in human behavior and society. Consider the torrent of information that flows into a legal trial. The outcome of all this activity is essentially a single bit of information—guilty or not guilty, plaintiff or defendant. A trial may involve a few such binary decisions, but my point is unaltered. These simple yes-or-no results then flow into other decisions and implications. Consider an election—same thing—each of us receives a vast flow of data (not all of it pertinent, perhaps) and renders a 1-bit decision: incumbent or challenger. That decision then flows in with similar decisions from millions of other voters, and the final tally is again a single bit of data.
There is too much raw data in the world to continue to keep all of it around. So we continually destroy most of it, feeding those results to the next level. This is the genius behind the all-or-nothing firing of the neuron. Next time you do some spring cleaning and attempt to throw away old objects and files, you will know why this is so difficult—the purposeful destruction of information is the essence of intelligent work.
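To make the neuron’s information destruction concrete, here is a minimal threshold-neuron sketch in Python (my own illustration; the weights and threshold are arbitrary): many continuous signals go in, and a single bit comes out.

```python
def neuron(inputs, weights, threshold):
    # Weighted sum of many continuous signals, reduced to one bit:
    # the neuron fires (1) or does not fire (0), discarding the rest.
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

signals = [0.2, 0.9, 0.1, 0.7, 0.4]   # a stand-in for hundreds of inputs
weights = [0.5, 1.0, -0.3, 0.8, 0.1]  # learned during training
print(neuron(signals, weights, threshold=1.0))  # -> 1
```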
One key to intelligence is knowing what not to compute. A successful person isn’t necessarily better than her less successful peers at solving problems; her pattern-recognition facilities have just learned what problems are worth solving.
To create flexible intelligence in our machines, we need to automate the knowledge-acquisition process. A primary goal of learning research is to combine the self-organizing methods—recursion, neural nets, evolutionary algorithms—in a sufficiently robust way that the systems can model and understand human language and knowledge. Then the machines can venture out, read, and learn on their own. And like humans, such systems will be good at faking it when they wander outside their areas of expertise.
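As one concrete instance of these self-organizing methods, here is a toy evolutionary algorithm in Python (my own sketch; the target string, population size, and mutation rate are arbitrary assumptions). Random mutation supplies the chaos the process draws on, while selection supplies the increasing order.

```python
import random

TARGET = "machine intelligence"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate):
    # How well a design fits its "environment": matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Random mutations introduced into the "genetic code."
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]
for generation in range(1000):
    best = max(population, key=fitness)              # struggle among designs
    if best == TARGET:
        break
    population = [mutate(best) for _ in range(100)]  # the fittest reproduces
print(generation, best)
```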
PART TWO: PREPARING THE PRESENT
Let’s review what we need to build an intelligent machine. One resource required is the right set of formulas. We examined three quintessential formulas in chapter 4. There are dozens of others in use, and a more complete understanding of the brain will undoubtedly introduce hundreds more. But all of these appear to be variations on the three basic themes: recursive search, self-organizing networks of elements, and evolutionary improvement through repeated struggle among competing designs. A second resource needed is knowledge. Some pieces of knowledge are needed as seeds for a process to converge on a meaningful result. Much of the rest can be automatically learned by adaptive methods when neural nets or evolutionary algorithms are exposed to the right learning environment. The third resource required is computation itself. In this regard, the human brain is eminently capable in some ways, and remarkably weak in others. Its strength is reflected in its massive parallelism, an approach that our computers can also benefit from. The brain’s weakness is the extraordinarily slow speed of its computing medium, a limitation that computers do not share with us. For this reason, DNA-based evolution will eventually have to be abandoned.
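Of the three basic themes, recursive search is the easiest to show in miniature. The sketch below is my own toy example, not one of the chapter 4 formulas: it exhaustively searches the game tree of a take-one-or-two-stones game to decide whether the player to move can force a win.

```python
def can_force_win(stones, my_turn=True):
    """Players alternately take 1 or 2 stones; taking the last stone wins.
    Returns True if the original player can force a win from here."""
    if stones == 0:
        # The player who just moved took the last stone and won,
        # so the original player wins exactly when it is not "my" turn now.
        return not my_turn
    outcomes = [can_force_win(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    # On my turn one winning reply suffices; on the opponent's turn,
    # every reply must still leave me winning.
    return any(outcomes) if my_turn else all(outcomes)

print(can_force_win(9))   # -> False: multiples of 3 are lost for the mover
print(can_force_win(10))  # -> True: take 1 and leave the opponent 9
```

The same recursive expand-and-evaluate skeleton, scaled up with pruning and a static evaluation function, is what plays chess.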
There won’t be mortality by the end of the twenty-first century. Not in the sense that we have known it. Not if you take advantage of the twenty-first century’s brain-porting technology. Up until now, our mortality was tied to the longevity of our hardware. When the hardware crashed, that was it. For many of our forebears, the hardware gradually deteriorated before it disintegrated. Yeats lamented our dependence on a physical self that was “but a paltry thing, a tattered coat upon a stick.” As we cross the divide to instantiate ourselves into our computational technology, our identity will be based on our evolving mind file. We will be software, not hardware. And evolve it will. Today, our software cannot grow. It is stuck in a brain of a mere 100 trillion connections and synapses. But when the hardware is trillions of times more capable, there is no reason for our minds to stay so small. They can and will grow.
Self-replicating machines built at the atomic level could truly transform the world we live in. They could build extremely inexpensive solar cells, allowing the replacement of messy fossil fuels. Since solar cells require a large surface area to collect sufficient sunlight, they could be placed in orbit, with the energy beamed down to Earth. Nanobots launched into our bloodstreams could supplement our natural immune system and seek out and destroy pathogens, cancer cells, arterial plaque, and other disease agents. In the vision that inspired the cryonics enthusiasts, diseased organs could be rebuilt. We will be able to reconstruct any or all of our bodily organs and systems, and do so at the cellular level.
Without self-replication, nanotechnology is neither practical nor economically feasible. And therein lies the rub. What happens if a little software problem (inadvertent or otherwise) fails to halt the self-replication? We may have more nanobots than we want. They could eat up everything in sight.
The bigger danger is the intentional hostile use of nanotechnology. Once the basic technology is available, it would not be difficult to adapt it as an instrument of war or terrorism. It is not the case that someone would have to be suicidal to use such weapons. The nanoweapons could easily be programmed to replicate only against an enemy; for example, only in a particular geographical area. Nuclear weapons, for all their destructive potential, are at least relatively local in their effects. The self-replicating nature of nanotechnology makes it a far greater danger.
There are a variety of proposals for nanotechnology swarms, in which the real environment is constructed from interacting multitudes of nanomachines. In all of the swarm conceptions, physical reality becomes a lot like virtual reality.
We are discovering that the brain can be directly stimulated to experience a wide variety of feelings that we originally thought could only be gained from actual physical or mental experience. These results suggest that once neural implants are commonplace, we will have the ability to produce not only virtual sensory experiences but also the feelings associated with these experiences. We can also create some feelings not ordinarily associated with the experience.
The ability to control and to reprogram our feelings will become even more profound in the late twenty-first century when technology moves beyond mere neural implants and we fully install our thinking processes into a new computational medium—that is, when we become software.
Regardless of the nature and derivation of a mental experience, spiritual or otherwise, once we have access to the computational processes that give rise to it, we have the opportunity to understand its neurological correlates. With the understanding of our mental processes will come the opportunity to capture our intellectual, emotional, and spiritual experiences, to call them up at will, and to enhance them.
Just being—experiencing, being conscious—is spiritual, and reflects the essence of spirituality. Machines, derived from human thinking and surpassing humans in their capacity for experience, will claim to be conscious, and thus to be spiritual. They will believe that they are conscious. They will believe that they have spiritual experiences. They will be convinced that these experiences are meaningful. And given the historical inclination of the human race to anthropomorphize the phenomena we encounter, and the persuasiveness of the machines, we’re likely to believe them when they tell us this.
First let us postulate that the computer scientists succeed in developing intelligent machines that can do all things better than human beings can do them. In that case presumably all work will be done by vast, highly organized systems of machines and no human effort will be necessary. Either of two cases might occur. The machines might be permitted to make all of their own decisions without human oversight, or else human control over the machines might be retained. If the machines are permitted to make all their own decisions, we can’t make any conjectures as to the results, because it is impossible to guess how such machines might behave. We only point out that the fate of the human race would be at the mercy of the machines. It might be argued that the human race would never be foolish enough to hand over all the power to the machines. But we are suggesting neither that the human race would voluntarily turn power over to the machines nor that the machines would willfully seize power. What we do suggest is that the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. On the other hand it is possible that human control over the machines may be retained. In that case the average man may have control over certain private machines of his own, such as his car or his personal computer, but control over large systems of machines will be in the hands of a tiny elite—just as it is today, but with two differences. Due to improved techniques the elite will have greater control over the masses; and because human work will no longer be necessary the masses will be superfluous, a useless burden on the system. If the elite is ruthless they may simply decide to exterminate the mass of humanity. If they are humane they may use propaganda or other psychological or biological techniques to reduce the birth rate until the mass of humanity becomes extinct, leaving the world to the elite. Or, if the elite consists of soft-hearted liberals, they may decide to play the role of good shepherds to the rest of the human race. They will see to it that everyone’s physical needs are satisfied, that all children are raised under psychologically hygienic conditions, that everyone has a wholesome hobby to keep him busy, and that anyone who may become dissatisfied undergoes “treatment” to cure his “problem.” Of course, life will be so purposeless that people will have to be biologically or psychologically engineered either to remove their need for the power process or to make them “sublimate” their drive for power into some harmless hobby. These engineered human beings may be happy in such a society, but they most certainly will not be free. They will have been reduced to the status of domestic animals.
—Theodore Kaczynski
The process of automation that began in England two hundred years ago—and continues today at an ever accelerating pace (as per the Law of Accelerating Returns)—eliminates jobs at the bottom of the skill ladder and creates new ones at the top. Hence the increasing investment in education. But what happens when the skill ladder extends beyond the abilities of the bulk of the human population, and ultimately beyond the ability of any human, educational innovations notwithstanding? The answer we can predict from the Law of Accelerating Returns is that the ladder will nonetheless continue to reach ever higher, implying that humans will need to become more capable by other means. Education can only accomplish so much. The only way for the species to keep pace will be to gain greater competence from the computational technology we have created, that is, for the species to merge with its technology. Not everyone will find this prospect appealing, so the Luddite issue will broaden in the twenty-first century from an anxiety about human livelihoods to one concerning the essential nature of human beings. However, the Luddite movement is not likely to fare any better in the next century than it has in the past two. It suffers from the lack of a viable alternative agenda.
PART THREE: TO FACE THE FUTURE
2019: People are beginning to have relationships with automated personalities as companions, teachers, caretakers, and lovers. Automated personalities are superior to humans in some ways, such as having very reliable memories and, if desired, predictable (and programmable) personalities. They are not yet regarded as equal to humans in the subtlety of their personalities, although there is disagreement on this point. An undercurrent of concern is developing with regard to the influence of machine intelligence. There continue to be differences between human and machine intelligence, but the advantages of human intelligence are becoming more difficult to identify and articulate. Computer intelligence is thoroughly interwoven into the mechanisms of civilization and is designed to be outwardly subservient to apparent human control. On the one hand, human transactions and decisions require by law a human agent of responsibility, even if fully initiated by machine intelligence. On the other hand, few decisions are made without significant involvement and consultation with machine-based intelligence.
2029: The subjective experience of machine intelligence is increasingly accepted, particularly since “machines” participate in this discussion. Machines claim to be conscious and to have as wide an array of emotional and spiritual experiences as their human progenitors, and these claims are largely accepted.
2099: Human thinking is merging with the world of machine intelligence that the human species initially created. The reverse engineering of the human brain appears to be complete. The hundreds of specialized regions have been fully scanned, analyzed, and understood. Machine analogues are based on these human models, which have been enhanced and extended, along with many new massively parallel algorithms. These enhancements, combined with the enormous advantages in speed and capacity of electronic and photonic circuits, provide substantial advantages to machine-based intelligence. Machine-based intelligences derived entirely from these extended models of human intelligence claim to be human, although their brains are not based on carbon-based cellular processes, but rather electronic and photonic “equivalents.” Most of these intelligences are not tied to a specific computational-processing unit (that is, piece of hardware). The number of software-based humans vastly exceeds those still using native neuron-cell-based computation. A software-based intelligence is able to manifest bodies at will: one or more virtual bodies at different levels of virtual reality, and nanoengineered physical bodies using instantly reconfigurable nanobot swarms. Even among those human intelligences still using carbon-based neurons, there is ubiquitous use of neural implant technology, which provides enormous augmentation of human perceptual and cognitive abilities. Humans who do not utilize such implants are unable to meaningfully participate in dialogues with those who do.