Rise of the Machines: A Cybernetic History - by Thomas Rid

With a focus on the history of cybernetic myths, a word on the mythical is in order. Myths are deeply embedded in our collective memory; they shape our understanding of technology at every turn, even if we aren’t aware of this palimpsest, the deep and hidden legacy of cybernetic history. Contrary to everyday usage, a myth—as articulated in a prominent tradition of political theory—doesn’t mean that something is factually wrong. Myths don’t contradict the facts; they complement the facts. Saying something works as a myth doesn’t mean that it is “only” a myth. On the contrary. Political and technical myths are very real and work in powerful ways. And they may even align smoothly with the observed, hard facts. Myths are more than real in at least three different ways.

First, myths overcome the limits of fact, experience, and technology at a given moment in time. The rise of the machines was always projected into the future, not into the present or the past. Evidence was always in short supply. The cybernetic promise, of course, was neither wrong nor right. Any vision of the future is neither false nor true until the predicted future, or some pale version of it, actually comes to pass. For their adherents, subscribing to the cybernetic narratives of the future required more than evidence. It required belief. And the myth made it easy to believe. With the myth’s inherent uncertainty cloaked in the unflinching language of science and technology, the line between scholarship and worship blurred again and again over the decades. It did so subtly, and seductively. So powerful was the myth that its own creators kept falling for it. Technology myths have the form of a firm promise: the cyborg will be built; machines that are more intelligent than humans will be invented; the singularity is coming; cyberspace will be free. The myth is underdetermined by fact, yet it purports to be as certain and as hard as empirical evidence can get, shielded from debate and contradiction. Faith dressed as science.

Second, mythologies are remarkable not for their content, but for their form. The basis of the myth appears as fully experienced, innocent, and indisputable reality: computers are becoming ever faster; machines are ever more networked; encryption is getting stronger. But at the same time the myth makes a leap: it adds a peculiar form to the meaning. And this form is always emotional. Myths are convincing because they appeal to deeply held beliefs, to hopes, and often to fears about the future of technology and its impact on society. These beliefs are informed by visions and projections, by popular culture, by art, by fiction and science fiction, by plays, films, and stories. But the myth often harks back to fiction clandestinely, without making the cultural inheritance explicit. Science fiction novels, for instance, inspired the 1990s national-security debate. And sometimes hard-nosed experts even wrote the fiction, to spell out dystopian visions of future conflict, freed from the unbearable shackles of fact. The crypto activists of the 1990s, a movement of fervent and influential zealots extolling the many blessings of spreading encryption, unabashedly recommended science fiction as the “sources” and the main inspiration for anarchy in “cyberspace.”

The third and most crucial feature of cybernetic myths is that they transcend the present. Mythical narratives form a path between the past and the future to keep a community’s shared experiences in living memory. For political and historical myths, such as the German raids on the City of London during the Blitz, the more stable anchor point is in the past. The political myth draws a clear line from a past event into the future and sees the present as a dot on this line. It forms the connective tissue of a community’s identity over time, evoked again and again by services in St. Paul’s and anniversary flyovers by the Royal Air Force. For cybernetic myths, the reverse is the case: the more stable anchor point is always in the future or, to be more precise, in a shared yet vague imagination of the future—not too close and not too distant. The golden range seems to be about twenty years forward, close enough to extrapolate from the past, yet distant enough to dare brave new ideas of the future. The outcome is equally effective. The technological myth draws a clear line from the future into the past and sees the present as a dot on this line. The cybernetic myth creates the powerful illusion of being able to predict the future: Trust me, says the mythos; this is how the future will be. This isn’t fiction or prediction; this is a hard fact that hasn’t happened yet. Keeping a technological myth as an effective and viable path into the future therefore requires permanent use and repetition; it requires restating the mythical promise again and again, so that it becomes and remains gospel. It requires “work on the myth,” the German philosopher Hans Blumenberg pointedly observed in Arbeit am Mythos.

Control and communication began to shift fundamentally during World War II. It was then that a new set of ideas emerged to capture the change: cybernetics. The famously eccentric MIT mathematician Norbert Wiener coined the term, inspired by the Greek verb kybernan, which means “to steer, navigate, or govern.” Cybernetics; or, Control and Communication in the Animal and the Machine, Wiener’s pathbreaking book, was published in the fall of 1948.

Machines are also about the future. And cybernetics, forged at war, has hence been the vehicle that projected and predicted the future of ever more intelligent automata. Two opposing forces have shaped cybernetic visions of the future. The first was the hope for a better world with less violence, for work to become more humane, for play to become more entertaining, for politics to become more free, for war to become less bloody. Thinking machines brought progress, in that deeply modernist belief. But an opposing force equally shaped cybernetic visions of imminent technological change: the fear of a world with robots pushing workers into unemployment, of machines harming humans, of critical systems breaking down, of mass surveillance and the loss of privacy, of mechanized regression. Optimism competed against pessimism, liberation against oppression, utopia against dystopia.

CONTROL AND COMMUNICATION AT WAR

Engineers often used duck shooting to explain the challenge of anticipating the position of a target. The experienced hunter sees the flying duck, his eyes send the visual information through nerves to the brain, the hunter’s brain computes the appropriate position for the rifle, and his arms adjust the rifle’s position, even “leading” the target by predicting the duck’s flight path. The split-second process ends with a trigger pull. The shooter’s movements mimic an engineered system: the hunter is network, computer, and actuator in one. Replace the bird with a faraway and fast enemy aircraft and the hunter with an antiaircraft battery, and doing the work of eyes, brain, and arms becomes a major engineering challenge. This engineering challenge would become the foundation of cybernetics.
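
The prediction step the hunter performs by instinct can be written out as a small calculation. What follows is a minimal illustrative sketch, not the method of any historical gun director: it assumes a target flying in a straight line at constant velocity, a projectile traveling at a fixed speed, and a shooter at the origin; all names and numbers are invented for illustration.

import math

def lead_point(target_pos, target_vel, projectile_speed):
    # Solve |target_pos + target_vel * t| = projectile_speed * t for the flight time t.
    # With the shooter at the origin this is a quadratic in t; aim where the target
    # will be after that time. Returns None if no intercept is possible.
    px, py = target_pos
    vx, vy = target_vel
    a = vx * vx + vy * vy - projectile_speed ** 2
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py
    if abs(a) < 1e-9:                                # target as fast as the projectile
        candidates = [-c / b] if abs(b) > 1e-9 else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None
        root = math.sqrt(disc)
        candidates = [(-b - root) / (2 * a), (-b + root) / (2 * a)]
    times = [t for t in candidates if t > 0]
    if not times:
        return None
    t = min(times)                                   # earliest possible intercept
    return (px + vx * t, py + vy * t)

# Example: a duck 100 m out, crossing at 20 m/s, with pellets flying at 300 m/s.
print(lead_point((100.0, 0.0), (0.0, 20.0), 300.0))  # aim a few metres ahead of the bird

Doing this calculation continuously, from noisy tracking data rather than a hunter’s eyes, is the engineering problem the antiaircraft designers faced.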

The confrontation in the sky over the English Channel during the late summer of 1944 was remarkable: never before had one autonomous weapon clashed with another autonomous weapon with so little human interference. The future of war had arrived. “Now we saw the beginning of the first battle of the robots,” General Frederick Pile, the commander of Britain’s antiaircraft defenses, observed at the time: “Human error was being gradually eliminated from the contest: in the future, the machines would fight it out.”

CYBERNETICS

When the US Army invaded and occupied Germany in 1945, it took back stateside some of the Reich’s brightest engineers, as well as cutting-edge technology. The most highly prized individuals were Wernher von Braun, the Reich’s young and dashing missile engineer, and his vast team from Peenemünde by the Baltic Sea in Germany’s north.

Three ideas were at the core of this novel approach to thinking about automation and human-machine interaction. The first core idea of cybernetics was control. The very purpose of machines and living beings is to control their environment—not merely to observe it, but to master it. Control is fundamental. The concept of entropy illustrates just how fundamental. Entropy is a measure of disorder, of uncertainty, degradation, and loss of information. Nature has a tendency to increase entropy, to gradually decline into disorder: cold things warm up; hot things cool down; information gets lost in noise; disorganization gradually takes over. Halting or reversing this trend toward disorder requires control. Control means that a system can interact with its environment and shape it, at least to a degree. Environmental data are fed into a system through input, and the system affects its environment through output. For Wiener, this was the essence of the cybernetic worldview:

It is my thesis that the physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their analogous attempts to control entropy through feedback.

This quote introduces the second core concept of cybernetics: feedback. For Wiener, “feedback” described the ability of any mechanism to use sensors to receive information about actual performance, as opposed to expected performance. An elevator is an example. Feedback will tell the door-opening mechanism whether the elevator has actually arrived behind the sliding doors and only then open the doors for waiting passengers; not using feedback would increase the risk of error, possibly allowing unsuspecting people to step into an empty elevator shaft. Another of Wiener’s examples is, unsurprisingly, an artillery gun that uses feedback to make sure the muzzle is actually pointing at the target. The mechanism that controls the turning of the turret, for instance, requires feedback. The actual performance of the turret-turning mechanism varies: extreme cold thickens the grease in the bearings and makes turning harder; sand and dirt might affect turning even more. So verifying output through feedback is critical. Feedback tends to oppose what a system is already doing—for instance, stopping a motor from turning a turret or telling a thermostat to switch off a heater. Such feedback is called “negative feedback,” and it generally serves the purpose of stabilizing a desired state. Feedback, then, is “the property of being able to adjust future conduct by past performance,” wrote Wiener. The mechanism itself may be very simple, like turning a turret, or it may be complex, like regulating the human body’s temperature. In Wiener’s mind, feedback would provide a kinesthetic sense to machines, akin to the human awareness of the position or movement of arms and legs. This analogy led him to another core idea.
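
Negative feedback can be reduced to a few lines of logic: measure actual performance, compare it with the desired state, and push against the difference. Below is a minimal thermostat-style sketch, with invented constants and a toy room model used purely for illustration; it follows the general idea rather than any specific mechanism Wiener described.

def thermostat_step(actual_temp, desired_temp, heater_on):
    # One negative-feedback step: compare actual with desired performance and
    # act against the difference, rather than following a fixed schedule.
    error = desired_temp - actual_temp
    if error > 0.5:          # too cold: switch the heater on
        return True
    if error < -0.5:         # warm enough: switch it off (oppose what the system is doing)
        return False
    return heater_on         # inside the dead band: leave things as they are

# Toy simulation: the room leaks heat toward 15 °C; the heater adds roughly 1 °C a minute.
temp, heater = 18.0, False
for minute in range(30):
    heater = thermostat_step(temp, desired_temp=21.0, heater_on=heater)
    temp += (1.0 if heater else 0.0) - 0.02 * (temp - 15.0)
    print(f"minute {minute:2d}: {temp:5.2f} °C, heater {'on' if heater else 'off'}")

The loop settles near the set point because each correction opposes the measured error, switching the heater on when the room is too cold and off when it is warm enough: the stabilizing role of negative feedback described above.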

The third core idea of cybernetics described a tight relationship between humans and machines. In the antiaircraft predictor project, the enemy pilot was recognized as effectively forming a single entity with the bomber, behaving “like a servo-mechanism,” as the team at MIT had noted already in early 1941. It wasn’t just the plane and the pilot that formed a servomechanism. Even the antiaircraft gun behaved as a “dynamic system,” composed of multiple human operators and complex mechanics joined in the struggle against ever-increasing entropy. But there are other aspects to the human-machine parallel. Wiener tended to anthropomorphize machines: switches corresponded to synapses, wires to nerves, networks to nervous systems, sensors to eyes and ears, actuators to muscles. Conversely, he also mechanized man by using machine comparisons to understand human physiology.

The first two cybernetic concepts—control and feedback—could be abstract, technical, and hard to grasp. But the third idea—the merging of human and machine—stirred the imagination.

Cybernetics, of course, could easily be applied beyond one man and one machine. Collective entities—say, firms or entire societies—formed systems that lent themselves to cybernetic analysis. Social science could study control, communication, feedback, adaptive behavior, and organizational learning, and many scholars would do so.

Cybernetics is a “theory of machines,” Ashby wrote in his best-selling textbook, An Introduction to Cybernetics, published in 1956. That wasn’t a new concept. Engineers and mechanics had been building complex machines for centuries. Theorizing about machines and about control had been done before. But cybernetics wasn’t about levers and cogs and revolving axles and potentiometers. The question it posed wasn’t “What is this thing?” but “What does it do?” as Ashby put it. Cybernetics wasn’t about mechanics; it was about behavior. The cyberneticists’ view of machines reflected the midcentury modernist Zeitgeist. In architecture, functionalists designed machines to live in or to sit in—such as Le Corbusier’s efficient modernist concrete buildings and chrome-and-leather lounge chairs. In psychology, behaviorists measured and experimented with the mechanics of the mind; Ivan Pavlov’s pioneering research on conditioned reflexes is an example. Cybernetics, Ashby suggested, was “essentially functional and behaviouristic.” To Ashby, the popular imagination had it wrong. This wasn’t about mechanical devices that could think. The critical indicator of whether a machine qualified as a brain wasn’t its ability to think. More important was whether the machine was doing something. “The brain is not a thinking machine, it is an acting machine,” he wrote in December 1948. “It gets information and then it does something about it.” This was an engineer’s perspective on the brain. He considered it simply an input-output device.

Cyberneticists moved elegantly between electronic engineering and the life sciences, blurring the line between living and nonliving systems. Ashby had another powerful trick up his sleeve. This time it came from engineering, not biomedical research: the black box. Black boxes were invented to help engineers interact with machines. The black box isn’t an actual black box, of course. It’s an idea, a concept that is used to shed light into the dark.

Engineers and cyberneticists used it first. Calling something a black box was an elegant way to describe something they couldn’t describe, to understand a device that wasn’t properly understood. The internal workings of the “box” remained in the dark, unseen and “opaque.” A machine would produce output from input. What happened in between was unclear. The gun director was a black box for its operators. They didn’t know what exactly happened inside, but they knew how to fire the gun with the output values. The proximity fuse was a black box: most military officers had no idea how the radio mechanism inside the shell worked, but they used it to good effect. Office workers did not know how their brand-new IBM thinking machines worked, but they knew how to input data and how to read the computer’s output and act on it. Ashby didn’t suggest that black boxes behave like real objects. He was more radical. Ashby suggested that real objects “are in fact all black boxes.” For those with a cybernetic mind, it didn’t matter what was inside the black box: switches, tubes, and wires—or blood and gray matter. What mattered was input and output. In this view, the body itself is a prototype of human-machine interaction. Man himself is a black box.
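
Ashby’s claim can be made concrete in code: a black box is anything an observer engages with only through input and output, regardless of what sits inside. The sketch below is hypothetical, with class and variable names of my own choosing rather than Ashby’s; it shows why two boxes with entirely different insides can be indistinguishable from the outside.

from typing import Callable

class BlackBox:
    # A system known only by its input-output behavior; the inside stays hidden.
    def __init__(self, transfer: Callable[[float], float]):
        self._transfer = transfer       # the concealed internals

    def probe(self, x: float) -> float:
        return self._transfer(x)        # all an observer ever gets to see

# Two very different "insides" ...
electronic = BlackBox(lambda x: 2 * x + 1)    # imagine tubes, switches, and wires
organic = BlackBox(lambda x: 2 * x + 1)       # imagine blood and gray matter

# ... are indistinguishable to an observer who only compares inputs with outputs.
probes = [0.0, 1.0, 2.5]
print(all(electronic.probe(x) == organic.probe(x) for x in probes))   # True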

Cybernetics was coherent and convincing enough to be an emerging field, and it was incoherent enough to attract a wide range of scholars. Mathematicians, physicists, and biologists, as well as philosophers and sociologists, seized the cybernetic vision and produced thousands of books and research articles between the late 1950s and early 1970s. New journals sprang up. International conferences were held. New graduate degrees on cybernetics emerged. The neologism found its way into the world’s languages and dictionaries. Early cybernetic work sought to articulate and apply the principles by which systems could be controlled. Later cybernetic scholarship became more abstract: cyberneticists asked how systems would describe themselves, how they organize themselves, and how they control themselves.

The leap from science to myth was small, smaller than many nonscientists would imagine. The mathematical mind is trained for abstraction. Algebra isn’t limited by the availability of fresh apples to count or to multiply; it deals with abstract entities. The ambitions of the emerging discipline were equally expansive. The “real machine” could be electronic, mechanical, neural, social, or economic. This alone meant that the realm of cybernetics was vast. This ambitious vision of cybernetics is best expressed through an analogy: cybernetics relates to the machine as geometry relates to the object in space. Ashby’s machines-and-math comparison was an inspiration, a stroke of genius. Nature provides a range of geometrical objects in space: stones, apples, snakes, horses, or something more complex, like trees or mountains. Geometry encompasses these objects and can readily calculate the surface area of an apple or the volume of a stone. But real, existing objects don’t limit geometry. Geometry is far more expansive—a framework to grasp all possible forms and shapes and sizes. It is just the same with cybernetics and machines: “It takes as its subject-matter the domain of ‘all possible machines,’” Ashby argued in 1956. It is of secondary interest whether some of these machines “have not yet been made, either by Man or by Nature.”

One of the common criticisms of cybernetics, even in the thriving 1950s, was that its vision was fanciful, that many of the technologies had never been built and would not become reality anytime soon. However, theorizing about nonexistent machines was not a bug, Ashby argued, but rather a feature. Physics, the master discipline, was leading the way. Physics, after all, is a highly successful scientific discipline that studies nonexistent systems: springs without mass, particles with mass but no volume, gases that behave perfectly. Such things don’t exist. Yet understanding them in pure theoretical form is crucial for understanding even simple things like a watch. Ashby took this abstraction-first approach to its logical conclusion: the cyberneticist would survey the possible relations between man and machine first “in the general theory.” Only then would he get his hands dirty by looking more closely at “terrestrial machines” that are found in some scientific or industrial installations. Ashby’s ploy was brilliant: the absence of evidence wasn’t a problem. That the theoretically—and cybernetically—predicted future had not arrived yet didn’t mean it would not arrive soon. It was the same as with quantum theory. Cybernetics was liberated from the imperfect devices of the present. Ashby’s powerful ruse would propel the new idea of cybernetics forward for many decades, leading it into uncharted territory.

The father of cybernetics wasn’t sure whether the machines would turn out to be a force for good or evil, whether the robots would serve their human creators or perhaps rise up and revolt. It was the duty of any self-respecting scientist, he believed, to reflect on the possibilities for abuse, and to warn the world of the coming danger, just as responsible scientists should have warned of the coming atomic bomb. One of the scenarios that most concerned Wiener was computers simulating war, and potentially making automatic decisions about when and how to use force. This was not a far-fetched scenario. The US Air Force had started investing vast sums of money in automated air defense systems. Delegating decisions to machines courted disaster. “There is nothing more dangerous to contemplate than World War III,” Wiener told the Chicago Tribune in 1961. “It is worth considering whether part of the danger may not be intrinsic in the unguarded use of learning machines.” As machines increase the speed of automated decisions in war, Wiener surmised, man would find it harder to stop them. “To turn a machine off effectively, we must be in possession of information as to whether the danger point has come.”

Such escalation could be an accident, or, indeed, ever-more-intelligent mechanized servants could deliberately turn on their human creators. “We wish a slave to be intelligent,” Wiener told the press. “However, we also wish him to be subservient.” And therein was the contradiction, said Wiener: “Complete subservience and complete intelligence do not go together.”

AUTOMATION

“The first industrial revolution,” Norbert Wiener wrote in the preface of Cybernetics, “was the devaluation of the human arm by the competition of the machinery.” Now, one and a half centuries after those dark, satanic mills, the world was again in the throes of a second industrial upheaval. “The second industrial revolution,” Wiener was convinced, “is similarly bound to devalue the human brain, at least in its simpler and more routine decisions.” Automation, the argument went, would require more high-skilled scientists and administrators. Less-skilled workers would be crowded out of the labor market and pushed into unemployment.

“Even in the field of science,” a sixty-seven-year-old Wiener reflected, “it is perilous to run counter to the accepted tables of precedence.” The taboo wasn’t just about equating God and man; equating living beings and machines in the same breath also was blasphemy. Living beings would be living beings in all their parts, and machines were made of lifeless metal and plastic and glass in all their parts. The two could not even be compared. Automata, unlike animals, had no mysterious fine structure that gave them purpose. And for cyberneticists, purpose was the key feature of all systems driven by negative feedback. “Physics—or so it is generally supposed—takes no account of purpose; and the emergence of life is something totally new.” This was the taboo. And in Wiener’s view, this forbidden fruit stood in the way of the advance of knowledge. Wiener set out to attack the taboo in his final book, re-examining several themes that have been discussed in sacred texts—but that have a close cybernetic analogy. Three aspects of cybernetics had religious implications, Wiener wrote: that machines could learn, that they could self-reproduce, and that they illuminated magic and legend. Science, as he saw it, was invading the territory of religion, pushing God out. Cybernetics would make the irrational rational.

At first glance, machines that are able to learn don’t seem to have religious significance. But Wiener, the former child prodigy, was versed in the classics. He knew better. Learning machines are linked to one of the deepest and most controversial theological problems: how to explain the suffering of humans in the here and now, how to explain God’s tolerance of evil, a problem that theologians know as “theodicy.” It was the main question of the book of Job, a key text in the Hebrew Bible and the Christian Old Testament, and of Paradise Lost, John Milton’s epic seventeenth-century poem. Justifying evil was akin to the question of whether God could play a game with a creature he had made: in both of these religious texts the devil, as Wiener saw it, was playing a game with God, either for the soul of Job or, in Paradise Lost, for the souls of all of humankind. Both works also saw the devil as a creature of God; this was a necessary assumption of monotheism. So the crucial question was, “Can God play a significant game with his own creature?” For Wiener, this question was mimicked in the relationship between humans and their own creatures, machines. In general terms, the question could be rephrased like this: “Can any creator, even a limited one, play a significant game with his own creature?” Wiener himself was a limited creator of machines. If he could play a significant game with a machine, that would answer the question. The answer of cybernetics was clear: yes. Yes because even a mechanical creation can win against its human creator. It had already been done, with checkers. Building a mechanism that would beat even the best players of the Russian school at chess was only a question of time, Wiener correctly foresaw. Man’s creation could outsmart man, so God’s creation could surely outsmart God. Mechanical power, the MIT professor implied, showed the limits of divine power.

There was another, even more sacrosanct taboo ripe for mechanical reprocessing: that only God can create life. Only God is the creator, and there can be only one God. The hallmark of living beings is that they can remake themselves in their own image, because they were created by God in His image. Siring and creating new life in God’s image was the core of divine nature; creating new life-forms in somebody else’s image was against the natural order of things. It was blasphemous: “In our desire to glorify God with respect to man and Man with respect to matter, it is thus natural to assume that machines cannot make other machines in their own image.” But that is exactly what automation promised: humans would soon be able to make machines that could, in turn, make other machines in their own image. If a machine is supposed to create another machine in its own image, Wiener suggested, understanding the notion of an image becomes necessary. What, then, was the image of a machine? “The machine may generate the message, and the message may generate another machine,” the professor suggested, somewhat cryptically. Passing a design on is relatively simple in the case of software. A “soft” machine—a program, even a complex program—can simply be copied, transferred to a different computer, and installed there. The message—the disk image—re-creates the virtual machine. In the case of simple hardware, the machine’s image would be its design plan. And that plan could be sent over a telegraph line as a message that would then be used to generate a copy of that machine at the other end. But cybernetics wouldn’t have to stop here. After all, life could be compared to machines now. Organically implemented feedback devices—humans, for instance—were just more complex machines, with negative feedback loops stabilizing body temperature instead of room temperature and blood pressure instead of tank pressure. So in principle, Wiener was convinced, that complex machine made up of molecular structures could be turned into an image, transferred, and reassembled elsewhere, just as simple machines could be disassembled and reassembled from their design blueprints: “It is conceptually possible for a human being to be sent over a telegraph line,” he wrote in God and Golem.
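
Wiener’s cryptic remark, that the machine may generate the message and the message may generate another machine, is routine practice for “soft” machines. Here is a minimal, hypothetical sketch under that reading: a toy “machine” describes its own design as a message, and the message is used to rebuild a working copy elsewhere. The format and function names are illustrative, not Wiener’s.

import json

def make_counter(step):
    # A tiny "machine": each call advances its state by a fixed step.
    state = {"value": 0}
    def tick():
        state["value"] += step
        return state["value"]
    # The machine generates the message: a description of its own design.
    tick.image = json.dumps({"kind": "counter", "step": step})
    return tick

def machine_from_image(image):
    # The message generates another machine of the same design.
    spec = json.loads(image)
    assert spec["kind"] == "counter"
    return make_counter(spec["step"])

original = make_counter(step=3)
original(); original()                        # the original runs in one place
copy = machine_from_image(original.image)     # its image is sent elsewhere and rebuilt
print(copy(), copy())                         # 3 6 — same design, fresh instance

Only the design travels here, not the running state; the point is simply that a description can stand in for the machine it describes.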

Automation and large machines were depicted as autonomous agents. Computers were electronic brains. Robots were portrayed as humanoids in cartoons and films. Extreme and often dark prophecies dominated the popular-press coverage of new contraptions. Modern cybernetics was greeted with the same reprobation that had been attached to the sin of sorcery in former ages, at least according to Wiener, as he laid out in his final book on God and the machines, which he finished in the summer of 1963. Surely, he argued, if a scholar two centuries earlier, in the 1760s, had claimed to make mechanical creatures that could learn to play games, or even to proliferate, such a man would have to wear the sanbenito, a yellow coat embroidered with images of monks, dragons, and demons that meant its wearer was a heretic condemned to burn at the stake. For Wiener, modern humans had just gotten over the insult of being compared to apes by Darwin. Being compared to machines was the next affront in line. But Wiener had the opposite objective: he didn’t want to see the machine as magical; he wanted to do just the reverse—to show that the magical was mechanical.

To illustrate the mechanics of magic, Wiener used fables and short stories deeply embedded in the collective mind. One of his oft-used stories was Goethe’s famous 1797 poem “The Sorcerer’s Apprentice.” In the poem the master sorcerer briefly leaves his workshop to run a few errands. He tells his apprentice to fill the bath with water in the meantime. Bored, the apprentice tries a magic phrase on a broom, turning the wooden stick into a water-fetching slave. After the initial excitement, the apprentice notices that he has forgotten the magic command and now can’t stop the broom’s tireless work. The entire workshop is filling up with water. When the apprentice tries to break the broom with an axe, the two halves continue working, bringing twice as much water. Finally, the old sorcerer returns, speaks the magic lines, and the broom returns to its corner. For Wiener, the story followed an archetypal pattern that was reflected in countless fables and horror tales and religious texts: man, eager and ambitious, would summon the help of magic, or of a “Jinni,” only to discover that he had lost control of his own creation. But even in these situations of hubris, the deus ex machina would still avert catastrophe.

Wiener’s favorite tale—“The Monkey’s Paw,” a classic horror story from 1902 written by W. W. Jacobs, an English author of short stories—carries this principle to a ghastly extreme. A British sergeant-major brings back from India a talisman, a dried monkey’s paw. Visiting an old friend, the sergeant-major tells the man, his wife, and their grown son, Herbert, that the monkey’s paw has the power to grant its owner three wishes, and that its first owner used the third wish for death. He throws it in the fire, but Herbert’s father retrieves the magic paw. The man wishes for £200. The next day his son leaves for work at a factory. Later that day, a representative of the factory brings the news that Herbert was killed in a machinery accident, offering £200 to the father as compensation. A week after the funeral the mother, consumed by grief, talks the father into using the second wish of the monkey’s paw to get her son back. One evening the father utters the wish. Soon after comes slow and heavy knocking on the door. Excited, the mother rushes to unlock it. The father, realizing that a hideous, undead creature is outside, uses the third wish, and the knocking stops.

Cybernetics finally made sense of these stories. The robots in Čapek’s R.U.R. were fictional machines, so they obviously symbolized machines escaping their human master. But so did the magic broom in Goethe’s “Sorcerer’s Apprentice,” or indeed the monkey’s paw in Jacobs’s horror tale. In Wiener’s mind, the broom and the paw stood for the machine, for man’s imperfect creation of magic. And these magic devices obeyed the rules in a literal way, as single-minded and stubborn as the ladder logic of a control device programmed in binary machine language. The broom executed the user’s command to the letter, like binary code, until somebody entered the magic command to stop it. The paw executed the user’s wish like command-line entry, until a third wish undid the grisly error of the second. In Wiener’s view, the machine wasn’t magic. Rather, the reverse was the case: these stories and poems were the precybernetic way of articulating the logic of machines. Of course, Wiener had an all-encompassing notion of communication and control at that point. But somewhere along the way the professor had lost his orientation. He wanted to understand religion through the lens of cybernetics—but he ended up doing the reverse. Wiener inadvertently understood cybernetics through the lens of religion. The machine was meant to dominate the mythical, but the mythical dominated the machine instead.

If a giant computer with perfect rationality was programmed to win according to rigid rules, the machine would not entertain any additional considerations except winning according to those rules, however narrowly defined. The obvious example was a war game, a nuclear war game. A machine playing the war game needed a preprogrammed notion of victory. If that interpretation of victory was simple and conventional, then victory would be the machine’s goal at any cost, “even that of the extermination of your own side.”

Diebold was concerned that the human decision-maker still had an important role to play in SAGE—the name, after all, stood for Semi-Automatic Ground Environment; the system wasn’t fully automated. Strategy remained a human task. Machines executed strategy only by directing weapons to their targets. To Diebold, that was a problem. The lead time between attack alert and response was becoming smaller. Meanwhile, the number of available weapons was growing bigger. Both made the allocation of weapon systems harder—too hard for inefficient humans with their slow and fallible brains. “Dependence on a human decision maker in our present military environment is hazardous, to say the least,” wrote the man who had coined the word “automation.” Like gadgeteers before and after him, Diebold trusted the machine more than its designers.

The idea of cyberculture was first articulated by the mathematician Alice Mary Hilton in 1963. Her point of departure was unique and rather surprising. Automation was radical, even revolutionary. Calling the effects of automation simply a second nineteenth-century industrial revolution would be “far too narrow,” Hilton believed. Automation was bigger: it wasn’t merely more industrial mechanization, not just an extension of man’s physical power. The nineteenth century, she believed, only completed a prehistoric development: agriculture was enabled because men invented tools as an extension of their physical prowess; automated cybernetic systems were simply going one step further by extending humans’ mental prowess. The twentieth century would hence unleash a genuinely new development: humankind’s emancipation from repetitive tasks. Creative minds would truly be free to think. Everything that human beings might need or want, Hilton foresaw, would very soon be produced by machines—“solely by machines without any human intervention or labor.” The industrial revolution of the nineteenth century wasn’t big enough as a comparison, Hilton argued. The agricultural revolution was the better analogy. The ability to cultivate crops and livestock turned food gatherers and primitive hunters into food growers and organized communities. Cultivating plants and domesticating animals freed “some” of their energy to create civilization. Automating production would have a similar effect yet again: now “all of human energy” could be freed from the task of providing for survival. Instead of an agricultural revolution, humankind would now face a cybercultural revolution.

ORGANISMS

Organic machines could be realized in one of two ways. One was bolting machine parts onto existing biological organisms. From the beginning, the goal of merging the artificial and the natural was to enhance the performance of the organism. Machine modification could help make an animal—and ultimately humans—fit to survive and operate in previously hostile environments, such as outer space or the deep seas. Life was no longer bound by evolution. The resulting modification was a cyborg, shorthand for “cybernetic organism.” The second possibility was even more ambitious: creating living machines without an organic base. Stand-alone machines, entirely without tissue, could be endowed with features of living organisms, such as the ability to reproduce, to mutate, to evolve, and to think—or to fight and kill autonomously. Endowing machines with lifelike attributes raised two hairy questions: when and if machines could come alive, and when and if machines could outperform human beings. Only the distant future would hold answers to these questions, if there were, in fact, answers. These were more philosophical than technical themes. But their appeal was bound to grow as technology advanced and as cybernetics offered an inspiring vocabulary for coming to terms with machines as organisms.

“Cybernetic mechanisms,” the GE engineers understood, had a range of advantages over conventional vehicles: effective man-machine integration eliminated levers, brake pedals, and clutches; it made programming obsolete; it required very little training; its force feedback reduced risk; and cybernetic machines would free operators to focus on the actual problem at hand. “The operator is able to react in such a natural manner that he subconsciously considers the machine as part of himself,” Mosher told the army’s transportation experts in Michigan. The army’s vision was to create some sort of intelligent full-body armor, turning the soldier, in effect, into a walking tank.

GE’s work pushed the engineers into philosophical terrain. There was a subtle difference between human control and automatic control. For Mosher, a simple inert hand shovel used in the garden was a cybernetic anthropomorphic machine; it extended the human body and senses and could be used without training. The shovel perfectly extended the user’s arm, functionally becoming a part of the operator’s body. But more complex machinery—for instance, a crane—broke up this union, cutting the operator off from “continuous sensory appraisal.” For optimal control, the user needed to sense force, surfaces, position, speed, and direction—not simply see the end of the crane’s arm from a remote cabin. Operating a crane was a bit like trying to catch a ball while looking at yourself and the ball in a mirror; it was difficult and clumsy. Operating a cybernetic machine was like being a more powerful version of yourself, simply catching the ball—almost like being a spaceship or a TV network.

By 1965, the cyborg had begun to capture the popular imagination. “What is man?” was the opening line of the first full-length book on the subject, Cyborg. Its author, D. S. Halacy, took a grand and ambitious view, portraying the evolution from ordinary man to “superman,” as the book’s subtitle promised, in direct reference to Friedrich Nietzsche’s Übermensch, as the author made clear in the text. For millions of years, the evolution of humans had been left to nature. Now, by the early 1960s, humans had taken evolution into their own hands. Human progress wasn’t any longer driven passively by evolution. “Participant evolution” meant that man himself was now an active factor in his own development—in the masculine language of the 1960s. Radical changes would become possible to adapt the body to extreme environments: nose and mouth could be permanently sealed to enable life in the vacuum of the space beyond Earth’s atmosphere, while a purpose-built implant would oxygenate the astronaut’s blood. But there wasn’t just outer space; the planet’s “inner space” in the deep seas was equally promising. Already, ocean divers could breathe gases other than air; “a more drastic approach is that of learning to breathe water,” Halacy wrote. In fact, these changes were so drastic that the idea of human evolution itself was probably obsolete. Yes, there was an “evolution to the cyborg”—but then came the cyborg revolution.

Man-machine interaction, of course, wasn’t limited to human physical capacities but could very well apply to human intellectual capacities. Consequently, the computer itself became the subject of man-machine interaction. Perhaps the most influential thinker and technologist to tackle this specific question was J. C. R. Licklider, one of the pathbreaking pioneers of the internet and a participant in the Wiener circle.

Man-machine symbiosis, in short, was superior to automation. Licklider didn’t want to advocate more automation, or to delegate ever more decision authority to machines. He was sharply critical of the automation enthusiasts of his day. The very concept of mechanical extension, as he saw it, led to the idea that humans could and should be replaced by machines, that “the men who remain are there more to help than to be helped.” Licklider wasn’t opposed to this vision in principle. But it was impracticable, “fantastic,” he thought. Like Mosher at General Electric, who worked on a very different problem for the army, Licklider realized that the best systems were a blend of the best of both humans and machines.

The notion that machines could outthink humans was still hot among scientists in the 1960s. Irving “Jack” Good was a leading UK mathematician, then based at Trinity College, Oxford, and the Atlas Computer Lab in Chilton. He had worked as a cryptologist at Bletchley Park with Alan Turing during the war, and later at GCHQ until 1959. Good had become convinced that “ultraintelligent machines” would soon be built. “The survival of man depends on the early construction of an ultraintelligent machine,” he enigmatically opened his most-read paper, in 1965. In Good’s view, a machine was ultraintelligent if it could “far surpass” all the intellectual activities of any human being, however clever. Once this was achieved, Good reasoned, then a singular moment in human history would have arrived. Humans would not be at the top of creation any longer.

Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

CULTURE

The machines, it turned out, had nearly unlimited metaphorical power. The most obvious comparison was the human brain. If the thinking machine was a simplified brain, then the reverse question was practically asking itself: Wasn’t the actual brain just a complex machine? The mind suddenly became something that could be understood and described and analyzed with language borrowed from engineering. And cybernetics provided that language: input and output, negative feedback, self-regulation, equilibrium, goal, and purpose. All this had a spiritual, almost drug-like appeal. Seeing the mind as a machine was liberating. The reason was simple. Man could understand machines, make them, control them, tweak them, fix them, and improve them. If the mind was simply a kind of machine, then humans could understand it, control it, tweak it, fix it, and improve it. Doing so was only a matter of finding the right levers to pull and cogs to turn. No longer was human psychology something mysterious, something unknown, something beyond the comprehension and imagination of ordinary people.

The next logical step was to extend that comparison. If the individual mind was a self-regulating system that could be tweaked by oiling the feedback loops, what else was? Norbert Wiener’s and Ross Ashby’s ideas immediately had a spiritual and quasi-religious appeal that went far beyond the fear of automation, or the fear of machines organically merging with humans. Soon, creative minds beyond the confines of the hard sciences discovered the power of cybernetics—especially in counterculture. By the late 1970s, cybernetics had gone viral. Sometimes in disguise. Many in the countercultural avant-garde came to understand entire communities as whole systems: there was a different way of seeing things, a circular way, where everything was connected, connected by feedback, kept in balance, in touch with the environment, even with animals and plants and rocks, in unity, as one single whole, one planet, shrunk into a village by communication technology. A veritable cult emerged. Seeing communities as self-regulating feedback systems was liberating, driven by a theory of machines that was quite literally “out of control,” in the memorable phrase of the founding editor of Wired magazine, Kevin Kelly. The cybernetic myth had a major cultural impact. Wiener’s work, in its countercultural and highly symbolic reading, forms one of the oldest and deepest roots of that firm belief in technical solutions that would later come to characterize the culture of Silicon Valley.

By 1970, cybernetics had already peaked as a serious scholarly undertaking, and it soon began to fade. Its scientific legacy is hard to evaluate. On the one hand, cybernetic ideas and terms were spectacularly successful and shaped other fields: control engineering, artificial intelligence, even game theory. On the other hand, cybernetics as a science entered a creeping demise, with therapists and sociologists increasingly filling the rolls at the American Society for Cybernetics. Kevin Kelly, the Wired magazine editor, later observed that “by the late 1970s, cybernetics had died of dry rot.” Yet, to the surprise of the remaining founders, cybernetics lived on—not in Boston’s scientific research labs, but in California’s counterculture communes. The rising New Age movement found the new discipline’s mystic side appealing.

The idea’s journey from the East Coast to the West Coast is extraordinary—the ideology emerged and evolved in a range of military-funded and space-related projects, and then found its way into San Francisco’s drug-fueled counterculture, in the short space of two decades. One person played a particularly important role in this metamorphosis: Stewart Brand.

Tools, for Brand, had an almost mythical meaning. Anything could be a tool: a hacksaw, a monocular, a pair of Levi’s 501 jeans, or the ideas in a book. “Here are the tools to make your life go better. And to make the world go better,” he wrote in one of the catalog’s introductions. “That they’re the same tools is our theory of civilization.”

Brand’s thinking was simple: if commune dwellers wanted to go back to the basics of self-sustained living and farming, they needed to know the basics first, and find those basics first. That’s where his catalog came in. It listed and recommended the basics, or what its maker considered to be the basics: guides on shelter and land use, industry and craft, communications, community, nomadics, and—first and foremost—“understanding whole systems.”

Brand had not just studied cybernetics; he oozed it. Naturally, perhaps, the first issue of the Whole Earth Catalog offered readers access to that multipurpose tool: cybernetics.

Brand had a vision, a purpose. He stated his vision on the first page of every Whole Earth publication, where he explained the “function” and then the “purpose” of this widely popular publication. According to Brand, the catalog’s function was to serve as an evaluation and access device. With it, the “user” could find out what was worth getting and where to get it. The purpose was to promote tools for education, inspiration, and shaping the environment—because, Brand wrote, “We are as gods and might as well get good at it.”

Learning was a crucial part of counterculture. Learning was perhaps the only way to expand the mind to see the way into a better, more peaceful, and more just future. To those hungry for mind expansion and knowledge, both psychedelic drugs and computers had instant and intuitive appeal.

Brand was searching for the meaning of consciousness, for the right for life, for what’s sacred. “Tall order,” he admitted in Harper’s magazine. But then something remarkable happened. As Brand put it, “In the summer of ’72, a book began to fill it in for me: Steps to an Ecology of Mind, by Gregory Bateson.”

Already in the early 1950s, Ashby and Wiener had suggested that the line between humans and their tools is arbitrary. The chisel is functionally part of the sculptor. The bomber pilot acts like a servomechanism. Man and machine were forming one system. This was Cybernetics 101, the very basics. Ashby had then pointed out that the line between system and environment is arbitrary. Bateson simply took this idea to its logical conclusion: if the axe was an extension of the man’s self, so was the tree, for the man could hardly use the axe without the tree. So it was tree-eyes-brain-muscles-axe-stroke-tree—“and it is this total system that has the characteristics of immanent mind,” Bateson wrote in Steps. One mind resided not in one person’s skull; it resided in the whole system: “Mind is immanent in the larger system—man plus environment.”

To Bateson and to his many disciples, established views were tilted and biased toward the individual. So deeply entrenched were these established views that even he, Bateson, succumbed to them: “If I am cutting down a tree, I still think ‘Gregory Bateson’ is cutting down the tree.” The self, his own mind, was still an “excessively concrete object to him.” That was different from the cybernetic epistemology. It was different from the true, correct view of the mind that cybernetics enabled. But there was a huge difference between glimpsing a new way of thinking in exceptional moments, and making that new way of thinking habitual. Bateson needed to get into the habit. So he explored experiences that could help him imagine what it would be like “to have this habit of correct thought,” as he called it. He experimented with psychedelic drugs. “Under LSD, I have experienced, as have many others, the disappearance of the division between self and the music to which I was listening,” he said at a lecture in early 1970 in New York City. “The perceiver and the thing perceived become strangely united into a single entity.” Psychedelics were not an escape into a chemically created artificial reality. The drugs didn’t pull down the curtain on the user; they pulled it up. The chemical substance revealed a more accurate, a more correct, and a more wholesome perspective on the world. Psychedelics liberated the viewer from an otherwise artificial reality.

Most members of Brand’s generation, he recalled, scorned computers as embodiments of centralized control, as tools of a militarized superpower in a senseless war in Vietnam, wielding computer-controlled nuclear weapons that could end all human civilization. Counterculture had tried, again and again, to overcome war and capitalism and top-down control. Now, a small group of cultural entrepreneurs and hackers embraced computers as a tool of liberation. These machines, not psychedelic drugs or archetypal geodesic domes, “turned out to be the true royal road to the future,” Brand recalled later, in the mid-1990s.

The early internet was evolving fast. Yet McKenna was ahead of his time. To him, a new form of planetary connection was emerging: “Through electronic circuitry and the building of a global information-system, we are essentially exteriorizing our nervous system, so that it is becoming a patina or skin around the planet,” he told High Frontiers. “And phenomena like group drug-taking and rock-and-roll concerts and this sort of thing,” he said, “these are simply cultural anticipations of this coming age of electronic-pooling-of-identity.”

To the Amazon-traveling ethnobotanist, the very technology that began its evolution in air defense research—and was then refined in the Cold War—didn’t clash with the wholesome peace and oneness of the psychedelic subculture at all. On the contrary, technology and hallucinogens were two sides of the same coin: “I think every time you take a psychedelic drug you are anticipating and experiencing this future state of electronic and pharmacological connectedness,” McKenna suspected in 1984. This was not an eccentric view. It represented an entire subculture.

The WELL was perhaps the first proper online social network with general appeal. It had all the upsides and downsides that come with social media: it was addictive, it was entertaining, it was riveting, and it could be a waste of time. There were great discussions, and there was spamming and trolling. A minority did the majority of the talking. The social-media pioneers had enough experience to intuitively get the details right. Real names mattered. So, everyone’s real name was available on the system, in “finger files.” Any user could simply finger another user’s screen name—say “hlr” or “rabar”—and pull up the user’s real identity. It also mattered that users, not the WELL, took responsibility for what they said. So, Brand and McClure came up with “YOYOW”: “You own your own words.” The motto greeted every user on the log-in screen. Nobody could reproduce what somebody else had written. These rules promoted civility and a high quality of discussion. And the initially small number of regionally clustered users meant that the WELL was indeed different from later and larger social networks in one critical aspect: there was an expectation that you would, sooner or later, meet the others in person, face-to-face, perhaps at one of the regular parties at the home of the WELL’s mainframe and its administrators in Sausalito, a pleasant half-hour drive across the Golden Gate Bridge from downtown San Francisco. All this made for a real small-town community feel online. The discussions were organized in so-called “conferences.” These threads covered a potpourri of topics: environment, future, books, drugs, sexuality, or “best and worst memories of the sixties.” All this appealed to baby boomers, who had come of age in the 1960s and were now in their late thirties and early forties—bright, libertarian, male, with postgraduate degrees. Their demographics and common interests alone made this online community a unique cultural phenomenon, in a “self-absorbed, cabalistic way,” observed journalist Katie Hafner in a Wired article about the WELL that has become a classic.

The very idea of separating online and offline didn’t square with the cybernetic minds of the Whole Earth pioneers. It was like isolating one of Ashby’s units of the homeostat, like cutting off Wiener’s negative feedback loops, like taking Bateson’s axe away from the man felling the tree—the antithesis of balance and whole systems. From the get-go, Brand wanted the Whole Earth network to be self-governing; he wanted the system to be a sociotechnical homeostat, a collective thinking machine and communal learning mechanism.

SPACE

It was a potent cybernetic idea: a separate, virtual, computer-generated space distinct from real physical space—what later became known as “cyberspace.” For decades, visual artists in advertising and film had imagined the space inside electronic apparatuses as akin to outer space, portraying the atom as a solar system, with electrons orbiting like planets, the whole Earth encapsulated in a lightbulb, or tiny humans working inside giant vacuum tubes. By the early 1980s, the idea had found its moment. The space inside the machine inspired and mesmerized the countercultural avant-garde, which already sensed that the technical ground was shifting. The 1980s became a decade of imagination, of “consensual hallucination,” in the immortal words of William Gibson.

By the end of the decade, virtual-reality technologies were all the rage: data gloves, data goggles, and data suits triggered eccentric visions of how humans would interact through networked computers and wearable interfaces. The machines of the near future, many readers of the Whole Earth Review came to believe by the late eighties, would enable intense immersive experiences that would rival and surpass the most intense emotional experiences available to date: sex, music, and drugs. An entire subculture—cyberpunk—emerged at the curious intersection where technology and networked machines met mind expansion, psychedelics, music, and fashion. By 1988, entrepreneurs and intellectuals, inspired by cybernetics, control, man-machine interfaces, and whole systems, had dubbed this new place “cyberspace.”

At first, this novel space was something that could be entered only with fancy hardware interfaces, with goggles and gloves and data suits. It was synonymous with virtual reality. Then, sometime in the spring of 1990, a curious flip happened. Cyberspace became almost synonymous with the entire internet. By the early 1990s, virtual space—discovered by the air force and coveted by hippies—became something that could be accessed with more affordable hardware by millions of legitimate operators; a personal computer and a modem were enough to begin the fantastic voyage to the new frontier. Cyberspace was on its gleaming path to becoming a mythical new realm of freedom and liberty—and of war.

The tension between hippie utopia and nuclear dystopia was palpable. It expressed itself in an escapist counterculture characterized by an intoxicating blend of punk, futurism, surreal collages, psychedelic visual art, a virtual-reality technology fetish, industrial and electronic music, and drugs. Cyberpunk had its own distinct fashion, with ponytails on men, tattoos on women, all-black leather jackets with mirrored shades on everybody, and hats brimmed with irony. The subgenre’s symbolic father figure was William Gibson, an American-Canadian novelist with more links to counterculture than to computer science.

“Cyberspace,” for Gibson, was meant to evoke a virtual, disembodied world of computer networks that users would be able to “jack into” through consoles and portals.

Gibson’s novels had their own aesthetic. People lived in sprawling cityscapes, crammed and gritty and dark. Washed-up computer cowboys and hustlers with alien tattoos, shades, and neural implants blurred the line between human body and machine part. The stories had an apocalyptic feel, like Ridley Scott’s 1982 cult film Blade Runner or The Matrix, a 1999 film directed by the Wachowskis. Gibson romanticized the technology. When he shaped the language and the aesthetic of cyberpunk, he didn’t even know that hard drives had spinning disks. “Fortunately I knew absolutely nothing about computers,” he recalled.

Two science fiction visions of computer-generated spaces would appeal to different yet overlapping communities. Gibson’s Neuromancer appealed to a wider and, in the short term, more influential community passionate about counterculture, aesthetics, virtual reality, and drugs. Vernor Vinge’s True Names appealed to a narrower group that became influential only in the long term: those passionate about engineering, gaming, encryption, and privacy.

For Jaron Lanier, the virtual-reality pioneer, VR was even better than LSD because it was, by definition, a social experience, not chemically induced isolation. The effect would be wholly positive. Lanier believed his new technology would “bring back a sense of the shared mythical altered sense of reality that is so important in basically every other civilization and culture prior to big patriarchal power.”

Wearable devices, and the virtual reality they would simulate so perfectly, would enable human operators to break free of desks, screens, mice, and entangling wires. The new technology promised new ways to move inside a human body, in atoms, or inside buildings that existed only as design plans. Virtual reality would revolutionize industrial design, medicine, architecture, space exploration, entertainment, education, games—even sex and drugs.

But all that was theory. All those bold claims about social interaction in virtual reality were vision alone. It had never been done. The actual technology wasn’t there yet. In practice, Lanier’s cyberspace remained a solitary experience. Limitations in computing power and bandwidth did not allow for a truly interactive experience through glove and eye phone (at least not for another quarter century).

Leary had famously pioneered the use and study of psychedelic drugs. Yet in hindsight, he found that until the late 1970s he had lacked a language to express how the brain actually worked. But the psychologist had an epiphany when he got his first computer. He suddenly understood that cybernetic terminology was ideal for describing the workings of the mind and the brain, that “human biocomputer” he had already spoken about in Berkeley in 1983, in remarks covered by the drugs-and-technology magazine High Frontiers. Comparing the body to manufactured artifacts had been common for a long time: hydraulic engineering helped us understand the heart as a pump with valves and pipes and the blood as a circulation system, Leary believed. Now cybernetics enabled the understanding of the brain’s piping. Networked computers, for Leary, were the operating system of the mind itself. Millions of young Americans understood that “the best model for understanding and operating the mind came from the mix of the psychedelic and cybernetic cultures.”

Two sharply different communities seized on the myth of cyberspace as the new frontier: West Coast privacy activists and the East Coast defense establishment. For the former, cyberspace was a space of freedom; for the latter, it was a space of war.

ANARCHY

Silicon Valley and America’s technology start-up scene, still bathing in the crisp utopian afterglow of the 1980s, watched the rise of the New Economy, with vertigo-inducing growth rates. Entrepreneurs rubbed their hands in anticipation. Intellectuals were inebriated by the simultaneous emergence of two revolutionary forces: personal computers and the internet. More and more PC owners connected their machines to the fast-growing global computer network, first with clunky, screeching modems, then with faster and faster broadband connections. But amid the hype and a slowly but steadily growing economic bubble, it dawned on a number of users that something was missing: privacy and secure communications. History, thankfully, was gracious. Even more than that: nature itself was generous to humans in front of plastic keyboards. Unrelated to either PCs or the internet, cryptographers had made a third and no less far-reaching discovery in the 1970s. They didn’t just invent a technology; more like explorers than innovators, they discovered an algorithm based on a beautiful mathematical truth. That truly revolutionary technology was finally unleashed for widespread public use in June 1991: asymmetric encryption, also known as public-key cryptography. When free crypto was added to the computer underground, “crypto anarchy” emerged. Now people with mirror shades, modems, and PCs could be against stuff. And even better, despite the decade’s spirit of unrestrained optimism, they had found something concrete to be against: the government’s attempts to regulate ciphers. And so cypherpunk was born, a pun on “cyberpunk.” The ideology was powerful—far more powerful and durable than those whimsical and short-lived names implied.

Public-key encryption was revolutionary for a simple reason. It solved the age-old security problem of key distribution. Sharing a secret key had previously required a secure communication channel. If Alice wanted to send Bob a secret message, she would first need to share the secret key with him. But a secret could not be shared on an insecure channel. Suppose Alice sent Bob a letter containing the secret key and asking him to use it to scramble their subsequent correspondence—say, by replacing every letter with a specified alternative letter. Eve (cryptographers like to call the supposed evil eavesdropper “Eve”) could simply intercept the letter and make a copy of Alice’s secret key en route to Bob. Eve would then be able to read all future messages encrypted with this key.
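
To make the problem concrete, here is a minimal sketch of the scenario just described, using a toy substitution cipher; the message and the code itself are illustrative, not anything the cryptographers of the period wrote. Because the key travels over the same channel Eve is watching, her copy of it is exactly as good as Bob’s.

```python
import random
import string

# Toy illustration of the key-distribution problem: Alice must send Bob the
# substitution key over the same insecure channel that Eve is watching.

def make_substitution_key():
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    random.shuffle(shuffled)
    return dict(zip(letters, shuffled))

def encrypt(plaintext, key):
    return "".join(key.get(c, c) for c in plaintext)

def decrypt(ciphertext, key):
    inverse = {v: k for k, v in key.items()}
    return "".join(inverse.get(c, c) for c in ciphertext)

# Alice mails the key to Bob; Eve copies it en route.
key_sent_to_bob = make_substitution_key()
eves_copy = dict(key_sent_to_bob)          # interception is just copying

ciphertext = encrypt("meet at the usual place", key_sent_to_bob)
print(decrypt(ciphertext, eves_copy))      # Eve reads everything Bob can read
```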

By the 1960s, the British military had started worrying. Tactical radio had become more widespread, along with computers and telecommunication technology, making the problem of key distribution worse. “The management of vast quantities of key material needed for secure communication was a headache for the armed forces,” recalled one of the British government’s leading cryptographers at the time, James Ellis. Ellis first believed, as was generally assumed then, that no secret communication was possible without a secret key first being shared. His view changed with the chance discovery of a World War II report, Final Report on Project C-43, by a Bell technician, Walter Koenig, also prepared under an NDRC contract. Back in October 1944, Koenig had suggested a theoretical way of securing a telephone call by having the recipient of a call add noise to the signal and then subtract it afterward. Only Bob could subtract the noise, because only he knew what he had added in the first place. An eavesdropper, Eve, simply would not know how to remove the noise, because she had no way of knowing what noise had been added to the phone conversation in the first place. The system was impractical at the time. But Ellis got the decisive and entirely counterintuitive cue: there was no need to assume that only the sender could modify the message; the recipient could have a role as well. “The noise which had been added,” Ellis wrote in 1970, “had been generated by the recipient and is not known to the sender or anyone else.” The recipient, therefore, “takes an active part in the encipherment process.” In theory, at least, Ellis seemed close to solving the age-old key distribution problem. Now the secret British cryptographers needed to find a mathematical way to enable the recipient to take part in ordinary encryption and decryption. “The unthinkable was actually possible,” Ellis recalled.
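
A rough numerical sketch may help fix the idea Ellis took from Koenig’s report: the recipient alone generates the noise, so the recipient alone can subtract it again. The integer “signal” and the arithmetic below are illustrative stand-ins for an analog telephone line, not Koenig’s actual scheme.

```python
import random

# Toy sketch of the C-43 idea: the *recipient* drowns the line in noise of his
# own making, then subtracts it afterward. Signals are just lists of integers.

signal = [3, 1, 4, 1, 5, 9, 2, 6]                      # what the caller says
noise = [random.randint(-100, 100) for _ in signal]    # generated by Bob only

# What travels on the wire, and what Eve overhears: the signal buried in noise.
on_the_wire = [s + n for s, n in zip(signal, noise)]

# Bob, who alone knows the noise he injected, removes it again.
recovered = [w - n for w, n in zip(on_the_wire, noise)]
assert recovered == signal       # no secret key was ever shared in advance
```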

Two large prime numbers are easily multiplied, but it is nearly impossible to reverse the step and find the two primes that were used to generate the product. Multiplication took seconds; factoring would take millions of years, even with the most powerful computers. The algorithm that Rivest, Shamir, and Adleman suggested took advantage of this asymmetric factorization problem. The public encryption key would contain the product; the private decryption key would contain the two primes. It was safe to share the public key on an insecure channel because the factorization problem was so hard that the key was, in effect, already encrypted, scrambled by a one-way function that was easy to perform but nearly impossible to reverse. It was magic.
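
The asymmetry can be demonstrated in miniature. The sketch below multiplies two known primes instantly and then recovers them by trial division, a crude stand-in for real factoring algorithms; the particular primes are arbitrary choices for illustration, and actual public keys use primes hundreds of digits long, far beyond any such search.

```python
# Multiplying two primes is the easy direction; recovering them from their
# product is the hard one. These primes are tiny by cryptographic standards,
# yet the two directions already feel different.

p, q = 15_485_863, 32_452_843      # two known primes, chosen for illustration
n = p * q                          # instantaneous

def factor_by_trial_division(n):
    """Crude stand-in for real factoring algorithms."""
    if n % 2 == 0:
        return 2, n // 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return None

print(factor_by_trial_division(n))   # noticeably slower than the multiplication
```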

The NSA would try everything it could to stop strong crypto from going public, and it would continue to try over the next two decades: cutting government funding of cryptographic research, or taking over the funding; vetting papers before publication; threatening scholars with criminal proceedings, or trying to convince them that publication damaged the national interest. The agency’s attempts to stop crypto were clumsy and ham-handed. Even its most potent tool, classifying encryption as a weapon under the International Traffic in Arms Regulations, would ultimately fail. The NSA’s attempts to rein in crypto in the late 1970s foreshadowed a trend: the government’s endeavors to counter the rise of strong encryption confirmed the worldview of those who were inclined to distrust Washington’s secret machinations. The leaked Pentagon Papers and the ensuing Watergate affair earlier in the decade had eroded trust in the federal government, especially on the libertarian left. Resistance was brewing.

Public-key cryptography made it possible to keep a message private: The sender would scramble the clear text with a key that the recipient had “publicly revealed.” Then the recipient, and only the recipient, could use the matching private key to unscramble the message’s ciphertext. But the new technique could do even more. Public-key cryptography made it possible to “sign” a message electronically, by doing exactly the opposite: having the sender encipher a signature with a privately held encryption key, thus enabling the recipient to verify the message’s origin by deciphering that signature with the sender’s publicly revealed key, thereby proving that only one party, the legitimate sender, could have scrambled the message’s signature. Everybody could decipher and read the signature, but in only one way: with the sender’s public key. This form of authentication was like a handwritten signature on steroids: signatures could be verified by everybody and forged by nobody. Electronic mail could now be even better than old-fashioned snail mail, with sealed envelopes that only the intended recipient could open and signatures that were impossible to fake, guaranteeing confidentiality and authenticity.
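
Both uses can be shown with textbook RSA and deliberately tiny numbers, as in the sketch below; the primes 61 and 53 and the exponent 17 are the usual classroom example, not anything from the historical systems, and real implementations pad the message and sign a hash of it rather than the message itself.

```python
# Textbook RSA with tiny primes, just to show the two uses described above:
# encrypt with the recipient's public key, sign with the sender's private key.
# No padding, no hashing -- unusable in practice, but the arithmetic is the same.

p, q = 61, 53
n = p * q                      # 3233, part of both keys
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent, a modular inverse (Python 3.8+)

message = 65                   # a message encoded as a number smaller than n

# Confidentiality: anyone encrypts with (e, n); only the key owner decrypts.
ciphertext = pow(message, e, n)
assert pow(ciphertext, d, n) == message

# Authenticity: the key owner "signs" with d; anyone verifies with (e, n).
signature = pow(message, d, n)
assert pow(signature, e, n) == message
```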

Thankfully, public-key encryption had emerged just in time to save privacy from automation, computerization, and data-hungry corporations and governments. So Chaum started working on concrete solutions: untraceable electronic mail, digital pseudonyms, anonymous credentials, and general protection of privacy. Chaum is best known for yet another revolutionary cryptographic discovery: blind signatures. The nondigital equivalent to a blind signature would be using carbon paper to sign a letter that is already in an envelope, without having read the letter first. A signature, in short, is blind when the content of a message is disguised before the signature is added. This signature can then be used to verify the undisguised message. Chaum had two situations in mind where blind signatures could be put to use: one was digital voting. Alice might want to prove that she cast a vote in an election while keeping her actual vote anonymous. Chaum’s sophisticated digital blind signature scheme made this possible. A voter could sign the ballot without revealing the cast vote. It became possible to confirm all this electronically: Alice votes anonymously, Bob sends her a blind receipt, and Eve doesn’t see any of it. But Chaum’s true passion was another purpose for using blind signatures: digital cash.
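
A minimal sketch of an RSA-style blind signature in the spirit of Chaum’s idea, reusing the toy key size from above: the single-number “ballot” and the bare-bones protocol are assumptions for illustration, and a real voting scheme involves far more machinery than this.

```python
from math import gcd
import random

# Minimal RSA blind signature: the voter blinds her ballot, the authority signs
# the blinded value without seeing the ballot, the voter unblinds the signature.

p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))      # the signing authority's private key

ballot = 42                             # the vote, encoded as a number < n

# The voter picks a random blinding factor r, coprime to n, known only to her.
r = random.randrange(2, n)
while gcd(r, n) != 1:
    r = random.randrange(2, n)

blinded = (ballot * pow(r, e, n)) % n          # all the authority ever sees
blind_signature = pow(blinded, d, n)           # signed "through the envelope"
signature = (blind_signature * pow(r, -1, n)) % n   # the voter removes the blinding

# The unblinded signature verifies against the original, never-revealed ballot.
assert pow(signature, e, n) == ballot
```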

So powerful were Chaum’s ideas that an entire movement arose. That movement believed crypto was en route to making the state as we know it obsolete. Many of these early cryptographers had been exposed to a powerful streak of American culture: civil libertarianism with its deep-seated distrust of the federal government—or of any government. Counterculture, with its focus on free speech, drugs, and sexual liberation, was constantly pushing the boundary of what was legal. Meanwhile, the NSA’s hysterical reaction to basic crypto scholarship amplified this hostility toward government in the emerging computer underground of the 1980s. So it was no coincidence that Bay Area cryptographers unearthed what would become one of the most potent political ideas of the early twenty-first century.

May was a ferocious reader of fiction as well as nonfiction. Crypto was so new and so radical in its implications that inspiration simply couldn’t come from science, he thought; it could only come from science fiction. Vernor Vinge’s novella True Names came to May’s attention in 1986. “You need to read this,” a friend told him, giving him a dog-eared Xerox copy of the entire short story. Vinge feared total identification and transparency: “It occurred to me that a true name is like a serial number in a large database,” the science fiction writer recalled later. The names could serve as identifiers, connecting otherwise disparate information, as what intelligence officers call “selectors.” Whoever had access to a true-names database would have power over the objects in the database. Vinge’s 1981 novella spelled out the very same tension that was driving Chaum’s fear of the “dossier society” at the very same moment. May was “riveted,” he said later. He thought the story articulated a number of themes that were swirling around in “computer circles” at the time—notably, the role of digital money, anonymity, pseudonyms and reputations, and countering the government’s interest in imposing control “in cyberspace.”

Cryptography provided the “ontological support for these cyberspatial worlds,” he understood. The astounding mathematical power of quite large primes guaranteed enduring structures in the vastness of a new space that could now be safely “colonized,” May believed. Owning a particular “chunk of cyberspace,” he explained, meant running software on specific machines and networks. And the owners of such virtual properties made the rules. They set access policies and determined the structure of whatever was to happen there: “My house, my rules.” Anybody who didn’t like the rules in a particular virtual world would be welcome to stay away. And, May was convinced, anybody who wanted to call in old-fashioned governments to force a change of the rules would face an uphill battle.

For the libertarian minded, crypto anarchy meant that “men with guns” could not be brought in to interfere with transactions that all participants mutually agreed on. Taking violence out of the equation had two wide-reaching consequences. Two types of men with guns would find crypto hard to cope with. The first were the police and agents of federal law enforcement. No longer would they be able to trace and find those who refused to declare income or who dealt in illegal goods. The state, in short, would lose a good deal of its coercive power. If financial transactions became untraceable, enforcing taxation would be impossible. And that, of course, was a good thing. “One thing is for sure,” May told Kevin Kelly of the Whole Earth Review as early as late 1992, “long-term, this stuff nukes tax collection.” But crypto wouldn’t affect only the government and the rule of law. The other kind of men with guns were criminals. And the same applied to them. Criminals, too, would lose their power to coerce others with threats of physical violence. If the buyers of drugs, for instance, were untraceable not just by the Feds but also by gangs, then markets that were chronically plagued by violence would become nonviolent, and the abuse would stop. Anonymously ordering LSD online was much less risky than going to dodgy street corners and talking up shady pushers. Strong crypto, made widely available, enabled totally anonymous, unlinkable, and untraceable exchanges between parties who had never met and who would never meet. The anarchists saw it as a logical consequence that these interactions would always be voluntary: since communications were untraceable and unknown, nobody could be coerced into involuntary behavior. “This has profound implications for the conventional approach of using the threat of force,” May argued in the Cyphernomicon. It didn’t matter whether the threat of force came from governments or from criminals or even from companies: “Threats of force will fail.”

Making strong encryption available to John Doe was a big step forward for privacy. But it didn’t even begin to solve a fundamental problem: scrambling plaintext to ciphertext beautifully protected the letter inside an envelope. Whoever opened the envelope could not read what was inside. That was great. But the encryption available at the time didn’t conceal what was on the envelope: the sender’s address, the receiver’s address, and some other information about when and how the letter was sent. The correspondents’ identity was openly revealed. Encryption protected the content of packets but not the headers—what later would be called “metadata.” The now publicly available PGP protocol left metadata unprotected. PGP on its own, in short, created confidentiality, not anonymity. The cypherpunks wanted a solution to this problem.

Remailers were the solution. These were dedicated machines programmed to take scissors to the envelopes, encrypted or not, and cut out the sender’s address before forwarding the e-mail to the recipient. Remailers, in other words, were servers that automatically stripped e-mails of information that could identify the sender: the code running on the remailer would cut out the metadata, remove the sender’s address, replace it with a nonexistent placeholder such as nobody@shell.portal.com, and forward the message to the intended recipient. It was like writing a letter with no sender’s address, or like calling somebody from a public phone with a distorted voice. Remailers could also be chained, to increase security, just in case one remailer kept a log file that could identify the sender. Court orders or lawsuits were ineffective against machines that automatically forgot data. But integrating PGP into remailers was a problem, at least initially. Eric Hughes and Hal Finney wrote the first such remailers in 1992, in the programming languages Perl and C. By 1996, several dozen remailing machines would be operational. They had many uses. It became possible, for instance, to publish sensitive information simply by e-mailing it to a publicly accessible e-mail list, because nobody could trace the e-mail back to its sender. In this way the remailers were used to “liberate” ciphers that had not been published before, to spill a few government secrets, and to reveal secrets of the Church of Scientology.
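
The stripping step can be sketched in a few lines. The header names, the example addresses, and the forwarding stub below are assumptions for illustration, not the Perl and C that Hughes and Finney actually wrote; only the nobody@shell.portal.com placeholder comes from the account above.

```python
# Sketch of what an anonymous remailer does with an incoming message: drop
# every header that could identify the sender, substitute a placeholder
# address, keep no log, and hand the body on to the next hop.

IDENTIFYING_HEADERS = {"from", "reply-to", "sender", "received", "message-id"}
PLACEHOLDER_SENDER = "nobody@shell.portal.com"

def strip_and_forward(raw_message: str) -> str:
    headers, _, body = raw_message.partition("\n\n")
    kept = [line for line in headers.splitlines()
            if line.split(":", 1)[0].strip().lower() not in IDENTIFYING_HEADERS]
    kept.insert(0, f"From: {PLACEHOLDER_SENDER}")
    return "\n".join(kept) + "\n\n" + body

incoming = (
    "From: alice@example.org\n"
    "To: list@example.net\n"
    "Subject: liberated cipher\n"
    "\n"
    "The payload travels on; the sender's address does not.\n"
)
print(strip_and_forward(incoming))
```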

For Barlow and the cypherpunks, this Vingean prophecy of a borderless cyberspace secured through the power of large primes was a shimmering pacific dream slowly inching into reality; for national security–minded government officials, it was a sinister, threatening nightmare. So in April 1993, the White House under Bill Clinton tried a new approach: if they couldn’t stop the spread of crypto, they could perhaps control it. The government proposed a new federal standard for encryption. The proposal was officially named the Escrowed Encryption Standard, or EES. It was designed to enable encrypted telecommunication, especially voice transmission on mobile phones—but with a twist. The standard encompassed an entire family of cryptographic processors, collectively and popularly known as “Clipper chips.” The government’s expertise for designing such a system, naturally, resided in the country’s mighty signal intelligence agency, the NSA. The proposal was then to be implemented through NIST, the National Institute of Standards and Technology. The system’s basic feature was simple in theory: when two devices established a secure connection, law enforcement agencies would still be able to access the key that was used to encrypt the data. In short, communication was protected, but the FBI could read the mail or listen in when needed.

The technical implementation of that simple idea was more difficult than expected. NSA engineers came up with what they thought was a neat trick. To make a secure phone call, two phones would first establish a so-called session key to encrypt the conversation. That much was a given. The session key would unlock the ciphertext and reveal the plaintext. So the NSA needed to find a way to make the session key accessible to law enforcement without compromising the phone’s security. To do that, they created a so-called Law Enforcement Access Field, abbreviated LEAF. The LEAF would retain a copy of the session key. That retained session key, of course, was sensitive, and was itself encrypted with a device-specific key, called the “unit key.” This unit key was assigned at the time the Clipper chip was manufactured and hardwired into the device. Unit keys were held in “escrow” by two government agencies. The Feds, in short, had a spare key for encrypted traffic.

The White House argued that Clipper would achieve twin goals: the chip would provide Americans with secure telecommunications, and it would not compromise law enforcement agencies in their ability to do legal, warranted wiretaps. The cypherpunks, predictably, called BS on Clipper. The chip wasn’t just controversial; it was a bombshell. The tiny chip was the big cause the movement had been waiting for. The very idea that a government, whatever its constitutional form, should be allowed to hold a copy of all secret keys was simply absurd to the growing number of crypto activists. “Crypto = guns” now meant that the Clinton administration faced the combined rage of First Amendment and Second Amendment activists, of those in favor of free speech and armed self-defense: Berkeley academics = NRA types. “Would Hitler and Himmler have used ‘key recovery’ to determine who the Jews were communicating with so they could all be rounded up and killed?” May asked on the list, rhetorically.
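
The escrow arrangement can be sketched schematically. The XOR “cipher,” the key sizes, and the field layout below are stand-ins for the classified SKIPJACK algorithm and the real LEAF format, which also carried a chip identifier and a checksum; this is a conceptual sketch of who holds which key, not of the chip itself.

```python
import os

# Schematic sketch of key escrow: the unit key is burned into the chip at
# manufacture and split between two escrow agencies; every call emits a LEAF
# holding the session key encrypted under that unit key.

def toy_cipher(data: bytes, key: bytes) -> bytes:
    """XOR 'encryption'; applying it twice with the same key decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# At manufacture: the unit key, split into two shares held by two agencies.
unit_key = os.urandom(16)
escrow_share_1 = os.urandom(16)
escrow_share_2 = toy_cipher(unit_key, escrow_share_1)   # share_1 XOR share_2 = unit key

# At call time: the phones agree on a session key; the chip emits the LEAF.
session_key = os.urandom(16)
leaf = toy_cipher(session_key, unit_key)
traffic = toy_cipher(b"this call is 'secure'", session_key)

# With a warrant: both shares are combined, the LEAF opened, the call read.
recovered_unit_key = toy_cipher(escrow_share_2, escrow_share_1)
recovered_session_key = toy_cipher(leaf, recovered_unit_key)
print(toy_cipher(traffic, recovered_session_key).decode())
```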

Most influential would be a bundle of court cases brought by Daniel Bernstein, known as Bernstein v. United States. The young mathematician, represented by the EFF, challenged the State Department for requiring him to get a license to be an arms dealer before he could publish a small encryption program. Then, by the end of the century, on May 6, 1999, the US Court of Appeals for the Ninth Circuit would rule, in a first, that software code was constitutionally protected speech, eventually ushering in the end of the hated cryptographic export control regime.

Crypto anarchy, some successes notwithstanding, seemed to meander toward the fringe. But curiously, the ideology lacked a proper book-length treatment. There was graffiti, as well as a deluge of rambling e-mails, magazine stories, interviews, and the messy and disorganized Cyphernomicon. The cypherpunks wrote code, but not books. May, the movement’s self-styled essayist, had tried and failed. Then, in 1997, Simon & Schuster, one of the big New York publishing houses, published The Sovereign Individual. It was a strange book, full of apocalyptic yet optimistic predictions. The two authors, inspired by the political philosophy of cypherpunk, left out the jargon and the arcane crypto discussions, yet kept the boldness: cyberspace was about to kill the nation-state, they argued. Lord William Rees-Mogg was a prominent, albeit sharply controversial, figure in British public life. The owlish Rees-Mogg had been editor of the Times from 1967 to 1981, and later vice-chairman of the BBC and chairman of the Arts Council of Great Britain. In 1988 he was made a life peer in the House of Lords, as Baron Rees-Mogg of Hinton Blewett in the County of Avon. His coauthor was James Dale Davidson, a conservative American financial commentator and founder of the National Taxpayers Union, an advocacy group. “As ever more economic activity is drawn into cyberspace, the value of the state’s monopoly power within borders will shrink,” Rees-Mogg and Davidson predicted. “Bandwidth is destined to trump the territorial state.” To back up their futurology, the two pundits called on the acid-dropping former cattle rancher from Wyoming, John Perry Barlow. He had it right, they said: “Anti-sovereign and unregulatable, the Internet calls into question the very idea of a nation-state.” Echoing May and the cypherpunks, they argued that the state’s threats of coercion would simply be ineffective online, shielded by strong crypto.

Crypto anarchy embodied the unshakable cybernetic faith in the machine. It combined Wiener’s hubristic vision of the rise of the machines with Brand’s unflinching belief that computers and networked communities would make the world a better place. A direct line connects the techno-utopianism of Timothy Leary to the techno-utopianism of Timothy May, cyberpunk to cypherpunk.

WAR

The brightest minds of the budding Bay Area technology avant-garde agreed with their fellow counterculture intellectuals: they were sure the computerized future was unstoppable—yet they remained unsure how the machines would benefit the forces of good and evil. Their future vision was oscillating between dystopia and utopia. That tension made cyberpunk fizz with excitement—and it was that tension between dystopia and utopia that would come to define the Pentagon’s interest in “cybernetic war” at the same time. Because the new weapons could be used as tools for oppression, or liberation. Because technology could be used by state authorities, or by those resisting authority, by the army or the insurgency—by the empire or the rebels, in the language of Star Wars. Yes, the Department of Defense was pouring money into the development of fancy weaponry that no liberation movement could dream of ever matching. Yet, viewed up close, change was afoot: technology was empowering the strong and the weak. The rise of the personal computer, the spread of networks, and the discovery of public-key encryption were—paradoxically—both strengthening and threatening hierarchy and authority.

RAND understood that information was power—not power in some other plane, but in the here and now. At times of war, this wasn’t just an adage. It meant something concrete: information was organizational power. Superior command, control, communication, and intelligence had always been critical in war. New technologies would make them even more critical in future war. The prefix “cyber,” so in vogue in the early 1990s, meant just that—“to steer or govern”—Arquilla and Ronfeldt pointed out.

Cybernetic attacks at first appeared to be a shiny new weapon that the mighty US armed forces could hurl at its enemies, as America’s gift to warfare. What looked at first like a military opportunity revealed itself more and more as a prime threat. The machines, it seemed, had turned against their American creators.

Moonlight Maze was the first known state-on-state cyber attack in history, literally monumental in scale, and it was duly portrayed as the first “cyberwar” and as the long-awaited electronic Pearl Harbor, back in 1998 and early 1999. The pioneering Russian campaign would indeed accurately foreshadow the future of the most common and most costly computer network breaches over the next two decades, down to the level of tactics, techniques, and procedures; it was advanced, persistent, clandestine, and reactive. Identifying start, end, costs, and perpetrators required time and hard work. But when the long-anticipated future emerged in 1998, it wasn’t good enough, yet again; it was too pale and unbefitting. Reality once more underdelivered and disappointed the visionaries, with all those unseemly details spoiling the brave prospect of what the future of networked machines had in store.

FALL OF THE MACHINES

The history of cybernetics holds several warnings. The new field’s scientific popularity peaked around 1969. By the 1980s, cybernetics had lost a great deal of academic respectability; it had “died of dry rot,” as Wired’s Kevin Kelly memorably put it. Today, the most serious scholars working on cybernetics are historians of science and cultural scholars; engineers and computer scientists abandoned it in droves, renaming their projects and their degrees. “Cybernetics,” nearly seventy years after the word was coined, sounds oddly old-fashioned and out-of-touch. Yet the myth has lost none of its formidable force. For technology, as ever, exudes certainty: Flick a switch and the engine will stop. Push a button and the light will go out. Run a calculation and the computer will display the result. It is with such mechanized certainty that futurists have, again and again, predicted the future. Cybernetics, as Ross Ashby put it so eloquently, was a theory of all machines, including those machines that had not been built yet. Cybernetics was thus equipped with uncanny powers. It professed to master the mechanics of the future. It claimed to control the wiring that linked yesterday to tomorrow. On closer examination, cybernetics thus carried several patterns through the decades—patterns that are also warnings.

One pattern is spiritual. Not always, but often, the machine has become a godhead, an idol. Norbert Wiener, mesmerized by his own invention, wanted to understand magic and religion through his theory of the machine—but he achieved the opposite: the spiritual encroached on the mechanical, and Wiener himself came to see the machine in mythical terms. First, humans were seen as functioning like machines, then the community became a machine, and finally the spiritual itself became cybernetic. Science created a totem. The machine became the avatar.

A second powerful pattern is contradiction. The history of cybernetic ideas has a perennial tension built in. The machines were always a positive and a negative force at the same time, utopian and dystopian at once, although most of the time optimism dominated. Automated factories would free workers from undignified drudgery, yet deprive them of their dignity. Robotics would take away labor and create more labor. Computers were dumb and could be hacked by teenagers, yet they could outsmart humans. Autonomous machines could seize control, and as personal computers they would provide more control. More networked computers would lead into a “dossier society” of ubiquitous surveillance—and enable anonymity and a freer and better political order. Networked information systems would make nations more vulnerable and more fragile than ever, and networked command-and-control systems would make their armies more dominant and more lethal than ever. Machines would be future society’s hard-charging overlords and its soft underbelly. The myth hides these contradictions and makes them acceptable.

A third pattern is that technology, again and again, has outperformed the myth. Historical myths hail from the past and thus cannot be overwhelmed by reality. But technology myths hail from the future and therefore can come into conflict with reality. Even bold predictions about the future a quarter century out will face a moment of truth a quarter century later. This has occurred many times in the seven decades since World War II. Whenever actual technical developments catch up with the mythical vision of technology, one of two things tends to happen. The first is that work on the myth slowly stops, it doesn’t get repeated anymore, and eventually recedes. This is what happened with Manfred Clynes’s original idea of the cyborg, and with John von Neumann’s vision of the self-reproducing machine. The second possibility is that the myth simply shifts its shape and escapes again into the future. This is what happened with Alice Mary Hilton’s dreams of automation in the early 1960s, with Jaron Lanier’s cyberspace in the early 1990s, with Timothy May’s crypto anarchy in the late 1990s, and, indeed, with John Hamre’s cyberwar around the same time. First a free cyberspace arrived; then it was a vision of the future again. One day the Pentagon was at cyberwar; the next day, cyberwar had not happened yet.

Another pattern is an extraordinary appetite for new terms. New myths need new words, pointed out Roland Barthes, an influential mid-twentieth-century philosopher and literary theorist. This is especially true for technology myths that carry a promise of a new future. Old words cannot convey the novel. And cybernetics, as the story here demonstrates, has spawned a range of new terms that were eagerly taken up by scientists, writers, activists, and officials.

The final pattern is irony. The thinking machines have risen, in breathtaking and revolutionary fashion. In the decades since World War II, during the second half of the twentieth century, a time that is still in living memory, humankind has experienced a faster pace of change in its communication behavior than ever before in human history. By the late 1970s, the three main ingredients were available: the computer, the internet, and public encryption. Over the next decade, input-output devices became ever sleeker and networking platforms more social and more competitive. But the machine’s basic ingredients remained unchanged—as did the human tendency to think about the machine in perfection, not in limitation. If something goes wrong, it must be human error: a human flaw in the machine’s operation, in its design, in its programming, or in its maintenance must have caused the problem, whatever it was. For our instrument can’t make a mistake, even if it is created in our image; only we can make mistakes. And therein lies the ultimate irony. Cybernetics, first and foremost Norbert Wiener, tried to disenchant the machine—but achieved the opposite, the enchantment of the machine. The science of negative feedback itself created a powerful positive feedback loop, driving our visions of the future into perennial overdrive, the opposite of equilibrium: the persistent expectation is that everything will change—fast, completely, and inevitably.