The Dream Machine - by M. Mitchell Waldrop

The very first thing he’d always do was turn to Tracy and his thirteen-year-old sister Lindy and ask, “What have you done today that was altruistic, creative, or educational?” And he meant it. Tracy and Lindy would have to think through all the things they had done that day to find something they could fit into one of those categories.

Always the big dreamer, the kind of guy who was forever asking “What if?,” Tracy’s dad had come to believe that one day all computers would be like his machine up in Cambridge. They would be humane and intimate. They would respond to people and help them as individuals. They would serve as a new medium of expression. They would democratize access to information, foster wider communities, and build a new global commons for communication, commerce, and collaboration. Ultimately, in fact, they would enter into a kind of symbiosis with humans, forming a cohesive whole that would think more powerfully than any human being had ever thought and process data in ways that no machine could ever do by itself.

Tracy’s dad was also gambling on a soft-spoken, rather lonely guy who had approached him on practically his first day at the Pentagon, and whose ideas on “augmenting the human intellect” had proved to be identical to his own notion of human-computer symbiosis. Douglas Engelbart had been a voice in the wilderness until then; his own bosses at SRI International, in what would soon become Silicon Valley, thought he was an absolute flake. But once Tracy’s father had given him his first real funding—and vigorously defended him to his higher-ups—Engelbart, with his group, would go on to invent the mouse, on-screen windows, hypertext, full-screen word processing, and a host of other innovations. Engelbart’s December 1968 presentation at a computer meeting in San Francisco would blow about a thousand minds at once—and later be remembered as one of the turning points in computer history, the moment when the rising generation of computer professionals at last began to understand what interactive computing could do. By no coincidence, this was also the rising generation whose members had had their graduate educations supported by Tracy’s dad and his successors at the Pentagon—and a talented portion of which would soon be gathering at PARC, the Xerox Corporation’s legendary Palo Alto Research Center. There they would put Dad’s “symbiosis” vision into the form we are still using more than three decades later: a stand-alone personal computer equipped with a graphics display screen and a mouse. A graphical user interface with windows, icons, menus, scroll bars, and all the rest. A laser printer to print things out. And the Ethernet local-area network to tie it all together.

The last transition

Bush proposed a system based on the structure of thought itself. “The human mind ... operates by association,” he noted. “With one item in its grasp, it snaps instantly to the next that is suggested by the association of thoughts, in accordance with some intricate web of trails carried by the cells of the brain. ... The speed of action, the intricacy of trails, the detail of mental pictures [are] awe-inspiring beyond all else in nature.” By analogy, he continued, the desk library would allow its user to forge a link between any two items that seemed to have an association (the example he used was an article on the English long bow, which would be linked to a separate article on the Turkish short bow; the actual mechanism of the link would be a symbolic code imprinted on the microfilm next to the two items). “Thereafter,” wrote Bush, “when one of these items is in view, the other can be instantly recalled merely by tapping a button. ... It is exactly as though the physical items had been gathered together from widely separated sources and bound together to form a new book. It is more than this, for any item can be joined into numerous trails.” Such a device needed a name, added Bush, and the analogy to human memory suggested one: “Memex.” This name also appeared for the first time in the 1939 draft. In any case, Bush continued, once a Memex user had created an associative trail, he or she could copy it and exchange it with others. This meant that the construction of trails would quickly become a community endeavor, which would over time produce a vast, ever-expanding, and ever more richly cross-linked web of all human knowledge. Bush never explained where this notion of associative trails had come from (if he even knew; sometimes things just pop into our heads). But there is no doubt that it ranks as the Yankee Inventor’s most profoundly original idea. Today we know it as hypertext. And that vast, hyperlinked web of knowledge is called the World Wide Web.
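Bush never specified a mechanism beyond codes imprinted on microfilm, but the essence of an associative trail is easy to sketch in modern terms. The fragment below is purely illustrative (the Item class, the link function, and the trail structure are inventions for this sketch, not Bush's design): any two items can be tied together, either one can recall the other, and an item can belong to any number of trails.

```python
# A minimal, purely illustrative sketch of Bush's associative trails
# in modern terms. The names and structure are hypothetical, not his design.

class Item:
    def __init__(self, title, text):
        self.title = title
        self.text = text
        self.links = []            # items this one has been tied to

def link(a, b):
    """Forge a two-way association between two items."""
    a.links.append(b)
    b.links.append(a)

longbow = Item("English long bow", "An article on the long bow ...")
shortbow = Item("Turkish short bow", "A separate article on the short bow ...")
link(longbow, shortbow)

# "When one of these items is in view, the other can be instantly recalled":
for other in longbow.links:
    print(other.title)             # -> Turkish short bow

# A trail is just a named sequence of items, and any item can belong to
# many trails, which is what lets trails be copied and shared with others.
trail = {"name": "archery", "items": [longbow, shortbow]}
```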

In effect, Bush’s larger vision of individual empowerment and creative enhancement was taken into the computer community’s collective unconscious, lurking in certain brains like a time bomb that wouldn’t explode for another twenty or thirty years, when the technology to realize that vision was finally ready.

Today we speak of “the computer” as if it were a single thing that had to be invented only once. But as Wiener’s list of features suggests, the modern digital computer is actually a combination of at least half a dozen separate inventions, most of which involved not just another gadget but a shift in the way people thought about computing. At the time of Wiener’s memo, moreover, it was far from clear whether he or anyone else had put the individual pieces together in the right way; those conceptual transitions were still very much works in progress. Take the shift from analog computing to digital computing, for example. This was essentially the shift from measurement to arithmetic–from machines that made a physical model of the problem at hand, as in the Differential Analyzer, to machines that could manipulate numbers as discrete chunks of information. Even in 1940, of course, digital calculation had the advantage of accuracy: the lowliest desktop calculator could add 2 plus 2 and get 4, whereas the fanciest version of the Differential Analyzer would get 4 plus or minus 2 percent. But analog machines still had the huge advantage of speed, especially when they were grappling with the kinds of large, complex problems the analyzer had been built for.

Better yet, for Vannevar Bush and for many others, was that analog machines had a wonderfully evocative quality. They didn’t just calculate an answer; they invited you to go in and make a tangible model of the world with your own hands, and then they acted out the unfolding reality right before your eyes. For anyone watching that process, Bush wrote, “one part at least of formal mathematics will become a live thing.” Compared to that, digital computers seemed static and dead, nothing but electrons zipping invisibly through wires. That may have been why Bush himself later seemed to feel such a sense of loss as digital computing swept the world, starting in the 1950s. Certainly he never wavered in his own commitment to the analog approach. Doggedly, and without success, the Best Apparatus Man in America kept on trying to come up with a workable analog design for his Memex until his death, in 1974. And until the end, his colleagues could hear him grumbling about the “damn digital computer.” All of which just goes to show that Fate does have a sense of irony. The qualities that would give digital computers their ultimate advantage over analog machines—their vastly greater flexibility and programmability—were still only dimly perceived in 1940 (indeed, it would be several years yet before anyone even demonstrated a programmable digital computer). But one of the biggest single steps in that direction had already been taken—thanks to Vannevar Bush’s own Differential Analyzer.

In effect—though he didn’t emphasize it in his thesis—Shannon had shown that a relay circuit could physically embody that ineffable whatever-it-was in a human computer’s head that allowed her to know what mathematical operations to perform when, and whether to perform operation A or B depending on the result of operation C. That ability, in turn, is ultimately what makes a modern digital computer so much more than just an adding machine: it can work its way through a sequence of such decisions automatically. In a word, it can be programmed. And that’s why “A Symbolic Analysis of Relay and Switching Circuits” is arguably the most influential master’s thesis of the twentieth century: in it Claude Shannon laid the theoretical foundation for all of modern computer design, nearly a decade before such computers even existed. In the years since, switching technology has progressed from electromechanical relays to microscopic transistors etched on silicon. But thanks to Shannon, microchip designers still talk and think in terms of a device’s internal logic.
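Shannon's observation can be paraphrased in modern terms: a closed relay contact behaves like true and an open one like false, contacts wired in series compute AND, contacts wired in parallel compute OR, and from those pieces a circuit can decide between operations. The sketch below is a loose illustration of that idea, not Shannon's notation; the function names are invented.

```python
# Loose modern paraphrase of Shannon's relay insight (not his notation):
# a closed contact is True, an open one is False, and wiring contacts
# in series or parallel computes Boolean AND and OR.

def series(a, b):        # contacts in series: current flows only if both close
    return a and b

def parallel(a, b):      # contacts in parallel: current flows if either closes
    return a or b

# The same machinery lets a circuit choose between operations based on an
# earlier result -- the kind of decision a human computer used to make.
def choose(result_of_c, operation_a, operation_b, x):
    return operation_a(x) if result_of_c else operation_b(x)

print(series(True, False))                                    # False
print(parallel(True, False))                                  # True
print(choose(True, lambda x: x + 1, lambda x: x - 1, 10))     # 11
```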

All across the United States (and the world), in fact, hands-on tinkerers were already plunging ahead with their own ideas for computing, and in the process achieving most of the other transitions on Wiener’s list. For example:

  • From decimal math to binary math.

  • From mechanical to electromechanical to fully electronic.

  • From operator-supervised to programmable computers.

So there it was: the history of the computer in the 1930s was a history of conceptual groping on the part of many individuals, each one wrestling with recalcitrant hardware and each one solving a piece of the conceptual puzzle. But only at the end of the decade were a few people such as Norbert Wiener beginning to put all the pieces together—and even they were only just beginning. Ultimately, in fact, it would take the war itself to forge those pieces into a unified whole. The war, that is, plus a small group of young men with insufficient respect for the wisdom of their elders—and another world-famous mathematician on an insatiable quest for calculating power.

No matter how clever its design, no machine could do more than Turing’s. Or to state it in modern terms, all computers are fundamentally the same: given enough time and memory capacity, the lowliest handheld PC can do anything the mightiest supercomputer on the planet can. But Turing also went further, showing that there were certain things that even the most powerful machine could not compute. In particular, he pointed out, no computer could predict its own behavior. Say you feed the machine a tape and come back to find that it’s been chugging along for twenty-four hours. Say it’s been chugging along for a week, or a year, or even ten years: read, write, left, right. There’s still an infinite supply of tape remaining, but is this thing ever going to write down, say, a 0? Is it ever going to finish? There’s no way to tell, Turing showed, not in general. The “halting problem,” as it’s known, is hopeless. Nowadays, of course, very few people wonder whether their computers will eventually output a 0. But Turing’s argument applies to any nontrivial behavior: except in very special cases, the fastest way to find out if your program will take a given action is to run it and see. And that, in turn, means that the old saying about the mindlessness of computers is true but also irrelevant. A computer does only what its programmers tell it to do—but the programmers can’t really know the consequences of their commands until they see their program running. (That’s one big reason software vendors have to spend so much time debugging their products: there is no universal testing program that can guarantee another program’s correctness.) Or to say it still another way, the imaginary machine that Turing modeled on human mathematicians had some of the same unpredictability as the human mind.
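The flavor of Turing's argument can be captured in a short sketch. Suppose, hypothetically, that someone handed us a universal testing function halts(program, input) that always answered correctly; the self-referential program below would force it to contradict itself, which is why no such function can exist. This is a standard classroom illustration, not Turing's original notation, and the function names are invented.

```python
# Sketch of the classic contradiction behind the halting problem.
# Suppose a perfect oracle existed (it cannot; this stub is hypothetical):
def halts(program, argument):
    """Pretend this returns True iff program(argument) eventually stops."""
    ...

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts the program
    # will do when fed its own source.
    if halts(program, program):
        while True:        # predicted to halt? then loop forever
            pass
    else:
        return             # predicted to loop? then halt at once

# We never actually call troublemaker here; the point is the contradiction.
# Does troublemaker(troublemaker) halt? If halts() says yes, it loops forever;
# if halts() says no, it halts at once. Either way the oracle is wrong,
# so no general halts() can ever be written.
```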

Extrapolating a series of data points with a mathematical equation was one thing, but seeing it done by an inanimate machine was quite another, and still a cause for astonishment in 1941. Indeed, it went straight to the heart of perhaps the greatest conundrum in natural philosophy: the mind-body problem. Ordinary physical matter (“body”) is inherently passive, goes the classic argument. It simply responds to outside forces. An apple falls from its tree only if it is pulled by gravity. A baseball abruptly heads for the center-field bleachers only if its trajectory is intercepted by a bat. Things happen only if they are made to happen. In the physical world, every effect requires a cause—and the cause always comes first. However, the argument continues, human beings (and other living things) are not passive. Even though our bodies are obviously made of physical matter, we don’t just respond to outside forces. We have autonomy. We can take action. We have goals, expectations, desires, purpose. Our “causes” are not in the past but in the future. And therein lies the conundrum: how can that possibly be unless we possess some ineffable quality of “mind,” or “spirit,” or “soul” that transcends the physical matter of our bodies? Good question—except that Wiener and Bigelow had just produced a purely physical device that took action based on a prediction. In however rudimentary a way, their fire-control apparatus had “causes” that lay in the future. Furthermore, they realized, it had that capacity for a very deep reason: feedback.

The concept of feedback is one of those brilliantly simple ideas that seem utterly obvious once someone has pointed them out. Consider a thermostat, for example: Whenever the surrounding room gets too cold, a strip of metal bends, closes a contact, and turns the furnace on; when the room warms up again, the strip straightens out, releases the contact, and cuts the furnace off. The system automatically counteracts the changes in temperature, and the room stays pretty close to comfortable. In a world where no device is ever perfectly accurate or reliable, the two men recognized, some such self-correcting mechanism was essential for any kind of effective operation. And indeed, many specific examples of feedback mechanisms were already well known. In the late eighteenth century, for example, the Scottish inventor James Watt had equipped his new steam engine with a simple “governor” for safety: if the engine ever started going too fast, the governor device would automatically cut back the power and force it to slow down.
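The thermostat's feedback loop fits in a few lines of code. The sketch below is purely illustrative; the temperatures, thresholds, and heating and cooling rates are made-up numbers, and only the loop structure matters: the measured temperature controls the furnace, and the furnace in turn changes the temperature.

```python
# Minimal illustration of negative feedback in a thermostat.
# All numbers here are made up; only the feedback structure matters.

setpoint = 20.0          # desired room temperature, degrees C
temperature = 15.0       # starting temperature
furnace_on = False

for minute in range(60):
    # Feedback: the measured temperature decides the furnace's state...
    if temperature < setpoint - 0.5:
        furnace_on = True
    elif temperature > setpoint + 0.5:
        furnace_on = False

    # ...and the furnace's state in turn changes the temperature.
    temperature += 0.3 if furnace_on else -0.1

    state = "on" if furnace_on else "off"
    print(f"{minute:2d} min  {temperature:5.1f} C  furnace {state}")
```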

But did feedback also apply to voluntary action? Definitely, Wiener and Bigelow argued. Consider the process of picking up a pencil. Your mind has a goal: Pick up the pencil. But as you reach out for it, the motion of your arm is never perfectly accurate. So your brain must constantly make corrections using information from your eyes, muscles, and fingertips. A smooth, coordinated action is possible if and only if your brain can complete that feedback loop. And if it can’t complete the loop? Then voluntary action becomes impossible, the two men reasoned.

The same principles of feedback apply in every medium, they said, just as the same principles of aerodynamics apply to the flight of a sparrow and the flight of a B-29. This meant, ultimately, that the human nervous system no longer had to be viewed as something utterly mysterious, a kind of organic black box possessing powers of responsiveness and will that science could never understand. Quite the opposite: viewed from an engineering perspective, the nervous system was a perfectly comprehensible array of feedback loops in active communication with its environment. Conversely, they said, if it was valid to think of the nervous system in engineering terms, then it was just as valid to think of machines in biological terms. Look at the fire-control system, correcting its aim after every shot with feedback from radar. The gun and its fire-control system operated in a completely automatic fashion, with no humans in the loop anywhere. And yet the gun seemed guided by a grimly determined intelligence. Through feedback, said Wiener, Bigelow, and Rosenblueth, a mechanism could embody purpose.

Even today, more than half a century later, that assertion still has the power to fascinate and disturb. It arguably marks the beginning of what are now known as artificial intelligence and cognitive science: the study of mind and brain as information processors. But more than that, it does indeed claim to bridge that ancient gulf between body and mind—between ordinary, passive matter and active, purposeful spirit. Consider that humble thermostat again. It definitely embodies a purpose: to keep the room at a constant temperature. And yet there is nothing you can point to and say, “Here it is—this is the psychological state called purpose.” Rather, purpose in the thermostat is a property of the system as a whole and how its components are organized. It is a mental state that is invisible and ineffable, yet a natural phenomenon that is perfectly comprehensible. And so it is in the mind, Wiener and his colleagues contended. Obviously, the myriad feedback mechanisms that govern the brain are far more complex than any thermostat. But at base, their operation is the same. If we can understand how ordinary matter in the form of a machine can embody purpose, then we can also begin to understand how those three pounds of ordinary matter inside our skulls can embody purpose—and spirit, and will, and volition. Conversely, if we can see living organisms as (enormously complex) feedback systems actively interacting with their environments, then we can begin to comprehend how the ineffable qualities of mind are not separate from the body but rather inextricably bound up in it.

After a decade in which the computer pioneers had struggled to master the vagaries of hardware, barely able to see the forest for the trees, von Neumann had laid out the fundamental principles of computer design with breathtaking clarity. Up until that point, most of the pioneering computers had embodied at least part of the problem-solving process in their actual physical structure. The classic example was Bush’s Differential Analyzer, in which the problem was represented by the arrangement of gears and shafts. The brilliant idea behind the stored-program concept was to make a clean split, to separate the problem-solving sequence from the hardware entirely. The act of computation thus became an abstract process that we now know as software, a series of commands encoded in a string of binary 1s and 0s and stored in the computer’s memory.
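One way to make that split concrete is a toy fetch-and-execute loop. The sketch below is a hypothetical illustration, not von Neumann's EDVAC design: the "hardware" is a small fixed loop, while the "program" is nothing but numbers sitting in memory, and changing those numbers changes what the machine does without touching the hardware at all.

```python
# Toy stored-program machine: a hypothetical illustration, not the EDVAC design.
# The program is just numbers in memory, read by a fixed fetch-execute loop.

# Opcodes (arbitrary): 1 = load value into accumulator, 2 = add value,
# 3 = print accumulator, 0 = halt.
memory = [1, 5,      # load 5
          2, 7,      # add 7
          3,         # print (prints 12)
          0]         # halt

def run(memory):
    acc = 0                      # the accumulator register
    pc = 0                       # the program counter
    while True:
        op = memory[pc]          # fetch the next instruction
        if op == 1:
            acc = memory[pc + 1]; pc += 2
        elif op == 2:
            acc += memory[pc + 1]; pc += 2
        elif op == 3:
            print(acc); pc += 1
        elif op == 0:
            break
        else:
            raise ValueError(f"unknown opcode {op}")

run(memory)   # changing the numbers in memory changes what the machine does
```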

The stored-program concept now ranks as one of the great ideas of the computer age—arguably the great idea. By rendering software completely abstract and decoupling it from the physical hardware, the stored-program concept has had the paradoxical effect of making software into something that is almost physically tangible. Software has become a medium that can be molded, sculpted, and engineered on its own terms. Indeed, as the Yale University computer scientist David Gelernter has pointed out, the modern relationship between software and hardware is essentially the same as that between music and the instrument or voice that brings it to life. A single computer can transform itself into the cockpit of a fighter jet, a budget projection, a chapter of a novel, or whatever else you want, just as a single piano can be used to play Bach or funky blues. Conversely, a spreadsheet file can (with a little effort) be run on a Microsoft Windows machine, a Macintosh, or a Unix workstation, just as a Bach fugue can be performed on a pipe organ or by an ensemble of tubas. The bits and bytes that encode the spreadsheet obviously can’t function without the computer, any more than a page full of notes can become music without a performer. And yet the spreadsheet also transcends the computer, in exactly the same way that the Bach fugue transcends any given performance of it. Everything important about that spreadsheet—its on-screen appearance, its structure, its logic, its functionality, its ability to respond to the user—exists, like the harmonies and cadences of Bach, in an abstract, platonic world of its own.

Metaphorically, at least, this abstraction is probably about as close as science and technology have ever come to the pagan notion of animation, spirit, enchantment. It is certainly a big part of what gives computers their emotional clout. Anyone who has ever switched on a personal computer has felt it: watching the programs fill the screen is disconcertingly like watching a dead thing come alive. Without software, the glowing glass box is just a glass box. With software, it becomes what the MIT sociologist Sherry Turkle has dubbed the first psychological machine—active, surprising, goal-driven, and capable of responding to us in ways that no ordinary machine could ever do.

More recently, of course, software has also become the basis of a whole new abstraction, that vast network of interlinked computers known variously as the Internet, the Web, or cyberspace. After all, it is only when software becomes independent of hardware that we can even think about sending files and programs over a telephone line or a high-speed data network. Indeed, cyberspace now seems set to raise the software abstraction to yet another level, where programs won’t even be tied to one place anymore. Instead, software “agents” searching for data will be able to leave their home computers and fan out through the network at will, merging, spawning, communicating, collaborating, and leaping from machine to machine like a society of tame computer viruses.

In any case, the stored-program concept articulated in von Neumann’s 101-page “First Draft” marked the beginning of the last great transition in the development of computers, a transition that is still going on today. It was the shift in focus from structure to behavior, from how computers were made to what they could do. Before von Neumann’s paper, people viewed electronic digital computers merely as fancier and fancier adding machines. The things were immensely useful, to be sure, but they were still just crunchers of numbers. After von Neumann’s paper—though it would take the better part of a generation before this really became clear—people could begin to conceive of these machines as something fundamentally new. Computing machines had become computers, the devices that implemented software.

New kinds of people

Indeed they were new kinds of people, though they barely knew it themselves. In effect, Lick, Miller, and their fellow Young Turks were groping toward a third way of thinking about the human condition, one that was neither blind mysticism nor cold reductionism. In the intellectual turmoil of Cambridge in 1948, they were beginning to glimpse a way to be completely rigorous and scientific and yet still believe in the existence of mind and the reality of our inner lives. Skinner and the other behaviorists had tried to make psychology “scientific” by reducing us all to the status of machines—blind, rigid, nineteenth-century-style machines. We were nothing more than walking, talking bundles of stimulus and response, organic robots to be controlled and manipulated at will. But Lick and Miller had begun to sense the subversive possibilities of twentieth-century machines. Feedback systems were the stuff of everyday engineering, and yet they could embody “mental states” such as purpose and expectation. Electronic computing systems were little more than arrays of very fast switches, and yet they, too, could embody “mental states,” in the form of data stored in (where else?) a memory. Computer programs were nothing more than lists of precisely defined rules that the machine followed to the letter, and yet as Turing had shown, they could generate results that were inherently surprising and unpredictable.

Shannon was determined to find a theory that could be applied not just to telephones but to any form of communication, from telegraphs and televisions to casual conversation or even nerve impulses. To achieve that ambitious goal, he first imagined the communication process as being divided into five parts:

  1. An information source: the person or thing generating the original message.

  2. A transmitter: the instrument that transforms the message into a signal suitable for transmission (the voice that produces a sound wave, the telephone that produces an electrical signal, etc.).

  3. A communication channel: the medium that conducts the signal (air, a telephone wire, a coaxial cable, a beam of light, etc.).

  4. A receiver: the instrument that takes the signal and tries to reconstruct the message (the ear, the telephone on the receiving end, etc.).

  5. A destination: the person or thing the message is intended for.

This deceptively simple five-part framework had the great virtue of clarity: just as John von Neumann’s abstract functional design for EDVAC would later do for computer engineering, Shannon’s outline gave him a way to think about the architecture of communication and what a given system was supposed to accomplish, preventing him from getting bogged down in the vagaries of vacuum tubes and cable connections. It provided the generality he needed to devise a truly fundamental theory of communications, in much the same way that Sir Isaac Newton had derived a fundamental theory of physics from just three general laws of motion and one universal law of gravitation. And yet it simultaneously gave him a framework that he could tailor to any given problem, much as Newton’s laws could be applied to the fall of an apple or the motions of the Moon.
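Shannon's five parts map naturally onto a short pipeline of functions. The sketch below is only a cartoon of that framework; the bit-level encoding and the random noise model are invented for illustration, not drawn from Shannon's paper.

```python
import random

# Cartoon of Shannon's five-part framework. The encoding and the noise
# model below are invented purely for illustration.

def transmitter(message):                 # 2. message -> signal (here, bits)
    return [int(bit) for ch in message for bit in format(ord(ch), "08b")]

def channel(signal, error_rate=0.01):     # 3. the medium, which may add noise
    return [bit ^ 1 if random.random() < error_rate else bit for bit in signal]

def receiver(signal):                     # 4. signal -> reconstructed message
    chars = []
    for i in range(0, len(signal), 8):
        chars.append(chr(int("".join(map(str, signal[i:i + 8])), 2)))
    return "".join(chars)

message = "hello, world"                  # 1. the information source's message
received = receiver(channel(transmitter(message)))
print(received)                           # 5. what reaches the destination
# With noise in the channel, the destination may see a slightly garbled copy.
```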

In effect, though Wiener didn’t quite express it this way, cybernetics was offering an alternative to the Skinnerian worldview, in which human beings were just stimulus-response machines to be manipulated and conditioned for their own good. It was likewise offering an alternative to von Neumann’s worldview, wherein human beings were unrealistically rational technocrats capable of anticipating, controlling, and managing their society with perfect confidence. Instead, cybernetics held out a vision of humans as neither gods nor clay but rather “machines” of the new kind, embodying purpose—and thus, autonomy. No, we were not the absolute masters of our universe; we lived in a world that was complex, confusing, and largely uncontrollable. But neither were we helpless. We were embedded in our world, in constant communication with our environment and one another. We had the power to act, to observe, to learn from our mistakes, and to grow. “From the point of view of cybernetics, the world is an organism,” Wiener declared in his autobiography. “In such a world, knowledge is in its essence the process of knowing. ... Knowledge is an aspect of life which must be interpreted while we are living, if it is to be interpreted at all. Life is the continual interplay between the individual and his environment rather than a way of existing under the form of eternity.”

While many members of Wiener’s audiences undoubtedly came for the entertainment value, others were eager to hear what this odd little man had to say. After all, these were people who had just seen World War II bring forth atomic bombs, “electronic brains,” radar, guided missiles, antibiotics, and jet engines—a wave of technologies so astonishing that it was hard to tell sober news stories from pulp science fiction. And now here was Norbert Wiener with his vision of a new age in history, helping them make sense of it all. He didn’t call it the Information Age; that term would be invented later, by others. But he made it clear that this magical stuff called information lay at its heart. Information was a substance as old as the first living cell and as new as the latest technology. It was the stuff that flowed through communication channels; indeed, it was the stuff that messages were made of. But it was also the stuff that concepts and images and stored programs were made of. It was the stuff that entered the eyes and the ears, that flowed through the brain, that provided the feedback for purposeful action. Information was what computers and brains were about. It was the one central concept that unified communication, computation, and control and made them all seem like different facets of one underlying reality. Information was at once the stuff of a new world and a whole new way of understanding that world.

Certainly that was how it seemed down in the basement of Memorial Hall. George Miller, J. C. R. Licklider, and the other Young Turks had grasped the implications of the Information Bomb immediately: information was the concept that promised to make the mind and the brain comprehensible. “There is no question in my mind,” said William McGill, “that what we’re looking at here is the origin of cognitive science.” Cognitive science is our modern view of the brain as an organ designed to acquire, communicate, and transmute information, he explained; the discipline would rout behaviorism almost completely during the “cognitive revolution” of the 1960s and 1970s and eventually cut broad swaths through psychology, neuroscience, artificial intelligence, linguistics, and even anthropology. “But you can pin down [its birth] at MIT and Harvard in the late nineteen-forties, when people were sensing that these new mathematical methods would transform everything.”

Lick, for his part, was embracing his systems-engineering approach to the brain more enthusiastically than ever. To his mind, cybernetics in general and information theory in particular were lighting the way toward a “hard-science” psychology, one that would be far richer, more satisfying, and more productive than anything that had ever come out of behaviorism. Indeed, said McGill, if Lick’s career had turned out differently, he might well have gone on to become a pioneer in cognitive neuroscience, the branch of cognitive science that explores precisely how the brain processes information and gives rise to conscious experience.

Meanwhile, when it came to the larger meaning of cybernetics for history and culture—well, there’s no way to know if Lick was directly influenced by Wiener’s views on the subject, since he never discussed the matter one way or another. But he definitely resonated with those views. When Lick later spoke about the power of computers to transform human society on an epic scale, about their potential to create a future that would be “intellectually the most creative and exciting [period] in the history of mankind,” he sounded very much like Norbert Wiener in prophet mode. Lick likewise echoed Wiener as he worried about technology’s potential for harm: “If all the Industrial Revolution accomplished was to turn people into drones in a factory,” he was sometimes heard to say, “then what was the point?” Indeed, Lick’s entire later career in computers can be seen as a thirty-year exercise in the human use of human beings, an effort to eliminate mind-numbing drudgery so that we could be free to use our full creative powers. Even in the 1940s, moreover, much of Lick’s work as a psychologist still revolved around the notion of the human-machine interface and the question of how to use technology not to replace people but to help them.

The freedom to make mistakes

Lick was already honing the leadership style that he would use to such effect a decade later with the nationwide computer community. Call it rigorous laissez-faire. On the one hand, like his mentor Smitty Stevens, Lick expected his students to work very, very hard; he had nothing but contempt for laziness and no time to waste on sloppy work or sloppy thinking. Moreover, he insisted that his students master the tools of their craft, whether they be experimental technique or mathematical analysis. On the other hand, Lick almost never told his students what to do in the lab, figuring that it was far better to let them make their own mistakes and find their own way. And imagination, of course, was always welcome; the point here was to have fun. The Licklider style wasn’t for everyone, and not everyone stayed. But for self-starters who had a clear sense of where they were going, it was heaven. Good people liked to be with Lick; he seemed to be surrounded by an atmosphere of ideas and excitement.

For whatever reason—the perceived urgency of the task, perhaps, or the good sense of General Shiely and his oversight team—the researchers had remarkable freedom to make decisions without being second-guessed from the top. They simply paid for the Memory Test Computer out of Division 6’s “advanced research” budget, which they could dip into for whatever they considered needful—with no committee meetings, no studying the question to death, and nobody’s pointing out a thousand ways they ought to do it differently. “As long as [our decisions] were plausible and could be explained,” agreed Forrester, “we could carry other people with us.”10 Just as important, Forrester adds, they also had the freedom to make mistakes and learn from them: project managers wasted very little time on finger-pointing. “[Mistakes were] admitted and fixed rather than evaded or denied,” he says.

The tale of the fig tree and the wasp

So, Lick wondered, what would happen if you put humans and computers together as a system? What would happen if you let the machines take over all those dreary, algorithmic chores they were so good at? How much of your time would be opened up for real creativity? Ever the experimenter, Lick decided to answer that question with a measurement. “In the spring and summer of 1957,” he wrote a few years later, “I tried to keep track of what one moderately technical person actually did during the hours he regarded as devoted to work. Although I was aware of the inadequacy of the sampling, I served as my own subject. ... I obtained a picture of my activities that gave me pause. Perhaps my spectrum is not typical—I hope it is not, but I fear it is. About 85 per cent of my ‘thinking’ time was spent getting into a position to think, to make a decision, to learn something I needed to know. ... [These getting-into-position activities] were essentially clerical or mechanical: searching, calculating, plotting, transforming, determining the logical or dynamic consequences of a set of assumptions or hypotheses, preparing the way for a decision or an insight. Moreover, my choices of what to attempt and what not to attempt were determined to an embarrassingly great extent by considerations of clerical feasibility, not intellectual capability.” Eighty-five percent!? That figure did more than give Lick pause. It seems to have hit him with the force of a religious epiphany: our minds were slaves to mundane detail, and computers would be our salvation. We and they were destined to unite in an almost mystical partnership: thinking together, sharing, dividing the load. Each half would be preeminent in its own sphere—rote algorithms for computers, creative heuristics for humans. But together we would become a greater whole, a symbiosis, a man-machine partnership unique in the history of the world: “The hope is that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today.”

It’s fair to say that one of Lisp’s two greatest legacies to the art of programming was a certain style, a certain exploratory approach to pushing back the software frontiers. And the other legacy? An undeniable grace, beauty, and power. As a Lisp programmer continued to link simpler functions into more complex ones, he or she would eventually reach a point where the whole program was a function—which, of course, would also be just another list. So to execute that program, the programmer would simply give a command for the list to evaluate itself in the context of all the definitions that had gone before. And in a truly spectacular exercise in self-reference, it would do precisely that. In effect, such a list provided the purest possible embodiment of John von Neumann’s original conception of a stored program: it was both data and executable code, at one and the same time. In mathematics, the technical name for this sort of thing is recursive function theory, which was why McCarthy called his first public description of Lisp “Recursive Functions of Symbolic Expressions and Their Computation by Machine.” Today ranked as one of the most influential documents in the history of computer languages, that paper established that a language could have a rigorous mathematical foundation. And it signified that John McCarthy had finally come up with a framework that was precise enough, rigorous enough, and compelling enough to satisfy even him. Certainly it changed how McCarthy’s AI students perceived their creation, says Abrahams: “Now, all of a sudden, Lisp was not merely a language you used to do things. It was now something you looked at, an object of beauty. It was something to be studied as an object in and of itself.”
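The flavor of "a program that is just another list" can be suggested with a toy evaluator, written here in Python rather than Lisp and covering only a sliver of a language (numbers, a few arithmetic operators, and an if form). The names are invented for illustration; the point is simply that the program is an ordinary nested list, and running it means handing that list to a recursive function.

```python
# Toy evaluator suggesting Lisp's "code is data" idea -- a tiny sliver of
# such a language, written in Python for illustration only.

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, (int, float)):        # a number evaluates to itself
        return expr
    head, *args = expr                        # otherwise expr is just a list
    if head == "if":                          # special form: (if test then else)
        test, then_branch, else_branch = args
        return evaluate(then_branch) if evaluate(test) else evaluate(else_branch)
    return OPS[head](*[evaluate(a) for a in args])   # apply operator to evaluated args

# The "program" is nothing but nested lists -- data that can also be run:
program = ["if", ["-", 3, 3], ["*", 2, 5], ["+", 1, ["*", 2, 3]]]
print(evaluate(program))    # (- 3 3) is 0, i.e. false, so this prints 7
```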

Lick, who hated to write but hated even more to disappoint anyone in his professional family, agreed to help out. He handed Elkind the completed manuscript of “Man-Computer Symbiosis” on January 13, 1960. “It was ... beyond expectations,” says Elkind, who still marvels at what his mentor produced. Indeed, he says, when you look back at that paper from the perspective of today, knowing what happened later, you can see that it essentially laid out the vision and the agenda that would animate U.S. computer research for most of the next quarter century, and arguably down to the present day.

Lick began the article with a metaphor: “The fig tree is pollinated only by the insect Blastophaga grossorum [the fig wasp]. The larva of the insect lives in the ovary of the fig tree, and there it gets its food. The tree and the insect are thus heavily interdependent: the tree cannot reproduce without the insect; the insect cannot eat without the tree; together, they constitute not only a viable but a productive and thriving partnership. This cooperative ‘living together in intimate association, or even close union, of two dissimilar organisms’ is called symbiosis. ... The purposes of this paper are to present the concept [of] and, hopefully, to foster the development of man-computer symbioses.”

Such a symbiosis would involve much more than merely programming the computer, Lick emphasized. That kind of relationship reduced the machine to a tool, a gadget for executing an algorithm that the human had thought through in advance. What Lick had in mind was a relationship more like that with “a colleague whose competence supplements your own”—that is, a friend who could help you out when the problems got too hard to think through in advance. “[Such problems] would be easier to solve,” he wrote, “and they could be solved faster, through an intuitively guided trial-and-error procedure in which the computer cooperated, turning up flaws in the reasoning or revealing unexpected turns in the solution.”

A second, and closely related, meaning of symbiosis was real-time command and control, Lick noted: “Imagine trying, for example, to direct a battle with the aid of a computer on [a batch-processing schedule]. You formulate your problem today. Tomorrow you spend with a programmer. Next week the computer devotes five minutes to assembling your program and forty-seven seconds to calculating the answer to your problem. You get a sheet of paper twenty feet long, full of numbers that, instead of providing a final solution, only suggest a tactic that should be explored by simulation. Obviously, the battle would be over before the second step in its planning was begun.”

In either case, he said, symbiosis meant humans and computers working together in a partnership, with each side doing what it did best: “[Humans] will set the goals and supply the motivations. ... They will formulate hypotheses. They will ask questions. They will think of mechanisms, procedures, and models. ... They will define criteria and serve as evaluators, judging the contributions of the equipment and guiding the general line of thought. ... The information-processing equipment, for its part, will convert hypotheses into testable models and then test the models against data. ... The equipment will answer questions. It will simulate the mechanisms and models, carry out the procedures, and display the results to the operator. It will transform data, plot graphs. ... [It] will interpolate, extrapolate, and transform. It will convert static equations or logical statements into dynamic models so that the human operator can examine their behavior. In general, it will carry out the routinizable, clerical operations that fill the intervals between decisions.”

The interactivity would open up a high-bandwidth channel to our perceptions, to our instincts—to our deepest understanding. For the psychologist-turned-computer-evangelist, this was the great promise of dynamic modeling. After all, as Lick and his coauthor Robert W. Taylor would write in a 1968 article,31 “By far the most numerous, most sophisticated, and most important models are those that reside in men’s minds. In richness, plasticity, facility, and economy, the mental model has no peer.” Included among those mental models are images recalled from memory, expectations about the probable course of events, fantasies of what might be, perceptions of other people’s motives, unspoken assumptions about human nature, hopes, dreams, fears, paradigms—essentially all conscious thought. Of course, Lick and Taylor would continue, “[the mental model] has shortcomings. It will not stand still for careful study. It cannot be made to repeat a run. No one knows just how it works. It serves its owner’s hopes more faithfully than it serves reason. It has access only to the information stored in one man’s head. It can be observed and manipulated only by one person.” But if you could join mental models to computer models, Lick reasoned, and if you could get the two of them into just the right kind of symbiotic relationship, then you could overcome every one of those limitations.

First and most obviously, he said, the computer would greatly enhance our ability to handle complexity. Partly this is because computers are so good at processing vast quantities of data, but much more important, according to Lick, is their potential to give us a fundamentally new way of representing knowledge. In addition to our classic formats—text, tables, diagrams, equations, and the like—we now have the power to represent knowledge as a process, an executable program. Imagine the equations that describe, say, the development of a hurricane. And now imagine a computer simulation that shows us that hurricane wandering across the Caribbean to devastate southern Florida: the equations have been brought to life by the computer in precisely the same way that the score of Beethoven’s Fifth Symphony is brought to life by an orchestra. Just as in music, drama, dance, or any other performing art, Lick declared, “information is a dynamic, living thing, not properly to be confined (though we have long been forced to confine it thus) within the passive pages of a printed document. As soon as information is freed from documental bounds and allowed to take on the form of process, the complexity (as distinguished from the mere amount) of knowledge makes itself evident.”
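A trivial example conveys the distinction Lick was drawing. The same piece of knowledge can sit inert as an equation on a page or run as a process; the sketch below steps a toy growth equation forward in time. The model and the numbers are invented for illustration and are, of course, nothing like a hurricane simulation.

```python
# "Knowledge as process": the same equation, run as a simulation.
# The model below (simple logistic growth) is a toy chosen for illustration.

r, K = 0.3, 100.0       # growth rate and carrying capacity (made-up numbers)
x = 5.0                 # initial population
dt = 1.0                # time step

for step in range(20):
    x += dt * r * x * (1 - x / K)    # dx/dt = r x (1 - x/K), stepped forward
    print(f"t={step + 1:2d}  x={x:6.2f}")
```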

By the beginning of the 1960s, then, a decade and a half before the microcomputer revolution began in the garages of Silicon Valley, and a full thirty years before the dawn of the Internet Age, the air around Cambridge was already alive with the essential ideas: graphics-rich personal workstations and the notion of human-computer symbiosis; time-sharing and the notion of computer-aided collaborative work; networks and the notion of an on-line community; on-line libraries and the notion of instant, universal access to knowledge; and computer languages and the notion of a new, digital medium of expression.

The phenomena surrounding computers

Fano and his colleagues were starting to learn their first great lesson about utilities: what begins as a convenience quickly becomes a necessity. People begin to structure their work, their daily routines, even their entire lives around the utility. And as a consequence they become very, very proprietary about it.

What he found truly fascinating, though, was the second great lesson of time-sharing: in an information utility, the power flows both ways. Unlike a power utility, which basically just provides a resource that people consume, an information utility lets the users give back to the system. It even lets them create whole new resources in the system. “More than anything else,” says Fano, “the system became the repository of the knowledge of the community. And that was a totally new thing.”

Now, however, time-sharing had made exchanging software trivial: you just stored one copy in the public repository and thereby effectively gave it to the world. “Immediately,” says Fano, “people began to document their programs and to think of them as being usable by others. They started to build on each other’s work.” Indeed, the very existence of that public data repository on the 7094 quickly transformed Project MAC’s central “power plant” into the intellectual center of the community. Through E-mail, the exchange of files, and the sharing of programs, it functioned as the town square, the village market, the Roman forum, and the Athenian agora all in one—the place where citizens gathered to talk, to gossip, to conduct business, to propose ideas, and then to argue until they came up with better ideas. Within six months of the system’s November 1963 startup, CTSS and the on-line environment it supported had become, at least in embryo, everything that would later be claimed for the on-line world of the Internet.

In fact, it was even more than that. With more than half of its system commands now written by the users themselves, CTSS had proved to be just what Lick had been hoping for in his Intergalactic Network memo, and what Fano had predicted in the Project MAC proposal: a self-guided system. The very fact that CTSS was being (partially) created by its users meant that it was adapting to those users and evolving to meet needs that its founders never could have envisioned. Far more than Corbató himself had originally realized, his open-systems design for CTSS had deep parallels to the notions of free speech in a political democracy, free competition in a market economy, and the free exchange of ideas in scientific research. In each there is a core set of rules that everyone must accept to make the interchange possible at all. But beyond that, creativity reigns.

You were never quite sure how it would respond or what was going to crash next. And the only hope for managing in the face of such chaos, Corbató recognized, was to accept that spontaneity. He and his colleagues would have to give up every engineer’s first instinct, which was to control things so that problems could not happen, and instead design a system that was guaranteed to fail—but that would keep running anyhow. Nowadays this is known as a fault-tolerant system, and designing one is still considered a cutting-edge challenge. It means giving the system some of the same quality possessed by a superbly trained military unit, or a talented football team, or, for that matter, any living organism—namely, an ability to react to the unexpected. But in the early 1960s, with CTSS, Corbató and his colleagues had to pioneer fault-tolerant design even as they were pioneering time-sharing itself. For example, among their early innovations were “firewalls,” or software barriers that kept each user’s area of computer memory isolated from its neighbors, so that a flameout in one program wouldn’t necessarily consume the others.
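CTSS's real protection lived in hardware and in the supervisor program, but the fault-tolerant stance itself can be suggested with a modern, purely hypothetical sketch: assume that individual user jobs will sometimes blow up, and contain the damage so that the rest keep running.

```python
# A modern, purely hypothetical sketch of the fault-tolerant stance,
# not the CTSS mechanism: assume user jobs will sometimes fail, and
# contain the damage so one flameout doesn't take the whole system down.

def user_job_ok():
    return "finished normally"

def user_job_crashes():
    raise RuntimeError("flameout in user program")

def supervisor(jobs):
    for name, job in jobs:
        try:
            result = job()                   # run the user's program
            print(f"{name}: {result}")
        except Exception as err:             # the analogue of a firewall: the failure is contained
            print(f"{name}: failed ({err}); other users keep running")

supervisor([("alice", user_job_ok),
            ("bob", user_job_crashes),
            ("carol", user_job_ok)])
```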

“When you look at Lick’s legacy, two very distinct things stand out,” says Bob Fano. “One is that he was a very imaginative, creative psychoacoustics man. That’s the first part of his history, and you shouldn’t overlook that.” Because Lick had come to computing from psychology, Fano explains—instead of through mathematics and engineering, like almost everyone else—he instinctively saw computers in relation to the workings of the human brain, rather than as an exercise in pure technology. And that, in turn, was why he was so quick to embrace computers as a way of enhancing human creativity and enriching human life. “It was a vision of man-machine interaction that was often unhampered by practical realities,” says Fano, who had many an argument with Lick on that very point. “But he really had an understanding of the role the computer could play.” Second, says Fano, when Lick was presented with a miraculous, never-to-be-repeated opportunity to turn his vision into reality, he had the guts to go for it, and the skills to make it work. Lick had the power to spin his dreams so persuasively that Jack Ruina and company were willing to go along with him—and to trust him with the Pentagon’s money. Once he had that money in hand, moreover, Lick had the taste to recognize and cultivate good ideas wherever he found them. Indeed, the ideas he fostered in 1962 would ultimately lay the foundations for computing as we know it today. The time-sharing technology he pushed so relentlessly would turn out to be the evolutionary ancestor of both personal computing and local-area networking, as well as a test bed for all the issues of on-line social behavior that would reemerge a generation later. The computer-graphics experiments he funded so lovingly would likewise turn out to be important steps along the road to our current generation of high-resolution computer displays with their windows, icons, menus, and so on. And of course, Lick’s vision of an Intergalactic Network would be the direct inspiration for the Arpanet of the late 1960s, which would in turn evolve into the Internet of the 1990s. Perhaps most important of all, however, Lick had the patience to take the long view. He couldn’t get it all done in one year, or two years, or a lifetime. But by creating a community of fellow believers, he guaranteed that his vision would live on after him.

The intergalactic network

Lick himself didn’t really seem to have any objection to this approach; to his mind, top-down programming was simply a commonsense method of delegating responsibility. He just didn’t think it should be the only approach. Before you can delegate responsibilities, he argued, you have to figure out precisely what those responsibilities are and how you should carve them up. And when you’re really out there on the edge, as he knew after a professional lifetime of research, that process is anything but clear-cut. “On the frontier,” he had written in 1965, “man must often chart his course by stars he has never seen. Rarely does one recognize or discover a complex problem, formulate it, and lay out a procedure that will solve it—all in one great flash of insight.”

When the systems are truly complex, in short, programming has to be a process of exploration and discovery. That had been the whole point of interactive languages such as Lisp, as well as interactive-design tools such as Sketchpad: they made it easy to explore new solutions by making it easy to formulate and then reformulate ideas on the fly. And that was the whole point of Lick’s Dynamic Modeling project: he wanted to push exploratory programming as far as he could in every direction. As Bob Fano explained it in a 1998 biographical memoir8 of his friend, “Lick’s goal was a self-contained, coherent, interactive computer system that could be used by researchers with moderate computer skills to investigate the behavior of a variety of models with little programming required on their part. [So] the software scheme conceived by Lick was very different from any existing problem-oriented language. Briefly, it was akin to a software Tinkertoy, based on a vast library of software modules that readily could be assembled into specific models and programs for investigating them. New modules could be constructed and added to the library by users to meet their special requirements, so that the library would eventually become the repository of the work of the community.”

Looking back on it, there were any number of ways that the Arpanet project could have failed. It could have been snuffed out by the Vietnam-era budget crunch before it even got started, as Pentagon officials scrounged for money high and low. It could have been crushed by the mainstream telecommunications community, which saw packet switching as utterly wrongheaded at best and a competitor at worst. Or it could have been ignored and left to wither away by researchers who couldn’t really see any use for it. And yet the Arpanet succeeded beyond all expectations, thereby pioneering the technology (and becoming the first piece) of that all-pervasive entity known today as the Internet. “It was one of the great experiments in science,” declares Len Kleinrock. “It completely changed the way things are going now—commerce, government, industry, science, everything.”

A good part of the credit for that triumph goes to the leadership of ARPA, notably Charles Herzfeld and Steve Lukasik at the agency director level and John Foster at the DDR&E level. These men not only understood the vision that animated their computer office but protected and encouraged what that office was trying to do. And perhaps most important, they continued to foster ARPA’s extraordinarily un-federal-government-like management style—one that might be summarized as allowing “the freedom to make mistakes.” An even larger share of the credit goes to the successive directors of IPTO itself: J. C. R. Licklider, Ivan Sutherland, Bob Taylor, and Larry Roberts. Although the ARPA management style granted them enormous, almost unfettered authority to dictate the course of research, they all almost invariably exercised that authority with taste, tact, and restraint. To the researchers, they were not dictators so much as protectors, intermediaries who would keep inquisitive congressmen, senators, and generals out of their hair—not to mention paperwork off their desks. “They wanted progress,” notes Kleinrock, “not progress reports.”

If there was an exception that proved the rule, it was Larry Roberts. On the one hand, he definitely did the Arpanet his way, whether the intended beneficiaries wanted the thing or not. On the other hand, he was also quick to seize an unlooked-for opportunity—the Network Working Group—to get users involved with the network (the younger ones, anyway). Through his support for the working group, in short, Roberts was coaxing the people who used the Arpanet to take charge of their own destiny, to learn how to govern themselves. And almost without realizing it, the young researchers who heeded the call ended up creating a parliament that was about as democratic as anyone could imagine: if you wanted to be a member of the Network Working Group, you were. Almost by accident, moreover, they created a transnational forum for comment and debate, in the form of the RFC series, which was soon being promulgated around the world over the Arpanet itself. Their parliament was in session everywhere, all the time. And yet by technological necessity, they almost inevitably found themselves arriving (eventually) at a consensus; they knew full well that a protocol had to be universal to be a protocol at all. They had to agree, or the bits wouldn’t flow. “There was a mixture of competitive ideas,” says Steve Crocker as he thinks back on those meetings. “But people were also talking to each other. There weren’t armed camps saying, ‘You’ve got to do it my way or not at all.’ And all this was happening without anybody having to summon people to a meeting or let formal contracts.” Indeed, this brand of hyperdemocratic, bottom-up decision making proved to be so effective that it would later become the model for the governance of the Internet. Contemporary standard-setting bodies such as the Internet Engineering Task Force are essentially the Network Working Group writ large.

Living in the future

Taylor’s hands-off approach was just the most visible element of an exquisitely delicate balancing act, modeled on the one practiced by Lick and all his successors at ARPA. Yes, went the argument, people needed the freedom to create. But their creations had to add up to something—and not just another bunch of unconnected new gadgets, either. At ARPA that “something” had been human-computer symbiosis, broadly defined. Now, at PARC, it was the “electronic office,” whatever that might turn out to be. Yet in either case the goal was a system of information technology, a whole new way for human beings to work together. All of the various gadgets had to be part of that system. And to achieve that goal, Taylor knew, he somehow had to get all these maverick geniuses moving in the same direction, without forcing everyone to move in lockstep. Somehow he had to give them a sense of purpose and group cohesion, without crushing spontaneity and individual initiative. Somehow, in short, he had to set things up so they would freely follow their own instincts—and end up organizing themselves.

One thing he definitely liked and put into practice was a style of research that could be paraphrased as, “Don’t just invent the future; go live in it.” This had been the philosophy, too, behind Projects MAC and Genie, in which the time-sharing system was supposed to be simultaneously the main research tool, the primary object of experimentation, and the tangible product. For that matter, it had been Taylor’s own philosophy in pushing for the full-fledged Arpanet instead of just a few demonstration projects. By all means, Taylor told his recruits, let’s get way out in front of the curve—five years for sure, ten years if we can. And forget about the cost: Xerox is signing the checks for now, and Moore’s law will solve the problem soon enough. But whatever you build, use it. In fact, get everybody in PARC to use it. Get them pounding on the technology every day, writing reports, writing programs, sending E-mail—anything and everything, so they can see for themselves what the problems and the possibilities are. And then use what they learn to build better technology.

It would be several years yet before anyone actually used the word Internet, says Kahn; at the time his phrase was “interconnected network,” or “internetworking.” And even in retrospect, he says, he has no idea when or how the notion hit him. Maybe it was just something in the air. After all, he had recently helped build the Arpanet, which was already an open system with respect to individual computers. As long as you had an IMP and met the 1822 interface specification (written by one Robert E. Kahn), you could plug into the Arpanet with any computer you wanted, running any operating system you wanted, and the bits would still flow. A network of networks was the same principle, just one level up. Perhaps the idea also resonated with the open-software architecture of operating systems such as CTSS, Multics, and Unix, which gave users a standard interface for writing their own programs. Or perhaps it even resonated with Alan Kay’s notion of software objects, which would present a standard interface to the world while doing their own thing inside.
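
To make that “standard interface” idea concrete, here is a loose modern illustration in Python; it is not anything Kahn or Kay actually wrote, and all of the class and function names are invented. Two “networks” with completely different internals expose the same deliver method, so a sender can use either one without knowing or caring what happens inside, which is the point of both an 1822-style interface spec and Kay’s software objects.

```python
# Illustrative sketch only (not from the book): the "standard interface" idea
# described above, rendered in modern Python. All names here are invented.

class StoreAndForwardNet:
    """One 'network' that queues packets internally before forwarding them."""
    def __init__(self):
        self._queue = []          # internal detail, hidden from callers

    def deliver(self, packet: bytes) -> None:
        self._queue.append(packet)

class LoggingNet:
    """A completely different implementation behind the same interface."""
    def deliver(self, packet: bytes) -> None:
        print(f"forwarding {len(packet)} bytes")

def send(net, packet: bytes) -> None:
    # The sender relies only on the agreed-upon interface (deliver),
    # never on how each network does its own thing inside.
    net.deliver(packet)

for net in (StoreAndForwardNet(), LoggingNet()):
    send(net, b"hello, arpanet")
```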

But wherever the idea came from, it was an exceedingly potent one—as potent, in its own way, as the idea of an open marketplace guided by Adam Smith’s Invisible Hand, or the idea of an open democracy in which all human beings are created equal. If you wanted to be a part of the Internet, you could be: it would be open to anyone willing to deal in the common currency and/or speak the common language, as defined by the interface standard. It was this architecture of openness that would enable the Internet to undergo its explosive growth in the 1990s, when it would expand from a handful of users to uncounted tens of millions in less than half a decade. And it was this same architecture of openness that would continue to allow the Internet to flourish with minimal central authority.

Smalltalk’s overlapping windows, its icons, its menus, and its mouse pointing device constituted a user interface that was more unified and more tightly integrated than that in any other Alto application—and that, moreover, would soon become very, very familiar to the rest of us. To see what it looked like, just glance at the screen of any Apple Macintosh, or any PC running Microsoft Windows: the descendants of that Smalltalk interface can now be found on tens of millions of computers worldwide. Granted, Smalltalk wasn’t the fastest programming language in the world. But then, that wasn’t the point. Kay and his team tried to make the Smalltalk environment very much like an artist’s studio, with all the tools so close at hand that users would never once have to break their concentration. With just a click of the mouse, for example, you could pop up a browser window to scan through a library of existing objects. You could likewise pop up an editor window to write new code defining new behaviors for those objects, then pop up another window to execute the code and watch how they actually responded, and then pop up yet another window full of debugging tools to help you ferret out any problems.
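
The browse, edit, run, and debug loop described here can be approximated in any dynamic language. The sketch below is a rough modern analogy in Python rather than Smalltalk, and it is not drawn from the book; the Turtle class and its methods are invented for illustration. It mimics the studio workflow: inspect what an object can already do, attach a new behavior at run time, and immediately exercise it.

```python
# A rough analogy (in Python, not Smalltalk) to the browse/edit/run/debug
# loop described above. The Turtle example is invented for illustration.

class Turtle:
    """A small object whose behavior we inspect and then extend."""
    def __init__(self):
        self.x = 0

    def forward(self, steps: int) -> None:
        self.x += steps

# "Browser": scan the behaviors the object already understands.
print([name for name in dir(Turtle) if not name.startswith("_")])

# "Editor": define a new behavior and attach it without restarting anything.
def turn_back(self) -> None:
    self.x = -self.x

Turtle.turn_back = turn_back

# "Execution": send messages and watch how the object responds.
t = Turtle()
t.forward(10)
t.turn_back()
print(t.x)   # -10

# "Debugger": drop into an interactive debugger at any point, e.g.:
#     import pdb; pdb.set_trace()
```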

Considered as an event, and not just as a place, the Xerox Palo Alto Research Center was unique, one of those rare, near-miraculous combinations of talent, luck, and timing that you couldn’t reproduce if you tried. To begin with, PARC was the direct heir of the people, the ideas, the technology, and the culture that had been nurtured by ARPA since J. C. R. Licklider’s time. Indeed, it’s fair to say that PARC was ARPA continued by other means. Second, PARC was established at precisely the right moment. Not only were microchips just beginning to make ARPA’s visions feasible in the real world, but the first generation of ARPA students was just beginning to come of age. “In the history of art,” says Alan Kay, “the most powerful work in any genre is done not by the adults who invent it, but by the first generation of kids who grow up in it. Think of perspective painting during the Renaissance. Well, we were that generation. We were the kids who’d had our Ph.D.s paid for by ARPA.” And third, PARC managed to bring a substantial portion of that generation together under a corporate sponsor, Xerox, that was rich enough to provide ample money and resources, yet patient enough to wait for long-term results. Moreover, PARC in its early years was blessed with a set of leaders—Jack Goldman, George Pake, and Bob Taylor—who deeply understood the dynamics of innovation. True, the chemistry between Pake and Taylor was far from ideal, but even so, they managed to cultivate an extraordinarily fertile environment for creativity. “The PARC of the nineteen-seventies was amazing,” says Danny Bobrow. “Yes, it had a major fraction of the most brilliant computer scientists in the world at that time. But what really made it was the interaction of all these people. There was Alan Kay wanting a certain kind of on-lineness, Thacker and Lampson wanting a certain kind of machine, the whole notion of mice and hypertext coming out of the Engelbart tradition—even me arguing with Kay about what object-oriented programming meant. There was a sense that we could invent it all. We could do hardware, software, AI, printing, networking. And we had the freedom to do it. George Pake was providing the shield so that we could think hard and provide a new vision. And we did, in many dimensions.”

They did indeed. In the space of about three years—an impressively short time even by computer standards—Kay, Lampson, Thacker, Metcalfe, Starkweather, and the other Young Turks who gathered at PARC pioneered all the basic technologies that have dominated the field ever since: a stand-alone personal computer with a bit-mapped graphics display; a recognizably modern graphical user interface (GUI) featuring windows, icons, menus, the mouse pointing device, and all the rest; object-oriented programming; WYSIWYG word processing and a host of other graphical applications; the laser printer; and a local-area network—Ethernet—to tie them all together.

To Engelbart, the difference between the batch-processing mainframes sold by IBM and the kind of interactive, “personal” computing that he was after (via time-sharing) was the difference between railroads and the private automobile. One represented schedules, standardization, and regimentation, and the other individuality, autonomy, and freedom.

Lick’s kids

So there it was: even Lick had to admit that the situation at ARPA had improved significantly since his day. But for how long? Bob Kahn had done great things, granted. And after Heilmeier, a director like Bob Fossum was a joy. But in the current climate of budgetary and “relevance” constraints, who knew what the next director might be like? Nonetheless, Lick wrote in his “Multinet” article, he was still an optimist. Even if you couldn’t automatically look to the private sector or to the government for leadership, you still had—well, the People. At least when it came to computing, he wrote, “there is a feeling of renewed hope in the air that the public interest will find a way of dominating the decision processes that shape the future.” Just look at E-mail, the Arpanet mailing lists, and all the rest, he said. Just look at the on-line communities that seemed to come into being wherever there was a network. Users of a modern computing system weren’t just passive consumers; the medium itself drew them in. It gave them a forum, it made them active participants, it gave them a stake in deciding their own destiny. So if you could somehow expose ordinary people to this medium—if you could somehow get the technology out of the laboratory and into the mass market so they could experience it firsthand—then ordinary people might just create this embodiment of equality, community, and freedom on their own. It was a vision that was downright Jeffersonian in its idealism, and perhaps in its naïveté as well. Nonetheless, Lick insisted, “the renewed hope I referred to is more than just a feeling in the air.... It is a feeling one experiences at the console. The information revolution is bringing with it a key that may open the door to a new era of involvement and participation. The key is the self-motivating exhilaration that accompanies truly effective interaction with information and knowledge through a good console connected through a good network to a good computer.”

And so it was that the field was left open for the hobbyists in their basements and garages, individuals who didn’t have to worry about meeting payrolls and pleasing the stockholders but just liked to play around with electronics—and were used to making components do things their manufacturers had never imagined. These were the same kinds of tinkerers who’d gotten creative with DEC’s first transistor modules in the 1950s, who’d fallen in love with the PDP-8 in the 1960s, and who were snapping up the programmable calculators now. They were the spiritual brothers of the MIT hackers and the freewheeling Unix mavens. They were the people who had been ham-radio operators and/or stereo buffs since they were teenagers, often using equipment they had built themselves from mail-order kits, or from scratch. They were the guys who had gotten intrigued by the minicomputers they’d encountered at work or at school. And for no “logical” reason whatsoever—certainly none that they could explain to their spouses—they were the people who wanted computers of their own at home, to play with, to experiment with, to experience.

In retrospect, of course, Berners-Lee’s combination of hypertext and browsing would prove to be one of those brilliantly simple ideas that change everything. By giving users something to see, it set off a subtle but powerful shift in the psychology of the Internet. No longer would it be just an abstract communication channel, like the telephone or the TV; instead, it would become a place, an almost tangible reality that you could enter into, explore, and even share with the other people you found there. It would become the agora, the electronic commons, the information infrastructure, cyberspace. Because of Berners-Lee’s hypertext browsing, users would finally begin to get it about the Internet. And they would want more.