Introduction
World War II was decided by steel and aluminum, and followed shortly thereafter by the Cold War, which was defined by atomic weapons. The rivalry between the United States and China may well be determined by computing power. Strategists in Beijing and Washington now realize that all advanced tech—from machine learning to missile systems, from automated vehicles to armed drones—requires cutting-edge chips, known more formally as semiconductors or integrated circuits. A tiny number of companies control their production.
Fabricating and miniaturizing semiconductors has been the greatest engineering challenge of our time. Today, no firm fabricates chips with more precision than the Taiwan Semiconductor Manufacturing Company, better known as TSMC.
Last year, the chip industry produced more transistors than the combined quantity of all goods produced by all other companies, in all other industries, in all human history. Nothing else comes close.
Semiconductors spread across society because companies devised new techniques to manufacture them by the millions, because hard-charging managers relentlessly drove down their cost, and because creative entrepreneurs imagined new ways to use them. The making of Moore’s Law is as much a story of manufacturing experts, supply chain specialists, and marketing managers as it is of physicists and electrical engineers.
If any one of the steps in the semiconductor production process is interrupted, the world’s supply of new computing power is imperiled. In the age of AI, it’s often said that data is the new oil. Yet the real limitation we face isn’t the availability of data but of processing power. There’s a finite number of semiconductors that can store and process data. Producing them is mind-bogglingly complex and horrendously expensive. Unlike oil, which can be bought from many countries, our production of computing power depends fundamentally on a series of choke points: tools, chemicals, and software that often are produced by a handful of companies—and sometimes only by one. No other facet of the economy is so dependent on so few firms. Chips from Taiwan provide 37 percent of the world’s new computing power each year. Two Korean companies produce 44 percent of the world’s memory chips. The Dutch company ASML builds 100 percent of the world’s extreme ultraviolet lithography machines, without which cutting-edge chips are simply impossible to make. OPEC’s 40 percent share of world oil production looks unimpressive by comparison.
PART I: COLD WAR CHIPS
Eight engineers left Shockley Semiconductor and decided to found their own company, Fairchild Semiconductor, with seed funding from an East Coast millionaire. The eight defectors from Shockley’s lab are widely credited with founding Silicon Valley. One of the eight, Eugene Kleiner, would go on to found Kleiner Perkins, one of the world’s most powerful venture capital firms. Gordon Moore, who went on to run Fairchild’s R&D operation, would later articulate the observation that became known as Moore’s Law, describing the exponential growth in computing power. Most important was Bob Noyce, the leader of the “traitorous eight,” who had a charismatic, visionary enthusiasm for microelectronics and an intuitive sense of which technical advances were needed to make transistors tiny, cheap, and reliable. Matching new inventions with commercial opportunities was exactly what a startup like Fairchild needed to succeed—and what the chip industry needed to take off.
The Nobel Prize for inventing the transistor went to Shockley, Bardeen, and Brattain. Jack Kilby later won a Nobel for creating the first integrated circuit; had Bob Noyce not died at the age of sixty-two, he’d have shared the prize with Kilby. These inventions were crucial, but science alone wasn’t enough to build the chip industry. The spread of semiconductors was enabled as much by clever manufacturing techniques as by academic physics. Universities like MIT and Stanford played a crucial role in developing knowledge about semiconductors, but the chip industry only took off because graduates of these institutions spent years tweaking production processes to make mass manufacturing possible. It was engineering and intuition, as much as scientific theorizing, that turned a Bell Labs patent into a world-changing industry.
PART II: THE CIRCUITRY OF THE AMERICAN WORLD
Soviet leaders never comprehended how the “copy it” strategy condemned them to backwardness. The entire Soviet semiconductor sector functioned like a defense contractor—secretive, top-down, oriented toward military systems, fulfilling orders with little scope for creativity. The copying process was “tightly controlled” by Minister Shokin, one of his subordinates remembered. Copying was literally hardwired into the Soviet semiconductor industry, with some chipmaking machinery using inches rather than centimeters to better replicate American designs, even though the rest of the USSR used the metric system. Thanks to the “copy it” strategy, the USSR started several years behind the U.S. in transistor technology and never caught up.
Integrated circuits didn’t only connect electronic components in innovative ways, they also knit together nations in a network, with the United States at its center. The Soviets inadvertently made themselves part of this network by copying Silicon Valley’s products. Japan, by contrast, was deliberately integrated into America’s semiconductor industry, a process supported by Japanese business elites and the U.S. government.
The semiconductor symbiosis that emerged between America and Japan involved a complex balancing act. Each country relied on the other for supplies and for customers. By 1964, Japan had overtaken the U.S. in production of discrete transistors, while American firms produced the most advanced chips. U.S. firms built the best computers, while electronics manufacturers like Sony and Sharp produced consumer goods that drove semiconductor consumption.
In the end, the guerilla war in Vietnam’s countryside wasn’t a fight that aerial bombing could win. The arrival of TI’s Paveway laser-guided bombs coincided with America’s defeat in the war. When military leaders like General William Westmoreland predicted “combat areas that are under real- or near real-time surveillance” and “automated fire control,” many people heard echoes of the hubris that had dragged America into Vietnam in the first place. Outside a small circle of military theorists and electrical engineers, therefore, hardly anyone realized Vietnam had been a successful testing ground for weapons that married microelectronics and explosives in ways that would revolutionize warfare and transform American military power.
Taiwanese officials like K. T. Li, who’d studied nuclear physics at Cambridge and ran a steel mill before steering Taiwan’s economic development through the postwar decades, began crystallizing a strategy to integrate economically with the United States. Semiconductors were at the center of this plan. Li knew there were plenty of Taiwanese-American semiconductor engineers willing to help. In Dallas, Morris Chang urged his colleagues at TI to set up a facility in Taiwan. Many people would later describe the mainland-born Chang as “returning” to Taiwan, but 1968 was the first time he set foot on the island, having lived in the U.S. since fleeing the Communist takeover of China. Two of Chang’s PhD classmates at Stanford were from Taiwan, however, and they convinced him the island had a favorable business climate and that wages would stay low.
As Americans grew skeptical of military commitments in Asia, Taiwan desperately needed to diversify its connections with the United States. Americans who weren’t interested in defending Taiwan might be willing to defend Texas Instruments. The more semiconductor plants on the island, and the more economic ties with the United States, the safer Taiwan would be.
From South Korea to Taiwan, Singapore to the Philippines, a map of semiconductor assembly facilities looked much like a map of American military bases across Asia. Yet even after the U.S. finally admitted defeat in Vietnam and drew down its military presence in the region, these trans-Pacific supply chains endured. By the end of the 1970s, rather than dominoes falling to Communism, America’s allies in Asia were even more deeply integrated with the U.S.
Noyce and Moore abandoned Fairchild as quickly as they’d left Shockley’s startup a decade earlier, and founded Intel, which stood for Integrated Electronics. In their vision, transistors would become the cheapest product ever produced, but the world would consume trillions and trillions of them. Humans would be empowered by semiconductors while becoming fundamentally dependent on them.
Intel planned to dominate the business of DRAM chips. Memory chips don’t need to be specialized, so chips with the same design can be used in many different types of devices. This makes it possible to produce them in large volumes. By contrast, the other main type of chips—those tasked with “computing” rather than “remembering”—were specially designed for each device, because every computing problem was different. A calculator worked differently than a missile’s guidance computer, for example, so until the 1970s, they used different types of logic chips. This specialization drove up cost, so Intel decided to focus on memory chips, where mass production would produce economies of scale.
Hoff realized computers face a tradeoff between customized logic circuits and customized software. Because chipmaking was a custom business, delivering specialized circuits for each device, customers didn’t think hard about software. However, Intel’s progress with memory chips—and the prospect they would become exponentially more powerful over time—meant computers would soon have the memory capacity needed to handle complex software. Hoff bet it would soon be cheaper to design a standardized logic chip that, coupled with a powerful memory chip programmed with different types of software, could compute many different things. After all, Hoff knew, no one was building memory chips more powerful than Intel’s.
In the early 1960s, it had been possible to claim the Pentagon had created Silicon Valley. In the decade since, the tables had turned. The U.S. military lost the war in Vietnam, but the chip industry won the peace that followed, binding the rest of Asia, from Singapore to Taiwan to Japan, more closely to the U.S. via rapidly expanding investment links and supply chains. The entire world was more tightly connected to America’s innovation infrastructure, and even adversaries like the USSR spent their time copying U.S. chips and chipmaking tools. Meanwhile, the chip industry had catalyzed an array of new weapons systems that were remaking how the U.S. military would fight future wars. American power was being recast. Now the entire nation depended on Silicon Valley’s success.
PART III: LEADERSHIP LOST?
The U.S. had supported Japan’s postwar transformation into a transistor salesman. U.S. occupation authorities transferred knowledge about the invention of the transistor to Japanese physicists, while policymakers in Washington ensured Japanese firms like Sony could easily sell into U.S. markets. The effort to turn Japan into a country of democratic capitalists had worked. Now some Americans were asking whether it had worked too well. The strategy of empowering Japanese businesses seemed to be undermining America’s economic and technological edge.
The U.S. military was more dependent on electronics—and thus on chips—than ever before. By the 1980s, the report found, around 17 percent of military spending went toward electronics, compared to 6 percent at the end of World War II. Everything from satellites to early warning radars to self-guided missiles depended on advanced chips. The Pentagon’s task force summarized the ramifications in four bullet points, underlining the key conclusions:
- U.S. military forces depend heavily on technological superiority to win.
- Electronics is the technology that can be leveraged most highly.
- Semiconductors are the key to leadership in electronics.
- U.S. defense will soon depend on foreign sources for state-of-the-art technology in semiconductors.
American power in Asia had been built on technological dominance, military might, and trade and investment links that knit together Japan, Hong Kong, South Korea, and the countries of Southeast Asia. From the first Fairchild assembly plant on Hong Kong’s Kowloon Bay, integrated circuits had been an integral feature of America’s position in Asia. U.S. chipmakers built facilities from Taiwan to South Korea to Singapore. These territories were defended from Communist incursions not only by military force but also by economic integration, as the electronics industry sucked the region’s peasants off farms—where rural poverty often inspired guerilla opposition—into good jobs assembling electronic devices for American consumption. America’s supply chain statecraft had worked brilliantly in fending off Communists, but by the 1980s, the primary beneficiary looked to have been Japan. Its trade and foreign investment had grown massively. Tokyo’s role in Asia’s economics and politics was expanding inexorably. If Japan could so swiftly establish dominance over the chip industry, what would stop it from dethroning America’s geopolitical preeminence, too?
PART IV: AMERICA RESURGENT
Silicon Valley’s resurgence was driven by scrappy startups and by wrenching corporate transformations. The U.S. overtook Japan’s DRAM behemoths not by replicating them but by innovating around them. Rather than cutting itself off from trade, Silicon Valley offshored even more production to Taiwan and South Korea to regain its competitive advantage. Meanwhile, as America’s chip industry recovered, the Pentagon’s bet on microelectronics began to pay off as it fielded new weapons systems that no other country could match. America’s unrivaled power during the 1990s and 2000s stemmed from its resurgent dominance in computer chips, the core technology of the era.
Grove’s restructuring of Intel was a textbook case of Silicon Valley capitalism. He recognized that the company’s business model was broken and decided to “disrupt” Intel himself by abandoning the DRAM chips it had been founded to build. The firm established a stranglehold on the market for PC chips, issuing a new generation of chips every year or two, offering smaller transistors and more processing power. Only the paranoid survive, Andy Grove believed. More than innovation or expertise, it was his paranoia that saved Intel.
Most of Silicon Valley was happy to work with Korean companies, undercutting Japanese competitors and helping make South Korea one of the world’s leading centers of memory chipmaking. The logic was simple, as Jerry Sanders explained: “my enemy’s enemy is my friend.”
Silicon Valley’s rebirth isn’t solely a story of heroic entrepreneurs and creative destruction. Alongside the rise of these new industrial titans, a new set of scientists and engineers were preparing a leap forward in chipmaking and devising revolutionary new ways to use processing power. Many of these developments occurred in coordination with government efforts, usually not the heavy hand of Congress or the White House, but the work of small, nimble organizations like DARPA that were empowered to take big bets on futuristic technologies—and to build the educational and R&D infrastructure that such gambles required.
Government efforts were effective not when they tried to resuscitate failing firms, but when they capitalized on pre-existing American strengths, providing funding to let researchers turn smart ideas into prototype products. Members of Congress would no doubt have been furious had they learned that DARPA—ostensibly a defense agency—was wining and dining professors of computer science as they theorized about chip design. But it was efforts like these that shrank transistors, discovered new uses for semiconductors, drove new customers to buy them, and funded the subsequent generation of smaller transistors. When it came to semiconductor design, no country in the world had a better innovation ecosystem.
The USSR’s “copy it” strategy had actually benefitted the United States, guaranteeing the Soviets faced a continued technological lag. In 1985, the CIA conducted a study of Soviet microprocessors and found that the USSR produced replicas of Intel and Motorola chips like clockwork. They were always half a decade behind. The Soviets lacked an international supply chain. Working with America’s Cold War allies, Silicon Valley had forged an ultra-efficient globalized division of labor. Japan led the production of memory chips, the U.S. produced most of the world’s microprocessors, while Japan’s Nikon and Canon and the Netherlands’ ASML split the market for lithography equipment. Workers in Southeast Asia conducted much of the final assembly. American, Japanese, and European companies jostled over their position in this division of labor, but they all benefitted from the ability to spread R&D costs over a far larger semiconductor market than the USSR ever had.
None of the Japanese chip giants could replicate Intel’s pivot to microprocessors or its mastery of the PC ecosystem. Only one Japanese firm, NEC, really tried, but it never won more than a tiny share of the microprocessor market. For Andy Grove and Intel, making money on microprocessors was a matter of life or death. Japan’s DRAM firms, with massive market share and few financial constraints, ignored the microprocessor market until it was too late. As a result, the PC revolution mostly benefitted American chip firms. By the time Japan’s stock market crashed, Japan’s semiconductor dominance was already eroding. In 1993, the U.S. retook first place in semiconductor shipments. By 1998, South Korean firms had overtaken Japan as the world’s largest producers of DRAM, and Japan’s DRAM market share had fallen from 90 percent in the late 1980s to 20 percent. Japan’s semiconductor ambitions had underwritten the country’s expanding sense of its global position, but this foundation now looked brittle.
After the easy defeat of Saddam Hussein’s Iraq, America’s vast new fighting power was visible to everyone. This caused a crisis in the Soviet military and the KGB, who were embarrassed yet afraid to admit how decisively they were outgunned. The security chiefs led a demoralized coup attempt against Gorbachev that sputtered out after three days. It was a pathetic end for a once-powerful country, which couldn’t come to terms with the painful decline in its military power. The Russian chip industry faced humiliation of its own, with one fab reduced in the 1990s to producing tiny chips for McDonald’s Happy Meal toys. The Cold War was over; Silicon Valley had won.
PART V: INTEGRATED CIRCUITS, INTEGRATED WORLD?
The 1990s were the years when the word “globalization” first became commonly used, though the chip industry had relied on international production and assembly since the earliest days of Fairchild Semiconductor. Taiwan had deliberately inserted itself into semiconductor supply chains since the 1960s, as a strategy to provide jobs, acquire advanced technology, and strengthen its security relationship with the United States. In the 1990s, Taiwan’s importance began to grow, driven by the spectacular rise of the Taiwan Semiconductor Manufacturing Company, which Chang founded with strong backing from the Taiwanese government.
As early as the mid-1970s, while still at TI, Chang had toyed with the idea of creating a semiconductor company that would manufacture chips designed by customers. At the time, chip firms like TI, Intel, and Motorola mostly manufactured chips they had designed in-house. Chang pitched this new business model to fellow TI executives in March 1976. “The low cost of computing power,” he explained to his TI colleagues, “will open up a wealth of applications that are not now served by semiconductors,” creating new sources of demand for chips, which would soon be used in everything from phones to cars to dishwashers. The firms that made these goods lacked the expertise to produce semiconductors, so they’d prefer to outsource fabrication to a specialist, he reasoned. Moreover, as technology advanced and transistors shrank, the cost of manufacturing equipment and R&D would rise. Only companies that produced large volumes of chips would be cost-competitive. TI’s other executives weren’t convinced. At the time, in 1976, there weren’t any “fabless” companies that designed chips but lacked their own fabs, though Chang predicted such companies would soon emerge.
The founding of TSMC gave all chip designers a reliable partner. Chang promised never to design chips, only to build them. TSMC didn’t compete with its customers; it succeeded if they did. A decade earlier, Carver Mead had prophesied a Gutenberg moment in chipmaking, but there was one key difference. The old German printer had tried and failed to establish a monopoly over printing. He couldn’t stop his technology from quickly spreading across Europe, benefitting authors and print shops alike. In the chip industry, by lowering startup costs, Chang’s foundry model gave birth to dozens of new “authors”—fabless chip design firms—that transformed the tech sector by putting computing power in all sorts of devices. However, the democratization of authorship coincided with a monopolization of the digital printing press. The economics of chip manufacturing required relentless consolidation. Whichever company produced the most chips had a built-in advantage, improving its yield and spreading capital investment costs over more customers. TSMC’s business boomed during the 1990s and its manufacturing processes improved relentlessly. Morris Chang wanted to become the Gutenberg of the digital era. He ended up vastly more powerful. Hardly anyone realized it at the time, but Chang, TSMC, and Taiwan were on a path toward dominating the production of the world’s most advanced chips.
Were it not for Communist rule, China might have played a much larger role in the semiconductor industry. When the integrated circuit was invented, China had many of the ingredients that helped Japan, Taiwan, and South Korea attract American semiconductor investment, like a vast, low-cost workforce and a well-educated scientific elite. However, after seizing power in 1949, the Communists looked at foreign connections with suspicion. For someone like Morris Chang, returning to China after finishing his studies at Stanford would have meant certain poverty and possible imprisonment or death. Many of the best graduates from China’s universities before the revolution ended up working in Taiwan or in California, building the electronics capabilities of the PRC’s primary rivals. China’s Communist government, meanwhile, made the same mistakes the Soviet Union did, though in more extreme forms.
The geography of chip fabrication shifted drastically over the 1990s and 2000s. U.S. fabs made 37 percent of the world’s chips in 1990, but this number fell to 19 percent by 2000 and 13 percent by 2010. Japan’s market share in chip fabrication collapsed, too. South Korea, Singapore, and Taiwan each poured funds into their chip industries and rapidly increased output.
If anyone could build a chip industry in China, it was Richard Chang. He wouldn’t rely on nepotism or on foreign help. All the knowledge needed for a world-class fab was already in his head. While working at Texas Instruments, he’d opened new facilities for the company around the world. Why couldn’t he do the same in Shanghai? He founded the Semiconductor Manufacturing International Corporation (SMIC) in 2000, raising over $1.5 billion from international investors like Goldman Sachs, Motorola, and Toshiba. One analyst estimated that half of SMIC’s startup capital was provided by U.S. investors. Chang used these funds to hire hundreds of foreigners to operate SMIC’s fab, including at least four hundred from Taiwan.
Chang’s strategy was simple: do as TSMC had done. In Taiwan, TSMC had hired the best engineers it could find, ideally with experience at American or other advanced chip firms. TSMC bought the best tools it could afford. It focused relentlessly on training its employees in the industry’s best practices. And it took advantage of all the tax and subsidy benefits that Taiwan’s government was willing to provide. SMIC followed this road map religiously.
Now TSMC had competition from multiple foundries in different countries in East Asia. Singapore’s Chartered Semiconductor, Taiwan’s UMC and Vanguard Semiconductor, and South Korea’s Samsung—which entered the foundry business in 2005—were also competing with TSMC to produce chips designed elsewhere. Most of these companies were subsidized by their governments, but this made chip production cheaper, benefitting the mostly American fabless semiconductor designers they served. Fabless firms, meanwhile, were in the early stages of launching a revolutionary new product chock-full of complex chips: the smartphone. Offshoring had reduced manufacturing costs and spurred more competition. Consumers benefitted from low prices and from previously unthinkable devices. Wasn’t this exactly how globalization was designed to work?
Both ASML and TSMC started as small firms on the periphery of the chip industry, but they grew together, forming a partnership without which advances in computing today would have ground to a halt.
America’s power was at its peak. Most people in Washington thought globalization was a good thing. The dominant belief in the U.S. government was that expanding trade and supply chain connections would promote peace by encouraging powers like Russia or China to focus on acquiring wealth rather than geopolitical power. Claims that the decline of America’s lithography industry would imperil security were seen as out of touch with this new era of globalization and interconnection. The chip industry, meanwhile, simply wanted to build semiconductors as efficiently as possible. With no large-scale U.S. lithography firms remaining, what choice did it have but to bet on ASML?
Anyone who raised the question of how the U.S. could guarantee access to EUV tools was accused of retaining a Cold War mindset in a globalizing world. Yet the business gurus who spoke about technology spreading globally misrepresented the dynamic at play. The scientific networks that produced EUV spanned the world, bringing together scientists from countries as diverse as America, Japan, Slovenia, and Greece. However, the manufacturing of EUV wasn’t globalized, it was monopolized. A single supply chain managed by a single company would control the future of lithography.
In the years since Intel first adopted the x86 architecture, computer scientists at Berkeley had devised a newer, simpler chip architecture called RISC that offered more efficient calculations and thus lower power consumption. The x86 architecture was complex and bulky by comparison. In the 1990s, Andy Grove had seriously considered switching Intel’s main chips to a RISC architecture, but ultimately decided against it. RISC was more efficient, but the cost of change was high, and the threat to Intel’s de facto monopoly was too serious. The computer industry was designed around x86 and Intel dominated the ecosystem. So x86 defines most PC architectures to this day.
Just a handful of years after Intel turned down the iPhone contract, Apple was making more money selling smartphones than Intel made selling PC processors. Intel tried several times to scale the walls of Apple’s castle but had already lost the first-mover advantage. Spending billions for second place was hardly appealing, especially since Intel’s PC business was still highly profitable and its data center business was growing quickly. So Intel failed to win a foothold in mobile devices, which today consume nearly a third of chips sold. It still hasn’t.
Intel’s missed opportunities in the years since Grove left the scene all had a common cause. Since the late 1980s, Intel has made a quarter trillion dollars in profit, even before adjusting for inflation, a track record that few other companies have matched. It has done this by charging premium prices for PC and server chips. Intel could sustain high prices because of the optimized design processes and advanced manufacturing that Grove had honed and bequeathed to his successors. The company’s leadership consistently prioritized the production of chips with the highest profit margin. This was a rational strategy—no one wants products with low profit margins—but it made it impossible to try anything new. A fixation on hitting short-term margin targets began to replace long-term technology leadership. The shift in power from engineers to managers accelerated this process.
A new consensus in Washington formed around the idea that the best policy was to “run faster” than America’s rivals. “The likelihood that the United States will grow dependent on any one country, much less China, for any one product, especially semiconductors, is exceedingly small,” predicted one American expert. “Run faster” was an elegant strategy with only a single problem: by some key metrics, the U.S. wasn’t running faster, it was losing ground. Hardly anyone in government bothered to do the analysis, but Andy Grove’s gloomy predictions about the offshoring of expertise were partially coming true.
PART VI: OFFSHORING INNOVATION?
By the 2000s, it was common to split the semiconductor industry into three categories. “Logic” refers to the processors that run smartphones, computers, and servers. “Memory” refers to DRAM, which provides the short-term memory computers need to operate, and flash, also called NAND, which retains data even when a device is powered off. The third category of chips is more diffuse, including analog chips like sensors that convert visual or audio signals into digital data, radio frequency chips that communicate with cell phone networks, and semiconductors that manage how devices use electricity.
This third category has not been primarily dependent on Moore’s Law to drive performance improvements. Clever design matters more than shrinking transistors. Today around three-quarters of this category of chips are produced using process technology at or larger than 180 nanometers, a manufacturing technology that was pioneered in the late 1990s. As a result, the economics of this segment are different from those of logic and memory chips, which must relentlessly shrink transistors to remain on the cutting edge. Fabs for these types of chips generally don’t need to race toward the smallest transistors every couple of years, so they’re substantially cheaper, on average requiring a quarter the capital investment of an advanced fab for logic or memory chips. Today, the biggest analog chipmakers are American, European, or Japanese. Most of their production occurs in these three regions, too, with only a sliver offshored to Taiwan and South Korea.
The memory market, by contrast, has been dominated by a relentless push toward offshoring production to a handful of facilities, mostly in East Asia. Rather than a diffuse set of suppliers centered in advanced economies, the two main types of memory chip—DRAM and NAND—are each produced by only a handful of firms. For DRAM memory chips, the type of semiconductor that defined Silicon Valley’s clash with Japan in the 1980s, an advanced fab can cost $20 billion. There used to be dozens of DRAM producers, but today there are only three major ones.
The market for NAND, the other main type of memory chip, is also Asia-centric.
With the prominent exception of Intel, many key American logic chipmakers have given up their fabs and outsourced manufacturing. Other formerly major players, like Motorola or National Semiconductor, went bankrupt, were purchased, or saw their market share shrink. They were replaced by fabless firms, which often hired chip designers from legacy semiconductor firms but outsourced fabrication to TSMC or other foundries in Asia. This let fabless companies focus on their strength—chip design—without requiring simultaneous expertise in fabricating semiconductors.
Since the late 1980s, there’s been explosive growth in the number of fabless chip firms, which design semiconductors in-house but outsource their manufacturing, commonly relying on TSMC for this service.
Computer graphics remained an appealing niche for semiconductor startups, because unlike PC microprocessors, in graphics Intel didn’t have a de facto monopoly. Every PC maker, from IBM to Compaq, had to use an Intel or an AMD chip for its main processor, because these two firms alone controlled the x86 instruction set that PCs required. There was a lot more competition in the market for chips that rendered images on screens. The emergence of semiconductor foundries, and the driving down of startup costs, meant that it wasn’t only Silicon Valley aristocracy that could compete to build the best graphics processors. The company that eventually came to dominate the market for graphics chips, Nvidia, had its humble beginnings not in a trendy Palo Alto coffeehouse but in a Denny’s in a rough part of San Jose.
In 2006, realizing that high-speed parallel computations could be used for purposes besides computer graphics, Nvidia released CUDA, software that lets GPUs be programmed in a standard programming language, without any reference to graphics at all. Even as Nvidia was churning out top-notch graphics chips, Huang spent lavishly on this software effort, at least $10 billion, according to a company estimate in 2017, to let any programmer—not just graphics experts—work with Nvidia’s chips. Huang gave away CUDA for free, but the software only works with Nvidia’s chips. By making the chips useful beyond the graphics industry, Nvidia discovered a vast new market for parallel processing, from computational chemistry to weather forecasting. At the time, Huang could only dimly perceive the potential growth in what would become the biggest use case for parallel processing: artificial intelligence.
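To make concrete what programming a GPU “without any reference to graphics at all” looks like, here is a minimal, illustrative CUDA sketch (not drawn from this account; the kernel and variable names are invented for illustration). It adds two large arrays by giving each element its own GPU thread, the same data-parallel pattern that later proved useful for chemistry simulations, weather models, and neural networks.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread computes one element of the output array.
__global__ void vectorAdd(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n) out[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) data.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device (GPU) data.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements in parallel.
    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(da, db, dc, n);

    // Copy the result back and spot-check one value (expect 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The arithmetic here is trivial; the point is the programming model, in which the same small kernel runs across thousands of threads at once, which is precisely the property that AI workloads would later exploit.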
It’s easy to lament the offshoring of semiconductor manufacturing. But companies like Qualcomm might not have survived if they’d had to invest billions of dollars each year building fabs. Jacobs and his engineers were wizards at cramming data into the radio-wave spectrum and devising ever-more-clever chips to decode the meaning of these signals. As was the case with Nvidia, it was a good thing they didn’t have to try to be semiconductor manufacturing experts, too. Qualcomm repeatedly considered opening its own fabrication facilities, but always decided against it, given the cost and complexity involved. Thanks to TSMC, Samsung, and other companies willing to produce their chips, Qualcomm’s engineers could focus on their core strengths in managing spectrum and in semiconductor design.
Moore’s Law marched forward through the 2000s and early 2010s, forcing cutting-edge chipmakers to spend ever larger sums rolling out a new, more advanced manufacturing process roughly once every two years. Smartphone, PC, and server chips quickly migrated to each new “node,” taking advantage of increased processing power and lower power consumption as transistors were more densely packed. Each node transition required ever more expensive machinery to produce. For many years, each generation of manufacturing technology was named after the length of the transistor’s gate, the part of the silicon chip whose conductivity would be turned on and off, creating and interrupting the circuit. The 180nm node was pioneered in 1999, followed by 130nm, 90nm, 65nm, and 45nm, with each generation shrinking transistors enough to make it possible to cram roughly twice as many in the same area. This reduced power consumption per transistor, because smaller transistors needed fewer electrons to flow through them.
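As a rough, idealized illustration of the arithmetic behind those node names (a standard back-of-the-envelope scaling relation, not a figure from this account): shrinking every linear dimension of a transistor by a factor of about 0.7 halves the area each one occupies, and halving the area doubles how many fit in the same space:

\[
\frac{A_{\text{new}}}{A_{\text{old}}} \approx (0.7)^2 \approx 0.5
\qquad\Longrightarrow\qquad
\frac{D_{\text{new}}}{D_{\text{old}}} = \frac{A_{\text{old}}}{A_{\text{new}}} \approx 2,
\]

where \(A\) is the area per transistor and \(D\) is transistor density. The named nodes track the same 0.7 factor: 180 nm × 0.7 ≈ 130 nm, 130 × 0.7 ≈ 90, 90 × 0.7 ≈ 65, and 65 × 0.7 ≈ 45.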
Chang realized that TSMC could pull ahead of rivals technologically because it was a neutral player around which other companies would design their products. He called this TSMC’s “Grand Alliance,” a partnership of dozens of companies that design chips, sell intellectual property, produce materials, or manufacture machinery. Many of these companies compete with each other, but since none fabricate wafers, none compete with TSMC. TSMC could therefore coordinate between them, setting standards that most other companies in the chip industry would agree to use. They had no choice, because compatibility with TSMC’s processes was crucial for almost every company. For fabless firms, TSMC was their most competitive source of manufacturing services. For equipment companies and materials firms, TSMC was often their biggest customer. As smartphones began to take off, driving up demand for silicon, Morris Chang sat at the center.
Application processors, the electronic brain inside each smartphone, are mostly produced in Taiwan and South Korea before being sent to China for final assembly inside a phone’s plastic case and glass screen. Apple’s iPhone processors are fabricated exclusively in Taiwan. Today, no company besides TSMC has the skill or the production capacity to build the chips Apple needs. So the text etched onto the back of each iPhone—“Designed by Apple in California. Assembled in China”—is highly misleading. The iPhone’s most irreplaceable components are indeed designed in California and assembled in China. But they can only be made in Taiwan.
By the late 2010s, ASML, the Dutch lithography company, had spent nearly two decades trying to make extreme ultraviolet lithography work. Doing so required scouring the world for the most advanced components, the purest metals, the most powerful lasers, and the most precise sensors. EUV was one of the biggest technological gambles of our time. In 2012, years before ASML had produced a functional EUV tool, Intel, Samsung, and TSMC had each invested directly in ASML to ensure the company had the funding needed to continue developing EUV tools that their future chipmaking capabilities would require.
For Frits van Houts, who took over leadership of ASML’s EUV business in 2013, the most crucial input into an EUV lithography system wasn’t any individual component, but the company’s own skill in supply chain management. ASML engineered this network of business relationships “like a machine,” van Houts explained, producing a finely tuned system of several thousand companies capable of meeting ASML’s exacting requirements. ASML itself only produced 15 percent of an EUV tool’s components, he estimated, buying the rest from other firms. This let it access the world’s most finely engineered goods, but it also required constant surveillance. The result was a machine with hundreds of thousands of components that took tens of billions of dollars and several decades to develop. The miracle isn’t simply that EUV lithography works, but that it does so reliably enough to produce chips cost-effectively. Extreme reliability was crucial for any component that would be put in the EUV system.
The final product—a chip—works so reliably because it is essentially a single component: a block of silicon topped with layers of metal wiring. There are no moving parts in a chip, unless you count the electrons zipping around inside. Producing advanced semiconductors, however, has relied on some of the most complex machinery ever made. ASML’s EUV lithography tool is the most expensive mass-produced machine tool in history, so complex it’s impossible to use without extensive training from ASML personnel, who remain on-site for the tool’s entire life span. Each EUV scanner has an ASML logo on its side. But ASML’s expertise, the company readily admits, lies in its ability to orchestrate a far-flung network of optics experts, software designers, laser companies, and many others whose capabilities were needed to make the dream of EUV a reality.
As the chip industry entered the EUV era, Intel looked poised to dominate. The company had been crucial to EUV’s emergence, thanks to Andy Grove’s initial $200 million bet on the technology in the early 1990s. Now, after billions of dollars of investment—a substantial portion of which had come from Intel—ASML had finally made the technology a reality. Yet rather than capitalizing on this new era of shrinking transistors, Intel squandered its lead, missing major shifts in semiconductor architecture needed for artificial intelligence, then bungling its manufacturing processes and failing to keep up with Moore’s Law.
Intel remains enormously profitable today. It’s still America’s biggest and most advanced chipmaker. However, its future is more in doubt than at any point since Grove’s decision in the 1980s to abandon memory and bet everything on microprocessors. It still has a shot at regaining its leadership position over the next half decade, but it could just as easily end up defunct. What’s at stake isn’t simply one company, but the future of America’s chip fabrication industry. Without Intel, there won’t be a single U.S. company—or a single facility outside of Taiwan or South Korea—capable of manufacturing cutting-edge processors.
Intel was right to perceive some benefits of an integrated model, but there were substantial downsides. Because TSMC manufactures chips for many different companies, it now fabricates nearly three times as many silicon wafers per year as Intel, so it has more chance to hone its process. Moreover, where Intel saw chip design startups as a threat, TSMC saw potential customers for manufacturing services. Because TSMC had only a single value proposition—effective manufacturing—its leadership focused relentlessly on fabricating ever-more-advanced semiconductors at lower cost. Intel’s leaders had to split their attention between chip design and chip manufacturing. They ended up bungling both.
Whether it will be Nvidia or the big cloud companies doing the vanquishing, Intel’s near-monopoly in sales of processors for data centers is ending. Losing this dominant position would have been less problematic if Intel had found new markets. However, the company’s foray into the foundry business in the mid-2010s, where it tried to compete head-on with TSMC, was a flop. Intel tried opening its manufacturing lines to any customers looking for chipmaking services, quietly admitting that the model of integrated design and manufacturing wasn’t nearly as successful as Intel’s executives claimed. The company had all the ingredients to become a major foundry player, including advanced technology and massive production capacity, but succeeding would have required a major cultural change. TSMC was open with intellectual property, but Intel was closed off and secretive. TSMC was service-oriented, while Intel thought customers should follow its own rules. TSMC didn’t compete with its customers, since it didn’t design any chips. Intel was the industry giant whose chips competed with almost everyone.
As the decade ended, only two companies could manufacture the most cutting-edge processors, TSMC and Samsung. And so far as the United States was concerned, both were problematic for the same reason: their location. Now the entire world’s production of advanced processors was taking place in Taiwan and Korea—just off the coast of America’s emerging strategic competitor: the People’s Republic of China.
PART VII: CHINA’S CHALLENGE
China’s tech giants depend on data centers full of foreign, largely U.S.-produced, chips. The documents that Edward Snowden leaked in 2013 before fleeing to Russia demonstrated American network-tapping capabilities that surprised even the cyber sleuths in Beijing. Chinese firms had replicated Silicon Valley’s expertise in building software for e-commerce, online search, and digital payments. But all this software relies on foreign hardware. When it comes to the core technologies that undergird computing, China is staggeringly reliant on foreign products, many of which are designed in Silicon Valley and almost all of which are produced by firms based in the U.S. or one of its allies. Xi thought this presented an untenable risk.
Chinese leaders didn’t need to be paranoid to think their country should build more chips at home. It wasn’t just about avoiding supply chain vulnerability. Like its neighbors, China can only win more valuable business if it produces what Beijing’s leaders call “core technologies”—products the rest of the world can’t live without. Otherwise, China risks continuing the low-profit pattern of what has occurred with the iPhone. Millions of Chinese are involved in assembling the phones, but when the devices are sold to end users Apple makes most of the money, with much of the rest accruing to the makers of the chips inside each phone. The question for China’s leaders was how to pivot to producing the kind of chips the world coveted.
If China only wanted a bigger part in this ecosystem, its ambitions could’ve been accommodated. However, Beijing wasn’t looking for a better position in a system dominated by America and its friends. Xi’s call to “assault the fortifications” wasn’t a request for slightly higher market share. It was about remaking the world’s semiconductor industry, not integrating with it. Some economic policymakers and semiconductor industry executives in China would have preferred a strategy of deeper integration, yet leaders in Beijing, who thought more about security than efficiency, saw interdependence as a threat. The Made in China 2025 plan didn’t advocate economic integration but the opposite: its primary target was to slash China’s dependence on imported chips by reducing the share of foreign chips used in China.
Chip firms simply can’t ignore the world’s largest market for semiconductors. Chipmakers jealously guard their critical technologies, of course. But almost every chip firm has non-core technology, in subsectors it doesn’t lead, that it would be happy to share for a price. When companies are losing market share or in need of financing, moreover, they don’t have the luxury of focusing on the long term. This gives China powerful levers to induce foreign chip firms to transfer technology, open production facilities, or license intellectual property, even when foreign companies realize they’re helping develop competitors. For chip firms, it’s often easier to raise funds in China than on Wall Street. Accepting Chinese capital can be an implicit requirement for doing business in the country. Viewed on their own terms, the deals that IBM, AMD, and Arm struck in China were driven by reasonable business logic. Collectively, they risk technology leakage. U.S. and UK chip architectures and designs as well as Taiwanese foundries have played a central role in the development of China’s supercomputer programs. Compared to a decade ago, though its capabilities still meaningfully lag the cutting edge, China is substantially less reliant on foreigners to design and produce chips needed in data centers.
Tsinghua Unigroup’s activities were impossible to comprehend from the perspective of business logic. There were too many Chinese state-owned and state-financed “private equity” firms circling the world’s semiconductor companies to describe this as anything other than a government-led effort to seize foreign chip firms. “Call forth the assault,” Xi Jinping had demanded. Zhao, Tsinghua Unigroup, and other government-backed “investment” vehicles were simply following these publicly announced instructions. Amid this frenzied dealmaking, Tsinghua Unigroup announced in 2017 that it had received new “investment”: around $15 billion from the China Development Bank and $7 billion from the Integrated Circuit Industry Investment Fund—both owned and controlled by the Chinese state.
Even when weighing Huawei’s track record of intellectual property theft, the company’s multibillion-dollar R&D spending suggests a fundamentally different ethos than the “copy it” mentality of Soviet Zelenograd, or the many other Chinese firms that have tried to break into the chip industry on the cheap.
The lack of clarity about Ren’s transition from the People’s Liberation Army to Huawei remains puzzling. The company’s complex and opaque ownership structure has also provoked reasonable questions. Huawei executive Ken Hu’s argument to a U.S. congressional inquiry that Ren Zhengfei’s membership in the Chinese Communist Party was just like how “some American businessmen are Democrat or Republican” sounded to U.S. analysts like willful obfuscation of the Communist Party’s role in the company’s governance. Nevertheless, the thesis that Huawei was purpose-built by the Chinese state has never had strong evidence behind it. Huawei’s rise has, however, worked in the interests of the Chinese state, as the company grabbed market share and embedded its equipment in the world’s telecom networks. For many years, despite the warnings of America’s spy agencies, Huawei spread rapidly across the world.
By the end of the 2010s, Huawei’s HiSilicon unit was designing some of the world’s most complex chips for smartphones and had become TSMC’s second-largest customer. Huawei’s phones still required chips from other companies, too, like memory chips or various types of signal processors. But mastering the production of cell phone processors was an impressive feat. America’s near monopoly on the world’s most profitable chip design businesses was under threat. This was more evidence that Huawei was successfully replicating what South Korea’s Samsung or Japan’s Sony had done decades earlier: learning to produce advanced technology, winning global markets, investing in R&D, and challenging America’s tech leaders. Moreover, Huawei seemed uniquely well placed for a new era of ubiquitous computing that would accompany the rollout of the next generation of telecom infrastructure: 5G.
Yet 5G isn’t really about phones—it’s about the future of computing, and therefore, it’s about semiconductors. The “G” in 5G stands for generation. We’ve already cycled through four generations of mobile networking standards, each of which required new hardware on phones and in cell towers. Just as Moore’s Law has let us pack more transistors onto chips, there’s been a steady increase in the number of 1s and 0s flying to and from cell phones via radio waves. 2G phones could send picture texts; 3G phones opened websites; and 4G made it possible to stream video from almost anywhere. 5G will provide a similar leap forward.
It isn’t guaranteed that China will win the race to develop and deploy systems empowered by artificial intelligence, in part because this “race” isn’t about a single technology but about complex systems. The Cold War arms race, it’s worth remembering, wasn’t won by the first country to shoot a satellite into space. Yet China’s capabilities when it comes to AI systems are undeniably impressive. Georgetown University’s Ben Buchanan has noted that a “triad” of data, algorithms, and computing power is needed to harness AI. With the exception of computing power, China’s capabilities may already equal the United States’.
As the fighting power of China’s military has grown, the Pentagon has realized it needs a new strategy. In the mid-2010s, officials like Secretary of Defense Chuck Hagel began speaking about a need for a new “offset,” evoking the effort of Bill Perry, Harold Brown, and Andrew Marshall during the 1970s to overcome the USSR’s quantitative advantage. The U.S. faces the same basic dilemma today: China can deploy more ships and planes than the U.S., especially in theaters that matter, like the Taiwan Strait. “We will never try to match our opponents or our competitors tank for tank, plane for plane, person for person,” declared Bob Work, the former deputy defense secretary who is the intellectual godfather of this new offset, in a clear echo of the logic of the late 1970s. The U.S. military will only succeed, in other words, if it has a decisive technological advantage.
Both the U.S. military and the government’s spy agencies outsource the production of their chips to “trusted foundries.” This is relatively straightforward for many types of analog or radio frequency chips, where the U.S. has world-class capabilities. When it comes to logic chips, though, this poses a dilemma. Intel’s production capabilities are just behind the leading edge, though the company mostly produces chips for its own PC and server businesses. TSMC and Samsung, meanwhile, keep their most cutting-edge fabrication capabilities in Taiwan and South Korea. And a large share of chip assembly and packaging also takes place in Asia. As the Defense Department tries to use more off-the-shelf components to reduce cost, it will buy even more devices from abroad. The military worries that chips fabricated or assembled abroad are more susceptible to tampering, with back doors added or errors written in.
All these efforts to use microelectronics to spur a new “offset” and reestablish a decisive military advantage over China and Russia, however, assume the U.S. will keep its lead in chips. That’s now looking like a risky bet. The era of the “run faster” strategy saw the U.S. fall behind in certain segments of the chipmaking process, most notably in the growing dependence on Taiwan for building advanced logic chips. Intel, which for three decades had been America’s chip champion, has now very clearly stumbled. Many people in the industry think it has fallen decisively behind. Meanwhile, China is pouring billions of dollars into its chip industry while pressuring foreign companies to turn over sensitive technology. For every major chip firm, the Chinese consumer market is far more important a customer than the U.S. government.
“Call forth the assault,” Xi Jinping declared. China’s leaders have identified their reliance on foreign chipmakers as a critical vulnerability. They’ve set out a plan to rework the world’s chip industry by buying foreign chipmakers, stealing their technology, and providing billions of dollars of subsidies to Chinese chip firms. The People’s Liberation Army is now counting on these efforts to help it evade U.S. restrictions, though it can still legally buy many U.S. chips in its pursuit of “military intelligentization.” For its part, the Pentagon has launched its own offset, after admitting that China’s military modernization has closed the gap between the two superpowers’ militaries, especially in the contested waters off China’s coast. Taiwan isn’t simply the source of the advanced chips that both countries’ militaries are betting on. It’s also the most likely future battleground.
PART VIII: THE CHIP CHOKE
By around 2015, deep within the U.S. government, gears slowly began to turn. The government’s trade negotiators saw China’s chip subsidies as a flagrant violation of international agreements. The Pentagon nervously watched China’s efforts to apply computing power to new weapons systems. The intelligence agencies and Justice Department unearthed more evidence of collusion between China’s government and its industries to push out American chip firms. Yet the twin pillars of American tech policy—embracing globalization and “running faster”—were deeply ingrained, not only by the industry’s lobbying, but also by Washington’s intellectual consensus. Moreover, most people in Washington barely knew what a semiconductor was. The Obama administration moved slowly on semiconductors, one person involved in the effort recalled, because many senior officials simply didn’t see chips as an important issue.
America’s technological lead in fabrication, lithography, and other fields had dissipated because Washington convinced itself that companies should compete but that governments should simply provide a level playing field. A laissez-faire system works only if every country plays by the same rules, but many governments, especially in Asia, were deeply involved in supporting their chip industries. U.S. officials nevertheless found it easier to ignore other countries’ efforts to grab valuable chunks of the chip industry, instead choosing to parrot platitudes about free trade and open competition. Meanwhile, America’s position was eroding.
In polite company in Washington and Silicon Valley, it was easier simply to repeat words like multilateralism, globalization, and innovation, concepts that were too vacuous to offend anyone in a position of power. The chip industry itself—deeply fearful of angering China or TSMC—put its considerable lobbying resources behind repeating false platitudes about how “global” the industry had become. These concepts fit naturally with the liberal internationalist ethos that guided officials of both political parties amid America’s unipolar moment. Meetings with foreign companies and governments were more pleasant when everyone pretended that cooperation was win-win. So Washington kept telling itself that the U.S. was running faster, blindly ignoring the deterioration in the U.S. position, the rise in China’s capabilities, and the staggering reliance on Taiwan and South Korea, which grew more conspicuous every year.
The China hawks on the National Security Council concluded that America’s semiconductor industry needed to be saved from itself. Left to the whims of their shareholders and to market forces, chip firms would slowly transfer staff, technology, and intellectual property to China until Silicon Valley was hollowed out. The U.S. needed a stronger export control regime, the China hawks believed. They thought Washington’s discussion of export controls had been hijacked by the industry, letting Chinese firms acquire too much advanced chip design technology and chipmaking machinery. Administration officials pointed to the revolving door between the Commerce Department and the law firms that worked for the chip industry and lobbied against export controls, though the officials who cycled through that door were also among the few people in government who understood the complexity of semiconductor supply chains. Because of this revolving door, Trump administration officials believed, regulations allowed too much technological leakage, weakening America’s position relative to China.
Why did Australian and British cybersecurity experts differ in their assessment of Huawei risk? There’s no evidence of technical disagreements. UK regulators were quite critical of deficiencies in Huawei’s cybersecurity practices, for example. The debate was really about whether China should be stopped from playing an ever-larger role in the world’s tech infrastructure. Robert Hannigan, former head of the UK’s signals intelligence agency, argued that “we should accept that China will be a global tech power in the future and start managing the risk now, rather than pretending the west can sit out China’s technological rise.” Many Europeans also thought China’s technological advance was inevitable and therefore not worth trying to stop. The United States government didn’t agree.
One might have expected the offshoring of chipmaking to have reduced the U.S. government’s ability to restrict access to advanced chip fabrication. It would certainly have been easier to cut off Huawei if all the world’s advanced chipmaking were still based on U.S. soil. But the U.S. still had cards to play, because the offshoring of chip fabrication had coincided with a growing monopolization of the chip industry’s choke points. Nearly every chip in the world uses software from at least one of three U.S.-based companies, Cadence, Synopsys, and Mentor (the last of which is owned by Germany’s Siemens but based in Oregon). Excluding the chips Intel builds in-house, all the most advanced logic chips are fabricated by just two companies, Samsung and TSMC, both located in countries that rely on the U.S. military for their security. Moreover, making advanced processors requires EUV lithography machines produced by just one company, the Netherlands’ ASML, which in turn relies on its San Diego subsidiary, Cymer (purchased in 2013), to supply the irreplaceable light sources in its EUV tools. It’s far easier to control choke points in the chipmaking process when so many essential steps require tools, materials, or software produced by just a handful of firms. Many of these choke points remained in American hands; those that didn’t were mostly controlled by close U.S. allies.
It’s commonly argued that the escalating tech competition with the United States is a “Sputnik moment” for China’s government. The allusion is to the United States’ fear after the launch of Sputnik in 1957 that it was falling behind its rival, which drove Washington to pour funding into science and technology. China certainly faced a Sputnik-scale shock after the U.S. banned sales of chips to firms like Huawei. Dan Wang, one of the smartest analysts of China’s tech policy, has argued that American restrictions have “boosted Beijing’s quest for tech dominance” by catalyzing new government policies to support the chip industry. In the absence of America’s new export controls, he argues, Made in China 2025 would have ended up like China’s previous industrial policy efforts, with the government wasting substantial sums of money. Thanks to U.S. pressure, China’s government may end up providing Chinese chipmakers with more support than they’d otherwise have received. The real debate, then, is whether the U.S. should try to derail China’s growing chip ecosystem, thereby spurring an inevitable counterreaction, or whether it’s smarter simply to invest at home while hoping China’s chip drive peters out.
EUV machines are just one of many tools that are produced via multinational supply chains. Domesticating every part of the supply chain would be impossibly expensive. The global chip industry spends over $100 billion annually on capital expenditures; China would have to replicate this spending in addition to building a base of expertise and facilities that it currently lacks. Establishing a cutting-edge, all-domestic supply chain would take over a decade and cost well over a trillion dollars over that period. This is why, despite the rhetoric, China isn’t actually pursuing an all-domestic supply chain; Beijing recognizes this is simply impossible. China would like a non-U.S. supply chain, but because of America’s heft in the chip industry and the extraterritorial power of its export regulations, a non-American supply chain is also unrealistic, except perhaps in the distant future. What is plausible is for China to reduce its reliance on the United States in certain spheres and to increase its overall weight in the chip industry, weaning itself off as many choke-point technologies as possible.
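The trillion-dollar figure follows directly from the numbers already cited; as a rough back-of-envelope check:

    $100 billion per year in industry-wide capital expenditure × 10+ years ≈ $1 trillion or more

and that is before counting the cost of building the expertise and facilities China currently lacks.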
One of China’s core challenges today is that many chips use either the x86 architecture (for PCs and servers) or the Arm architecture (for mobile devices); x86 is dominated by two U.S. firms, Intel and AMD, while Arm, which licenses its architecture to other companies, is based in the UK. However, there’s now a newer instruction set architecture called RISC-V that is open-source and available to anyone without a licensing fee. The idea of an open-source architecture appeals to many parts of the chip industry: anyone who currently must pay Arm for a license would prefer a free alternative. Moreover, the risk of security defects may be lower, because RISC-V’s openness means more engineers can inspect its details and identify errors. For the same reason, the pace of innovation may be faster, too. These factors explain why DARPA has funded a variety of projects related to developing RISC-V. Chinese firms have also embraced RISC-V, because they see it as geopolitically neutral.
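To make concrete what an openly published instruction set means in practice, here is a minimal illustrative sketch, not drawn from the text: the RISC-V specification freely publishes low-level details such as the exact bit-level encoding of its instructions, which any company may implement in silicon without a license fee. The toy Python decoder below handles one such instruction.

    # Toy decoder for one RISC-V R-type instruction ("add" from the RV32I base set).
    # Illustrative only: the field layout comes from the openly published RISC-V
    # specification, which anyone may implement without paying a license fee.

    def decode_rtype(word: int) -> str:
        """Decode a 32-bit R-type instruction word into a human-readable string."""
        opcode = word & 0x7F             # bits 6..0
        rd     = (word >> 7) & 0x1F      # destination register, bits 11..7
        funct3 = (word >> 12) & 0x7      # bits 14..12
        rs1    = (word >> 15) & 0x1F     # first source register, bits 19..15
        rs2    = (word >> 20) & 0x1F     # second source register, bits 24..20
        funct7 = (word >> 25) & 0x7F     # bits 31..25
        if opcode == 0x33 and funct3 == 0 and funct7 == 0:
            return f"add x{rd}, x{rs1}, x{rs2}"
        return "unrecognized instruction"

    # 0x003100B3 encodes "add x1, x2, x3" under the public encoding rules.
    print(decode_rtype(0x003100B3))      # prints: add x1, x2, x3

Writing a decoder like this is free for any architecture; the difference is that building and selling a chip that implements the Arm or x86 instruction sets requires a license from their owners, whereas implementing RISC-V does not.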
Politicians around the world have misdiagnosed the semiconductor supply chain dilemma. The problem isn’t that the chip industry’s far-flung production processes dealt poorly with COVID and the resulting lockdowns; few industries sailed through the pandemic with so little disruption. The problems that did emerge, notably the shortage of auto chips, were mostly the fault of carmakers’ frantic and ill-advised cancellation of chip orders in the early days of the pandemic, coupled with just-in-time manufacturing practices that leave little margin for error. The car industry, which suffered a several-hundred-billion-dollar hit to revenue, has plenty of reason to rethink how it manages its own supply chains. The semiconductor industry, however, had a banner year. Short of a massive earthquake, a low-probability but non-zero risk, it’s hard to imagine a more severe peacetime shock to supply chains than what the industry has survived since early 2020. The substantial increase in chip production during both 2020 and 2021 is not a sign that multinational supply chains are broken. It’s a sign that they’ve worked.
Nevertheless, governments should think harder about semiconductor supply chains than they used to. The real supply chain lesson of the past few years is not about fragility but about profits and power. Taiwan’s extraordinary ascent shows how a single company, armed with a vision and with government financial support, can remake an entire industry. Meanwhile, U.S. restrictions on China’s access to chip technology demonstrate just how powerful the industry’s choke points are. The rise of China’s semiconductor industry over the past decade, however, is a reminder that these choke points are not infinitely durable. Determined countries and companies can often find ways around them, though doing so is time-consuming and expensive, sometimes extraordinarily so. Technological shifts can erode the efficacy of choke points, too.
As it began to reckon with the concentration of advanced chipmaking in East Asia, the U.S. government convinced both TSMC and Samsung to open new facilities in the U.S., with TSMC planning a new fab in Arizona and Samsung expanding a facility near Austin, Texas. These fabs are partly intended to appease American politicians, though they will also produce chips for defense and other critical infrastructure that the U.S. would prefer to fabricate onshore. However, both companies plan to keep the vast majority of their production capacity, and their most advanced technology, at home. Even promises of subsidies from the U.S. government are unlikely to change this. Among American national security officials, there is growing discussion about whether to use threats of export controls on chip design software and manufacturing equipment to pressure TSMC to roll out its newest process technologies simultaneously in the U.S. and in Taiwan. Alternatively, TSMC could be pressed to commit that every dollar of capital expenditure in Taiwan would be matched by a dollar of capital expenditure at one of its new facilities in Japan, Arizona, or Singapore. Such moves might begin to reduce the world’s reliance on chipmaking in Taiwan. But for now, Washington is unwilling to exert the pressure that would be required, so the entire world’s dependence on Taiwan continues to grow.
The Pentagon’s public reports on Chinese military power have identified multiple ways China could use force against Taiwan. The most straightforward, but least likely, is a D-Day-style invasion, with hundreds of Chinese ships steaming across the Strait and landing thousands of PLA infantrymen on Taiwan’s shores. The history of amphibious invasions is littered with disasters, and the Pentagon judges that such an operation would “strain” the PLA’s capabilities. China would have little difficulty knocking out Taiwan’s airfields and naval facilities as well as electricity and other critical infrastructure before any assault, but even so, it would be a tough fight. Other options would be easier for the PLA to implement, in the Pentagon’s judgment. A partial air and maritime blockade would be impossible for Taiwan to defeat on its own. Even if the U.S. and Japanese militaries joined Taiwan to try to break the blockade, doing so would be difficult: China has powerful weapons systems arrayed along its shores, and a blockade wouldn’t need to be perfectly effective to strangle the island’s trade. Ending a blockade would require Taiwan and its friends, mainly the U.S., to disable hundreds of Chinese military systems sitting on Chinese territory, and a blockade-busting operation could easily spiral into a bloody great power war. Even without a blockade, a Chinese air and missile campaign alone could defang Taiwan’s military and shut down its economy without placing a single pair of Chinese boots on the ground. Within a couple of days, absent immediate U.S. and Japanese aid, Chinese air and missile forces could probably disable key Taiwanese military assets such as airfields, radar facilities, and communications hubs, without severely damaging the island’s productive capacity.
China’s ruling party has no higher goal than asserting control over Taiwan. Its leaders constantly promise to do so. The government has passed an “Anti-Secession Law” envisioning the potential use of what it calls “non-peaceful means” in the Taiwan Strait. It has invested heavily in the types of military systems, like amphibious assault vehicles, needed for a cross-strait invasion, and it exercises these capabilities regularly. Analysts uniformly agree that the military balance in the Strait has shifted decisively in China’s direction. Long gone are the days, as during the 1996 Taiwan Strait crisis, when the U.S. could simply sail an entire aircraft carrier battle group through the Strait to force Beijing to stand down. Today such an operation would be fraught with risk for the U.S. warships, and Chinese missiles threaten not only U.S. ships around Taiwan but also bases as far away as Guam and Japan. The stronger the PLA gets, the less likely the U.S. is to risk war to defend Taiwan. If China were to try a campaign of limited military pressure on Taiwan, it’s more likely than ever that the U.S. would look at the correlation of forces and conclude that pushing back isn’t worth the risk.
The world economy and the supply chains that crisscross Asia and the Taiwan Strait are predicated on this precarious peace. Every company that’s invested on either side of the Taiwan Strait, from Apple to Huawei to TSMC, is implicitly betting on peace. Trillions of dollars are invested in firms and facilities within easy missile shot of the Taiwan Strait, from Hong Kong to Hsinchu. The world’s chip industry, as well as the assembly of all the electronic goods chips enable, depends more on the Taiwan Strait and the South China coast than on any other chunk of the world’s territory except Silicon Valley. And business as usual is far less fraught in California’s tech epicenter, since much of Silicon Valley’s value lies in knowledge that could be relocated relatively easily in case of war or earthquake.