An epic account of the decades-long battle to control what has become the world’s most critical resource—semiconductor technology—tracing how chips reshaped economic, military, and geopolitical power from the Cold War to today’s U.S.–China rivalry.
World War II—described by Japanese soldiers as a "typhoon of steel"—framed the early lives of three future semiconductor titans: Akio Morita in Japan, Morris Chang in China, and Andy Grove in Hungary. Each experienced the war's devastation firsthand: Morita dodged front-line service by working in a naval lab while B-29s reduced Japanese cities to rubble; Chang spent his adolescence fleeing successive Japanese and Chinese armies; Grove, a Jewish child in Budapest, survived Nazi persecution and Soviet "liberation."
The conflict's decisive factor was industrial capacity. U.S. factories, orchestrated by the War Production Board, overwhelmed the Axis with tanks, ships, planes, artillery, and materiel shipped worldwide. Yet even amid this mass production, new and more precise technologies were emerging. Wartime research labs produced rockets, radar, and—most portentously—atomic weapons, hinting that future victories might hinge on scientific innovation, not just steel output.
Morita glimpsed this shift while helping develop rudimentary heat-seeking missiles; he learned that electronic computation—then an embryonic idea—could let weapons identify and strike targets autonomously. Mechanical and human "computers" had long handled arithmetic for bureaucracies and bomb-sights, but their speed and flexibility were limited. The pressing need for faster calculations pushed engineers toward electronic solutions.
Vacuum-tube computers such as ENIAC, completed in 1945 to compute artillery trajectories, proved digital machines could multiply hundreds of numbers per second and be reprogrammed for varied tasks. Yet they remained impractical behemoths: ENIAC's 18,000 glowing tubes filled a room, attracted moths, and failed every couple of days. Until scientists found a smaller, cheaper, more reliable switch, computers would stay expensive, fragile curiosities—useful for code-breaking or ballistics, but far from the ubiquitous engines of progress they would later become.
In the mid-1940s, Bell Labs physicist William Shockley theorized that “doping” semiconductors such as silicon or germanium could create a solid-state switch far more reliable than vacuum tubes, yet his early experiments failed because contemporary instruments could not detect the tiny currents involved. Fellow researchers Walter Brattain and John Bardeen refined his concept in December 1947, delicately pressing two gold contacts onto germanium and successfully controlling current flow—an achievement that produced the world’s first working transistor and validated Shockley’s core idea.
Although AT&T initially viewed the device mainly as a low-power amplifier for telephone signals, Shockley, furious at being eclipsed, isolated himself and engineered an even better three-layer “sandwich” design whose middle layer could switch much larger currents in the outer layers. When Bell Labs introduced the transistor publicly in June 1948, it attracted little media attention, yet its tiny, rugged architecture soon enabled mass production by the thousands, then billions, becoming the fundamental on-off building block that would eventually automate computation and reshape modern technology.
Chapter Three discusses the critical developments and innovators behind the integrated circuit (IC), primarily focusing on the roles played by William Shockley, Jack Kilby, and Robert Noyce.
Initially, the transistor, invented at Bell Labs by Brattain, Bardeen, and Shockley, faced practical challenges for mass production and commercialization. While Brattain and Bardeen pursued purely academic careers after their Nobel Prize, Shockley aimed to achieve wealth and broader acclaim. He founded Shockley Semiconductor in 1955 in Mountain View, California, envisioning transistors as replacements for vacuum tubes. Although transistors were beginning to replace tubes in computers, wiring thousands of transistors created significant complexity, limiting practical use.
In the summer of 1958, Jack Kilby at Texas Instruments conceived a groundbreaking idea. While alone at the lab during a vacation period, Kilby explored ways to simplify transistor wiring. His innovative solution involved integrating multiple electronic components onto a single semiconductor substrate, effectively inventing the integrated circuit (IC). Colleagues quickly recognized the revolutionary implications of Kilby's "chip."
Around the same time, another major development occurred in California. Due to Shockley's difficult managerial style, eight talented engineers departed from Shockley Semiconductor, forming Fairchild Semiconductor in Palo Alto—later recognized as foundational for Silicon Valley. Notable among them was Bob Noyce, a visionary leader adept at bridging technological advances with commercial opportunities. At Fairchild, Swiss physicist Jean Hoerni improved transistor reliability with the "planar process," which embedded transistors directly into the silicon and shielded them from contamination, unlike the earlier exposed "mesa" structure that was susceptible to impurities.
Noyce leveraged Hoerni's planar method to create his version of an integrated circuit, which involved depositing metallic conducting lines directly onto silicon chips, avoiding Kilby's method of connecting separate transistors via wires. This approach significantly enhanced reliability and facilitated further miniaturization.
Ultimately, while Kilby and Noyce independently developed versions of integrated circuits, Noyce’s planar design, characterized by built-in wiring and enhanced reliability, became crucial for commercial success. Initially expensive, these integrated circuits promised tremendous future potential through miniaturization, greater reliability, lower power consumption, and broader applicability—provided a sufficient market demand could be developed.
Chapter Four, "Liftoff," explores how Cold War competition, especially the space and missile races between the U.S. and the Soviet Union, provided critical early markets for integrated circuits (ICs) produced by Fairchild Semiconductor and Texas Instruments (TI).
The chapter opens with the launch of Sputnik by the Soviet Union, creating urgency within the United States to advance rocket and computer technology rapidly. The space race intensified further when Yuri Gagarin became the first person in space. In response, President John F. Kennedy announced plans to land Americans on the moon, spurring massive demand for advanced electronics.
Fairchild Semiconductor, co-founded by Robert Noyce and Gordon Moore, quickly found a crucial customer in NASA, which was tasked with developing the Apollo spacecraft. Initially, Apollo engineers at MIT's Instrumentation Lab doubted that traditional transistor-based computers could meet the demanding size, weight, and energy-efficiency criteria of space missions. After evaluating integrated circuits, MIT chose Fairchild's ICs for their superior reliability, smaller size, lower power consumption, and reduced weight.
Fairchild's MicroLogic chips dramatically exceeded expectations. The Apollo Guidance Computer, utilizing Noyce's chips, was compact—just 70 pounds and about one cubic foot in size—vastly smaller and lighter compared to earlier wartime computers like the ENIAC. NASA’s substantial purchases transformed Fairchild into a thriving business with skyrocketing revenues and growing credibility. The success of ICs in the challenging conditions of outer space significantly bolstered the market's trust in this emerging technology.
Parallel to Fairchild's success, Jack Kilby and Pat Haggerty at Texas Instruments also sought major customers. While Noyce leveraged NASA's Apollo mission, TI pursued military opportunities through Haggerty's strategic insight and connections. A renowned visionary leader, Haggerty quickly recognized the military's potential as an IC customer, driven by Cold War tensions, nuclear standoffs, and the arms race. TI ultimately secured a crucial contract to provide the guidance computer for the Air Force's Minuteman II missile, drastically reducing weight and increasing computational performance compared to earlier transistor-based designs.
The Minuteman II contract dramatically increased TI’s integrated circuit sales, scaling from mere dozens to hundreds of thousands of units. By 1965, the Minuteman program represented 20% of all integrated circuits sold, turning military contracts into an essential early market for TI’s chips.
In essence, the Cold War-driven space and missile programs provided both Fairchild and TI with pivotal early validation and substantial sales. These contracts, driven by national security fears and space-race ambitions, accelerated the acceptance, improvement, and mass production of integrated circuits, ultimately solidifying ICs as foundational technology for modern electronics.
Chapter Five, "Mortars and Mass Production," details how integrated circuits (ICs) moved from experimental inventions to reliable, mass-produced devices through significant innovations in manufacturing methods, primarily photolithography.
The chapter begins by introducing Jay Lathrop, an MIT graduate who, while working on military applications, discovered a technique to miniaturize transistor manufacturing. At Texas Instruments (TI), Lathrop developed photolithography—a process using Kodak's photoresist chemicals and an inverted microscope to project precise patterns of light onto semiconductor materials. This method replaced the cumbersome and imprecise practice of manually applying wax globs. Photolithography enabled smaller, more accurate transistors, creating the foundation for mass production.
Pat Haggerty and Jack Kilby at TI quickly recognized photolithography's potential. To utilize Lathrop’s innovation, TI developed proprietary chemical formulations, precision masks, and ultra-pure silicon wafers in-house—none of which were commercially available at the required quality. Through relentless experimentation and meticulous quality control, TI dramatically increased manufacturing yields. Engineers like Mary Ann Potter, TI’s first female physics graduate from Texas Tech, conducted rigorous, trial-and-error experimentation and complex manual data analysis, painstakingly improving production processes.
Another key figure at TI was Morris Chang, a Shanghai-born engineer educated at Harvard and MIT, who methodically improved transistor yields from nearly zero to 25% by systematically adjusting temperature and chemical processes. His success in manufacturing transistors for IBM positioned him to eventually oversee TI’s entire IC business.
Simultaneously, Fairchild Semiconductor faced similar mass production challenges. Bob Noyce swiftly recognized the importance of Lathrop’s photolithography, hiring Lathrop’s colleague James Nall to implement the technique at Fairchild. Hungarian refugee Andy Grove, a brilliant chemical engineer, joined Fairchild and became instrumental in refining production processes, ensuring high-quality IC manufacturing.
The chapter emphasizes that despite fundamental breakthroughs by Shockley, Bardeen, Brattain, Kilby, and Noyce, the practical rise of the semiconductor industry rested largely on engineering innovation and manufacturing ingenuity. Universities provided theoretical knowledge, but scaling the chip industry depended on meticulous, iterative improvements and mass production techniques developed in industrial labs.
By the mid-1960s, thanks to photolithography and production improvements, Fairchild and TI transformed ICs from niche, military-focused inventions into reliable, mass-producible components. This laid the groundwork for broader market applications, ultimately reshaping global electronics and computing industries.
Chapter Six, "I Want to Get Rich," chronicles the transformation of the integrated circuit (IC) industry from a defense-driven market to one centered on civilian applications and commercial growth, primarily through the innovative visions and strategies of Fairchild Semiconductor's leaders, particularly Bob Noyce and Gordon Moore.
Initially, the U.S. military and space programs were the predominant customers for ICs, driving the industry's early growth. By the mid-1960s, Fairchild Semiconductor was highly successful supplying chips for space and defense programs such as the Apollo spacecraft and the Minuteman II missile. Although military and space contracts accounted for most early sales—up to 95% in 1965—Noyce had always envisioned a vastly larger consumer-driven market, believing that civilian applications would ultimately dwarf military demand. He strategically kept Fairchild's R&D largely independent from government control, deliberately avoiding heavy reliance on military research contracts and preferring to set his own technological and business priorities.
Noyce believed the defense industry, although lucrative, was bureaucratic, rigid, and slow in adopting new technologies. Conversely, civilian markets were seen as more dynamic, open-ended, and potentially larger. To cultivate these civilian markets, Noyce dramatically slashed chip prices, gambling that affordability would stimulate widespread adoption and usage. Fairchild chips that initially cost $20 per piece dropped to as low as $2 or less, sometimes even below manufacturing cost, to encourage experimentation and adoption by commercial customers.
The gamble paid off spectacularly. As chip prices fell, civilian computing rapidly expanded. By the mid-1960s, integrated circuits became central to computers produced by major companies like Burroughs, which placed massive orders dwarfing military purchases. By 1968, the civilian computer industry consumed as many chips as military projects did, with Fairchild dominating 80% of this rapidly growing commercial market. Moore famously predicted exponential growth in computing power—known as Moore's Law—which became the driving force of innovation in the semiconductor industry. The relentless drive toward more powerful, cheaper chips not only increased technical capabilities but also vastly expanded market potential.
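To make the compounding behind Moore's prediction concrete, here is a minimal sketch assuming a fixed doubling period of roughly two years; the starting count, starting year, and doubling interval are illustrative assumptions rather than figures from the book.

```python
# Illustrative compounding under Moore's Law. The doubling period, starting
# count, and years below are assumed round numbers for illustration only.

def projected_transistors(start_count: float, start_year: int,
                          year: int, doubling_years: float = 2.0) -> float:
    """Project a component count forward under a fixed doubling period."""
    return start_count * 2 ** ((year - start_year) / doubling_years)

if __name__ == "__main__":
    # A hypothetical chip with 64 components in 1965, doubling every two years.
    for year in (1965, 1975, 1985, 1995):
        print(year, f"{projected_transistors(64, 1965, year):,.0f}")
```

Even from a tiny starting point, a fixed doubling period yields roughly a thousandfold increase over two decades, which is why falling prices and rising capability reinforced each other.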
However, Fairchild's rapid commercial success led to internal challenges. Owned by an East Coast multimillionaire who opposed employee stock options, Fairchild struggled to retain top talent. This rigid ownership structure contrasted sharply with emerging Silicon Valley norms, where financial incentives like equity stakes motivated employees. Dissatisfaction among Fairchild’s staff, notably their desire to personally capitalize on their technical innovations, began fueling exits. Employees increasingly left Fairchild to join or create new startups, drawn by lucrative financial opportunities elsewhere.
Even Noyce himself began questioning his future at Fairchild. Thus, alongside scientific and technical breakthroughs, the quest for financial reward emerged as a fundamental driving force in Silicon Valley. An exit questionnaire succinctly encapsulated this trend: one departing Fairchild employee candidly stated, "I want to get rich." This ethos of entrepreneurial ambition and financial aspiration would become a central hallmark of the semiconductor industry and Silicon Valley, propelling continuous technological advancement and economic growth.
Chapter Seven, "Soviet Silicon Valley," describes how the Soviet Union attempted to emulate America's rapidly developing semiconductor industry, creating its own semiconductor center, Zelenograd, driven by Cold War competition, espionage, and state planning.
In 1959, Anatoly Trutko, a Soviet semiconductor engineer, arrived at Stanford University as part of early Cold War student exchanges. Despite concerns about Soviet technological espionage, Trutko studied under William Shockley, illustrating the paradoxical openness of the United States despite intense geopolitical rivalry. This openness inadvertently benefited Soviet electronics development, as evidenced by rapid translations and distribution of Shockley's semiconductor textbook within the USSR.
The chapter highlights the Soviet government's high-level commitment to electronics, driven by leader Nikita Khrushchev's obsession with competing with and surpassing American technological supremacy. Khrushchev, unfamiliar with technology himself but eager for prestige, heavily invested in Soviet electronics after being convinced by senior officials like Alexander Shokin about microelectronics' strategic importance. Shokin leveraged Khrushchev's competitive drive to secure significant resources for semiconductor projects.
Central to the Soviet efforts were espionage activities conducted by Joel Barr and Alfred Sarant, American engineers and former members of Julius Rosenberg’s spy ring who defected to the USSR during FBI crackdowns. Though they lacked deep expertise in computing, their status as successful spies gave them considerable credibility and resources within the Soviet bureaucracy. Together, they persuaded Khrushchev of the necessity of creating a dedicated Soviet semiconductor research and manufacturing hub.
In May 1962, Khrushchev visited the Leningrad-based Special Design Bureau of the Electronics Industry No. 2, where Barr and Sarant demonstrated advanced Soviet microelectronics, including miniature radios and rudimentary computers. Impressed, Khrushchev endorsed their vision of a futuristic city entirely dedicated to semiconductor research and production, envisioning it as a grand socialist project symbolizing Soviet scientific prowess.
Thus, Zelenograd was founded outside Moscow—a meticulously planned Soviet "Silicon Valley" equipped with laboratories, factories, housing, and educational institutions such as the Moscow Institute of Electronic Technology. Engineers like Yuri Osakyan worked enthusiastically on semiconductor projects, including producing integrated circuits. These young Soviet scientists experienced an exciting atmosphere of technological optimism, fueled by significant government support and national pride.
However, while Zelenograd appeared outwardly similar to California's Silicon Valley—modern, scientifically ambitious, and full of talented engineers—it operated under strict central planning and lacked the entrepreneurial dynamism, financial incentives, and market-driven innovation of its American counterpart. Nonetheless, this Soviet effort to replicate American semiconductor success highlights the global recognition of Silicon Valley’s strategic importance, marking semiconductors as critical elements of Cold War competition and international technological advancement.
Chapter Eight, "Copy It," details how the Soviet semiconductor industry sought to replicate American technology through espionage and imitation, and why these strategies failed, resulting in lasting technological inferiority relative to the United States.
In the early 1960s, Soviet bureaucrat Alexander Shokin instructed Soviet engineers, including Boris Malin, who had returned from studies in the U.S. carrying a Texas Instruments SN-51 integrated circuit, to replicate American chips exactly. Soviet scientists strongly resented the "copy-it" directive, believing themselves equally capable theoretically. Indeed, the USSR had world-class physicists and notable semiconductor research—later exemplified when Russian scientist Zhores Alferov shared the 2000 Nobel Prize in Physics with Jack Kilby.
Despite this theoretical prowess, the Soviets fundamentally misunderstood what fueled the West's semiconductor success. While replicating nuclear weapons had been straightforward due to their limited numbers, semiconductor manufacturing relied on intricate mass-production processes. American engineers at firms like Fairchild and Texas Instruments (TI) continuously improved these complex production methods through rigorous trial-and-error experimentation. Soviet espionage could procure individual chips, but it failed to capture the essential tacit knowledge—specialized, experiential details crucial to manufacturing precision and reliability. The West’s rapid technological progress, encapsulated by Moore's Law, meant designs and production processes evolved constantly, quickly rendering stolen technology obsolete.
The Soviets also struggled with severe practical constraints. They lacked sophisticated industrial infrastructure and faced strict Western export controls enforced by the Coordinating Committee for Multilateral Export Controls (COCOM), limiting their access to advanced tools, chemicals, and production equipment. Attempts to circumvent these restrictions through neutral countries were inefficient and unreliable, resulting in consistently inferior materials and equipment. Consequently, Soviet production suffered from quality and purity issues, sharply reducing chip yields and reliability.
Beyond these practical difficulties, the top-down, militaristic structure of the Soviet semiconductor industry stifled innovation. Minister Shokin rigidly dictated technical direction, causing engineers like Yuri Osakyan—who created a Soviet integrated circuit—to work in obscurity under heavy secrecy. Advancement within Soviet industry depended on bureaucratic compliance rather than creative technological breakthroughs or market-oriented innovation. Furthermore, the USSR’s obsessive "copy-it" mentality bizarrely made Soviet semiconductor innovation reliant on, and always lagging behind, developments originating in America.
Ultimately, despite their dedicated efforts, Soviet semiconductor initiatives like Zelenograd—a planned "Soviet Silicon Valley"—were doomed by structural limitations: inadequate access to advanced manufacturing inputs, a rigid bureaucracy suppressing innovation, reliance on outdated espionage-driven replication, and a fundamentally flawed understanding of the rapid, iterative, and experiential nature of semiconductor manufacturing innovation in Silicon Valley. The Soviet chip industry thus remained perpetually behind, inadvertently reinforcing American technological leadership.
Chapter Nine, "The Transistor Salesman," explores how Japan rose rapidly as a global electronics powerhouse, intricately integrated into America's semiconductor ecosystem, driven by strategic Cold War policy, visionary business leadership, and innovative consumer marketing.
The chapter begins with a symbolic anecdote: in 1962, Japanese Prime Minister Hayato Ikeda gifted French President Charles de Gaulle a Sony transistor radio. De Gaulle dismissively labeled Ikeda a mere "transistor salesman," failing to recognize the enormous significance electronics would soon have globally.
After World War II, U.S. strategy intentionally encouraged Japan's technological growth, aiming to rebuild Japan economically and bind it tightly into an American-led global system. Despite initial American intentions to dismantle Japan’s high-tech industries, U.S. policy soon shifted toward supporting and guiding Japan’s postwar technological recovery. This was viewed as crucial to the Cold War effort—integrating Japan economically and politically within America's sphere of influence.
Japan eagerly embraced semiconductors. Japanese physicists like Makoto Kikuchi closely followed developments at Bell Labs. Access to American scientific journals and direct contact with U.S. physicists (such as John Bardeen’s celebrated Tokyo visit) enabled Japanese researchers to rapidly absorb transistor technology.
Simultaneously, visionary entrepreneurs like Akio Morita, co-founder of Sony, transformed Japan into a global consumer electronics giant. Morita, who recognized early the transistor's transformative potential, acquired licensing agreements directly from Bell Labs. While American firms like Texas Instruments initially fumbled opportunities in consumer transistor radios, Sony innovated rapidly, seizing leadership through better product design, marketing, and intuitive understanding of consumer desires. Morita famously believed consumers "do not know what is possible, but we do," positioning Sony as an innovative industry leader that shaped consumer demand.
Sony’s transistor radios and subsequent Japanese breakthroughs like Sharp's calculators quickly dominated global markets. American semiconductor firms willingly shared technology, believing Japan lagged far behind technically and posed minimal threat. Instead, Japan leveraged American semiconductor technology to achieve manufacturing excellence, product innovation, and rapid commercial success, particularly in consumer electronics.
This dynamic resulted in a complementary "semiconductor symbiosis": American firms remained dominant in advanced chip design and corporate computing, while Japanese companies excelled at mass-producing consumer electronics. By the 1960s, Japan overtook the U.S. in discrete transistor manufacturing, fueling explosive export growth—from $600 million in electronics exports in 1965 to $60 billion two decades later.
Strategically, the U.S. tolerated and encouraged Japanese competition despite domestic complaints about foreign imports, viewing Japan’s economic integration as vital for Cold War stability. U.S. policymakers argued that a prosperous, technologically advanced Japan was essential, reasoning that economic hardship would push Japan toward Communist influence. Thus, U.S. policy actively supported Japan’s semiconductor and electronics industries, ensuring continuous flow of American technology and investment.
Significantly, Japanese executives reciprocated cooperation. Morita, recognizing mutual benefits, notably assisted Texas Instruments in overcoming regulatory obstacles to build a semiconductor plant in Japan—solidifying closer U.S.-Japan business ties. These collaborative exchanges intertwined the two nations’ economies even more deeply, precisely aligning with American geopolitical aims.
Ultimately, Japan’s ascent as a global economic force hinged upon semiconductor innovation and the strategic vision of entrepreneurs like Morita. Japan's successful rise, once derided by de Gaulle, illustrated how selling transistors—representing broader semiconductor-driven economic growth—granted Japan enormous global influence and transformed it from a war-ravaged nation into an economic superpower closely bound to American geopolitical interests.
Chapter Ten, "Transistor Girls," examines how gender dynamics, labor practices, globalization, and cost considerations intersected to dramatically transform semiconductor production, driving the industry’s shift to Asian manufacturing hubs during the 1960s.
The chapter opens with an anecdotal reference to "The Transistor Girls," a sensationalized 1964 Australian novel depicting Asian female electronics workers through orientalist stereotypes. The real story, however, involves the deliberate recruitment of low-wage, mostly female assembly line workers across Asia, crucial in making integrated circuits affordable, enabling the rapid growth of semiconductor consumption predicted by Moore’s Law.
Fairchild Semiconductor's quest for productivity and profitability played a central role in this shift. Charlie Sporck, a tough and efficiency-driven manager, joined Fairchild after being pushed out of General Electric (GE) amid resistance from unionized workers. At Fairchild, Sporck maintained his hard line against unions and managed operations with a disciplined, productivity-focused approach. Unlike East Coast electronics firms dominated by men, Fairchild and other California chipmakers recruited mostly women for assembly-line positions, believing they had smaller hands and better dexterity for delicate tasks. Crucially, these female workers were also seen as easier to manage and less likely to unionize, and they could be paid significantly lower wages.
As semiconductor demand exploded, Fairchild struggled to find sufficient affordable labor within the United States. Initially, the company expanded to cheaper domestic locations—such as rural Maine and a Navajo reservation in New Mexico—but even these areas were comparatively costly. Bob Noyce, a Fairchild co-founder, then pointed Sporck toward Hong Kong, where wages were approximately one-tenth of U.S. levels. Fairchild opened an assembly plant there in 1963, becoming the first American semiconductor firm to move part of its manufacturing offshore to Asia. The shift dramatically lowered production costs while improving output, as local workers reportedly outperformed their American counterparts in speed and reliability.
Success in Hong Kong led Fairchild to explore even cheaper locations across Asia, rapidly expanding assembly plants to Singapore, Malaysia, Taiwan, and South Korea. These regions offered remarkably lower labor costs (as low as $0.10 to $0.15 per hour), weak or suppressed union activity, and eager local governments welcoming foreign investment. The availability of vast pools of low-cost labor ensured that Asian manufacturing rapidly became the core of semiconductor production, cementing a global supply chain decades before "globalization" became a common term.
Thus, while foreign policy strategists saw potential ideological threats from Communist China’s proximity, chip industry executives like Sporck perceived these Asian regions primarily as ideal capitalist production environments, free from labor unrest and expensive wages. Ultimately, the chapter underscores how gendered labor recruitment, anti-union practices, cost reduction, and global economic forces drove the semiconductor industry’s offshoring revolution, laying foundations for modern electronics manufacturing networks centered firmly in Asia.
Chapter Eleven, "Precision Strike," explores how integrated circuits revolutionized modern warfare through the development of precision-guided munitions during the Vietnam War, focusing on the pivotal role played by Texas Instruments (TI).
In the early stages of the Vietnam War, U.S. bombing strategies relied heavily on sheer volume, notably exemplified by the vast and largely ineffective Operation Rolling Thunder. Massive tonnages of bombs were dropped, but the majority missed their targets due to inadequate accuracy, rendering these bombings militarily ineffective. This inefficiency drove the U.S. military to seek smarter, more precise weapons.
Initially, guided missiles and bombs relied on outdated vacuum tubes, which malfunctioned frequently under the harsh environmental and combat conditions. The Sparrow missile, widely used in Vietnam, was notoriously unreliable due to its vacuum-tube-based radar guidance, with a failure rate of roughly 66%. Bombs dropped from aircraft had an average miss distance of about 420 feet, rendering precise targeting of small or mobile targets impossible.
Texas Instruments engineer Weldon Word saw microelectronics as the solution. Word envisioned a future where advanced sensors and integrated circuits could dramatically improve the accuracy of munitions, turning science fiction into reality. His strategy centered on making weapons simpler, cheaper, and reliable enough for widespread training and use.
In June 1965, Word met Colonel Joe Davis at Eglin Air Force Base, who showed him the Thanh Hoa Bridge—a critical North Vietnamese target heavily bombed but never destroyed due to inaccuracy. Davis challenged Word and TI to develop a weapon that could hit such strategic targets reliably.
Using TI’s semiconductor expertise, Word's team designed an innovative laser-guided bomb. They started with a standard 750-pound M117 bomb, equipped it with movable wings for steering, and embedded a simple semiconductor-based laser guidance system. The system worked by reflecting a laser off a target onto a silicon sensor divided into four quadrants; any deviation from the laser center point activated circuitry, adjusting the wings to correct the bomb's trajectory.
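The quadrant scheme lends itself to a compact illustration. Below is a minimal sketch of the steering idea described above; the sensor model, deadband, and simple bang-bang correction rule are invented for illustration and should not be read as TI's actual Paveway control law.

```python
# Minimal sketch of four-quadrant laser guidance: the reflected laser spot
# falls on a silicon sensor split into four quadrants, and any imbalance
# between quadrants drives a corrective fin command that steers the bomb
# back toward the spot. Thresholds and the bang-bang rule are assumptions.

from dataclasses import dataclass


@dataclass
class QuadrantReading:
    """Relative laser energy measured on each quadrant of the seeker."""
    upper_left: float
    upper_right: float
    lower_left: float
    lower_right: float


def fin_command(q: QuadrantReading, deadband: float = 0.05) -> tuple[str, str]:
    """Return (pitch, yaw) corrections that re-center the laser spot."""
    up = q.upper_left + q.upper_right
    down = q.lower_left + q.lower_right
    left = q.upper_left + q.lower_left
    right = q.upper_right + q.lower_right
    total = up + down
    pitch, yaw = "hold", "hold"
    if total > 0:
        # Steer toward whichever half of the sensor sees more laser energy,
        # ignoring imbalances smaller than the deadband.
        if (up - down) / total > deadband:
            pitch = "pitch up"
        elif (down - up) / total > deadband:
            pitch = "pitch down"
        if (right - left) / total > deadband:
            yaw = "yaw right"
        elif (left - right) / total > deadband:
            yaw = "yaw left"
    return pitch, yaw


if __name__ == "__main__":
    # Spot sitting high and to the right of center: command up/right corrections.
    print(fin_command(QuadrantReading(0.2, 0.4, 0.1, 0.3)))  # ('pitch up', 'yaw right')
```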
This straightforward design allowed rapid development and high reliability. After successful tests, the laser-guided bomb, named "Paveway," was deployed operationally. On May 13, 1972, American aircraft dropped laser-guided bombs on the Thanh Hoa Bridge, finally destroying it after 638 previous unsuccessful attempts. Other strategic targets were similarly neutralized, demonstrating unprecedented bombing accuracy.
Despite these technological breakthroughs, the Vietnam War’s guerrilla nature meant aerial precision bombing alone couldn’t secure victory, as demonstrated by America's eventual withdrawal. However, the development of microelectronics-based precision munitions had transformative implications, revolutionizing modern warfare. Few at the time grasped the full significance, but TI’s laser-guided bombs introduced a new military paradigm—precise targeting enabled by integrated circuits—that profoundly shaped subsequent military strategies and operations worldwide.
Chapter Twelve, "Supply Chain Statecraft," details how semiconductors reshaped geopolitical alliances, strengthened U.S. ties in Asia, and stabilized economies through strategic production networks built around American chip companies.
The chapter centers on Texas Instruments (TI) and its executive Mark Shepherd, alongside TI's Morris Chang, exploring their decisions to move chip assembly operations into Asia during the late 1960s and 1970s. Although Shepherd was initially skeptical of Taiwan after a disastrous first visit and disagreements over intellectual property with Taiwan's economics minister, K.T. Li, he and TI soon recognized mutual strategic interests.
Taiwan was acutely aware of its geopolitical vulnerability. As America's commitment in Vietnam waned, Taiwan and other anti-communist Asian nations feared abandonment. Recognizing this vulnerability, K.T. Li strategically embraced semiconductor manufacturing as an economic and security solution. Attracting American semiconductor investment meant creating economic growth, transferring valuable technological knowledge, and cementing strong, sustainable ties with the United States, which Taiwan saw as essential for its survival.
This strategic alignment paid off when Texas Instruments opened its first Taiwanese chip facility in 1969. Taiwan quickly became a critical production center, shipping billions of semiconductor devices by the 1980s. Similar patterns unfolded across the region: Singapore, under leader Lee Kuan Yew, pursued chip assembly aggressively as a strategy to alleviate unemployment, inviting major U.S. semiconductor companies. Malaysia, Hong Kong, South Korea, and the Philippines followed suit, creating widespread semiconductor assembly networks throughout Asia.
Semiconductor manufacturing transformed these countries economically and politically. Electronics production brought stable employment opportunities, dramatically improving local economies and mitigating radical political pressures that arose from urbanization and rural migration. In Singapore, electronics became a cornerstone industry, making up a significant share of GDP and employment. Malaysia and Hong Kong saw similar benefits, with semiconductor factories offering essential jobs for urbanizing populations.
Strategically, this semiconductor-driven supply chain became a key part of U.S. foreign policy, integrating Asian economies into American-led global systems and reinforcing alliances. Even as American military presence decreased post-Vietnam, these robust economic connections persisted, solidifying relationships that military bases alone could not guarantee. By entwining the economic futures of Taiwan and other Asian states with U.S. semiconductor companies, a lasting form of economic and diplomatic security was achieved.
Ultimately, "Supply Chain Statecraft" illustrates how chip manufacturing became a potent geopolitical tool. The semiconductor industry's expansion into Asia provided economic prosperity, political stability, and strategic security—deepening U.S.-Asia integration, safeguarding American interests, and permanently altering global economic and diplomatic relations.
Chapter Thirteen, "Intel's Revolutionaries," chronicles how the founding of Intel by Bob Noyce and Gordon Moore in 1968 fundamentally transformed computing and set the stage for the modern digital revolution.
Amid global political upheaval in 1968, the Palo Alto Times quietly reported a seemingly minor event: Noyce and Moore's departure from Fairchild Semiconductor to create Intel, short for "integrated electronics." Though appearing modest, this move became revolutionary. Intel aimed to dramatically expand semiconductor production, foreseeing an enormous global demand for transistors. Noyce and Moore envisioned semiconductors becoming so ubiquitous and inexpensive that society would grow deeply dependent on them.
Intel's initial strategic focus was on memory chips, particularly dynamic random-access memory (DRAM). At the time, computers relied on "magnetic core memory," composed of tiny hand-strung magnetic rings that stored binary data. This technique, however, had severe size limitations and wasn't scalable. IBM engineer Robert Dennard conceived a vastly superior alternative—DRAM—which paired a silicon transistor with a capacitor whose charge had to be periodically refreshed to retain data. Recognizing DRAM's massive potential, Intel quickly embraced the technology and came to dominate the computer memory market.
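To make the contrast with core memory concrete, here is a toy sketch of the one-transistor, one-capacitor idea behind a DRAM cell; the leak rate, read threshold, and timings are arbitrary illustration values, not real device parameters.

```python
# Toy model of a DRAM cell: a capacitor stores a bit as charge, the charge
# leaks away over time, and a periodic refresh rewrites it. All constants
# here are arbitrary illustration values, not real device parameters.

class DramCell:
    LEAK_PER_MS = 0.02      # fraction of remaining charge lost per millisecond (assumed)
    READ_THRESHOLD = 0.5    # charge above this level reads back as a 1 (assumed)

    def __init__(self) -> None:
        self.charge = 0.0

    def write(self, bit: int) -> None:
        self.charge = 1.0 if bit else 0.0

    def tick(self, ms: float) -> None:
        # Charge leaks away between refreshes.
        self.charge *= (1.0 - self.LEAK_PER_MS) ** ms

    def read(self) -> int:
        return 1 if self.charge > self.READ_THRESHOLD else 0

    def refresh(self) -> None:
        # Rewrite whatever value the cell currently holds, restoring full charge.
        self.write(self.read())


if __name__ == "__main__":
    cell = DramCell()
    cell.write(1)
    cell.tick(20)       # 20 ms without refresh: charge decays but still reads as 1
    print(cell.read())  # -> 1
    cell.refresh()      # periodic refresh restores full charge
    cell.tick(60)       # wait too long without refreshing and the bit is lost
    print(cell.read())  # -> 0
```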
Memory chips, unlike customized logic chips, could be mass-produced uniformly, creating significant economies of scale. Intel deliberately targeted this scalable product, enabling rapid growth. However, in 1969, Intel's path shifted after a small Japanese calculator company, Busicom, approached them requesting customized chips for calculators. Intel engineer Ted Hoff realized Busicom's request—numerous specialized chips—was impractical and overly complex. Instead, Hoff proposed creating a single, versatile chip (the "microprocessor") paired with programmable memory, capable of varied computations simply by changing software, thus standardizing hardware production and shifting specialization to software.
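A minimal sketch of that division of labor follows, with a tiny invented instruction set standing in for a real microprocessor: the "hardware" (a fixed interpreter) never changes, while different programs stored in memory produce different products.

```python
# One fixed interpreter executes whatever program sits in memory, so new
# behavior comes from new software rather than new hardware. The tiny
# three-instruction set below is invented purely for illustration.

def run(program: list[tuple], memory: dict[str, int]) -> dict[str, int]:
    """Execute a program of (op, dest, a, b) instructions against memory."""
    for op, dest, a, b in program:
        if op == "add":
            memory[dest] = memory[a] + memory[b]
        elif op == "sub":
            memory[dest] = memory[a] - memory[b]
        elif op == "mul":
            memory[dest] = memory[a] * memory[b]
        else:
            raise ValueError(f"unknown op: {op}")
    return memory


# The same "hardware" becomes a different calculator purely by loading a
# different program (both programs here are hypothetical examples).
adder  = [("add", "out", "x", "y")]
markup = [("mul", "t", "x", "y"), ("add", "out", "t", "x")]

print(run(adder, {"x": 2, "y": 3})["out"])      # 5
print(run(markup, {"x": 100, "y": 2})["out"])   # 300
```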
This insight led Intel to produce the groundbreaking "4004" chip in 1971, widely regarded as the first commercial microprocessor. Intel marketed it as a "microprogrammable computer on a chip," transforming computing fundamentally. Suddenly, logic chips didn't need constant redesign. Computers could instead rely on general-purpose chips, mass-produced efficiently, with software handling specialized tasks. Intel’s innovation launched a computing revolution, making microprocessors affordable and ubiquitous.
Caltech professor Carver Mead, closely affiliated with Intel and a confidant of Gordon Moore, recognized early the extraordinary societal implications of mass-produced microprocessors. Mead popularized the term "Moore’s Law," predicting exponential growth in computing capability. He accurately foresaw a transformative future, where small, inexpensive silicon chips embedded in everyday objects would revolutionize human interaction with technology, automate life extensively, and dramatically enhance information processing.
Mead and Intel’s founders recognized that real revolutionary power now belonged to those controlling the new technologies of microelectronics and software. The shift from industrial to digital economies would fundamentally redistribute societal influence toward people and companies at the heart of the computing industry. Thus, despite outwardly appearing conservative and business-focused, Noyce, Moore, and their Intel colleagues correctly identified themselves as the true revolutionaries, redefining human life more profoundly than the political radicals of their era.
Ultimately, Intel's revolutionary innovation in microprocessors laid the foundation for today’s digital world, permanently reshaping global economies, power structures, and daily life.
Chapter Fourteen, "The Pentagon's Offset Strategy," explores how the U.S. Department of Defense strategically embraced Silicon Valley’s emerging semiconductor technology in the late 1970s and early 1980s to regain military superiority over the Soviet Union, particularly after the Vietnam War.
At the heart of this transformation was William Perry, a Silicon Valley entrepreneur turned Undersecretary of Defense for Research and Engineering under President Carter. Unlike Intel’s Bob Noyce and Gordon Moore, who focused on mass-market consumer electronics, Perry had deep ties to defense technology, having worked at Sylvania Electronic Defense Labs and run his own surveillance tech company using Intel chips. He saw firsthand how microprocessors and integrated circuits could transform defense systems.
By the 1970s, the Soviet Union had achieved nuclear parity with the U.S. and possessed superior conventional forces in Europe. Analysts like Andrew Marshall, head of the Pentagon’s new Office of Net Assessment, concluded that the U.S. had lost its qualitative edge in warfare and urgently needed to leverage its technological lead in computing to counter the USSR’s numerical advantage. This approach would later be called the Second Offset Strategy: using precision-guided weapons, sensors, and computing power to overcome Soviet strength with smarter—not more—systems.
Perry recognized that exponential advances in semiconductor performance, driven by Moore’s Law, would make it feasible to integrate powerful computing into compact weapons systems. He envisioned "smart weapons"—missiles and munitions with advanced guidance systems enabled by chips—that would drastically improve accuracy and reduce collateral damage.
Programs like Paveway laser-guided bombs had already shown what silicon-based guidance systems could do. Now, Perry expanded this vision to include new systems such as Tomahawk cruise missiles, which used radar altimeters and terrain-matching algorithms to navigate with precision. Importantly, he advocated for integrated systems—satellites, radars, processors, and missiles working together in real time—which culminated in DARPA’s "Assault Breaker" project. This program aimed to connect aerial radar, sensors, and ground-launched missiles into a coordinated strike system, laying the groundwork for modern network-centric warfare.
Perry’s push faced skepticism. Critics in Congress and the press questioned the practicality and reliability of so-called "smart" weapons, citing past failures like the vacuum-tube-based Sparrow missile. Many doubted that microelectronics could improve fast enough to meet Perry's ambitious goals. But Perry and supporters like Marshall believed Moore’s Law guaranteed rapid, compounding performance gains. They saw chips not as incremental improvements but as revolutionary catalysts for a new kind of warfare.
Even after Perry left office in 1981, the Defense Department continued investing heavily in precision technologies. Still, by then, the dynamic between the military and Silicon Valley had reversed. Where once the Pentagon had driven innovation, now the chip industry was being fueled by commercial demand, particularly from consumer electronics and corporate computing. Companies like Intel focused less on military applications and more on mass-market products, whose scale funded the R&D that kept Moore’s Law on track.
Nonetheless, the military reaped the benefits of this broader tech ecosystem. As U.S. semiconductor firms grew dominant globally, their innovations enabled the development of more accurate, interconnected, and data-driven military systems. These technologies not only reshaped U.S. defense strategy but also deepened America's economic and geopolitical influence, linking allies in Asia through supply chains and shared innovation.
In conclusion, Chapter Fourteen shows how America's post-Vietnam military renewal was powered not by more troops or tanks, but by silicon—the chips at the heart of modern computing. This shift recast American military power and anchored it to the commercial success and technological leadership of Silicon Valley, making the U.S. military increasingly reliant on the semiconductor industry to maintain its global edge.
Chapter Fifteen, “That Competition is Tough,” chronicles the crisis that gripped the U.S. semiconductor industry in the 1980s as Japanese firms emerged as dominant, high-quality competitors. It centers on Richard Anderson, a Hewlett-Packard (HP) executive whose evaluations of chip quality gave him enormous influence over which companies succeeded in selling semiconductors to one of Silicon Valley’s largest buyers.
While U.S. chipmakers like Intel and Texas Instruments had long dismissed Japanese competitors like Toshiba and NEC as imitators, Anderson’s rigorous testing showed otherwise. At a 1980 industry conference in Washington, D.C., Anderson revealed that Japanese DRAM chips vastly outperformed their American counterparts in reliability. Japanese failure rates were under 0.02%, while even the best U.S. firms had rates more than four times worse, and the worst had failure rates over ten times higher. Despite similar designs and costs, U.S. chips were clearly inferior—raising serious doubts about Silicon Valley’s future.
This episode reflected a broader shift: Japan had become a global leader in manufacturing, quality, and even innovation, shaking the foundations of American industrial supremacy. Once seen as a maker of cheap goods, Japan—through firms like Sony—transformed into a hub of technological excellence. Akio Morita, Sony’s visionary CEO, led the charge, insisting that Japan not merely replicate foreign technologies, but originate transformative products.
The 1979 launch of the Walkman was a turning point. It fused cutting-edge integrated circuits with visionary consumer design, selling 385 million units worldwide and redefining portable entertainment. Although the integrated circuit was invented in the U.S., it was Japan that now showed how to commercialize it at scale and quality.
Even as Japanese executives publicly downplayed their scientific credentials, framing their strength as disciplined execution rather than invention, the success of products like the Walkman belied this narrative. Makoto Kikuchi, Sony’s research head, told American journalists that the U.S. had more “geniuses,” while Japan thrived through uniform competence—implying that the Japanese advantage lay in consistent, large-scale manufacturing. But Japan was clearly innovating, not just implementing.
The U.S., ironically, had helped build this competition. After World War II, American occupation forces transferred transistor knowledge to Japanese engineers and opened U.S. markets to their goods to cultivate a capitalist ally in the Cold War. That policy succeeded—perhaps too well. By the 1980s, Japan’s economic rise was threatening U.S. technological leadership, particularly in semiconductors.
Veteran executive Charlie Sporck, who had led National Semiconductor after stints at GE and Fairchild, was both alarmed and impressed by Japanese efficiency. Known for his own focus on productivity, Sporck admitted that Japan's edge was staggering. He sent his workers to tour Japanese factories, hoping to glean insights. They returned with stories of unwavering loyalty, reporting that Japanese foremen prioritized company over family and that workers were deeply committed. There were no protests or symbolic effigy burnings—just discipline and performance.
Sporck made a documentary of the visit and shared it with his workforce, calling it a "beautiful story"—a sobering reminder of how formidable the competition had become. Japanese semiconductor firms didn't just copy; they executed better, innovated in consumer products, and reshaped global technology.
The chapter starkly portrays how the postwar U.S.-Japan alliance, originally built on mutual benefit, had evolved into an intense rivalry. Silicon Valley’s dominance was no longer assured, and “Made in Japan” had become a global mark of quality and innovation. The U.S. chip industry now faced a painful reckoning—not only about its manufacturing practices but also about its own assumptions of technological superiority.