Introduction: The Five-Layer AI Economy and the Rise of Compute Nationalism
On the morning of May 13, 2026, President Donald Trump boards Air Force One for Beijing, accompanied by a delegation of sixteen of America’s most powerful corporate executives: Elon Musk of Tesla, Tim Cook of Apple, Larry Fink of BlackRock, Kelly Ortberg of Boeing, David Solomon of Goldman Sachs, Stephen Schwarzman of Blackstone, Jane Fraser of Citigroup, and Meta’s Dina Powell McCormick, among others.
The summit agenda covers trade, Taiwan, the Iran war, and — above all — artificial intelligence. The centerpiece of that last item is a single question that neither side has been able to resolve: who controls the hardware on which the world’s intelligence infrastructure runs? Conspicuously, one CEO is absent from the delegation. Jensen Huang, founder and chief executive of NVIDIA, the company that more than any other has become the physical substrate of the global AI economy, told CNBC’s Jim Cramer days before the trip, “We should let the president announce whatever he decides to announce… If invited, it would be a privilege, it would be a great honor to represent the United States.”1 His reticence was diplomatic. His company’s absence from the Beijing table was not.
The reason for that tension — commercial, geopolitical, and strategic — is the subject of this paper. And it cannot be understood without a framework adequate to the moment.
“AI is no longer a single breakthrough or application — it is essential infrastructure. Every company will use it. Every nation will build it. From energy and chips to infrastructure, models and applications, every layer of the stack is advancing at once.”
— Jensen Huang, NVIDIA GTC 2026 Keynote2
Artificial intelligence is almost universally described as a software revolution. That description is convenient, familiar, and increasingly inadequate. Software is the visible layer of AI — the part users experience through chat interfaces, copilots, synthetic media, autonomous agents, and embedded intelligence across enterprise platforms. But beneath every visible AI interaction sits a far larger physical infrastructure whose scale, capital intensity, and geopolitical significance are beginning to reshape global power in ways that software narratives cannot fully capture.
The distinction matters structurally, not rhetorically. Modern frontier AI depends on electricity generation, semiconductor design, wafer fabrication, advanced packaging, extreme-ultraviolet lithography, cooling systems, high-bandwidth networking fabrics, datacenter construction, cloud orchestration, model engineering, and deployment architectures capable of operating at industrial scale. The economics of this stack increasingly resemble energy infrastructure, transportation backbones, telecommunications networks, and defense industrial bases — not software startups. Infrastructure at this depth does not scale cheaply, does not move quickly, and cannot be improvised under pressure. It requires years, capital measured in the hundreds of billions, and the kind of state coordination that markets alone cannot provide.
Economic historian Chris Miller of Tufts University’s Fletcher School captured this structural reality with unusual precision in his landmark study of the semiconductor industry: “Semiconductors have defined the world we live in, determining the shape of international politics, the structure of the world economy, and the balance of military power.”3 Miller was writing about chips broadly. In 2026, his observation applies with even greater force to the specific class of AI accelerators that have become the scarce industrial input at the center of the world’s most consequential technological competition.
That competition, in its deepest dimension, is not merely commercial or regulatory. Harvard political scientist Graham Allison — whose concept of the “Thucydides Trap” has become the defining framework for understanding U.S.–China great-power dynamics — has argued that the relationship between the two countries will, for as far ahead as one can see, constitute “a ruthless rivalry”4 across nearly every domain: tech, trade, industry, military, and global influence. Artificial intelligence has become the most contested terrain within that rivalry, because whoever controls AI infrastructure does not merely hold a commercial advantage. They hold the capacity to generate intelligence at industrial scale — and intelligence, in the twenty-first century, is power.
Huang has offered the clearest industrial map of what that power depends on. In repeated public presentations — and most sharply in his April 2026 interview with the Dwarkesh Patel podcast — he described AI as “a five-layer cake”5: energy at the base, then chips, then infrastructure (datacenters and networking), then models, then applications at the top. The insight is not merely taxonomic. It is strategic. Each layer depends on the layers below it. Applications are the visible tip; energy is the invisible foundation. And Huang’s point — made under considerable political pressure and with evident frustration — was that you cannot win the AI race by sacrificing any single layer for any other. The United States, in his view, was doing precisely that by ceding the chip layer to China’s domestic market.
It is within this five-layer structure that the current confrontation between Washington and Beijing finds its sharpest expression. At the heart of the dispute sits a single chokepoint: NVIDIA’s AI accelerators. These chips — the H100, the H200, the Blackwell B200 — are the primary computational instruments of the frontier AI economy. Training a large language model, running autonomous systems at scale, accelerating drug discovery, optimizing logistics at the national level: all of these applications run, overwhelmingly, on NVIDIA hardware. This concentration of capability in a single company’s product line has transformed what was once a commercially interesting chipmaker into a geopolitical institution.
The policy history of the past four years reflects that transformation with striking clarity. The Biden administration imposed initial export controls in October 2022, restricting sales of the A100 and H100 to Chinese buyers. NVIDIA responded by designing the H20, a downgraded China-specific chip built to comply with those rules; Washington then banned even that product in April 2025 — a decision that cost NVIDIA a $4.5 billion inventory charge6 in a single quarter and wiped out an estimated $8 billion in projected H20 revenue for the following quarter. The Trump administration partially reversed course in December 2025, announcing that the more advanced H200 — still one generation behind the Blackwell architecture — could be sold to approved Chinese customers, subject to strict licensing requirements and a 25 percent U.S. government surcharge on each sale.7 Meanwhile, the Blackwell B200 — NVIDIA’s most capable chip — remained, and as of this writing remains, completely off-limits to Chinese buyers.
The practical outcome of these oscillating restrictions has been stark. In May 2026, Huang disclosed in an interview with the Special Competitive Studies Project that NVIDIA’s AI chip market share in China had dropped to precisely zero. “In China, we have now dropped to zero,” he confirmed — a collapse from an estimated 70 to 95 percent market share only two years prior.8 Huang’s assessment of the policy that produced this outcome was blunt: “Conceding an entire market the size of China probably does not make a lot of strategic sense, so I think that has already largely backfired.”9
This is the central paradox that this paper seeks to illuminate. Washington believes that restricting China’s access to advanced AI hardware protects American strategic supremacy. Huang believes those same restrictions are accelerating China’s push toward computational self-sufficiency and fracturing the global AI technology stack that American companies built and from which American companies benefit. Both positions contain genuine strategic logic. Both positions reflect something true about the nature of what is actually at stake. And the summit that opens this week in Beijing — where sixteen American executives will sit across the table from Chinese counterparts to discuss trade, technology, and geopolitical accommodation — represents nothing less than the first attempt at high-level diplomatic management of a conflict that, for the past several years, has been fought primarily through regulatory instruments.
This paper argues that the contemporary confrontation between the United States and China over AI hardware, semiconductor ecosystems, rare earth leverage, industrial subsidies, sovereign cloud infrastructure, and strategic capital flows should not be understood as a “chip war.” That phrase is useful but too narrow. This is a struggle over control of the infrastructure required to generate intelligence at industrial scale — and it requires a new conceptual framework equal to its scope.
I call this framework Compute Nationalism.
Compute Nationalism is the doctrine under which nation-states treat computational infrastructure — including semiconductor design, advanced manufacturing ecosystems, datacenters, power systems, rare earth supply chains, AI accelerators, inference capacity, and sovereign cloud infrastructure — as strategic national assets to be protected, expanded, subsidized, weaponized, or denied in pursuit of geopolitical power. The doctrine differs fundamentally from digital sovereignty, which concerns informational governance: data localization, privacy law, platform jurisdiction. Compute Nationalism concerns something more fundamental — the industrial machinery required to generate intelligence itself. Software can move quickly; infrastructure cannot. AI models can be fine-tuned in weeks; fabrication ecosystems cannot be improvised in years. Cloud applications can be deployed globally; electric grids cannot be summoned overnight. This creates scarcity. Scarcity creates politics. Politics creates nationalism.
The Center for a New American Security, in its 2025 analysis of global compute and national security, identified the core strategic choice facing Washington with unusual precision: “The United States faces a choice: leverage its current lead to promote U.S. AI infrastructure and applications globally, while preserving its edge at the frontier; or continue to primarily focus on protection, while other countries gradually narrow the gap.”10 That formulation describes exactly the tension this paper examines. The five-layer structure that Jensen Huang uses to describe the AI economy — energy, chips, infrastructure, models, applications — is also, implicitly, a map of where sovereign intervention is possible, where leverage exists, and where strategic error can compound across decades rather than quarters.
This paper proceeds in five sections. The first defines Compute Nationalism and distinguishes it from adjacent frameworks — digital sovereignty, techno-nationalism, economic nationalism — to establish its conceptual distinctiveness. The second examines America’s architecture of compute containment: the export control regime, the semiconductor equipment chokepoints, and the strategic logic of tempo denial. The third analyzes China’s counteroffensive — the sovereignty doctrine, the rare earth leverage, the Huawei pivot, and the domestic manufacturing long game. The fourth extends the framework globally through comparative case studies, from the CHIPS Act to the Gulf states’ AI infrastructure ambitions. The fifth draws strategic lessons for corporations, startups, investors, and governments navigating this new terrain. The conclusion returns to the central thesis: that the moment intelligence becomes industrial infrastructure, geopolitics follows — and that moment, as the Beijing summit of May 2026 makes plain, has already arrived.

Section 1: Defining Compute Nationalism — State Sovereignty in the Age of Industrial Intelligence
Every geopolitical transition eventually exposes the inadequacy of inherited vocabulary. The digital era produced useful concepts: cybersecurity, platform governance, digital sovereignty, supply chain resilience, techno-nationalism. None of them fully captures what is emerging around artificial intelligence infrastructure. The reason is structural.
Most earlier digital frameworks were built on an assumption that information moved faster than physical constraints. Software scaled cheaply. Cloud infrastructure could be rented. Capital moved efficiently across borders. Talent flowed internationally. Globalization distributed industrial capacity to wherever it was most economical. These assumptions are weakening. Artificial intelligence at frontier scale is materially constrained in ways that earlier software revolutions were not — and Compute Nationalism begins with that recognition.
Its central premise is straightforward: computational infrastructure has become a strategic national asset because the production of intelligence increasingly depends on scarce industrial capacity. A startup building a consumer application faces relatively modest infrastructure requirements. A nation seeking sovereign frontier AI capability confronts a categorically different set of constraints: access to advanced semiconductor fabrication, packaging ecosystems, GPU allocation, grid-scale electricity, cooling infrastructure, high-bandwidth networking, sovereign cloud orchestration, and inference deployment economics. These are infrastructure questions. Infrastructure questions invite state intervention. Thus, Compute Nationalism.
Compute Nationalism vs. Digital Sovereignty
Digital sovereignty focuses on control over information systems: where data is stored, who governs privacy, which cloud providers host sensitive workloads, whether foreign governments can access national information infrastructure. These are important questions. But they are downstream of the deeper question that Compute Nationalism addresses: who controls the physical machinery required to process that information, train the models that make sense of it, and deploy intelligence at the scale of an economy or a military. Data without compute cannot train frontier models. Privacy law does not create fabrication ecosystems. Cloud jurisdiction does not manufacture accelerators. Digital sovereignty addresses governance. Compute Nationalism addresses industrial capability.
Compute Nationalism vs. Techno-Nationalism
Techno-nationalism is broader — encompassing national efforts to strengthen domestic technological competitiveness across industries through industrial policy, localization mandates, strategic investment, or trade intervention. The category includes electric vehicle subsidies, battery manufacturing incentives, quantum computing initiatives, and telecommunications competition. Compute Nationalism is more precise. It focuses specifically on intelligence-producing infrastructure: systems whose output is not a product but a capability — the capacity to generate, process, and deploy AI at scale. Compute combines a set of properties rarely found together: extreme capital intensity, supply constraint, energy dependence, military relevance, commercial indispensability, and physical scarcity. Few technology categories exhibit all of these simultaneously.
Intelligence Has Become Industrial
For most of human history, intelligence was treated primarily as a function of human capital — education, institutions, research culture, labor quality. Artificial intelligence transforms that model. Intelligence now has industrial inputs: power, silicon, cooling, packaging, networking, manufacturing ecosystems, and datacenter deployment. This industrialization is not incidental. It changes who can produce intelligence, at what scale, under what political conditions, and at what cost.
Huang’s GTC 2026 framing is worth quoting precisely because of its deliberate comprehensiveness: “AI is no longer a single breakthrough or application — it is essential infrastructure. Every company will use it. Every nation will build it.”11 The phrase “every nation will build it” is, in effect, a prediction of Compute Nationalism. When intelligence production requires industrial infrastructure that states can build, subsidize, or deny, states will inevitably behave as sovereign actors in that space. This is not ideology. It is institutional logic.
Scarcity Creates Strategy
If frontier compute were universally abundant and freely tradeable, geopolitical competition over it would be substantially muted. But it is not. Scarcity exists across multiple layers simultaneously: accelerator supply, advanced packaging bottlenecks, EUV lithography concentration, grid-scale power limitations, permitting delays, rare earth processing dependencies, and the sheer capital intensity of fab construction. Each layer of scarcity creates a leverage point for the state that controls it and a vulnerability for the state that depends on it. Export controls matter because accelerators are scarce. Rare earth restrictions matter because the materials are concentrated. Datacenter permitting matters because grid-scale power is constrained. The architecture of Compute Nationalism is, at bottom, an architecture of scarcity management.
The central strategic question has shifted. It is no longer primarily “Who has the best AI software?” It is increasingly “Who controls the infrastructure required to generate intelligence at industrial scale?” That question — not model benchmarks, not parameter counts, not deployment speed — is what the Beijing summit of May 2026 is actually about.

Section 2: America’s Architecture of Compute Containment
The most common mischaracterization of American AI export policy is to describe it as regulation. That description is administratively accurate and strategically misleading. What Washington has constructed over the past four years is not a regulatory framework in any conventional sense. It is an attempt to shape the geography of intelligence production — to determine, through administrative instruments, which nations can build AI at frontier scale and which cannot. This is infrastructure strategy, not trade policy.
NVIDIA as Strategic Chokepoint: From Chipmaker to Geopolitical Institution
NVIDIA’s trajectory from graphics processor manufacturer to strategic chokepoint is one of the more remarkable transformations in modern corporate history. The company did not begin with geopolitical ambitions. It built GPUs for gaming, then discovered that the same parallel-processing architecture that rendered polygons could accelerate scientific computing, then machine learning, then deep learning, then the entire edifice of modern frontier AI. This convergence of commercial success and strategic centrality created an institution of extraordinary geopolitical significance — a private company whose product decisions are effectively national security decisions.
NVIDIA’s dominance over the AI accelerator market emerged through compounding advantages: the CUDA software ecosystem, which created extraordinary developer lock-in; hardware performance leadership that consistently exceeded alternatives; decades of software maturity; hyperscaler standardization; and startup dependency on a single vendor for the compute that determines whether frontier model training is possible at all. The financial consequences of this dominance are not abstract. China once represented between 20 and 25 percent of NVIDIA’s data center revenue, and CFO Colette Kress confirmed that the Chinese AI accelerator market alone was expected to grow to nearly $50 billion12 — a market that NVIDIA, as of May 2026, has effectively lost entirely.
Huang’s argument against the export control regime is not primarily financial, though the financial stakes are real. His deeper concern is architectural. Speaking on the Dwarkesh Patel podcast in April 2026, he framed the issue in terms of the five-layer AI stack: “Why are you causing one layer of the AI industry to lose an entire market so that you could benefit from another layer of the AI industry? There are five layers, and every single layer has to succeed.”13 His concern is that restricting the chip layer — the second layer in his framework — while American application companies continue to operate in China is not coherent strategy. It accelerates China’s adoption of a non-American AI technology stack while sacrificing NVIDIA’s ability to keep the world’s AI developers inside CUDA’s ecosystem.
Washington’s view is different, and not without logic. Advanced AI accelerators are dual-use technology: the same GPU cluster that trains an enterprise language model can support autonomous targeting research, cyber operations at scale, military simulation, or strategic logistics optimization. The national security logic exists. What is genuinely contested is proportionality — whether the strategic benefit of denial outweighs the strategic cost of accelerating China’s domestic semiconductor ecosystem and fragmenting the global AI stack. As Huang put it in his most pointed public statement on the matter: “It’s in the best interest of America to serve that China market. It’s in the best interest of China to have the American technology.”14
The Regulatory Timeline and Its Paradoxes
The export control timeline reveals both the strategic logic and the paradoxes of compute containment. The initial restrictions on A100 and H100 exports to China were imposed in October 2022. NVIDIA designed the H20 specifically for the Chinese market in compliance with those rules — a lower-performance chip built to stay beneath the export threshold. Washington then banned the H20 itself in April 2025, imposing the $4.5 billion inventory charge15 that NVIDIA disclosed in its SEC filings. The Trump administration partially reversed course in December 2025, approving H200 sales to selected Chinese customers under licensing conditions and a 25 percent government surcharge — while maintaining the complete ban on Blackwell B200 exports, NVIDIA’s most advanced architecture.16
The H200 approval created its own paradox. One day after the U.S. Bureau of Industry and Security formally issued rules permitting H200 exports on a case-by-case basis in January 2026, Reuters reported that Chinese customs authorities had instructed agents not to allow the chips into the country.17 China’s refusal to accept the chips Washington had just agreed to sell followed no commercial logic. It was sovereignty signaling: Beijing was communicating that it would not accept American-controlled access on Washington’s terms, and that it intended to develop its own supply chain rather than remain structurally dependent on American licensing decisions.
The House of Representatives simultaneously extended the containment architecture in January 2026, passing the Remote Access Security Act by a 369-to-22 vote — closing a loophole that had allowed Chinese companies to rent access to export-controlled chips through offshore data centers in Indonesia, Japan, and elsewhere, without technically violating existing export law. The legislation extended export control logic to cloud computing for the first time, treating certain forms of remote access to controlled AI infrastructure as equivalent to physical export.
Semiconductor Equipment: The Deeper Chokepoint
Finished chips dominate the public narrative of compute containment. Manufacturing tools may determine its long-term outcome. The semiconductor supply chain is not a single industry but a layered civilizational infrastructure, and the tools required to manufacture advanced chips — lithography systems, deposition equipment, etch tools, inspection and metrology systems — are even more concentrated than the chips themselves.
ASML, the Dutch company that holds a monopoly on extreme-ultraviolet lithography equipment, sits at the center of this architecture. Without EUV systems, no foundry can manufacture chips below seven nanometers with the yield required for commercial production. China’s exclusion from EUV equipment — enforced through export restrictions coordinated between the United States, the Netherlands, and Japan — means that Chinese foundries remain structurally constrained at the manufacturing level regardless of what design capability they develop. This is the deeper layer of compute containment: not restricting chips, but restricting the ability to make chips. Huang himself acknowledged this asymmetry, noting in the Dwarkesh Patel interview that because China lacks EUV access and is confined to older manufacturing nodes, Chinese foundries produce roughly one-tenth the flops per dollar of TSMC’s leading-edge process.
Strategic Time as a Weapon
One of the least discussed dimensions of compute containment is temporal. Export restrictions are not always intended to permanently prevent capability development. Sometimes the objective is delay — buying time for domestic reshoring, allied coordination, military adaptation, and next-generation architecture development. A twelve-month delay in frontier AI capability can matter. A three-year delay in domestic manufacturing maturation can matter substantially more. A five-year delay in sovereign compute independence, compounded across the full stack, can reshape the strategic landscape.
Whether the time purchased by the current export control regime has been used wisely is a legitimate policy question. The CHIPS and Science Act committed federal resources to domestic semiconductor manufacturing expansion. Hyperscaler capital expenditure — forecast to exceed $600 billion across the five major American cloud providers in 2026 — continues to expand domestic compute capacity. But the cost of the China market loss, estimated at $10 to $15 billion annually for NVIDIA alone, raises serious questions about whether the tempo strategy has been calibrated correctly.

Section 3: China’s Counteroffensive — Compute Sovereignty Through Industrial Retaliation
If America’s doctrine is compute containment, China’s doctrine is compute sovereignty. The distinction matters because Beijing’s response is frequently mischaracterized as reactive improvisation. It is more coherent and more structurally grounded than that characterization allows.
Great powers do not absorb strategic pressure passively. They reinterpret pressure as structural threat and redesign national behavior accordingly. From Beijing’s perspective, dependence on foreign computational infrastructure — accelerators, manufacturing ecosystems, EDA software, advanced packaging, cloud dependencies, capital access — constitutes a strategic vulnerability of the first order. If any of these systems can be politically restricted, they cannot serve as durable foundations of national AI capability. That recognition transforms the strategic question from “How do we acquire more advanced foreign infrastructure?” to “How do we eliminate dependence on infrastructure we do not control?” That is compute sovereignty — and it is the Chinese expression of Compute Nationalism.
The Paradox of Strategic Pressure
The most important thing to understand about China’s response to American export controls is that the pressure appears to have accelerated, rather than contained, China’s sovereign AI ambitions. This is the central paradox of compute containment: the harder the pressure, the stronger the incentive toward independence.
Huang recognized this dynamic explicitly. In his Dwarkesh Patel interview, he argued that Chinese AI labs like DeepSeek were building infrastructure that does not depend on NVIDIA’s CUDA platform at all — and that Huawei’s Ascend AI chip ecosystem was gaining traction precisely because NVIDIA had been pushed out of the market. “We want to make sure that all the AI developers in the world are developing on the American tech stack,”18 he said. The implied concern: if export controls push Chinese developers onto the Huawei Ascend platform, they create an alternative AI ecosystem that can then be exported globally — not just to China, but to every country in the Global South that would otherwise have adopted CUDA.
The data supports this concern. Huawei’s AI chip revenue is projected to reach $12 billion in 2026, with a 60 percent market share in China by year-end.19 NVIDIA’s share of that same market stands at zero. This is not the outcome compute containment was designed to produce.
Rare Earth Retaliation: The Material Layer
Artificial intelligence is frequently discussed as though it exists above the material world. It does not. AI infrastructure is profoundly physical — dependent on specific metals, materials, magnets, and industrial chemistry that are not evenly distributed around the globe. China’s dominance in rare earth mining and, more importantly, rare earth processing creates a material chokepoint that partially mirrors the semiconductor chokepoints Washington has constructed. In December 2024, Beijing banned the export of gallium and germanium to the United States, two elements critical to compound semiconductors and advanced manufacturing.
This move — announced one day after the Biden administration’s December 2024 round of export control tightening — was not coincidental. It was a coordinated demonstration that the material layer of the AI stack creates leverage for Beijing as surely as the chip layer creates leverage for Washington. The asymmetry matters: Washington weaponizes manufacturing chokepoints at the design and fabrication level; Beijing weaponizes material chokepoints at the extraction and processing level. Neither has decisive advantage across the full stack. Both have sufficient leverage to impose costs on the other.
Huawei: The National Champion Pivot
No company better illustrates the dynamics of Chinese compute sovereignty than Huawei. Washington initially targeted Huawei as a strategic security threat — cutting its access to advanced chips, American software, and global supply chains. Beijing responded by treating Huawei as the primary vehicle of domestic technological sovereignty. The pressure that was intended to weaken Huawei as a commercial competitor instead elevated it as a national infrastructure institution.
Huawei’s Ascend AI chip ecosystem, combined with its CANN and MindSpore software stacks, constitutes the embryonic alternative to NVIDIA’s dominant CUDA platform. It does not yet match NVIDIA’s performance per watt. But that is not the strategic metric that matters. The strategic metric is whether it can serve as a viable foundation for Chinese AI development without dependence on American supply chains — and increasingly, it can. Huawei’s CloudMatrix cluster, which uses Ascend chips in massive configurations to achieve collective compute that rivals individual NVIDIA systems through sheer scale, demonstrates that sovereignty-oriented engineering can produce strategic capability even when it cannot yet produce performance parity.
The Manufacturing Long Game
China’s domestic semiconductor manufacturing ecosystem — centered on SMIC, Hua Hong, and a growing set of specialized foundries — remains constrained by the absence of EUV lithography. But this constraint, while real, should not be misread as permanent incapacity. The relevant strategic metric is not current parity but directional trajectory: less dependency, more sovereignty, greater resilience, increasing institutional learning.
Manufacturing capability compounds. Engineering ecosystems develop yield-improvement practices over years of iterative production. Supply chains localize as domestic inputs substitute for foreign ones. Talent ecosystems expand through engineering education and corporate investment. China’s fabrication trajectory should be evaluated over a decade-long horizon, not a quarterly one. Washington’s export controls may have slowed that trajectory; they have not reversed it.
The Fragmentation Risk
The deepest geopolitical consequence of reciprocal compute nationalism — one that neither Washington nor Beijing has fully reckoned with — is the fragmentation of the global AI technology stack. If both sides pursue sovereign compute architectures, the world does not end up with two versions of the same technology. It ends up with two incompatible technological ecosystems, each with its own hardware standards, software dependencies, cloud architectures, developer communities, and export regimes.
This is not digital fragmentation in the relatively benign sense of separate social media ecosystems. It is intelligence-infrastructure fragmentation — the creation of a world in which the same AI model cannot run efficiently on both American and Chinese hardware, in which developer skills are not transferable across the two stacks, and in which every country must ultimately choose a technological alignment. The geopolitical implications of that choice, for nations across Asia, Africa, Latin America, and the Middle East, may prove more consequential than any single export control decision made in Washington or Beijing.

Section 4: Global Case Studies in Compute Nationalism
One of the most important tests of any new geopolitical framework is portability. If a doctrine explains only one bilateral rivalry, it may be descriptive shorthand. If it predicts behavior across multiple actors with different political systems, economic structures, and strategic contexts, it represents a genuine conceptual category. Compute Nationalism passes that test.
United States: The CHIPS Act and Domestic Sovereignty
American criticism of Chinese industrial policy becomes more complicated when examined alongside the CHIPS and Science Act of 2022 — a $52 billion federal commitment to domestic semiconductor manufacturing expansion, research incentives, and supply chain reshoring. The legislation was framed publicly in terms of competitiveness, resilience, and national security. Its practical structure is industrial policy for compute infrastructure: public subsidy for private fabrication capacity, coordinated with allies through the Chip 4 alliance of the United States, Japan, South Korea, and Taiwan. Washington practices the sovereign compute investment it critiques in Beijing; the difference lies in political system and institutional form, not strategic logic.
The geography of American compute sovereignty is also notable. Semiconductor manufacturing investment in Arizona. Advanced packaging capacity in Ohio. Datacenter expansion throughout Texas. Energy-compute convergence reshaping grid planning across multiple states. AI infrastructure is visibly reshaping domestic political economy — not as a software phenomenon but as a physical industrial buildout with implications for land use, electricity regulation, labor markets, and municipal finance.
China: Made in China 2025 and AI Intensification
China’s Made in China 2025 initiative reflected strategic concern over dependency in critical industrial sectors well before AI became the dominant policy frame. Targets included robotics, advanced manufacturing, semiconductors, industrial automation, and new materials — the industrial foundations of what would later become AI infrastructure. The initiative’s structural logic maps directly onto Compute Nationalism: reduce dependency, build sovereign industrial capability, localize strategic ecosystems, enhance resilience. Artificial intelligence intensified the urgency without changing the doctrine. Chinese compute nationalism did not emerge as a sudden reaction to American pressure. It emerged from preexisting industrial sovereignty logic, and American pressure accelerated its implementation.
Gulf States: Sovereign Capital Meets AI Infrastructure
Perhaps the most strategically interesting emerging expression of Compute Nationalism appears in the Gulf. Saudi Arabia’s Humain AI initiative and the United Arab Emirates’ G42 are deploying sovereign wealth into AI infrastructure at a scale that commands serious attention. In late 2025, Washington approved Blackwell chip exports to both Humain and G42 — subject to strict security and reporting requirements — as part of a broader strategy to extend American AI infrastructure into the Gulf while maintaining governance control.20 CSIS has noted that this creates a structural tension: as the United States exports compute capacity abroad to maintain strategic relationships, it necessarily cedes some leverage over governance standards and deployment norms. Capital-rich energy states are, in effect, converting hydrocarbon wealth into compute influence — a pattern that may define the geopolitics of the Gulf in the AI era as definitively as oil defined it in the twentieth century.
India: Middle-Power Balancing
India’s position is structurally distinctive. A large and rapidly digitizing population. Exceptional talent depth in software engineering and mathematics. Strategic balancing between Washington and Beijing that reflects New Delhi’s long-standing preference for non-alignment. Nascent semiconductor ambitions that aspire to fabrication sovereignty but face significant infrastructure and capital constraints. India illustrates a middle-power version of Compute Nationalism: strategic recognition that computational sovereignty matters, combined with limited current capacity to achieve it unilaterally. The balancing logic is itself a form of compute nationalism — resisting dependency on either major bloc while seeking bilateral infrastructure partnerships with both.
The Global Stratification Risk
Across these cases, expression varies but structural logic converges. States increasingly ask: Do we control enough compute to remain strategically relevant? Are we dangerously dependent? Should infrastructure be subsidized, owned, or protected? Should foreign investment in AI ecosystems be scrutinized? These are compute nationalist questions, and they are being asked simultaneously in Washington, Beijing, Riyadh, Abu Dhabi, New Delhi, Brussels, Tokyo, and Seoul. The likely structural outcome is a stratified global order: a small number of sovereign compute powers capable of full-stack AI independence; a second tier of alliance-dependent participants who achieve partial sovereignty through allied relationships; and a larger tier of infrastructure consumers whose AI capabilities are fundamentally shaped by the choices made above them.

Section 5: Strategic Lessons from the Compute War
Geopolitical frameworks earn their value through the quality of strategic guidance they generate. Compute Nationalism is not merely descriptive — it is predictive. If the doctrine is correct, certain strategic consequences follow for every major actor type in the AI economy.
For Corporations: Infrastructure Risk as Board-Level Governance
For multinational corporations, the most immediate lesson is that compute infrastructure is no longer commercially neutral. Historically, firms optimized AI infrastructure for cost, performance, and vendor reliability. Compute Nationalism introduces geopolitical variables that corporate planning processes were not designed to handle. Can export controls disrupt our AI supply chain? Can foreign subsidiaries lose access to models or compute we depend on? Can political escalation alter our cloud licensing agreements? Are our AI dependencies concentrated in ecosystems vulnerable to restriction?
These are not theoretical concerns. They are present operational risks. The response is likely to involve infrastructure regionalization — separate compute footprints, compliance architectures, and deployment stacks for different geopolitical zones. This increases cost and complexity. But Compute Nationalism makes such architecture increasingly rational, and the companies that invest in resilient infrastructure now will be better positioned when the next round of restrictions arrives.
For Startups: The End of Infrastructure Abstraction
The startup economy has historically thrived on infrastructure abstraction: rent cloud resources, scale software cheaply, expand globally, raise capital efficiently. Frontier AI disrupts this model. GPU access, inference economics, cloud pricing, and power costs are not background constants — they are strategic variables that can shift dramatically when the geopolitical context changes. Startups that build AI-native products without attention to compute access dependencies may find that their most critical constraint is not talent or product-market fit, but the political conditions under which their infrastructure provider operates.
The CUDA ecosystem concentration that Huang describes as NVIDIA’s competitive moat is, from a startup perspective, a single point of political exposure. Any startup whose product depends on NVIDIA hardware is indirectly exposed to every policy decision that affects NVIDIA’s ability to manufacture and sell at scale. Infrastructure realism — understanding compute dependencies not just as engineering choices but as geopolitical positions — is becoming a required competency for founders building in frontier AI.
For Investors: Pricing Infrastructure Fundamentals
Investors have frequently approached AI through software economics: network effects, marginal cost compression, user growth, platform adoption. Compute Nationalism suggests that infrastructure variables may prove more durably determinative than model benchmarks. Who controls power? Who controls datacenters? Who controls accelerator access? Who benefits from sovereign subsidies? Over the next decade, these infrastructure questions may shape AI valuations more than capability demonstrations do.
Power infrastructure is increasingly an AI investment variable. Location matters. Climate matters. Regulatory friendliness toward datacenter construction matters. Grid relationships matter. The AI economy is reconnecting digital valuation with industrial fundamentals in ways that have not been true of the software economy for thirty years. Investors who apply software-era valuation frameworks to infrastructure-era AI companies may systematically underestimate both the opportunities and the risks.
For Governments: AI Policy Is Now Infrastructure Policy
For governments, the implication is most fundamental. Much AI policy discourse remains focused on governance: algorithmic transparency, bias mitigation, privacy, platform accountability, election integrity. These matters are real and important. But Compute Nationalism identifies a prior question that governance frameworks cannot address: Does the nation control enough compute to remain strategically relevant? A government that produces sophisticated AI ethics frameworks while lacking domestic compute capacity has optimized the wrong layer of the stack.
The policy agenda that follows from Compute Nationalism is uncomfortable because it is expensive, slow, and physically constrained. Grid modernization. Nuclear energy expansion. Transmission permitting reform. Semiconductor investment coordination. Datacenter infrastructure development. Public-private compute partnerships. These are not typical technology policy initiatives. They are industrial policy at the scale of the postwar energy buildout. Governments that recognize this early enough will have meaningful choices. Those that recognize it late will find their AI sovereignty constrained by decisions made by others.
The Caution Against Over-Nationalization
Compute Nationalism explains state behavior. It does not automatically endorse every nationalist response. Excessive fragmentation carries genuine costs: duplicative infrastructure, reduced efficiency, innovation slowdown, alliance friction, capital misallocation, and the erosion of the global technology commons that has been the foundation of AI’s rapid advance. The correct equilibrium between sovereign resilience and global openness remains genuinely uncertain, and policymakers who resolve that tension prematurely in either direction risk either strategic vulnerability or innovation stagnation.

Conclusion: The Beijing Summit and the Infrastructure War That Defines It
On May 13, 2026, as President Trump and his delegation of American executives arrive in Beijing for talks with President Xi Jinping, they carry with them a paradox that this paper has attempted to illuminate. The most important company in the global AI economy — NVIDIA, the firm that built the physical substrate on which the world’s frontier AI runs — is not represented in the delegation, and its AI chip market share in China stands at zero. It is an extraordinary strategic outcome, produced by policy choices whose internal logic is coherent on each side but whose aggregate effect has been to accelerate exactly the dynamic both sides sought to prevent.
Washington sought to constrain China’s access to frontier compute. The result has been the acceleration of China’s domestic compute sovereignty, the elevation of Huawei as a national champion, the emergence of an alternative AI technology stack, and the loss of the very market relationships that kept Chinese developers inside the American technology ecosystem. Beijing sought to resist dependency and build autonomous capability. The result has been a fragmentation of the global AI stack that creates genuine risks for Chinese developers who now build on infrastructure that is less capable, less interoperable, and less supported by the global developer community than CUDA.
Graham Allison has argued that the defining feature of the U.S.–China relationship, “for as far ahead as I can see, will be a ruthless rivalry”21 — a competition in which each side’s strategic choices trigger adaptations in the other that neither fully anticipates. The compute war is the most vivid current expression of that rivalry. It is structural, not incidental. And it will not be resolved by a single summit, a single trade agreement, or a single licensing decision about H200 chips.
What this paper has argued is that understanding this rivalry requires a framework equal to its depth. “Chip war” is too narrow. “Tech competition” is too vague. Compute Nationalism — the doctrine under which states treat intelligence-producing infrastructure as sovereign strategic assets — provides the conceptual structure needed to explain not just the U.S.–China confrontation, but the broader global convergence toward sovereign AI infrastructure that is reordering geopolitical relationships across the Gulf, South Asia, Europe, and beyond.
Why “Compute”
Because the contest is larger than software, and larger than chips. Because intelligence production at frontier scale depends on energy, silicon, cooling, packaging, networking, manufacturing ecosystems, and datacenter deployment — a full industrial stack that cannot be improvised under pressure. Because whoever controls that stack increasingly controls the capacity to generate strategic advantage, economic productivity, and military capability in the twenty-first century. “Compute” captures the physical substrate of intelligence production.
Why “Nationalism”
Because states are behaving as sovereign actors rather than neutral market participants. Because access to the most critical AI infrastructure is becoming politically conditioned. Because infrastructure is being subsidized, protected, weaponized, and denied on strategic grounds. Because dependency is generating sovereignty responses across every major power. “Nationalism” captures the political behavior that scarcity and strategic importance inevitably produce.
The Historical Analogy and Its Limits
Oil shaped twentieth-century geopolitics because industrial civilization depended on energy access. States secured supply, protected routes, built reserves, and fought over infrastructure. Artificial intelligence may generate analogous strategic dynamics — not because compute is identical to oil, but because strategic dependency on a scarce industrial input creates similar political behavior regardless of what that input is. The analogy is imperfect. Compute, unlike oil, can be generated through investment and manufacturing, not merely extracted. CUDA, unlike a refinery, creates network effects that make switching costly. But the core insight holds: when a civilization’s most important productive capacity depends on infrastructure that is scarce, physically specific, and politically controllable, states will treat that infrastructure as sovereign territory.
That is Compute Nationalism. And it began — as the Beijing summit of May 2026 makes unmistakably plain — not in some hypothetical future, but in the world we are already living in.

Footnotes and Sources
1. Jensen Huang, interview with Jim Cramer, CNBC, May 2026, as reported in: CNBC — Trump invites Musk, Cook, Fink to China trip. Huang was notably absent from the delegation accompanying President Trump to Beijing, May 13–15, 2026.
2. Jensen Huang, NVIDIA GTC 2026 Keynote Address, March 2026. Quoted and analyzed in: Colorado AI News — Quote of Note, Jensen Huang. The full quotation: “AI is no longer a single breakthrough or application — it is essential infrastructure. Every company will use it. Every nation will build it. From energy and chips to infrastructure, models and applications, every layer of the stack is advancing at once.”
3. Chris Miller, Chip War: The Fight for the World’s Most Critical Technology (Scribner, 2022). Miller is Associate Professor of International History at the Fletcher School, Tufts University. The quoted passage appears in the introduction; see also: McKinsey — Author Chris Miller on the Global Influence of Semiconductors. Miller’s further observation: “You can’t understand the modern world without putting chips at the center of the story.”
4. Graham Allison, Douglas Dillon Professor of Government at Harvard Kennedy School, Destined for War: Can America and China Escape Thucydides’s Trap? (Houghton Mifflin Harcourt, 2017). The quoted characterization — “ruthless rivalry” — appears across multiple public lectures and interviews; see: Harvard Kennedy School — Thucydides’s Trap; and Springer — Escaping Thucydides’s Trap: Dialogue with Graham Allison. Allison writes: “Across nearly every dimension — tech, trade, industry, military, and global influence — the US and China will be the fiercest competitors history has ever seen.”
5. Jensen Huang, interview with Dwarkesh Patel, Dwarkesh Podcast, April 2026. Full transcript and audio: Dwarkesh.com — Jensen Huang: TPU Competition, Why We Should Sell Chips to China. Quoted directly: “AI is a five-layer cake. The AI industry matters across every single layer, and we want the United States to win at every single layer, including the chip layer.”
6. NVIDIA Corporation, Form 8-K, Q1 FY2026, filed with the U.S. Securities and Exchange Commission. SEC — NVIDIA Q1 FY2026 Earnings Release. The filing states: “On April 9, 2025, NVIDIA was informed by the U.S. government that a license is required for exports of its H20 products into the China market. As a result of these new requirements, NVIDIA incurred a $4.5 billion charge in the first quarter of fiscal 2026.”
7. U.S. Department of Commerce, Bureau of Industry and Security, Press Release, January 13, 2026: BIS — Revised Semiconductor License Review Policy for China. The BIS release confirms: “Today’s rule follows President Trump’s December 8, 2025 announcement that the United States will allow the H200 and similar products to be shipped to approved customers in China.” See also: Nextgov/FCW — Lawmakers Worry Over New Rule. “Nvidia’s fastest performing chip, called Blackwell, is still barred from being sold to Chinese firms.”
8. Jensen Huang, interview with the Special Competitive Studies Project (SCSP), May 2026, as reported by: Tom’s Hardware — Jensen Says Nvidia Now Has ‘Zero Percent’ Market Share in China. Huang confirmed: “In China, we have now dropped to zero.” Context: NVIDIA’s market share in China’s AI accelerator segment had been estimated at 70–95 percent as recently as 2023–2024.
9. Jensen Huang, SCSP interview, May 2026. Full quotation: “Conceding an entire market the size of China probably does not make a lot of strategic sense, so I think that has already largely backfired. Maybe it made sense at the time, but I think the policy really needs to be dynamic and needs to stay with the times.” Source: Tom’s Hardware, ibid.
10. Center for a New American Security (CNAS), Global Compute and National Security, August 2025. CNAS — Global Compute and National Security. The report further notes: “Washington’s efforts to protect America’s AI leadership have relied heavily on controlling the export of advanced AI chips. Controls on semiconductor manufacturing equipment going to China were imposed as early as 2019, followed by AI chip export controls targeting U.S. adversaries in 2022, which were strengthened in 2023, 2024, and 2025.”
11. Jensen Huang, NVIDIA GTC 2026. See footnote 2. The five-layer framework is elaborated in: YourStory — Jensen Huang’s 5-Layer AI Stack. Huang stated: “The AI race won’t be decided by models alone. The real competition plays out across inputs (energy), industrial execution (chips and infrastructure), ecosystem design, and cultural readiness (applications).”
12. NVIDIA CFO Colette Kress, Q1 FY2026 earnings call, February 25, 2026. Cited in: 247 Wall Street — Nvidia: Jensen Huang Says We Should Be Selling Chips to China. Kress confirmed the Chinese AI accelerator market was expected to grow to nearly $50 billion, and that losing access “would have a material adverse impact on our business going forward and benefit our foreign competitors in China and worldwide.”
13. Jensen Huang, Dwarkesh Patel Podcast, April 2026. See footnote 5. Full quotation: “Why are you causing one layer of the AI industry to lose an entire market so that you could benefit from another layer of the AI industry? There are five layers, and every single layer has to succeed.”
14. Jensen Huang, remarks to reporters at the APEC CEO Summit, Gyeongju, South Korea, October 31, 2025. Reported by: CNBC — Nvidia’s Huang Doesn’t Buy the National Security Concerns Over Selling Chips to China. Full quotation: “It’s in the best interest of America to serve that China market. It’s in the best interest of China to have the American technology.”
15. NVIDIA, SEC Form 8-K, Q1 FY2026. See footnote 6. Additional detail from: NVIDIA Form CORRESP, SEC filing: “The H20 data center product was designed specifically for the Chinese market in conformance with specific USG license requirements, and there is not a market outside of China for H20.”
16. Tom’s Hardware — H200 export saga. Tom’s Hardware — The Nvidia H200 Export Saga, As It Happened. “Nvidia’s most advanced generation — the Blackwell architecture (B100/B200/GB200) — remained off-limits… The H200 is one generation behind Blackwell and substantially less powerful, yet still transformative for Chinese capabilities.”
17. Reuters, as reported in: Network World — Nvidia H200 Chips in China: US Says Yes, China Says No. “Chinese customs authorities told customs agents this week that [the chips] are not permitted to enter China.” This occurred one day after the BIS formally approved H200 exports, January 2026.
18. Jensen Huang, Dwarkesh Patel Podcast, April 2026. See footnote 5. Full quotation: “We want to make sure that all the AI developers in the world are developing on the American tech stack, and making the contributions, the advancements of AI — especially when it’s open source — available to the American ecosystem. It would be extremely foolish to create two ecosystems: the open source ecosystem, and it only runs on a foreign tech stack, and a closed ecosystem that runs on the American tech stack.”
19. MSN/financial-news.co.uk, Nvidia’s China AI Chip Market Share Falls to Zero. “Huawei has rapidly filled the void, with AI chip revenue projected to reach $12 billion in 2026 and a 60% market share in China by year-end.”
20. Center for Strategic and International Studies (CSIS), If Compute is the New Oil, War in the Gulf Significantly Raises the Stakes, March 24, 2026. CSIS — If Compute is the New Oil. The analysis notes: “Approvals to export advanced Nvidia Blackwell chips to Emirati G42 and Saudi Humain in late 2025 were subject to both companies meeting strict security and reporting requirements.”
21. Graham Allison, dialogue with CCG, as published in China-US Focus — Can China and US Escape Thucydides Trap?; and Springer — Thucydides’s Trap Revisited. Allison’s full characterization: “The defining feature of the relationship between the US and China today, for as far ahead as I can see, will be a ruthless rivalry. So, a competition in which a rising China — which is seeking to ‘make China great again’ — will continue as it has for a generation, rising, and becoming stronger, and as it does so, encroaching on positions and prerogatives that Americans have grown accustomed to.”



