The global narrative surrounding artificial intelligence has long been framed by abundance—the idea that intelligence, once digitized, could scale infinitely and be accessed universally. That assumption is now breaking down. AI is no longer merely software; it is infrastructure constrained by compute, energy, chips, and geopolitical boundaries. As these constraints intensify, a new organizing principle emerges: National Prioritization—the deliberate allocation of AI capabilities based on state interests rather than market forces.

In liberal economies, corporate leaders such as Elon Musk, Sam Altman, and Dario Amodei have demonstrated the ability to negotiate with, resist, or influence government demands. Yet even within these systems, the boundaries of autonomy are narrowing. In 2023, SpaceX restricted the military use of Starlink in Ukraine, raising questions about private authority over wartime infrastructure. SpaceX President Gwynne Shotwell said:

“Our intent was never to have them use it for offensive purposes.”¹

In 2026, the Pentagon moved aggressively to bring major AI firms into classified systems, while Anthropic reportedly resisted defense use cases it believed could weaken its safety restrictions.² This tension reveals a new reality: the machine is no longer just a product. It is becoming a national instrument.

By contrast, centralized systems demonstrate a different equilibrium. The Jack Ma episode remains a warning about how corporate autonomy can collapse when it conflicts with state authority. After Ma criticized China’s regulatory system, Ant Group’s IPO was halted, Alibaba came under antitrust scrutiny, and Ma largely disappeared from public view for an extended period.³

This divergence signals a deeper structural transformation. AI is becoming a strategically rationed resource, not unlike oil in the 20th century. As Joseph Nye famously framed power:

“Power is the ability to affect others to get the outcomes one wants.”⁴

Control over AI—its compute, models, chips, energy, networks, and deployment rights—now represents precisely such power. The implication is clear: AI will not be distributed evenly. It will be allocated.


Section 1 — Defining National Prioritization

National Prioritization means the strategic allocation of AI infrastructure, compute resources, data access, and model capability by nation-states in alignment with national security, economic advantage, and geopolitical survival.

Unlike normal markets, where allocation follows price, National Prioritization introduces hierarchy. It asks: when compute is scarce, when chips are restricted, when energy becomes insufficient, and when war forces emergency decisions, who gets priority access to the machine?

The hierarchy will likely look like this:

  1. Military and defense systems
  2. Intelligence and cybersecurity agencies
  3. Critical infrastructure: energy, hospitals, logistics, finance
  4. Strategic industries: semiconductors, aerospace, robotics, cloud
  5. Commercial AI products
  6. General public and foreign access

This is why the term matters. National Prioritization is not simply “AI policy.” It is the wartime logic of allocation applied to intelligence infrastructure. It is the point where AI stops behaving like a consumer technology and starts behaving like a strategic reserve.
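The hierarchy above can be made concrete with a minimal sketch. This is purely illustrative: the tier names, requester names, and GPU figures are invented for the example, and no government uses this exact mechanism. The sketch shows the core logic of National Prioritization, namely that under scarcity, requests are filled in strict tier order rather than by price.

```python
from dataclasses import dataclass

# Priority tiers mirroring the hierarchy above (lower number = higher priority).
# Tier names and numbers are illustrative, not drawn from any actual policy.
TIERS = {
    "military_defense": 1,
    "intelligence_cyber": 2,
    "critical_infrastructure": 3,
    "strategic_industry": 4,
    "commercial_ai": 5,
    "public_foreign": 6,
}

@dataclass
class ComputeRequest:
    requester: str
    tier: str
    gpus_needed: int

def allocate(requests, available_gpus):
    """Grant requests in strict tier order until capacity runs out."""
    granted = {}
    for req in sorted(requests, key=lambda r: TIERS[r.tier]):
        grant = min(req.gpus_needed, available_gpus)
        granted[req.requester] = grant
        available_gpus -= grant
    return granted

# Hypothetical demand exceeding a 50,000-GPU supply.
demand = [
    ComputeRequest("consumer_chatbot", "public_foreign", 40_000),
    ComputeRequest("hospital_network", "critical_infrastructure", 10_000),
    ComputeRequest("defense_simulation", "military_defense", 30_000),
]
print(allocate(demand, available_gpus=50_000))
```

Note what the example demonstrates: the consumer product, despite requesting first and requesting the most, is served last, and absorbs the entire shortfall. That inversion of market ordering is the essence of the framework.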


Section 2 — AI Growth and the Structural Emergence of Allocation Regimes

Artificial intelligence did not begin as a scarce system. It became one.

The early internet was built on a dream of openness. Bandwidth expanded, software replicated cheaply, and cloud services gave companies the impression that compute could be summoned on demand. AI inherited this myth of abundance. But frontier AI is different. It does not scale through software alone. It scales through electricity, advanced semiconductors, data centers, rare earth materials, cooling systems, submarine cables, cloud permissions, and political tolerance.

That is why the modern AI stack is not a neutral technology stack. It is an industrial stack. It is closer to oil refining, nuclear power, aircraft manufacturing, and military logistics than to a traditional software product. Nvidia GPUs, TSMC fabrication, ASML lithography, U.S. cloud infrastructure, export-control law, and power-grid access form the real skeleton of AI capability. Whoever controls these bottlenecks controls the practical ceiling of intelligence.

Chris Miller’s central insight in Chip War is that advanced semiconductors are made by a small number of companies, in a small number of places, through an extremely specialized supply chain.⁵ That is the physical basis of National Prioritization: if the supply chain is concentrated, access can be politically allocated.

The U.S. government’s semiconductor export controls against China show this clearly. In October 2022, the Bureau of Industry and Security imposed controls on advanced computing chips and semiconductor manufacturing equipment destined for China.⁶ In 2023 and 2024, those controls were expanded to cover additional advanced computing items and supercomputer end uses.⁷ By 2025, the United States had also attempted an AI Diffusion framework for advanced chips and model weights, even though the rule was later rescinded and revised by the next administration.⁸

This is National Prioritization in policy form. It does not say, “Let the market decide.” It says, “Certain actors may not receive frontier compute because their access could weaken national advantage.”

The same logic appears in the U.S. Department of Commerce’s proposed “Know Your Cloud Customer” rule. The proposal would require U.S. Infrastructure-as-a-Service providers to verify the identities of foreign customers through Customer Identification Programs, and to report when foreign persons use U.S. cloud infrastructure to train large AI models with potential malicious cyber applications.⁹ The Federal Register described a CIP as a program for collecting, verifying, storing, and maintaining identifying information about foreign customers.¹⁰

This turns AI access into something closer to banking compliance. The foreign user is no longer just a customer. The foreign user becomes a risk category. Compute becomes a permissioned resource.
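The “banking compliance” analogy can be caricatured in code. This is a hedged sketch only: the jurisdiction names, the FLOPs threshold, and the decision strings are invented for illustration and are not taken from the rule text. It shows the structural shift the paragraph describes, from open access to a verify-then-permit gate.

```python
# Illustrative only: categories and thresholds below are invented,
# not drawn from the actual proposed rule.
RESTRICTED_JURISDICTIONS = {"country_of_concern_A", "country_of_concern_B"}
LARGE_TRAINING_FLOPS = 1e26  # hypothetical reporting threshold

def access_decision(customer_jurisdiction, identity_verified, training_flops):
    """Return an access decision for a requested cloud training run."""
    if not identity_verified:
        # No Customer Identification Program record: no compute at all.
        return "deny: customer identification not satisfied"
    if (customer_jurisdiction in RESTRICTED_JURISDICTIONS
            and training_flops >= LARGE_TRAINING_FLOPS):
        # Large frontier-scale training by a flagged foreign customer
        # triggers reporting rather than automatic service.
        return "report: large AI training by foreign person"
    return "allow"

print(access_decision("country_of_concern_A", True, 1e27))
print(access_decision("allied_state", True, 1e27))
```

The design point is that the default branch is no longer “allow”; every request first passes an identity check, and scale itself becomes a regulatory trigger.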

At the same time, the United States has tried to promote the export of the American AI technology stack to allies and partners. A 2025 White House order framed AI as foundational to economic growth, national security, and global competitiveness, and argued that U.S. AI technologies, standards, and governance models should be adopted worldwide.¹¹ This is the dual logic of National Prioritization: restrict adversaries, equip allies, discipline the middle.

That is why AI allocation will not be governed by pure openness. It will be governed by blocs. The old internet asked, “Can you connect?” The AI era asks, “Are you trusted enough to compute?”


Section 3 — Gigarmageddon and the Physics of AI Scarcity

National Prioritization becomes even more urgent under what I call Gigarmageddon: the moment when exponential AI demand collides with insufficient power, insufficient grid capacity, insufficient cooling, and insufficient permitting speed.

AI is not magic. It is energy converted into computation, and computation converted into intelligence-like output. Every prompt, training run, agentic workflow, autonomous robot, defense simulation, satellite network, and inference API call depends on electricity. When the demand curve moves faster than the grid, the nation faces a prioritization crisis.

Vaclav Smil’s energy framework is essential here because it reminds us that civilization is not built on abstractions; it is built on energy conversions.¹² AI does not escape this rule. It intensifies it.

A trillion-line-code economy, a robotics economy, and an agentic-capitalism economy will require enormous data-center expansion. Hyperscalers are already investing at scales that look less like normal technology investment and more like national infrastructure programs. Microsoft, Amazon, Google, Meta, Nvidia, xAI, OpenAI, and Anthropic are no longer merely software actors. They are becoming participants in national industrial capacity.

This is why states matter. Michigan’s nuclear restart, Pennsylvania’s coal-plant extension debates, Texas’s ERCOT expansion, Virginia’s data-center power pressure, Arizona’s TSMC semiconductor buildout, and Indiana’s AWS incentives all belong to the same story: AI is forcing a reallocation of local and national infrastructure.

Under Gigarmageddon, the question becomes brutal:

Does the military get the compute first?
Does the hospital system get the compute first?
Does the electric grid get the compute first?
Does OpenAI, Google, Meta, or Amazon get the power contract?
Do consumers lose access before defense systems lose access?
Do foreign customers get throttled before domestic firms?

Markets can answer some questions in normal times. But in crisis, markets do not decide priority. Governments do.

This is why National Prioritization is the bridge between Compute Mercantilism and Gigarmageddon. Compute Mercantilism explains why nations hoard AI capacity. Gigarmageddon explains why there may not be enough energy to satisfy all demand. National Prioritization explains who gets access when both forces collide.


Section 4 — Geopolitical Case Studies: When the Machine Becomes Territory

Geopolitics is no longer external to technology. It is embedded inside technology. Every cloud region, GPU export license, satellite terminal, AI acquisition, rare-earth shipment, and social-media algorithm can become a site of national conflict.

Case 1 — Meta, Facebook, and China Blocking the Manus AI Deal

China’s reported decision to block Meta’s proposed acquisition of Manus AI is one of the clearest examples of National Prioritization in 2026. According to Reuters, China’s powerful state planner moved to discourage stake or asset transfers by homegrown technology companies to foreign investors without Beijing’s approval.¹³ AP reported that China blocked Meta’s acquisition of Manus after concerns about the transfer of advanced technology.¹⁴ The Guardian described the target as an AI agent developer and framed the decision as a signal of intensified scrutiny of U.S. investment in Chinese-linked AI assets.¹⁵

This case matters because Manus was not simply an app. It was an AI agent company. Agents are different from ordinary software because they can autonomously plan, execute tasks, manage workflows, optimize advertising accounts, and potentially act across digital systems. If Meta had integrated Manus deeply into Facebook, Instagram, and advertising infrastructure, the acquisition would not merely have transferred a company. It would have transferred an emerging layer of agentic capability.

That is why China’s intervention belongs in this paper. It shows that AI talent, AI agents, and AI workflows are becoming national assets. Beijing’s message was not only about Meta. It was about the principle that Chinese-origin AI capability cannot simply be purchased by an American platform and absorbed into a U.S.-controlled ecosystem.

In National Prioritization terms, China was asking: who gets the machine, and who owns the people who know how to build it?

Case 2 — TikTok, Location Data, Military Risk, and the Divestiture State

TikTok is another allocation case, but in reverse. Instead of China blocking U.S. acquisition, the United States pushed ByteDance toward divestiture because American officials viewed TikTok as a national security risk.

The strongest documented facts should be stated carefully. Public evidence showed that ByteDance employees used TikTok-related data to track journalists’ IP addresses and infer whether they were near suspected leakers. Forbes reported this in 2022, and Reuters later reported that ByteDance found employees had obtained TikTok user data of two U.S. journalists.¹⁶ The Guardian similarly reported that employees looked at IP addresses of journalists to determine whether they were in the same location as employees suspected of leaking information.¹⁷

The broader U.S. concern went beyond journalists. The Justice Department argued that TikTok posed a national-security threat because of access to vast amounts of American user data, including locations and private messages, and because of potential content manipulation.¹⁸ The Supreme Court upheld the TikTok divest-or-ban law in January 2025.¹⁹ The White House later described a framework under which TikTok’s U.S. operations would become majority-owned and controlled by U.S. persons.²⁰ Reuters reported in January 2026 that ByteDance said TikTok USDS Joint Venture LLC would secure U.S. user data, apps, and algorithms through privacy and cybersecurity measures.²¹

The military angle is important, but it must be framed precisely. The public record does not prove that TikTok itself systematically mapped U.S. military locations. The stronger, documented issue is that location data, social-media usage, and foreign-controlled platforms create operational-security risks for military and government personnel. The U.S. armed forces and many states restricted TikTok on government devices because of these risks.²² The Department of Justice’s broader data-security program also targeted access by countries of concern to Americans’ bulk sensitive personal data and U.S. government-related data.²³

This is National Prioritization through data sovereignty. The U.S. was not merely regulating speech. It was deciding that a foreign-adversary-controlled application could not retain unrestricted access to American users at national scale. The machine here was not just AI compute. It was attention, location, behavior, and algorithmic influence.

Case 3 — Huawei and the 5G Precedent

Huawei is the pre-AI case that explains the AI future. Before governments fought over large language models, they fought over 5G infrastructure. The United States restricted Huawei because it viewed telecommunications equipment as potential strategic infrastructure. Once communications networks become the nervous system of society, the vendor is no longer just a vendor. The vendor becomes a security question.

The Huawei precedent established a principle that now applies to AI: if a technology layer becomes critical enough, governments will not allow foreign control without scrutiny. 5G was the network layer. AI is the cognition layer. If 5G determined who connected, AI determines who reasons, predicts, automates, targets, filters, and commands.

This is why Huawei belongs in the same framework as Nvidia chips, TikTok data, Starlink terminals, and Manus agents. Each case shows that technology becomes political when it becomes infrastructural.

Case 4 — Rare Earths, Minerals, and the Physical Layer of AI

The rare-earth conflict demonstrates that AI sovereignty does not begin at the model layer. It begins underground. China’s dominance in rare-earth processing gives it leverage over batteries, magnets, chips, defense systems, robotics, and advanced manufacturing. When trade tensions rise, these materials become bargaining chips.

AI companies often speak in the language of models, parameters, tokens, and agents. But the geopolitical state speaks in the language of minerals, ports, fabrication plants, power plants, and export licenses. The model cannot run without the machine. The machine cannot exist without materials. The materials cannot move without geopolitical permission.

This is why National Prioritization is broader than AI ethics. It is the strategic allocation of the entire stack: minerals, chips, electricity, data centers, cloud permissions, models, networks, and applications.

Case 5 — Starlink and Private Control of Wartime Infrastructure

Starlink revealed a different kind of problem: what happens when private infrastructure becomes essential to war?

Reuters reported in February 2023 that SpaceX had curbed Ukraine’s use of Starlink for drones.²⁴ Shotwell said Starlink had been intended for humanitarian purposes such as broadband for hospitals, banks, and families, and that offensive use went beyond the agreement.²⁵

This is a defining case because it asks whether a private company can limit battlefield capability during an active war. If Starlink is essential to communications, then controlling Starlink means influencing military effectiveness. That is National Prioritization without formal national ownership: private infrastructure exercising sovereign-like discretion.

The future AI version is even more complex. What happens if a frontier model refuses a military use? What happens if a cloud provider throttles wartime inference? What happens if an AI company says its safety policy prevents a government-requested deployment? The Starlink case was a preview of the AI allocation crisis.

Case 6 — The Pentagon and Classified AI Systems

By May 2026, the U.S. military had reportedly reached agreements with major technology companies to use AI on classified systems. AP reported agreements involving firms such as Google, Microsoft, AWS, Nvidia, OpenAI, Reflection, and SpaceX, while noting Anthropic’s refusal to participate under certain conditions.²⁶ Reuters also reported that the Pentagon reached agreements with leading AI companies, including a Google deal for classified work.²⁷

This is National Prioritization becoming operational. The Pentagon is no longer merely studying AI. It is moving AI into classified networks, defense workflows, and military decision systems. In this environment, AI companies face a hard choice: remain commercial platforms, or become defense infrastructure.

The deeper implication is that the U.S. state is beginning to reserve frontier AI capability for national-security use. This does not necessarily eliminate commercial AI. But it changes the priority order. The military does not wait in the same line as advertisers, coders, students, or consumers. Under National Prioritization, the military moves to the front.


Section 5 — Corporate Strategy in a World of Allocated Intelligence

For technology companies, National Prioritization creates a new operating environment. The old Silicon Valley assumption was simple: build the product, scale globally, optimize growth, and let markets decide. That era is ending. AI companies now operate in a world where governments can restrict exports, block acquisitions, demand identity verification, require divestiture, subsidize domestic infrastructure, prioritize defense deployments, and punish companies that are seen as misaligned with national interests.

The first strategic requirement is jurisdictional awareness. A company can no longer treat the world as one market. The U.S., China, Europe, India, the Gulf states, and allied blocs will each develop different AI rules. A model that is legally deployable in one jurisdiction may be restricted in another. An acquisition that appears commercial in Silicon Valley may be treated as a strategic threat in Beijing.

The second requirement is compute resilience. Companies must secure access to GPUs, data centers, cloud capacity, and energy across multiple locations. A company that depends entirely on one cloud provider, one chip supplier, one country, or one power market is vulnerable to prioritization shocks.

The third requirement is energy strategy. AI firms must think like industrial firms. They need long-term power contracts, grid relationships, nuclear options, renewable portfolios, backup generation, and cooling plans. The winners in AI may not simply be the companies with the best models. They may be the companies with the most reliable power.

The fourth requirement is government alignment without total dependence. Companies need defense contracts, regulatory trust, and national-security credibility. But they must avoid becoming so closely tied to one state that they lose global legitimacy. This is the hard balance: too much independence invites government pressure; too much alignment invites foreign exclusion.

The fifth requirement is ethical clarity before crisis. The Anthropic-Pentagon dispute shows that safety principles become most difficult when national-security pressure rises. Companies must decide in advance what uses they will allow, what restrictions they will preserve, and what forms of deployment they will refuse.

The sixth requirement is supply-chain diplomacy. Nvidia must manage U.S. export controls and Chinese market demand. Apple must manage China manufacturing and U.S. political pressure. Meta must manage blocked acquisitions and platform sovereignty. Amazon and Microsoft must manage cloud access and government contracts. SpaceX must manage satellite infrastructure in war zones. OpenAI must manage global deployment while becoming strategically relevant to U.S. national power.

This is why AI corporate strategy is no longer only business strategy. It is geopolitical strategy.


Conclusion

AI was once imagined as a universally accessible layer of intelligence. That vision is giving way to a more constrained reality—one defined by scarcity, control, and strategic allocation. This paper introduced National Prioritization as the framework that explains this shift.

The key insight is simple but profound:

AI is no longer allocated by markets alone—it is allocated by power.

In times of stability, markets distribute resources. In times of crisis, states intervene. Under conditions of geopolitical rivalry or systemic constraint, the allocation of AI becomes a matter of national survival.

The implications are far-reaching. Nations will compete not only to build AI, but to control its distribution. Global inequality may deepen based on access to compute, energy, chips, and sovereign infrastructure. Corporate autonomy will increasingly be constrained by state priorities.

National Prioritization explains why Nvidia chips are restricted, why foreign cloud customers may face identity verification, why TikTok was forced toward U.S. ownership, why China blocked Meta from buying Manus, why Huawei was excluded from 5G networks, why Starlink’s use in Ukraine became controversial, and why the Pentagon is now moving frontier AI into classified systems.

Ultimately, the defining question of the AI era is not:

How powerful are the machines?

But rather:

Who gets to use them—and when?

That question, answered through National Prioritization, will define the balance of power in the 21st century.


Footnotes

1. Gwynne Shotwell / Reuters, “SpaceX curbed Ukraine’s use of Starlink internet for drones,” Reuters, February 9, 2023. (Reuters)
2. Associated Press, “U.S. military reaches deals with 7 tech companies to use their AI on classified systems,” May 1, 2026. (AP News)
3. Reuters, “Jack Ma loses title as China’s richest man after coming under Beijing’s scrutiny,” March 2, 2021. (Reuters)
4. Joseph S. Nye Jr., The Future of Power (PublicAffairs, 2011).
5. Chris Miller, Chip War, Simon & Schuster.
6. U.S. Bureau of Industry and Security, “Commerce Implements New Export Controls on Advanced Computing and Semiconductor Manufacturing Items to the PRC,” October 2022. (Federal Register)
7. U.S. Bureau of Industry and Security, “Commerce Strengthens Export Controls to Restrict China’s Capability to Produce Advanced Semiconductors for Military Applications,” December 2, 2024. (Bureau of Industry and Security)
8. Federal Register, “Framework for Artificial Intelligence Diffusion,” January 15, 2025; BIS rescission notice, May 13, 2025. (Federal Register)
9. U.S. Department of Commerce / BIS, “Commerce Proposes Rule to Advance U.S. National Security Interests,” January 29, 2024. (Bureau of Industry and Security)
10. Federal Register, proposed IaaS Customer Identification Program rule, January 29, 2024. (Federal Register)
11. White House, “Promoting the Export of the American AI Technology Stack,” July 23, 2025. (The White House)
12. Vaclav Smil, Energy and Civilization, MIT Press.
13. Reuters, “Blocking of Meta’s AI startup buy raises risk for cross-border China tech deals,” April 28, 2026. (Reuters)
14. Associated Press, “China blocks Meta from acquiring AI startup Manus,” April 2026. (AP News)
15. The Guardian, “China blocks $2bn Meta takeover of AI agent developer Manus,” April 27, 2026. (The Guardian)
16. Forbes / Emily Baker-White, “TikTok Spied On Forbes Journalists,” December 22, 2022. (Forbes)
17. Reuters, “ByteDance finds employees obtained TikTok user data of two U.S. journalists,” December 23, 2022; The Guardian, TikTok journalist-tracking report. (Reuters)
18. Reuters, “U.S. Supreme Court to consider TikTok bid to halt ban,” December 18, 2024. (Reuters)
19. U.S. Supreme Court, TikTok Inc. v. Garland, January 17, 2025. (Supreme Court)
20. White House, “Saving TikTok While Protecting National Security,” September 25, 2025. (The White House)
21. Reuters, “TikTok reaches deal for new U.S. joint venture,” January 22, 2026. (Reuters)
22. Military.com, “How TikTok Grew from a Fun App for Teens into a Potential National Security Threat,” January 17, 2025. (Military.com)
23. U.S. Department of Justice, Data Security Program and EO 14117 implementation. (Department of Justice)
24. Reuters, SpaceX Starlink Ukraine drone restrictions. (Reuters)
25. Business Insider, “SpaceX never intended Starlink internet to be ‘weaponized’ in Ukraine,” February 2023. (Business Insider)
26. Associated Press, Pentagon AI agreements, May 1, 2026. (AP News)
27. Reuters, “Pentagon reaches agreements with leading AI companies,” May 1, 2026. (Reuters)
28. The Guardian, “Pentagon inks deals with seven AI companies,” May 1, 2026. (The Guardian)
29. The Verge, “Pentagon strikes classified AI deals with OpenAI, Google, and Nvidia,” May 1, 2026. (The Verge)
30. Washington Post, “Top AI companies agree to work with Pentagon on secret data,” May 1, 2026. (The Washington Post)
31. Harvard Law School, “Is the new U.S. TikTok safer?” February 27, 2026. (Harvard Law School)
32. SCOTUSblog, “Supreme Court upholds TikTok ban,” January 17, 2025. (SCOTUSblog)
33. Cornell Journal of Law and Public Policy, “TikTok, PAFACA, and the New National Security Playbook,” January 13, 2026. (Cornell Law School)
34. Skadden, “Know Your IaaS Customer,” February 13, 2024. (Skadden)
35. Crowell & Moring, “Who I(aa)S Your Foreign Customer?” February 8, 2024. (Crowell & Moring – Home)
36. Arnold & Porter, “Commerce Department Proposes Rule on IaaS Product-Related Requirements,” January 30, 2024. (Arnold & Porter)
37. AP, “The Commerce Department updates its policies to stop China from getting advanced computer chips,” October 2023. (AP News)
38. Reuters, “U.S. says China’s Huawei can’t make more than 200,000 AI chips in 2025,” June 12, 2025. (Reuters)
39. The Guardian, “U.S. chip export controls are a ‘failure,’ Nvidia boss says,” May 21, 2025. (The Guardian)
40. White House, TikTok national-security divestiture framework. (The White House)