The Thirty-Second War
Somewhere in eastern Ukraine, an AI-enabled drone has just compressed the entire history of military strategy into thirty seconds. Target identified through satellite imagery and analyzed by machine learning algorithms. Flight path calculated. Jamming countermeasures anticipated. In the final phase of flight, the drone's AI locks onto its pre-identified target independently—Russian electronic warfare having forced Ukrainian engineers to build backup systems that can function when GPS and communication links fail. From detection to strike: thirty seconds.
This is not a thought experiment. According to CSIS analysis of the Russia-Ukraine conflict, Ukrainian forces have reduced targeting cycles to just over thirty seconds through AI-assisted systems, fundamentally compressing the observe-orient-decide-act loop that has defined warfare for generations. Ukraine has employed AI-powered facial recognition to identify over 250,000 Russian soldiers operating in the country, creating a database used for both military targeting and psychological warfare—notifying families in Russia of casualties.
To be sure, critics from the tradition of critical technology studies would argue that I'm presenting this as inevitable technological progress rather than a political choice to remove human judgment from life-and-death decisions. They're not entirely wrong. The thirty-second targeting cycle represents a choice—by Ukrainian forces under existential threat—about acceptable trade-offs between speed and deliberation. What strikes me is not that this choice was predetermined by technology, but rather that once one side makes it, the pressure on adversaries to match that capability becomes intense. This is the security dilemma playing out at machine speed.
What the Ukraine example reveals is the broader transformation of artificial intelligence from a commercial technology into a fundamental instrument of state power. Whether that transformation leads to permanent fragmentation or merely to contested integration within an interdependent system remains genuinely uncertain. We are witnessing extraordinary capital concentration, geopolitical tensions creating real friction in technology flows, and vertical integration at unprecedented scale. Whether these forces prove self-reinforcing (driving toward separate technological ecosystems) or self-limiting (creating counterpressures for integration) will depend on political choices not yet made.
This is not the only future possible from where we stand today.
The Cycles of Integration and Specialization
I've written before about how the personal computer and cloud computing eras were defined by horizontal specialization—Intel made chips, Microsoft made operating systems, Dell assembled hardware, and thousands of software vendors built applications atop these standardized layers. This modular architecture enabled extraordinary innovation because companies could specialize in specific layers without mastering the entire vertical.
That era is ending—or more precisely, being transformed. But this is not the first time we've witnessed this pattern. The history of management theory offers a useful lens.
In 1937, Ronald Coase asked the foundational question: why do firms exist at all? If markets coordinate economic activity efficiently through prices, why organize production inside companies? His answer: firms exist when the transaction costs of market coordination exceed the costs of internal management. When it's cheaper to coordinate internally than to negotiate thousands of contracts with suppliers, you integrate vertically.
Alfred Chandler's "The Visible Hand" (1977) documented how this played out in the late 19th century. Railroads, steel companies, and oil refineries integrated vertically because the transaction costs of coordinating complex, interdependent operations through markets were prohibitive. Carnegie Steel owned iron mines, coke ovens, railroads, and mills because the performance requirements of making high-quality steel at scale demanded end-to-end control.
But Clayton Christensen observed that industries cycle between integration and modularity. When performance is insufficient, companies integrate to optimize the full stack. When performance becomes "good enough," modular specialists emerge, offering flexibility and cost advantages. The PC industry followed this pattern: early computers (IBM mainframes) were vertically integrated, then PCs became modular, then smartphones shifted back toward integration (Apple's vertical approach).
The question is: where does AI infrastructure fall in this cycle?
The capital expenditure numbers suggest we're entering an integration phase of unprecedented scale. In 2025, the major hyperscalers—Amazon, Microsoft, Google, and Meta—spent $405 billion on AI infrastructure, exceeding analyst forecasts by $130 billion. This represents 38-40% of all S&P 500 capital expenditures. These four companies are allocating capital at levels that rival the GDP of small nations.
More revealing is how this capital deploys. Approximately 50-60% flows directly to processors and specialized chips—but what's changing is who makes those chips. Amazon's custom Trainium chips have reached multi-billion-dollar run rates, growing 150% quarter-over-quarter. Google's Tensor Processing Units number in the hundreds of thousands. Microsoft's CTO stated the goal of "mainly Microsoft silicon in datacenters"—remarkable for a company that spent decades as Intel's most important software partner.
This shift represents vertical integration at a scale unprecedented in the technology industry. The reason is not primarily cost, though Amazon claims Trainium delivers up to 70% lower cost per inference. The reason is optimization—and here Coase's logic applies directly.
For AI workloads, coordinating through markets appears to carry transaction costs that make it inefficient. Google's Ironwood TPU delivers 42.5 exaflops across 9,216-chip superpods—twenty-four times more powerful than the world's largest supercomputer—precisely because hardware and software were co-designed. The coordination required to achieve this performance through negotiated contracts with specialized chip vendors would be prohibitively complex. The transaction costs of specifying requirements, managing interfaces, coordinating updates, and optimizing across layers exceed the costs of bringing chip design in-house.
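To make the co-design point concrete, here is a quick back-of-the-envelope calculation: a minimal Python sketch using only the figures quoted above. The per-chip number it derives is an implication of those figures, not a separately sourced specification.

```python
# What do the quoted superpod figures imply per chip?
# Figures from the text: 42.5 exaflops across a 9,216-chip Ironwood superpod.

pod_exaflops = 42.5      # quoted aggregate throughput of one superpod
chips_per_pod = 9_216    # quoted chip count per superpod

pod_flops = pod_exaflops * 1e18              # exaflops -> FLOP/s
per_chip_flops = pod_flops / chips_per_pod

print(f"Implied per-chip throughput: {per_chip_flops / 1e15:.1f} petaflops")
# ~4.6 petaflops per chip -- a pod-level number that only materializes if
# interconnect, compiler, and model sharding are co-designed with the silicon.
```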
But here's where the management theory perspective adds nuance: vertical integration is not the only viable strategy, and successful alternatives suggest we're not converging on a single winner-takes-all outcome.
Apple spent just $12.7 billion on capex in fiscal 2025—a fraction of competitors' $70-140 billion—yet remains highly competitive in AI. The company pursues a hybrid model: custom silicon for on-device processing while renting substantial cloud capacity. This is classic Coasean logic: integrate where transaction costs are high (custom silicon tightly coupled to iOS), use markets where transaction costs are manageable (cloud infrastructure that can be rented).
Similarly, specialized AI infrastructure providers like CoreWeave and Lambda Labs are raising billions pursuing horizontal strategies—providing GPU infrastructure without full-stack solutions. Their success suggests that transaction costs in some parts of the AI stack remain low enough that specialized providers can compete, even against hyperscaler integration.
The open-source community represents another complicating factor. Meta's Llama 3, Mistral's models, and the broader Hugging Face ecosystem are designed explicitly to run on diverse hardware across various infrastructures. This is modularity by design, prioritizing accessibility over optimization. Chinese developers fine-tune Llama, European startups deploy Mistral, American enterprises run both—a powerful convergence mechanism operating through bottom-up developer communities.
The critical question is whether AI infrastructure has reached the "good enough" threshold where modular specialists can compete, or whether performance requirements demand continued integration. Oliver Williamson's work on asset specificity suggests the answer depends on how specialized AI infrastructure needs to be. If custom silicon and co-designed hardware-software stacks provide decisive advantages, integration will dominate. If performance becomes "good enough" with standardized components, modularity will reassert itself.
I don't yet know which outcome prevails—and I suspect multiple strategies will coexist, with different companies making different trade-offs between performance, capital efficiency, flexibility, and accessibility.
The $1.15 Trillion Question: Who Captures the Value?
Goldman Sachs projects that hyperscaler capital expenditures from 2025 through 2027 will total $1.15 trillion. But here's the question that matters more than the aggregate number: who captures the value generated by this extraordinary capital deployment?
This is not merely about financial returns. It's about the fundamental distribution of AI's productivity gains—and the answer has profound implications for inequality, labor markets, and political stability that may prove more consequential than the geopolitical fragmentation dynamics I'll discuss later.
The evidence suggests AI infrastructure concentration is driving wealth concentration at unprecedented scale. Microsoft guided that fiscal 2026 capex growth will exceed fiscal 2025's 58% rate, suggesting spending above $140 billion. Amazon plans to exceed 2025's $125 billion. Google projects "significant increases" beyond $91-93 billion. Meta guided that 2026 capex will be "notably larger" than 2025's $70-72 billion, potentially exceeding $100 billion—yet Meta generates zero direct AI revenue. Every dollar must be justified by improved advertising performance.
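Microsoft's $140 billion figure is an implication of the growth guidance rather than a disclosed target. Here is a minimal sketch of the arithmetic, assuming the $88.2 billion fiscal 2025 base Microsoft reported (discussed further below):

```python
# If fiscal 2026 capex grows faster than fiscal 2025's 58% rate, what does
# that imply in dollars? Assumes the $88.2B FY2025 base Microsoft reported;
# the dollar threshold is an implication, not company guidance.

fy2025_capex_b = 88.2    # Microsoft fiscal 2025 capex, in $ billions
fy2025_growth = 0.58     # fiscal 2025 year-over-year growth rate

fy2026_floor_b = fy2025_capex_b * (1 + fy2025_growth)
print(f"Implied FY2026 capex floor: ${fy2026_floor_b:.0f}B")
# ~$139B: anything meaningfully above last year's growth rate lands
# "above $140 billion."
```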
If the bet pays off, the returns on this infrastructure investment flow to shareholders. The productivity gains it enables—task automation, workflow acceleration, entire job categories eliminated—likewise flow primarily to capital owners, not to the workers whose jobs are automated.
My research uncovered labor market data I initially relegated to footnotes but that may be the most important part of this story:
Entry-level tech hiring collapsed 50% from 2022 to 2024
New graduates face 30% unemployment rates in tech fields
Application-to-offer ratios: 600:1 (from 150:1 in 2021)
Time to first job: 8.6 months (from 3.2 months)
Meanwhile, workers with AI skills command 28-35% wage premiums. The labor market bifurcates: high-skill AI roles see +28% wage growth, mid-skill routine cognitive work sees -3%, and entry-level positions disappear. This is not incidental to AI infrastructure concentration; it is the mechanism through which productivity gains flow to capital rather than labor.
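The deterioration is easier to grasp as multiples. A quick sketch, using nothing beyond the figures quoted above:

```python
# Express the entry-level job market deterioration as multiples of the
# earlier baselines quoted above.

apps_per_offer_before, apps_per_offer_now = 150, 600   # applications per offer
months_to_job_before, months_to_job_now = 3.2, 8.6     # months to first job

print(f"Applications per offer: {apps_per_offer_now / apps_per_offer_before:.0f}x worse")
print(f"Time to first job:      {months_to_job_now / months_to_job_before:.1f}x longer")
# 4x more applications per offer and ~2.7x longer searches, on top of a 50%
# drop in entry-level hiring: the bottom rung of the ladder is disappearing.
```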
Retraining programs—the standard economist's response—show 9-22% success rates. Workers over 45 report 3x rejection rates even after completing AI certifications. Credential inflation has made entry nearly impossible: entry-level AI roles now require master's degrees plus two years of experience.
The political implications are profound. Meta's $70-100 billion AI infrastructure investment must be justified by improved advertising performance—more effective surveillance, more precise targeting, more extraction of value from engagement. Every dollar flows to shareholders, not to users whose data generates value or workers displaced by automation. This is capital-biased technological change at a scale dwarfing previous automation waves.
Until we see evidence that AI productivity gains flow more broadly—through wage growth, successful retraining, or political mechanisms like progressive taxation—the "$1.15 trillion question" is not whether this investment responds to genuine demand. It's who captures the value generated, and through what mechanisms productivity gains are distributed—or concentrated.
This distributional question may prove far more politically destabilizing than geopolitical fragmentation.
Three Futures, Not One: The Contested Integration of Global AI
While American hyperscalers race to spend hundreds of billions, something more fundamental is happening: the global technology industry is undergoing a transformation whose endpoint remains genuinely uncertain.
I initially framed this as "permanent fragmentation into three separate ecosystems." That was too deterministic. What we're witnessing is contested integration—a struggle over the terms of global AI development, with multiple possible futures depending on political choices not yet made.
The Western Ecosystem is dominated by OpenAI, Anthropic, Google, Microsoft, Amazon, and Meta—companies controlling the most advanced foundation models and largest cloud infrastructures, benefiting from U.S. capital markets' willingness to fund massive capex programs. This ecosystem features private-sector leadership with government support through the CHIPS Act ($52.7 billion) and, under the Trump administration, aggressive deregulation and infrastructure permitting acceleration.
The Chinese System is built around Alibaba's Qwen, Baidu's ERNIE, ByteDance's Doubao, and sophisticated domestic alternatives like DeepSeek. The scale is formidable: 515 million Chinese users (38% of the global user base), 38% of global AI investment ($47.8 billion in government funding), 200+ large language models, and 40,000+ AI applications. China is developing domestic semiconductors to bypass U.S. export controls: T-Head's RISC-V processors, Huawei's Ascend 910B (comparable to NVIDIA's H100), and SMIC's 7nm production despite restrictions.
The Regional Ecosystem includes the EU's €200 billion InvestAI initiative and €43 billion Chips Act targeting 20% global chip manufacturing by 2030, plus specialized efforts like UAE's Falcon models. Europe emphasizes regulatory frameworks (EU AI Act), data localization, and alignment with European values rather than either American market-driven innovation or Chinese state-directed coordination.
Here is why I think the "three separate ecosystems" framing was incomplete:
Evidence of fragmentation pressures:
China's Great Firewall blocks Western AI services
U.S. export controls restrict chip sales, forcing domestic alternatives
EU AI Act creates requirements not mirrored elsewhere
Academic collaboration declining (though 21% of Chinese AI papers still include U.S. co-authors)
Immigration restrictions limiting talent flows
Evidence of integration forces:
TSMC sells billions of dollars' worth of chips to both Western countries and China
International companies maintain presence across regulatory environments
Open-source models (Llama, Mistral) used globally
Developer communities converging on shared tools (Hugging Face, GitHub, Kubernetes)
Commercial incentives for interoperability remain powerful
The Trump administration's contradictory policies reveal the tension. Executive Order 14318 accelerates data center permitting, yet the proposed 100% semiconductor tariffs would increase AI server costs by 75%. Most tellingly, the administration negotiated a 15% revenue-sharing arrangement allowing NVIDIA and AMD to resume chip sales to China: an arrangement that monetizes American technological leadership while handing China advanced chips, undermining the national security rationale and raising constitutional questions about taxing exports. This is not a coherent decoupling strategy; it is improvisation reflecting crosscutting pressures.
China's response reveals a preference for selective self-sufficiency rather than total autarky. When allowed to purchase NVIDIA chips, China eagerly imports them. When controls tighten, China invests in domestic alternatives. This is a hedging strategy, not ideological separation.
Europe explicitly pursues "strategic autonomy" while maintaining transatlantic cooperation—60%+ of EU cloud workloads remain on AWS/Azure/GCP even as governments invest billions in alternatives.
The Energy Constraint: From Preference to Binding Limitation
Even as companies commit hundreds of billions, a more fundamental constraint is emerging: power.
U.S. data centers consumed 183 terawatt-hours in 2024, roughly 4% of total U.S. electricity. Consumption is projected to reach 426 terawatt-hours by 2030, a 133% increase. By 2035, projected power demand reaches 123 gigawatts, a thirty-fold increase from roughly 4 gigawatts in 2024.
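A quick sanity check on those projections converts annual consumption into average continuous power. This is a sketch; the gigawatt figures in the text appear to track a narrower slice of demand than the economy-wide terawatt-hour totals, so the two series should not be expected to line up one-to-one.

```python
# Convert annual energy (TWh) into average continuous power (GW) and
# sanity-check the quoted growth rates. Average draw is not peak capacity,
# and the 4 GW / 123 GW figures appear to be on a narrower basis than total
# data-center consumption, so treat these as rough comparisons only.

HOURS_PER_YEAR = 8_760

def avg_gw(twh_per_year: float) -> float:
    """Average continuous power implied by an annual energy total."""
    return twh_per_year * 1_000 / HOURS_PER_YEAR   # TWh -> GWh, then per hour

dc_2024_twh, dc_2030_twh = 183, 426
print(f"2024 average draw: {avg_gw(dc_2024_twh):.0f} GW")            # ~21 GW
print(f"2030 average draw: {avg_gw(dc_2030_twh):.0f} GW")            # ~49 GW
print(f"2024 -> 2030 growth: {dc_2030_twh / dc_2024_twh - 1:.0%}")   # ~133%
print(f"2024 -> 2035 capacity multiple: {123 / 4:.0f}x")             # ~31x
```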
The problem is not generation capacity but transmission infrastructure and grid interconnection timelines. Extending high-capacity power lines can take four to ten years, primarily for securing easements and regulatory approvals. According to Deloitte, 72% of power/datacenter executives characterize grid capacity as "very or extremely challenging."
Microsoft spent $88.2 billion on capex in fiscal 2025, with roughly half of Q1 FY2026's $34.9 billion devoted to "long-lived assets" including $11.1 billion in finance leases for datacenter sites. Yet Microsoft explicitly guided it "expects to be capacity constrained through at least the end of our fiscal year" despite planning to "increase total AI capacity by over 80%."
Read that again: Microsoft is spending nearly $90 billion annually and still expects to remain capacity constrained. The constraint is not capital—Microsoft has ample capital—but physical infrastructure to deliver power at the pace the company wants. Capital can buy datacenter construction. It cannot accelerate grid interconnection timelines governed by regulatory processes and physical construction constraints.
This creates what may be the most durable advantage in AI infrastructure: first-mover positioning on power access. Companies with existing datacenters in locations with available power have leads competitors cannot overcome through capital alone. If interconnection takes 7-10 years, decisions made in 2025 determine competitive positioning in 2032-2035.
Ukraine provides a visceral demonstration of energy infrastructure's strategic importance. Russia systematically targeted Ukrainian power infrastructure, reducing generating capacity to approximately one-third of pre-invasion levels. The lesson: energy infrastructure is not merely an economic resource but a strategic asset and a potential weapon.
If power infrastructure cannot scale at the pace companies want to deploy AI infrastructure, then energy becomes the binding constraint determining which countries and regions achieve AI dominance—not capital availability, not technical expertise, but gigawatts and the political will to prioritize their allocation to computation over other uses.
The Taiwan Chokepoint: Irreducible Fragility
Every analysis of AI power concentration must confront a single fact: roughly 90% of the world's most advanced semiconductors come from one island, Taiwan.
TSMC is the world's largest and most advanced chip foundry, producing processors for AI systems, smartphones, automobiles, datacenters, and virtually every other sophisticated electronic device. It sells billions of dollars' worth of chips to both Western countries and China, leaving every major power dependent on a single company in one of the world's most geopolitically contested regions.
The nightmare scenario has two variants, both catastrophic. If China invades Taiwan, TSMC's facilities could be seized, potentially allowing China to cut off advanced chip sales to U.S. allies. Alternatively, facilities could be destroyed in conflict, creating "an economic crisis the likes of which we have not seen since the Great Depression" as global advanced semiconductor supply disappears.
Wargaming exercises in 2025 examined invasion scenarios twenty-four times. The conclusion: while the U.S., Taiwan, Japan, and their allies would likely defeat an invasion and preserve Taiwan's independence, the conflict would be extraordinarily costly. Even measures far short of conflict carry steep costs. CSIS estimated that the proposed 100% semiconductor tariffs could increase AI server costs by 75%, rendering 15 to 20 planned infrastructure projects unviable, equivalent to $75-100 billion in additional costs over five years.
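The tariff estimate can be roughly sanity-checked. A minimal sketch follows, assuming (my illustrative assumption, not a stated CSIS input) that tariffed silicon accounts for roughly three-quarters of an AI server's cost, so a 100% tariff on that portion raises the total by roughly that share; the per-facility figure simply divides out the quoted ranges.

```python
# Rough plausibility check on the tariff estimate. The 75% silicon share of
# server cost is an illustrative assumption, not a figure from CSIS.

chip_share_of_server_cost = 0.75   # assumed share of server cost subject to tariff
tariff_rate = 1.00                 # proposed 100% tariff

server_cost_increase = chip_share_of_server_cost * tariff_rate
print(f"Implied server cost increase: {server_cost_increase:.0%}")    # ~75%

# Implied cost per shelved facility, dividing out the quoted ranges.
extra_cost_b = (75, 100)      # $B in additional costs over five years
facilities = (15, 20)         # projects rendered unviable
low = extra_cost_b[0] / facilities[1]
high = extra_cost_b[1] / facilities[0]
print(f"Implied cost per facility: ${low:.1f}B to ${high:.1f}B")
# Roughly $3.8B to $6.7B per facility, a plausible scale for large AI buildouts.
```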
Taiwan's response is "semiconductor diplomacy." TSMC has committed more than $165 billion to U.S. manufacturing capacity, particularly in Arizona, and Taiwan has deepened partnerships with Japan. These moves align with the "Chip 4" alliance of the U.S., Japan, Taiwan, and South Korea, which coordinates semiconductor strategy.
Yet Taiwan faces a profound paradox. The "silicon shield" concept—that Taiwan's semiconductor role deters military aggression—potentially loses efficacy as production diversifies. By reducing global technological vulnerability, Taiwan may reduce the deterrent against invasion. The silicon shield works only so long as everyone needs it; diversification undermines the shield even as it reduces vulnerability.
Moreover, Taiwan faces severe talent challenges. In 2022, the industry counted 35,167 unfilled engineering positions, up from 27,701 in 2021. TSMC's Arizona facility has been delayed repeatedly, from 2024 to 2025 and then to 2028, specifically due to a "shortage of specialist workers."
This talent shortage suggests near-term diversification will be more limited than political announcements imply. The U.S. CHIPS Act allocated $52.7 billion. The European Chips Act mobilized €43 billion. These are substantial, but translating funding into operational fabrication at Taiwan-comparable scale will take a decade or more.
Until production is genuinely diversified—operational at scale, not merely announced—the entire edifice of AI development rests on cross-strait stability. I find this genuinely terrifying.
What This Means: Three Scenarios for AI's Next Decade
In 1949, the Cold War bifurcated global technology into Western and Soviet systems. For forty years, these ecosystems developed separately with fundamentally incompatible standards. When the Cold War ended, integration proved difficult even when political barriers fell.
I initially invoked this analogy to argue we're witnessing comparable fragmentation. I now think this analogy is misleading in important ways.
Cold War technological bifurcation was sustained by conditions that don't fully apply to AI:
Ideological incompatibility: Communism vs capitalism represented fundamentally opposed economic systems. The U.S.-China relationship involves two market economies with state intervention, differing in degree not kind.
Physical separation: Iron Curtain, Berlin Wall, militarized borders physically prevented movement. Despite decoupling rhetoric, annual U.S.-China trade exceeds $700 billion.
Explicit autarky: Soviet bloc pursued economic self-sufficiency as ideological goal. China pursues selective self-sufficiency while eagerly importing when possible.
Different security dynamics: Cold War stability depended on mutual assured destruction. U.S.-China features economic mutual dependence creating different incentive structures.
What we're witnessing is better described as contested integration—a struggle over terms of global AI development—rather than inevitable fragmentation.
Three scenarios capture possible futures, with outcomes depending on political choices in 2026-2028:
Scenario 1: Managed Integration (30% probability)
Conditions: Commercial incentives for interoperability overwhelm political fragmentation pressures. Costs of duplication (separate AI stacks, semiconductor supply chains, talent pools) prove prohibitive. Companies successfully lobby to preserve market access. U.S.-China negotiate AI cooperation framework in select domains while maintaining strategic competition.
Outcome: Continued friction and regulatory divergence, but substantial interdependence persists. Companies operate globally with different compliance frameworks (similar to current data protection landscape). Open-source continues converging technical standards. This resembles current U.S.-China trade: intense competition in strategic domains, continued commerce in many others.
Key indicators: Whether semiconductor tariffs are imposed or negotiated into exemptions. Whether academic collaboration stabilizes. Whether companies maintain multinational structures.
Scenario 2: Contested Fragmentation (50% probability)
Conditions: Political imperatives drive meaningful separation in critical domains (military AI, surveillance, certain foundation models) while commercial domains remain integrated. Neither comprehensive integration nor complete decoupling proves politically sustainable.
Outcome: Three distinct regulatory regimes with different AI rules, but companies navigate these differences as they currently navigate different tax or labor laws. Certain technologies fragment by geography, but development tools, open-source models, and commercial applications remain largely global. Talent flows constrained but continue through multinationals and academic exchanges. Messy coexistence rather than clean separation.
Key indicators: Whether TSMC successfully diversifies while maintaining China sales. Whether open-source bridges geographic divides. Whether Europe sustains technology sovereignty investment.
Scenario 3: Deep Fragmentation (20% probability)
Conditions: Major crisis forces choosing sides. Taiwan conflict disrupts semiconductor supply and forces complete separation. U.S. imposes comprehensive technology embargo. China retaliates with rare earth cutoffs and cyberattacks. Academic collaboration ends. Companies forced to choose China or the West.
Outcome: Two incompatible ecosystems (Chinese and Western, with Europe subsumed into the Western bloc) develop with minimal interaction. Massive duplication. The Global South is forced to choose sides. Innovation slows due to loss of scale economies. This is permanent fragmentation, but I assess it as least likely because the economic costs are high and the ideological drivers are weaker than during the Cold War.
Key indicators: Taiwan crisis escalation. Complete U.S.-China trade breakdown. Formation of explicit alliance blocs with mutual exclusion.
The mechanisms driving toward integration: commercial incentives for global markets, developer communities converging on shared tools, open-source technical standards transcending borders, costs of duplication creating accommodation pressure.
The mechanisms driving toward fragmentation: strategic competition over AI's military/economic implications, nationalist technology policies, data localization, security dilemmas.
Which mechanisms dominate depends on choices not yet made. U.S. semiconductor tariff decisions. Chinese strategic patience on Taiwan. European political will. Corporate lobbying effectiveness. Electoral outcomes.
I genuinely don't know which scenario unfolds. But I know this: decisions in 2025-2026 about infrastructure buildout, geopolitical alignment, regulatory frameworks, and capital deployment will shape the global order for the next decade.
The thirty-second targeting cycle in Ukraine is not merely a tactical innovation. It is a preview of a world where technology and geopolitics are inseparable, where speed and concentration of power are decisive, and where choices about technology development have profound consequences extending far beyond quarterly earnings.
Three questions should guide our thinking:
First: Who captures AI infrastructure value? Distributional consequences within societies—whether productivity gains flow to capital owners or are shared broadly—may prove more politically destabilizing than fragmentation between societies. Current trajectory concentrates wealth at unprecedented scale while displacing workers without adequate sharing mechanisms. This is politically unsustainable regardless of which geopolitical scenario unfolds.
Second: What values are encoded in AI systems? Treating current development paths as technologically inevitable rather than politically chosen obscures whose interests these systems serve and what alternatives are foreclosed. The choice to invest $1.15 trillion in certain applications (advertising optimization, autonomous weapons, surveillance) rather than others (education, healthcare, climate solutions) reflects power structures, not technical requirements.
Third: Can commercial incentives for cooperation overcome political pressures for fragmentation? The answer determines whether we're entering contested integration or deep bifurcation. I've laid out three scenarios; which unfolds depends on choices not yet made.
This is the future we are building. Whether it is the future we want is a question we should ask while the trajectory remains genuinely uncertain—before decisions harden into path dependencies that become difficult to reverse.
The foundations are not yet fully set. The choices still matter. That should be both sobering and empowering.
References & Sources
Management Theory:
Coase, Ronald. "The Nature of the Firm" (1937). Transaction cost economics.
Chandler, Alfred. "The Visible Hand" (1977). Vertical integration in industrial era.
Christensen, Clayton. "The Innovator's Dilemma." Integration-modularity cycles.
Williamson, Oliver. Asset specificity and make-vs-buy decisions.
Geopolitical Conflict Analysis:
Center for Strategic and International Studies (CSIS). "Technological Evolution on the Battlefield."
War on the Rocks. "The Middle East's AI Warfare Laboratory." April 2025.
Ukraine Ministry of Defence. AI-powered targeting systems.
Campaign to Stop Killer Robots. 2025 Annual Report.
Capital Expenditure Data:
Amazon, Microsoft, Alphabet/Google, Meta Q3 2025 Earnings Calls.
Goldman Sachs. Hyperscaler capex forecast: $1.15 trillion 2025-2027.
IO Fund. "Big Tech's $405B Bet."
Labor Market & Distributional Analysis:
Bureau of Labor Statistics. Real Earnings Summary 2024.
Acemoglu, Daron and Pascual Restrepo. "Automation and Labor Markets" (2024).
Trump Administration AI Policy:
Executive Orders 14179, 14277, 14318. AI leadership, preventing "woke AI," data center permitting.
White House. "America's AI Action Plan." July 23, 2025.
Semiconductor Supply Chain:
TSMC. $165B U.S. investment, Arizona facility delays.
Global Taiwan. "Taiwan's Shortage of Chipmakers." March 2025.
U.S. CHIPS and Science Act. $52.7 billion subsidies.
European Chips Act. €43 billion mobilized.
Energy Infrastructure:
Deloitte. "Data Center Infrastructure for AI." Power demand projections.
Pew Research. "Energy Use at US Data Centers." October 2025.
Brookings. "Ukraine's Energy Sector."
Technology Decoupling & Sovereign AI:
Bain. "Sovereign Tech in a Fragmented World: Technology Report 2025."
China Internet Network Information Center. 515M gen AI users.
European Commission. "Apply AI Strategy." October 2025.
Nature Index. "AI Research Collaboration Trends 2024."