Why the current AI infrastructure boom mainly enriches NVIDIA and other multinationals
The global race to build “AI Factories” is being sold as the next great industrial transformation. Governments promise sovereign compute, strategic autonomy, new datacentres, thousands of jobs and a productivity revolution powered by artificial intelligence. Yet the political language of national capability often hides a much narrower economic reality. In many cases, these projects do not create broad domestic value. They channel public attention, land, energy capacity, subsidies and procurement budgets toward a small set of multinational vendors that dominate chips, cloud infrastructure and financing. At the center of that system stands NVIDIA.
The recent Guardian investigation into the United Kingdom’s AI push offers an unusually clear view of how this model works. Public announcements described billions in investment, new datacentres and a sovereign supercomputing future. But the underlying evidence was far thinner. Some of the supposed investments were not new facilities at all, but leased space inside existing datacentres. Some commitments were framed as national infrastructure while consisting largely of imported equipment deployed by foreign-linked firms. Job claims were difficult to verify. Even flagship projects were advanced through press releases without strong audit mechanisms, robust contractual clarity or meaningful public scrutiny.
This matters because the UK case is not an isolated scandal. It reveals the economics of the contemporary AI build-out. When a country commits itself to hyperscale AI infrastructure built around proprietary accelerator chips, imported hardware stacks and contracts with cloud intermediaries, most of the value does not remain in the local economy. The country supplies power, land, planning permissions, political legitimacy and often fiscal incentives. But the high-margin layers of the value chain flow outward to chip manufacturers, hyperscale cloud providers, orchestration platforms and the financial actors who back the compute boom. The host economy absorbs the risk and the resource burden. The upstream suppliers capture the rents.
The scale trap
The strongest argument in favor of AI Factories is that only massive scale can produce competitiveness. In practice, scale often produces dependency. Large AI datacentres require enormous capital expenditure, power availability, cooling systems, grid upgrades, networking capacity and access to scarce advanced hardware. According to the International Energy Agency, global electricity consumption from datacentres is projected to reach roughly 945 TWh by 2030, with AI-optimized facilities driving a major share of that increase. In other words, governments may end up allocating scarce energy and infrastructure capacity to support an AI build-out whose primary beneficiaries are foreign technology supply chains rather than local productive ecosystems.
At the same time, the technological premise behind the hyperscale narrative is weakening. The claim that useful AI requires only the largest frontier models and the biggest GPU clusters is no longer credible. Stanford’s 2025 AI Index shows that inference costs for systems performing at GPT-3.5 level fell by more than 280-fold between late 2022 and late 2024. It also finds that leading open-weight models significantly narrowed the performance gap with closed models. That is a structural change. It means the economic and technical barriers to meaningful AI deployment are falling quickly, especially for organizations whose real needs are document analysis, workflow automation, search, translation, customer support, coding assistance and internal knowledge systems rather than frontier model training.
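To make that cost collapse concrete, a back-of-the-envelope calculation helps. The prices below are hypothetical round numbers chosen only to mirror the order of magnitude of the reported 280-fold decline, not actual provider quotes, and the workload figures are illustrative assumptions:

```python
# Illustrative arithmetic: what a >280-fold drop in inference cost means
# for a routine document-summarization workload. All numbers are assumed
# round figures for illustration, not AI Index data points.
PRICE_2022 = 20.0      # assumed $ per million tokens, late 2022
FOLD_DECLINE = 280     # reported minimum decline factor
price_2024 = PRICE_2022 / FOLD_DECLINE

tokens_per_doc = 2_000   # a few pages of text per document
docs_per_year = 50_000   # a mid-sized administration's annual workload
annual_tokens_m = tokens_per_doc * docs_per_year / 1e6  # millions of tokens

cost_2022 = annual_tokens_m * PRICE_2022   # $2,000/year at 2022 prices
cost_2024 = annual_tokens_m * price_2024   # roughly $7/year at 2024 prices
print(f"2022-era cost: ${cost_2022:,.0f}/year")
print(f"2024-era cost: ${cost_2024:,.2f}/year")
```

At that price level, inference cost stops being the binding constraint for everyday public-sector and SME use cases; the constraints that remain are governance, data control and integration, which is precisely where hyperscale procurement offers little help.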
The viable alternative is local, open-source and low-cost
If the objective is productivity, privacy, language adaptation, public-sector control and domestic capability, then the rational strategy is not to replicate the American hyperscale model. It is to build a distributed ecosystem around open-source software, open-weight models, efficient small language models, on-premise deployment, edge inference and interoperable infrastructure that can be audited and governed locally.
This is no longer a theoretical proposition. Small and medium-sized models have become capable enough for a wide range of practical use cases. Models like Llama, DeepSeek, Qwen3 and Gemma have demonstrated that compact models can deliver performance comparable to much larger systems while remaining small enough for local deployment. Mistral has likewise emphasized efficient open models designed for production use with lower latency and resource demands. The significance is strategic, not just technical. Once acceptable performance becomes available at modest compute cost, the center of gravity shifts away from giant datacentre projects and toward locally controlled AI stacks that organizations can actually own, adapt and maintain.
For public administrations and SMEs especially, this changes everything. A ministry, municipality, hospital, university or small firm does not need a national-scale GPU cathedral to summarize documents, support staff, classify records or build multilingual assistants. It needs secure deployment, clear governance, domain-specific fine-tuning, predictable cost and the ability to keep sensitive data under local control. Open models make that possible. They also spread knowledge more widely across domestic integrators, researchers and technical teams instead of locking expertise inside a handful of external vendors.
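The "locally controlled stack" described above can be sketched in a few lines. This is a minimal illustration, assuming an open-weight model served on-premise behind an OpenAI-compatible HTTP API (a pattern supported by servers such as vLLM and Ollama); the endpoint URL and model name are placeholder assumptions, and the point is that sensitive documents never leave the local network:

```python
# Sketch of an on-premise client for a locally served open-weight model.
# Assumes a server exposing an OpenAI-compatible chat-completions API at
# an assumed localhost URL; model name is a placeholder for whatever
# open-weight model the organization has deployed.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed
MODEL_NAME = "mistral-small"  # placeholder for the deployed model

def build_request(document_text: str) -> dict:
    """Build a chat-completion payload asking for a summary."""
    return {
        "model": MODEL_NAME,
        "messages": [
            {"role": "system",
             "content": "Summarize internal documents concisely."},
            {"role": "user", "content": document_text},
        ],
        "temperature": 0.2,  # low temperature for factual summaries
        "max_tokens": 300,
    }

def summarize(document_text: str) -> str:
    """Send the request to the local server and return the summary text."""
    payload = json.dumps(build_request(document_text)).encode("utf-8")
    req = urllib.request.Request(
        LOCAL_ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the endpoint sits inside the organization's own network, the same pattern extends to classification, translation and multilingual assistants by changing the system prompt, and the model can be swapped or fine-tuned without renegotiating any external contract.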
From infrastructure spectacle to productive capacity
None of this means compute infrastructure is unnecessary. It means compute should be subordinated to a digital sovereignty strategy rather than treated as a spectacle. Even Europe’s AI Factories are valuable only if they function as shared, open-access infrastructure for researchers, startups, universities and SMEs, not as public justification for a new wave of dependence on closed foreign stacks. The European framework already points in that direction by emphasizing access for SMEs and by increasingly framing open-source AI as a lever for innovation and sovereignty.
The real policy question is therefore straightforward. Does a country want to buy compute, or does it want to build capability? Buying GPUs and leasing cloud services may generate impressive headlines, but it rarely creates lasting autonomy. Building around open models, local datasets, research institutions, public digital infrastructure and domestic service providers creates knowledge, skills and control that remain inside the economy.
That is the key distinction. The current AI Factory boom, as often implemented, risks becoming a transfer mechanism from public ambition to multinational balance sheets. The sustainable path is different. It lies in low-cost, locally deployable, open-source AI systems that are auditable, energy-rational, adaptable to national languages and compatible with democratic oversight. That is how artificial intelligence becomes an economic tool, not just an infrastructure mirage.
Sources
- The Guardian, “Revealed: UK’s multibillion AI drive is built on ‘phantom investments’”. Documents how highly publicized UK AI investments involved leased capacity, unverified commitments and limited government auditing.
  https://www.theguardian.com/technology/2026/mar/09/revealed-uks-multibillion-ai-drive-is-built-on-phantom-investments
- Stanford HAI, “2025 AI Index Report”. Provides evidence on the steep decline in inference costs, annual efficiency gains and the narrowing gap between open-weight and closed models.
  https://hai.stanford.edu/ai-index/2025-ai-index-report
- International Energy Agency, “Energy and AI”. Details the projected surge in datacentre electricity demand and the central role of AI in driving that growth.
  https://www.iea.org/reports/energy-and-ai
- European Commission, “AI Factories”. Explains the EU’s policy framework for AI Factories and their intended public-interest role.
  https://digital-strategy.ec.europa.eu/en/policies/ai-factories
- European Commission, “Europe’s Open-Source AI Landscape: A lever for Innovation and Sovereignty”. Frames open-source AI as a strategic instrument for competitiveness and digital sovereignty in Europe.
  https://digital-strategy.ec.europa.eu/en/library/europes-open-source-ai-landscape-lever-innovation-and-sovereignty
- EuroHPC JU, “AI Factories Access Modes”. Shows that the public value of AI Factories depends on open and free access for startups and SMEs, not only on physical compute assets.
  https://www.eurohpc-ju.europa.eu/ai-factories/ai-factories-access-modes_en
- Mistral AI, “Mistral Small 3”. Presents an efficient open model explicitly optimized for high performance at a size suitable for local deployment.
  https://mistral.ai/news/mistral-small-3