Why AI Companies Invest Hundreds of Billions

The race is not only about technology

The current artificial intelligence boom is often described as a technological race. That description is incomplete. It is also a race for infrastructure, market power and control over the next layer of the digital economy. The largest technology companies are investing hundreds of billions of dollars every year in data centres, chips, networks, energy contracts, research teams and model development, even though much of generative AI has yet to demonstrate durable profitability. This looks irrational only if AI is treated as just another software product sold through a subscription. It is more than that. It may become the basic interface through which people search, write, code, learn, buy, manage documents, use public services and organise work.

The central question is therefore not whether today’s chatbots are profitable enough to justify today’s spending. The real question is who will control the gateway between users, organisations, data and digital action. In the previous phase of the internet, power concentrated around search engines, social networks, mobile operating systems, cloud platforms and e-commerce. In the next phase, that gateway may be the AI agent: the system that answers, recommends, drafts, negotiates, automates and executes tasks on behalf of a person or an organisation. Whoever controls that layer may gain not only revenue, but strategic leverage over large parts of the economy.

Spending as defence against future irrelevance

This explains why the major firms are spending before the business model is fully settled. Google cannot allow search to be displaced by AI assistants controlled by others. Microsoft cannot risk losing its position in office software, developer tools and enterprise cloud services. Amazon needs AI demand to reinforce its cloud infrastructure business. Meta does not want its social platforms to depend on external model providers. OpenAI, Anthropic, xAI and other model companies are trying to become the default intelligence layer for businesses and consumers.

The immediate economics are difficult. Training and running frontier models is expensive. Free users generate little revenue. Business adoption is growing, but many companies are still experimenting rather than reorganising production around AI. The subscription model alone may not be sufficient. Advertising, enterprise tools, developer platforms, cloud consumption, specialised agents and embedded AI services are all possible monetisation paths, but none of them fully resolves the question yet. The result is a paradox: the industry is booming before it has proved who will make reliable profits from it.

For the largest hyperscalers, the risk is manageable because they already generate enormous profits from existing businesses. They can use cash flow and debt to build infrastructure. For pure-play AI firms, the situation is more fragile. They depend on repeated capital injections, strategic partnerships and expectations of future dominance. In that sense, the AI boom resembles earlier periods of technological overinvestment. Railways, telecom networks and the dot-com era ruined many investors, but also left productive infrastructure behind. AI may become another productive bubble: socially transformative, economically wasteful for some investors, and highly profitable only for a few survivors.

Infrastructure is the real moat

Much of the spending is not simply about better models. It is about physical infrastructure: data centres, GPUs, memory, cooling systems, energy supply, land, fibre, security and specialised engineering. These assets are expensive, but they create barriers to entry. The more costly large-scale AI becomes, the fewer actors can compete at the frontier. This favours concentration. A market that begins with many competing models can gradually become dependent on a small number of companies that own the compute, the cloud, the distribution channels and the user relationships.

This matters for Europe. If every public authority, university, hospital, school and small business becomes dependent on a handful of foreign AI platforms, the result will not simply be a technology import bill. It will be a loss of operational knowledge, bargaining power and democratic control. Data flows, auditability, long-term costs, procurement choices and institutional capacity will all be shaped by external providers. AI would then reproduce the same dependency already visible in cloud computing and proprietary software, only at a deeper level.

The European alternative: open, local and sectoral AI

European Union member states do not need to imitate the American capital race. Europe’s comparative advantage should be a different architecture: open standards, interoperable systems, public-interest infrastructure, strong data protection, transparent procurement and digital sovereignty. This is where low-cost, open-source local LLMs become strategically important. They are not a universal substitute for frontier models. They will not always match the most expensive closed systems in advanced multimodal reasoning, large-scale agentic automation or cutting-edge research tasks. But they can cover a very large share of real organisational needs.

The most useful AI system is not always the largest model. In many workflows, performance depends on access to the right documents, well-structured data, retrieval systems, domain terminology, evaluation, human supervision and integration into existing processes. A smaller local model connected to a trusted knowledge base can be more useful, cheaper and safer than a remote proprietary model that the organisation cannot inspect or control.
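The architecture described above can be sketched in a few lines. This is a minimal, illustrative example of retrieval-augmented use of a local model: the documents, the toy keyword-overlap scoring and the prompt format are all assumptions for demonstration, not a production design, and the final call to a locally hosted model is deliberately left out.

```python
# Minimal retrieval-augmented sketch: a small local model answers from a
# trusted knowledge base instead of relying on its parameters alone.
# Documents, scoring and prompt format below are illustrative assumptions.

def score(query: str, doc: str) -> int:
    """Count query terms that appear in the document (toy relevance score)."""
    text = doc.lower()
    return sum(1 for term in query.lower().split() if term in text)

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt for a locally hosted open model."""
    joined = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using only the context below.\n"
        f"Context:\n{joined}\n"
        f"Question: {query}\n"
    )

docs = [
    "Permit applications must be filed 30 days before the event.",
    "The cafeteria opens at 08:00 on weekdays.",
    "Permit fees are waived for registered non-profits.",
]
prompt = build_prompt(
    "What is the deadline for a permit application?",
    retrieve("permit application deadline", docs),
)
print(prompt)
# The prompt would then be sent to an on-premises inference server running
# an open-weights model, so no data leaves the organisation.
```

The point of the design is that relevance comes from the organisation's own knowledge base, which it can inspect and update, while the model only has to read and rephrase what is retrieved.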

Where local open LLMs can work

For public administrations, local open models can support document search, drafting, summarisation, classification of citizen requests, translation, internal helpdesks and legal or regulatory retrieval without sending sensitive information to external platforms. In healthcare, they can assist with administrative documentation, anonymisation, internal knowledge retrieval and patient communication under strict human oversight. In education, they can provide safer learning environments tailored to European languages and curricula, with stronger pedagogical control. In culture and tourism, they can support archives, translation, accessibility, cultural metadata and multilingual services. In manufacturing, energy, agrifood, shipping and finance, they can be embedded into technical support, compliance, maintenance, reporting and internal knowledge management.
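One small example of what "local first" handling can look like for the administrative and healthcare cases above: redacting obvious identifiers before any text is passed to a model. The patterns below are illustrative assumptions (the national-ID format is invented); real deployments would need vetted, domain-specific rules and human oversight.

```python
import re

# Illustrative redaction patterns; the "ID" format is hypothetical.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d\b"),
    "ID": re.compile(r"\b[A-Z]{2}\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder,
    so the sensitive value never reaches the model."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Contact maria@example.org or +30 210 1234567 about case AB123456."
print(redact(note))
```

Because this step runs on the organisation's own hardware, the raw identifiers never leave its boundary even if the downstream model is hosted elsewhere.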

Their advantage is not only lower cost. It is control. An organisation running a local open model can decide where data is stored, which version is used, how the system is adapted, who audits it and which provider maintains it. It can avoid permanent subscription dependence and build internal capability. For SMEs, this is especially important. Many small and medium-sized enterprises do not need frontier AI. They need affordable, reliable, private and adaptable tools that improve daily work without locking them into a single vendor.

From AI consumption to AI capacity

The right European strategy is therefore hybrid. Europe needs shared high-performance infrastructure for research, advanced industrial use and demanding public-interest applications. It also needs widespread deployment of local, open, low-cost models for everyday use in public bodies, SMEs, schools, universities and sectoral ecosystems. AI Factories, EuroHPC resources, open models, public procurement rules and sectoral data spaces can work together if they are designed around openness and reuse rather than dependency.

The AI giants are spending hundreds of billions because they are trying to own the future operating layer of the economy. Europe should not respond by becoming a better customer of that model. It should build capacity. Local open LLMs are not the whole answer, but they are one of the most practical ways to turn AI from a rent-extracting platform into a productive, democratic and controllable infrastructure.

Article sources:

The New Yorker, “The A.I. Industry Is Booming. When Will It Actually Make Money?”: An analysis of the economic dynamics of generative AI, the massive investment by hyperscalers, uncertain revenues, and the risk of a “productive bubble”: https://www.newyorker.com/news/the-financial-page/the-ai-industry-is-booming-when-will-it-actually-make-money.

Mistral AI, “Mistral 7B” and “Mixtral of Experts”: Mistral AI is the leading European example of open-weights model development that can be deployed locally, with emphasis on performance, lower cost, and infrastructure control: https://mistral.ai/news/announcing-mistral-7b/ and https://mistral.ai/news/mixtral-of-experts/.

DeepSeek AI, “DeepSeek-R1”: An example of an open reasoning model showing that high performance does not necessarily require closed and extremely expensive architectures, reinforcing the case for more efficient models: https://github.com/deepseek-ai/DeepSeek-R1.

EuroHPC Joint Undertaking, “AI Factories”: The official European framework for giving researchers, businesses, and startups access to supercomputing AI infrastructure, with the aim of strengthening European technological sovereignty: https://www.eurohpc-ju.europa.eu/ai-factories_en.

European Commission, “Apply AI Strategy”: The European Commission’s strategy for adopting AI in strategic sectors of the economy and the public sector, with emphasis on competitiveness, SMEs, and Europe’s technological sovereignty: https://digital-strategy.ec.europa.eu/en/policies/apply-ai.