AI must not become the next dependency

Why governments, researchers, and businesses should back open-source agentic AI built on low-cost local LLMs

The public debate on artificial intelligence is still shaped by hype, marketing language, and the illusion that the largest commercial models are ready to take over critical functions across the economy, the public sector, and research. Reality is far less convenient. This is especially true for agentic AI, meaning systems that do not just generate text but take actions, call tools, execute workflows, and interact with digital infrastructure. The issue is not whether these systems can look impressive in demos. The issue is whether they can operate reliably, affordably, and under meaningful human and institutional control.
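The distinction between a text generator and an agent can be made concrete with a minimal sketch. The loop below is illustrative only: every name is hypothetical, and `propose_action` stands in for a real LLM call that would return a structured tool request rather than free text.

```python
# Minimal sketch of an agentic loop: the model does not just emit text,
# it proposes actions that are executed against a registry of tools.
# All names are illustrative; `propose_action` stands in for an LLM call.

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Action:
    tool: str
    argument: str

# Tool registry: the concrete capabilities the agent is allowed to invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda q: f"results for {q!r}",
    "write_file": lambda text: f"wrote {len(text)} bytes",
}

def propose_action(goal: str, history: list[str]) -> Optional[Action]:
    """Stand-in for the model: decide the next tool call, or None when done."""
    if not history:
        return Action("search", goal)
    return None  # a real model would keep planning until the goal is met

def run_agent(goal: str) -> list[str]:
    """Act-observe loop: execute proposed actions, record observations."""
    history: list[str] = []
    while (action := propose_action(goal, history)) is not None:
        observation = TOOLS[action.tool](action.argument)  # act, not just generate
        history.append(observation)
    return history
```

The point of the sketch is the control question raised above: everything interesting happens in the `TOOLS` dictionary and the loop, which is exactly the layer that closed platforms keep opaque and that open orchestration makes inspectable.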

That is why the next phase of AI should not be built on expensive, opaque, remote platforms controlled by a handful of vendors. It should be built on low-cost, fully open-source local large language models. This is not an ideological preference. It is a strategic requirement. When public institutions, universities, or firms depend on closed AI services, they also accept dependence on external pricing, external governance, external infrastructure, and external limits on transparency. That is not innovation sovereignty. It is technological outsourcing dressed up as modernization.

A serious AI strategy starts by rejecting the false idea that bigger is always better. Research policy should prioritize efficient, auditable, domain-specific models that can be adapted to local languages, public needs, and real operating conditions. Universities and research centers should be funded to develop open datasets, benchmark suites, transparent evaluation methods, and agent orchestration tools that can be inspected and improved by the wider community. The real frontier is not just model scale. It is system reliability, cost efficiency, reproducibility, and the design of workflows where human oversight is built in rather than added as an afterthought. If countries want AI capacity of their own, they must invest in open technical ecosystems, not just consume products made elsewhere.

At the level of government, the political conclusion is straightforward. AI is not a neutral utility. Whoever controls the models, the toolchains, and the deployment environments also shapes the terms of productivity, administration, and public power. For that reason, public-sector AI policy must favor open standards, local deployability, interoperability, and independent auditability. No critical government workflow should rely exclusively on black-box systems that cannot be inspected, migrated, or meaningfully governed. Public procurement should actively support open-source infrastructure, shared code repositories, language resources, and public-interest models. Most importantly, no consequential administrative decision should be delegated entirely to an AI agent without accountable human review. If that principle is not enforced now, institutional dependence will be normalized before safeguards are in place.
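The human-review principle is not only a policy statement; it can be enforced structurally, by routing consequential actions through an approval gate rather than executing them directly. The following is a hedged sketch of that pattern, with all class and method names invented for illustration rather than taken from any specific framework:

```python
# Sketch of a human-in-the-loop gate: consequential decisions are never
# executed autonomously; they are held for accountable review and logged.
# All names here are illustrative, not a specific framework's API.

from dataclasses import dataclass, field

@dataclass
class ProposedDecision:
    summary: str
    consequential: bool  # set by policy, not by the model itself

@dataclass
class ReviewGate:
    pending: list[ProposedDecision] = field(default_factory=list)
    log: list[str] = field(default_factory=list)

    def submit(self, d: ProposedDecision) -> str:
        """Route an agent's proposal: hold it for review or execute it."""
        if d.consequential:
            self.pending.append(d)  # held until a human signs off
            return "pending-review"
        self.log.append(f"auto: {d.summary}")
        return "executed"

    def approve(self, reviewer: str, d: ProposedDecision) -> None:
        """A named human accepts responsibility; the audit trail records it."""
        self.pending.remove(d)
        self.log.append(f"approved by {reviewer}: {d.summary}")
```

The design choice worth noting is that the `consequential` flag is assigned by institutional policy outside the model, and the audit log records a named reviewer, which is what "accountable human review" means in practice.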

Businesses also need to move past the fantasy of fully autonomous AI workers. The strongest commercial use of agentic AI is not indiscriminate automation but carefully designed augmentation. Firms should invest in local open models, internal expertise, secure deployment, and well-scoped workflows that improve speed and quality without surrendering control over data or operations. The companies that benefit most will not be those that chase the most aggressive marketing claims. They will be the firms that understand their own processes, know where AI helps, know where it fails, and build resilient internal capabilities instead of becoming permanently tied to closed vendors.

The real choice is therefore political as much as technical. Agentic AI can either deepen dependence on a few multinational platforms or become part of a broader strategy for digital sovereignty, productive resilience, and democratic accountability. If we want artificial intelligence that serves the public interest and strengthens local capacity, then the foundation must be open. It must be auditable, affordable, locally deployable, and governed by institutions that remain responsible for its outcomes. Open-source local LLMs are not a niche alternative. They are the necessary basis for an AI strategy that is serious about autonomy, trust, and long-term public value.

Sources:
Open Source Initiative, The Open Source AI Definition 1.0. A current and highly relevant reference for distinguishing truly open AI from partially accessible but still dependent model ecosystems. https://opensource.org/ai/open-source-ai-definition

LangGraph, Overview. A primary source on open orchestration for stateful agent systems, with explicit support for human-in-the-loop control and production-grade workflows. https://docs.langchain.com/oss/python/langgraph/overview

ggml-org, llama.cpp. A foundational open-source project for efficient local inference, central to making low-cost local LLM deployment practical across diverse hardware environments. https://github.com/ggml-org/llama.cpp