Artificial Intelligence and the Public Interest

Scientific Arguments Against Uncritical Deployment in the Public Sector

Artificial intelligence is frequently presented as a neutral instrument of modernization within public administration, and claims of efficiency and cost reduction dominate policy discourse. Yet a growing body of scientific research demonstrates that uncritical deployment of AI systems in public institutions poses structural risks to democratic governance, legal accountability, and institutional trust.

AI as Institutional Infrastructure

In “How AI Destroys Institutions,” Woodrow Hartzog and Jessica Silbey argue that AI systems function not merely as tools but as infrastructural forces reshaping institutional practices. When decision-making authority shifts toward opaque models, responsibility becomes diffused across developers, vendors, and administrators. This diffusion undermines clear chains of accountability.

In legal and administrative contexts, reasoned justification and reviewability are foundational principles. If an administrative act relies on a proprietary algorithm whose internal logic is inaccessible, neither the affected citizen nor a reviewing court can reconstruct the reasoning behind the decision, and meaningful oversight is compromised.

Predictive Limits and Statistical Bias

In “AI Snake Oil,” Arvind Narayanan and Sayash Kapoor critically examine inflated claims about AI accuracy. Most contemporary systems rely on statistical inference from historical datasets; trained on past decisions, they inherit and reproduce whatever biases and inequities those decisions encoded.
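
A minimal synthetic sketch can illustrate the mechanism. The data, feature names, and decision rule below are invented for illustration, not drawn from any real system: a classifier trained on historically biased approval decisions reproduces the disparity even when the protected attribute is excluded, because a correlated proxy feature carries it.

    # Hypothetical synthetic data; invented feature names, not a real benchmark.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(0)

    def make_record():
        group = random.choice(["A", "B"])
        # Proxy feature correlated with group membership (e.g., postal district).
        district = random.gauss(1.0 if group == "A" else -1.0, 1.0)
        need = random.gauss(0.0, 1.0)  # the legitimate eligibility signal
        # Historical label: group B was denied more often at equal need.
        bias = 0.0 if group == "A" else 1.0
        approved = int(need - bias + random.gauss(0.0, 0.5) > 0.0)
        return group, [need, district], approved

    records = [make_record() for _ in range(20_000)]
    X = [features for _, features, _ in records]
    y = [label for _, _, label in records]

    # The model never sees the group attribute itself.
    model = LogisticRegression().fit(X, y)
    predictions = model.predict(X)

    for g in ("A", "B"):
        members = [p for (grp, _, _), p in zip(records, predictions) if grp == g]
        print(f"predicted approval rate, group {g}: {sum(members) / len(members):.2f}")

Although group membership never enters the model, the predicted approval rates diverge: the proxy feature and the biased historical labels jointly encode the disparity, and the optimizer faithfully reproduces it.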

In domains characterized by normative conflict and uncertainty, such as judicial sentencing or welfare allocation, predictive correlation does not confer normative legitimacy. Human judgment incorporates contextual reasoning and ethical deliberation that statistical optimization cannot replace.

Algorithmic Authority and Sovereignty

Empirical studies in algorithmic governance identify automation bias as a recurring phenomenon: human operators tend to accept system outputs even when those outputs conflict with other available evidence. In the public sector, this tendency risks transforming probabilistic suggestions into de facto authoritative decisions.
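
One commonly discussed mitigation can be sketched concretely. The interface below is an illustrative assumption, not a description of any deployed system: instead of emitting a bare verdict, it surfaces the score together with its uncertainty and makes the human determination an explicit step.

    # Illustrative sketch: present a probabilistic suggestion as a suggestion,
    # not a verdict, to counteract automation bias. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Suggestion:
        score: float                    # model output in [0, 1]
        interval: tuple[float, float]   # rough uncertainty band (assumed available)

    def present(s: Suggestion) -> str:
        low, high = s.interval
        # A bare "DENY" invites rubber-stamping; a bounded score invites review.
        return (f"model score {s.score:.2f} (plausible range {low:.2f}-{high:.2f}); "
                "a reasoned human determination is still required")

    print(present(Suggestion(score=0.51, interval=(0.38, 0.64))))

The design point is modest but concrete: a score of 0.51 with a wide interval reads very differently from an unqualified denial, and the mandatory human step keeps the locus of responsibility with the administration.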

Furthermore, reliance on closed systems controlled by non-European corporations raises concerns about digital sovereignty. Initiatives such as GAIA-X highlight Europe’s recognition of infrastructural dependency risks, while companies like Mistral AI demonstrate the feasibility of developing competitive AI models within European jurisdiction.

Transferring critical public functions to opaque external infrastructures is not merely a technical choice. It is a governance decision with geopolitical implications.

A Strategy of Institutional Safeguards

Scientific evidence does not justify technological rejection but demands institutional safeguards. AI deployment in public administration must satisfy strict criteria of transparency, auditability, reproducibility, and democratic oversight.
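
What auditability and reproducibility could mean in practice can be sketched as a data structure. The record below is a minimal illustration under assumed conventions (every field name is hypothetical, not an established schema): it pins the exact model version for later replay, stores a tamper-evident hash of the inputs, and names the human reviewer.

    # A minimal sketch of an auditable decision record; field names are
    # illustrative assumptions, not an established standard.
    import datetime
    import hashlib
    import json

    def decision_record(model_id, model_version, inputs, output, reviewer):
        payload = json.dumps(inputs, sort_keys=True).encode()
        return {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model_id": model_id,
            "model_version": model_version,                     # pins the model for replay
            "input_hash": hashlib.sha256(payload).hexdigest(),  # tamper-evident inputs
            "output": output,
            "human_reviewer": reviewer,                         # named accountability
        }

    record = decision_record("eligibility-scorer", "2.3.1",
                             {"case_id": "example-0001"}, "review recommended",
                             "case officer 17")
    print(json.dumps(record, indent=2))

Such records do not make a model interpretable by themselves, but they restore two prerequisites of oversight: the decision can be reproduced later, and a named person remains answerable for it.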

Investment in open standards and locally governed infrastructures enhances institutional resilience and accountability. Without such safeguards, AI systems risk accelerating power concentration and eroding public trust.

The central policy question is therefore not whether AI should be used, but under what governance conditions it can coexist with democratic principles.

Key Sources

“How AI Destroys Institutions”, Woodrow Hartzog & Jessica Silbey: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5870623

“AI Snake Oil”, Arvind Narayanan & Sayash Kapoor: https://press.princeton.edu/books/hardcover/9780691249133/ai-snake-oil

“GAIA-X: A Federated Data Infrastructure for Europe”: https://gaia-x.eu/

“Mistral 7B Announcement”, Mistral AI: https://mistral.ai/news/announcing-mistral-7b/