Everyone should know what has been produced by Artificial Intelligence

Transparency is not optional; it is a requirement

Artificial Intelligence has already moved from novelty to routine production. It writes text, generates images, edits audio, creates video, suggests code and helps reshape entire workflows. The central issue is no longer whether AI is being used. It is that, in too many cases, it is being used without the audience clearly knowing it.

That cannot become normal. In a democratic society, no one should read, watch, hear or use material without knowing whether it was produced by a human, by a machine or through a combination of both. Opacity is not neutral. It weakens trust, blurs responsibility and leaves citizens exposed to a new zone of confusion.

What clear labelling should look like

The solution is straightforward. Every text should be marked either “generated by Artificial Intelligence” or “produced with the support of Artificial Intelligence and human editing.” Every image, audio file or video that has been generated or materially altered with such tools should be clearly labelled. Every source code file should include documentation in its opening comments describing the use of Artificial Intelligence tools.
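As an illustration of the source-code case, an opening comment could carry a short structured attribution note, and a simple check could verify that one is present. The header fields below ("Assisted-by", "Scope", "Human-review") are a hypothetical sketch, not a mandated format; the AI Attribution Toolkit listed in the sources proposes more detailed statements.

```python
# Hypothetical AI attribution header for a source file:
#
#   # Assisted-by: <AI tool name and version>
#   # Scope: initial draft of one function; logic rewritten by a human
#   # Human-review: yes

def has_ai_attribution(source: str) -> bool:
    """Return True if a file's text declares AI assistance.

    A real checker might parse a structured header; this sketch only
    looks for the hypothetical 'Assisted-by:' marker in a comment.
    """
    return any(
        line.lstrip("# ").startswith("Assisted-by:")
        for line in source.splitlines()
    )

example = "# Assisted-by: ExampleModel v1\n# Human-review: yes\nprint('hi')\n"
print(has_ai_attribution(example))  # True
```

Even a minimal convention like this makes the declaration machine-checkable, so a repository could enforce it the same way it enforces licence headers.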

A vague statement saying that “AI was used” is not enough. People need to know what AI actually did, how much of the work it affected and whether meaningful human review took place. A text fully generated by a machine is not the same as a text drafted by a person and improved with limited AI support.
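The distinction between a fully generated text and a lightly assisted one can be captured in a structured disclosure rather than a single vague sentence. The sketch below is illustrative only; the field names and label wording are assumptions, not an established schema.

```python
from dataclasses import dataclass


@dataclass
class AIDisclosure:
    """A hypothetical structured disclosure; all field names are illustrative."""
    fully_generated: bool   # did the machine produce the whole work?
    tasks: list[str]        # what the AI actually did
    human_reviewed: bool    # did meaningful human review take place?

    def label(self) -> str:
        # Reuse the two label forms proposed in the text above.
        if self.fully_generated:
            base = "Generated by Artificial Intelligence"
        else:
            base = ("Produced with the support of Artificial Intelligence "
                    "and human editing")
        detail = "; ".join(self.tasks)
        review = "reviewed by a human" if self.human_reviewed else "no human review"
        return f"{base} ({detail}; {review})"


print(AIDisclosure(False, ["grammar suggestions"], True).label())
```

The point of the structure is that the label states what the AI did and whether review occurred, rather than merely that "AI was used".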

Why this matters across education, media and software

In education, the absence of labelling breaks the distinction between learning and automated production. In research, it blurs authorship and scholarly responsibility. In public administration, it creates uncertainty about who is truly accountable for an official document. In journalism and public debate, it turns images, audio and video into potentially misleading evidence. In software development, it becomes unclear which parts of the code were carefully reviewed by humans and which were inserted with minimal scrutiny.

This is why labelling is not a cosmetic addition. It is a condition of accountability. It is a minimum rule of digital honesty.

The real choice

The real choice is not whether we are for or against Artificial Intelligence. The real choice is whether we are for or against transparency. Whether we believe citizens have a right to know, or whether we accept a world of automated production without clear notice.

We need mandatory AI labelling policies now across schools, universities, public bodies, companies, research institutions, media organisations and software teams. Not after the next scandal. Not after opacity has already become standard practice.

Because without labelling, machines will speak more and more while humans take responsibility less and less. That is not progress. It is the erosion of trust.

Sources:

  1. Label It Now: Why Schools, Universities and Public Bodies Need Immediate AI Disclosure Policies: Recent Greek policy article arguing for immediate institutional rules on AI disclosure: https://gfoss.eu/label-it-now-why-schools-universities-and-public-bodies-need-immediate-ai-disclosure-policies/
  2. AI Attribution Toolkit: Initiative proposing more detailed AI attribution statements for transparency: https://aiattribution.github.io/
  3. Code of Practice on marking and labelling of AI-generated content, European Commission: European policy effort related to transparency and labelling obligations for AI-generated content: https://digital-strategy.ec.europa.eu/en/policies/code-practice-ai-generated-content
  4. mprpic/aiattribution, GitHub: Open Python utility for creating and interpreting AI attribution statements: https://github.com/mprpic/aiattribution