EU AI Transparency Code Rewrites the Rulebook: New Obligations for General-Purpose AI

Serge Bulaev

The EU has introduced new AI Transparency Code requirements for general-purpose AI providers in 2025, setting strict rules for companies that offer general-purpose AI systems in Europe. These companies must share clear information about how their AI was trained, what data it uses, and how much energy it consumes. They must also keep public records showing how they handle copyright and any major risks their AI might pose. Companies that do not comply face huge fines, comparable to GDPR penalties. From now on, people in Europe can request details about any AI system they use, and companies must respond quickly.

What new transparency requirements does the EU AI Transparency Code introduce for general-purpose AI providers?

The EU AI Transparency Code requires all general-purpose AI providers to publish: 1) a transparency dossier detailing training data, energy use, and limitations; 2) a copyright ledger for compliance and takedown requests; and 3) a risk ledger for systemic risks, with strict deadlines and high penalties for non-compliance.

The European Commission has unveiled a binding Code of Practice on AI Transparency that rewrites the rulebook for any company offering general-purpose AI within the EU market. Published on 10 July 2025, the 43-page document turns soft promises into hard obligations ahead of the full AI Act rollout next August.

Under the code, every provider of foundation models or large language models must open three "glass boxes":

  • Transparency dossier - complete model cards describing training data sources, energy use, known limitations and intended downstream uses
  • Copyright ledger - public policy plus a named contact for takedown requests, ensuring compliance with EU copyright law
  • Risk ledger - for models judged to carry systemic risk, a full security and safety report updated quarterly

Penalties sit at the same level as GDPR sanctions: up to €35 million or 7% of worldwide annual turnover, whichever is higher. For a company like Microsoft, that equates to roughly $18 billion based on 2024 figures.
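As a back-of-the-envelope check, the penalty cap described above can be sketched in Python. The $262 billion turnover figure used below is an assumed approximation of Microsoft's calendar-2024 revenue, chosen only for illustration; it does not come from the code itself.

```python
def max_gpai_fine(annual_turnover: float) -> float:
    """Upper bound of the penalty under the code: 35 million
    (in the turnover's currency) or 7% of worldwide annual
    turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_turnover)

# Assumed turnover of ~$262B yields a cap in the ~$18B range.
print(f"${max_gpai_fine(262e9) / 1e9:.1f}B")

# A small provider with $100M turnover hits the flat floor instead.
print(f"${max_gpai_fine(100_000_000) / 1e6:.0f}M")
```

Note that for any provider with turnover under $500 million, 7% falls below the $35 million floor, so the flat amount governs.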

What changes for users today
Starting immediately, European users interacting with chatbots, code assistants or image generators can demand documentation on where training data came from, which copyrights were respected, and what safeguards are in place. Firms have 15 working days to respond.

Timeline in 2025-26
- 2 Aug 2025: obligations for GPAI providers switch from code to law
- 2 Aug 2026: high-risk AI systems must meet full Act standards
- 2027: new EU-wide harmonised standards are expected, possibly making the current code obsolete

The Commission drafted the code with input from nearly 1,000 stakeholders, including OpenAI, academic labs and civil-rights groups, coordinated by the newly created European AI Office. While the code is technically voluntary, regulators say following it will "reduce administrative burden" and provide legal certainty during the two-year transition.

Written by

Serge Bulaev

Founder & CEO of Creative Content Crafts and creator of Co.Actor — an AI tool that helps employees grow their personal brand, and their companies along with it.