On July 10, 2025, the European Commission published the final General-Purpose AI Code of Practice (GPAI-CoP)—a voluntary but influential code of conduct that offers model providers a clear path to compliance with the EU AI Act, whose obligations for general-purpose AI models apply from August 2, 2025. Vice-President Henna Virkkunen called the Code "a clear, joint route to compliance" in the accompanying press release from Brussels.
One Compass, Three Chapters
The Code bundles the main obligations for providers into three thematic chapters. The core information is detailed in the table below.
| Chapter | For Whom? | Essence of the Obligations |
|---|---|---|
| Transparency | All GPAI providers | Standardized Model Documentation Form covering architecture, compute and energy consumption, data provenance, and distribution channels. |
| Copyright | All GPAI providers | Internal copyright policy; crawlers that respect robots.txt and other machine-readable rights reservations; filters against infringement in model output; complaint handling for rights holders. |
| Safety & Security | High-impact models only | Lifecycle risk analysis, red teaming, security of model weights, mandatory reporting of serious incidents within 2–15 days, and semi-annual reports to the AI Office. |
What Does This Mean in Practice?
The Transparency module requires every provider to have a fully completed Model Documentation Form ready at launch. This form meticulously details how a model is built, trained, and distributed, what data was used, and how much energy was consumed. This gives downstream developers the information they need for their own AI Act obligations, while regulators can request access to the full documentation.
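The official fields of the Model Documentation Form are defined by the Code itself; purely as an illustrative sketch, the kind of structured record a provider might keep internally to track the same categories of information could look like this (all field and class names below are assumptions, not the official form):

```python
from dataclasses import dataclass, field

# Illustrative internal record only -- field names are assumptions,
# not the official Model Documentation Form defined by the Code.
@dataclass
class ModelDocumentationForm:
    model_name: str
    architecture: str                   # e.g. "decoder-only transformer"
    parameter_count: int
    training_compute_flop: float        # total training compute
    energy_consumption_kwh: float
    data_provenance: list[str] = field(default_factory=list)        # data source descriptions
    distribution_channels: list[str] = field(default_factory=list)  # e.g. hosted API, open weights

    def is_complete(self) -> bool:
        """Check that every documented category is filled in before launch."""
        return all([
            self.model_name,
            self.architecture,
            self.parameter_count > 0,
            self.data_provenance,
            self.distribution_channels,
        ])
```

A completeness check like `is_complete()` mirrors the Code's expectation that the documentation is ready at launch rather than assembled afterwards.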
The Copyright chapter stipulates that web crawlers must respect not only technological barriers (paywalls, DRM) but also machine-readable rights reservations. It also obligates providers to implement technical and contractual safeguards to prevent their models from producing plagiarized or otherwise infringing material.
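The most established machine-readable reservation mechanism is robots.txt. A minimal sketch of how a training-data crawler might consult it before fetching a page, using Python's standard `urllib.robotparser` module (the crawler name `ExampleTrainingBot` and the example site are hypothetical):

```python
import urllib.robotparser

def may_fetch(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check a URL against a site's robots.txt rules before crawling it."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical policy: the site reserves /private/ against all crawlers.
robots = "User-agent: *\nDisallow: /private/\n"
print(may_fetch(robots, "ExampleTrainingBot", "https://example.com/private/data"))  # False
print(may_fetch(robots, "ExampleTrainingBot", "https://example.com/blog/post"))     # True
```

Note that robots.txt is only one mechanism: the Code also expects providers to honor other machine-readable rights reservations, which such a check would need to cover as well.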
For the most powerful models—those with potential “systemic risk”—an additional layer applies. For these models, providers must continuously identify, test, and mitigate risks, from the first training run until long after launch. Serious incidents, such as large-scale data breaches or harm to public health, must be reported swiftly: for a cyber intrusion, for example, within five days.
Relevance for Regulators
- AI Office (Brussels) – Thanks to the semi-annual Safety & Security Model Reports, the AI Office will receive a consistent stream of data on systemic risks, red-teaming results, and incident reports. This standardization makes it easier to compare market risks and plan targeted enforcement actions.
- Data Protection Authorities (e.g., the Dutch AP) – The transparency form reveals in detail which data sources were used, how they were filtered, and what bias detection was applied. This allows the DPA to verify whether the processing of (special categories of) personal data in training and validation sets is lawful.
- Sectoral Regulators (e.g., ACM, DNB) – They gain insight into the underlying models integrated into critical services. The incident reporting regime and mandatory risk analyses provide early signals of potential financial or consumer risks.
These interconnected information streams create a ‘regulatory backbone’: providers submit one set of standardized documents, upon which various authorities can base their own supervisory tasks.
A New Standard for Trust
With the GPAI-CoP, Europe gets its first uniform, public framework that combines transparency, copyright protection, and safety standards into a single package. For providers, the question is no longer if they will establish documentation and risk processes, but how quickly they can meet the required standard. For regulators, the Code provides a clear, harmonized basis for overseeing a rapidly evolving technological sector.
Anyone bringing a general-purpose AI model to the European market from now on will find that this voluntary code is de facto becoming the minimum standard for trust—just in time for the AI Act's GPAI obligations becoming applicable in August 2025.