A Pro-Innovation Code of Practice for Europe's AI Continent Ambitions

The final draft of the EU Code of Practice for General Purpose AI (GPAI) models is expected to be unveiled in the coming weeks. Companies developing GPAI models like Large Language Models will be able to use the Code to demonstrate compliance with their legal requirements under the EU AI Act. As such, the Code has the potential to serve as a valuable tool to create legal certainty for businesses and support compliance in an area where relevant standards are still emerging.

However, the current draft version of the Code still contains several requirements that are too complex, prescriptive, or beyond the scope of the AI Act, which could make uptake less attractive and undermine its very purpose. For Europe to be competitive and a global AI leader, companies need clear and simple rules to innovate with confidence and safely bring products on the market without unnecessary burdens or delays. It’s critical to address these challenges and ensure the final version of the Code increases flexibility, reduces unnecessary complexity and better aligns with the scope of the AI Act.

Leaders across the EU have stressed the need for a simpler and more innovation-friendly regulatory framework. Following the alarm bells raised by the Draghi report, the European Commission identified simplification of EU legislation as a top priority, especially for strategic tech like AI. Last week’s AI Continent Action Plan – which lays out the EU’s ambitious plan to compete globally on AI – envisions simplification as one of its key pillars and notes that the AI Act’s success will depend on how workable its rules are in practice. Getting the Code right will be the first proving ground of this ambition.

To that end, here are five recommendations for the finalization of the Code:

  1. Maintain Alignment with the AI Act

The Code still contains several requirements that go beyond the scope of the AI Act. For example, there is no support in the Act for the broad mandate to include third parties in risk assessments (e.g., Commitment II.11), for the burdensome notification and adequacy assessment requirements for GPAI models with systemic risk (e.g., across Commitments II.9 and II.14), or for some of the specific copyright measures.

Since the Code is a tool to show compliance with the AI Act, it cannot be used to expand or renegotiate the requirements of the Regulation. If not addressed in the final draft, this will create serious legal uncertainty in the market and discourage companies from subscribing to the Code. Measures outside the scope of the AI Act should either be removed or made voluntary but not required for the purpose of compliance.

  2. Risk Assessments and Evaluations Should Cover the Model Level, Not Systems

GPAI models are not built for a specific purpose or use-case. Thus, evaluations on generic data at model level cannot fully anticipate and mitigate the specific risks and circumstances associated with a model’s use in different contexts or in specific AI systems.

However, some measures of the Code would require either evaluating models when integrated into a system (measure II.4.7) or testing and evaluating for broad systemic risks such as ‘harmful manipulation’ (Appendix 1.1) – risks that are connected to system-level decisions and integrations.

Since the Code solely applies to GPAI models, these system-level expectations should be removed. This will help ensure requirements are levied at the appropriate point in the value chain. In fact, the AI Act already addresses the system level, including by mandating evaluations for high-risk AI systems and envisioning mechanisms for allocating responsibilities across the value chain. Any requirement at the system level in the Code would conflict with these measures, create uncertainty and would be potentially unfeasible for model providers.

  3. Requirements Should Only Apply to Model Providers, Without Downstream Implications

The third draft of the Code contains ambiguous language that would require downstream providers – e.g., companies that license a GPAI model and build on top of it – to ‘cooperate’ with GPAI model providers to support their compliance with the Code. For example, these unclear references appear in the risk evaluation section (measures II.4.7 and II.4.14) and the copyright section (measure I.2.5).

The Code must solely apply to model providers, in line with the AI Act. For the final version, it will be crucial to make it clear that no Code requirements should be inadvertently transferred to downstream entities simply due to their deployment or integration of a GPAI model.

  4. Reduce Unnecessary Reporting and Notifications

The current version of the Code would mandate complex and overlapping documentation and notification requirements for providers of GPAI models with systemic risk. Many of these requirements exceed the scope of the AI Act and create additional burden without demonstrably improving safety outcomes. For example, the Code would require:

- a Safety and Security Framework, subject to yearly updates, general and model-specific adequacy assessments, and notifications to the AI Office (Commitments II.1, II.9, and II.14);
- an individual model report, also subject to regular updates and notification requirements (Commitments II.8 and II.14);
- regular notifications to the AI Office about the implementation of the Code (Commitment II.14); and
- documentation requirements under Commitments II.15 and I.1.

This complexity creates an enormous bureaucratic burden on model providers. The final version of the Code must significantly streamline and simplify documentation requirements, in line with the EU’s goal to ensure a workable implementation of the AI Act.

  5. Engage with Industry on Next Steps

The Commission is expected to produce additional guidance to clarify questions around key definitions, scope, and applicability of the Code. This includes, for example, the concepts of fine-tuning and modification, and the notion of ‘placing on the market’ of a GPAI model. To inform the upcoming guidelines, the Commission opened a public consultation in April 2025.

It will be crucial to engage robustly with industry so that the guidelines appropriately reflect existing difficulties, uncertainties, and concerns. Importantly, a targeted definition of fine-tuning can help avoid overreach. The guidelines are a great opportunity to support a simple, proportionate, and workable implementation of the AI Act, in line with the objectives of the AI Continent Action Plan.
