On 14 November, the EU AI Office circulated the first draft of the EU AI Act Code of Practice for general-purpose AI (GPAI) models. Drafted by a group of experts, the Code is meant to guide developers of GPAI models – known as providers under the AI Act – on how to demonstrate compliance with their AI Act obligations. 91pro视频 represents the tech industry in the drafting group and submitted written feedback on the first draft of the Code on 28 November.
The Code will be a success if it is widely used by GPAI model providers as a reference to demonstrate compliance with the specific legal requirements of the AI Act.
To achieve this, the next draft of the Code should be proportionate and flexible, given the fast-evolving state of the art. Importantly, since the Code is meant to implement the AI Act, it must be strictly aligned with the legal text of the regulation and provide an actionable framework for GPAI model providers to demonstrate compliance. The AI Office should stay closely involved in the drafting and in the review of consultation responses to ensure the Code meets these practical needs.
Here are five key recommendations to achieve that goal:
- Stay Within the Scope of the AI Act
The current draft contains various measures that do not appear in the AI Act. For example, requirements on the mandatory involvement of third parties in risk evaluations and testing, which were previously rejected by the co-legislators, appear several times (e.g. in measures 13.7 and 17). Some of the copyright requirements, such as measures 3.2 and 3.3 on downstream and upstream compliance, are also outside the scope of the AI Act.
Since the Code is solely meant to facilitate the application of the AI Act's requirements for GPAI model providers, its scope should stay within those limits. This is fundamental for legal certainty and for widespread adoption of the Code. Exceeding the scope of the Act would make adhering to the Code more burdensome and would depart from the regulatory balance agreed by the co-legislators.
- Balanced and Meaningful Transparency Requirements
Measures 1 and 2 of the Code detail the transparency requirements for GPAI model providers towards regulators (the AI Office) and towards downstream providers, i.e. companies that use the model to build a specific AI system and need information about the model to perform risk assessments or comply with potential regulatory requirements.
When it comes to transparency towards the AI Office, it will be important to ensure procedural safeguards for information requests and appropriate protection of trade secrets. Disclosures will also need to be proportionate and feasible, especially in areas like the measurement of energy or compute use, where standardized practices are not yet available.
When it comes to information for downstream providers, it will be crucial to find the right balance between detail and feasibility. On the one hand, downstream providers will need sufficiently detailed and actionable information to support their decision-making. On the other hand, excessive documentation or disclosure could be burdensome for model providers, breach trade secrets and compromise model security.
- A Clear Taxonomy of Systemic Risks
The draft Code contains a taxonomy of the systemic risks that providers of certain GPAI models will have to consider for their risk assessments and mitigations pursuant to Article 55 of the AI Act.
GPAI providers will need this taxonomy to be specific and clear enough for effective policy development, measurement and mitigation. A lack of specificity may result in the erroneous prioritization of certain risks and in enormous resource investment across a wide range of negligible or unlikely risks, complicating providers' risk management efforts.
This is why broad concepts and risks like ‘persuasion and manipulation’ or ‘large-scale negative impact on society as a whole’ should be avoided or further specified in the next draft.
- Feasible Risk Evaluation and Mitigation Requirements
The Code contains detailed requirements for evaluation and mitigation of systemic risks associated with certain GPAI models.
The next draft should explicitly recognize the limits of model-level risk evaluations and mitigations. Since risk is highly context-dependent, and given GPAI models' wide applicability, evaluations and mitigations performed on generic data at the model level cannot anticipate and mitigate the specific risks associated with a model's use in different contexts or in specific AI systems.
This is why prescriptive measures that require providers to identify specific deployment-level risks (measure 9), to evaluate system-level risks at the model level (measure 10.5), or to monitor the model extensively post-deployment (measure 11.4) would be highly burdensome and, in many cases, unfeasible.
- Improving Timelines for Stakeholder Consultation
The short consultation period for the first draft of the Code has made it difficult for many industry leaders, particularly those with complex use cases and supply chains, to meaningfully assess the draft.
While the drafting of the Code will inevitably have to follow the ambitious timelines set by the AI Act, a meaningful dialogue with industry is key to ensuring the Code is actionable and reflects the state of the art. Subsequent drafting rounds should take this into account and allow sufficient time for review and preparation.
Way forward…
Reflecting these five points in the next draft will be essential for the Code to stay true to its original purpose, as intended by the co-legislators: to facilitate compliance with the AI Act for GPAI model providers in the absence of relevant standards, while allowing flexibility for the emergence and development of this nascent technology.