On 30 September, 91pro视频 will represent the global technology industry at the first plenary meeting of stakeholders for the drafting of the Code of Practice for general-purpose AI (GPAI) models. GPAI models are powerful AI models that underpin numerous AI uses, including generative AI, and are crucial for AI innovation.
Developers of GPAI models – known as providers under the AI Act – will be able to use the Code of Practice to demonstrate compliance with their AI Act obligations. 91pro视频 will participate in its drafting over the next nine months and contribute to making the Code a balanced and flexible tool that GPAI model providers can effectively use to comply with their legal requirements while continuing to innovate. To achieve that, here are the five key priorities that should guide the Code:
1. Industry must be in the driving seat for drafting the Code.
The Code is first and foremost a compliance tool for GPAI model providers. Given the technical nature of the obligations and the evolving best practices in this space, the contributions of GPAI model providers should be the primary input for drawing up the Code – in line with Article 56 of the AI Act.
The Code will also help downstream actors – such as companies that build specific AI applications on top of the model – obtain the information they need about the model. It is therefore important that the drafting also consider the perspective of downstream providers in support of these objectives.
2. Clear scope within the limits of the AI Act is needed.
The Code of Practice is solely meant to facilitate the application of the AI Act. Its scope should be clear and within the parameters established by the AI Act. This is essential for legal certainty and for encouraging widespread adoption of the Code.
The Code must not become an avenue to impose on GPAI model providers or downstream actors additional obligations that the Regulation does not foresee. For example, since the EU AI Act does not modify EU copyright law, the Code must not include obligations that go beyond the existing legal framework. Similarly, the Code should not foresee mandatory third-party involvement in the testing and evaluation of GPAI models, as this is not an obligation under the AI Act.
3. Flexibility is key to reflect rapid technological evolution.
The Code must provide flexibility and choice in how GPAI model providers comply with legal requirements. With a technology that is evolving rapidly, setting rules that are too prescriptive, or mandating specific technical solutions over others where equally effective alternatives may exist, will ultimately undermine the Code's effectiveness and stifle innovation. For example, evaluation science for AI safety is in its infancy, and industry best practices on GPAI testing and evaluation are still emerging.
To reflect this evolving state of the art, the Code should remain flexible and outcome-focused, allowing providers to adapt and implement different approaches tailored to the individual company and product. For the same reason, we recommend against setting prescriptive KPIs in the Code.
4. Different policy objectives must be carefully balanced.
The Code will have to strike a careful balance between key objectives such as transparency, model security, the protection of trade secrets, and innovation.
For example, it is crucial that GPAI model providers give downstream actors adequate information about the model, to inform their risk assessments and choices when building an AI application. At the same time, these disclosures must remain proportionate for GPAI model providers, to adequately protect the trade secrets and business-sensitive information that are key to their innovation strategies. Similarly, excessively detailed public disclosures – for example, to comply with the summary of training data requirement – could compromise model security by empowering malicious actors to find and exploit vulnerabilities.
5. Alignment with global best practices is crucial.
Since the AI value chain is global in nature, pursuing global consensus on AI governance will be crucial to support AI innovation and facilitate broad availability of AI technologies. Diverging approaches across jurisdictions – for example when it comes to definitions or risks – could complicate companies’ AI governance efforts.
This is why the Code must leverage or align with established approaches, such as international standards or other global frameworks for AI risk management and safety, rather than create new, potentially conflicting guidance and regimes. The Code must remain consistent with ongoing industry-led global standardization activities, such as those within ISO/IEC, as well as with existing frameworks and initiatives such as the G7 Hiroshima AI Code of Conduct, the OECD's work on AI governance, the AI Safety Summits, and the NIST AI Risk Management Framework in the U.S.