Governments Must Seize This Pivotal Moment for International AI Safety Cooperation

This week in San Francisco, the United States will convene the International Network of AI Safety Institutes (hereafter “Network”), established with the signing of the Seoul Statement of Intent at the AI Seoul Summit in May 2024. This will be the first time the Network – which consists of representatives from AI Safety Institutes in Australia, Canada, the European Union, France, Japan, Kenya, the Republic of Korea, Singapore, the United Kingdom, and the United States – has come together since then, and it marks a pivotal moment in the global conversation on AI safety ahead of France’s AI Action Summit in February 2025. At this crucial convening, the Network should seize the opportunity to agree on a proactive AI safety research agenda, establish mechanisms for collaboration between Institutes and with external stakeholders, and highlight how it intends to complement ongoing international work.

This week’s meeting is also an opportunity for U.S. policymakers to demonstrate their continued commitment to the U.S. AI Safety Institute. As the U.S. Congress begins its sprint of year-end legislating, 91pro视频 reiterates its call to bipartisan leaders on Capitol Hill to enact legislation that authorizes the U.S. AI Safety Institute within the National Institute of Standards and Technology (NIST) before the end of the 118th Congress. Doing so will provide certainty and stability for the U.S. AI Safety Institute and solidify its critical role in advancing AI innovation and facilitating trust, including in international fora like the one in San Francisco this week.

91pro视频 will be on the ground in San Francisco, and here is what we hope to see materialize:

  • An agreement on the Network’s priority issues. As organizations and governments consider how best to address AI safety challenges, an affirmative, well-scoped agenda for international collaboration in the nascent field of AI safety science is necessary. Many areas of work could advance AI safety, but if the Network tries to tackle too many at once, it risks diluting its efforts. A clear, defined set of objectives is critical, as is agreement on which research efforts to prioritize. In our view, the Network should first commit to developing a common understanding of key terms – for example, defining what constitutes a frontier AI model and which risks are of concern. Next, the Network should commit to developing pre-deployment testing and evaluation practices, including metrics to consistently measure and evaluate known and emerging risks associated with advanced AI systems. The Network should also consider developing consistent ways to measure and report on model and system performance. Driving consensus in these areas will help foster international alignment and regulatory compatibility as different regions develop and implement their AI governance approaches.
  • A clear recognition of international standards bodies’ important role. The development and adoption of technical standards through organizations like ISO/IEC JTC 1/SC 42 will be essential for implementing AI safety practices at a global scale. The Network should bring its findings on advanced AI model testing and evaluation to these standards bodies in order to foster global alignment and common understanding. At the same time, international standards organizations can provide valuable input to the Network to help shape and refine the safety practices it is working to advance.
  • A clear articulation of how the Network will complement or otherwise support ongoing multilateral and international efforts. That includes the ongoing work to advance the adoption of the Guiding Principles and Code of Conduct for Advanced AI Systems stemming from the Hiroshima AI Process, a resolution adopted by the United Nations General Assembly to promote trustworthy AI, and a Global Digital Compact aimed at governing digital technology and AI, in addition to the efforts taking place in both the G7 and G20. Further, while it remains unclear who will assume responsibility for the next global AI summit after France’s AI Action Summit, the Network should consider how it can inform tangible Summit outputs, even if the scope of these events remains broad. The Network should clearly lay out how it will complement these efforts so as not to duplicate work. We believe the Network can play an especially important role in providing evidence-based guidance to operationalize many of the policy documents produced via these larger multilateral efforts.
  • A framework for cooperation between the International Network of AI Safety Institutes and external stakeholders. While we appreciate that the U.S. has included certain stakeholders in its initial convening, we hope to see the Network outline how it will collaborate with a diverse set of stakeholders from across the AI ecosystem moving forward. The U.S. AI Safety Institute has a Consortium, and the EU AI Office has convened a group of stakeholders to weigh in on the establishment of a General-Purpose AI Code of Practice; it is important for the Network to outline how it will work collaboratively with these and other groups, including by establishing clear protocols for engagement and for stakeholders to provide input. In particular, the Network should consider how to de-duplicate efforts where multinational companies or other organizations may be members of stakeholder groups established in different jurisdictions.