In November, we were on the ground on the margins of the inaugural convening of the International Network of AI Safety Institutes, where we provided perspectives on the activities we thought would be most helpful in advancing the international conversation on AI safety. Now, we’re rapidly approaching the long-awaited AI Action Summit, scheduled to take place in Paris, France on February 10 and 11, with Heads of State and Government scheduled to meet the morning of February 11 to further discuss common actions invited nations can take on AI. France has broadened the scope of the original two meetings – hosted by the UK at Bletchley Park and by South Korea in Seoul – to capture a wider variety of AI-related topics beyond AI safety. With that in mind, these are the items we’re keeping a close eye on, as well as some things we’re recommending to whoever hosts the next global convening. Making progress in the three areas below will help ensure that continued international collaboration is as effective as possible and will also help foster consistency in the development and implementation of regional approaches to AI governance and regulation.
-
Further announcements from the International Network of AI Safety Institutes (INAISI). The inaugural meeting was productive, with Network participants agreeing on a mission statement for the group, including four priority areas for collaboration, announcing a joint testing exercise, and agreeing upon shared principles for risk assessment. The France AI Action Summit presents an opportunity for the group of AI Safety Institutes to progress the activities outlined above. We hope to see additional focus on advancing a common framework for risk assessments for advanced AI systems, which can also be useful to increase interoperability as members of the Network advance their domestic AI governance policies. We appreciate that the Network initially agreed upon principles for risk assessment, but we hope to see an announcement around how the INAISI members will work together to develop joint evaluation metrics, which will be critical to actually operationalizing risk assessments consistently. We also hope that the Network provides a more specific outline of how it intends to move work forward in the agreed-upon priority areas. The mission statement identifies research, testing, guidance, and inclusion as critical to advancing AI safety. Drilling down on priority areas where research is needed, in an effort to advance both testing and guidance, would be a helpful first step. It might also be useful for the International Network to work together to identify the research areas in which each Network participant is strongest, so that collaboration can be as fruitful as possible.
-
A clear statement that explains how these global convenings will continue to complement other ongoing international initiatives. One of the most exciting things about the AI policy conversation right now is that there is significant international interest from a variety of multilateral bodies. For example, the G7 Hiroshima AI Process yielded a set of international guiding principles and a code of conduct for organizations developing advanced AI systems, which are now being operationalized. We understand that AI will remain a priority issue area in both the G7 and G20 moving forward. As referenced in my last blog, the UN also remains engaged in AI conversations, advancing the Global Digital Compact and adopting resolutions on AI. It has also brought together experts in the form of a High-Level Advisory Body, which produced a final report last year with recommendations around how to achieve an effective global governance structure. Given all of this work, it can be difficult to differentiate between the objectives and missions of various initiatives. We understand that multilateral groups will meet in Paris on February 9 to discuss an action plan for 2025, which we’re very supportive of. In that action plan, we hope to see specific objectives and/or missions tied to specific groups, so that governments and stakeholders can better understand where and how to plug in to different workstreams.
-
An announcement about who will host the next global-scale convening, and what the scope of such a convening will be. We are thrilled to participate in France’s AI Action Summit and are eager to hear which country might take the helm after this one. If the scope remains broad, consistent with our recommendations above, we suggest that the host of such a large-scale global convening clearly articulate the tangible outputs it hopes to achieve vis-à-vis other initiatives (such as those taking place in the G7, G20, UN, and ITU). At the very least, we think it is critical that this global-scale convening continue to focus on AI safety, providing a venue for the work of the Network of AI Safety Institutes to be presented to a wider audience and to continue to advance high-level commitments to risk mitigation for the most advanced AI systems. If the scope remains broad, it may be helpful to outline which existing issue areas will continue to be covered beyond AI safety, develop specific deliverables for each area, and designate country leads to shepherd each workstream forward. As a part of this, creating specific coordination mechanisms and/or touchpoints with multilateral initiatives might also be useful.
This type of global gathering of stakeholders on AI can play a critical role in fostering conversation, advancing innovation, and providing a checkpoint for the international community on progress across various AI initiatives. We and our member companies look forward to continuing our engagement with global policymakers as they seek to develop practical approaches to AI governance.