WASHINGTON – Today, global tech trade association ITI offered insight on how the U.S. government can advance AI safety, risk management, and responsible development and deployment in response to the National Institute of Standards and Technology (NIST)’s AI Safety Institute (AISI) consultation, Managing Misuse Risk for Dual Use Foundation Models.
Foundation models are a type of AI that underpin everyday tools including internet searches, photo editing, translation, ridesharing, and chatbots. They also have the potential to help solve bigger problems, like shortening research and development cycles in medicine and improving access to education.
“In order to advance critical AI safety work, stakeholders across the AI ecosystem need to have a consistent understanding of misuse risks and ways to address them, especially as they continue to evolve,” said ITI Vice President of Policy Courtney Lang. “By incorporating the tech industry’s feedback, NIST can strengthen its guidance document and provide a playbook for stakeholders, ensuring consistency, bolstering accountability, and mitigating risks for consumers and businesses.”
Building on the policy recommendations introduced in its Understanding Foundation Models & the AI Value Chain: ITI’s Comprehensive Policy Guide, ITI urged NIST’s AISI to:
- Develop additional technical red-teaming guidance for dual-use foundation models to help organizations consistently evaluate whether malicious actors might be able to get past AI system safeguards;
- Consider various actors’ roles, responsibilities, and capabilities in the AI value chain and clarify within the guidance where responsibility might be shared; and
- Detail what information organizations should publicize, and to whom, in order to meet transparency and disclosure objectives.
In addition to the above points, ITI’s submission proposes key definitions for “risk assessment” and “impact assessment” that policymakers should strive to coalesce around to advance globally consistent AI policy approaches.
Last month, ITI Vice President of Policy Courtney Lang published a TechWonk blog outlining her initial analysis of NIST’s Managing Misuse Risk for Dual Use Foundation Models and offering feedback to AISI on how the guidance can be improved to target AI misuse and protect consumers more effectively. Additionally, ITI has released a series of policy guides that are charting the course on key AI issues:
- July 2024: AI Accountability Framework