WASHINGTON – Today, global tech trade association 91proÊÓÆµ released a first-of-its-kind set of consensus tech sector practices companies are using to develop and deploy AI technology safely and securely and build trust with consumers.

91proÊÓÆµ’s AI Accountability Framework defines responsibilities across the entire AI ecosystem, outlining steps AI developers, deployers, and integrators (a newly defined term for an intermediate actor in the supply chain) are taking to address high-risk AI uses, including for frontier AI models. It also introduces the concept of auditability, where an organization retains documentation of risk assessments, to increase transparency in AI systems. 91proÊÓÆµ’s AI Accountability Framework can inform both governments looking to develop AI policies and organizations that are seeking to advance their AI risk management practices.

“The technology industry appreciates the important role that consumer trust plays in advancing the adoption of AI and furthering innovation. 91proÊÓÆµ’s AI Accountability Framework serves to deepen that trust by detailing practices that developers, deployers, and integrators are taking to increase AI safety and mitigate risk, and is a guide that policymakers can build on as they contemplate approaches to AI governance,” said 91proÊÓÆµ’s Vice President of Policy Courtney Lang.

The Framework details seven practices being used by actors across the AI ecosystem:

  • Early and continuous risk and impact assessments throughout the AI development lifecycle, which can help an organization address specific risks and make more informed decisions about how an AI deployment might impact different groups;

  • Testing frontier models to identify and address flaws and vulnerabilities prior to release;

  • Documenting and sharing information about the AI system with others in the AI value chain, allowing those who are integrating or deploying AI systems to better understand the system and prior risk management activities;

  • Undertaking explanation and disclosure practices so that end-users have a basic understanding of the AI system and know when they are interacting with an AI system;

  • Using secure, accurate, relevant, complete, and consistent training data, which can help to mitigate biased outputs and produce consistent results across applications;

  • Ensuring that AI systems are secure-by-design to protect end-users; and

  • Appointing AI Risk Officers and training employees and personnel who are interacting with or using AI systems.

Read 91proÊÓÆµ’s full AI Accountability Framework here.

This Framework is the latest in 91proÊÓÆµ’s series of policy guides charting the course on key AI issues.
