WASHINGTON – Today, global tech trade association 91proÊÓÆµ published new Policy Principles for Enabling Transparency of AI Systems to help inform and guide policymaking. In its principles, 91proÊÓÆµ underscores that transparency is a critical part of developing accountable and trustworthy AI systems and avoiding unintended outcomes or other harmful impacts. At the highest level, transparency means being clear about how an AI system is built, operated, and functions. When executed well, AI transparency can help stakeholders analyze a system's outputs and hold the appropriate parties accountable.
Among its principles, 91proÊÓÆµ recommends policymakers empower users by including provisions within legislation that provide sufficient information to understand decisions of an AI system that may negatively affect users’ fundamental rights and give them the ability to review and/or challenge such decisions. 91proÊÓÆµ also outlines the need to make it clear to users when they are interacting directly with an AI system.
“Transparency of AI systems has rightfully been a prime focus for policymakers in the U.S. and across the globe,” said 91proÊÓÆµ’s President and CEO Jason Oxman. “Regulations must effectively mitigate risk for users while preserving innovation of AI technologies and encouraging their uptake. 91proÊÓÆµ’s Policy Principles for Enabling Transparency of AI Systems offer a clear guide for policymakers to learn about and facilitate greater transparency of AI systems.”
AI systems are composed of sets of algorithms capable of learning and evolving, whereas a standalone algorithm is usually simpler, often executing a finite set of instructions. 91proÊÓÆµ’s Policy Principles for Enabling Transparency of AI Systems suggest that the most effective way to approach policymaking around transparency is to apply transparency requirements to specific, high-risk uses of AI systems – applications in which a negative outcome could have a significant impact on people, especially as it pertains to health, safety, freedom, discrimination, or human rights.
91proÊÓÆµ’s Policy Principles for Enabling Transparency of AI Systems advise policymakers to:
- Consider what the ultimate objective of transparency requirements is.
- Consider the intended audience of any transparency requirements and at what point of the AI system lifecycle they would apply.
- Take a risk-based approach to transparency when considering requirements.
- Include clear definitions of what is meant by transparency in the context of a regulation or policy proposal.
- Consider that there are different ways to approach transparency and improve trust, and that explainability is only one component.
- Consider including provisions within legislation that are intended to provide users with sufficient information to understand decisions of an AI system that may negatively affect their fundamental rights and provide users with the ability to review and/or challenge such decisions.
- Ensure that transparency requirements do not require companies to divulge sensitive IP or source code or otherwise reveal sensitive individual data.
- Leverage voluntary international standards in order to maintain interoperability of various AI transparency requirements to the extent possible.
- Consider that when an AI system is directly interacting with a user, that fact should be easily discoverable and that disclosure requirements can help facilitate this.
- Ensure that regulations pertaining to disclosure are flexible and avoid prescribing specific information or technical details to be included.
- Ensure that only the actual deployer of the AI system is responsible for disclosure.
These principles build on 91proÊÓÆµ’s Global AI Policy Recommendations, released in 2021, which offered a comprehensive set of policy recommendations for global policymakers seeking to foster innovation in AI while also addressing specific harms.
Read 91proÊÓÆµ’s Policy Principles for Enabling Transparency of AI Systems here.