WASHINGTON – Today, global tech trade association ITI outlined ways the U.S. government should support accountability in artificial intelligence (AI) as it seeks to develop policy. In comments to the National Telecommunications and Information Administration (NTIA), ITI provided a comprehensive perspective on the AI accountability ecosystem and emphasized that all stakeholders, including both developers and deployers of the technology, have a role to play in fostering accountability.
“As AI evolves, we take seriously our duty to help foster trust in the technology by developing solutions to address potential negative implications and facilitating its responsible use,” said ITI’s Vice President of Policy Courtney Lang. “Accountability is a critical element in facilitating public trust in AI, and it’s crucial that policymakers fully understand its role within the larger AI ecosystem. We appreciate NTIA’s ongoing engagement with industry and look forward to working together with all AI stakeholders to develop helpful accountability policy.”
ITI underscores that internal assessments, audits, and certifications can play a valuable role in fostering trust and communicating information. ITI also highlights the importance of scoping accountability mechanisms to the level of risk posed and the context in which AI is deployed, ensuring adequate protection in high-risk scenarios while allowing innovation to thrive in low-risk ones.
ITI offers several specific recommendations to policymakers in its comments:
- Review the existing regulatory landscape to assess how current laws apply to AI-related risks, pursuing new legislation on AI accountability only where gaps are identified or existing laws are determined not to be fit-for-purpose;
- Ensure that the U.S. government takes a nuanced approach to accountability policy, focusing on the objective of assessments, audits, or other mechanisms and basing requirements on the level of risk and foreseeable use of AI in particular applications or contexts;
- Recognize that all stakeholders in the AI ecosystem have a role to play in fostering accountability;
- Fund additional research on testing, evaluation, validation, and verification methods to help advance a strong accountability ecosystem and assist organizations in measuring risk;
- Avoid mandating external audits of AI systems at this time, given a series of practical challenges;
- Avoid requiring the disclosure of sensitive IP or source code as part of accountability policy development; and
- Seek to foster consistency in approaches to AI accountability.
ITI has been especially engaged in identifying ways to facilitate public trust in AI technology as part of its broader participation in AI policy conversations globally. In 2022, ITI released its AI Transparency Policy Principles, which build upon its Global AI Policy Recommendations. The AI Transparency Policy Principles are aimed at helping governments understand how transparency can foster accountability, and in that document ITI provided specific recommendations on how best to shape transparency-related policy.