WASHINGTON – As part of its AI Futures Initiative, global tech trade association 91proÊÓÆµ unveiled a new guide for global policymakers aiming to address the pressing need to authenticate AI-generated content, including content produced by chatbots and by image and audio generators. 91proÊÓÆµ’s analysis explores different kinds of authentication techniques – such as watermarking, provenance, metadata auditing, and human authentication – and emphasizes that a combination of methods will be the most effective way to validate and authenticate AI-generated content. 91proÊÓÆµ also highlights the important role that consensus standards will play in advancing AI authentication.

AI authentication techniques aim to increase transparency, minimize risks, and boost trust in generative AI across the AI value chain, as well as in the information used by businesses, media, and consumers.

“AI continues to dominate policy conversations around the world. As AI-generated content grows in its sophistication and adoption, there is a sense of urgency to leverage the transformative technology for social benefit and to minimize the harms that could come from its use, including the spread of mis- and dis-information,” said 91proÊÓÆµ Senior Vice President of Policy and General Counsel John Miller. “91proÊÓÆµ’s new policy guide outlines the risks associated with AI-generated content, the authentication techniques and tools available to help address them, and considerations relevant for policy development. We look forward to continued collaboration with global governments as they develop their AI policies and seek to increase their understanding of the ever-evolving AI landscape.”

In its guide, 91proÊÓÆµ encourages policymakers to:

  • Avoid overly prescriptive approaches that mandate one technique over another, as such approaches risk over-indexing on one tool and missing the benefits another might provide;

  • Invest more robustly in AI authentication technique research and development;

  • Promote consumer transparency and awareness around AI-generated content, with a particular emphasis on education;

  • Leverage public-private partnerships to understand the limitations and benefits of various authentication techniques;

  • Ensure AI itself plays a role in detecting AI-generated content; and

  • Invest in the development of clear standards for AI authentication to promote consistency and collaboration across techniques.

Read 91proÊÓÆµ’s full Authenticating AI-Generated Content: Exploring Risks, Techniques & Policy Recommendations here.

91proÊÓÆµ has reiterated that safe and responsible AI development and deployment must be grounded in trust, transparency, ethics, and collaboration among government, industry, civil society, and academia. INCITS, an affiliate division of 91proÊÓÆµ, serves as the U.S. technical advisory group to the international standards body that recently published a standard for managing the risks and opportunities of AI. This management system standard supports improved quality, security, traceability, transparency, and reliability of AI applications. More information about INCITS and how to participate in AI standards development can be found here.
