The AI governance landscape is complex and fragmented, with many documents proposing frameworks, standards, and guidance. This project addresses that challenge by using LLMs to categorize legal and governance documents from the Center for Security and Emerging Technology’s ETO AGORA (AI GOvernance and Regulatory Archive), a living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world.
This interactive visualization shows how governance documents in the AGORA dataset cover the risk domains in the MIT AI Risk Domain Taxonomy. Current findings are based on a snapshot of the dataset taken on 23 March 2026.
The AI Governance Mapping project is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best practices in AI risk management and AI regulation across the ecosystem.
You can explore each type of governance, and the underlying documents, in more detail using the interactive dashboards linked below. These cover governance of AI across the AI life cycle, ecosystem actors, and sectors. You can also use the document view to inspect coverage for specific documents within the ETO AGORA database.
The patterns and trends observed in the data should be taken as indicative and will need to be validated through further analysis.
This governance mapping project is intended to explore the potential capabilities and limitations of a scalable framework for analyzing how governance documents address AI risks. The analysis uses documents from CSET’s Emerging Technology Observatory AGORA dataset as input data, the majority of which originate in the United States; we intend to expand coverage to other jurisdictions in future work. The classification analysis uses an LLM-based approach that has been evaluated for reliability; however, a systematic validation study is ongoing. Spot-checks provide feedback on misclassifications and inform iterations on the tool, improving its reliability.
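To make the approach concrete, the sketch below shows what a single LLM classification step might look like. It is illustrative only, not our actual pipeline: the prompt wording, the classify_document helper, and the model name are assumptions, and the domain labels are paraphrased from the MIT AI Risk Domain Taxonomy.

```python
# Illustrative sketch only; not the project's production pipeline.
# Assumes the OpenAI Python SDK (openai >= 1.0) and an OPENAI_API_KEY
# in the environment. Model name and prompt are placeholders.
from openai import OpenAI

# A paraphrased subset of domains from the MIT AI Risk Domain Taxonomy.
DOMAINS = [
    "Discrimination & toxicity",
    "Privacy & security",
    "Misinformation",
    "Malicious actors & misuse",
    "Human-computer interaction",
    "Socioeconomic & environmental harms",
    "AI system safety, failures & limitations",
]

def classify_document(text: str, model: str = "gpt-4o-mini") -> list[str]:
    """Ask an LLM which risk domains a governance document addresses."""
    client = OpenAI()
    prompt = (
        "You are annotating AI governance documents.\n"
        "List every risk domain the excerpt below addresses, one per line, "
        "chosen only from this list:\n"
        + "\n".join(f"- {d}" for d in DOMAINS)
        + "\n\nDocument excerpt:\n"
        + text
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # favor repeatable labels over creative output
    )
    answer = resp.choices[0].message.content or ""
    # Keep only labels that exactly match a known domain, discarding
    # any free-form commentary the model adds around them.
    return [d for d in DOMAINS if d in answer]
```

In practice, a pipeline like this would also need to chunk long documents, batch requests, and log raw model outputs so that spot-checks of misclassifications, like those described above, can be traced back to individual model responses.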
Please feel free to share feedback using this form; this will help us make the tool as useful and relevant as possible. We also welcome expressions of interest in engaging with our work, and we will continue collecting user stories to refine the tool.
We thank the following people for their helpful contributions and feedback: Graham Ryan, Jones Walker LLP; Himanshu Joshi, Vector Institute for Artificial Intelligence; Emre Yavuz, Cambridge Boston Alignment Initiative; Sophia Lloyd George, Brown University; Echo Huang, Minerva University; Clelia Lacarrière, MIT; Lenz Dagohoy, Ateneo de Manila University; Henry Papadatos, SaferAI; and Aidan Homewood, Centre for the Governance of AI.