How is AI being governed?

The MIT AI Governance Mapping project classifies more than 1,000 cases of AI risk governance and regulation by risk coverage, sector, scope, and other relevant dimensions.

What is the AI Governance Mapping project?

The AI governance landscape is complex and fragmented, with many documents proposing frameworks, standards, and guidance. This project addresses this challenge by using LLMs to categorize legal and governance documents from the Center for Security and Emerging Technology’s ETO AGORA (AI GOvernance and Regulatory Archive), a living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world.

This interactive visualization shows how governance documents in the AGORA dataset cover different risk domains included in the MIT AI Risk Domain Taxonomy. Current findings are based on a snapshot of the dataset taken 23 March 2026.

The AI Governance Mapping project is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best practice AI risk management and AI regulation across the ecosystem.

You can explore different types of governance and the underlying documentation in more detail using the interactive dashboards linked below.

What AI Risks Are Currently Being Governed?

Explore the Governance Mapping Project

Click through the links below to explore each of the interactive dashboards. These cover governance of AI across the AI life cycle, ecosystem actors, and sectors. You can also use the document view to inspect the coverage for specific documents within the ETO AGORA database.


What can I use the Governance Mapping data for?

Limitations of the analysis and dataset

The patterns and trends observed in the data should be taken as indicative and will need to be validated through further analysis. 

This governance mapping project is intended to explore the potential capabilities and limitations of a scalable framework for analyzing how governance documents address AI risks. The analysis uses documents from CSET's Emerging Technology Observatory AGORA dataset as input data, the majority of which originate in the United States. We intend to expand coverage to other jurisdictions in future work. The classification analysis uses an LLM-based approach that has been evaluated for reliability; however, a systematic validation study is ongoing. Spot-checks are being used to provide feedback on misclassifications and to iterate on the tool, improving its reliability.

What's Next

Please share feedback using this form; it will help us make the tool as useful and relevant as possible.

We welcome expressions of interest in engaging with our work, and we will continue collecting user stories to refine the tool.

Team

Alumni

Acknowledgments

We want to thank the following people for their useful contributions and feedback: Graham Ryan; Jones Walker LLP; Himanshu Joshi, Vector Institute for Artificial Intelligence; Emre Yavuz, Cambridge Boston Alignment Initiative; Sophia Lloyd George, Brown University; Echo Huang, Minerva University; Clelia Lacarrière, MIT; Lenz Dagohoy, Ateneo de Manila University; Henry Papadatos, SaferAI; and Aidan Homewood, Centre for the Governance of AI.