How is AI being governed?

The AI Governance Mapping project classifies over 900 AI risk governance documents by risk domain and other taxonomies

What is this project?

The AI governance landscape is complex and fragmented, with many documents proposing frameworks, standards, and guidance. This project, led by Simon Mylius, addresses this challenge by using LLMs to categorize legal and governance documents from the Center for Security and Emerging Technology’s ETO AGORA (AI GOvernance and Regulatory Archive).
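For readers curious about the mechanics, the sketch below illustrates one way an LLM can be prompted to tag a document with risk subdomains. It is an illustration only, not the project's actual implementation: the model choice, prompt wording, and abbreviated subdomain list are all placeholder assumptions, and it assumes the OpenAI Python SDK with an API key in the environment.

    # Illustrative sketch only: the project's actual prompts, model, and
    # taxonomy handling are not shown here. Assumes the OpenAI Python SDK
    # (openai>=1.0) and an API key in the environment.
    from openai import OpenAI

    # A few example subdomains from the MIT AI Risk Domain Taxonomy
    # (abbreviated; the full taxonomy is much larger).
    SUBDOMAINS = [
        "Discrimination & bias",
        "Privacy",
        "Misinformation",
        "AI system safety, failures, & limitations",
    ]

    client = OpenAI()

    def classify(document_text: str) -> list[str]:
        """Ask an LLM which risk subdomains a governance document addresses."""
        prompt = (
            "You are classifying AI governance documents.\n"
            f"Candidate risk subdomains: {', '.join(SUBDOMAINS)}\n"
            "Reply with a comma-separated list of the subdomains that the\n"
            "document below addresses.\n\n"
            f"Document:\n{document_text[:4000]}"
        )
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content or ""
        return [s for s in SUBDOMAINS if s in answer]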

This interactive visualization shows how governance documents in the AGORA dataset cover different risk subdomains included in the MIT AI Risk Domain Taxonomy.
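At its core, the visualization summarizes how many documents address each risk subdomain. A minimal aggregation sketch is below; the field names and toy records are assumptions for illustration, not the actual classification output schema.

    # Count how many classified documents address each risk subdomain.
    # "doc_id" and "subdomains" are invented field names for illustration.
    from collections import Counter

    classified_docs = [
        {"doc_id": "US-001", "subdomains": ["Privacy", "Misinformation"]},
        {"doc_id": "US-002", "subdomains": ["Privacy"]},
        {"doc_id": "EU-001", "subdomains": ["Discrimination & bias"]},
    ]

    coverage = Counter(
        s for doc in classified_docs for s in doc["subdomains"]
    )
    for subdomain, n_docs in coverage.most_common():
        print(f"{subdomain}: {n_docs} documents")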

You can explore the governance mapping in more detail in the interactive dashboard sections below.

Explore the Governance Mapping Project

Click through the links below to explore each of the interactive dashboards.


What can I use the Governance Mapping data for?

Limitations of the analysis and dataset

The patterns and trends observed in the data should be taken as indicative and will need to be validated through further analysis. 

This governance mapping project is intended to explore the potential capabilities and limitations of a scalable framework for analyzing how governance documents address AI risks. The analysis uses documents from CSET’s Emerging Technology Observatory AGORA dataset as input data, the majority of which originate in the United States; we intend to expand coverage to other jurisdictions in future work. The classification analysis uses an LLM-based approach that has been evaluated for reliability; however, a systematic validation study is ongoing. Spot-checks are being used to provide feedback on misclassifications and to iterate on the tool, improving its reliability.
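One way spot-check feedback of this kind can be quantified is a per-document agreement score between the LLM's labels and a human reviewer's. The Jaccard-overlap metric below is our illustrative assumption, not the project's published methodology.

    # Hypothetical spot-check scoring: agreement between LLM-assigned and
    # human-reviewed subdomain labels, per document.
    def agreement(llm_labels: set[str], human_labels: set[str]) -> float:
        """Jaccard overlap between two label sets (1.0 = identical)."""
        if not llm_labels and not human_labels:
            return 1.0
        return len(llm_labels & human_labels) / len(llm_labels | human_labels)

    sample = [
        ({"Privacy"}, {"Privacy"}),                    # full agreement
        ({"Privacy", "Misinformation"}, {"Privacy"}),  # partial agreement
    ]
    scores = [agreement(llm, human) for llm, human in sample]
    print(f"Mean agreement over sample: {sum(scores) / len(scores):.2f}")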

What's Next

  • We welcome feedback and expressions of interest in engaging with our work. We will continue collecting user stories to refine the tool and make it as relevant and useful as possible.
  • We will classify all documents in the dataset according to the MIT AI Risk Mitigation Taxonomy.
  • We will create reports, visualizations, and a database to help users explore the AI governance landscape and understand which AI risks and mitigations are addressed or neglected by current AI governance approaches.

Please feel free to share feedback using this form; your input will shape the direction of the work and help make the tool as useful and relevant as possible.

Team

Alumni

Acknowledgments

We want to thank the following people for their useful contributions and feedback: Graham Ryan; Jones Walker LLP; Himanshu Joshi, Vector Institute for Artificial Intelligence; Emre Yavuz, Cambridge Boston Alignment Initiative; Sophia Lloyd George, Brown University; Echo Huang, Minerva University; Clelia Lacarrière, MIT; Lenz Dagohoy, Ateneo de Manila University; Henry Papadatos, SaferAI; and Aidan Homewood, Centre for the Governance of AI.