The AI Risk Repository has three parts:
The repository is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best practice AI risk management across the AI ecosystem.
The MIT AI Incident Tracker classifies more than 1,300 real-world reported incidents from the AI Incident Database by risk, cause, harm, severity, and other relevant dimensions.
AI incidents are on the rise, yet current databases struggle with inconsistent structure, limiting their utility for policymaking. The AI Incident Tracker project addresses this by creating a tool to classify AI incidents based on risks and harm severity. Using a Large Language Model (LLM), the tool processes raw reports from the AI Incident Database (AIID) and categorizes them using established frameworks, such as the MIT Risk Repository and a harm severity rating system based on CSET’s AI Harm Taxonomy.
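As a rough illustration of the classification step described above, a pipeline of this kind pairs each raw incident report with a fixed set of candidate taxonomy labels, sends that prompt to an LLM, and validates the model's answer against the taxonomy. The sketch below is hypothetical, not the project's actual code: the domain labels are illustrative, and the `build_prompt` and `parse_label` helpers are stand-ins for a real pipeline that would call an LLM API where the stubbed response appears.

```python
# Hypothetical sketch of LLM-based incident classification (not the
# project's actual code). The domain labels below are illustrative
# stand-ins for the taxonomy used in practice.
RISK_DOMAINS = [
    "Discrimination & toxicity",
    "Privacy & security",
    "Misinformation",
    "Malicious actors & misuse",
    "Human-computer interaction",
    "Socioeconomic & environmental harms",
    "AI system safety, failures & limitations",
]

def build_prompt(report_text: str) -> str:
    """Assemble a classification prompt pairing a raw incident report
    with the candidate taxonomy labels."""
    labels = "\n".join(f"- {d}" for d in RISK_DOMAINS)
    return (
        "Classify the AI incident below into exactly one risk domain.\n"
        f"Candidate domains:\n{labels}\n\n"
        f"Incident report:\n{report_text}\n\n"
        "Answer with the domain name only."
    )

def parse_label(llm_response: str) -> str:
    """Validate that the model's answer is a known taxonomy label."""
    answer = llm_response.strip()
    if answer not in RISK_DOMAINS:
        raise ValueError(f"Unrecognized label: {answer!r}")
    return answer

# In a real pipeline the prompt would be sent to an LLM API; here we
# only show the round trip with a stubbed model response.
prompt = build_prompt("Chatbot leaked users' personal conversation logs.")
label = parse_label("Privacy & security")
```

Validating the response against a closed label set is what makes the resulting classifications consistent enough to aggregate, in contrast to free-text tagging.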
The AI Risk Mitigation Taxonomy has three parts:
The AI Risk Mitigation Taxonomy is part of the MIT AI Risk Initiative, which aims to increase awareness and adoption of best practice AI risk management across the AI ecosystem.
The MIT AI Governance Mapping project classifies more than 1,000 cases of AI risk governance and regulation by risk coverage, sector, scope, and other relevant dimensions.
The AI governance landscape is complex and fragmented, with many documents proposing frameworks, standards, and guidance. This project addresses this challenge by using LLMs to categorize legal and governance documents from the Center for Security and Emerging Technology’s ETO AGORA (AI GOvernance and Regulatory Archive), a living collection of AI-relevant laws, regulations, standards, and other governance documents from the United States and around the world.
Get a quick preview of how we group risks by causal factors in our database. Search for one of the causal factors (e.g., 'pre-deployment') to see all risks categorized against that factor. For more detailed filtering and to freely download the data, explore the full database.
The Domain Taxonomy of AI Risks classifies risks from AI into seven domains and 24 subdomains.
Get a quick preview of how we group risks by domain in our database. Search for one of the domain or subdomain names (e.g., 'fraud') to see all risks categorized against that domain. For more detailed filtering and to freely download the data, explore the full database.
We provide examples of use cases for some key audiences below.
Feedback and useful input: Anka Reuel, Michael Aird, Greg Sadler, Matthijs Maas, Shahar Avin, Taniel Yusef, Elizabeth Cooper, Dane Sherburn, Noemi Dreksler, Uma Kalkar, CSER, GovAI, Nathan Sherburn, Andrew Lucas, Jacinto Estima, Kevin Klyman, Bernd W. Wirtz, Andrew Critch, Lambert Hogenhout, Zhexin Zhang, Ian Eisenberg, Stuart Russell, and Samuel Salzer.