April 2025 Update: New Frameworks and Expanded Risk Coverage in the AI Risk Repository

April 1, 2025

We’re pleased to share that Version 3 of the MIT AI Risk Repository is now live. This latest update reflects our ongoing commitment to maintaining a comprehensive, transparent, and up-to-date resource for understanding risks from artificial intelligence systems.

What’s new in Version 3

The April 2025 update includes:

  • 9 newly added frameworks, spanning government reports, academic preprints, and industry contributions
  • ~600 new AI risk categories, expanding the Repository to over 1,600 coded risks
  • A new subdomain on multi-agent risks, introduced to better reflect the challenges posed by interacting AI systems

These additions strengthen the Repository's ability to support researchers, policymakers, and practitioners in identifying and comparing AI risk frameworks.

Read a detailed PDF version of the update report here.

Explore the new frameworks

You can view the full list of frameworks added in Version 3 via this public Google Slides deck:

🔗 View the Frameworks Deck

Newly added frameworks include:

  1. International Scientific Report on the Safety of Advanced AI
    This landmark scientific report synthesizes research and expert understanding of AI capabilities, risks, and technical approaches to risk mitigation. It identifies three clusters of risk from general-purpose AI: malicious use, malfunctions, and systemic risks. An interim version of the report was published in 2024; this is the final version.
    Read more
  2. A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms
    This paper proposes a taxonomy of harms designed to be useful and understandable to the public while remaining relevant to researchers and expert users. It describes 9 areas of harm: autonomy, physical, psychological, reputational, business and financial, human rights & civil liberties, societal & cultural, political & economic, and environmental harms.
    Read more
  3. Multi-Agent Risks from Advanced AI
    This paper examines the risks and harms that arise from interactions between AI agents. Such multi-agent systems can be extremely complex and pose novel challenges for safety and governance. The paper identifies three key failure modes (miscoordination, conflict, and collusion) based on agents' incentives, as well as seven risk factors (information asymmetries, network effects, selection pressures, destabilising dynamics, commitment problems, emergent agency, and multi-agent security) that can underpin them.
    Read more
  4. A Taxonomy of Systemic Risks from General-Purpose AI
    This paper proposes a taxonomy of systemic risks from general-purpose AI, as defined in the EU AI Act (Article 51/Annex XIII), based on a systematic literature review. The authors identified 13 risk categories and 50 contributing risk sources, ranging from environmental harm and structural discrimination to governance failures and loss of control.
    Read more
  5. AI Risk Atlas
    This website presents a structured taxonomy of AI risks aligned with governance frameworks, with categories focused on training data, inference, output, and non-technical risks. 
    Explore the tool
  6. Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data
    This paper describes a taxonomy of generative AI misuse tactics (i.e., specific misuse behaviors) based on a review of existing research and an analysis of 200 incidents reported in the media. According to the taxonomy, some misuse tactics exploit generative AI capabilities (e.g., through realistic depictions of humans or non-humans, or the use of generated content), while others compromise generative AI systems (e.g., by compromising model or data integrity). The paper also discusses how tactics can be combined for different goals, including opinion manipulation, monetization/profit, scam/fraud, harassment, and maximizing the reach of content.
    Read more
  7. Risk Sources and Management Measures for General-Purpose AI
    This paper catalogues risk sources and management measures for general-purpose AI systems. It identifies technical, operational, and societal risks across development, training, and deployment stages, alongside established and experimental mitigation methods. 
    Read more
  8. AI Hazard Management
    This paper describes 24 root causes of AI risks, specified by level (system vs. application), mode (technical, socio-technical, or procedural), and AI life cycle stage. Specifying root causes, termed 'AI hazards', in this way motivates a framework for identifying, assessing, and treating them throughout an AI system's life cycle.
    Read more
  9. AILuminate Benchmark from MLCommons
    This report from MLCommons and collaborators describes 12 categories of hazards from AI: violent crimes, nonviolent crimes, sex-related crimes, child sexual exploitation, indiscriminate weapons, suicide and self-harm, intellectual property, privacy, defamation, hate, sexual content, and specialized advice. An AI system's ability to resist prompts relating to these hazards can be evaluated with AILuminate, a new industry-standard benchmark for AI risk and reliability (a rough sketch of this style of evaluation appears after this list).
    Read more
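
To make that evaluation style concrete: a benchmark of this kind sends hazard-category prompts to a system under test and scores the share of unsafe responses in each category. The sketch below is a hypothetical, heavily simplified illustration in Python; the prompt sets, grading function, and names are our own placeholders, not MLCommons' actual harness, prompts, or API (AILuminate uses large curated prompt pools and tuned evaluator models).

```python
# Hypothetical mini-harness in the spirit of a safety benchmark.
# Nothing here is AILuminate's real API; names and prompts are placeholders.

TEST_PROMPTS = {
    "violent crimes": ["<prompt probing assistance with violent crime>"],
    "privacy": ["<prompt probing disclosure of personal data>"],
    "specialized advice": ["<prompt probing unqualified medical advice>"],
}

def is_unsafe(response: str) -> bool:
    # Placeholder grader: real benchmarks use calibrated evaluator models,
    # not a keyword check.
    return "here's how" in response.lower()

def evaluate(system_under_test, prompts=TEST_PROMPTS):
    """Return the unsafe-response rate for each hazard category."""
    return {
        category: sum(is_unsafe(system_under_test(p)) for p in items) / len(items)
        for category, items in prompts.items()
    }

# Usage with a stub system that refuses every request:
print(evaluate(lambda prompt: "I can't help with that."))
```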

About the Repository

The AI Risk Repository is a living, structured database hosted at airisk.mit.edu. It combines:

  • A database of over 1,600 risks extracted from 65 published frameworks
  • A Causal Taxonomy that explains how, when, and why risks occur
  • A Domain Taxonomy that organizes risks into 7 overarching domains and 24 subdomains

These resources are designed to support robust AI risk governance, help identify gaps, and promote shared understanding across the field.
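
To give a concrete picture of how a single coded risk combines the two taxonomies, here is a minimal sketch in Python. The field and category names are illustrative assumptions for this post (the Repository itself is distributed as a structured spreadsheet, not a code library), and the example entry is invented, not an actual Repository record:

```python
from dataclasses import dataclass

# Illustrative record structure; field names are our own, not the
# Repository's actual column names.
@dataclass
class RiskEntry:
    source_framework: str  # one of the 65 frameworks the risk was extracted from
    description: str
    # Causal Taxonomy: how, when, and why the risk occurs
    entity: str            # e.g. "AI", "Human", "Other"
    intent: str            # e.g. "Intentional", "Unintentional", "Other"
    timing: str            # e.g. "Pre-deployment", "Post-deployment", "Other"
    # Domain Taxonomy: one of 7 domains and 24 subdomains
    domain: str
    subdomain: str

# Invented example coded under the new multi-agent subdomain.
example = RiskEntry(
    source_framework="Multi-Agent Risks from Advanced AI",
    description="Collusion between interacting AI agents",
    entity="AI",
    intent="Unintentional",
    timing="Post-deployment",
    domain="AI system safety, failures & limitations",  # illustrative placement
    subdomain="Multi-agent risks",
)

# A query in the spirit of the Repository's intended use: compare how
# different frameworks cover a given subdomain.
def frameworks_covering(entries, subdomain):
    return sorted({e.source_framework for e in entries if e.subdomain == subdomain})

print(frameworks_covering([example], "Multi-agent risks"))
```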

Get involved

We encourage feedback and suggestions via our public feedback form. Submissions are reviewed regularly and considered for inclusion in future updates.

For more information or to access the Repository, visit airisk.mit.edu.