December 4, 2025

December 2025 Update: New Frameworks and Expanded Risk Coverage in the AI Risk Repository

We’re pleased to share that Version 4 of the MIT AI Risk Repository is now live. This latest update reflects our ongoing commitment to maintaining a comprehensive, transparent, and up-to-date resource for understanding risks from artificial intelligence systems.

What’s new in Version 4

The December 2025 update includes:

  • 9 newly added frameworks, spanning government and industry reports, peer-reviewed journal articles, conference papers, and preprints
  • ~200 new AI risk categories, expanding the repository to over 1,700 coded risks

These additions further enrich the Repository’s ability to support researchers, policymakers, and practitioners in identifying and comparing AI risk frameworks.

Read a detailed PDF version of the update report here.


Newly added frameworks include:

A Closer Look at the Existing Risks of Generative AI: Mapping the Who, What, and How of Real-World Incidents

Li et al., 2025

This preprint presents a taxonomy of generative AI failures based on a systematic analysis of 499 publicly reported AI incidents. Incidents are categorized by the type of harm caused, how they occurred, and who experienced them.

Capabilities and Risks from Frontier AI

Department for Science, Innovation and Technology, 2023

This UK government discussion paper examines frontier AI's current capabilities, future development trajectories, and associated risks to inform policy discussions at the 2023 AI Safety Summit.

Dimensional Characterization and Pathway Modeling for Catastrophic AI Risks

Chin, 2025

This paper presents a framework for analyzing AI catastrophic risks through dimensional characterization across seven attributes (intent, competency, entity, polarity, linearity, reach, and order) and concrete risk pathway modeling. It applies this framework to six risks: CBRN threats, cyber offense, sudden loss of control, gradual loss of control, environmental risk, and geopolitical risk.

Emerging Risks and Mitigations for Public Chatbots: LILAC v1

Stanley & Lettie, 2024 

This MITRE report presents a taxonomy of AI risks based on an analysis of 135 real-world incidents involving conversational LLMs/chatbots. The authors link the included risks to mitigations identified through a review and develop a protocol for implementing them.

Embodied AI: Emerging Risks and Opportunities for Policy Action

Perlo et al., 2025

This paper examines risks from embodied AI (EAI): AI systems that can perceive, learn from, and act in the physical world through robots and other physical embodiments. The authors develop a comprehensive taxonomy of these risks across four categories: informational, economic, social, and physical.

Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy

Tang et al., 2024

This academic paper develops a framework of risks associated with the emerging scientific capabilities of AI agents, such as the ability to autonomously conduct experiments and make new discoveries. The authors classify identified risks by user intent (direct or indirect), scientific domain of the agent (chemical, biological, radiological, physical, informational, and emerging technology), and impact on the external environment (the natural environment, human health, and the socioeconomic environment).

Frontier AI Risk Management Framework (v1.0)

Shanghai AI Lab & Concordia AI, 2025

This report provides a six-stage framework for managing risks from general-purpose AI, comprising Risk Identification, Thresholds, Analysis, Evaluation, Mitigation, and Governance. The Risk Identification stage of the framework categorizes risks into four major types: misuse, loss of control, accident, and systemic risks.

Foundational Challenges in Assuring Alignment and Safety of Large Language Models

Anwar et al., 2024

This major collaborative research agenda identifies 18 foundational challenges and more than 200 research questions associated with assuring the alignment and safety of large language models. Challenges include risks from agentic LLMs such as multi-agent safety failures, dual-use capabilities open to malicious use, system untrustworthiness, disruptive socioeconomic impacts, and security vulnerabilities such as jailbreaks.

A Survey on Responsible LLMs: Inherent Risk, Malicious Use, and Mitigation Strategy

Wang et al., 2024

This survey presents a unified framework for responsible LLMs that maps risks, categorized as inherent (privacy leakage, hallucinations, value misalignment) or malicious use (toxicity, jailbreaking), to targeted mitigation strategies across four phases of the LLM lifecycle.

About the Repository

The AI Risk Repository is a living, structured database hosted at airisk.mit.edu. It combines:

  • A database of over 1,700 risks extracted from 74 published frameworks
  • A Causal Taxonomy that explains how, when, and why risks occur
  • A Domain Taxonomy that organizes risks into 7 overarching domains and 24 subdomains

These resources are designed to support robust AI risk governance, help identify gaps, and promote shared understanding across the field.
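
Because the Repository is distributed as a structured database, a brief sketch may help illustrate how its coded risks can be explored programmatically. The example below is a minimal sketch in Python, assuming a hypothetical CSV export named ai_risk_repository_v4.csv with columns named risk, domain, and subdomain; the actual download at airisk.mit.edu may use different file and column names.

    import csv
    from collections import Counter

    # Hypothetical filename and column headers; the real export from
    # airisk.mit.edu may differ.
    CSV_PATH = "ai_risk_repository_v4.csv"

    def load_risks(path):
        """Read the coded risks into a list of dicts, one per entry."""
        with open(path, newline="", encoding="utf-8") as f:
            return list(csv.DictReader(f))

    def risks_in_domain(risks, domain):
        """Filter entries by their Domain Taxonomy label."""
        return [r for r in risks if r.get("domain") == domain]

    if __name__ == "__main__":
        risks = load_risks(CSV_PATH)
        # Tally entries per domain to see how the 7 domains are populated.
        by_domain = Counter(r.get("domain", "unlabeled") for r in risks)
        for domain, count in by_domain.most_common():
            print(f"{domain}: {count} coded risks")

For example, risks_in_domain(risks, "Privacy & Security") would return every coded risk filed under that domain, assuming the export uses that label.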

Get involved

We encourage feedback and suggestions via our public feedback form. Submissions are reviewed regularly and considered for inclusion in future updates.

For more information or to access the Repository, visit airisk.mit.edu.
