New Version of the AI Risk Repository Preprint Now Available
April 23, 2025
We are pleased to announce the release of an updated version of our preprint, The AI Risk Repository: A Comprehensive Meta-Review, Database, and Taxonomy of Risks from Artificial Intelligence, published on 10 April 2025. Key updates include:
Integration of 22 new AI risk frameworks, bringing the total number of included documents to 65.
Expansion of the AI Risk Database to 1,612 unique risk entries, each systematically extracted and coded.
Introduction of a new risk subdomain on multi-agent risks, reflecting recent developments in AI research on complex agent interactions.
These updates are informed by ongoing expert consultation and reflect our continued commitment to maintaining the Repository as a living, authoritative resource.
About the AI Risk Repository
The AI Risk Repository is a structured and evolving resource that aims to consolidate and clarify how risks from artificial intelligence are categorized across academic, industry, and policy literatures. It was developed in response to a key challenge in the AI risk landscape: the lack of shared frameworks for understanding, comparing, and prioritizing risks.
The project involves:
A systematic review of published taxonomies and classifications of AI risk.
The development of two taxonomies:
A Causal Taxonomy, which categorizes risks by who causes them (human or AI), the intent behind them (intentional or unintentional), and when they occur (pre- or post-deployment).
A Domain Taxonomy, which classifies risks into seven thematic domains and 25 more granular subdomains.
A publicly accessible living database of coded risks, designed to support research, evaluation, policy, and practice.
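To make the two-taxonomy coding scheme concrete, the sketch below shows one way a single database entry could be represented. This is a hypothetical illustration, not the Repository's actual schema: the class and field names (`RiskEntry`, `entity`, `intent`, `timing`, `domain`, `subdomain`) and the example domain labels are assumptions for demonstration only.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative enums for the Causal Taxonomy's three dimensions
# (who causes the risk, with what intent, and when it occurs).
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    """A hypothetical coded risk, classified under both taxonomies."""
    description: str
    entity: Entity      # Causal Taxonomy: who causes the risk
    intent: Intent      # Causal Taxonomy: intentional or not
    timing: Timing      # Causal Taxonomy: pre- or post-deployment
    domain: str         # Domain Taxonomy: one of seven thematic domains
    subdomain: str      # Domain Taxonomy: one of 25 granular subdomains

# Example entry echoing the new multi-agent risk subdomain;
# the domain/subdomain strings here are placeholders.
entry = RiskEntry(
    description="Unintended failures arising from complex agent interactions",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="AI system safety and limitations",
    subdomain="Multi-agent risks",
)
print(entry.timing.value)  # -> post-deployment
```

Coding each risk along both axes is what lets entries be filtered and compared across the 65 source documents, e.g. selecting all unintentional, post-deployment risks within a given subdomain.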
Purpose and Impact
The Repository provides a foundation for more coherent and coordinated approaches to AI risk management. It enables:
Risk identification and prioritization
Development of auditing frameworks
Improved transparency in policy and governance processes
Identification of underexplored areas in AI safety research
We invite researchers, policymakers, developers, and auditors to explore the Repository and use it as a reference point for designing more comprehensive and accountable AI systems.