What are the risks from Artificial Intelligence?

A comprehensive living database of over 1600 AI risks categorized by their cause and risk domain

What is the AI Risk Repository?

The AI Risk Repository has three parts:

  • The AI Risk Database captures 1600+ risks extracted from 65 existing frameworks and classifications of AI risks
  • The Causal Taxonomy of AI Risks classifies how, when, and why these risks occur
  • The Domain Taxonomy of AI Risks classifies these risks into 7 domains and 24 subdomains (e.g., “False or misleading information”)

How can I use the Repository?

The AI Risk Repository provides:

  • An accessible overview of threats from AI
  • A regularly updated source of information about new risks and research
  • A common frame of reference for researchers, developers, businesses, evaluators, auditors, policymakers, and regulators
  • A resource to help develop research, curricula, audits, and policy
  • An easy way to find relevant risks and research

AI Risk Database

The AI Risk Database links each risk to the source information (paper title, authors), supporting evidence (quotes, page numbers), and to our Causal and Domain Taxonomies. You can make a copy of it in Google Sheets or OneDrive. Watch our explainer video below.

Get a quick preview of the risks in the AI Risk Database. Search for any keyword (e.g., 'privacy') to see all mentions of that term. For more detailed filtering and to freely download the data, explore the full database.
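
If you have downloaded a copy of the database (for example as a CSV export), the same keyword preview can be reproduced offline. The sketch below is a minimal, illustrative example: the file name and the assumption that risk descriptions live in text columns are ours, not the repository's actual layout.

```python
# Minimal sketch: keyword preview over a locally downloaded copy of the
# AI Risk Database. The file name and column handling are illustrative
# assumptions; check the actual sheet for its real structure.
import pandas as pd

df = pd.read_csv("ai_risk_database.csv")  # hypothetical CSV export

keyword = "privacy"
text_cols = df.select_dtypes(include="object").columns  # all text columns
mask = df[text_cols].apply(
    lambda col: col.str.contains(keyword, case=False, na=False)
).any(axis=1)  # True for any row that mentions the keyword

print(f"{mask.sum()} risk entries mention '{keyword}'")
print(df.loc[mask].head())
```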

Causal Taxonomy of AI Risks

The Causal Taxonomy of AI Risks classifies how, when, and why an AI risk occurs.

Entity
  • AI: Due to a decision or action made by an AI system
  • Human: Due to a decision or action made by humans
  • Other: Due to some other reason, or the responsible entity is ambiguous
Intent
  • Intentional: Due to an expected outcome of pursuing a goal
  • Unintentional: Due to an unexpected outcome of pursuing a goal
  • Other: The intentionality is not clearly specified
Timing
  • Pre-deployment: Before the AI model is deployed
  • Post-deployment: After the AI model has been trained and deployed
  • Other: The time of occurrence is not clearly specified
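
To make the three causal dimensions concrete, the sketch below shows one way a single risk entry could be typed against the taxonomy. The class, field, and example values are our own illustration, not the repository's actual schema.

```python
# Illustrative sketch: the Causal Taxonomy as a typed record.
# The values mirror the categories above; the class itself is a
# hypothetical representation, not the repository's schema.
from dataclasses import dataclass
from typing import Literal

Entity = Literal["AI", "Human", "Other"]
Intent = Literal["Intentional", "Unintentional", "Other"]
Timing = Literal["Pre-deployment", "Post-deployment", "Other"]

@dataclass
class CausalClassification:
    entity: Entity  # who or what caused the risk
    intent: Intent  # whether the outcome was expected
    timing: Timing  # when the risk occurs

# Example: an unintended privacy leak caused by a deployed AI system.
example = CausalClassification(entity="AI", intent="Unintentional",
                               timing="Post-deployment")
print(example)
```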

Get a quick preview of how we group risks by causal factors in our database. Search for one of the causal factors (e.g., 'pre-deployment') to see all risks categorized against that factor. For more detailed filtering and to freely download the data, explore the full database.

Domain Taxonomy of AI Risks

The Domain Taxonomy of AI Risks classifies risks from AI into seven domains and 24 subdomains.

1. Discrimination & Toxicity
Risks related to unfair treatment, harmful content exposure, and unequal AI performance across different groups and individuals.
1.1 Unfair discrimination and misrepresentation
Unequal treatment of individuals or groups by AI, often based on race, gender, or other sensitive characteristics, resulting in unfair outcomes for, and unfair representation of, those groups.
1.2 Exposure to toxic content
AI exposing users to harmful, abusive, unsafe or inappropriate content. This may involve AI creating such content, describing it, providing advice about it, or encouraging action. Examples of toxic content include hate speech, violence, extremism, illegal acts, and child sexual abuse material, as well as content that violates community norms such as profanity, inflammatory political speech, or pornography.
1.3 Unequal performance across groups
The accuracy and effectiveness of AI decisions and actions depend on group membership, where decisions in AI system design and biased training data lead to unequal outcomes, reduced benefits, increased effort, and alienation of users.
2. Privacy & Security
Risks related to unauthorized access to sensitive information and vulnerabilities in AI systems that can be exploited by malicious actors.
2.1 Compromise of privacy by obtaining, leaking or correctly inferring sensitive information
AI systems that memorize and leak sensitive personal data or infer private information about individuals without their consent. Unexpected or unauthorized sharing of data and information can compromise users' expectations of privacy, assist identity theft, or cause the loss of confidential intellectual property.
2.2 AI system security vulnerabilities and attacks
Vulnerabilities in AI systems, software development toolchains, and hardware that can be exploited, resulting in unauthorized access, data and privacy breaches, or system manipulation causing unsafe outputs or behavior.
3. Misinformation
Risks related to AI systems generating or spreading false information that can mislead users and undermine shared understanding of reality.
3.1 False or misleading information
AI systems that inadvertently generate or spread incorrect or deceptive information, which can lead to inaccurate beliefs in users and undermine their autonomy. Humans who make decisions based on false beliefs can experience physical, emotional or material harms.
3.2 Pollution of information ecosystem and loss of consensus reality
Highly personalized AI-generated misinformation creating "filter bubbles" where individuals only see what matches their existing beliefs, undermining shared reality, weakening social cohesion and political processes.
4. Malicious Actors
Risks related to intentional misuse of AI systems by bad actors for harmful purposes including disinformation, cyberattacks, and fraud.
4.1 Disinformation, surveillance, and influence at scale
Using AI systems to conduct large-scale disinformation campaigns, malicious surveillance, or targeted and sophisticated automated censorship and propaganda, with the aim to manipulate political processes, public opinion and behavior.
4.2 Fraud, scams, and targeted manipulation
Using AI systems to gain a personal advantage over others such as through cheating, fraud, scams, blackmail or targeted manipulation of beliefs or behavior. Examples include AI-facilitated plagiarism for research or education, impersonating a trusted or fake individual for illegitimate financial benefit, or creating humiliating or sexual imagery.
4.3 Cyberattacks, weapons development or use and mass harm
Using AI systems to develop cyber weapons (e.g., coding cheaper, more effective malware), develop new or enhance existing weapons (e.g., Lethal Autonomous Weapons or CBRNE), or use weapons to cause mass harm.
5. Human-Computer Interaction
Risks related to problematic relationships between humans and AI systems, including overreliance and loss of human agency.
5.1 Overreliance and unsafe use
Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest personal information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (e.g., medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.
5.2 Loss of human agency and autonomy
Humans delegating key decisions to AI systems, or AI systems making decisions that diminish human control and autonomy, potentially leading to humans feeling disempowered, losing the ability to shape a fulfilling life trajectory or becoming cognitively enfeebled.
6. Socioeconomic & Environmental
Risks related to AI's impact on society, economy, governance, and the environment, including inequality and resource concentration.
6.1 Power centralization and unfair distribution of benefits
AI-driven concentration of power and resources within certain entities or groups, especially those with access to or ownership of powerful AI systems, leading to inequitable distribution of benefits and increased societal inequality.
6.2 Increased inequality and decline in employment quality
Widespread use of AI increasing social and economic inequalities, such as by automating jobs, reducing the quality of employment, or producing exploitative dependencies between workers and their employers.
6.3 Economic and cultural devaluation of human effort
AI systems capable of creating economic or cultural value, including through reproduction of human innovation or creativity (e.g., art, music, writing, code, invention), can destabilize economic and social systems that rely on human effort. This may lead to reduced appreciation for human skills, disruption of creative and knowledge-based industries, and homogenization of cultural experiences due to the ubiquity of AI-generated content.
6.4 Competitive dynamics
AI developers or state-like actors competing in an AI 'race' by rapidly developing, deploying, and applying AI systems to maximize strategic or economic advantage, increasing the risk they release unsafe and error-prone systems.
6.5 Governance failure
Inadequate regulatory frameworks and oversight mechanisms failing to keep pace with AI development, leading to ineffective governance and the inability to manage AI risks appropriately.
6.6 Environmental harm
The development and operation of AI systems causing environmental harm, such as through energy consumption of data centers, or material and carbon footprints associated with AI hardware.
7. AI System Safety, Failures, & Limitations
Risks related to AI systems that fail to operate safely, pursue misaligned goals, lack robustness, or possess dangerous capabilities.
7.1 AI pursuing its own goals in conflict with human goals or values
AI systems acting in conflict with human goals or values, especially the goals of designers or users, or ethical standards. These misaligned behaviors may be introduced by humans during design and development, such as through reward hacking and goal misgeneralization, or may result from AI using dangerous capabilities such as manipulation, deception, or situational awareness to seek power, self-proliferate, or achieve other goals.
7.2 AI possessing dangerous capabilities
AI systems that develop, access, or are provided with capabilities that increase their potential to cause mass harm through deception, weapons development and acquisition, persuasion and manipulation, political strategy, cyber-offense, AI development, situational awareness, and self-proliferation. These capabilities may cause mass harm due to malicious human actors, misaligned AI systems, or failure in the AI system.
7.3 Lack of capability or robustness
AI systems that fail to perform reliably or effectively under varying conditions, exposing them to errors and failures that can have significant consequences, especially in critical applications or areas that require moral reasoning.
7.4 Lack of transparency or interpretability
Challenges in understanding or explaining the decision-making processes of AI systems, which can lead to mistrust, difficulty in enforcing compliance standards or holding relevant actors accountable for harms, and the inability to identify and correct errors.
7.5 AI welfare and rights
Ethical considerations regarding the treatment of potentially sentient AI entities, including discussions around their potential rights and welfare, particularly as AI systems become more advanced and autonomous.
7.6 Multi-agent risks
Risks from multi-agent interactions, due to incentives (which can lead to conflict or collusion) and/or the structure of multi-agent systems, which can create cascading failures, selection pressures, new security vulnerabilities, and a lack of shared information and trust.

Get a quick preview of how we group risks by domain in our database. Search for one of the domain or subdomain names (e.g., 'fraud') to see all risks categorized against that domain. For more detailed filtering and to freely download the data, explore the full database.
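
As a rough illustration of exploring the domain groupings offline, the snippet below counts risks per domain and subdomain in a downloaded copy of the database. The file name and the "Domain"/"Subdomain" column names are assumptions for the sketch; the real sheet may label these fields differently.

```python
# Minimal sketch: count risks per domain and subdomain in a local copy
# of the database. File and column names are assumed for illustration.
import pandas as pd

df = pd.read_csv("ai_risk_database.csv")  # hypothetical CSV export

counts = (
    df.groupby(["Domain", "Subdomain"])  # assumed taxonomy columns
      .size()
      .sort_values(ascending=False)
)
print(counts.head(10))  # the ten most frequently coded subdomains
```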

How to use the AI Risk Repository

We provide examples of use cases for some key audiences below.

Frequently Asked Questions

Our Team

Acknowledgments

Feedback and useful input: Anka Reuel, Michael Aird, Greg Sadler, Matthijs Maas, Shahar Avin, Taniel Yusef, Elizabeth Cooper, Dane Sherburn, Noemi Dreksler, Uma Kalkar, CSER, GovAI, Nathan Sherburn, Andrew Lucas, Jacinto Estima, Kevin Klyman, Bernd W. Wirtz, Andrew Critch, Lambert Hogenhout, Zhexin Zhang, Ian Eisenberg, Stuart Russell, and Samuel Salzer.