Project Updates: December 2024

December 30, 2024

Key points

News about the AI Risk Repository

  • The Repository (airisk.mit.edu) has reached 90,000+ users since August 2024, and is being used by companies, governments, and researchers globally
  • 13 new AI risk frameworks were added to the Repository in December 2024 (see post)
  • We are exploring new features based on user feedback: risk domain profiles, improved visualizations, and mapping against regulatory / standards frameworks (e.g., NIST, EU AI Act)

New initiative: The AI Risk Index

  • This project, commencing Q1 2025, will document and evaluate organizational and institutional responses to high-priority AI risks
  • Our goal is to improve coordination on AI risks by identifying gaps and emerging practices in mitigating risks from AI
  • Watch a 10-minute video briefing on this project

Opportunities to contribute

  • We welcome feedback on the Repository, financial contributions, and expert input (details in the full update below)

Read on for the full update!

Full update

News about the AI Risk Repository

The reach of the Repository has been much greater than we anticipated. The website, airisk.mit.edu, has received about 90,000 visits since its launch in August 2024 and is linked to by about 2,000 other websites. Several governments and large companies have told us that they are using the Repository in their work on AI risks. Others have leveraged the Repository in aligned projects, for example by using it to classify incidents of harm from AI.

New frameworks and classifications added to the Repository

User and expert suggestions for new frameworks and classifications are reviewed on a rolling basis by the core research team. As a result, 13 new documents have been added to the Repository. These documents were published between 2018 and 2024 and are a mix of government and industry reports, peer-reviewed journal articles, and preprints, with authors from the US, UK, Australia, Canada, China, and Germany. The types of AI examined include generative AI, large language models, and “Artificial General Intelligence”, in addition to generic definitions of AI.

2025 plans for the Repository

We are committed to maintaining and updating the Repository through 2025 as a piece of knowledge infrastructure for people and organizations working to understand and address risks from AI. We intend to share an update each quarter in 2025 covering (1) new frameworks added to the Repository and (2) changes in risk definitions based on those frameworks. A stretch goal for 2025 is a major update to the Repository, which could involve adding or removing categories of risk.

New initiative: The AI Risk Index

In 2025-2026, we are establishing a new project, the AI Risk Index, to address a major problem in managing risks from AI:

  • Key AI ecosystem actors (e.g., developers, deploying firms, regulators) lack visibility into how others are responding to AI risks.
  • This makes it harder to coordinate action on priority risks or effective mitigations.

The project has four phases:

  1. Prioritize: Prioritize AI risks and map their relevance to key ecosystem actors using expert consultation
  2. Document: Systematically document the AI risk responses of AI developers, major companies, and governance bodies, drawing on public statements and policies as well as a targeted survey
  3. Evaluate: Evaluate the effectiveness of current responses to AI risks using expert consultation
  4. Adapt & scale: Develop tools and methods for replication, adaptation, and scale-up

Watch a 10-minute video briefing on this project to learn more! We are seeking collaborators for this work. If you’re interested in contributing to this project (e.g., as an advisor, an expert, a financial contributor, or an end-user), or are working on an aligned project, please contact us for a more detailed briefing.

Watch a video briefing (10 min) on the AI Risk Index

Opportunities to contribute

Feedback

You can provide feedback on the AI Risk Repository (Google form), including suggesting additional frameworks for consideration.

We are also exploring additional activities that leverage the Repository; to help with our future planning, please tell us (using the feedback form) if any of the following would be useful to you:

  • Risk category profiles - Could include an extended definition / description of the risk, real-world incidents of harm associated with the risk, more in-depth analysis of the included frameworks that discuss the risk, and a list of experts working in the risk area
  • User experience - Could improve how users discover insights from the Repository and its taxonomies, such as through interactive web-based filters, automated reports, or other visualizations
  • Expanding / updating the Causal and Domain Taxonomies - Could add a second layer of complexity to the Causal Taxonomy (e.g., distinguishing between stages of pre-deployment), or a third layer of complexity to the Domain Taxonomy (e.g., distinguishing between sub-types of misinformation)
  • Linking / “crosswalking” the Taxonomies with standards or regulatory frameworks - Could identify how, for example, activities described in the “Map” function of the NIST AI Risk Management Framework could benefit from the Domain Taxonomy in the AI Risk Repository (a sketch of one possible crosswalk entry follows this list)
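
To make the crosswalk idea concrete, here is a minimal sketch, in Python, of what one crosswalk entry might look like. The class, field names, and example values are hypothetical illustrations, not a finalized schema or an existing feature of the Repository.

    # Hypothetical sketch: one entry in a crosswalk between the Repository's
    # Domain Taxonomy and an external framework. Field names and example
    # values are illustrative assumptions, not a finalized schema.
    from dataclasses import dataclass

    @dataclass
    class CrosswalkEntry:
        repository_domain: str   # domain from the AI Risk Repository Domain Taxonomy
        external_framework: str  # e.g., "NIST AI RMF" or "EU AI Act"
        external_reference: str  # the function/section the domain maps onto
        notes: str               # rationale for the mapping

    # Example (placeholder content): relating the Misinformation domain to
    # activities under the NIST AI RMF "Map" function.
    entry = CrosswalkEntry(
        repository_domain="Misinformation",
        external_framework="NIST AI RMF",
        external_reference="Map",
        notes="Domain profile could inform context-mapping activities.",
    )
    print(entry)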

Funding

We are accepting financial contributions to support the ongoing development of the AI Risk Repository and the AI Risk Index project. We are currently finalizing arrangements with several large financial collaborators. Please contact us to discuss funding opportunities and receive detailed information about our research program and planned activities.

Expertise

We are seeking experts in AI, risk management, governance, or related fields (published or working in the area for more than five years) to provide input on key research activities. If you match these criteria or can connect us with expert networks, particularly curated databases of domain experts, please contact us. Experts will be invited to participate in online workshops and/or online surveys (e.g., Delphi studies) to:

  • Help validate and refine the risks in the AI Risk Repository
  • Prioritize the risks based on criteria such as severity and type of harm, likelihood, and frequency (one way such ratings might be aggregated is sketched after this list)
  • Determine which risks are most relevant for different types of actors
  • Evaluate mitigations and other responses to risks from AI identified through a review of existing organizational and institutional practices, including estimating their effectiveness, ease of implementation, and cost
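
As a rough illustration of how expert ratings could feed into prioritization, here is a minimal sketch in Python, assuming experts rate each risk on a 1-5 scale per criterion and that ratings are combined via a per-criterion median and an unweighted mean. The scale, weighting, risk names, and numbers are invented for illustration; they are not our method or data.

    # Hypothetical sketch: aggregating expert ratings (e.g., from a
    # Delphi-style survey) into a simple priority score per risk.
    # The 1-5 scale, equal weights, and sample data are assumptions.
    from statistics import median

    # ratings[risk][criterion] -> list of individual expert ratings (1-5)
    ratings = {
        "misinformation": {"severity": [4, 3, 4], "likelihood": [5, 4, 4], "frequency": [4, 4, 5]},
        "discrimination": {"severity": [5, 4, 5], "likelihood": [3, 3, 4], "frequency": [3, 2, 3]},
    }

    def priority_score(criteria: dict) -> float:
        # Median per criterion (robust to outlier ratings), then an
        # unweighted mean across criteria.
        medians = [median(values) for values in criteria.values()]
        return sum(medians) / len(medians)

    # Rank risks from highest to lowest priority score.
    for risk in sorted(ratings, key=lambda r: -priority_score(ratings[r])):
        print(f"{risk}: {priority_score(ratings[risk]):.2f}")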

Give feedback on the AI Risk Repository

Contact us

© 2025 MIT AI Risk Repository