AI Incident Tracker

Incident Overview

Insights:

Causal taxonomy:

EU AI Act Risk Classification:

AI System Primary Purpose


Interactive Chart

Explore the chart by applying filters (e.g. Domain, Entity, Intent, Timing, or AI Purpose) and viewing it by different categories to see the distribution of incidents matching your filters.
You can select incidents in which any category of harm exceeds a severity threshold, or set separate severity thresholds for individual harm categories.
The links below the chart apply preset example filter configurations.
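The threshold logic described above can be sketched as follows. This is a hypothetical illustration only: the field names, records, and filter parameters are invented for the example and are not the tracker's actual schema.

```python
# Hypothetical incident records; fields are illustrative, not the real schema.
incidents = [
    {"id": 1, "domain": "Privacy", "harm_severity": {"physical": 0, "financial": 3}},
    {"id": 2, "domain": "Misinformation", "harm_severity": {"physical": 1, "financial": 0}},
    {"id": 3, "domain": "Privacy", "harm_severity": {"physical": 4, "financial": 2}},
]

def matches(incident, domain=None, any_harm_at_least=None, per_category=None):
    """Return True if the incident passes every supplied filter."""
    if domain is not None and incident["domain"] != domain:
        return False
    sev = incident["harm_severity"]
    # "any category of harm exceeds a severity threshold" style filter
    if any_harm_at_least is not None and max(sev.values()) < any_harm_at_least:
        return False
    # per-category thresholds, e.g. {"physical": 2}
    if per_category is not None:
        for cat, threshold in per_category.items():
            if sev.get(cat, 0) < threshold:
                return False
    return True

selected = [i["id"] for i in incidents if matches(i, domain="Privacy", any_harm_at_least=3)]
print(selected)  # [1, 3]
```

A preset filter configuration is then just a saved set of keyword arguments for `matches`.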

Preset filter configurations:

Incident Numbers by Risk Domain

This chart breaks down incidents by risk domain, based on the MIT AI Risk Domain Taxonomy.
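The underlying aggregation is a simple count of incidents per domain label. A minimal sketch, using made-up domain labels rather than real AIID data:

```python
from collections import Counter

# Illustrative only: one domain label per classified incident.
incident_domains = [
    "Privacy & Security", "Misinformation", "Privacy & Security",
    "Discrimination & Toxicity", "Misinformation", "Privacy & Security",
]

counts = Counter(incident_domains)
for domain, n in counts.most_common():
    print(domain, n)
```

Applying any of the filters described above simply shrinks the list of incidents before counting.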

Insights:

Preset filter configurations:

Important context for interpreting these results:

The data presented is the output of an LLM classifier pipeline applied to the raw reports from the AI Incident Database (AIID), which relies on submissions from the public and from subject matter experts. The quality, reliability, and depth of detail in the reports vary across the dataset. As the reporting is voluntary, the dataset is inevitably subject to some degree of sampling bias.
The LLM classification tool has been developed iteratively, and its agreement with human expert consensus is comparable to the agreement between two independent human experts (article forthcoming). Spot-checks have been used to provide feedback on misclassifications and to iterate the tool, improving its reliability; however, there are likely to remain incidents where the LLM classification does not match expert consensus.
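One standard way to make an agreement comparison like this concrete is a chance-corrected statistic such as Cohen's kappa, computed once for LLM-vs-expert and once for expert-vs-expert on the same items. The sketch below uses invented labels; it illustrates the style of comparison, not the forthcoming article's actual method or data.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a = Counter(labels_a)
    freq_b = Counter(labels_b)
    # Agreement expected by chance from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative labels only, not real classifications.
llm     = ["Privacy", "Bias", "Privacy", "Safety", "Bias", "Safety"]
human_1 = ["Privacy", "Bias", "Safety",  "Safety", "Bias", "Safety"]
human_2 = ["Privacy", "Bias", "Privacy", "Safety", "Misinformation", "Safety"]

print(cohens_kappa(llm, human_1))      # LLM vs one expert
print(cohens_kappa(human_1, human_2))  # expert vs expert, for comparison
```

"Comparable agreement" then means the first kappa is in the same range as the second.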

Therefore, patterns and trends observed in the data should be treated as indicative and validated through further analysis.

Explore Other Views