AI Incident Tracker

Incident Timeline

Using an LLM pipeline, we classified all the incidents in the AI Incident Database against the MIT Causal Taxonomy, the MIT Domain Taxonomy, the EU AI Act Risk Levels, and a harm severity scale covering 10 categories of harm.

Insights:

Interactive Chart

Explore the chart by adding filters (e.g. Domain, Entity, Intent, Timing, or AI Purpose) and stacking by different categories to see how incidents are distributed across each year.
Select incidents where any category of harm exceeds a severity threshold, or set severity thresholds for individual harm categories.
The links below the chart quickly apply preset example filter configurations.
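The two threshold modes described above can be sketched in a few lines. This is an illustrative sketch only, assuming incidents are represented as dictionaries mapping harm category to a severity score; the category names and scoring scale here are hypothetical, not the tracker's actual schema.

```python
from typing import Dict, List

# An incident's harm profile: harm category -> severity score (hypothetical scale 0-5).
Incident = Dict[str, int]

def any_exceeds(incident: Incident, threshold: int) -> bool:
    """True if ANY harm category meets or exceeds the single threshold."""
    return any(score >= threshold for score in incident.values())

def meets_per_category(incident: Incident, thresholds: Dict[str, int]) -> bool:
    """True if EVERY specified category meets its own threshold."""
    return all(incident.get(cat, 0) >= t for cat, t in thresholds.items())

incidents: List[Incident] = [
    {"physical": 4, "financial": 1},
    {"physical": 0, "financial": 2},
]

# "Any category" mode: keeps only the first incident (physical severity 4 >= 3).
print([i for i in incidents if any_exceeds(i, 3)])

# Per-category mode: keeps only the second incident (financial severity 2 >= 2... both qualify? no:
# first incident has financial 1 < 2, so only the second passes).
print([i for i in incidents if meets_per_category(i, {"financial": 2})])
```

The "any category" mode answers "how severe was the worst harm?", while the per-category mode lets you drill into one kind of harm at a time.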

Preset filter configurations:

Important context for interpreting these results:

The data presented is the output of an LLM classifier pipeline applied to the raw reports from the AI Incident Database (AIID), which relies on submissions from the public and from subject matter experts. The quality, reliability, and depth of detail of the reports vary across the dataset. Because reporting is voluntary, the dataset is inevitably subject to some degree of sampling bias.
The LLM classification tool has been developed iteratively, and its agreement with human expert consensus is comparable to the agreement between two independent human experts (article forthcoming). Spot-checks have provided feedback on misclassifications and informed iterations of the tool, improving its reliability; however, some incidents likely remain where the LLM classification does not match expert consensus.
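Agreement between the classifier and expert consensus can be quantified with a standard chance-corrected metric such as Cohen's kappa. The sketch below is an assumption about how such a comparison might be run (the forthcoming article may use a different metric); the labels are invented for illustration.

```python
from collections import Counter
from typing import Sequence

def cohen_kappa(a: Sequence[str], b: Sequence[str]) -> float:
    """Chance-corrected agreement between two annotators (Cohen's kappa)."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if both annotators labelled independently at
    # their own marginal label frequencies.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical labels: LLM pipeline vs. human expert on four incidents.
llm    = ["misuse", "bug", "misuse", "bias"]
expert = ["misuse", "bug", "bias",   "bias"]
print(round(cohen_kappa(llm, expert), 2))  # -> 0.64
```

Comparing the LLM-vs-expert kappa against the expert-vs-expert kappa on the same sample is one straightforward way to substantiate the "comparable to two independent human experts" claim.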

Patterns and trends observed in the data should therefore be treated as indicative and validated through further analysis.

Explore Other Views