Sources of Risk of AI Systems

March 4, 2025

What are the risks from AI?

This week we spotlight the fourteenth risk framework included in the AI Risk Repository: 

Reproduced from Sources of Risk of AI Systems by Steimers, A. & Schneider, M., published in International Journal of Environmental Research and Public Health, available at https://doi.org/10.3390/ijerph19063641.

Steimers, A., & Schneider, M. (2022). Sources of Risk of AI Systems. International Journal of Environmental Research and Public Health, 19(6). https://doi.org/10.3390/ijerph19063641

This paper presents a taxonomy of AI-specific sources of risk. The taxonomy classifies individual sources of risk into those that relate to ethical aspects of AI systems (i.e., fairness, privacy, and degree of automation and control) and those that influence the reliability and robustness of AI systems (i.e., complexity of the intended task and usage environment, transparency and explainability, security, system hardware, and technological maturity).

Key features of the framework and associated paper:

  • Proposes and explains a risk management process that integrates sources of AI risk into the risk assessment of a system
  • Analyses differences between AI systems based on modern machine learning methods and classical software 
  • Evaluates current research fields of trustworthy AI to identify sources of AI risk 
  • Provides an extended discussion of each identified source of risk, including its challenges 

Disclaimer

This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper; credit goes to André Steimers and Moritz Schneider. For full details, please refer to the original publication: https://doi.org/10.3390/ijerph19063641

Further engagement 

View all the frameworks included in the AI Risk Repository 

Sign up for our project newsletter