Towards Risk-Aware Artificial Intelligence and Machine Learning Systems: An Overview
December 19, 2025
This week we spotlight the twenty-first framework of risks from AI included in the AI Risk Repository: Zhang, X., Chan, F. T. S., Yan, C., & Bose, I. (2022). Towards risk-aware artificial intelligence and machine learning systems: An overview. Decision Support Systems, 159, 113800. https://doi.org/10.1016/j.dss.2022.113800
Paper Focus: This paper provides a systematic overview of risks that impact the prediction performance of systems built with AI and machine learning (ML) models. It organizes AI/ML risks into two main categories: data-level risk and model-level risk. It also outlines the root causes, potential outcomes, and frequency of each of these risks.
1. Data-level risk includes:
Data bias: Some groups are over-represented and others under-represented in the data, leading to poorer performance for some groups or situations
Dataset shift: The dataset on which the AI/ML model was developed has a different distribution from the one on which it is deployed (see the sketch after this list)
Out-of-domain data: AI/ML models encounter input data that falls outside the domain of the training data manifold
Adversarial attack: Intentional manipulation of input data leads an AI/ML model to make a wrong prediction or misclassification
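The paper treats dataset shift as a data-level risk; as one illustration (not taken from the paper), a simple per-feature two-sample Kolmogorov–Smirnov test can flag when deployment data diverge from the training distribution. The data, feature count, and significance threshold below are hypothetical choices for a minimal sketch.

```python
# Minimal sketch: flagging possible dataset shift by comparing the
# distribution of each feature in the training data against incoming
# deployment data with a two-sample Kolmogorov-Smirnov test.
# The synthetic data and the 0.01 threshold are illustrative, not from the paper.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=0)

# Hypothetical data: training features vs. features seen in deployment,
# where the second feature has drifted upward.
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))
X_deploy = np.column_stack([
    rng.normal(loc=0.0, scale=1.0, size=500),   # unchanged feature
    rng.normal(loc=0.8, scale=1.0, size=500),   # shifted feature
])

ALPHA = 0.01  # significance threshold for flagging drift (illustrative)

for j in range(X_train.shape[1]):
    stat, p_value = ks_2samp(X_train[:, j], X_deploy[:, j])
    drifted = p_value < ALPHA
    print(f"feature {j}: KS statistic={stat:.3f}, p={p_value:.4f}, "
          f"{'possible shift' if drifted else 'no evidence of shift'}")
```

In practice, such a check would run periodically on batches of deployment data, with flagged features prompting retraining or closer review rather than automatic action.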
2. Model-level risk includes:
Model bias: Biases introduced during model development (due to biases in the training data, algorithms, and training procedures)
Model misspecification: Model assumptions are inappropriate for data used in training (due to model form error, model overfitting, and variable inclusion error)
Model prediction uncertainty: Uncertainty in predictions (due to uncertainty in model parameters and model structure), as sketched below
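The paper lists prediction uncertainty as a model-level risk. As an illustration outside the paper itself, one common way to get a rough handle on it is a bootstrap ensemble: fit several models on resampled training data and read the spread of their predictions as an uncertainty proxy. The synthetic data, ensemble size, and model class in this sketch are all assumptions for illustration.

```python
# Minimal sketch: estimating model prediction uncertainty with a
# bootstrap ensemble. Each member is fit on a resampled training set,
# and the spread of the members' predictions at a query point gives a
# rough uncertainty estimate. Data, ensemble size, and the degree-1
# polynomial model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical training data: noisy linear relationship.
x = rng.uniform(0, 10, size=200)
y = 2.0 * x + 1.0 + rng.normal(scale=2.0, size=x.shape)

N_MODELS = 50          # ensemble size (illustrative)
x_query = 12.0         # query point beyond the training range

predictions = []
for _ in range(N_MODELS):
    idx = rng.integers(0, len(x), size=len(x))      # bootstrap resample
    coeffs = np.polyfit(x[idx], y[idx], deg=1)      # fit one ensemble member
    predictions.append(np.polyval(coeffs, x_query)) # predict at the query point

predictions = np.array(predictions)
print(f"mean prediction at x={x_query}: {predictions.mean():.2f}")
print(f"ensemble std (uncertainty proxy): {predictions.std():.2f}")
```

Because the query point lies outside the training range, the ensemble spread is noticeably larger than it would be for an in-range input, which is exactly the kind of signal a risk-aware system could use to defer or escalate a decision.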
Key features of the framework and associated paper:
Focuses on challenges in high-stakes decision settings, such as healthcare and transport safety, where small inaccuracies can lead to serious consequences
Outlines issues in risk analysis and management of AI/ML systems, such as developing a systematic risk modeling framework and establishing a test bed for risk analysis
Suggests opportunities to draw on other disciplines to develop risk-aware AI/ML systems, including defining the safety margin for AI/ML systems and incorporating concepts from the field of reliability engineering
⚠️ Disclaimer: This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper and credit goes to Xiaoge Zhang, Felix T. S. Chan, Chao Yan, and Indranil Bose. For the full details, please refer to the original publication: https://doi.org/10.1016/j.dss.2022.113800.