Artificial Intelligence Trust, Risk and Security Management (AI TRiSM)

February 19, 2026

What are the risks from AI? 

This week we spotlight the 29th framework of risks from AI included in the AI Risk Repository: Habbal, A., Ali, M. K., & Abuzaraida, M. A. (2024). Artificial Intelligence Trust, Risk and Security Management (AI TRiSM): Frameworks, applications, challenges and future research directions. Expert Systems with Applications, 240, 122442. https://doi.org/10.1016/j.eswa.2023.122442

Paper focus

This paper reviews the AI Trust, Risk, and Security Management (AI TRiSM) framework, which is designed to help organizations manage AI-related security, privacy, and ethical risks.

Included risk categories

This paper presents an overview of AI challenges organized under the AI TRiSM framework’s three primary pillars, each comprising distinct threat types and their associated harms.

1. AI trust management

  • Bias and discrimination (destruction of public trust, hindrance to AI adoption, etc.)
  • Privacy invasion (erosion of user trust, compromised sensitive data, etc.)

2. AI risk management

  • Society manipulation (fostering social division, contributing to an environment susceptible to misinformation, etc.)
  • Deepfake technology (damaging reputations, undermining public trust by generating deceptive content, etc.)
  • Lethal Autonomous Weapons Systems (LAWS) (misuse, uncontrolled use of AI in warfare, etc.)

3. AI security management

  • Malicious use of AI (breach of sensitive data, compromised system integrity, etc.)
  • Insufficient security measures (unauthorized access to sensitive information, potential misuse of AI systems, etc.)

Key features of the framework and associated paper

  • Focuses on AI trust, risk, and security management, particularly in sectors such as healthcare and finance
  • Based on a synthesis of frameworks and methods for AI risk mitigation drawn from the academic literature
  • Brings together separate trust, risk evaluation, and security protocols, and is designed to be applied throughout the entire life cycle of an AI system

⚠️Disclaimer: This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper and credit goes to Adib Habbal, Mohamed Khalif Ali, and Mustafa Ali Abuzaraida. For the full details, please refer to the original publication: https://doi.org/10.1016/j.eswa.2023.122442.

Further engagement 

View all the frameworks included in the AI Risk Repository 

Sign up for our project newsletter

Featured blog content