Examining the differential risk from high-level artificial intelligence and the question of control

December 1, 2024

❓ What are the risks from AI?

This week we summarize the seventh risk framework included in the AI Risk Repository: Kilian, K. A., Ventura, C. J., & Bailey, M. M. (2023). Examining the differential risk from high-level artificial intelligence and the question of control. Futures, 151, 103182. https://doi.org/10.1016/j.futures.2023.103182


This study investigates the risks and uncertainties that could arise from advanced AI development, including artificial general intelligence. It also models how variations in social and technological change can affect these outcomes.

The research presents a spectrum of risk from advanced AI systems, divided into four classes: 

1️⃣ Misuse Risks

  • 1.1 AI-enabled cyberattacks
  • 1.2 Disinformation or misinformation
  • 1.3 Deep fake media generation
  • 1.4 Ubiquitous surveillance

2️⃣ Accident Risks

  • 2.1 Single system failures
  • 2.2 Multi-system failure cascades
  • 2.3 Specification errors
  • 2.4 Contagion and amplification

3️⃣ Structural Risks

  • 3.1 Value erosion
  • 3.2 Decision erosion
  • 3.3 Offense-defense balance disruption
  • 3.4 Uncertainty
  • 3.5 Preference manipulation

4️⃣ Agential Risks

  • 4.1 Goal alignment failures
  • 4.2 Inner alignment failures
  • 4.3 Influence seeking
  • 4.4 Specification gaming and tampering
  • 4.5 Misaligned objectives

⭐️ Key features of the framework and associated paper:

  • Presents a hierarchical complex systems framework to model AI risk

  • Classifies AI impact and likelihood using original survey data from domain experts in the public and private sectors

  • Determines that the highest-impact risks include monopolistic race dynamics, AI alignment failures, and power-seeking behavior

  • Presents a novel exploratory modeling technique to characterize future scenarios and the associated risks

📄 Disclaimer: This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper; all credit goes to Kyle A. Kilian, Christopher J. Ventura, and Mark M. Bailey.

For full details, please refer to the original publication: https://doi.org/10.48550/arXiv.2211.03157

🔔 Further engagement 

👀→ View all the frameworks included in the AI Risk Repository 

🌐→ Explore the AI Risk Repository Website 

© 2025 MIT AI Risk Repository