The Risks Associated with Artificial General Intelligence: A Systematic Review

December 17, 2024

What are the risks from AI?

This week we spotlight the eighth risk framework included in the AI Risk Repository: 

McLean, S., Read, G. J. M., Thompson, J., Baber, C., Stanton, N. A., & Salmon, P. M. (2023). The risks associated with Artificial General Intelligence: A systematic review. Journal of Experimental & Theoretical Artificial Intelligence, 35(5), 649–663. 

This study systematically reviews the literature on the risks associated with Artificial General Intelligence (AGI), following PRISMA guidelines. 

In total, the authors included and synthesised 16 articles (listed in Table 1 of the paper), identifying six distinct categories of AGI risk: 

  1. AGI removing itself from the control of human owners/managers
  2. AGIs being given or developing unsafe goals
  3. Development of unsafe AGI
  4. AGIs with poor ethics, morals and values
  5. Inadequate management of AGI
  6. Existential risks

⭐️ Key features of the framework and associated paper:

Summarises the risks and risk controls discussed in the AGI risk literature 

Summarises the range of analysis methods used in the AGI risk literature

  • The most common method by far appears to be ‘philosophical discussion’

Discusses current limitations of the AGI risk literature, including: 

  • Few peer-reviewed articles
  • Limited risk modelling
  • Unclear definitions
  • No standard terminology

📄 Disclaimer

This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper; full credit goes to McLean, S., Read, G. J. M., Thompson, J., Baber, C., Stanton, N. A., & Salmon, P. M. (2023). For full details, please refer to the original publication: https://doi.org/10.1080/0952813X.2021.1964003

🔔 Further engagement 

👀→ View all the frameworks included in the AI Risk Repository 

🌐→ Explore the AI Risk Repository Website 

© 2025 MIT AI Risk Repository