Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction

January 16, 2025

What are the risks from AI?

This week we spotlight the eleventh risk framework included in the AI Risk Repository: 

Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023, August). Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction. Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society.

Reproduced from Sociotechnical harms of algorithmic systems: Scoping a taxonomy for harm reduction by Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G, published in AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, available at https://doi.org/10.1145/3600211.3604673.

Using a scoping review and reflexive thematic analysis of computing research on harms (n = 172), this paper presents an applied taxonomy of sociotechnical harms from algorithmic systems. 

The taxonomy consists of five major categories and 20 subcategories: 

  1. Representational Harms 
  • Stereotyping social groups 
  • Demeaning social groups 
  • Erasing social groups 
  • Alienating social groups 
  • Denying people the opportunity to self-identify
  • Reifying essentialist social categories 
  2. Allocative Harms 
  • Opportunity loss 
  • Economic loss 
  3. Quality of Service Harms 
  • Alienation 
  • Increased labour 
  • Service/benefit loss 
  4. Interpersonal Harms 
  • Loss of agency 
  • Tech-facilitated violence 
  • Diminished health and well-being 
  • Privacy violations 
  5. Social System Harms 
  • Information harms 
  • Cultural harms 
  • Civic and political harms 
  • Socio-economic harms 
  • Environmental harms 

Key features of the framework and associated paper:

  • Discusses each of the five major harm categories and their subcategories in detail, with examples of how technical components and social dynamics interact to produce them. 
  • Frames harms in terms of their impacts across micro, meso, and macro levels of society. 
  • Builds the final taxonomy on, and cross-references, existing taxonomies, classifications, and terminology. 

Disclaimer:

This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper and credit goes to Shelby, R., Rismani, S., Henne, K., Moon, A., Rostamzadeh, N., Nicholas, P., Yilla-Akbari, N., Gallegos, J., Smart, A., Garcia, E., & Virk, G. (2023). For the full details, please refer to the original publication: https://doi.org/10.1145/3600211.3604673

Further engagement 

View all the frameworks included in the AI Risk Repository 

Explore the AI Risk Repository Website 

© 2025 MIT AI Risk Repository