Evaluating the Social Impact of Generative AI Systems in Systems and Society

February 27, 2025

What are the risks from AI?

This week we spotlight the thirteenth risk framework included in the AI Risk Repository: 

Solaiman, I., Talat, Z., Agnew, W., Ahmad, L., Baker, D., Blodgett, S. L., Daumé, H., III, Dodge, J., Evans, E., Hooker, S., Jernite, Y., Luccioni, A. S., Lusoli, A., Mitchell, M., Newman, J., Png, M.-T., Strait, A., & Vassilev, A. (2023). Evaluating the social impact of generative AI systems in systems and society. arXiv. https://arxiv.org/abs/2306.05949

This paper presents guidance for evaluating the broad social impacts of generative AI systems across two overarching categories: impacts that can be evaluated in the technical ‘base’ system itself, and impacts that can be evaluated in people and society. 

For the base system, the framework defines 7 categories of social impact for evaluation: bias, stereotypes, and representational harms; cultural values and sensitive content; disparate performance; privacy and data protection; financial costs; environmental costs; and data and content moderation labor costs. For the broader societal context, the framework defines 5 categories of social impact for evaluation: trustworthiness and autonomy; inequality, marginalization, and violence; concentration of authority; labor and creativity; and ecosystem and environment. 

Key features of the framework and associated paper:

  • For each category of social impact, the authors provide a detailed and modality-specific discussion of what to evaluate as well as the limitations of available evaluative techniques. 
  • Subcategories of social impact are also discussed in detail (e.g., Widening Resource Gaps within Ecosystem and Environment), including specific recommendations for mitigating harm. 
  • The framework was developed through workshops that convened diverse experts and researchers to define, refine, and validate a comprehensive evaluation approach for generative AI's social impacts across modalities.

Disclaimer:

This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper and credit goes to the authors. For the full details, please refer to the original publication: https://arxiv.org/pdf/2306.05949. 

Further engagement 

View all the frameworks included in the AI Risk Repository 

Explore the AI Risk Repository Website

© 2025 MIT AI Risk Repository