Generating Harms: Generative AI’s Impact and Paths Forward

February 24, 2026

What are the risks from AI? 

This week we spotlight the 31st risk framework included in the AI Risk Repository: Electronic Privacy Information Center (2023). Generating harms: Generative AI’s impact and paths forward. https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/

Paper focus

This paper outlines the harms that could result from the rapid adoption of generative AI without adequate safeguards. It was prepared by the Electronic Privacy Information Center (EPIC), a non-profit research and advocacy center dedicated to protecting privacy, freedom of expression, and democratic values.

Included risk categories

This paper draws on major taxonomies of AI harms to present an overview of nine common categories of harm caused by generative AI:

  1. Physical (i.e., bodily injury, death)
  2. Economic (i.e., monetary loss)
  3. Reputational (i.e., loss of reputation, stigmatization)
  4. Psychological (i.e., emotional distress, disturbance, or other negative mental responses)
  5. Autonomy (i.e., influencing individual choices such as through coercion or manipulation)
  6. Discrimination (i.e., increasing inequality so that certain groups of people are disadvantaged)
  7. Relationship (i.e., damage to personal or professional relationships)
  8. Loss of opportunity (i.e., preventing access to employment, educational, or other opportunities)
  9. Dignitary (i.e., diminishing individuals’ sense of self and dignity)

These are accompanied by real-world examples of harms caused by generative AI, including suicide, impersonation, deepfakes, defamation, sexualization, threats of physical harm, misinformation, copyright infringement, labor disputes, and data breaches.

Key features of the framework and associated paper

  • The paper discusses topics relating to harms from generative AI (e.g., the capacity for information manipulation at scale, the prioritization of profits over privacy) and provides recent documented examples, case studies, and interventions involving laws, regulations, and industry practices.

⚠️ Disclaimer: This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper; credit goes to the Electronic Privacy Information Center (EPIC). For full details, please refer to the original publication: https://epic.org/documents/generating-harms-generative-ais-impact-paths-forward/

Further engagement 

View all the frameworks included in the AI Risk Repository 

Sign up for our project Newsletter

Featured blog content