Governance of artificial intelligence: A risk and guideline-based integrative framework

July 16, 2025

What are the risks from AI?

This week we spotlight the nineteenth AI risk framework included in the AI Risk Repository:

Wirtz, B. W., Weyerer, J. C., & Kehl, I. (2022). Governance of artificial intelligence: A risk and guideline-based integrative framework. Government Information Quarterly, 39(4), 101685. https://doi.org/10.1016/j.giq.2022.101685

This paper presents a systematic taxonomy of six AI risk categories focused specifically on public sector governance. The taxonomy was developed through a systematic literature review that screened 1,471 initial records and ultimately included 16 studies.

1. Technological, Data, and Analytical Risks

The potential for loss of control over AI systems, including autonomous decision-making without human oversight, programming errors due to complexity or lack of expertise, and poor data quality or biases in training data that lead to system malfunctions.

2. Informational and Communicational Risks

Risk of AI-driven information manipulation, including targeted disinformation campaigns, computational propaganda, algorithmic censorship, and the creation of "filter bubbles" that restrict access to diverse information sources.

3. Economic Risks

Disruption of economic systems through widespread automation, including massive unemployment, loss of taxpayer base, organizational knowledge loss as AI systems replace human workers, and potential collapse of economic structures.

4. Social Risks

Technological unemployment leading to social unrest, privacy and security threats to individuals and society, growing resistance to AI adoption, and transformation of human-to-human interactions in potentially harmful ways.

5. Ethical Risks

AI systems lacking legitimate ethical foundations when making decisions that affect society, AI-based discrimination against certain population groups, and reproduction of human biases and prejudices through AI systems.

6. Legal and Regulatory Risks

Unclear accountability and liability frameworks when AI systems fail or cause harm, inadequate regulatory scope that misses important governance aspects, and the challenge of regulating rapidly evolving AI technologies.

Key features of the framework and associated paper:

  • Integrative approach: Unlike previous research that has examined AI risks and guidelines in isolation, this framework systematically links each AI risk category to specific governance guidelines through a four-layer conceptual model
  • Risk-oriented governance process: Includes a structured four-stage process (framing, assessment, evaluation, and guidance elaboration) that enables organizations to identify and address AI risks systematically
  • Implementation focus: Provides a seven-stage AI governance layer based on policy cycles, offering concrete steps for transforming guidelines into binding regulations and governance measures
  • Public sector emphasis: Specifically designed for public organizations and governments, addressing unique challenges like public value creation, citizen trust, and democratic legitimacy

⚠️ Disclaimer

This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper, and credit goes to its authors: Bernd W. Wirtz, Jan C. Weyerer, and Ines Kehl. For full details, please refer to the original publication: https://doi.org/10.1016/j.giq.2022.101685.

Further engagement 

View all the frameworks included in the AI Risk Repository 

Sign up for our project newsletter