Reproduced from AI Risk Profiles: A Standards Proposal for Pre-deployment AI Risk Disclosures by Sherman, E. & Eisenberg, I.W., published in AAAI '24: Proceedings of the 2024 AAAI Conference on Artificial Intelligence, available at https://doi.org/10.1609/aaai.v38i21.30348.
This paper presents a high-level taxonomy consisting of 9 AI risk categories:
Abuse & Misuse: The potential for intentional malicious use of AI systems, such as creating deepfakes, conducting automated cyberattacks, or implementing invasive surveillance.
Compliance: Risk of violating laws, regulations, and ethical guidelines (including copyright), which can result in legal penalties and reputational damage.
Environmental & Societal Impact: Encompasses broader effects like labor displacement, mental health impacts, manipulative technologies, and environmental concerns including carbon emissions from training.
Explainability & Transparency: Risk related to inability to understand/interpret AI decisions and lack of openness about data, algorithms, and decision-making processes.
Fairness & Bias: Risk of systematic disadvantage to certain groups through biased training data, algorithmic design, or deployment practices.
Long-term & Existential Risk: Speculative risks from advanced AI systems potentially harming human civilization through misuse or misalignment with human values.
Performance & Robustness: Risk of system failure in fulfilling intended purposes and lack of resilience to unusual or adverse inputs.
Privacy: Risk of infringing on individual privacy rights through data collection, data processing, and the conclusions drawn from that data.
Security: Vulnerabilities that could compromise system integrity, availability, or confidentiality, with special concern for model weight leakage.
The authors selected these risk categories because they subsume known risks, a choice that keeps the taxonomy both comprehensive and flexible.
Key features of the framework and associated paper:
Proposes a new risk profiling standard designed to “guide downstream decision-making, including triaging further risk assessment, informing procurement and deployment, and directing regulatory frameworks”.
This risk profiling standard uses the authors’ high-level taxonomy of AI risks as its foundation.
The authors provide practical examples by applying their framework to evaluate several prominent AI systems (including Claude, GPT APIs, Microsoft Copilot, GitHub Copilot, and Midjourney), making it immediately useful for practitioners; a hypothetical sketch of what such a profile might look like appears after this list.
The methodology bridges technical and non-technical stakeholders by creating a "lingua franca" for discussing AI risks, making it valuable for everyone from developers to business leaders to regulators.
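To make the notion of a risk profile more concrete, below is a minimal illustrative sketch in Python. It assumes a simple representation in which each of the nine taxonomy categories is assigned a qualitative risk level and a short rationale; the class names, the level scale, and the example system and values are hypothetical and are not taken from the paper's actual disclosure format.

```python
from dataclasses import dataclass, field

# The nine high-level risk categories from the taxonomy summarized above.
RISK_CATEGORIES = [
    "Abuse & Misuse",
    "Compliance",
    "Environmental & Societal Impact",
    "Explainability & Transparency",
    "Fairness & Bias",
    "Long-term & Existential Risk",
    "Performance & Robustness",
    "Privacy",
    "Security",
]

# Hypothetical qualitative scale; the paper's own scale may differ.
LEVELS = ("low", "medium", "high", "not assessed")


@dataclass
class CategoryAssessment:
    """One entry in a pre-deployment risk profile (illustrative only)."""
    level: str
    rationale: str = ""

    def __post_init__(self) -> None:
        if self.level not in LEVELS:
            raise ValueError(f"unknown risk level: {self.level!r}")


@dataclass
class RiskProfile:
    """Hypothetical container mapping each taxonomy category to an assessment."""
    system_name: str
    assessments: dict[str, CategoryAssessment] = field(default_factory=dict)

    def add(self, category: str, level: str, rationale: str = "") -> None:
        if category not in RISK_CATEGORIES:
            raise ValueError(f"unknown category: {category!r}")
        self.assessments[category] = CategoryAssessment(level, rationale)

    def summary(self) -> str:
        # Render every category, defaulting to "not assessed" for gaps.
        lines = [f"Risk profile for {self.system_name}:"]
        for category in RISK_CATEGORIES:
            entry = self.assessments.get(category, CategoryAssessment("not assessed"))
            line = f"  {category}: {entry.level}"
            if entry.rationale:
                line += f" ({entry.rationale})"
            lines.append(line)
        return "\n".join(lines)


if __name__ == "__main__":
    # Entirely made-up example values for a fictional system.
    profile = RiskProfile("ExampleChatAssistant")
    profile.add("Privacy", "medium", "retains user prompts for model improvement")
    profile.add("Security", "high", "broad internal access to model weights")
    print(profile.summary())
```

This structure is only meant to convey how the taxonomy can serve as a fixed set of categories against which any system is assessed; refer to the original paper for the actual profile format used in its example disclosures.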
Disclaimer:
This summary highlights a paper included in the MIT AI Risk Repository. We did not author the paper and credit goes to Eli Sherman and Ian W. Eisenberg of Credo AI. For the full details, please refer to the original publication: https://doi.org/10.1609/aaai.v38i21.30348.