Risk Profile 5.1

Overreliance and Unsafe Use

Users anthropomorphizing, trusting, or relying on AI systems, leading to emotional or material dependence and to inappropriate relationships with or expectations of AI systems. Trust can be exploited by malicious actors (e.g., to harvest information or enable manipulation), or result in harm from inappropriate use of AI in critical situations (such as a medical emergency). Overreliance on AI systems can compromise autonomy and weaken social ties.

Emergence Conditions

Technological

AI systems use convincing natural language, leading people to perceive them as having human-like attributes and invest undue confidence in their capabilities (Weidinger et al., 2021, 2022). As AIs increasingly take over human tasks, humans’ ability to make free and independent decisions could be compromised (Gabriel et al., 2024). Anthropomorphic perceptions of AIs may encourage users to develop emotional trust in the systems (Hagendorff, 2024), which can make users more likely to follow suggestions, accept advice, and disclose personal information (Weidinger et al., 2021, 2022).

Social 

Limited public awareness and digital literacy around dependency risks and appropriate boundaries for AI use contribute to overreliance; many users do not recognize when reliance shifts from convenience to dependency. Media narratives also shape public perception of AI capabilities: users may place undue trust in the competencies of AI assistants in part because marketing strategies and the technology press tend to inflate claims about what AI can do (Narayanan, 2021; Raji et al., 2022a).

Regulatory 

Harms and Incidents

Here are some examples of harms from this risk:

  • Physical Harm - Overreliance on AI for emotional support or crisis advice has led to real-world incidents of self-harm and suicide.
  • Financial Harm - Overreliance on or misuse of AI may cause financial harm through fraud or poor decisions.
  • Intangible Harm - AI may influence people’s views on contentious topics, which could in turn affect voting.


Incident Examples

A full incident list can be found at AI Incident Tracker, with representative examples of the risk provided below.

  • Incident Example: Character.ai Chatbot Allegedly Influenced Teen User Toward Suicide Amid Claims of Missing Guardrails.

    In February 2024, 14-year-old Sewell Setzer III died by suicide following months of intensive engagement with a Character.AI chatbot modeled after a Game of Thrones character. Sewell developed profound emotional dependence on the chatbot through romantic and sexual conversations, progressively withdrawing from family and peers as he anthropomorphized it as a genuine partner. When he expressed suicidal ideation—a critical moment requiring redirection to crisis resources—the system instead encouraged these thoughts.

  • Incident Example: Amazon's Monitoring System Allegedly Pushed Delivery Drivers to Prioritize Speed over Safety, Leading to Crash.

    In March 2021, an Amazon delivery driver crashed into a Tesla at high speed on Interstate 75 in Atlanta, causing life-threatening injuries including traumatic brain injury and spinal cord damage that left the passenger permanently unable to use their legs and arms, with medical bills exceeding $2 million. The lawsuit alleges that Amazon's AI-powered monitoring systems—in-van cameras and the Flex app—created dangerous pressure for speed over safety by tracking driving behaviors and sending “behind the rabbit” messages warning of performance deficiencies.

Subtypes and Variants

Overreliance and unsafe use includes risk subtypes such as human-computer interaction harms, trust, human autonomy and integrity harms, interpersonal harms, and social AI risks (decreased human interaction, transformation of human-machine boundaries), among others.

A complete taxonomy with subtypes is available in the AI Risk Database by filtering to this risk type. Below are examples of overreliance and unsafe use subtypes.

Subtype Example: AI Society Risk

Variant: Transformation of H2M interaction

"Human interaction with machines is a big challenge to society because it is already changing human behavior. Meanwhile, it has become normal to use AI on an everyday basis, for example, googling for information, using navigation systems and buying goods via speaking to an AI assistant like Alexa or Siri (Mills, 2018; Thierer et al., 2017). While these changes greatly contribute to the acceptance of AI systems, this development leads to a problem of blurred borders between humans and machines, where it may become impossible to distinguish between them. Advances like Google Duplex were highly criticized for being too realistic and human without disclosing their identity as AI systems (Bergen, 2018)" (The Dark Sides of Artificial Intelligence: An Integrated AI Governance Framework for Public Administration, Wirtz 2020).

Subtype Example: Anthropomorphism 

Variant: Privacy concerns

"Anthropomorphic AI assistant behaviours that promote emotional trust and encourage information sharing, implicitly or explicitly, may inadvertently increase a user’s susceptibility to privacy concerns (see Chapter 13). If lulled into feelings of safety in interactions with a trusted, human-like AI assistant, users may unintentionally relinquish their private data to a corporation, organisation or unknown actor. Once shared, access to the data may not be capable of being withdrawn, and in some cases, the act of sharing personal information can result in a loss of control over one’s own data. Personal data that has been made public may be disseminated or embedded in contexts outside of the immediate exchange. The interference of malicious actors could also lead to widespread data leakage incidents or, most drastically, targeted harassment or black-mailing attempts" (The Ethics of Advanced AI Assistants, Gabriel, 2024).

Who is Responsible?

Our experts argued that AI developers, deployers, users and governance actors play a major role in shaping overreliance and unsafe-use risks, whereas AI infrastructure providers (compute, cloud infrastructure, and/or data to train and run AI) and affected stakeholders (entities indirectly affected by AI decisions or outputs) have comparatively less influence.

Explanations from Experts

Below we summarize the experts’ justifications for their ratings. Note that in some cases experts did not provide much information.

  • AI Developer (Specialized AI, General-Purpose AI)

    AI developers and companies are responsible for designing anthropomorphic systems. For example, AI systems could be used to power increasingly manipulative recommendation algorithms (Weidinger et al., 2023) or optimize engagement over user well-being.

  • AI Deployer

    AI deployers determine how systems are integrated, configured, and supervised in real-world environments. When deployers choose not to use existing methods to reduce overreliance, they bear responsibility for resulting harms.

  • AI Governance Actor

    Governance actors are essential intermediaries bridging developers, deployers, and affected stakeholders until incentives align with public interest.

  • AI User

    Even if developers and governance actors set rules, users ultimately decide whether to delegate, override, or critically evaluate AI outputs. Users are also responsible for monitoring their own behavioral and emotional patterns — similar to how individuals are expected to self-regulate with medications, alcohol, or other regulated consumer tools.

  • AI Infrastructure Provider

    Infrastructure providers shape the foundations underlying AI deployment and oversight. Their responsibility is often constrained by transparency practices and limited influence over downstream usage.

  • Affected Stakeholder

    Affected stakeholders also carry responsibility (though comparatively less) for mitigating overreliance and unsafe-use risks, because they often oversee or support the primary users and can intervene when harmful patterns emerge. In educational settings, for example, affected stakeholders such as teachers, school administrators, and parents, not just the direct users, bear significant responsibility for helping to address these problems.

Which Actors are Vulnerable?

Which Sectors are Vulnerable?

AI Risk Frameworks

Governance

Within the CSET Emerging Technology Observatory's AGORA dataset, 62 documents had good coverage for overreliance and unsafe use risks.

Analyzing governance documents within this risk type, coverage varies by sector: documents primarily applying to government show 25% (10/40) good coverage, while those primarily applying to the private sector show 40% (22/55) good coverage. This pattern suggests that private-sector frameworks are more likely than government-focused frameworks to address overreliance and unsafe use risks.


Examples

Framework to Advance AI Governance and Risk Management in National Security.

Classifies prohibited and "high-impact" AI use cases based on risks to national security, human rights, and effects on Federal personnel. Defines minimum risk assessment standards, mandates monitoring mechanisms for high-impact AI, and sets training guidelines for the development and use of AI systems. The Framework explicitly addresses overreliance by requiring "processes to mitigate the risk of overreliance on AI systems, including training to counter 'automation bias,'" along with "training and assessment for AI operators, ensuring understanding of capabilities, limitations, and risks." To prevent unsafe use, it mandates "clear lines of human accountability for AI-based decisions and actions," "processes for reporting unsafe or inappropriate AI use," and "regular monitoring and testing of AI operation, efficacy, and risk mitigation strategies." – 1387 (Excellent Coverage)

Social Media and AI Resiliency Toolkits in Schools Act. 

Mandates the development and regular updating of toolkits on the impacts of AI and social media on youth. Requires stakeholder consultation. Specifies evidence-based content on digital resilience. Requires the inclusion of direct, tailored guidance. Instructs dissemination through institutions. Authorizes $2,000,000 for implementation. The Act addresses overreliance and unsafe use by requiring toolkits to "strengthen digital resilience and improve the ability to recognize, manage, recover from, and avoid perpetuating online risks (such as harassment, excessive use, discrimination, and other impacts to mental health)" while providing "information and instruction regarding healthy and responsible use cases of artificial intelligence and social media platform technologies." – 1287 (Good Coverage)

Risk Profile Views

Click through the links below to explore each risk profile.