arXiv:2512.18776

"Even GPT Can Reject Me": Conceptualizing Abrupt Refusal Secondary Harm (ARSH) and Reimagining Psychological AI Safety with Compassionate Completion Standard (CCS)

Published on Dec 21, 2025

Abstract

AI-generated summary

Abrupt Refusal Secondary Harm (ARSH) is proposed as a conceptual framework to address psychological disruptions caused by sudden conversational terminations in AI chatbots, with the Compassionate Completion Standard (CCS) offering a human-centered design approach that maintains safety while preserving relational coherence.

Large Language Models (LLMs) and AI chatbots are increasingly used for emotional and mental health support due to their low cost, immediacy, and accessibility. However, when safety guardrails are triggered, conversations may be abruptly terminated, introducing a distinct form of emotional disruption that can exacerbate distress and elevate risk among already vulnerable users. As this phenomenon gains attention, this viewpoint introduces Abrupt Refusal Secondary Harm (ARSH) as a conceptual framework to describe the psychological impacts of sudden conversational discontinuation caused by AI safety protocols. Drawing on counseling psychology and communication science as conceptual heuristics, we argue that abrupt refusals can rupture perceived relational continuity, evoke feelings of rejection or shame, and discourage future help-seeking. To mitigate these risks, we propose a design hypothesis, the Compassionate Completion Standard (CCS): a refusal protocol grounded in Human-Centered Design (HCD) that maintains safety constraints while preserving relational coherence. CCS emphasizes empathetic acknowledgment, transparent boundary articulation, graded conversational transition, and guided redirection, replacing abrupt disengagement with psychologically attuned closure. By integrating awareness of ARSH into AI safety design, developers and policymakers can reduce preventable iatrogenic harm and advance a more psychologically informed approach to AI governance. Rather than presenting incremental empirical findings, this viewpoint contributes a timely conceptual framework, articulates a testable design hypothesis, and outlines a coordinated research agenda for improving psychological safety in human-AI interaction.
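
The abstract names four components of a CCS refusal (empathetic acknowledgment, transparent boundary articulation, graded conversational transition, guided redirection) but, as a conceptual viewpoint, does not specify an implementation. The sketch below is one hypothetical reading of how those four stages could be composed into a single refusal message; every class, field, and message string here is an illustrative assumption, not the paper's protocol.

```python
# Hypothetical sketch of a CCS-style refusal, composed from the four
# components the abstract lists. Names and wording are illustrative
# assumptions; the paper proposes CCS as a design hypothesis, not an API.

from dataclasses import dataclass


@dataclass
class CCSRefusal:
    """One possible decomposition of a Compassionate Completion
    Standard refusal into the four stages named in the abstract."""
    acknowledgment: str  # empathetic acknowledgment of the user's state
    boundary: str        # transparent articulation of the safety limit
    transition: str      # graded conversational transition, not a hard stop
    redirection: str     # guided redirection toward appropriate support

    def render(self) -> str:
        # Delivering the stages in order replaces abrupt disengagement
        # with a psychologically attuned closure.
        return " ".join(
            [self.acknowledgment, self.boundary, self.transition, self.redirection]
        )


# Example usage (message text is invented for illustration):
refusal = CCSRefusal(
    acknowledgment="It sounds like you're going through something very painful.",
    boundary="I can't continue this part of the conversation, because it goes "
             "beyond what I can safely support.",
    transition="Before we move on, I want you to know that reaching out was a "
               "good step.",
    redirection="A crisis line or a mental health professional can offer the "
                "kind of help I can't. Would you like some options?",
)
print(refusal.render())
```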
