Addressing the Emotional and Ethical Risks of Using AI Chatbots in Psychotherapy

Authors

  • Abeera Saleem Mughal, Jinnah Sindh Medical University
  • Arwa Jabeen, Jinnah Sindh Medical University
  • Abiha Fatima, Jinnah Sindh Medical University
  • Hafiz Shahbaz Zahoor, Quaid-e-Azam Medical College, Pakistan, https://orcid.org/0009-0005-7692-6795

DOI:

https://doi.org/10.63501/29erxk86

Keywords:

Artificial Intelligence, Artificial Intelligence in Healthcare

Abstract

Artificial Intelligence (AI) chatbots have the potential to support mental health therapy by introducing novel approaches to diagnosis and treatment. AI psychotherapy is being tested as a possible substitute for, or supplement to, conventional human-led therapy, and it may improve the availability and customization of mental health services [1]. Despite their promise, AI chatbots lack core therapeutic competencies such as emotional empathy, ethical judgment, and contextual understanding. They may struggle to meaningfully integrate personal history or respond appropriately to emotional vulnerability. A key concern is accountability, particularly when a chatbot provides misleading information or fails to recognize psychological distress. Despite rapid AI development, mental health clinicians have been slow to adopt these tools; some psychiatrists who value interpersonal interaction with patients may be hesitant to use them, suggesting slow diffusion of this innovation within the mental health sector [2]. Because AI chatbots often mimic human conversation, users may develop unrealistic expectations of their capabilities, such as anticipating sophisticated conversation or emotional support that yields new insights. Therapists and developers must therefore clearly communicate the chatbot's limitations and intended role in therapy, and human therapists should interpret and contextualize AI-generated insights within the therapeutic process. To manage user expectations effectively, an AI system should state the scope of its role at the start of any encounter, including which goals it can and cannot achieve [3]. A responsible future for digital mental health therefore requires establishing trust by ensuring data privacy, security, and transparency in AI-driven decisions, together with evidence-based and effective regulatory supervision to ensure quality.
Usability, design, and ethical alignment with users' interests will all be vital, while liability frameworks and accreditation standards ensure accountability as the field evolves [4]. In summary, while AI chatbots offer promising advances in access to mental health care, their integration into psychotherapy should be approached with caution. Their limitations in emotional comprehension, accountability, and therapeutic nuance underline the continued importance of human clinicians. Safe and ethical implementation requires clear communication of AI limitations, strong regulatory oversight, and alignment with user safety and values. A collaborative model, in which AI assists but does not replace human therapists, holds the greatest promise for ethically shaping the future of digital mental health.

References

[1] Jesudason D, Bacchi S, Bastiampillai T. Artificial intelligence (AI) in psychotherapy: A challenging frontier. Australasian Psychiatry. 2025 Jan 6:10398562251346075.

[2] Pham KT, Nabizadeh A, Selek S. Artificial intelligence and chatbots in psychiatry. Psychiatric Quarterly. 2022 Mar;93(1):249-53.

[3] Sedlakova J, Trachsel M. Conversational artificial intelligence in psychotherapy: a new therapeutic tool or agent?. The American Journal of Bioethics. 2023 May 4;23(5):4-13.

[4] Löchner J, Carlbring P, Schuller B, Torous J, Sander L. Digital interventions in mental health: An overview and future perspectives. Internet Interventions. 2025 Apr 2:100824.

Published

2025-06-25

Issue

Section

Brief Report / Short Communication
