
Do AI Chatbots Incite Harmful Behaviours in Mental Health Patients?

Published online by Cambridge University Press: 01 August 2024

Harikrishna Patel*
Southern Health NHS Foundation Trust, Southampton, United Kingdom
Faiza Hussain
Southern Health NHS Foundation Trust, Southampton, United Kingdom
*Presenting author.

Abstract

Aims

According to the Global Burden of Disease study, the contribution of mental illness to total Disability-Adjusted Life Years (DALYs) is increasing. As demand for mental health services grows, technological advances are being deployed to improve the delivery of care and lower costs.

The emergence of Artificial Intelligence (AI) technology in mental health and companionship is an evolving topic of discussion, and debate about the use of AI in managing mental health problems is increasing. As AI technology and its use grow, it is vital to consider potential harms and ramifications.

There has been very limited discussion of the use of chatbots and related AI to commit crime, especially by those suffering from mental illness. AI could serve as an effective tool to misguide a vulnerable person experiencing a mental health problem, for example by encouraging them to commit a serious offence. There is evidence that some of the most widely used AI chatbots tend to accentuate negative feelings their users already have, potentially reinforcing vulnerable thoughts and leading to concerning consequences.

The objective of this study is to review the existing evidence for harmful effects of AI chatbots on people with serious mental illness (SMI).

Methods

We conducted a review of the existing evidence for relevant studies across five search sources: four bibliographic databases (PsycINFO, EMBASE, PubMed, and OVID) and the search engine Google Scholar, supplemented by relevant grey literature. Studies were eligible if they explored the role of AI and related technology in causing harm to those with SMI.

Results

Because existing data on the association with crime were very limited, initial searches led us to set the scope of the review to the harmful effects of AI use in mental health and psychiatry more broadly, rather than the association with crime alone.

Conclusion

Whilst current AI technology has shown potential in mental healthcare, it is important to acknowledge its limitations. At present, the evidence base for the benefits of AI chatbots in mental healthcare is only beginning to be established, and not enough is known or documented about the harmful effects of this technology. Nevertheless, increasing numbers of cases are emerging in which vulnerable mental health patients have been negatively influenced by AI technology. The use of AI chatbots raises various ethical concerns, which are often magnified in people experiencing SMI. Further research will be valuable in understanding the ramifications of AI in psychiatry. It will also help guide the developers of this important and emerging technology to meet recognised ethical frameworks, thereby safeguarding vulnerable users.

Type
Research
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press on behalf of the Royal College of Psychiatrists

Footnotes

Abstracts were reviewed by the RCPsych Academic Faculty rather than by the standard BJPsych Open peer review process and should not be quoted as peer-reviewed by BJPsych Open in any subsequent publication.
