
Mental health chatbots and their technical features: A systematic review of reviews and a thematic analysis

Published online by Cambridge University Press:  03 February 2026

Mohsen Khosravi*
Affiliation:
Social Determinants of Health Research Center, Birjand University of Medical Sciences, Birjand, Iran
Reyhane Izadi
Affiliation:
School of Health Management and Information Sciences, Shiraz University of Medical Sciences, Shiraz, Iran
Corresponding author: Mohsen Khosravi; Email: mohsenkhosravi@live.com

Abstract

Mental health is a global concern, and mobile applications such as chatbots offer a partial solution by delivering services through various forms of communication. This study aimed to identify existing mental health chatbots and their technical features through a systematic review of review studies published from 2000 to 2025. A search was performed across PubMed, Scopus, ProQuest and the Cochrane Database of Systematic Reviews. The CASP (Critical Appraisal Skills Programme) checklist was used to assess the quality of the studies, and Braun and Clarke's approach was then applied for thematic analysis of the data. The search yielded 2,921 records, of which 10 were duplicates and were removed. After screening for relevance and eligibility, 33 papers met all the requirements. The mean quality score of the included studies was 13.36 (standard deviation = 1.36). The studies had a moderate risk of bias, as they mostly had a clear question, searched for the right type of papers, included all relevant papers and reported the results precisely. The analysis covered 138 mental health chatbots, categorized according to five distinct attributes: the disorder they target, their input and output modalities, the platform they operate on and their method of generating responses. The findings emphasize the need to design chatbots that suit patients' preferences and needs and indicate that the digital divide within societies should be taken into account when designing and producing chatbots for mental health services. Although mental health chatbots can assist underserved communities, ethical concerns must be addressed before their deployment.

Information

Type
Review
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press or the rights holder(s) must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Impact statement

This paper presents a detailed overview of existing mental health chatbots and their technical specifications, introduces a novel synthesis of their strengths and limitations and offers fresh implications for clinical applications and developers of mental health chatbots.

Introduction

According to the World Health Organization, mental health is defined as a state of well-being in which an individual can realize their own potential, manage the normal stresses of life, work productively and fruitfully and make a positive contribution to their community. This definition emphasizes the importance of an individual's ability to function effectively in society and to lead a fulfilling life: mental health is not merely the absence of mental illness but a state of overall well-being (World Health Organization, 2014). Mental disorders are a global concern, affecting 29% of individuals and causing disability, with an economic burden projected to cost the global economy US $16 trillion between 2011 and 2030 (Steel et al., 2014; Whiteford et al., 2015). Notably, depression and anxiety are highly prevalent across the globe, underscoring the need for robust and effective therapeutic strategies (Khosravi and Azar, 2024a).

Globally, 70% of people with mental illness receive no formal treatment due to low perceived need, perceived stigma or a shortage of mental health professionals, particularly in rural and low-income areas (Wang et al., 2007; Thomas et al., 2009; Conner et al., 2010; Mojtabai et al., 2011). There is a shortage of mental health resources, funding and literacy (Vaidyam et al., 2019). This is particularly pronounced in low- and middle-income countries, where there are only 0.1 psychiatrists per 1,000,000 people, compared to 90 in high-income countries (Murray et al., 2012; Oladeji and Gureje, 2016). Furthermore, mental health services reach only 15% and 45% of those in need in developing and developed countries, respectively (Hester, 2017).

The need for improved mental health services has grown, but fulfilling these needs has become challenging and expensive due to a scarcity of resources (Jones et al., 2014). Therefore, innovative solutions are required to address the resource shortage and encourage self-care among patients. In this regard, mobile applications present a partial solution to global mental health issues (Chandrashekar, 2018; Khosravi and Azar, 2024b).

Chatbots, in particular, are one of the primary mobile applications utilized for mental health purposes (Abd-Alrazaq et al., 2019; Khosravi et al., 2024a). Chatbots are software applications that can mimic human behavior and perform specific tasks by engaging in intelligent conversations with users (Adamopoulou and Moussiades, 2020). These conversational agents utilize text and speech recognition to interact with users (Nadarzynski et al., 2019).

Chatbots employ various forms of communication, including spoken, written and visual languages (Valliammai, 2017). Over the past decade, the use of chatbots has increased significantly and has become widespread in areas such as mental health (Abd-Alrazaq et al., 2019). It is anticipated that chatbots will play a crucial role in addressing the shortage of mental health care (Palanica et al., 2019). Chatbots can facilitate interactions with individuals who may be hesitant to seek mental health advice due to stigmatization and provide greater conversational flexibility (Radziwill and Benton, 2017). Moreover, chatbots have demonstrated considerable promise in the delivery of healthcare services, particularly for individuals who are inclined to avoid specific services due to cultural or religious values, or personal factors such as fear or stigma (Khosravi et al., 2024b).

Due to the innovative technical features of chatbot platforms, there is a substantial need to investigate and gather data on the characteristics of chatbots. This information is crucial for researchers, producers, administrators and policymakers in this field. Researchers may draw on these data to generate in-depth, evidence-based evaluations of chatbot performance and to elaborate on their functional attributes. Producers can use the information on the technical strengths and weaknesses of different chatbots to benchmark their products and to plan targeted enhancements. Policymakers and service managers can also employ these findings to design and implement strategies for integrating chatbots into mental health service provision within their respective organizations.

To the best of our knowledge, no systematic review of reviews has been conducted with the aim of examining the technical features and characteristics of chatbots in mental health. Such an approach would yield a comprehensive catalog of chatbots and their technical characteristics as reported in previous reviews. Therefore, this paper presents a valuable and innovative approach to fill the existing gap in the literature through the provision of a thematic analysis on the existing chatbots and their technical features within mental health services.

Methods

This paper adopted a qualitative approach and conducted a systematic review of the review studies within the literature published from 2000 to 2025. The main objective of this study was to identify the existing chatbots and their technical features in the field of mental health services. A systematic review of reviews, or umbrella review, is generally described in methodological guidance as addressing a well-defined yet relatively broad field of research or overarching theme; the topic is expected to be sufficiently expansive to encompass multiple existing systematic reviews (Belbasis et al., 2022; Abdellatif et al., 2025).

Data collection and search strategy

A systematic search was conducted across several databases, including PubMed, Scopus, ProQuest and the Cochrane Database of Systematic Reviews, to identify existing literature on the topic. The search terms were categorized into two domains: Mental health and Chatbot. Broad terms were initially used to enhance sensitivity, and synonyms were incorporated using the “OR” operator. The “AND” operator was used to ensure specificity and minimize irrelevant studies. The search was conducted on December 13, 2025, and the search strategy is presented in Table 1.
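
As a minimal illustration of the two-domain Boolean structure described above, the following Python sketch composes a query of the kind summarized in Table 1; the terms shown are assumed examples for demonstration only, not the exact strategy.

```python
# Illustrative sketch of the two-domain Boolean search structure.
# The exact terms are those listed in Table 1; the ones below are assumed.
mental_health_terms = ["mental health", "mental disorder", "depression", "anxiety"]
chatbot_terms = ["chatbot", "conversational agent", "chatterbot"]

def build_query(domain_a, domain_b):
    """Join synonyms with OR within each domain, then AND the two domains."""
    block_a = " OR ".join(f'"{term}"' for term in domain_a)
    block_b = " OR ".join(f'"{term}"' for term in domain_b)
    return f"({block_a}) AND ({block_b})"

print(build_query(mental_health_terms, chatbot_terms))
# ("mental health" OR "mental disorder" OR "depression" OR "anxiety")
#   AND ("chatbot" OR "conversational agent" OR "chatterbot")
```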

Table 1. The search strategy utilized to conduct the systematic review

Inclusion and exclusion criteria

The inclusion criteria comprised review articles published in English from 2000 to 2025 that focused on mental health chatbots and their technical features. The exclusion criteria were articles that did not discuss a relevant chatbot in the mental health domain, whose title, abstract or full text did not refer to a chatbot intervention in mental health services or whose full text was inaccessible. Moreover, publication types other than journal articles were also excluded from the study.

Selection and extraction of data

The systematic review was carried out in several stages. In the first stage, the authors independently reviewed all articles retrieved from the databases multiple times. In the second stage, the authors examined the abstracts of the selected articles and then evaluated the full text of the chosen articles in depth. To ensure the comprehensiveness of the review, the authors also checked the references cited in the articles. In the final stage, the authors selected the articles that met the quality criteria for inclusion in the study. Moreover, the authors used a form to extract relevant information from the selected articles, such as the authors' details, journal name, methodology, aim, results and the mental health chatbots mentioned. The extracted information was then summarized and synthesized using MAXQDA 12 software. The authors independently verified the results at each stage to ensure reliability and minimize bias.

Quality appraisal of final articles using the CASP checklist

The CASP (Critical Appraisal Skills Programme) appraisal checklist, which covers various study designs, was used to assess the quality of the selected studies. The checklist assists in evaluating the validity, relevance, bias and applicability of research studies. It consists of 10 questions that assess articles against various criteria, such as validity of results, quality of the study and applicability of results (Critical Appraisal Skills Programme, 2018).

We applied a scoring system in which each question received 2 points for "yes", 1 point for "cannot tell" and 0 points for "no". The highest possible score was 16, which corresponded to three quality levels: low, medium and high. We only included articles with a score of 9 or more. We calculated the total score for each study and presented the score for each study type in the results section. After selecting and appraising the articles using the CASP checklist, only the final articles were included in our review.
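
As a minimal sketch, the scoring rule above can be expressed as follows; the answer set is hypothetical, and the eight scored questions reflect the reported maximum of 16 points.

```python
# Sketch of the CASP-based scoring rule described above:
# yes = 2, cannot tell = 1, no = 0; studies scoring below 9 are excluded.
POINTS = {"yes": 2, "cannot tell": 1, "no": 0}

def casp_score(answers):
    """Total quality score for one study from its per-question answers."""
    return sum(POINTS[answer] for answer in answers)

# Hypothetical answer set; eight scored questions give a maximum of 16.
answers = ["yes", "yes", "cannot tell", "yes", "yes", "cannot tell", "yes", "yes"]
score = casp_score(answers)
print(score, score >= 9)  # 14 True
```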

Data analysis

In this phase of the research, we applied Braun and Clarke's method for thematic analysis, which consists of six steps: familiarization, coding, generating themes, reviewing themes, defining and naming themes and writing up (Braun and Clarke, 2023). The objective of this phase was to discover the existing chatbots and their technical features in the mental health domain.

First, we familiarized ourselves with the topic and the context of the research by reading the content about mental health chatbots and their technical features within the final articles. Second, we coded the data based on the technical features mentioned within the final articles. Third, we generated subthemes and themes from the coded data by categorizing and grouping the codes. Fourth, we reviewed the generated themes multiple times to ensure the validity and reliability of the process. Fifth, we defined and named the themes and their subthemes based on their essence and existential characteristics. Finally, we combined and accumulated the generated themes, subthemes and their codes in a single sheet. This process of qualitative thematic analysis adhered to Lincoln and Guba's criteria for qualitative research to validate and verify the findings. The criteria include four key aspects: credibility, transferability, dependability and confirmability of the qualitative content analysis process (Lincoln and Guba, 1985).

Findings

The study involved three key steps: conducting a systematic review, evaluating the quality of the chosen articles and performing a thematic analysis of the obtained data. The outcomes of the study are organized into three sections, each reflecting one of the steps.

Systematic review

The search yielded 2,921 records, of which 10 were duplicates and were removed. The remaining records were screened by title and abstract for relevance and eligibility. The screening process excluded 2,878 records that did not meet the inclusion criteria or met the exclusion criteria. The full texts of the 33 remaining records were retrieved and assessed for quality and suitability. The final selection consisted of 33 papers that met all the requirements (Figure 1). The mean year of publication of the included studies was 2023 (standard deviation [SD] = 2.00). Among the included studies, systematic reviews constituted 48.5%, scoping reviews 27.3%, systematic reviews with meta-analyses 18.2% and narrative reviews 6% of the total. Supplementary Appendix 1 (Bibliography) contains the details regarding the final articles that were selected for the literature review.
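
The flow of records reported above (and in Figure 1) follows this simple arithmetic, shown here as a sketch for clarity:

```python
# Screening flow arithmetic reported above (see Figure 1).
records_identified = 2921
duplicates_removed = 10
records_screened = records_identified - duplicates_removed        # 2911
excluded_at_screening = 2878
full_texts_assessed = records_screened - excluded_at_screening    # 33
papers_included = 33
assert full_texts_assessed == papers_included
```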

Figure 1. PRISMA diagram of the systematic review.

Quality assessment of final articles

Figure 2 illustrates the distribution of quality assessment scores across the included studies based on the CASP checklist. The mean quality score of the studies was 13.36 (SD = 1.36). All 33 articles had a clearly focused question and searched for the appropriate type of papers, indicating reasonable validity. However, not all articles included all relevant studies or adequately assessed their quality, although their data were still worth considering. All articles combined the results of the review appropriately, indicating sound methodology. The results of the reviews were precise, but their applicability to local populations was often unclear. Moreover, all articles considered all important outcomes and showed that the benefits outweighed the harms and costs, indicating their high efficiency. The final scores ranged from 11 to 15, indicating an acceptable quality of the included papers. The risk of bias within the studies was moderate, as they mostly had a clear question, searched for the right type of papers, included all relevant papers and reported the results precisely. The details of the quality assessment of the final papers are shown in Supplementary Appendix 2 (Quality Assessment).

Figure 2. Results of the quality assessment of the final papers.

Thematic analysis

Table 2 presents the findings of the thematic analysis, which explored and described the characteristics of the 138 mental health chatbots mentioned within the reviewed studies, based on five categories: targeted disorder, input modality, output modality, platform and response generation. These categories represent the key aspects of chatbot design and functionality that are relevant to mental health interventions. Figure 3 presents the overall features of the mental health chatbots. Supplementary Appendix 3 (Study Chatbots) contains detailed information about the chatbots (Abd-Alrazaq et al., 2019, 2020, 2021; Ahmed et al., 2023; Anaduaka et al., 2025; Baek et al., 2025; Balan et al., 2024; Bérubé et al., 2021; Bragazzi et al., 2023; Chiu et al., 2024; Dehbozorgi et al., 2025; Du et al., 2025; Farzan et al., 2025; X Feng et al., 2025; Y Feng et al., 2025; Gaffney et al., 2019; Hawke et al., 2025; He et al., 2023; Im and Woo, 2025; Jabir et al., 2023; Joshi et al., 2025; Kim, 2024; Li et al., 2023, 2025; Lin et al., 2023; Mansoor et al., 2025; Martinengo et al., 2022; Nyakhar and Wang, 2025; Ogilvie et al., 2022; Otero-González et al., 2024; Vaidyam et al., 2019, 2021; Yang et al., 2025).

Table 2. Findings from the thematic analysis

Figure 3. Overall features of mental health chatbots.
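
Before turning to each category, the following minimal sketch (with hypothetical values) shows how each of the 138 chatbots could be recorded under the five attributes used in the thematic analysis; real records are in Supplementary Appendix 3.

```python
# Minimal sketch of a chatbot record under the five thematic categories.
from dataclasses import dataclass
from typing import List

@dataclass
class ChatbotRecord:
    name: str
    targeted_disorder: List[str]   # e.g., ["depression"] or ["not specified"]
    input_modality: List[str]      # subset of {"text", "audio", "image", "video"}
    output_modality: List[str]     # subset of {"text", "audio", "video", "ECA"}
    platform: str                  # "web-based" or "stand-alone"
    response_generation: str       # "rule-based", "generative" or "hybrid"

example = ChatbotRecord(
    name="ExampleBot",             # hypothetical chatbot, not from the dataset
    targeted_disorder=["depression", "anxiety"],
    input_modality=["text"],
    output_modality=["text", "ECA"],
    platform="web-based",
    response_generation="rule-based",
)
```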

Targeted disorder

This category refers to the specific mental health conditions that the chatbots were designed to address or prevent. The targeted disorders mentioned in the included studies spanned a wide range of mental health and well-being conditions, including depression, anxiety and stress, often appearing alone or in combination with loneliness, burnout, sleep problems or self-esteem issues. In this regard, depression was the target disorder for ~40% of the chatbots, anxiety for 31% and stress for nearly 10% of the chatbots included in the study. The data also included broader mood disorders, such as major depression and bipolar disorder, alongside trauma- and fear-related conditions like posttraumatic stress disorder and acrophobia. Several neurodevelopmental and social conditions were addressed, including autism, social communication disorders, social disorders and behavioral issues. Sleep-related disorders, particularly insomnia, appeared both independently and in combination with mood disorders. The dataset further covered substance use-related conditions, such as substance use disorder, addiction and cigarette smoking cessation, as well as eating- and body-related issues, including eating disorders, eating/feeding disorders, body image concerns and diet-related problems. In addition, there were categories related to chronic and physical health-associated mental health concerns, such as chronic disorders, lifestyle disorders, mental health issues of cancer patients and perinatal women’s mental health issues, while a substantial number of chatbots were labeled with “not specified,” indicating a general or nondiagnosis-specific mental health focus.

Input modality

This category refers to the way the user provided information or feedback to the chatbot, such as text, audio, image or video. In this regard, text was used as the input modality in 91% of the chatbots, audio in 34%, video in 1% and image in merely 0.7% of the chatbots.

Output modality

This category refers to the way the chatbot delivered information or feedback to the user, such as text, audio, video or an embodied conversational agent (ECA). An ECA is a graphical representation of a human or an animal that can interact with the user through speech and gestures. In this regard, text served as the output modality for 91% of the study chatbots, audio for 13%, video for 2% and ECA for 28% of the chatbots.

Platform

This category refers to the type of device or software that the chatbot runs on or requires to function. The data could be classified into two main types of platforms: web-based and stand-alone. Web-based platforms are those that require an internet connection and a web browser to access the chatbot. Stand-alone platforms are those that do not require an internet connection or a web browser and can be installed or downloaded on a device. In this regard, ~70% of the chatbots were web-based, while 28% operated as stand-alone systems.

Response generation

This category refers to how the chatbot produced its responses to the user's inputs. The data could be classified into three main types of response generation: rule-based, generative and hybrid. In rule-based response generation, the chatbot follows a set of predefined rules or scripts to select or construct its responses from a fixed pool of options. In generative response generation, the chatbot uses natural language processing techniques to generate its responses from scratch based on the user's inputs and context. In hybrid response generation, the chatbot combines both rule-based and generative methods to produce its responses. In this regard, ~45% of the study chatbots employed rule-based models, 18% utilized generative models and 14% implemented hybrid models.
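
The distinction between the three strategies can be sketched as follows; `generate_reply` is a hypothetical stand-in for any generative language model, not a reference to a specific system, and the scripted replies are invented examples.

```python
# Contrast sketch of the three response-generation strategies described above.
RULES = {
    # Rule-based: a fixed mapping from a detected intent to a scripted reply.
    "greeting": "Hello! How are you feeling today?",
    "mood_low": "I'm sorry to hear that. Would you like a breathing exercise?",
}

def rule_based_reply(intent):
    """Select a response from a fixed pool of predefined options."""
    return RULES.get(intent)

def generative_reply(user_text, generate_reply):
    """Compose a response from scratch given the user's input and context."""
    return generate_reply(user_text)

def hybrid_reply(intent, user_text, generate_reply):
    """Prefer the script when one matches; otherwise fall back to generation."""
    return rule_based_reply(intent) or generative_reply(user_text, generate_reply)
```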

Discussion

In this section, the characteristics of the chatbots, as mentioned in the results section, are examined. The analysis encompasses various aspects, such as the chatbot’s targeted disorder, input modality, output modality, platform and response generation.

Targeted disorder

According to the study findings, the primary target disorders of the chatbots examined in the included studies were depression (40%) and anxiety (31%). It is understandable that most mental health chatbots address depression, anxiety and stress, since these disorders have been shown to be associated with social isolation and loneliness (Hidaka, 2012; Matthews et al., 2016; Ratnani et al., 2017), and chatbots can provide services without requiring the physical presence of the patient in the clinic or in society (Parviainen and Rantala, 2022). In this regard, an understanding of the impact of such technologies on the quality and outcomes of care delivery has been shown to be a key determinant of their utilization (Izadi et al., 2023).

Individuals diagnosed with autism may show severe and persistent impairments in social interaction and communication, along with repetitive/stereotyped behaviors, which may make remote healthcare services, especially mental health chatbots, a preferable option for accessing their desired medical care (Parr et al., 2010). Moreover, as misconceptions regarding specific diseases and treatments perpetuate stigma within certain societies, mobile health (mHealth) applications have emerged as a viable solution (Khosravi et al., 2024a).

Although chatbots yield positive mental health outcomes, evidence indicates potential harms. Interactions may reinforce delusions or trigger psychosis in vulnerable individuals with genetic or stress-related predispositions, exacerbate bipolar mania by validating elevated moods or collude with psychotic fantasies. Generative artificial intelligence (AI) risks hallucinations, biases and harmful advice, worsening conditions via misdiagnosis or unsafe encouragement (AlMaskari et al., 2025; Hipgrave et al., 2025; Hua et al., 2025). Hence, a situation analysis is required before designing and financing the production of chatbots for mental health disorders, in order to identify the patient groups most in need who can benefit from chatbots and to adjust the quantity and diversity of chatbots according to the needs of these groups.

Input modality

As indicated by the study findings, the input modalities of the chatbots were classified into several categories, including text, audio, image and video. The primary input modality was text, with ~91% of the chatbots utilizing text as their input method. In this regard, evidence indicates that text-based interfaces can be readily integrated with portals, messaging platforms and electronic health records, while facilitating data capture, auditing and algorithm development (Hindelang et al., 2024; Barreda et al., 2025; Moore et al., 2025). Text-based chatbots are consistent with existing patient behaviors, as many individuals are already accustomed to interacting with healthcare services through secure messaging and portals (Bai et al., 2025; Moore et al., 2025). Furthermore, text interfaces are more easily standardized and processed using current natural language processing techniques and large language models (Singh et al., 2023; Barreda et al., 2025; Loftus et al., 2025).

However, one study showed that voice-based interactions improved social bonding and did not raise discomfort, unlike text-based interactions. Although this finding concerns the output modality as well as the input modality, mistaken expectations about discomfort or bonding could lead to the suboptimal choice of a text-based platform. Misjudging the outcomes of using different communication platforms could lead to preferences for platforms that enhance neither one's own nor others' well-being (Kumar and Epley, 2021). Therefore, the preferences and needs of the patients should be the basis for the design and production of chatbots for mental health services, as some groups of mental health patients may face difficulties in using chatbots that only accept written text as input and output.

Output modality

The study findings indicated that text (91%) was the primary output modality employed by the chatbots reported in the literature, while only 28% of the chatbots were ECAs. In this regard, several healthcare contexts have demonstrated the effectiveness of ECAs in areas such as stress management and mental health (Gardiner et al., 2017; Provoost et al., 2017). Additionally, ECAs have been found to create a sense of companionship with patients and reduce loneliness, which is a risk factor for various mental health disorders (Hawkley and Cacioppo, 2010). Furthermore, applying patient-centered communication features within ECAs can enhance user satisfaction and engagement with healthcare services (Borghi et al., 2019; Kwame and Petrucka, 2021). This is especially relevant because several articles have indicated that the user's perception of healthcare services is influenced by the behavior, language, emotional expression, virtual environment and embodiment of ECAs (Qiu and Benbasat, 2009; Kulms et al., 2014; Cerekovic et al., 2016; Kang et al., 2016; Hoegen et al., 2018).

Several aspects are related to patient-centered communication: (1) exploring and comprehending patient perspectives, such as concerns, ideas, expectations, needs, feelings and functioning; (2) understanding the patient within his or her specific psychosocial and cultural contexts; and (3) achieving a common understanding of patient problems and the treatments that are consistent with patient values (Epstein and Street, 2007).

The input and output modalities of chatbots should be designed according to the preferences and needs of the patients, especially mental health patients, as some of them may have challenges with certain modalities. Moreover, as discussed above, such tailoring can potentially improve patient-centered communication in mental health services.

Platform

The study findings indicated that the majority of chatbots reported in the literature were web-based (70%). In this regard, there are significant and enduring gaps in access to and usage of digital technologies among different regions and communities, which are exacerbated by the increasing complexity and functionality of devices and connectivity. As a result, groups that can leverage the full potential of digital technologies hold a relative advantage over others, producing unequal access to digital technologies among societies with diverse socioeconomic backgrounds. The term "digital divide" is used for this phenomenon: it connotes differential access to and use of digital technologies by different groups, and it is not a simple gap but a complex spectrum of users that changes over time based on infrastructure, environment and personal factors (Selwyn, 2004; Fox, 2016). In such settings, chatbots that run on their own stand-alone platforms therefore appear more advantageous than chatbots that depend on web-based platforms.

The manufacturers and designers of chatbots should consider the digital divide within societies, as addressing it is critical to enhancing the equality and accessibility of services, especially in the mental health domain. As mentioned earlier, providing chatbots that operate on stand-alone platforms can be a major strategy to achieve this vision.

Response generation

The study findings indicated that the majority of chatbots reported in the literature employed rule-based systems (45%). In this regard, rule-based chatbots may appeal to autistic individuals, as such chatbots operate in a consistent and predictable manner, and autistic individuals may experience discomfort when they encounter new and unfamiliar situations that are not repetitive (American Psychiatric Association, 2013). On the other hand, other types of patients may prefer generative or hybrid chatbots, as these emulate human behavior and emotions, which has been shown to enhance rapport, motivation and engagement (Giger et al., 2019).

As mentioned earlier, the needs and preferences of each group of patients should be considered in the design and production of chatbots for mental health services, as this is a key strategy to improve the quality of and satisfaction with the services. Different forms of response generation may have varying and contrasting effects on different groups of patients, depending on their preferences for the type of care they receive. Some patients may favor chatbots that are rule-based, precise and repetitive, while others may favor chatbots that are innovative, emotionally responsive and motivating. Therefore, it is essential to understand the needs and expectations of the target users before developing chatbots for mental health services.

Overall, evidence indicates greater utilization of mental health chatbots among underserved populations relative to others. These tools address access barriers via cost-effectiveness, round-the-clock availability and stigma mitigation, particularly in low-resource settings and among refugees (Haque and Rubya, 2023; Khosravi et al., 2024a; Coelho et al., 2025; Han and Zhao, 2025; Pozzi and De Proost, 2025). However, ethical considerations must be addressed before deploying chatbots, particularly within the critical domain of mental health, as such considerations play a crucial role in the adoption of electronic mental health technologies; critical issues such as informed consent and the protection of privacy must not be overlooked (Khosravi et al., 2025). Moreover, algorithmic biases, stemming from training data that mirror societal prejudices, result in discriminatory advice, stigmatization of conditions such as schizophrenia and diminished empathy toward ethnic minorities and marginalized groups. These biases exacerbate mental health disparities through inaccurate responses and potentially harmful recommendations (Khawaja and Bélisle-Pipon, 2023). Another critical concern is the risk of substitution: AI-driven chatbots must not replace human professionals in delivering health services (Altamimi et al., 2023; Khawaja and Bélisle-Pipon, 2023; Greš and Staver, 2025). In such a context, developers must deliver clear disclosures on data collection, storage, usage and sharing before interactions, ensuring comprehension by vulnerable mental health users. Continuous consent via opt-in prompts for sensitive data and deletion options upholds autonomy and other ethical standards (Coghlan et al., 2023; Talebi Azadboni et al., 2025). Moreover, anonymization through data masking, encryption (e.g., secure enclaves) and blockchain identity management mitigates breaches and re-identification, while regular audits, minimal sharing, transparency reports and automated privacy prompts foster trust and user control (Iwaya et al., 2023; Talebi Azadboni et al., 2025). Finally, chatbots should be considered supplementary tools to support, rather than replace, human professionals in mental health services (Khawaja and Bélisle-Pipon, 2023).

Implications and limitations

This study has several implications for mental health chatbot manufacturers, policymakers and future researchers. First, this paper emphasized the need to design and produce chatbots for mental health services that suit the preferences and needs of the patients. This involves considering the specific disorder, the input and output modalities, the platform and the response generation of the chatbots. Second, the research suggested that chatbots can be beneficial for individuals who suffer from social isolation and loneliness due to mental health disorders, such as depression, anxiety and stress, as well as for individuals with autism who have impairments in social interaction and communication, along with repetitive/stereotyped behaviors. Furthermore, the research indicated that the digital divide within societies should be taken into account when designing and producing chatbots for mental health services, and that providing chatbots that work on independent platforms (stand-alone chatbots) can be a key strategy to improve the equality and accessibility of the services. The study also examined ethical challenges in mental health chatbot deployment and proposed strategies to address them. These insights offer valuable guidance for health policymakers, managers implementing chatbots in healthcare services and manufacturers seeking to strengthen ethical standards for improved system integration.

This study had some limitations. First, it provided only a limited overview of the current state of mental health chatbots and did not assess their effectiveness or impact on patient outcomes. Second, it did not examine the cost-effectiveness or feasibility of implementing chatbots for mental health services in various settings. Moreover, it did not investigate the potential risks or negative consequences of using chatbots for mental health services, owing to the limited scope, data and time available to the researchers. Future researchers can address these issues.

Conclusion

This study conducted a thematic analysis of the existing data on mental health chatbots and provided insights into their technical features, such as their targeted disorders, input and output modalities, platform and response generation. The research emphasized the need to design and produce chatbots for mental health services that suit the preferences and needs of the patients. Chatbots can be beneficial for individuals who suffer from social isolation and loneliness due to mental health disorders such as depression, anxiety and stress. Chatbots may also be an alternative option for individuals with autism who have impairments in social interaction and communication, along with repetitive/stereotyped behaviors. The research also indicated that the digital divide within societies should be taken into account when designing and producing chatbots for mental health services, and that providing chatbots that work on independent platforms can be a key strategy to improve the equality and accessibility of the services. The study also addressed ethical concerns regarding the use of chatbots in mental health services, proposing solutions and highlighting their high utility for underserved populations. However, the research had some limitations, such as the lack of information on the effectiveness or impact of chatbots on patient outcomes, their cost-effectiveness or feasibility in different contexts and the potential risks or negative consequences of using chatbots for mental health services.

Open peer review

To view the open peer review materials for this article, please visit http://doi.org/10.1017/gmh.2026.10144.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/gmh.2026.10144.

Data availability statement

The research data can be accessed by contacting the corresponding author of the paper.

Author contribution

MK conducted the search within the databases, extracted the data and conducted the analysis. MK wrote the introduction, methods, results and discussion sections. RI validated the data collection process and revised the text of the manuscript. The final version of the manuscript was approved by all of the authors.

Financial support

No funding was received for this research.

Competing interests

The authors declare none.

References

Abd-Alrazaq, AA, Alajlani, M, Alalwan, AA, Bewick, BM, Gardner, P and Househ, M (2019) An overview of the features of chatbots in mental health: A scoping review. International Journal of Medical Informatics 132, 103978. https://doi.org/10.1016/j.ijmedinf.2019.103978.
Abd-Alrazaq, AA, Alajlani, M, Ali, N, Denecke, K, Bewick, BM and Househ, M (2021) Perceptions and opinions of patients about mental health chatbots: Scoping review. Journal of Medical Internet Research 23(1), e17828. https://doi.org/10.2196/17828.
Abd-Alrazaq, AA, Rababeh, A, Alajlani, M, Bewick, BM and Househ, M (2020) Effectiveness and safety of using chatbots to improve mental health: Systematic review and meta-analysis. Journal of Medical Internet Research 22(7), e16021. https://doi.org/10.2196/16021.
Abdellatif, M, Dadam, MN, Vu, NT, Nam, NH, Hoan, NQ, Taoube, Z, Tran, P and Huy, NT (2025) A step-by-step guide for conducting an umbrella review. Tropical Medicine and Health 53(1), 134. https://doi.org/10.1186/s41182-025-00764-y.
Adamopoulou, E and Moussiades, L (2020) An overview of chatbot technology. In Maglogiannis, I, Iliadis, L and Pimenidis, E (eds), Artificial Intelligence Applications and Innovations. Cham: Springer International Publishing.
Ahmed, A, Hassan, A, Aziz, S, Abd-Alrazaq, AA, Ali, N, Alzubaidi, M, Al-Thani, D, Elhusein, B, Siddig, MA, Ahmed, M and Househ, M (2023) Chatbot features for anxiety and depression: A scoping review. Health Informatics Journal 29(1), 14604582221146719. https://doi.org/10.1177/14604582221146719.
AlMaskari, AM, Al-Mahrouqi, T, Al Lawati, A, Al Aufi, H, Al Riyami, Q and Al-Sinawi, H (2025) Students' perceptions of AI mental health chatbots: An exploratory qualitative study at Sultan Qaboos University. BMJ Open 15(10), e103893. https://doi.org/10.1136/bmjopen-2025-103893.
Altamimi, I, Altamimi, A, Alhumimidi, AS, Altamimi, A and Temsah, MH (2023) Artificial intelligence (AI) chatbots in medicine: A supplement, not a substitute. Cureus 15(6), e40922. https://doi.org/10.7759/cureus.40922.
American Psychiatric Association (2013) Diagnostic and Statistical Manual of Mental Disorders: DSM-5. Washington, DC: American Psychiatric Association.
Anaduaka, US, Oladosu, AO, Katsande, S, Frempong, CS and Awuku-Amador, S (2025) Leveraging artificial intelligence in the prediction, diagnosis and treatment of depression and anxiety among perinatal women in low- and middle-income countries: A systematic review. BMJ Mental Health 28(1). https://doi.org/10.1136/bmjment-2024-301445.
Baek, G, Cha, C and Han, JH (2025) AI chatbots for psychological health for health professionals: Scoping review. JMIR Human Factors 12, e67682. https://doi.org/10.2196/67682.
Bai, X, Wang, S, Zhao, Y, Feng, M, Ma, W and Liu, X (2025) Application of AI chatbot in responding to asynchronous text-based messages from patients with cancer: Comparative study. Journal of Medical Internet Research 27, e67462. https://doi.org/10.2196/67462.
Balan, R, Dobrean, A and Poetar, CR (2024) Use of automated conversational agents in improving young population mental health: A scoping review. NPJ Digital Medicine 7(1), 75. https://doi.org/10.1038/s41746-024-01072-1.
Barreda, M, Cantarero-Prieto, D, Coca, D, Delgado, A, Lanza-León, P, Lera, J, Montalbán, R and Pérez, F (2025) Transforming healthcare with chatbots: Uses and applications - a scoping review. Digital Health 11, 20552076251319174. https://doi.org/10.1177/20552076251319174.
Belbasis, L, Bellou, V and Ioannidis, JPA (2022) Conducting umbrella reviews. BMJ Medicine 1(1), e000071. https://doi.org/10.1136/bmjmed-2021-000071.
Bérubé, C, Schachner, T, Keller, R, Fleisch, E, von Wangenheim, F, Barata, F and Kowatsch, T (2021) Voice-based conversational agents for the prevention and management of chronic and mental health conditions: Systematic literature review. Journal of Medical Internet Research 23(3), e25933. https://doi.org/10.2196/25933.
Borghi, L, Leone, D, Poli, S, Becattini, C, Chelo, E, Costa, M, De Lauretis, L, Ferraretti, AP, Filippini, C, Giuffrida, G, Livi, C, Luehwink, A, Palermo, R, Revelli, A, Tomasi, G, Tomei, F and Vegni, E (2019) Patient-centered communication, patient satisfaction, and retention in care in assisted reproductive technology visits. Journal of Assisted Reproduction and Genetics 36(6), 1135–1142. https://doi.org/10.1007/s10815-019-01466-1.
Bragazzi, NL, Crapanzano, A, Converti, M, Zerbetto, R and Khamisy-Farah, R (2023) The impact of generative conversational artificial intelligence on the lesbian, gay, bisexual, transgender, and queer community: Scoping review. Journal of Medical Internet Research 25, e52091. https://doi.org/10.2196/52091.
Braun, V and Clarke, V (2023) Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher. International Journal of Transgender Health 24(1), 1–6. https://doi.org/10.1080/26895269.2022.2129597.
Cerekovic, A, Aran, O and Gatica-Perez, D (2016) Rapport with virtual agents: What do human social cues and personality explain? IEEE Transactions on Affective Computing. https://doi.org/10.1109/TAFFC.2016.2545650.
Chandrashekar, P (2018) Do mental health mobile apps work: Evidence and recommendations for designing high-efficacy mental health mobile apps. mHealth 4, 6. https://doi.org/10.21037/mhealth.2018.03.02.
Chiu, YH, Lee, YF, Lin, HL and Cheng, LC (2024) Exploring the role of mobile apps for insomnia in depression: Systematic review. Journal of Medical Internet Research 26, e51110. https://doi.org/10.2196/51110.
Coelho, J, Pécune, F, Micoulaud-Franchi, JA, Bioulac, B and Philip, P (2025) Promoting mental health in the age of new digital tools: Balancing challenges and opportunities of social media, chatbots, and wearables. Frontiers in Digital Health 7, 1560580. https://doi.org/10.3389/fdgth.2025.1560580.
Coghlan, S, Leins, K, Sheldrick, S, Cheong, M, Gooding, P and D'Alfonso, S (2023) To chat or bot to chat: Ethical issues with using chatbots in mental health. Digital Health 9, 20552076231183542. https://doi.org/10.1177/20552076231183542.
Conner, KO, Copeland, VC, Grote, NK, Koeske, G, Rosen, D, Reynolds, CF and Brown, C (2010) Mental health treatment seeking among older adults with depression: The impact of stigma and race. The American Journal of Geriatric Psychiatry 18(6), 531–543. https://doi.org/10.1097/JGP.0b013e3181cc0366.
Dehbozorgi, R, Zangeneh, S, Khooshab, E, Nia, DH, Hanif, HR, Samian, P, Yousefi, M, Hashemi, FH, Vakili, M, Jamalimoghadam, N and Lohrasebi, F (2025) The application of artificial intelligence in the field of mental health: A systematic review. BMC Psychiatry 25(1), 132. https://doi.org/10.1186/s12888-025-06483-2.
Du, Q, Ren, Y, Meng, ZL, He, H and Meng, S (2025) The efficacy of rule-based versus large language model-based chatbots in alleviating symptoms of depression and anxiety: Systematic review and meta-analysis. Journal of Medical Internet Research 27, e78186. https://doi.org/10.2196/78186.
Epstein, RM and Street, RL (2007) Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. Bethesda, MD: National Cancer Institute.
Farzan, M, Ebrahimi, H, Pourali, M and Sabeti, F (2025) Artificial intelligence-powered cognitive behavioral therapy chatbots, a systematic review. Iranian Journal of Psychiatry 20(1), 102–110. https://doi.org/10.18502/ijps.v20i1.17395.
Feng, Y, Hang, Y, Wu, W, Song, X, Xiao, X, Dong, F and Qiao, Z (2025) Effectiveness of AI-driven conversational agents in improving mental health among young people: Systematic review and meta-analysis. Journal of Medical Internet Research 27, e69639. https://doi.org/10.2196/69639.
Feng, X, Tian, L, Ho, GWK, Yorke, J and Hui, V (2025) The effectiveness of AI chatbots in alleviating mental distress and promoting health behaviors among adolescents and young adults: Systematic review and meta-analysis. Journal of Medical Internet Research 27, e79850. https://doi.org/10.2196/79850.
Fox, NJ (2016) Health sociology from post-structuralism to the new materialisms. Health (London, England) 20(1), 62–74. https://doi.org/10.1177/1363459315615393.
Gaffney, H, Mansell, W and Tai, S (2019) Conversational agents in the treatment of mental health problems: Mixed-method systematic review. JMIR Mental Health 6(10). https://doi.org/10.2196/14166.
Gardiner, PM, McCue, KD, Negash, LM, Cheng, T, White, LF, Yinusa-Nyahkoon, L, Jack, BW and Bickmore, TW (2017) Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: A feasibility randomized control trial. Patient Education and Counseling 100(9), 1720–1729. https://doi.org/10.1016/j.pec.2017.04.015.
Giger, JC, Piçarra, N, Alves-Oliveira, P, Oliveira, R and Arriaga, P (2019) Humanization of robots: Is it really such a good idea? Human Behavior and Emerging Technologies 1(2), 111–123.
Greš, A and Staver, D (2025) The utilization of artificial intelligence in mental health. Rivista di Psichiatria 60(4), 145–149. https://doi.org/10.1708/4548.45486.
Han, Q and Zhao, C (2025) Unleashing the potential of chatbots in mental health: Bibliometric analysis. Frontiers in Psychiatry 16.
Haque, MDR and Rubya, S (2023) An overview of chatbot-based mobile mental health apps: Insights from app description and user reviews. JMIR mHealth and uHealth 11, e44838. https://doi.org/10.2196/44838.
Hawke, LD, Hou, J, Nguyen, ATP, Phi, T, Gibson, J, Ritchie, B, Strudwick, G, Rodak, T and Gallagher, L (2025) Digital conversational agents for the mental health of treatment-seeking youth: Scoping review. JMIR Mental Health 12, e77098. https://doi.org/10.2196/77098.
Hawkley, LC and Cacioppo, JT (2010) Loneliness matters: A theoretical and empirical review of consequences and mechanisms. Annals of Behavioral Medicine 40(2), 218–227. https://doi.org/10.1007/s12160-010-9210-8.
He, Y, Yang, L, Qian, C, Li, T, Su, Z, Zhang, Q and Hou, X (2023) Conversational agent interventions for mental health problems: Systematic review and meta-analysis of randomized controlled trials. Journal of Medical Internet Research 25. https://doi.org/10.2196/43862.
Hester, RD (2017) Lack of access to mental health services contributing to the high suicide rates among veterans. International Journal of Mental Health Systems 11(1), 47. https://doi.org/10.1186/s13033-017-0154-2.
Hidaka, BH (2012) Depression as a disease of modernity: Explanations for increasing prevalence. Journal of Affective Disorders 140(3), 205–214.
Hindelang, M, Sitaru, S and Zink, A (2024) Transforming health care through chatbots for medical history-taking and future directions: Comprehensive systematic review. JMIR Medical Informatics 12, e56628. https://doi.org/10.2196/56628.
Hipgrave, L, Goldie, J, Dennis, S and Coleman, A (2025) Balancing risks and benefits: Clinicians' perspectives on the use of generative AI chatbots in mental healthcare. Frontiers in Digital Health 7, 1606291. https://doi.org/10.3389/fdgth.2025.1606291.
Hoegen, R, van der Schalk, J, Lucas, G and Gratch, J (2018) The impact of agent facial mimicry on social behavior in a prisoner's dilemma. In Proceedings of the 18th International Conference on Intelligent Virtual Agents, Sydney, NSW, Australia. Available at https://doi.org/10.1145/3267851.3267911.
Hua, Y, Siddals, S, Ma, Z, Galatzer-Levy, I, Xia, W, Hau, C, Na, H, Flathers, M, Linardon, J, Ayubcha, C and Torous, J (2025) Charting the evolution of artificial intelligence mental health chatbots from rule-based systems to large language models: A systematic review. World Psychiatry 24(3), 383–394. https://doi.org/10.1002/wps.21352.
Im, CH and Woo, M (2025) Clinical efficacy, therapeutic mechanisms, and implementation features of cognitive behavioral therapy-based chatbots for depression and anxiety: Narrative review. JMIR Mental Health 12, e78340. https://doi.org/10.2196/78340.
Iwaya, LH, Babar, MA, Rashid, A and Wijayarathna, C (2023) On the privacy of mental health apps: An empirical investigation and its implications for app development. Empirical Software Engineering 28(1), 2. https://doi.org/10.1007/s10664-022-10236-0.
Izadi, R, Bahrami, MA, Khosravi, M and Delavari, S (2023) Factors affecting the acceptance of tele-psychiatry: A scoping study. Archives of Public Health 81(1), 131. https://doi.org/10.1186/s13690-023-01146-8.
Jabir, AI, Martinengo, L, Lin, X, Torous, J, Subramaniam, M and Tudor Car, L (2023) Evaluating conversational agents for mental health: Scoping review of outcomes and outcome measurement instruments. Journal of Medical Internet Research 25, e44548. https://doi.org/10.2196/44548.
Jones, SP, Patel, V, Saxena, S, Radcliffe, N, Ali Al-Marri, S and Darzi, A (2014) How Google's 'ten things we know to be true' could guide the development of mental health mobile apps. Health Affairs (Millwood) 33(9), 1603–1611. https://doi.org/10.1377/hlthaff.2014.0380.
Joshi, AC, Ghogare, AS and Madavi, PB (2025) Systematic review of artificial intelligence enabled psychological interventions for depression and anxiety: A comprehensive analysis. Industrial Psychiatry Journal 34(2), 158–166. https://doi.org/10.4103/ipj.ipj_363_24.
Kang, S-H, Phan, T, Bolas, M and Krum, D (2016) User perceptions of a virtual human over mobile video chat interactions.
Khawaja, Z and Bélisle-Pipon, JC (2023) Your robot therapist is not your therapist: Understanding the role of AI-powered mental health chatbots. Frontiers in Digital Health 5, 1278186. https://doi.org/10.3389/fdgth.2023.1278186.
Khosravi, M and Azar, G (2024a) Factors influencing patient engagement in mental health chatbots: A thematic analysis of findings from a systematic review of reviews. Digital Health 10, 20552076241247983. https://doi.org/10.1177/20552076241247983.
Khosravi, M and Azar, G (2024b) A systematic review of reviews on the advantages of mHealth utilization in mental health services: A viable option for large populations in low-resource settings. Cambridge Prisms: Global Mental Health 11, e43. https://doi.org/10.1017/gmh.2024.39.
Khosravi, M, Azar, G and Izadi, R (2024a) Principles and elements of patient-centredness in mental health services: A thematic analysis of a systematic review of reviews. BMJ Open Quality 13(3), e002719. https://doi.org/10.1136/bmjoq-2023-002719.Google Scholar
Khosravi, M, Izadi, R and Azar, G (2025) Factors influencing the engagement with electronic mental health technologies: A systematic review of reviews. Administration and Policy in Mental Health and Mental Health Services Research 52(2), 415427. https://doi.org/10.1007/s10488-024-01420-z.Google Scholar
Khosravi, M, Mojtabaeian, SM and Aghamaleki Sarvestani, M (2024b) A systematic review on factors influencing middle eastern women’s utilization of healthcare services: The promise of mHealth. Sage Open Medicine 12, 20503121241276678. https://doi.org/10.1177/20503121241276678.Google Scholar
Kim, HK (2024) The effects of artificial intelligence chatbots on women’s health: A systematic review and meta-analysis. Healthcare (Basel) 12(5). https://doi.org/10.3390/healthcare12050534.Google Scholar
Kulms, P, Kopp, S and Krämer, NC (2014) Let’s be serious and have a laugh: Can humor support cooperation with a virtual agent? In Bickmore, T, Marsella, S and Sidner, C (eds.), Intelligent Virtual Agents. Cham: Springer International Publishing.Google Scholar
Kumar, A and Epley, N (2021) It’s surprisingly nice to hear you: Misunderstanding the impact of communication media can lead to suboptimal choices of how to connect with others. Journal of Experimental Psychology. General 150(3), 595607. https://doi.org/10.1037/xge0000962.Google Scholar
Kwame, A and Petrucka, PM (2021) A literature-based study of patient-centered care and communication in nurse-patient interactions: Barriers, facilitators, and the way forward. BMC Nursing 20(1), 158. https://doi.org/10.1186/s12912-021-00684-2.Google Scholar
Li, J, Li, Y, Hu, Y, Ma, DCF, Chan, EA and Yorke, J (2025) Chatbot-delivered interventions on psychological health among young people: A systematic review and meta-analysis. Studies in Health Technology and Informatics 329, 18741875. https://doi.org/10.3233/shti251258.Google Scholar
Li, H, Zhang, R, Lee, YC, Kraut, RE and Mohr, DC (2023) Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. NPJ Digital Medicine 6(1), 236. https://doi.org/10.1038/s41746-023-00979-5.Google Scholar
Lin, X, Martinengo, L, Jabir, AI, Ho, AHY, Car, J, Atun, R and Tudor Car, L (2023) Scope, characteristics, behavior change techniques, and quality of conversational agents for mental health and well-being: Systematic assessment of apps. Journal of Medical Internet Research 25, e45984. https://doi.org/10.2196/45984.Google Scholar
Lincoln, YS and Guba, EG (1985) Naturalistic Inquiry. Beverly Hills, CA: Sage Publications.
Loftus, TJ, Haider, A and Upchurch, GR (2025) Practical guide to artificial intelligence, chatbots, and large language models in conducting and reporting research. JAMA Surgery 160(5), 588–589. https://doi.org/10.1001/jamasurg.2024.6025.
Mansoor, M, Hamide, A and Tran, T (2025) Conversational AI in pediatric mental health: A narrative review. Children (Basel) 12(3). https://doi.org/10.3390/children12030359.
Martinengo, L, Jabir, AI, Goh, WWT, Lo, NYW, Ho, MR, Kowatsch, T, Atun, R, Michie, S and Tudor Car, L (2022) Conversational agents in health care: Scoping review of their behavior change techniques and underpinning theory. Journal of Medical Internet Research 24(10), e39243. https://doi.org/10.2196/39243.
Matthews, T, Danese, A, Wertz, J, Odgers, CL, Ambler, A, Moffitt, TE and Arseneault, L (2016) Social isolation, loneliness and depression in young adulthood: A behavioural genetic analysis. Social Psychiatry and Psychiatric Epidemiology 51(3), 339–348.
Mojtabai, R, Olfson, M, Sampson, NA, Jin, R, Druss, B, Wang, PS, Wells, KB, Pincus, HA and Kessler, RC (2011) Barriers to mental health treatment: Results from the National Comorbidity Survey Replication. Psychological Medicine 41(8), 1751–1761. https://doi.org/10.1017/s0033291710002291.
Moore, AA, Ellis, JR, Dellavalle, N, Akerson, M, Andazola, M, Campbell, EG and DeCamp, M (2025) Patient-facing chatbots: Enhancing healthcare accessibility while navigating digital literacy challenges and isolation risks-a mixed-methods study. Digital Health 11, 20552076251337321. https://doi.org/10.1177/20552076251337321.
Murray, CJ, Vos, T, Lozano, R, Naghavi, M, Flaxman, AD, Michaud, C, Ezzati, M, Shibuya, K, Salomon, JA, Abdalla, S, Aboyans, V, Abraham, J, Ackerman, I, Aggarwal, R, Ahn, SY, Ali, MK, Alvarado, M, Anderson, HR, Anderson, LM, Andrews, KG, Atkinson, C, Baddour, LM, Bahalim, AN, Barker-Collo, S, Barrero, LH, Bartels, DH, Basáñez, MG, Baxter, A, Bell, ML, Benjamin, EJ, Bennett, D, Bernabé, E, Bhalla, K, Bhandari, B, Bikbov, B, Bin Abdulhak, A, Birbeck, G, Black, JA, Blencowe, H, Blore, JD, Blyth, F, Bolliger, I, Bonaventure, A, Boufous, S, Bourne, R, Boussinesq, M, Braithwaite, T, Brayne, C, Bridgett, L, Brooker, S, Brooks, P, Brugha, TS, Bryan-Hancock, C, Bucello, C, Buchbinder, R, Buckle, G, Budke, CM, Burch, M, Burney, P, Burstein, R, Calabria, B, Campbell, B, Canter, CE, Carabin, H, Carapetis, J, Carmona, L, Cella, C, Charlson, F, Chen, H, Cheng, AT, Chou, D, Chugh, SS, Coffeng, LE, Colan, SD, Colquhoun, S, Colson, KE, Condon, J, Connor, MD, Cooper, LT, Corriere, M, Cortinovis, M, de Vaccaro, KC, Couser, W, Cowie, BC, Criqui, MH, Cross, M, Dabhadkar, KC, Dahiya, M, Dahodwala, N, Damsere-Derry, J, Danaei, G, Davis, A, De Leo, D, Degenhardt, L, Dellavalle, R, Delossantos, A, Denenberg, J, Derrett, S, Des Jarlais, DC, Dharmaratne, SD, Dherani, M, Diaz-Torne, C, Dolk, H, Dorsey, ER, Driscoll, T, Duber, H, Ebel, B, Edmond, K, Elbaz, A, Ali, SE, Erskine, H, Erwin, PJ, Espindola, P, Ewoigbokhan, SE, Farzadfar, F, Feigin, V, Felson, DT, Ferrari, A, Ferri, CP, Fèvre, EM, Finucane, MM, Flaxman, S, Flood, L, Foreman, K, Forouzanfar, MH, Fowkes, FG, Fransen, M, Freeman, MK, Gabbe, BJ, Gabriel, SE, Gakidou, E, Ganatra, HA, Garcia, B, Gaspari, F, Gillum, RF, Gmel, G, Gonzalez-Medina, D, Gosselin, R, Grainger, R, Grant, B, Groeger, J, Guillemin, F, Gunnell, D, Gupta, R, Haagsma, J, Hagan, H, Halasa, YA, Hall, W, Haring, D, Haro, JM, Harrison, JE, Havmoeller, R, Hay, RJ, Higashi, H, Hill, C, Hoen, B, Hoffman, H, Hotez, PJ, Hoy, D, Huang, JJ, Ibeanusi, SE, Jacobsen, KH, James, SL, Jarvis, D, Jasrasaria, R, Jayaraman, S, Johns, N, Jonas, JB, Karthikeyan, G, Kassebaum, N, Kawakami, N, Keren, A, Khoo, JP, King, CH, Knowlton, LM, Kobusingye, O, Koranteng, A, Krishnamurthi, R, Laden, F, Lalloo, R, Laslett, LL, Lathlean, T, Leasher, JL, Lee, YY, Leigh, J, Levinson, D, Lim, SS, Limb, E, Lin, JK, Lipnick, M, Lipshultz, SE, Liu, W, Loane, M, Ohno, SL, Lyons, R, Mabweijano, J, MacIntyre, MF, Malekzadeh, R, Mallinger, L, Manivannan, S, Marcenes, W, March, L, Margolis, DJ, Marks, GB, Marks, R, Matsumori, A, Matzopoulos, R, Mayosi, BM, McAnulty, JH, McDermott, MM, McGill, N, McGrath, J, Medina-Mora, ME, Meltzer, M, Mensah, GA, Merriman, TR, Meyer, AC, Miglioli, V, Miller, M, Miller, TR, Mitchell, PB, Mock, C, Mocumbi, AO, Moffitt, TE, Mokdad, AA, Monasta, L, Montico, M, Moradi-Lakeh, M, Moran, A, Morawska, L, Mori, R, Murdoch, ME, Mwaniki, MK, Naidoo, K, Nair, MN, Naldi, L, Narayan, KM, Nelson, PK, Nelson, RG, Nevitt, MC, Newton, CR, Nolte, S, Norman, P, Norman, R, O’Donnell, M, O’Hanlon, S, Olives, C, Omer, SB, Ortblad, K, Osborne, R, Ozgediz, D, Page, A, Pahari, B, Pandian, JD, Rivero, AP, Patten, SB, Pearce, N, Padilla, RP, Perez-Ruiz, F, Perico, N, Pesudovs, K, Phillips, D, Phillips, MR, Pierce, K, Pion, S, Polanczyk, GV, Polinder, S, Pope, CA III, Popova, S, Porrini, E, Pourmalek, F, Prince, M, Pullan, RL, Ramaiah, KD, Ranganathan, D, Razavi, H, Regan, M, Rehm, JT, Rein, DB, Remuzzi, G, Richardson, K, Rivara, FP, Roberts, T, Robinson, C, De Leòn, FR, Ronfani, L, Room, R, Rosenfeld, LC, Rushton, L, Sacco, RL, Saha, S, Sampson, U, Sanchez-Riera, L, Sanman, E, Schwebel, DC, Scott, JG, Segui-Gomez, M, Shahraz, S, Shepard, DS, Shin, H, Shivakoti, R, Singh, D, Singh, GM, Singh, JA, Singleton, J, Sleet, DA, Sliwa, K, Smith, E, Smith, JL, Stapelberg, NJ, Steer, A, Steiner, T, Stolk, WA, Stovner, LJ, Sudfeld, C, Syed, S, Tamburlini, G, Tavakkoli, M, Taylor, HR, Taylor, JA, Taylor, WJ, Thomas, B, Thomson, WM, Thurston, GD, Tleyjeh, IM, Tonelli, M, Towbin, JA, Truelsen, T, Tsilimbaris, MK, Ubeda, C, Undurraga, EA, van der Werf, MJ, van Os, J, Vavilala, MS, Venketasubramanian, N, Wang, M, Wang, W, Watt, K, Weatherall, DJ, Weinstock, MA, Weintraub, R, Weisskopf, MG, Weissman, MM, White, RA, Whiteford, H, Wiebe, N, Wiersma, ST, Wilkinson, JD, Williams, HC, Williams, SR, Witt, E, Wolfe, F, Woolf, AD, Wulf, S, Yeh, PH, Zaidi, AK, Zheng, ZJ, Zonies, D, Lopez, AD, AlMazroa, MA and Memish, ZA (2012) Disability-adjusted life years (DALYs) for 291 diseases and injuries in 21 regions, 1990-2010: A systematic analysis for the global burden of disease study 2010. Lancet 380(9859), 2197–2223. https://doi.org/10.1016/s0140-6736(12)61689-4.
Nadarzynski, T, Miles, O, Cowie, A and Ridge, D (2019) Acceptability of artificial intelligence (AI)-led chatbot services in healthcare: A mixed-methods study. Digital Health 5, 2055207619871808. https://doi.org/10.1177/2055207619871808.
Nyakhar, S and Wang, H (2025) Effectiveness of artificial intelligence chatbots on mental health & well-being in college students: A rapid systematic review. Frontiers in Psychiatry 16, 1621768. https://doi.org/10.3389/fpsyt.2025.1621768.
Ogilvie, L, Prescott, J and Carson, J (2022) The use of chatbots as supportive agents for people seeking help with substance use disorder: A systematic review. European Addiction Research 28(6), 405–418. https://doi.org/10.1159/000525959.
Oladeji, BD and Gureje, O (2016) Brain drain: A challenge to global mental health. BJPsych International 13(3), 61–63. https://doi.org/10.1192/s2056474000001240.
World Health Organization (2014) Mental Health: A State of Well-Being. WHO.
Otero-González, I, Pacheco-Lorenzo, MR, Fernández-Iglesias, MJ and Anido-Rifón, LE (2024) Conversational agents for depression screening: A systematic review. International Journal of Medical Informatics 181, 105272. https://doi.org/10.1016/j.ijmedinf.2023.105272.
Palanica, A, Flaschner, P, Thommandram, A, Li, M and Fossat, Y (2019) Physicians’ perceptions of chatbots in health care: Cross-sectional web-based survey. Journal of Medical Internet Research 21(4), e12887. https://doi.org/10.2196/12887.
Parr, JR, Dale, NJ, Shaffer, LM and Salt, AT (2010) Social communication difficulties and autism spectrum disorder in young children with optic nerve hypoplasia and/or septo-optic dysplasia. Developmental Medicine & Child Neurology 52(10), 917–921. https://doi.org/10.1111/j.1469-8749.2010.03664.x.
Parviainen, J and Rantala, J (2022) Chatbot breakthrough in the 2020s? An ethical reflection on the trend of automated consultations in health care. Medicine, Health Care and Philosophy 25(1), 61–71. https://doi.org/10.1007/s11019-021-10049-w.
Pozzi, G and De Proost, M (2025) Keeping an AI on the mental health of vulnerable populations: Reflections on the potential for participatory injustice. AI and Ethics 5(3), 2281–2291. https://doi.org/10.1007/s43681-024-00523-5.
Critical Appraisal Skills Programme (2018) CASP checklist: 10 questions to help you make sense of a systematic review.
Provoost, S, Lau, HM, Ruwaard, J and Riper, H (2017) Embodied conversational agents in clinical psychology: A scoping review. Journal of Medical Internet Research 19(5), e151. https://doi.org/10.2196/jmir.6553.
Qiu, L and Benbasat, I (2009) Evaluating anthropomorphic product recommendation agents: A social relationship perspective to designing information systems. Journal of Management Information Systems 25(4), 145–182. https://doi.org/10.2753/mis0742-1222250405.
Radziwill, N and Benton, M (2017) Evaluating quality of chatbots and intelligent conversational agents.
Ratnani, IJ, Vala, AU, Panchal, BN, Tiwari, DS, Karambelkar, SS, Sojitra, MG and Nagori, NN (2017) Association of social anxiety disorder with depression and quality of life among medical undergraduate students. Journal of Family Medicine and Primary Care 6(2), 243–248.
Selwyn, N (2004) Reconsidering political and popular understandings of the digital divide. New Media & Society 6(3), 341–362. https://doi.org/10.1177/1461444804042519.
Singh, J, Sillerud, B and Singh, A (2023) Artificial intelligence, chatbots and ChatGPT in healthcare—Narrative review of historical evolution, current application, and change management approach to increase adoption. Journal of Medical Artificial Intelligence 6.
Steel, Z, Marnane, C, Iranpour, C, Chey, T, Jackson, JW, Patel, V and Silove, D (2014) The global prevalence of common mental disorders: A systematic review and meta-analysis 1980-2013. International Journal of Epidemiology 43(2), 476–493. https://doi.org/10.1093/ije/dyu038.
Talebi Azadboni, T, Solat, F, Hematti, H and Rahmani, M (2025) Information security and confidentiality in health chatbots: A scoping review and development of a conceptual model. Digital Health 11, 20552076251406637. https://doi.org/10.1177/20552076251406637.
Thomas, KC, Ellis, AR, Konrad, TR, Holzer, CE and Morrissey, JP (2009) County-level estimates of mental health professional shortage in the United States. Psychiatric Services 60(10), 1323–1328. https://doi.org/10.1176/ps.2009.60.10.1323.
Vaidyam, AN, Linggonegoro, D and Torous, J (2021) Changes to the psychiatric chatbot landscape: A systematic review of conversational agents in serious mental illness: Changements du paysage psychiatrique des chatbots: Une revue systématique des agents conversationnels dans la maladie mentale sérieuse. Canadian Journal of Psychiatry 66(4), 339–348. https://doi.org/10.1177/0706743720966429.
Vaidyam, AN, Wisniewski, H, Halamka, JD, Kashavan, MS and Torous, JB (2019) Chatbots and conversational agents in mental health: A review of the psychiatric landscape. Canadian Journal of Psychiatry 64(7), 456–464. https://doi.org/10.1177/0706743719828977.
Valliammai, SV (2017) Sanative chatbot for health seekers. International Journal of Engineering and Computer Science 5(3).
Wang, PS, Aguilar-Gaxiola, S, Alonso, J, Angermeyer, MC, Borges, G, Bromet, EJ, Bruffaerts, R, de Girolamo, G, de Graaf, R, Gureje, O, Haro, JM, Karam, EG, Kessler, RC, Kovess, V, Lane, MC, Lee, S, Levinson, D, Ono, Y, Petukhova, M, Posada-Villa, J, Seedat, S and Wells, JE (2007) Use of mental health services for anxiety, mood, and substance disorders in 17 countries in the WHO world mental health surveys. Lancet 370(9590), 841–850. https://doi.org/10.1016/s0140-6736(07)61414-7.
Whiteford, HA, Ferrari, AJ, Degenhardt, L, Feigin, V and Vos, T (2015) The global burden of mental, neurological and substance use disorders: An analysis from the global burden of disease study 2010. PLoS One 10(2), e0116820. https://doi.org/10.1371/journal.pone.0116820.
Yang, Q, Cheung, K, Zhang, Y, Zhang, Y, Qin, J and Xie, YJ (2025) Conversational agents in physical and psychological symptom management: A systematic review of randomized controlled trials. International Journal of Nursing Studies 163, 104991. https://doi.org/10.1016/j.ijnurstu.2024.104991.
Table 1. The search strategy utilized to conduct the systematic review.

Figure 1. PRISMA diagram of the systematic review.

Figure 2. Results of the quality assessment of the final papers.

Table 2. Findings from the thematic analysis.

Figure 3. Overall features of mental health chatbots.

Supplementary material: Khosravi and Izadi supplementary material (File, 468.9 KB).