
Increasing use of generative artificial intelligence by teenagers

Published online by Cambridge University Press:  26 January 2026

Scott Monteith*
Affiliation:
Michigan State University College of Human Medicine , Traverse City, Michigan, USA
Tasha Glenn
Affiliation:
ChronoRecord Association, Fullerton, California, USA
John R. Geddes
Affiliation:
Department of Psychiatry, University of Oxford, Warneford Hospital, Oxford, UK
Peter C. Whybrow
Affiliation:
Department of Psychiatry and Biobehavioral Sciences, Semel Institute for Neuroscience and Human Behavior, University of California Los Angeles (UCLA), Los Angeles, California, USA
Eric D. Achtyes
Affiliation:
Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA
Suzanne Huberty
Affiliation:
Department of Psychiatry, Western Michigan University Homer Stryker M.D. School of Medicine, Kalamazoo, Michigan, USA
Rita Bauer
Affiliation:
Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Dresden University of Technology, Dresden, Germany
Michael Bauer
Affiliation:
Department of Psychiatry and Psychotherapy, University Hospital Carl Gustav Carus Medical Faculty, Dresden University of Technology, Dresden, Germany
*Correspondence: Scott Monteith. Email: monteit2@msu.edu

Abstract

The use of generative artificial intelligence (GenAI) by teenagers is increasing rapidly. GenAI is a form of artificial intelligence that creates new text, images, video and audio using models trained on huge amounts of data. However, GenAI can also produce misinformation and biased, inappropriate and harmful outputs. Teenagers increasingly use GenAI in daily life, including in mental healthcare, and may not be aware of its limitations and risks. GenAI may also be used for malicious purposes with long-term, negative impacts on mental health. There is a need to increase awareness of how GenAI may harm the mental health of teenagers.

Information

Type
Feature
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press on behalf of Royal College of Psychiatrists

A few years ago, many products that used artificial intelligence were based on machine learning models that made predictions from large sets of example data. For example, machine learning models have been used to predict whether an X-ray shows signs of a tumour. 1 Recently, generative artificial intelligence (GenAI) models were developed, which create new data similar to the input data rather than making predictions. GenAI models that learn to produce new text from input text are called large language models (LLMs). 2,3 Examples of LLMs include GPT-4 (ChatGPT) from OpenAI, LLaMA from Meta and Gemini from Google. LLMs can be used to write articles and reports, create chatbots, summarise documents, translate between languages and generate software code. Some GenAI models also generate images, video and audio, although teenagers are most likely to interact with text-based LLMs. The use of GenAI by teenagers has grown rapidly: nearly 80% of British teenagers had used it by 2023 4 and 70% of US teenagers by 2024. 5 Teenagers most often use GenAI for homework help (53%) but also to fend off boredom (42%). 5

Teenage use of GenAI

Adolescents use GenAI to write essays and reports, or to create videos for social sharing. 6 Many teenagers use GenAI without telling their parents or teachers: while 50% of children aged 12–18 have used GenAI for school, only 26% of parents are aware of such use. 7 Over-reliance on GenAI may also have an impact on critical thinking and creativity. 8 Many teenagers readily believe GenAI output and treat it as if they were conversing with another human, because the human-like tone, aura of confidence and pattern-matching give a convincing appearance of understanding and responding to what was said. 9 Many teenagers may be unaware that LLMs can produce errors and misinformation, including coherent but inaccurate statements referred to as hallucinations, especially on topics for which only limited training data were available. 10

While some children aged 10–12 can identify cultural, gender and racial biases in responses from GenAI, 11 children may not be sufficiently critical of GenAI actions and responses. They may be unaware that GenAI can make basic errors, such as ChatGPT giving an incorrect list of the states in the USA, 12 or that it can hallucinate. GenAI may also produce harmful information that perpetuates historically biased stereotypes. 13 After exposure to the limitations and mistakes of GenAI, teenagers' attitudes may shift over time from overtrust to disillusionment. 14 Teenagers may also be unaware of the intentional misuse of GenAI, such as to create manipulative content or impersonate individuals. 15 Some teenagers fear a loss of privacy due to unauthorised use of personal data in GenAI applications. 16

Malicious use of GenAI LLMs

GenAI LLMs can be used to alter real images to create fake images, and to create videos designed to deceive. 17 GenAI chatbots can also generate audio from a text script in any language or voice. Together, this technology allows very sophisticated fake products, commonly called deepfakes, to be created. 17 When GenAI-generated images, video, audio and text are targeted at specific individuals for the purpose of harassment, this constitutes cyberbullying. 18 GenAI may also facilitate catfishing, in which an online perpetrator purposefully deceives a victim into believing there is an emotional or romantic connection between them. 19

Use of GenAI apps for mental healthcare

The use of GenAI apps for healthcare, or wellness apps, can be risky. A GenAI app may not be able to recognise signs of mental illness. 20 When an app is used for mental healthcare, patients may not be aware that it is not a real person, lacks the emotional foundation for a caring relationship and is not capable of providing professional therapy. 21 Some young people prefer human responses to GenAI responses for sensitive topics such as relationships and suicidal thoughts. 22 Unlike a professional therapist, a wellness app may not provide non-judgemental listening, rapport, understanding and empathy. 23 A GenAI app may give answers that are inappropriate and worsen a mental health crisis. 20 Additionally, the emotional expression and content of a GenAI app may not translate appropriately across cultures. 23 Yet many young people are using GenAI as a mental health advisor, and parents may be unaware that their children are conferring with GenAI about their well-being and self-care. 9

Limitations

The difficulties and challenges in teaching artificial intelligence concepts to youth, including high-school students, were not discussed. 24 The consequences of GenAI for the educational system were not discussed. 8 The growing need and potential solutions for incorporating AI in classrooms for teenagers were omitted. 25,26 The impact of the digital divide, or unequal access to artificial intelligence technologies, was omitted. Privacy issues related to GenAI storing personal data were not discussed. The lack of regulation of GenAI, and the potential for stress from technical overload when incorporating artificial intelligence in work processes, were not discussed. 27 This paper discusses GenAI models in general, not the strengths or weaknesses of specific products. Cybersecurity and fraud related to GenAI were omitted, although GenAI may increase existing risks and introduce new ones. 28 Ethical standards for the use of GenAI in psychiatry were not included. 29 The need for increased investigation of the potential harms to youth from GenAI was omitted.

Implications

The use of GenAI by teenagers is increasing rapidly and may have long-term impacts.

Steps are needed to teach children about the limitations of GenAI and how to differentiate fact from fiction. There is a clear need to understand how GenAI may impact the behaviour and mental health status of teenagers.

Author contributions

S.M. and T.G. wrote the initial draft. All authors edited, reviewed and approved the final manuscript.

Funding

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Declaration of interest

J.R.G., Director of the NIHR Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.

References

1. Zewe A. Explained: Generative AI. MIT News, 2023 (https://news.mit.edu/2023/explained-generative-ai-1109).
2. Kalota F. A primer on generative artificial intelligence. Educ Sci 2024; 14: 172.
3. Melnyk O, Ismail A, Ghorashi NS, Heekin M, Javan R. Generative artificial intelligence terminology: a primer for clinicians and medical researchers. Cureus 2023; 15: e49890.
4. Thomas D. Nearly 80% of British Teenagers Have Used Generative AI. Financial Times, 2023 (https://www.ft.com/content/6054706b-b339-48a4-a6b4-d64b0bfd346f).
5. Common Sense Media. New Report Shows Students Are Embracing Artificial Intelligence Despite Lack of Parent Awareness and School Guidance. Common Sense Media, 2024 (https://www.commonsensemedia.org/press-releases/new-report-shows-students-are-embracing-artificial-intelligence-despite-lack-of-parent-awareness-and).
6. Munzer T. How Will Artificial Intelligence (AI) Affect Children? HealthyChildren.org, 2024 (https://www.healthychildren.org/English/family-life/Media/Pages/how-will-artificial-intelligence-AI-affect-children.aspx).
7. Common Sense Media. New Poll Finds Parents Lag behind Kids on AI and Want Rules and Reliable Information to Help Them. Common Sense Media, 2023 (https://www.commonsensemedia.org/press-releases/new-poll-finds-parents-lag-behind-kids-on-ai).
8. Yan L, Greiff S, Teuber Z, Gasevic D. Promises and challenges of generative artificial intelligence for human learning. Nat Hum Behav 2024; 8: 1839–50.
9. Eliot L. Generative AI is Going to Shape the Mental Health Status of Our Youths for Generations to Come. Forbes, 2024 (https://www.forbes.com/sites/lanceeliot/2024/04/16/generative-ai-is-going-to-shape-the-mental-health-status-of-our-youths-for-generations-to-come/).
10. Stokel-Walker C, Van Noorden R. The promise and peril of generative AI. Nature 2023; 614: 214–6.
11. Shrivastava V, Sharma S, Chakraborty D, Kinnula M. Is a sunny day bright and cheerful or hot and uncomfortable? Young children's exploration of ChatGPT. Proceedings of the 13th Nordic Conference on Human–Computer Interaction (Uppsala, Sweden, 13–16 Oct 2024). Association for Computing Machinery, 2024.
12. Marcus G. ChatGPT in Shambles. Marcus on AI, 2025 (https://garymarcus.substack.com/p/chatgpt-in-shambles).
13. Vassel FM, Shieh E, Sugimoto CR, Monroe-White T. The psychosocial impacts of generative AI harms. Proceedings of the AAAI Symposium Series (Stanford, California, 25–27 Mar 2024). The AAAI Press, 2024.
14. Solyst J, Yang E, Xie S, Hammer J, Ogan A, Eslami M. Children's overtrust and shifting perspectives of generative AI. ArXiv [Preprint] 2024. Available from: https://arxiv.org/abs/2404.14511 [cited 22 Apr 2024].
15. Salah M, Abdelfattah F, Al Halbusi H. The good, the bad, and the GPT: reviewing the impact of generative artificial intelligence on psychology. Curr Opin Psychol 2024; 21: 101872.
16. Yu Y, Sharma T, Hu M, Wang J, Wang Y. Exploring parent-child perceptions on safety in generative AI: concerns, mitigation strategies, and design implications. ArXiv [Preprint] 2024. Available from: https://arxiv.org/abs/2406.10461 [cited 12 May 2025].
17. Mitra A, Mohanty SP, Kougianos E. The world of generative AI: deepfakes and large language models. ArXiv [Preprint] 2024. Available from: https://arxiv.org/abs/2402.04373 [cited 6 Feb 2024].
18. Ferrara E. GenAI against humanity: nefarious applications of generative artificial intelligence and large language models. J Comput Soc Sci 2024; 22: 121.
19. Wang F, Topalli V. The cyber-industrialization of catfishing and romance fraud. Comput Human Behav 2024; 154: 108133.
20. De Freitas J, Uğuralp AK, Oğuz‐Uğuralp Z, Puntoni S. Chatbots and mental health: insights into the safety of generative AI. J Consum Psychol 2023; 34: 481–91.
21. De Freitas J, Cohen IG. The health risks of generative AI-based wellness apps. Nat Med 2024; 29: 17.
22. Young J, Jawara LM, Nguyen DN, Daly B, Huh-Yoo J, Razi A. The role of AI in peer support for young people: a study of preferences for human- and AI-generated responses. Proceedings of the CHI Conference on Human Factors in Computing Systems (Honolulu, Hawaii, 11–16 May 2024). Association for Computing Machinery, 2024.
23. Sezgin E, McKay I. Behavioral health and generative AI: a perspective on future of therapies and patient care. NPJ Mental Health Res 2024; 3: 25.
24. Greenwald E, Leitner M, Wang N. Learning artificial intelligence: insights into how youth encounter and build understanding of AI concepts. Proceedings of the AAAI Conference on Artificial Intelligence (virtual conference, 2–9 Feb 2021). The AAAI Press, 2021.
25. Forsyth S, Dalton B, Foster EH, Walsh B, Smilack J, Yeh T. Imagine a more ethical AI: using stories to develop teens' awareness and understanding of artificial intelligence and its societal impacts. 2021 Conference on Research in Equitable and Sustained Participation in Engineering, Computing, and Technology (RESPECT) (virtual conference, 23–27 May 2021). IEEE, 2021.
26. Macar U, Castleman B, Mauchly N, Jiang M, Aouissi A, Aouissi S, et al. Teenagers and artificial intelligence: bootcamp experience and lessons learned. ArXiv [Preprint] 2023. Available from: https://arxiv.org/abs/2312.10067 [cited 27 June 2025].
27. Wach K, Duong CD, Ejdys J, Kazlauskaitė R, Korzynski P, Mazurek G, et al. The dark side of generative artificial intelligence: a critical analysis of controversies and risks of ChatGPT. Entrepren Busin Econ Rev 2023; 11: 7–30.
28. Bullwinkel B, Kumar RSS. 3 Takeaways from Red Teaming 100 Generative AI Products. Microsoft Security, 2025 (https://www.microsoft.com/en-us/security/blog/2025/01/13/3-takeaways-from-red-teaming-100-generative-ai-products/).
29. King DR, Nanda G, Stoddard J, Dempsey A, Hergert S, Shore JH, et al. An introduction to generative artificial intelligence in mental health care: considerations and guidance. Curr Psychiatry Rep 2023; 25: 839–46.
