A few years ago, many products that used artificial intelligence were based on machine learning models that made predictions from large sets of example data; for example, such models have been used to predict whether an X-ray shows signs of a tumour.[1] More recently, generative artificial intelligence (GenAI) models have been developed, which create new data similar to the input data rather than making predictions. GenAI models that learn to produce new text from input text are known as large language models (LLMs).[2,3] Examples of LLMs include GPT-4 (ChatGPT) from OpenAI, LLaMA from Meta and Gemini from Google. LLMs can be used to write articles and reports, create chatbots, summarise documents, translate between languages and generate software code. Some GenAI models also generate images, video and audio. Adolescents are most likely to interact with text-based LLMs. The use of GenAI by teenagers has grown rapidly: nearly 80% of British teenagers reported using it in 2023,[4] as did 70% of US teenagers in 2024.[5] Teenagers most often use GenAI for homework help (53%) but also to fend off boredom (42%).[5]
Teenage use of GenAI
Adolescents use GenAI to write essays and reports, or to create videos for social sharing.[6] Many teens use GenAI without telling their parents or teachers: while 50% of children aged 12–18 have used GenAI for schoolwork, only 26% of parents are aware of such use.[7] Over-reliance on GenAI may also impair critical thinking and creativity.[8] Many teens readily believe GenAI output and treat it as if they were conversing with another human, because its human-like tone, aura of confidence and pattern-matching give the convincing appearance of understanding and responding to what was said.[9] Many teenagers may be unaware that LLMs can produce errors and misinformation, as well as coherent but inaccurate statements referred to as hallucinations, especially on topics for which only limited training data were available.[10]
While some children aged 10–12 can identify cultural, gender and racial biases in responses from GenAI,[11] children may not be sufficiently critical of GenAI actions and responses. Children may be unaware that GenAI can make basic errors, such as ChatGPT giving an incorrect list of the states in the USA,[12] and may not suspect that its coherent output can be a hallucination. GenAI may also produce harmful information that perpetuates historically biased stereotypes.[13] After exposure to the limitations and mistakes of GenAI, teenagers' attitudes may shift over time from overtrust to disillusionment.[14] Teenagers may also be unaware of the intentional misuse of GenAI, such as to create manipulative content or to impersonate individuals.[15] Some teenagers fear a loss of privacy due to unauthorised use of personal data in GenAI applications.[16]
Malicious use of GenAI LLMs
GenAI LLMs can be used to alter real images, fabricate convincing fake images and create deceptive videos.[17] GenAI chatbots can also generate audio from a text script in any language or voice, which can be combined with fake video. Together, these technologies allow very sophisticated fabrications, commonly called deepfakes, to be created.[17] When GenAI-generated images, video, audio and text are targeted at specific individuals for the purpose of harassment, this constitutes cyberbullying.[18] Catfishing occurs when an online perpetrator purposefully deceives a victim into believing there is an emotional or romantic connection between them.[19]
Use of GenAI apps for mental healthcare
The use of GenAI apps for healthcare, including wellness apps, can be risky. A GenAI app may fail to recognise signs of mental illness.[20] When such an app is used for mental healthcare, patients may not be aware that it is not a real person: it lacks the emotional foundation for a caring relationship and cannot provide professional therapy.[21] Some young people prefer human responses over GenAI responses for sensitive topics such as relationships and suicidal thoughts.[22] A wellness app may not provide the non-judgemental listening, rapport, understanding and empathy of a professional therapist.[23] A GenAI app may also give inappropriate answers that worsen a mental health crisis,[20] and its emotional expression and content may not translate appropriately across cultures.[23] Yet many young people are using GenAI as a mental health advisor, and parents may be unaware that their children are conferring with GenAI about their well-being and self-care.[9]
Limitations
The difficulties and challenges of teaching artificial intelligence concepts to youth, including high-school students, were not discussed.[24] The consequences of GenAI for the educational system were not discussed.[8] The growing need, and potential solutions, for incorporating AI into classrooms for teenagers were omitted.[25,26] The impact of the digital divide, that is, unequal access to artificial intelligence technologies, was omitted. Privacy issues related to GenAI storing personal data were not discussed, nor were the lack of regulation of GenAI and the potential for stress from technical overload when artificial intelligence is incorporated into work processes.[27] This paper discusses GenAI models in general, not the strengths or weaknesses of specific products. Cybersecurity and fraud related to GenAI were omitted, although GenAI may amplify existing risks and introduce new ones.[28] Ethical standards for the use of GenAI in psychiatry were not included.[29] The need for increased investigation of the potential harms to youth from GenAI was also omitted.
Implications
The use of GenAI by teenagers is increasing rapidly and may have long-term impacts.
Steps are needed to teach children about the limitations of GenAI and how to differentiate fact from fiction. There is a clear need to understand how GenAI may affect the behaviour and mental health of teenagers.
Author contributions
S.M. and T.G. wrote the initial draft. All authors edited, reviewed and approved the final manuscript.
Funding
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Declaration of interest
J.R.G., Director of the NIHR Oxford Health Biomedical Research Centre, is a member of the BJPsych editorial board and did not take part in the review or decision-making process of this paper.