
Development and Evaluation of an AI-Powered MRCPsych CASC Simulator for Exam Preparation

Published online by Cambridge University Press:  20 June 2025

Sirous Golchinheydari*
Affiliation:
West London NHS Trust, London, United Kingdom

Abstract


Aims: Preparation for the MRCPsych CASC exam can present unique challenges for psychiatry trainees, including limited access to structured practice, real-time feedback and standardized patient interactions. This project aimed to develop the MRCPsych CASC Simulator (MCS), a custom AI-powered tool designed to enhance exam preparation by providing interactive clinical simulations, structured feedback and objective performance assessment.

Methods: The simulator incorporated three core roles – Doctor (candidate), Patient (actor), and Examiner – to create realistic CASC exam stations. MCS was trained in the functional aspects of the CASC and the requirements of both the doctor and patient roles, along with the psychiatric expertise, knowledge and resources required. To test performance, we used validated assessment tools – the examiner’s marking sheet for the CASC, the Simulated Patient Rating Scale (SPRS), the Objective Structured Clinical Examination (OSCE) and the Communication Assessment Tool (CAT) – to ensure objective and standardized evaluation. The simulator was tested in two roles, doctor and patient, by two different human assessors. The interactions were recorded and replayed for each assessment. Five stations were completed for each role, drawn from various psychiatric specialties. These scores were used to compare MCS with stock ChatGPT and to gain an overall understanding of MCS’ performance. Additionally, assessors asked MCS for immediate feedback on their questioning style, response phrasing, diagnostic accuracy and communication skills to gauge MCS’ effectiveness in providing feedback.

Results: The assessors found that MCS was competent in psychiatric assessments and patient simulation. MCS provided comprehensive learning support, including mnemonics, diagnostic frameworks and summaries, which facilitated differential diagnosis, clinical reasoning and memorisation. MCS provided real-time performance tracking, allowing potential candidates to refine their skills through iterative practice and targeted improvements.

MCS proved to be a significantly more effective tool for CASC practice than stock ChatGPT, scoring higher in both doctor and patient roles: it outperformed stock ChatGPT by an average of 58% in doctor roles and 25% in patient roles. Overall, the assessors found MCS to be a vital tool in CASC preparation.

Conclusion: MCS offers a novel and effective approach to psychiatric exam training by providing structured, objective and interactive practice opportunities. Its ability to provide tutoring, simulate realistic patient interactions and offer personalized feedback enhances clinical reasoning, communication skills and exam preparation.

Type
Research
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Royal College of Psychiatrists

Footnotes

Abstracts were reviewed by the RCPsych Academic Faculty rather than by the standard BJPsych Open peer review process and should not be quoted as peer-reviewed by BJPsych Open in any subsequent publication.
