
Hazards ahead?

Invited commentary on: assessing professional and clinical competence

Published online by Cambridge University Press: 02 January 2018


Abstract

In the most radical (and politically driven) changes to the National Health Service since it was founded, the training and assessment of doctors will focus more on what they do than on what they know. The UK's lack of tools for assessing doctors' performance in the workplace has led the Postgraduate Medical Education and Training Board to turn to US workplace assessment tools developed for general medical practice. These have been neither designed nor evaluated for psychiatric practice, nor do they allow for independent evaluation. It remains unclear how workplace assessment in psychiatric training will tie in with national examinations such as the Royal College of Psychiatrists' membership examination (the MRCPsych); nor is it clear whether service users will have any say in a matter as important as the training of their doctors.

Copyright © The Royal College of Psychiatrists 2006 

Medical education has finally got political, a fact that informs Brown & Doshi's analysis but is never named (Brown & Doshi, 2006, this issue). What they omit to mention is that doctors and the medical Royal Colleges are positioning themselves to deal with some of the most radical changes since the National Health Service (NHS) began, involving the restructuring of what we expect of doctors, the redefining of boundaries with other healthcare staff and some redistribution of power from the Royal Colleges to the newly created Postgraduate Medical Education and Training Board (PMETB). Changing the medical profession has been on the political agenda at least since the Thatcher era, but it took a series of high-profile cases of medical bad behaviour (Bristol, Alder Hey, Shipman, etc.) to create the head of steam necessary to bring it about. The most significant change is the long overdue shift to defining doctors in terms of their competencies or skills. These will now become the outcome measure of training, supported by relevant curricula and assessment programmes. Incidentally, in this we seem a little behind Canada and the USA (Scheiber et al, 2003), but ahead of Europe and Australia. Hence, the actual performance of doctors now lies at the heart of the new training arrangements (Postgraduate Medical Education and Training Board, 2004) – something that many of us would say has always been the case, but sometimes in spite of, rather than because of, the curricula and assessment systems to date.

A special specialty

Despite being an enthusiast for many of the changes, I have reservations that I will push further than Brown & Doshi were inclined to. To start with, how appropriate is it to assume that we can adopt American assessment tools designed for non-psychiatric medical situations and never evaluated in UK psychiatry? Arguably, for example, specialist psychiatric assessment is less dominated by the differential diagnostic and treatment choices that characterise medical assessment, and is more focused on the fine-grained interpretation of complex biopsychosocial data, weaving this into case formulations that inform treatment and care management decisions. This process of critical analysis and judgement cannot be readily assessed using the mini-CEX (which focuses on what the fly on the wall sees of the consultation) or the case-based discussion (which allows the trainee to cram up on the case notes that the assessor selects). Common sense tells us that competency in case formulation in response to newly presenting clinical situations is one of our most fundamental skills. Yet existing workplace assessments seem to miss it out, thus undermining the validity of the process when applied to psychiatry. A redesigned and properly evaluated psychiatric mini-CEX, with carefully thought-through operational criteria for the skills being evaluated, might get round this problem; however, time is running out, with ‘run-through’ specialist training due to go live in August 2007.

Observed and observer

Much has been made of workplace assessment measuring performance, as though this alleged gold standard were actually possible. But as we know from second-order cybernetics (Von Foerster, 1981), the very presence of the observer changes what is observed. Assessments in the workplace may give the illusion of testing what goes on all the time, but they are open to much the same cramming and ‘special performance’ factors as existing OSCE or individual case assessment exams. True, the clinical context is more everyday, but to attribute a gold-standard authenticity to workplace assessment seems, in part at least, to be an act of political spin.

Brown & Doshi stress the benefits of using multiple assessments and assessment methods in order, among other things, to diminish sampling and interrater reliability problems. They seem to accept as a given that, even if individual assessments have problems of reliability, the assessment programme taken as a whole will make those problems go away. But an assessment programme can only be as good as its parts, and the current versions of the mini-CEX and case-based discussion used in the foundation years have weaknesses. Examples include the absence of any written text required from the assessor to justify rating scores, the vagueness of the scoring criteria, the absence of a second assessor or observer, and the fact that assessments are done mainly by people well known to the trainee. This degree of familiarity may be fine for ongoing formative assessments used as a vehicle for learning and development, but it lacks sufficient objectivity to carry much weight in any publicly accountable summative assessment. The lesson here seems to be: either tighten up the reliability of workplace assessments, for example by adding an independent assessor, or complement them with centrally organised tests of clinical skill, so that there are sufficiently robust checks and balances in place to reassure everyone.

What role for national exams?

Foundation training began in earnest in August 2005, and to some extent is a rehearsal for what specialist training could look like 2 years down the line. This is particularly so in relation to workplace assessments, which PMETB sees as a core part of the assessment process throughout medical training. However, it is important to reflect on the differences between foundation and specialist training. Foundation training is generic in nature and intent, and replaces a stage of a doctor's career that was never previously subject to any final assessment. That has now changed, inasmuch as foundation training is topped off by a summative assessment comprising the successful completion of the 2-year programme of formative workplace assessments. At no stage is there any attempt to make an independent assessment free from potential bias.

By contrast, independent central assessment in the form of the Royal College of Psychiatrists' membership examination (the MRCPsych) has always played a key role in the first half of specialist psychiatric training, with local ongoing assessment by trainers being a rather poorly regulated and less influential affair. But as Brown & Doshi say, the way forward is undoubtedly to embrace workplace assessments (PMETB gives us no choice), although in so doing we create a dilemma that needs to be addressed: where do these changes leave national examinations? What should their purpose now be, and what can they do that workplace assessments cannot? Clearly, national examinations will need to continue to play an important role in testing knowledge, knowledge application and critical thinking. But given the reliability problems associated with workplace assessment, any national exam will need to retain a hefty portion of clinical skills testing. Just how great a portion is a matter for interesting debate, but it must certainly be enough to reassure a demanding public, a predatory press and a College highly motivated to maintain high standards in a changing world where clinical governance, accountability and legal defensibility matter.

Has anyone asked the patients?

One final area not mentioned by Brown & Doshi is the patient and carer perspective (Department of Health, 1997, 1999; British Medical Association, 2000). What aspects of the new ways of assessing clinical competency would most reassure service users? What would most concern them? What might they have to say about the balance between workplace and national assessment? Would they think that workplace assessment was independent enough to protect their interests, rather than the interests of the doctors concerned? And of course, where would they see the patient's voice in giving feedback to the trainee on their performance? An odd omission in the new user-focused NHS.

Declaration of interest

None.

References

British Medical Association (2000) Involving Patients in Quality Improvement Activities: An Introduction for Clinicians. London: BMA.
Brown, N. & Doshi, M. (2006) Assessing professional and clinical competence: the way forward. Advances in Psychiatric Treatment, 12, 81–89.
Department of Health (1997) The New NHS: Modern, Dependable (Cm 3807). London: TSO.
Department of Health (1999) National Service Framework for Mental Health. London: Department of Health.
Postgraduate Medical Education and Training Board (2004) Principles of an Assessment System for Postgraduate Medical Training. London: Postgraduate Medical Education and Training Board.
Scheiber, S. C., Kramer, T. A. M. & Adamowski, S. E. (eds) (2003) Core Competencies for Psychiatric Practice: A Report of the American Board of Psychiatry and Neurology, Inc. Washington, DC: American Psychiatric Publishing.
Von Foerster, H. (1981) Observing Systems. Seaside, CA: Intersystems Publications.