
Trainees’ experience of cognitive behavioural therapy training: a mixed methods systematic review

Published online by Cambridge University Press:  22 February 2018

Hannah Jenkins*
Affiliation:
11th Floor, Tower Building, School of Psychology, 70 Park Place, Cardiff University, Cardiff CF10 3AT
Louise Waddington
Affiliation:
11th Floor, Tower Building, School of Psychology, 70 Park Place, Cardiff University, Cardiff CF10 3AT
Nicola Thomas
Affiliation:
Hywel Dda University Health Board, Admin Suite, Bro Cerwyn Centre, Fishguard Road, Haverfordwest, Pembrokeshire SA61 2PG
Dougal Julian Hare
Affiliation:
11th Floor, Tower Building, School of Psychology, 70 Park Place, Cardiff University, Cardiff CF10 3AT
*
Author for correspondence: Hannah Jenkins, Swn-y-Gwynt Resource Centre, Tirydail Lane, Ammanford SA18 3AS (email: Hannah.Jenkins@wales.nhs.uk).

Abstract

Research in the field of cognitive behavioural therapy (CBT) has primarily focused on the acquisition and development of skills and competence. Little is known regarding the experience of training from trainees’ perspectives. This systematic review aimed to collate and critique the research conducted on the experience of CBT training. Four electronic databases were searched for published studies reporting on the experience of CBT training. Thirteen articles were selected based on pre-determined inclusion and exclusion criteria and were assessed for quality using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD; Sirriyeh et al., 2012). Due to the lack of consistency in the study designs and outcome measures used, a narrative synthesis of the findings was conducted. Findings were categorized within three themes for synthesis: ‘experience of benefit’, ‘internal processes of engagement’ and ‘external influences on engagement’. Overall, this review was able to draw conclusions regarding the experiences of aspects of CBT training from relatively good quality research. However, the review also highlights the lack of studies exploring specific hypotheses regarding the experience of training.

Type
Review Paper
Copyright
Copyright © British Association for Behavioural and Cognitive Psychotherapies 2018 

Introduction

Research into the efficacy of cognitive behavioural therapy (CBT) as an evidence-based treatment has increased in recent years, leading to a demand for high-quality training (McManus et al., 2010). Subsequently, investigations into CBT training have focused on whether training succeeds in increasing a trainee's skill and competence in CBT. In 2010, Rakovshik and McManus carried out a systematic review exploring the impact of training on CBT skill and competence; Beidas et al. (2012) conducted a randomized controlled trial (RCT) to explore which form of training leads to the greatest increase in CBT skill and competence; and Muse and McManus (2013) carried out a systematic review of approaches to assessing CBT skill and competence. Additionally, Bennett-Levy (2006) proposed a theoretical model of the process of acquiring skill, to explore how therapists become competent in CBT. To date, the focus within the CBT training literature has been on the acquisition of skill and competence, and little attention has been paid to the experience of training for the trainee.

Rakovshik and McManus (2010) reviewed the available research to establish an evidence base for CBT training. They reported that more extensive training leads to increased therapist competence, which was positively related to patient outcomes. By contrast, stand-alone workshops and CBT manuals do not significantly improve therapists’ skills or patient outcomes. The review also considered the implications of long, costly training programmes and concluded that, as training is expensive, wider dissemination of lower-level CBT skills to staff who had not undertaken a training programme may be necessary to maximize the financial investment in training. They concluded that more scientific studies focusing solely on CBT training are needed, as much of the available research is obtained as a ‘by-product’ of studies exploring the dissemination and treatment of CBT.

As an example of a study exploring the impact of training on CBT skill and competence, McManus et al. (2010) examined 278 trainees before and after they took a postgraduate diploma in CBT. CBT skill and competence were evaluated on the basis of written assessments marked by course staff and scores on the supervisor-rated ‘Cognitive Therapy Scale’ (CTS; Young and Beck, 1980, 1988). The study reported a clear increase in CBT skill and competence as a result of diploma-level training.

Other research has focused on the impact of the mode of training on therapists’ acquisition of skills and competence. In a randomized trial, Beidas et al. (2012) examined the effectiveness of 1-day workshops training staff to offer CBT to young people with anxiety. Staff were randomly assigned to one of three training modalities: routine training, computer training or augmented training (emphasizing active learning), and their skill, adherence and knowledge were evaluated at a 3-month follow-up. Results indicated that the 1-day workshops, regardless of modality, produced only limited improvement in therapist adherence and did not result in therapist behaviour change; however, the number of further consultation hours after training significantly predicted higher therapist adherence and skill at 3-month follow-up. Interestingly, although trainee experience was not a key focus of the study, participants reported greater satisfaction with the augmented training modality, which emphasized active learning. This study suggests, in line with the conclusions of the review by Rakovshik and McManus (2010), that 1-day workshops are not sufficient to change therapist behaviour: further consultation and supervised practice are required post-training, and trainees prefer ‘active training’ modalities.

Muse and McManus (2013) conducted a systematic review of approaches to assessing CBT competence. A total of 10 assessment methods were identified across four levels: knowledge-based assessments (such as essays and multiple-choice questionnaires), assessments of practical understanding (including case reports and short-answer clinical vignettes), assessments of the practical application of knowledge/skill (such as role-plays) and clinical practice assessments (including assessor-rated treatment sessions, supervisory assessments, therapist self-assessment and patient outcomes). The review described the strengths and limitations of these methods and drew tentative conclusions, suggesting that the most robust measure of competence comes from assessments based on direct observation of treatment sessions.

In 2006, Bennett-Levy described a cognitive model to provide a deeper understanding of how therapists acquire their skills, and much of the subsequently published literature on CBT training draws on this model in its study design or findings. The Declarative, Procedural and Reflective (DPR) model visually represents the process of acquiring knowledge through training via three interacting systems (see Fig. 1). The declarative knowledge system describes factual knowledge gained through reading and attending lectures. The procedural knowledge system describes knowledge (both declarative and implicit) that leads to the application of skills (i.e. knowing what to do, how to do it and how all the information fits together). Not all procedural knowledge is within the therapist's direct awareness; it is often developed through experience, with the assistance of the third system, the reflective system. Often referred to as the ‘engine’ of the other two systems, the reflective system is activated for complex cognitive tasks such as problem-solving and drawing on previous knowledge and experience to guide future practice and perspectives. This system enables the translation of declarative knowledge into procedural knowledge, and vice versa. The model provides a useful framework for understanding the mechanisms of acquiring skill; however, given its theoretical focus, it does not address the trainee's perspective and experience of training.

Figure 1. The Declarative-Procedural and Reflective (DPR) Model of therapist skill acquisition (Bennett-Levy, 2006).

Whilst the literature on CBT training is growing, the focus has been on the acquisition of skill and competence, and a summary of what is currently known about trainees’ experiences of CBT training is not available. This systematic review provides a review and critique of the available literature on the experience of CBT training. The term ‘experience’ was interpreted broadly to capture the variation within the literature. For this review, ‘experience of training’ is defined as trainees’ perceptions and perspectives on the impact of their training in CBT. This includes elements such as their engagement, their perception of the quality of training and its personal impact. Understanding the experience of CBT training is important at a workforce level, with large numbers of National Health Service (NHS) staff undertaking CBT training. It is also important in facilitating a helpful response to individual differences in the experience of training.

Method

Search strategy

Electronic searches of four databases (PsycINFO, Medline, Embase and Web of Science) were conducted in March 2017 and reviewed in April 2017. The first search terms used were ‘cognitive behavioural therapy’, ‘cognitive behaviour therapy’ and ‘CBT’, combined with the Boolean operator ‘OR’. Both the UK and US spellings of ‘behaviour(al)’ were used for the first search term. The second search term was ‘training’ and the third was ‘experience’. All three searches were combined with the Boolean operator ‘AND’ to produce the completed searches.
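To illustrate how the searches were combined, the short Python sketch below reconstructs an equivalent Boolean query. It is a minimal sketch only: the exact quoting, truncation and field-code syntax differs between PsycINFO, Medline, Embase and Web of Science, so the assembled string is an assumption rather than the authors’ verbatim search syntax.

    # Illustrative reconstruction of the combined search (not the authors' verbatim syntax).
    cbt_terms = [
        '"cognitive behavioural therapy"',
        '"cognitive behaviour therapy"',
        '"cognitive behavioral therapy"',   # US spelling
        '"cognitive behavior therapy"',     # US spelling
        'CBT',
    ]
    search_1 = '(' + ' OR '.join(cbt_terms) + ')'   # first search: CBT terms combined with OR
    search_2 = 'training'                           # second search term
    search_3 = 'experience'                         # third search term
    combined = ' AND '.join([search_1, search_2, search_3])
    print(combined)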

Inclusion/exclusion criteria

Studies were included if they were published after 1990 in a peer-reviewed journal and focused on any aspect of CBT training from trainees’ perspectives. Studies published before 1990 were excluded, as it was agreed that CBT training was more formally established from this time (mainly due to the growth of the British Association for Behavioural and Cognitive Psychotherapies [BABCP] and the implementation of its accreditation standards and procedures), and any literature prior to this date may not be representative of CBT training in more recent years. Studies not available in English were excluded, along with grey literature. Studies that solely explored the acquisition of competence, knowledge and skill during CBT training were deemed not relevant to this review of the wider experience of training and were excluded.

Search outcome

The systematic review was undertaken following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidance (Moher et al., 2009) (see Fig. 2). The four searches yielded a total of 1494 records. Removal of duplicates left 1075 titles and abstracts, of which 1043 were deemed irrelevant, leaving 32 full-text articles to assess for eligibility. From the full-text screening, 19 articles were excluded (see Fig. 2 for reasons), leaving 13 studies for quality assessment.

Figure 2. PRISMA search flow diagram.

Quality assessment

The 13 identified studies were reviewed for their quality using the Quality Assessment Tool for Studies with Diverse Designs (QATSDD; Sirriyeh et al., 2012). The tool consists of 16 criteria, of which 14 are applicable to quantitative studies and 14 to qualitative studies. Each study was given a score (0, ‘not at all’; 1, ‘very slightly’; 2, ‘moderately’; 3, ‘complete’) for each relevant criterion. Total scores were then converted into a percentage to produce a descriptive quality assessment rating (75–100% = ‘high quality’, 50–74% = ‘moderate quality’ and 0–49% = ‘poor quality’). All papers were quality assessed by the first author and by an independent assessor not affiliated with this research. Whilst there were small differences in individual quality scores between assessors, all papers were rated as being in the same category. The quality ratings ranged from 33 to 88%, with an average rating of 75%. Studies were not excluded based on their quality assessment rating; however, ratings did inform the analysis and interpretation of the validity and reliability of the data.
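As a minimal illustration of the scoring arithmetic described above (a sketch only, not the authors’ scoring procedure; the example criterion scores are invented), a study's per-criterion ratings are summed, expressed as a percentage of the maximum possible score for the criteria that applied, and banded into the three quality categories:

    # Sketch of the QATSDD scoring arithmetic (illustrative only; scores below are invented).
    def qatsdd_percentage(criterion_scores):
        """Sum per-criterion scores (each 0-3) and express them as a
        percentage of the maximum possible for the applicable criteria."""
        max_possible = 3 * len(criterion_scores)
        return 100 * sum(criterion_scores) / max_possible

    def quality_category(percentage):
        if percentage >= 75:
            return 'high quality'
        elif percentage >= 50:
            return 'moderate quality'
        return 'poor quality'

    # Example: a quantitative study rated on its 14 applicable criteria.
    scores = [3, 2, 2, 3, 1, 2, 3, 2, 2, 1, 3, 2, 2, 3]
    pct = qatsdd_percentage(scores)
    print(f'{pct:.0f}% -> {quality_category(pct)}')   # e.g. 74% -> moderate quality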

Data extraction and synthesis

Due to the variation in methodologies and the diversity of outcome measures used, a statistical meta-synthesis could not be performed. Therefore, as recommended by the Centre for Reviews and Dissemination (2009), a narrative synthesis technique was employed to extract the main themes from the data. A summary table of the studies reviewed, including their quality assessment ratings, was created (Table 1).

Table 1. Summary of papers included in the systematic review

Results

Description of studies

Of the 13 studies included in the review, three utilized quantitative methodology, five utilized qualitative methodology and five utilized both quantitative and qualitative data. Studies were conducted in the UK (n = 4), Australia and New Zealand (n = 5), Ireland (n = 1), Sweden (n = 1), Germany (n = 1) and Denmark (n = 1). The focus of the reviewed studies appeared to fit within three themes: experience of benefit, internal processes of engagement, and external influences on engagement. Each of these themes is discussed with reference to the data and the limitations of the relevant reviewed studies.

1. Experience of benefit during training

Five of the studies included in this systematic review highlighted the experiences of benefit that trainees reported throughout CBT training. Two of the studies were rated as ‘high quality’ (Bennett-Levy and Beedie, 2007; Bennett-Levy and Lee, 2014), two were rated as ‘moderate quality’ (Rees et al., 2009; Schmidt and Foli-Andersen, 2017) and one as ‘poor quality’ (Foulkes, 2003). The overall quality rating for these studies ranged from 33 to 86%. The sample sizes in these studies ranged from n = 24 to n = 94.

Firstly, Foulkes (2003) surveyed 94 psychiatric trainees about their satisfaction with the quality and quantity of their psychotherapy training, using quantitative and qualitative data. Before formal data collection was carried out, a pilot study was conducted; however, it was not clear whether any changes to the design were made following this. The survey explored five different modalities of therapy, although only the responses relating to CBT were considered for this review. Only 31% of trainees felt satisfied with the quality of CBT training they received and 90% reported that not enough time was spent on CBT training. Whilst this study provided some basic information on these trainees’ experiences of CBT training, the results should be interpreted with caution. The study was rated as ‘poor’ quality, given its lack of detail across many aspects of its design. The study suggests that not having enough CBT training can affect individuals’ experience of its benefit. It is noted that the CBT training explored in this study was not a stand-alone CBT course, but rather a component of general psychiatric training. This may explain the dissatisfaction with the quantity of CBT training received.

Exploring the experience of a one-year CBT training programme, Bennett-Levy and Beedie (2007) examined the self-rated competence of 24 trainees to study what influences trainees’ self-perception of competence. Trainees completed the ‘Cognitive Therapy Self-Rating Scale’ (CTSS) on six occasions throughout training. Whilst the CTSS was a modified, self-assessment version of the CTS, it had not been tested for reliability and validity. The study attempted to overcome this by establishing statistical consistency with supervisors’ ratings of competence, although a formal statistical assessment of the CTSS was not available. Within the study, if scores had increased by two points or more, or decreased by one point or more, trainees completed a section headed ‘Beliefs regarding change in ratings over time’, providing a qualitative description of the reported changes. A detailed explanation and rationale for the choice of these ‘increase’ and ‘decrease’ thresholds was provided. Quantitative data were analysed (using a t-test and repeated measures analysis of variance, paired with post hoc tests) and a statistically significant increase in the self-ratings of competence was shown between the first and sixth ratings of the CTSS. Qualitative responses regarding the changes in ratings were analysed using grounded theory methodology to develop a model incorporating the influence of learning opportunities, cognitive impact and emotional state on self-perception of competence. The results indicated that trainees’ self-perception of competence during CBT training was influenced by new learning opportunities, self-reflection on performance, increased awareness of the standards required of a cognitive therapist and emotional state. This suggests that if these aspects are addressed, trainees can experience benefit from CBT training, which also contributes to their understanding of their own competence. Overall, the study provided a good fit between the research question and its method of data collection and analysis.

Exploring the use of technology in training, Rees et al. (2009) conducted a study of the experience of CBT training delivered via videoconferencing, measuring knowledge after the training. Quantitative data were obtained from the ‘Cognitive-Behavioural Therapy Knowledge Test’ (CBT-KT) and the ‘Videoconferencing Satisfaction Questionnaire’ (VSQ-7). Qualitative responses were obtained from a short group interview post-training and analysed using thematic analysis. Results described a positive experience of the training, with increased confidence in and understanding of CBT being reported. Qualitatively, trainees reported supervised practice as the most useful aspect of the course. Overall, the study reports on an example of combining technology and training which appeared to result in an increase in trainees’ confidence in CBT. Detail was lacking in some elements of the study's design, and results should be interpreted with caution, as the research question focused on assessing the perception of CBT training delivered via videoconferencing, rather than solely on the experience of CBT training.

The study conducted by Bennett-Levy and Lee (2014) aimed to develop a model predicting the level of engagement with, and the experience of benefit from, one component of CBT training: self-practice/self-reflection (SP/SR). Developed by Bennett-Levy et al. (2001), SP/SR is a structured, personal therapy-like programme that instructs cognitive behavioural therapists to practise CBT techniques on themselves, then reflect on and evaluate their experiences. The SP/SR programme has been formalized into a CBT training paradigm aimed at increasing CBT skill and competence via self-experiential learning (Bennett-Levy and Lee, 2014). Participants in this study were from four different training groups: two groups of postgraduate students on a clinical psychology programme undertaking an introductory course in CBT, experienced psychologists undertaking a self-experiential training course in CBT, and a group of mental health workers undertaking an introductory course in CBT. Demographic information such as age, background, experience and profession varied amongst the 46 participants.

Qualitative data were obtained from four sources: trainees’ written course reflections, transcribed post-course individual and group interviews, post-course questionnaires and trainer observations. Not all sources were utilized for each group. Data were analysed using grounded theory methodology and a detailed account of this process was provided. An empirically driven model was developed, with ‘Engagement’ and ‘Experience of benefit’ at its centre in a reciprocal, repeating relationship, and so the results of this study contribute to all three of the themes within this systematic review. In relation to this theme, trainees’ ‘Expectation of benefit’ was found to influence their experience of benefit and their engagement with CBT training. Four other factors found to influence trainees’ experiences during CBT training are discussed under the relevant subsequent themes.

Strengths of this study included its detailed description of the research setting and varied sample, and its justification and descriptive account of the data analysis. Whilst data were collected from four different sample groups, the method of data collection varied, resulting in less comprehensive contributions from some of the groups of participants.

The final study within this theme was conducted by Schmidt and Foli-Andersen (2017) and focused on psychiatric trainees. A two-part survey was completed by 60 participants, exploring their evaluations of their psychotherapy training and, of interest for this review, their perceptions of the quality of the CBT supervision they received during training. As not all had experienced CBT supervision as part of their training, only 36 participants completed the second part of the survey relating to CBT. All results in this study were quantitative, and further detail could have been obtained from supporting qualitative responses. Findings revealed that whilst trainees rated their CBT supervisors positively, specific CBT skills and supervision methods (such as summarizing in session and creating a collaborative agenda) appeared to be lacking, suggesting that in Denmark good quality CBT supervision is of limited availability during psychiatrists’ psychotherapy training. It is not known whether these findings can be generalized to psychiatry training programmes in other countries. As with the study by Foulkes (2003), these findings should be interpreted with caution, as the CBT training experience is drawn from a wider psychiatric training programme containing many different components.

2. Internal processes of engagement with training

Five of the reviewed studies reported results that explored the internal processes of trainees’ engagement with CBT training. All studies in this theme were rated as ‘high quality’, ranging from 76 to 88%. The sample sizes in these studies ranged from n = 4 to n = 46. All studies utilized qualitative data and methodology, and one (Chaddock et al., 2014) also utilized quantitative data.

Owen-Pugh (2010) conducted a qualitative study of 12 qualified psychodynamic counsellors studying a university module that included an introductory course in CBT. A thematic analysis was undertaken on participants’ learning journals and the transcript of a focus group held a year after module completion. Whilst the researcher's position and approach to the data analysis were stated, the study failed to report any formal assessment of the reliability of the analysis, such as reviewing the data and subsequent themes with someone independent of the research. Additionally, alternative data collection methods (such as individual interviews) and subsequent analysis could have provided greater detail. Results found that participants initially struggled with anxieties and with the differences between CBT and their core theory of psychodynamic therapy, which produced resistance to learning. As the module progressed, participants made deliberate attempts to engage with CBT and its techniques, and ultimately appraised the new model as effective and ethical. This study reports on the internal ‘struggle’ that some trainees with prior therapeutic training may experience when learning CBT as a new model for their practice. Whilst the study did explore the experience of CBT training, it described its findings with a focus on the transition from one psychological model to another, and consequently a sole focus on CBT training was not apparent.

Employing a single-case design, Chaddock et al. (2014) examined the experiences of four CBT trainees undertaking SP/SR as part of their training. This study provided both quantitative and qualitative data, via self-ratings of skill and written reflections following the completion of SP/SR, and linked its findings clearly to an explicit theoretical framework (the DPR model of therapist skill development; Bennett-Levy, 2006). However, an explanation for the choice of measurement tool was not provided and little justification was given for the analytical method adopted. The findings indicated that engagement differs on the basis of individual differences, including the preference to engage different modalities of the self during SP/SR: some participants favoured the ‘personal self’, some the ‘therapist self’ and others the ‘trainee self’. The greatest perceived benefit of SP/SR was found when the trainee engaged both the ‘personal self’ and the ‘therapist self’, and the least beneficial stance appeared to be when the ‘trainee self’ was the dominant focus. Overall, whilst the study provided an interesting insight into the internal processes activated during SP/SR, the small sample size means that these findings may not generalize to the wider context of CBT training.

Wolff and Auckenthaler (2014) focused on the internal process of theoretical orientation development experienced by 20 German psychotherapists during the last phase of their professional training in CBT. Although some demographic information was provided about participants, limited information was reported on the process of recruitment. Individual, problem-centred interviews were conducted and the qualitative data were coded and analysed using grounded theory methodology. The study provided a detailed account of its theoretical framework, and the method of data collection and analysis allowed a degree of depth to be explored within the topic of interest. Results found that the processes involved in developing a theoretical orientation are complex, constantly changing and serve a psychological function for trainees. Most dominant within this complex journey was the process of constantly defining and redefining CBT and other approaches, utilizing strategies such as ‘blurring boundaries’ between CBT and other approaches when a positive encounter with another therapeutic approach was experienced, and ‘emphasizing the boundaries’ between approaches when a negative experience was encountered. The study reported that trainees are actively, internally involved in the development of their own theoretical orientation during CBT training, and that this is not simply a result of client outcomes or trainees’ skill development but also involves identity and orientation development. Methodological considerations focused only on the sample; other strengths and limitations, such as those relating to the procedure and data analysis, were not explicitly reported.

Bennett-Levy and Lee (2014) developed a model outlining the experience of trainees engaging in SP/SR during training. They found that two factors influencing trainees’ experience of benefit and engagement during training related to the theme of ‘Internal processes of engagement’: ‘Feeling of safety with the process’ (referring to the extent to which agreements and structures were put in place when conducting SP/SR, enabling trainees to feel safe to self-explore) and ‘Available personal resources’ (indicating the amount of time and energy trainees were able to give to the process of SP/SR). As explored in the previous theme, this study provides an empirically driven model of the experience of benefit and engagement in SP/SR as a component of CBT training, which can be tested formally in other contexts.

The final study within this theme, conducted by Bennett-Levy et al. (2015), explored trainees undertaking a formal 10-day CBT training course in which SP/SR was neither recommended nor explicitly encouraged. The researchers employed a ‘participatory action research’ approach to explore trainees’ reports of spontaneous engagement in self-practice during and after training, and therefore involved participants heavily in the study's design, a component that many of the studies in this review lacked. Five participants provided qualitative data from two group meetings at which this topic was discussed, and the data were analysed using thematic analysis. Findings reported that participants were motivated to practise CBT on themselves as a result of their training, the value they placed on the therapeutic model and their own personal needs. Participants reported that self-practice also increased their confidence and competence as CBT therapists and suggested that it also served as a useful burn-out prevention strategy. The findings suggest that if trainees can motivate themselves to engage in self-practice of CBT during their training, they can experience benefits both personally and professionally. Results from this study should be interpreted with caution, mainly due to its small sample size and the lack of detail in the data collection and analysis.

3. External influences on engagement with training

Five studies reported results that explored the external influences on engagement with CBT training. Studies were rated as either ‘high quality’ (Bennett-Levy and Lee, 2014; Rakovshik and McManus, 2013; Spafford and Haarhoff, 2015) or ‘moderate quality’ (Bennett-Levy et al., 2009; MacLiam, 2015). The quality ratings of these studies ranged from 69 to 86%. The sample sizes in the studies within this theme ranged from n = 9 to n = 120.

Firstly, Bennett-Levy et al. (2009) conducted a study of 120 CBT therapists attending a workshop to improve their practice. A detailed account of the varied sample was provided. Participants completed the ‘Method of Learning Therapy Skills Questionnaire’ and were asked to identify the most effective learning methods for 11 items of therapist knowledge/skills. There was no statistical assessment of the validity and reliability of this measure, and an ‘eyeball’ analysis was performed rather than a formal statistical analysis. Results were interpreted within the context of an explicit theoretical framework, the DPR model of therapist skill development (Bennett-Levy, 2006). Reading and lectures/talks were rated as most effective for learning declarative and conceptual knowledge; modelling was highly rated for declarative and procedural skills; role-play was rated as most effective for procedural learning; and reflective practice and self-experiential work were rated as most effective for the reflective and procedural learning systems. This study provides evidence of the teaching methods that trainees perceive as useful during their learning.

The second study focusing on external influences on engagement again considered the teaching methods within a CBT training programme. Rakovshik and McManus (2013) described the results of a course evaluation of a 1-year, Masters-level CBT training course across three cohorts (n = 73). Despite lacking an explicit theoretical framework specific to its research question and findings, the study provided clear and detailed descriptions of recruitment, data collection and analysis. The course evaluation measure, however, did not appear to have been statistically assessed for validity and reliability. Results from paired t-tests revealed significant differences between the endorsements of the impact of various aspects of learning. Supervision was perceived to have more influence on competence than clinical instruction, with interactions with trainers given the highest rating. Peer-related learning received a relatively low rating, suggesting that it is not an essential criterion for effective training.

In Bennett-Levy and Lee's (2014) study, two external factors appeared to influence trainees’ engagement with SP/SR: ‘Course structure and requirements’ (the institutional context and specifications of the CBT training course, such as its structure, length and components; where SP/SR was a requirement of a course, this facilitated engagement in the process) and ‘Group processes’ (the impact of the SP/SR group's cohesiveness, feedback and participation which, when working effectively, could increase trainees’ engagement with the process of SP/SR). These findings suggest that for trainees’ engagement to be maximized, these external influences should be addressed and made explicit within any CBT training programme.

MacLiam (2015) conducted an internet-based survey of 43 graduates of a university-based CBT training course, primarily focusing on graduates’ learning, development and experience after the course. The survey also enquired into graduates’ retrospective experiences of the CBT course using quantitative and qualitative responses. A detailed description of the study's recruitment, sample and data collection method was provided. Quantitative findings were reported descriptively; for example, the majority (55%) of participants described their experience as ‘excellent’ and no ratings were given for negative options. Limited qualitative findings were provided, such as participants’ comments regarding the positive experience of the reflective aspect of the course and general complimentary comments about the teaching, organization and value of CBT. Conclusions and implications for training courses may have benefited from a more detailed presentation of the qualitative responses, to further enhance understanding of the training experience.

Finally, exploring the external influence of technology within CBT training, Spafford and Haarhoff (2015) examined the use of an online blog to facilitate engagement in SP/SR within a CBT training programme. Nine participants provided qualitative data from a feedback questionnaire and a teleconference focus group, and the data were analysed using thematic analysis. Whilst the methods of data collection and analysis were adequate, conclusions may have benefited from more detailed accounts of individuals’ experiences of this component of CBT training, as well as from involving previous trainees in the development of the feedback questionnaire. Findings suggested that the online blog enhanced most trainees’ experience of the self-reflective component of SP/SR. However, the authors concluded that factors such as anonymity, the role of a facilitator, assessment, and time and completion should be considered when contemplating the effectiveness of such a blog.

Discussion

This review aimed to explore the experience of CBT training from trainees’ perspectives. Some conclusions can be drawn: trainees experience specific elements of CBT training positively (namely SP/SR and interactive components such as supervision), and trainees’ self-perception of their own competence and their theoretical orientation change throughout training via internal and external influences on engagement. General satisfaction with specific CBT training courses appears to be positive; however, further detail and cross-course exploration would provide a broader understanding of trainees' experience of benefit throughout training. Training in CBT undertaken as part of general psychiatric training appears to be lacking in quality and quantity, and more detailed research should explore this further.

Overall, this review highlights that numerous internal and external factors impact on trainees’ engagement with CBT training. Elements such as the quantity of training, and the adoption of a variety of active strategies within a training programme (as also suggested within therapy by Waller, 2009), have been shown to influence trainees’ engagement. Other external influences, such as new technological developments, can also influence the experience of benefit that trainees report.

Trainees’ own expectations have also been shown in this review to influence both their experience of benefit and the internal factors of engagement throughout training. This is an important element for training programme providers to reflect on: it appears that if a potential trainee is enthusiastic and expects to learn during training, they will have positive training experiences, as well as an increase in their skill and competence. It is therefore possible that trainees who are encouraged to undertake a training programme by their managers, rather than engaging on their own initiative, may not experience as much benefit from the course, calling into question its cost and time effectiveness for these individuals.

The findings from this review appear to fit with previously published research. Many of the studies reviewed provided empirical evidence supporting the DPR model (Bennett-Levy, 2006) of therapist skill acquisition and refinement. Some suggested that supervision and facilitator interaction are perceived by trainees as having the greatest impact on their competence during CBT training. These findings are consistent with the conclusion of the systematic review by Muse and McManus (2013) that supervisory ratings of ‘in-session’ clinical performance provide a robust method for assessing competence, and with the finding of Beidas et al. (2012) that active learning is trainees’ preferred modality of training.

Strengths and limitations of the review

Overall, the studies included in this review used a variety of designs, outcome measures and analyses, in different research settings. A broad range of data has been included in this review, and its breadth may have led to the inclusion of some studies of arguably little relevance to this topic (namely, the studies exploring CBT training as part of psychiatry training). This review highlights a lack of literature on the experience of CBT training, and future studies should continue to explore different methodologies to enhance understanding of this in a scientific manner.

Due to the variation in the reviewed studies, generalizing the findings to the wider context of CBT training should be approached with caution. Further research needs to be conducted on the experience of CBT training in specific courses, across different countries, to develop more stable conclusions.

It can be assumed that if future research continues to explore the experience of CBT training, courses can improve their standards, whilst incorporating new and innovative ways to deliver effective training.

Implications

The implications of this review are relevant to trainees, course providers and training commissioners. Firstly, trainees undertaking CBT training should acknowledge the internal processes that occur, namely the change in their self-perception of competence and the journey of developing and defining their theoretical orientation. Trainees should also be aware that their own level of engagement has been shown to be central to benefiting from some elements of CBT training (namely, SP/SR).

Training course providers should acknowledge these findings, and incorporate the ‘trainee voice’ into course structures and developments. Components that are perceived by trainees to have an impact on their competence should be encouraged, and other aspects perceived as less effective should be reviewed.

At a strategic level, commissioners should incorporate the findings of this systematic review into their planning and commissioning of CBT training courses. This would help to ensure that trainees not only achieve the recognized level of skill and competence development set by course providers, but also that trainees’ personal reflective abilities and perceptions of competence are addressed, in line with the evidence reviewed here on the internal processes of engagement in CBT training.

Acknowledgements

None.

Ethical statement

Not applicable.

Conflicts of interest

None.

Main points

(1) This paper presents a systematic review of 13 peer-reviewed studies exploring the experience of cognitive behavioural therapy (CBT) training, from trainees’ perspectives.

(2) Findings were categorized into three themes: ‘Experience of benefit’, ‘Internal processes of engagement’ and ‘External influences on engagement’.

(3) Overall, CBT training is experienced in a relatively positive way; however, the journey can be difficult for trainees at times.

(4) The review reveals a gap in the literature regarding the general experience of CBT training with no imposed focuses or pre-conceived themes.

Learning objectives

By studying this paper, readers will be able to:

(1) Summarize the current literature regarding the experience of cognitive behavioural therapy (CBT) training.

(2) Note the strengths and weaknesses of the current research into the experience of CBT training.

(3) Understand the implications of research into the experience of CBT training for the provision of CBT training in future.

References

Recommended follow-up reading

Rakovshik, SG, McManus, F (2010). Establishing evidence-based training in cognitive behavioural therapy: a review of current empirical findings and theoretical guidance. Clinical Psychology Review 30, 496–516.

References

Beidas, RS, Edmunds, JM, Marcus, SC, Kendall, PC (2012). Training and consultation to promote implementation of an empirically supported treatment: a randomized trial. Psychiatric Services 63, 660–665.
Bennett-Levy, J (2006). Therapist skills: a cognitive model of their acquisition and refinement. Behavioural and Cognitive Psychotherapy 34, 1–22.
Bennett-Levy, J, Beedie, A (2007). The ups and downs of cognitive therapy training: what happens to trainees’ perception of their competence during a cognitive therapy training course? Behavioural and Cognitive Psychotherapy 35, 61–75.
Bennett-Levy, J, Lee, NK (2014). Self-practice and self-reflection in cognitive behaviour therapy training: what factors influence trainees’ engagement and experience of benefit? Behavioural and Cognitive Psychotherapy 42, 48–64.
Bennett-Levy, J, McManus, F, Westling, BE, Fennell, M (2009). Acquiring and refining CBT skills and competencies: which training methods are perceived to be most effective? Behavioural and Cognitive Psychotherapy 37, 571–583.
Bennett-Levy, J, Turner, F, Beaty, T, Smith, M, Paterson, B, Farmer, S (2001). The value of self-practice of cognitive therapy techniques and self-reflection in the training of cognitive therapists. Behavioural and Cognitive Psychotherapy 29, 203–220.
Bennett-Levy, J, Wilson, S, Nelson, J, Rotumah, D, Ryan, K, Budden, W, Stirling, J, Beale, D (2015). Spontaneous self-practice of cognitive behavioural therapy (CBT) by Aboriginal counsellors during and following CBT training: a retrospective analysis of facilitating conditions and impact. Australian Psychologist 50, 329–334.
Centre for Reviews and Dissemination (2009). Systematic Reviews. York: Centre for Reviews and Dissemination.
Chaddock, A, Thwaites, R, Bennett-Levy, J, Freeston, MH (2014). Understanding individual differences in response to self-practice and self-reflection (SP/SR) during CBT training. the Cognitive Behaviour Therapist 7, 1–7.
Foulkes, P (2003). Trainee perceptions of teaching of different psychotherapies. Australian Psychiatry 11, 209–214.
MacLiam, F (2015). Cognitive behavioural psychotherapy graduates in Ireland: a follow-up survey of graduates from an Irish University. Irish Journal of Psychological Medicine 32, 187–195.
McManus, F, Westbrook, D, Vazquez-Montes, M, Fennell, M, Kennerley, H (2010). An evaluation of the effectiveness of diploma-level training in cognitive behaviour therapy. Behaviour Research and Therapy 48, 1123–1132.
Moher, D, Liberati, A, Tetzlaff, J, Altman, DG (2009). Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Medicine 6, e1000097.
Muse, K, McManus, F (2013). A systematic review of methods for assessing competence in cognitive behavioural therapy. Clinical Psychology Review 33, 484–499.
Owen-Pugh, V (2010). The dilemmas of identity faced by psychodynamic counsellors training in cognitive behavioural therapy. Counselling and Psychotherapy Research 10, 153–162.
Rakovshik, SG, McManus, F (2010). Establishing evidence-based training in cognitive behavioural therapy: a review of current empirical findings and theoretical guidance. Clinical Psychology Review 30, 496–516.
Rakovshik, S, McManus, F (2013). An anatomy of CBT training: trainees’ endorsements of elements, sources and modalities of learning during a postgraduate CBT training course. the Cognitive Behaviour Therapist 6, 1–12.
Rees, CS, Krabbe, M, Monaghan, BJ (2009). Education in cognitive-behavioural therapy for mental health professionals. Journal of Telemedicine and Telecare 15, 59–63.
Schmidt, LM, Foli-Andersen, NJ (2017). Psychotherapy and cognitive behavioural therapy supervision in Danish psychiatry: training the next generation of psychiatrists. Academic Psychiatry 41, 4–9.
Sirriyeh, R, Lawton, R, Gardner, P, Armitage, G (2012). Reviewing studies with diverse designs: the development and evaluation of a new tool. Journal of Evaluation in Clinical Practice 18, 746–752.
Spafford, S, Haarhoff, B (2015). What are the conditions needed to facilitate online self-reflection for cognitive behaviour therapy trainees? Australian Psychologist 50, 232–240.
Waller, G (2009). Evidence-based treatment and therapist drift. Behaviour Research and Therapy 47, 119–127.
Wolff, S, Auckenthaler, A (2014). Processes of theoretical orientation development in CBT trainees: what internal processes do psychotherapists in training undergo as they ‘integrate’? Journal of Psychotherapy Integration 24, 223–237.
Young, J, Beck, AT (1980). Cognitive Therapy Scale: Rating Manual. Unpublished manuscript, University of Pennsylvania, Philadelphia, PA, USA.
Young, J, Beck, AT (1988). Revision of Cognitive Therapy Scale. Unpublished manuscript, University of Pennsylvania, Philadelphia, PA, USA.