Introduction
Evidence-based CBT
Cognitive behavioural therapy (CBT) is an evidence-based psychological therapy (EBPT) and a first-line treatment for anxiety and/or depression (National Institute for Health and Care Excellence, 2011). It has become more widely available in England since the roll-out of the Improving Access to Psychological Therapies (IAPT) initiative, and is endorsed throughout the UK.
The effectiveness of CBT is evidenced in randomised controlled trials (RCTs) (National Institute for Health and Care Excellence, 2004, 2005a, 2005b, 2009, 2011), the ‘gold-standard’ of research evidence informing the development of clinical guidelines. In RCTs, contextual factors hypothesised to influence outcome (for example, therapist training or supervision) are tightly controlled or monitored to ensure that clinical outcomes can be attributed to the therapeutic intervention.
Concrete specification of the competencies described in large-scale RCTs and underpinning clinical guidelines has been provided in the CBT competence framework (Roth and Pilling, 2008). The framework details the competencies, skills and activities required for effective CBT and enables commissioners and clinicians to specify and monitor the service delivered (Groom and Delgadillo, 2012). Roth and Pilling (2008) make the point that the CBT competencies specified in the framework are not separable from the resources and infrastructure supporting delivery.
Pragmatic trials of real-world CBT
It is arguable that EBPTs are those which approximate the interventions delivered in RCTs (Groom and Delgadillo, 2012). However, real-world clinical practice is not subject to the same control or monitoring processes (Cook et al., 2017). Pragmatic trials have explored the efficacy of CBT as delivered in routine services (e.g. Barkham et al., 2021; Bisson et al., submitted for publication), generally identifying lower efficacy than in research trials. One plausible hypothesis for this reduced efficacy of CBT in the real world is the reduced availability of resources and infrastructure support for CBT. It is notable, for example, that pragmatic trials have offered additional training and supervision to participating therapists in order to demonstrate acceptable fidelity to the evidence-based model (Barkham et al., 2021; Bisson et al., submitted for publication; Ehlers et al., 2013).
The Job-Demands-Resources (JDR) model
Job resources are any workplace characteristics which (1) support staff in achieving their goals, (2) reduce the demands and costs associated with the work or (3) stimulate employee growth and development (Demerouti et al., 2001). The Job-Demands-Resources (JDR) model provides a framework for understanding the link between job resources and a range of personal and organisational outcomes (Demerouti et al., 2001). The authors theorise that an abundance of job resources stimulates engagement with work. Work engagement is predicted exclusively by job resources and can be defined as ‘a positive, fulfilling, work-related state of mind characterized by vigor, dedication, and absorption’ (Schaufeli et al., 2002, p. 74). It is proposed to mediate a relationship between job resources and positive organisational outcomes (Schaufeli and Bakker, 2004a).
The model has been tested across several population samples, and a robust association (r = 0.25–0.40) established between job resources and work engagement (Schaufeli and Bakker, 2004a; Schaufeli and Taris, 2014). Engagement is predictive of employee outcomes such as (1) how employees feel about their workplace (Meyer and Allen, 1991), (2) whether they intend to leave, and (3) burnout (Boyd et al., 2011; Hakanen et al., 2008; Maslach, 1998; Schaufeli and Bakker, 2004a). Staff who are supported with more resources perform better at work and experience greater wellbeing (Nielsen et al., 2017).
In the NHS, insufficient job resources are often cited as a key challenge (Quirk et al., 2018) and are negatively related to all aspects of workplace wellbeing (Teoh et al., 2020). Job resources are even more important when job demands are high (Bakker et al., 2007).
Resources and infrastructure support for CBT
Associations between job demands and workplace wellbeing are of notable significance in IAPT services, where staff are subject to large caseloads and clinical complexity (Scott, 2018) and very high rates of burnout have been identified (Westwood et al., 2017; NHS England, 2015). The IAPT initiative tripled the proportion of the English population accessing psychological therapies in under 10 years (Layard and Clark, 2015). The availability of resources and infrastructure can be difficult to ensure during rapid roll-out of large-scale service change, and the service delivered may not reflect the original intentions of service designers (Black et al., 2018). For example, resource limitations have prompted some restrictions on delivery, such as limiting the course of therapy to an arbitrary number of sessions (Clark, 2018). Initial analyses of IAPT outcome data indicate that organisational factors account for some variability in clinical outcomes between services (Clark, 2018).
The availability of resources and infrastructure supporting CBT practitioners may facilitate or inhibit the delivery of an evidence-based intervention. Key resources supporting CBT practitioners are exemplified within key RCTs and model-specific treatment protocols. For example, the treatment of social phobia requires access to recording equipment (Clark et al., 2005) and the treatment of PTSD may require access to the internet (Murray et al., 2015). Other protocols require some flexibility in service provision, in order to facilitate increased frequency or duration of sessions (Beck et al., 1979) or work outside the clinic (Ehlers and Clark, 2000). Therapists delivering CBT in high-quality RCTs receive dedicated training as well as regular, high-quality, model-specific clinical supervision (Roth et al., 2010).
In conclusion, although CBT has become more widely available across the UK, it is not clear whether resources supporting the delivery of CBT in line with the evidence base have been widely incorporated into service delivery models. A means to measure and evaluate service infrastructure and resources for CBT is required to explore their availability and relationship with organisational outcomes.
Measuring service support for CBT
Groom and Delgadillo (2012) reviewed key guidelines supporting practice in IAPT services, including (1) exemplar RCTs (Roth et al., 2010), (2) the IAPT competency framework (Department of Health, 2007; Roth and Pilling, 2008) and (3) the BABCP standards of conduct, performance and ethics (British Association for Behavioural and Cognitive Psychotherapies, 2010). They identified resources or supports which were (1) recommended within the IAPT competency framework, (2) available to therapists within exemplar RCTs or (3) required in order to adhere to the BABCP standards of conduct, performance or ethics.
As a result of this process, Groom and Delgadillo (2012) developed a detailed and concrete description of key support factors which facilitate the competent delivery of CBT. Twenty-three support factors were identified, which were grouped thematically under seven standards and adapted into a questionnaire. This questionnaire has been piloted in one service (Groom and Delgadillo, 2012) and used to benchmark support for the delivery of CBT in another (Hadden et al., 2018).
Aims
This study aims to establish the psychometric properties of the questionnaire developed by Groom and Delgadillo (2012) to determine whether it is a reliable and valid measure of service support for CBT. In the absence of a ‘gold-standard’ measure to establish criterion validity, the study aims to establish construct validity through correlation with measures of engagement and wellbeing among practitioners, in line with the JDR model (Demerouti et al., 2001; Schaufeli and Bakker, 2004a).
Hypotheses
This study is split into two stages: (1) an evaluation of content validity and (2) psychometric evaluation. In Stage 1 it is hypothesised that:

- Consensus feedback from the expert panel may result in adaptations to improve the content relevance and representativeness of the questionnaire.
- The content validity of the questionnaire will be evidenced by content validity index (CVI) and content validity ratio (CVR) scores above established thresholds (Davis, 1992; Wilson et al., 2012).

In Stage 2 it is hypothesised that:

- Principal component analysis (PCA) will identify one or more components with an eigenvalue above 1.
- A PCA will reduce the number of items required within the measure.
- The questionnaire will demonstrate internal consistency, as measured by Cronbach’s alpha.
- The questionnaire will demonstrate good temporal stability, as indicated by a positive correlation between administration at time 1 and time 2 (7–14 days later).
- The questionnaire will demonstrate construct validity through positive correlations with the following scales:
  a. The Utrecht Work Engagement Scale.
  b. The Psychological Practitioner Workplace Wellbeing Scale.
Method
The study was conducted in two stages. In Stage 1, content validity was explored through consultation with an expert panel (n = 5) and feedback from BABCP-accredited practitioners (n = 20). In Stage 2, the questionnaire was distributed more widely for psychometric evaluation (n = 188).
Stage 1: Expert panel
Sample
An initial expert panel of five BABCP-accredited practitioners involved in teaching and training CBT was consulted, and a further 15 BABCP-accredited practitioners reviewed the questionnaire online using the content validity survey.
Measures
Respondents were asked to rate each item according to relevance (1, not relevant; 2, somewhat relevant; 3, quite relevant; 4, very relevant) and necessity (1, not essential; 2, useful but not essential; 3, essential) in measuring service resource and infrastructure support for CBT. These ratings were used to calculate the Item- and Scale-Level Content Validity Indices (I-CVI/S-CVI) and CVR using approaches described in the literature (Rodrigues et al., 2017; Zamanzadeh et al., 2015):
- I-CVI = the number of ‘very relevant’ ratings divided by the number of experts.
- S-CVI = the mean of the I-CVI scores across items.
- CVR = (Ne − N/2) / (N/2), where Ne is the number of experts rating an item ‘essential’ and N is the total number of experts (Lawshe, 1975).
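To make the scoring concrete, the following is a minimal sketch of these three calculations. It is not the study’s code (the analyses were run in SPSS and FACTOR); the ratings matrix is invented for illustration, and the I-CVI here counts only ‘very relevant’ ratings, per the definition above, whereas some conventions also count ‘quite relevant’.

```python
import numpy as np

# Illustrative ratings only: rows = experts, columns = items.
# Relevance uses the 1-4 scale; necessity uses the 1-3 scale (3 = essential).
relevance = np.array([[4, 3, 4, 2],
                      [4, 4, 3, 3],
                      [3, 4, 4, 2],
                      [4, 4, 4, 3],
                      [4, 3, 4, 4]])
necessity = np.array([[3, 2, 3, 1],
                      [3, 3, 3, 2],
                      [2, 3, 3, 2],
                      [3, 3, 2, 1],
                      [3, 3, 3, 3]])

n_experts = relevance.shape[0]

# I-CVI: proportion of experts giving a 'very relevant' (4) rating to each item.
i_cvi = (relevance == 4).sum(axis=0) / n_experts

# S-CVI: mean of the item-level indices.
s_cvi = i_cvi.mean()

# CVR (Lawshe): (Ne - N/2) / (N/2), where Ne = number rating the item 'essential'.
n_essential = (necessity == 3).sum(axis=0)
cvr = (n_essential - n_experts / 2) / (n_experts / 2)

print("I-CVI:", i_cvi)          # per item
print("S-CVI:", round(s_cvi, 2))
print("CVR:", cvr)              # ranges from -1 to 1
```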
Procedure
The expert panel were introduced to the questionnaire and its domain of measurement, and were asked to rate each item according to its relevance and essentiality to the construct in question (service infrastructure and resource support for the delivery of CBT). The panel members were invited to comment on individual items within the questionnaire and discuss any aspects of the construct that were not captured by existing items (DeVellis, 2016). Following the consultation, adaptations were made to the questionnaire (see Results section) and it was distributed to a further 15 BABCP-accredited practitioners for online review of item relevance and essentiality to the construct.
Stage 2: Psychometric evaluation
Sample
Health professionals delivering CBT were invited to participate online. Participants were excluded if they did not work as part of an organisation.
Sample size
For correlational analyses, a sample size of 84 is required to detect a small–moderate effect (0.3), and a sample size of 191 is required to detect a small effect (0.2; Cohen, 1988). These estimations were calculated using GPower (Faul et al., 2007). The suitability of PCA is dependent on the strength of the relationships between items and whether factors are well determined (Tabachnick et al., 2007). In line with previous research, this study aimed to recruit 230 participants, at a participant-to-item ratio of 10:1.
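For readers without GPower, the required sample size for a correlation can be approximated with Fisher’s z-transformation. A hedged sketch (the function name is ours, and the approximation differs slightly from GPower’s exact values):

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80, two_tailed=True):
    """Approximate N to detect correlation r, via Fisher's z-transformation."""
    z_alpha = norm.ppf(1 - alpha / (2 if two_tailed else 1))
    z_beta = norm.ppf(power)
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

print(n_for_correlation(0.3))  # ~85, close to the 84 reported
print(n_for_correlation(0.2))  # ~194, close to the 191 reported
```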
Recruitment
The study was advertised online on Facebook groups for clinical psychologists and CBT therapists; by email to staff and current and past students from CBT and Clinical Psychology Training Programmes at Cardiff University and other academic programmes; to CBT therapists whose details are public on the BABCP Register; and to CBT therapists delivering face-to-face CBT as part of a research trial (RAPID trial: Nollett et al., 2018). Qualified and trainee therapists were invited to participate. Trainee therapists were NHS staff; they make a significant contribution to service structure across a range of settings, and their recent knowledge of treatment protocols and guidelines makes them well placed to identify gaps in service support.
Measures
a. Demographic and Training Questionnaire
Participants were asked about their age, gender, profession, service context, training and occupation.
b. CBT infrastructure and support questionnaire (Groom and Delgadillo, 2012)
The original questionnaire contained 23 items measuring resource and infrastructure support for CBT, organised under seven standards.
c. Utrecht Work Engagement Scale (UWES; Schaufeli and Bakker, 2004b)
The UWES is a 17-item measure of work engagement (Schaufeli et al., 2002) which captures three dimensions of engagement: dedication, vigour and absorption. Internal consistency is good for both the subscales (over 0.8) and the composite score (over 0.9) (Schaufeli, 2012). The UWES demonstrates discriminant validity through a negative relationship with burnout (Schaufeli and Bakker, 2004b; Schaufeli et al., 2002). The authors endorse a three-factor structure (Schaufeli, 2012), yet high inter-correlations between factors have prompted others to use the total score as a composite measure of engagement (Christian and Slaughter, 2007).
d. Psychological Practitioner Workplace Wellbeing Measure (PPWWM; Summers et al., 2019)
The PPWWM is a 26-item measure of the wellbeing of psychological practitioners specifically. It has demonstrated good construct validity through a positive relationship with the Satisfaction with Life Scale and a negative relationship with the General Health Questionnaire. It has also demonstrated high internal consistency (α = .92) and high temporal stability (r = .94) (Summers et al., 2019).
Incentives
Participants were offered the chance to win £100 of vouchers for taking part.
Data collection and storage
Data were collected using Qualtrics, secure online survey software. Participants provided their email addresses for the purpose of follow-up contact and generated their own ID code. Identifiable information was held separately from the data and deleted after the study was completed. Questionnaire items were mostly administered in a ‘forced-response’ setting to minimise missing data.
Data analysis strategy
The analysis was conducted in two stages. In Stage 1, amendments were made to the questionnaire following expert review. Content validity scores were then calculated based on ratings provided by the expert panel and BABCP-accredited practitioners (N = 20). In Stage 2, the psychometric properties of the measure were established through analysis of questionnaire data provided by practitioners delivering CBT (N = 188). This stage consisted of an analysis of demographic variables and item analysis before a PCA was conducted in order to establish the underlying structure of the questionnaire. As a result of the PCA, items were removed from the questionnaire. Measures of reliability and validity were taken for the revised questionnaire.
PCA and exploratory factor analytic (EFA) procedures are commonly used in the initial stages of questionnaire validation (Tabachnick et al., 2007). PCA differs from EFA in that it does not assume that underlying factors cause the scores on individual items, and was deemed appropriate to validate an index.
A PCA with orthogonal rotation (Varimax) was conducted to determine the component structure of the measure. Pearson correlations are frequently used within PCAs and factor analyses of ordinal or Likert-scale data (LaVeist et al., 2009; Summers et al., 2019), although some argue that PCAs on ordinal data (or on data which are skewed or have strong kurtosis) should be conducted using polychoric correlations (Baglin, 2014; Basto and Pereira, 2012). For the purposes of this study, PCAs were conducted based on both Pearson and polychoric correlations, derived using SPSS (IBM Corporation, 2015) and FACTOR (Lorenzo-Seva and Ferrando, 2006) respectively. Both analyses were conducted to establish confidence in the resulting component structure.
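The study itself used SPSS and FACTOR. As a rough open-source equivalent of the Pearson-based route, the sketch below runs a principal components extraction with varimax rotation using the factor_analyzer package; the data frame is synthetic and its shape merely mirrors the study (188 respondents, 22 items), and polychoric extraction is not covered:

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

# Synthetic stand-in for the questionnaire data: 188 respondents x 22 items,
# each item scored 0-2 as in the support questionnaire.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 3, size=(188, 22)),
                     columns=[f"item_{i}" for i in range(1, 23)])

# Principal components extraction with varimax (orthogonal) rotation,
# based on the Pearson correlation matrix.
pca = FactorAnalyzer(n_factors=6, method="principal", rotation="varimax")
pca.fit(items)

eigenvalues, _ = pca.get_eigenvalues()
print(eigenvalues[:8])        # retain components with eigenvalue > 1 (Kaiser criterion)

loadings = pd.DataFrame(pca.loadings_, index=items.columns)
print(loadings.round(2))      # interpret loadings of 0.4 and above
```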
Results
Stage 1: Expert review
Amendments
Following consultation with a panel of experts, minor amendments were made to the questionnaire, including adding ‘don’t know’ options to four items (1a, 4b, 5d, 5e). The wording of item 2b was clarified to indicate that ‘bi-weekly’ means ‘twice-weekly’. Two additional questions were added under standard 2:
- 2c. Does your service allow for an extended number of sessions if required, in line with NICE guidance?
- 2d. Does your service allow for you to see a client in 6–8 months’ time for a ‘booster’ session, in line with NICE guidance?
These amendments were made prior to online distribution.
Stage 2: Psychometric evaluation
In total, 325 individuals accessed the survey and consented to participate. An initial screening question re-directed 58 participants away from the questionnaire because they indicated they did not work as part of an organisation. Participants who failed to complete the main support questionnaire were removed from the analysis (N = 78). One hundred and eighty-eight participants were included in the final sample. Using GPower (Faul et al., 2007), it was calculated that the sample achieved a power of 0.79 to detect a correlation of 0.2, with the alpha value set at 0.05.
Demographic variables
In order to determine whether participant characteristics might contribute to variation in the questionnaire’s scores, Kendall’s tau-b (τb), eta and one-way ANOVA analyses were carried out. A τb test indicated that there was no significant association between participant age and scale score (τb = 0.054, p = .28). Eta analyses and one-way ANOVAs indicated that there was no significant association between any nominal characteristic (e.g. gender, service type, country, profession, enrolment on a training programme, BABCP accreditation) and questionnaire score. The only variable to approach a standard threshold (α = 0.05) for statistical significance in a one-way ANOVA was BABCP accreditation status. However, a Bonferroni correction was made to account for multiple comparisons (α = 0.05/8 = 0.006) and, as a result, BABCP accreditation status (accredited or not accredited) was not found to account for variation in questionnaire scores. A table of these scores is available on request.
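A hedged sketch of this demographic screen, using synthetic data (column names are invented, and the eta analyses are omitted):

```python
import numpy as np
import pandas as pd
from scipy.stats import kendalltau, f_oneway

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "score": rng.integers(6, 41, size=188),        # total questionnaire score
    "age": rng.integers(23, 66, size=188),
    "accredited": rng.choice(["yes", "no"], size=188),
})

# Kendall's tau-b for the ordinal association between age and scale score.
tau_b, p_age = kendalltau(df["age"], df["score"])

# One-way ANOVA for a nominal characteristic, e.g. accreditation status.
groups = [g["score"].to_numpy() for _, g in df.groupby("accredited")]
f_stat, p_acc = f_oneway(*groups)

# Bonferroni correction across the eight demographic comparisons.
alpha_corrected = 0.05 / 8  # = 0.006
print(f"tau-b = {tau_b:.3f} (p = {p_age:.2f})")
print(f"ANOVA p = {p_acc:.3f}; significant after correction: {p_acc < alpha_corrected}")
```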
Item analysis
Overall scale scores ranged from 6 to 40 (range 34). The median, mode and mean scores were 29, 33 and 29 respectively, with a standard deviation of 6.38.
Table 2 provides further information about individual items. The item-total correlations (correlations between each item and the total score excluding that item) were low to moderate, ranging from .107 (5e) to .594 (4c). Two items scored below 0.2 (5e and 1c), five under 0.3 (2a, 2c, 6b, 7b, 7c) and three items scored above 0.5 (4c, 5a, 4a). Item mean and standard deviation (SD) scores were evaluated. All item responses ranged from 0 to 2. Floor and ceiling effects may be indicated by item means approaching 0 or 2 with small SDs. Only four items had means under 1 (3a, 4a, 5e, 6c), with the lowest at 0.744 (5e) and a minimum SD of 0.84.
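The corrected item-total correlation reported here (each item against the total of the remaining items) can be computed as in the following sketch; the function name and data are illustrative:

```python
import numpy as np
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the scale total excluding that item."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col])
                      for col in items.columns})

# Synthetic, mildly correlated 0-2 item scores for demonstration.
rng = np.random.default_rng(2)
base = rng.integers(0, 3, size=(188, 1))
items = pd.DataFrame(np.clip(base + rng.integers(-1, 2, size=(188, 22)), 0, 2))
print(corrected_item_total(items).round(3))
```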
A review of the Pearson inter-item correlation matrix indicated that four items (1c, 2d, 5d, 6c) did not correlate with any other item (r < 0.3), although one of these (6c) had an item-total correlation of above 0.3. A further review of a polychoric correlation matrix confirmed low inter-item correlations among these items, except for 1c. Polychoric correlations are described further in the next section. Three items were removed from further analysis (2d, 5d, 5e) due to low inter-item and item-total correlations.
Principal components analysis
A PCA with orthogonal rotation (Varimax) was conducted using both Pearson and polychoric correlations. Initial assessment of the polychoric correlation matrix indicated that these data did not meet the assumption of sampling adequacy required for further interpretation. Results based on Pearson correlations are therefore reported here, and the polychoric results are available on request.
A PCA relies on assumptions of sampling adequacy and the suitability of the data for reduction. The Kaiser–Meyer–Olkin (KMO) statistic indicated satisfactory sampling adequacy (0.724) (Kaiser and Rice, 1974). All individual items had a KMO score of above .602, surpassing the minimum threshold of 0.5 (Kaiser and Rice, 1974). Bartlett’s test of sphericity (p < .001) indicated that the data were suitable for data reduction, and the determinant (0.003) indicated that the data were not affected by multicollinearity.
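These diagnostics have open-source equivalents in the factor_analyzer package. A sketch on the same kind of synthetic item frame as above (the data are invented, so the printed values will not match the study’s):

```python
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_kmo, calculate_bartlett_sphericity)

# Synthetic 188 x 22 item frame, as in the earlier sketch.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 3, size=(188, 22)))

kmo_per_item, kmo_overall = calculate_kmo(items)
chi_square, p_value = calculate_bartlett_sphericity(items)
determinant = np.linalg.det(np.corrcoef(items.to_numpy(), rowvar=False))

print(f"KMO overall: {kmo_overall:.3f}")        # above 0.5 is acceptable
print(f"Bartlett: chi2 = {chi_square:.1f}, p = {p_value:.4f}")
print(f"Determinant: {determinant:.4f}")        # near zero suggests multicollinearity
```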
The PCA was conducted on 22 items. ‘Don’t know’ responses comprised 0.53% of the recorded values (21 responses across two items: 1a and 4b). These responses were subject to a pairwise deletion process to minimise the impact on statistical power (Van Ginkel et al., 2014).
The rotation method was selected following preliminary analyses using orthogonal (Varimax) and oblique (Direct Oblimin) methods (Field, 2018). Inspection of the component correlation matrix following oblique rotation demonstrated negligible correlations between components, indicating that an orthogonal rotation method would be suitable (Pedhazur and Schmelkin, 1991).
A PCA using varimax rotation generated six components explaining 58.32% of the variance (Table 3). The scree plot (Cattell, 1966) and Kaiser’s criterion (Kaiser, 1970) can be consulted when choosing the number of factors to extract. However, a review of scree plots is reliable only for samples above 200 (Stevens, 2002); therefore, in this study the Kaiser criterion was applied. The residuals matrix indicated that the rotated matrix was an adequate fit, with 32% of non-redundant residuals greater than 0.05.
Factor structure and loadings
The rotated factor matrix can be seen in Table 4. Field (2018) recommends that items with loadings below 0.4 should not be interpreted. Only two items had loadings below 0.4 (6c and 6d). Item 6d loaded on only one component and, as a result, was not retained in the component structure. Item 6c loaded on two components, and therefore only the loading above 0.4, in component 6, was interpreted. The resulting scale therefore consists of six components and 21 items.
The components were summarised thematically in the following categories:

(1) Access to physical resources;
(2) Suitability of the clinical environment;
(3) Clinical supervision;
(4) Time to offer flexible sessions and prepare;
(5) Protocols for working outside the clinic;
(6) Professional development.
Reliability and validity
Content validity ratings
A further 15 BABCP-accredited practitioners completed content validity ratings. CVI and CVR scores range between 1 and –1, with higher scores indicating greater agreement between raters as to the essentiality (CVR) or relevance (CVI) of an item. The critical values table of Wilson et al. (2012) was consulted in order to determine whether agreement on the essentiality of an item (CVR value) was great enough to exceed chance (Lawshe, 1975). For CVI scores, Davis (1992) recommends that agreement should surpass 0.8. Further to this, items scoring between 0.7 and 0.79 were considered to require revision, and items below 0.7 were considered for elimination (Zamanzadeh et al., 2015).
According to these standards, agreement among raters as to how essential eight items were to the construct did not exceed chance (2d, 3a, 4a, 4c, 5d, 5e, 6b, 6c). All items met Davis’s (1992) criteria for relevance, except for 5e, which had a borderline value. The full-scale CVI (S-CVI) was 0.92, indicating a high level of agreement that items were relevant for measuring resource and infrastructure support for the delivery of CBT. A full table of values is available on request.
Internal consistency
The scale demonstrated good overall internal consistency, with a Cronbach’s alpha of 0.801 across 21 items. Table 5 summarises Cronbach’s alpha for each component.
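Cronbach’s alpha is straightforward to compute directly from its definition, alpha = k/(k−1) × (1 − sum of item variances / variance of the total score). A sketch on synthetic data (the printed value will not match the study’s 0.801):

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Synthetic correlated 0-2 item scores (188 respondents, 21 items).
rng = np.random.default_rng(3)
base = rng.integers(0, 3, size=(188, 1))
items = pd.DataFrame(np.clip(base + rng.integers(-1, 2, size=(188, 21)), 0, 2))
print(round(cronbach_alpha(items), 3))
```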
Temporal stability
A Spearman’s rho correlation indicated that the measure had adequate temporal stability when participants were re-contacted 7–14 days later (r(96) = .735, p < .001). A paired samples t-test confirmed that there was no significant difference between the initial scores and those collected 7–14 days later (t(95) = 1.12, p = .27).
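Both statistics are available in scipy. A sketch with invented paired scores:

```python
import numpy as np
from scipy.stats import spearmanr, ttest_rel

# Invented paired totals from the two administrations, 7-14 days apart.
rng = np.random.default_rng(4)
time1 = rng.integers(6, 41, size=96)
time2 = np.clip(time1 + rng.integers(-4, 5, size=96), 6, 40)

rho, p_rho = spearmanr(time1, time2)   # test-retest (temporal stability)
t_stat, p_t = ttest_rel(time1, time2)  # checks for systematic change over time

print(f"rho = {rho:.3f} (p = {p_rho:.3g}); t = {t_stat:.2f} (p = {p_t:.2f})")
```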
Construct validity
On average, the sample (N = 181) scored 3.9 (SD 0.59) on the UWES out of a possible score of 6, and 98.26 (SD 13.49) on the PPWWM (N = 176) out of a possible score of 130. These mean scores are slightly higher than the mean values reported within normative datasets (UWES mean 3.82; SD 1.09) and published papers (PPWWM mean 93.47; SD 17.67) (Schaufeli and Bakker, 2004b; Summers et al., 2019).
Spearman’s rho correlations indicated a significant positive relationship between total support questionnaire scores and engagement (UWES) (r(161) = .307, p < .001) and between total support questionnaire scores and practitioner wellbeing (PPWWM) (r(156) = .472, p < .001). There was a moderate positive correlation between scores on the UWES and the PPWWM (r(176) = .459, p < .001).
Discussion
This study aimed to determine the psychometric properties of a measure of service infrastructure and resource support for CBT. The properties of this measure were explored through consulting experts in the field and piloting the measure with CBT practitioners. This resulted in a shortened scale with good content validity. The measure demonstrated construct validity through positive correlations with a measure of engagement and psychological practitioner wellbeing. It also demonstrated good internal consistency and adequate temporal stability. The questionnaire provides a basis from which service infrastructure and support for CBT can be evaluated, audited and compared.
Strengths and limitations
This study took an existing measure and subjected it to rigorous evaluation, resulting in a short, clear measure of resource and infrastructure support for the delivery of CBT. The 21-item measure consists of six thematically distinct components identified through PCA. An expert panel was consulted to ensure that the domain of measurement was fully represented by the items. The measure was piloted with CBT practitioners from a wide range of professional backgrounds working in diverse settings, and in the absence of a ‘gold-standard’ the study capitalised on a model developed within the field of organisational psychology.
The study recruited 188 participants, short of the recruitment target of 230. According to the KMO measure of sampling adequacy (Kaiser and Rice, 1974), this sample was sufficient, and it is consistent with most EFA or PCA studies, which are conducted with participant-to-item ratios of 10:1 or lower (Costello and Osborne, 2005). A larger sample would, however, strengthen confidence in component loadings.
Over half the sample worked in Wales, where there are fewer BABCP-accredited practitioners (British Association for Behavioural and Cognitive Psychotherapies, n.d.), and analysis of demographic characteristics indicated that BABCP accreditation status was the only demographic factor to significantly influence scale scores (at α = 0.05). However, to compensate for multiple comparisons, the Bonferroni correction was applied and, as a result, these differences were considered non-significant.
A quarter of participants were trainees whose perceptions of the resources and supports available to them may be influenced by the extra support afforded by their training programmes and/or limited availability of resources for temporary members of staff. However, trainee participants worked across diverse geographical locations in a range of services and arguably their presence in the study is reflective of service structure.
Job-Demands-Resources model
Findings are consistent with the JDR model (Schaufeli and Bakker, 2004a) and extend the application of the model to CBT practitioners. Resource and infrastructure support for CBT was found to predict work engagement among CBT therapists. Previous studies testing the JDR model have examined the association between latent, unobservable job resources, such as job control or supervisor support, and work engagement (Bakker et al., 2007). The current study demonstrates that the physical resources and infrastructure which facilitate the delivery of CBT are also associated with engagement.
The relationship between job resources and workplace wellbeing has been demonstrated consistently across different professions and workplaces (Nielsen et al., 2017), including NHS doctors (Teoh et al., 2020) and mental health professionals (Scanlan and Still, 2019). This study is the first to specifically describe a relationship between job resources and the wellbeing of psychological practitioners. According to the JDR model, high job demands predict poor health and wellbeing through increasing the likelihood of burnout (Schaufeli and Bakker, 2004a). Job resources may therefore minimise the impact of job demands on the health and wellbeing of psychological practitioners through buffering against burnout (Bakker et al., 2007). In contrast, the likelihood of burnout is increased when job demands are high and resources are low (Schaufeli et al., 2009).
Work engagement, psychological wellbeing and organisational outcomes
The United Kingdom’s four nations have adopted different organisational frameworks for the delivery of psychological therapies. The IAPT initiative in England is notable for adopting a centralised approach to the provision of psychological therapies and routine publication of clinical outcome data (Clark, 2018). Early investigations into the health and wellbeing of staff in IAPT services are consistent with the JDR model: practitioners are subject to high job demands, including organisational targets, large caseloads, and complex client presentations (Scott, 2018), and can experience high levels of burnout and emotional exhaustion (Steel et al., 2015; Westwood et al., 2017). Practitioners cite organisational support as a key factor in their experience of stress and burnout, including the quality and frequency of supervision, time for reflection and learning/training (Scott, 2018).
The JDR model proposes that job resources stimulate positive organisational outcomes through boosting engagement. Employees who are more engaged perform better at work and are less likely to consider leaving (Schaufeli and Bakker, 2004a). This is significant for IAPT services, which have reported high rates of turnover (Scott, 2018; NHS England, 2015). In healthcare settings, organisational performance is reflected in the quality and safety of patient care. Associations between work engagement and organisational performance have been demonstrated in NHS settings: The King’s Fund found that work engagement is associated with greater patient satisfaction, improved patient safety and reduced mortality (West and Dawson, 2002).
In summary, these findings support the construct validity of the questionnaire and indicate that resource and infrastructure support for CBT is associated with greater work engagement and psychological wellbeing among CBT practitioners. According to the JDR model, job resources may boost organisational outcomes and protect against staff burnout. This study suggests that job resources should be a core concern in services in which demands on staff are often high and primary outcomes are those of patient safety and quality of care.
Implications for practice and further research
This study has evidenced the validity and reliability of an index for measuring service support for CBT. Future studies may wish to replicate and extend this validation process with larger samples of CBT practitioners working across the UK. The results are consistent with the JDR model and indicate that resources and infrastructure supporting CBT therapists boost engagement with work and work-related wellbeing. This is especially important as services have moved online due to the pandemic. Future research could test the JDR model further by assessing whether job resources are predictive of wider organisational outcomes, such as turnover intention and performance at work, and whether service support for CBT predicts therapist competence and improved patient outcomes.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/S135246582200011X
Acknowledgements
Thanks to all participants.
Author Contributions
Ffion Evans: Conceptualization (equal), Data curation (equal), Formal analysis (equal), Investigation (equal), Methodology (equal), Project administration (equal), Writing – original draft (equal), Writing – review & editing (equal); Helen Penny: Writing – review & editing (equal); Louise Waddington: Conceptualization (equal), Supervision (equal), Writing – review & editing (equal).
Financial support
This research was completed as part of Clinical Psychology Doctoral Training funded by NHS Wales.
Conflicts of interest
Louise Waddington is a Trustee on the Board of the BABCP.
Ethics statements
The ethical principles of psychologists and Code of Conduct as set out by the BABCP and British Psychological Society (2018) were followed. Ethical approval was sought from the Cardiff University School of Psychology Ethics Committee (EC.19.09.10.5689A) to recruit therapists via the BABCP register, Facebook and Universities. NHS ethical approval was obtained to contact therapists on the RAPID trial (IRAS reference: 216979).
Data availability statement
Data are available on request from the authors.