Impact statement
This review highlights both the scarcity and the limited rigour of assessments of the implementation of mental health policies and plans reported across the globe. While the studies included in the review often addressed their research questions and assessment objectives in detail, they rarely presented their methods clearly and lacked sufficient descriptions of the tools used in the evaluation, making them difficult to interpret and to reproduce in other research contexts. Reports of assessments of entire policies were scarce. Instead, studies largely focused on assessing particular policy objectives or investigated specific questions of interest related to the implementation process. The review thus reveals gaps within implementation science in global mental health and calls for future efforts to better assess the impact of mental health policies so that lessons can be learned from their implementation.
Introduction
The need for urgent improvements in mental healthcare systems across the globe has long been recognised (WHO, 2001). The global burden of mental disorders is ever-increasing, and the treatment gap persists across all income settings. Mental health and well-being have been further compromised by the COVID-19 pandemic, and forthcoming challenges, such as climate change and associated migration and population displacement, are likely to exacerbate the existing burden. Mental health policies and plans (MHPPs) are important policy instruments to spark and concert action for change, yet the methodologies and tools to assess the extent of implementation of MHPPs have not been properly examined.
Since the launch of the World Health Organisation (WHO) Global and European Mental Health Action Plans in 2013, many countries have introduced new national MHPPs. Two-thirds of countries in the WHO European Region have either developed or updated their national mental health policies or laws since then (WHO, 2018). While this is a welcome development, it is important to ensure that these newly developed or updated MHPPs have a real and important impact on the mental health and well-being of populations.
However, both high-income countries (HICs) and low- and middle-income countries (LMICs) face a number of challenges in implementing their MHPPs (Zhou et al., Reference Zhou, Yu, Yang, Chen and Xiao2018), such as limited access to financial and human resources (WHO, 2015) and low public mental health literacy (Campion and Knapp, Reference Campion and Knapp2018). In LMICs, challenges including a lack of professional training among healthcare workers, opposition from key stakeholders and resistance to the decentralisation of mental health services are reported as more pronounced than in HICs (Saraceno et al., Reference Saraceno, van Ommeren, Batniji, Cohen, Gureje, Mahoney, Sridhar and Underhill2007). The processes countries follow to implement MHPPs, including identifying bottlenecks and facilitating factors, are largely unknown due to the lack of implementation and evaluation studies (Zhou et al., Reference Zhou, Yu, Yang, Chen and Xiao2018). For example, Czechia’s Strategy of Psychiatric Care Reform, which was launched in 2017/2018 and ended in 2022, contains measurable indicators for each of its 10 implementation projects; however, to date no large-scale evaluation has been conducted to assess its implementation. Against this background, and considering that many existing national mental health strategies, policies and action plans in the WHO European Region and beyond are now close to expiration, we aimed to map and analyse the tools and methodologies used to assess the extent and process of implementing national or regional MHPPs. This mapping review intends to inform policy development, implementation and evaluation in the WHO European Region.
Methods
We conducted a systematic search of peer-reviewed and grey literature to identify assessments of MHPPs. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (https://www.prisma-statement.org/) to report the screening process and findings (Moher et al., Reference Moher, Liberati, Tetzlaff and Altman2010; Page et al., Reference Page, Moher, Bossuyt, Boutron, Hoffmann, Mulrow, Shamseer, Tetzlaff, Akl, Brennan, Chou, Glanville, Grimshaw, Hróbjartsson, Lalu, Li, Loder, Mayo-Wilson, McDonald, McGuinness, Stewart, Thomas, Tricco, Welch, Whiting and McKenzie2021), and we registered the protocol in the PROSPERO database for systematic reviews (Registration #CRD42022290862).
Search strategy and selection criteria
We searched the following bibliographic databases: Global Health, Medline, Embase, Web of Science, Global Index Medicus, WHO MindBank and Open Grey, using four sets of search terms to identify relevant studies. These search terms included i) reform or policy or strategy or plan, ii) mental health or psych* or suicide or dementia, iii) implementation and iv) national or government*. The full search strategy is available in the Appendix. Additionally, we screened the reference lists of studies included from the main search as well as of the systematic review on mental health policies conducted by Zhou et al. (Reference Zhou, Yu, Yang, Chen and Xiao2018). That review provided an excellent foundation for understanding the challenges in implementing mental health policies; however, to the best of our knowledge, our study is the first of its kind to map existing methodologies and tools to assess the implementation of MHPPs.
We included studies that covered MHPPs, as well as policies covering specific mental health areas, including child and adolescent mental health, suicide prevention or dementia. These priority areas are included in the WHO European Framework for Action on Mental Health (2021–2025). We used the WHO definition of a mental health policy, which refers to ‘an organized set of values, principles and objectives for improving mental health and reducing the burden of mental disorders in a population and defines a vision for future action’. A mental health plan is defined as ‘a detailed scheme for implementing strategic actions that addresses the promotion of mental health, the prevention of mental health conditions, and treatment and rehabilitation’.
We excluded studies that 1) did not focus on implementation of policies at a national or regional level; 2) addressed policies falling outside the priority areas of the WHO European Framework for Action on Mental Health (e.g., substance use disorders, depression); 3) did not evaluate implementation of a mental health policy or 4) did not describe any methods of evaluation. Conference abstracts, study protocols, opinion papers and editorials were also excluded. Study selection was not limited by year of publication or country of origin. Multiple languages (English, French, Spanish and Russian) were searched to ensure that relevant studies were identified and captured.
All references identified through the databases were imported into the Rayyan online reference manager. After deleting duplicates, AA and HT independently screened titles and abstracts, followed by full-text examination of the retained articles. All disagreements were resolved by discussion.
Data extraction and analysis
We extracted and analysed data on both the characteristics of included studies and the methods of assessment. Characteristics of the studies included the WHO region of the studied country; country income classification according to the World Bank; study objectives; policy period; policy area (e.g., suicide prevention, dementia prevention or mental healthcare development) and the scope of evaluation of policy implementation, which we further divided into three categories: 1) progress, 2) process and 3) impact of implementation of MHPPs. We defined
• progress of implementation as a measure of the extent to which MHPPs were implemented;
• process of implementation as referring to assessing barriers and facilitators, active ingredients, drivers, cultures, structures, ethics, pace and timing, or other related factors influencing implementation of mental health policies and
• impact of policy as referring to achievements resulting from implementation of MHPPs.
We also extracted information about methods of evaluation, including study design (quantitative, qualitative or mixed methods), aspects of evaluation (such as quantitative indicators), theoretical frameworks used for evaluation and tools used to assess the extent and process of implementation of MHPPs (e.g., questionnaire, interview guide).
We distinguished studies that provided a description of the tools used in the evaluation (e.g., for interviews and focus group discussions, an interview guide showing the areas covered by the interviewer) from articles that simply mentioned the tool. Where possible, we contacted authors to request examples of the tools they had used; we did not do so when the tools had not been reported in detail. We also highlighted studies that indicated that tools were pretested and those that supplied detailed instructions or guides for using the methods and tools and for interpreting results. Additionally, we sought to distinguish studies that aimed at an extensive assessment of all policy objectives from those that assessed specific programmes, interventions or certain parts of a policy.
In reporting the results, we adopted the structure used in the review of methods and tools to assess food and health policies by Phulkerd et al. (Reference Phulkerd, Lawrence, Vandevijvere, Sacks, Worsley and Tangcharoensathien2015). Findings are presented separately by progress, process and impact. Each section includes an analysis of the policy areas of the studies, the indicators measured and, finally, the tools and methods used to assess policy implementation.
Results
The electronic database search identified 7,298 studies. After removing duplicates, 3,120 unique items remained. Unrelated records were excluded based on title/abstract screening, leaving 88 full texts for further screening, of which 22 studies were eligible for inclusion. Another 26 studies were selected from either reference lists of included articles or publications known to the authors, including 7 grey literature publications. In total, 48 studies were included (Figure 1).
HICs were the most represented (n = 32 studies), followed by five studies on upper-middle-income (UMIC) and four on lower-middle-income countries (LMIC). Six articles focused on low-income countries (LIC). The European Region had the most publications (n = 22), followed by the African (n = 12) and Western Pacific (n = 9) regions; three studies were from the Americas, and one each from the South-East Asia and Eastern Mediterranean regions (Table 1).
We found 40 studies that assessed the implementation of mental health policies; seven studies assessed the implementation of suicide prevention strategies in Australia, Japan (n = 2), Northern Ireland (UK), Scotland (UK; n = 2) and the United States of America; and only one study assessed dementia prevention policy in three European countries (Denmark, Germany, Italy). We found no study assessing the implementation of child and adolescent mental health policy. All studies, with the exception of one, focused on policies implemented at the national level. One study assessed the policy at both national and district levels (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008). Only three studies presented results of an assessment of entire MHPPs (Australian Health Ministers’ Advisory Council, 1997; Department of Health, Social Services and Public Safety, 2012; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013). More details on each study are provided in Tables 2–4.
Progress of policy implementation
Overview
We identified 14 studies that assessed progress of implementation, usually together with process assessment (n = 3) (Mwanza et al., Reference Mwanza, Sikwese, Mwanza, Mayeya, Lund, Bird, Drew, Faydi, Funk and Green2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009; Omar et al., Reference Omar, Green, Bird, Mirzoev, Flisher, Kigozi, Lund, Mwanza and Ofori-Atta2010) or impact assessment (n = 5) (Australian Health Ministers’ Advisory Council, 1997; Hickie and Groom, Reference Hickie and Groom2004; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013; Nakanishi et al., Reference Nakanishi, Yamauchi and Takeshima2015; Nakanishi and Endo, Reference Nakanishi and Endo2017) or both (n = 2) (Reid Howie Associates, 2006; Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008). Two studies assessed solely the progress of policy implementation (Dlouhy, Reference Dlouhy2014; Sheehan et al., Reference Sheehan, Griffiths, Rickwood and Carron-Arthur2015). Nine studies assessed policy implementation in HICs, four in UMICs, one in an LMIC and three in LICs. Most (86%) of these studies focused on a single country, and two were multinational. Quantitative methods (e.g., surveys, questionnaires), qualitative methods (e.g., interviews, focus group discussions) or a combination of both were used, with qualitative methods substantially prevailing.
Policy areas
Five studies, from HICs, focused on suicide prevention policies. The rest focused on mental health policies, which mostly included overarching mental healthcare development strategies. The contents of these overarching policies included mental health promotion and prevention of mental health problems (Hickie and Groom, Reference Hickie and Groom2004; Dlouhy, Reference Dlouhy2014; Nakanishi and Endo, Reference Nakanishi and Endo2017), improving quality of care (Hickie and Groom, Reference Hickie and Groom2004; Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Nakanishi and Endo, Reference Nakanishi and Endo2017), strengthening research (Hickie and Groom, Reference Hickie and Groom2004) and deinstitutionalisation and development of community care (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013). One study specifically looked at the integration of mental health services into primary healthcare (Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009).
Aspects measured
Progress of policy implementation was expressed as a) whether the policy had been implemented at all or b) the degree of its implementation. Assessment of existence relied on qualitative or survey questions on whether any implementation activities had been carried out. Eight studies examined the existence of policy implementation (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Mwanza et al., Reference Mwanza, Sikwese, Mwanza, Mayeya, Lund, Bird, Drew, Faydi, Funk and Green2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009; Omar et al., Reference Omar, Green, Bird, Mirzoev, Flisher, Kigozi, Lund, Mwanza and Ofori-Atta2010; Dlouhy, Reference Dlouhy2014; Nakanishi et al., Reference Nakanishi, Yamauchi and Takeshima2015; Nakanishi and Endo, Reference Nakanishi and Endo2017; Substance Abuse and Mental Health Services Administration, 2017). Degree of implementation was measured in a variety of ways in eight studies, seven of which focused on suicide prevention policies. For example, one study assessed psychiatrists’ perceptions of the degree to which the key aspects of the reform had been implemented (Hickie and Groom, Reference Hickie and Groom2004); others measured progress by assessing whether a tangible outcome (i.e., a specific output) was produced according to an implementation plan (e.g., completion of a report on training programmes) or by the number or percentage of activities completed (e.g., number of training programmes provided, number of staff attending an event) (Department of Health, Social Services and Public Safety, 2012; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013; Sheehan et al., Reference Sheehan, Griffiths, Rickwood and Carron-Arthur2015; Substance Abuse and Mental Health Services Administration, 2017). One study examined the level of usage of projects, expressed as the number of referrals to the programme or the time users spent in the programme (Reid Howie Associates, 2006). Additionally, another study investigated patterns of implementation, that is, the frequency of the types of suicide prevention programmes authorities chose to implement in different prefectures (Nakanishi et al., Reference Nakanishi, Yamauchi and Takeshima2015). Yet another study compared the areas of focus (e.g., addictions, unemployment) addressed by different authorities across the country in implementing a national suicide prevention strategy (Nakanishi and Endo, Reference Nakanishi and Endo2017).
Methods and tools
Various methods were employed to measure progress of implementation. Four studies used quantitative methods: surveys or questionnaires (Hickie and Groom, Reference Hickie and Groom2004; Dlouhy, Reference Dlouhy2014; Nakanishi et al., Reference Nakanishi, Yamauchi and Takeshima2015; Sheehan et al., Reference Sheehan, Griffiths, Rickwood and Carron-Arthur2015). Three studies used qualitative semi-structured interviews (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009; Omar et al., Reference Omar, Green, Bird, Mirzoev, Flisher, Kigozi, Lund, Mwanza and Ofori-Atta2010), and one study complemented qualitative interviews with a literature review (Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009). Five studies applied mixed-methods approaches (Mwanza et al., Reference Mwanza, Sikwese, Mwanza, Mayeya, Lund, Bird, Drew, Faydi, Funk and Green2008; Department of Health, Social Services and Public Safety, 2012; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013; Nakanishi and Endo, Reference Nakanishi and Endo2017; Substance Abuse and Mental Health Services Administration, 2017), combining qualitative methods (semi-structured interviews, focus group discussions, desk review, stakeholder consultations) with quantitative surveys or statistical analyses. With one exception, all surveys used were ad hoc self-administered questionnaires designed specifically for the evaluation; the WHO-AIMS Instrument and Survey Checklist was the only standardised tool used (Mwanza et al., Reference Mwanza, Sikwese, Mwanza, Mayeya, Lund, Bird, Drew, Faydi, Funk and Green2008). Study participants varied across studies and included, depending on the context, users and families, service providers, traditional healers, agencies and organisations involved in the implementation, and government and international policymakers.
Five studies used a single method, such as a survey or semi-structured interviews with key stakeholders, while the rest employed more than one method, combining questionnaires, qualitative interviews and focus group discussions, semi-structured discussions, literature and document review, and quantitative suicide trend data. Only nine studies provided details on the content of their tools, of which two provided their interview and focus group guide templates in the Supplementary Materials (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009).
Process of policy implementation
Overview
Overall, 27 studies assessed the process of policy implementation. Of these, 16 assessed only the implementation process; seven combined an assessment of the implementation process with an evaluation of the progress or impact of implementation and four assessed all three (progress, process and impact). Twenty-three were single-country studies and four evaluated more than one country. Nearly half (n = 13) of the studies were conducted in HICs; five in UMICs; four each in LMICs and LICs; and one multinational study assessed both a UMIC and a LIC.
Policy areas
Of the 27 studies, 22 assessed mental health policies, four assessed suicide prevention strategies (Reid Howie Associates, 2006; Mackenzie et al., Reference Mackenzie, Blamey, Halliday, Maxwell, McCollam, McDaid, MacLean, Woodhouse and Platt2007; Department of Health, Social Services and Public Safety, 2012; Substance Abuse and Mental Health Services Administration, 2017) and one focused on national dementia prevention policies (Boeree et al., Reference Boeree, Zoller and Huijsman2021). Most of the mental health policies were national development strategies aimed at the decentralisation and deinstitutionalisation of mental healthcare and the establishment of community-based systems. Other studies investigated policies on mental health system funding change (Aviram and Azary-Viesel, Reference Aviram and Azary-Viesel2018) or looked at the implementation process of multiple consecutive deinstitutionalisation reforms (from 1950 onwards) (Jones, Reference Jones2000). One study focused on a new policy framework that would expand the understanding of mental ill health within the country’s welfare system and investigated how the process was dealt with by the implementing agencies (Fjellfeldt, Reference Fjellfeldt2020). Two studies specifically looked at policies centred on the service development process (Stanley-Clarke et al., Reference Stanley-Clarke, Sanders and Munford2014) and the introduction of a new model of care (Park et al., Reference Park, Lencucha, Mattingly, Zafran and Kirmayer2015).
Aspects measured
Most studies assessed challenges or barriers and/or facilitating factors of policy implementation (n = 24). Specific implementation determinants measured included context-dependent features of the policy implementation process, such as the level of public and political support for mental health reform and the pace of its implementation (Ryan et al., Reference Ryan, Nwefoh, Aguocha, Ode, Okpoju, Ocheche, Woyengikuro, Abdulmalik and Eaton2020); ethical tensions arising during policy implementation (Park et al., Reference Park, Lencucha, Mattingly, Zafran and Kirmayer2015); key informants’ thoughts and feelings associated with the implementation process of a new policy framework (Fjellfeldt, Reference Fjellfeldt2020) and policy levers through which the mental health system reform was to be implemented (Grace et al., Reference Grace, Meurk, Head, Hall, Carstensen, Harris and Whiteford2015). Two studies used a theoretical framework on drivers and constraints that affect policy development and implementation as a conceptual background to guide the evaluation. Boeree et al. (Reference Boeree, Zoller and Huijsman2021) described the primary drivers for implementation as follows: 1) planning and infrastructure; 2) individual, group, organisational and systemic factors, as well as contextual factors; 3) the underlying theory and process of change involving all partners and 4) performance measures and evaluation. Doku et al. (Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008) described three major constraints for effective implementation: 1) lack of strategic planning; 2) an inappropriate health system to support the policy and 3) lack of support or resistance to implementation, partly due to stigma associated with mental illness.
Other evaluation frameworks were more comprehensive and included various elements such as context, content and process or stream (such as in Kingdon’s conceptualisation of policy or Walt’s analytical framework), and the various actors involved in mental health policy (De Vries and Klazinga, Reference De Vries and Klazinga2006; Omar et al., Reference Omar, Green, Bird, Mirzoev, Flisher, Kigozi, Lund, Mwanza and Ofori-Atta2010; Grace et al., Reference Grace, Meurk, Head, Hall, Carstensen, Harris and Whiteford2015).
One study used a manualised case study methodology to organise and integrate data from various sources across domains of interest. The collection of data was accompanied by the overarching research questions: ‘Is this programme working? Why or why not?’ along with a description of strengths, weaknesses, opportunities and threats (Ryan et al., Reference Ryan, Nwefoh, Aguocha, Ode, Okpoju, Ocheche, Woyengikuro, Abdulmalik and Eaton2020). Similarly, in a study on ethical tensions that may arise during policy implementation, a specific ethical framework with three analytic levels: i) person-focused, ii) event-focused and iii) discursive practices was developed to capture the experiences of participants involved in the programme (Park et al., Reference Park, Lencucha, Mattingly, Zafran and Kirmayer2015).
Methods and tools
Studies used qualitative (n = 22) or mixed-methods (n = 5) techniques to investigate the implementation process. Every study in this review employed at least some form of qualitative method: key informant interviews, focus group discussions, other communication with stakeholders (discussions, meetings, forums, Theory of Change workshops), ethnography or observation, or document review. One study used a free-listing technique with broad open-ended questions to elicit a comprehensive list of implementation barriers (Abdulmalik et al., Reference Abdulmalik, Kola and Gureje2016). Quantitative tools included self-reported surveys (e.g., to measure health workers’ knowledge of the content of the mental health policy). One quantitative survey provided primary healthcare staff with a list of challenges to integrating mental health into primary healthcare, with dichotomous answers (yes/no or agree/disagree) on potential challenges/barriers that had been reported during qualitative interviews (Abdulmalik et al., Reference Abdulmalik, Kola and Gureje2016). Other quantitative methods measured the impact of implementation (see below for more details). Participants varied from study to study and included service users, service providers, healthcare managers, media representatives and policymakers at the macro level.
Nine studies used a single method, such as semi-structured qualitative interviews (n = 7), document review (n = 1) or a workshop with stakeholders (n = 1), while the rest used more than one method, in various combinations of qualitative interviews, focus group discussions, document reviews, meetings with stakeholders, observations and quantitative surveys. Only 10 studies provided a rationale or details about the tools they employed, of which three were available (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009; Bikker et al., Reference Bikker, Lesmana and Tiliopoulos2020) (see Supplementary Material).
Impact
Overview
Of the 27 studies assessing the impact of policy implementation, 16 evaluated only impact, five evaluated impact together with progress, four evaluated impact together with process and two evaluated all three. Twenty-three studies were published in HICs and only four in LMICs. With the exception of two, all studies investigated single countries only. Eight studies applied qualitative methods (e.g., interviews, focus group discussions, document review), 11 studies used quantitative methods (e.g., survey) and eight studies employed a mixed-methods approach.
Policy areas
Most studies assessing the impact of policy implementation focused on general mental health policy (n = 22), while four studies assessed suicide prevention strategies and one study assessed dementia policies. Most mental health policies encompassed the goals of deinstitutionalisation with provision of more community services (n = 20), of which six studies focused on mental health reform in Italy (Lovell, Reference Lovell1986; Williams et al., Reference Williams, Salvia and Tansella1986; Palermo, Reference Palermo1991; Barbato, Reference Barbato1998; De Girolamo and Cozza, Reference de Girolamo and Cozza2000; Munizza et al., Reference Munizza, Gonella, Pinciaroli, Rucci, Picci and Tibaldi2011) and four studies investigated mental health policy in Greece (Madianos, Reference Madianos2002; Karastergiou et al., Reference Karastergiou, Mastrogianni, Georgiadou, Kotrotsios and Mauratziotou2005; Madianos and Christodoulou, Reference Madianos and Christodoulou2007; Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013). Two other policies were covered by a study in which a mental health policy in the United Kingdom sought to introduce ‘tiered’ prioritisation of patients (Bindman et al., Reference Bindman, Beck, Glover, Thornicroft, Knapp, Leese and Szmukler1999) and a study in Australia introducing new ‘priority themes’ for mental healthcare development (Hickie and Groom, Reference Hickie and Groom2004). Other goals covered by policies included increasing public–private partnerships to deliver mental health services (Ryan et al., Reference Ryan, Nwefoh, Aguocha, Ode, Okpoju, Ocheche, Woyengikuro, Abdulmalik and Eaton2020), promoting integrated primary mental healthcare (Petersen et al., Reference Petersen, Ssebunnya, Bhana and Baillie2011), improving quality of care (Whiteford et al., Reference Whiteford, Buckingham and Manderscheid2002; Hickie and Groom, Reference Hickie and Groom2004; Sharkey, Reference Sharkey2017; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021), ensuring the translation of research evidence into practice (Hickie and Groom, Reference Hickie and Groom2004; Sharkey, Reference Sharkey2017; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021), raising awareness and reducing mental illness stigma (Sharkey, Reference Sharkey2017; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021), promoting mental health and preventing mental disorders (Australian Health Ministers’ Advisory Council, 1997; Hickie and Groom, Reference Hickie and Groom2004) and protecting consumer rights (Australian Health Ministers’ Advisory Council, 1997; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021).
Aspects measured
The identified studies mainly examined the effectiveness and appropriateness of policies. Two studies measured the appropriateness of policy implementation (Australian Health Ministers’ Advisory Council, 1997; Substance Abuse and Mental Health Services Administration, 2017), defined as whether the strategy’s goals and actions remained relevant and suitable to the implementation being carried out.
In terms of effectiveness, the aspects measured varied substantially between the studies, depending on the policy goals and planned deliverables. For example, deinstitutionalisation policies measured changes in psychiatric bed capacities in various settings (Williams et al., Reference Williams, Salvia and Tansella1986; De Girolamo and Cozza, Reference de Girolamo and Cozza2000; Vázquez-Barquero et al., Reference Vázquez-Barquero, García and Torres-González2001; Madianos, Reference Madianos2002; Whiteford et al., Reference Whiteford, Buckingham and Manderscheid2002; Madianos and Christodoulou, Reference Madianos and Christodoulou2007), number of referrals to community services (De Girolamo and Cozza, Reference de Girolamo and Cozza2000; Vázquez-Barquero et al., Reference Vázquez-Barquero, García and Torres-González2001; Sharkey, Reference Sharkey2017), number of deinstitutionalised patients (Loukidou et al., Reference Loukidou, Mastroyannakis, Power, Thornicroft, Craig and Bouras2013), ratio of psychiatric patients identified by a GP (Vázquez-Barquero et al., Reference Vázquez-Barquero, García and Torres-González2001), transinstitutionalisation of patients to other facilities (Lovell, Reference Lovell1986; Barbato, Reference Barbato1998), prevalence of homeless people with mental disorders and criminalisation of the mentally ill (Barbato, Reference Barbato1998), development of various decentralised services (Lovell, Reference Lovell1986; Williams et al., Reference Williams, Salvia and Tansella1986; De Girolamo and Cozza, Reference de Girolamo and Cozza2000; Madianos and Christodoulou, Reference Madianos and Christodoulou2007; Munizza et al., Reference Munizza, Gonella, Pinciaroli, Rucci, Picci and Tibaldi2011), epidemiological data on incidence and treated prevalence (Munizza et al., Reference Munizza, Gonella, Pinciaroli, Rucci, Picci and Tibaldi2011), change in suicide trends before and after the psychiatric reform (Williams et al., Reference Williams, Salvia and Tansella1986; Barbato, Reference Barbato1998; De Girolamo and Cozza, Reference de Girolamo and Cozza2000) and changes in the quality of care (De Girolamo and Cozza, Reference de Girolamo and Cozza2000; Rey et al., Reference Rey, Walter and Giuffrida2004), in psychiatric practice and in access to care (Rey et al., Reference Rey, Walter and Giuffrida2004). Other measures included change in service expenditures (Whiteford et al., Reference Whiteford, Buckingham and Manderscheid2002), involvement of service users and carers (Whiteford et al., Reference Whiteford, Buckingham and Manderscheid2002), clinical outcomes in hospital residents and community patients and quality of life of people with mental disorders (De Girolamo and Cozza, Reference de Girolamo and Cozza2000); change in public stigma (Vázquez-Barquero et al., Reference Vázquez-Barquero, García and Torres-González2001; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021) and change in the number of published scientific articles (to measure the research potential) (Sharkey, Reference Sharkey2017). Some studies used subjective measures, such as the personal opinions of stakeholders about the policy implementation (Boeree et al., Reference Boeree, Zoller and Huijsman2021).
Studies on specific programmes related to policy implementation measured the number of clients enrolled in a programme (Bindman et al., Reference Bindman, Beck, Glover, Thornicroft, Knapp, Leese and Szmukler1999; Ryan et al., Reference Ryan, Nwefoh, Aguocha, Ode, Okpoju, Ocheche, Woyengikuro, Abdulmalik and Eaton2020).
Studies on suicide prevention strategies measured the incidence of suicide across prefectures in Japan where different prevention programmes were implemented (Nakanishi et al., Reference Nakanishi, Yamauchi and Takeshima2015) or the level of support perceived by bereaved families, to assess the effectiveness of support programmes and the resources allocated to implementation of the suicide prevention strategy in Northern Ireland (Department of Health, Social Services and Public Safety, 2012).
An Australian study used a framework with four evaluation focus areas: rights of consumers and carers, mixed services, linkages between mental health services and other sectors, and promotion and prevention. Each focus area contained a number of questions to be answered during the evaluation (Australian Health Ministers’ Advisory Council, 1997). Similarly, a Northern Ireland (UK) study defined, for each predefined evaluation question, an evaluation area, what to measure and recommended methods and tools for evaluation. For example, effectiveness and impact for individuals and families was defined as an evaluation area, and for that particular area the evidence base and families’ service use experiences were investigated using a pre-developed survey, focus groups with families and a review of available published evaluations of already completed initiatives (Department of Health, Social Services and Public Safety, 2012).
Methods and tools
Nine studies applied qualitative methods, 11 used quantitative methods and another seven used a mixed-methods approach. Qualitative semi-structured interviews and document reviews were used most commonly, mentioned in six and seven studies, respectively. Other qualitative methods included focus group discussions, consultations with experts, meetings with officials and observation.
Among the quantitative methods, statistical analyses of epidemiological data (n = 7) were used most commonly. Other methods included survey questionnaires (n = 9) and specific scales, such as the Community Attitudes towards Mental Illness scale (Taylor and Dear, Reference Taylor and Dear1981) and the Reported and Intended Behaviour Scale (Evans-Lacko et al., Reference Evans-Lacko, Rose, Little, Flach, Rhydderch, Henderson and Thornicroft2011), used to measure public stigma (Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021). Quantitative surveys asked consumers and carers to rate their experience of healthcare (Hickie and Groom, Reference Hickie and Groom2004), to indicate whether certain aspects of psychiatric practice had increased, remained the same or decreased (e.g., income, satisfaction, patients’ illness severity) or whether the perceived quality of care had improved or deteriorated (Rey et al., Reference Rey, Walter and Giuffrida2004).
Twelve studies used only one method, while the rest employed a combination of the above-mentioned methods, such as surveys and statistical analyses, literature reviews and meetings with officials. Nine studies described the content of their tools, of which only three are available (see Supplementary Material).
Discussion
To our knowledge, this is the first study to systematically map and analyse methods and tools used to assess the implementation of MHPPs. We found no comprehensive, high-quality, peer-reviewed assessment of the implementation of MHPPs as such. Given that MHPPs are important instruments to improve mental healthcare and the well-being of populations, rigorous peer-reviewed assessment of their implementation is crucial so that important lessons can be learned and mental health systems improved. Studies included in our review placed emphasis on the presentation of results but lacked rigorous methodological description, which makes their tools and methods unclear. Less than half of the included studies provided details about the tools they used for data collection (Munizza et al., Reference Munizza, Gonella, Pinciaroli, Rucci, Picci and Tibaldi2011; Bikker et al., Reference Bikker, Lesmana and Tiliopoulos2020; Winkler et al., Reference Winkler, Formánek, Mladá and Evans Lacko2021). Only three studies pretested or piloted their tools. Very few publications provided a full description of their tools. For instance, although a substantial majority of studies employed interview and focus group guides and questionnaires that were specifically tailored to the evaluation purposes, they failed to provide samples of the questions used. Most tools were not standardised, which is likely due to the broad nature of MHPPs, the comprehensiveness of their specific policy areas and the diversity of contexts in which they have been implemented.
Only three studies assessed all three categories of implementation (progress, process and impact), and all of these were non-academic assessments of suicide prevention strategies. Similarly, assessments of entire MHPPs, as opposed to only certain parts of them, were rare. Clearly, a full and comprehensive assessment of an overarching policy such as a mental health reform can be a lengthy (impact may only be measurable decades into a reform) and resource-demanding process. In contrast, we found that studies focused on specific evaluation questions related to MHPPs implementation (e.g., challenges associated with the reform in various contexts or opinions about the implementation progress; development of public stigma; changes in suicide rates, etc.) were mostly published in academic journals. Such assessments, with a primary focus on only one or several aspects of MHPPs, are certainly a valid alternative strategy, provided that the rigour and transparency of reported methods and results are ensured. However, these smaller assessments of an MHPP have to be brought together into comprehensive reports of MHPPs implementation and made accessible to readers. In any case, an evaluation strategy should be an integral part of MHPPs.
Given the broad nature and complexity of MHPPs, there is likely a publication bias whereby studies with narrower research questions get published in academic journals, whereas extensive assessments and evaluations might have been published only as project reports or policy papers. We identified seven such reports: extensive national evaluation reports of countries’ MHPPs or suicide prevention strategies (n = 5), one Master’s dissertation on barriers to the integration of mental health into primary care in Nigeria (Abdulmalik et al., Reference Abdulmalik, Kola and Gureje2016) and one project report on suicide prevention in prisons in Scotland, UK (Reid Howie Associates, 2006). The format of such studies usually allows for a situation analysis and a more detailed description of methodology to be included. However, if made available online, such reports tend to be replaced or become inaccessible over the years, may not be identifiable through traditional electronic database searches and are usually not peer-reviewed.
We used a broad definition to assess policy implementation, focusing on three categories: progress, process and impact. Studies assessing the progress of implementation usually collected data through qualitative and quantitative questionnaires enquiring about progress or level of implementation against policy targets or goals. Findings of the evaluations demonstrated that implementation of MHPPs, in terms of target achievement or types of programmes adopted, was most often partial. For example, in Northern Ireland, UK, only about a fifth of actions were fully progressing to plan, while the rest were in moderate or limited progress. Similarly, in Australia, a study showed that fewer than two-thirds of activities could be measured, of which 42% were fully implemented and 20% partially implemented. Studies in our review show that some activities were implemented more effectively than others. In Greece, even though implementation of many activities of the national mental health reform was successful, the rate of implementation varied substantially between rural and urban areas. In Japan, where authorities were left to choose suicide prevention activities on their own, most preferred to implement ‘public awareness campaign’ and ‘training of community service providers’ over ‘face-to-face counselling’ or the introduction of ‘trauma-informed policies and practices’. Such results require deeper investigation into the reasons for and effects of certain patterns of implementation. For example, clearly defined one-off projects, activities with specified funding attached and activities with specific agencies responsible for their implementation are more likely to be fully implemented, whereas activities that are less tangible, and thus harder to define, and that lack a lead agency can be more difficult to implement (Sheehan et al., Reference Sheehan, Griffiths, Rickwood and Carron-Arthur2015).
Why certain activities were implemented over others can be understood through evaluation of the implementation process, identifying challenges and facilitators as well as stakeholders’ views on the process. We found that qualitative research methods, such as Theory of Change workshops, stakeholder meetings, qualitative interviews and focus group discussions, were frequently employed to understand barriers to and facilitators of MHPPs’ implementation. The most frequently cited barriers to implementation were poor dissemination of implementation guidelines, inadequate resources (e.g., financial, human or infrastructural) to support the reform process and resistance to change. Some studies indicated low prioritisation of mental health and stigma as barriers; others reported weak management and poor intersectoral collaboration, a difficult political context and the complex nature of interventions as factors hindering policy implementation. In LMICs, these challenges are more numerous and more pronounced than in HICs, especially in terms of funding, human resources and administration (Zhou et al., Reference Zhou, Yu, Yang, Chen and Xiao2018). In contrast, a clear understanding of roles and responsibilities for implementation and ensuring coordination between different stakeholders were identified as facilitating factors.
Context is crucial for appropriate assessment and understanding of the implementation process. Studies largely adapted their evaluation questions to the features of the political, social or economic environment (Petersen et al., Reference Petersen, Ssebunnya, Bhana and Baillie2011; Ryan et al., Reference Ryan, Nwefoh, Aguocha, Ode, Okpoju, Ocheche, Woyengikuro, Abdulmalik and Eaton2020). In post-conflict areas such as Bosnia and Herzegovina, foreign influence was identified as a central theme in the implementation of mental health reform, which raised questions about sustainability. In resource-constrained contexts, prioritisation of mental healthcare can be challenging, especially when the burden of physical health conditions is high, which hindered implementation of MHPPs (Doku et al., Reference Doku, Ofori-Atta, Akpalu, Read, Osei, Ae-Ngibise, Awenva, Lund, Flisher, Petersen, Bhana, Bird, Drew, Faydi, Funk, Green and Omar2008; Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009; Ssebunnya et al., Reference Ssebunnya, Kigozi, Kizza and Ndyanabangi2010). In decentralised healthcare systems, such as in South Africa, translation of national policies into strategic plans appropriate to provincial or district contexts appears to be a key factor for ensuring their successful implementation (Draper et al., Reference Draper, Lund, Kleintjes, Funk, Omar and Flisher2009).
Assessment of the impact of implementation was largely performed via both quantitative methods, most often comparing pre- and post-reform data, and qualitative methods, most often by asking stakeholders about their perceptions of the changes brought about by MHPPs.
The relatively poor assessment of implementation of MHPPs contrasts with the more advanced tools used to monitor and guide the implementation of policies at all levels in other public health areas, such as tobacco (WHO, 2013; Cox et al., Reference Cox, Lutz, Webb, Sahal-Estime, Small and Trivedi2014) and alcohol control (Rekve, Reference Rekve2011), breastfeeding promotion (WHO, 2003; The International Baby Food Action Network Asia, 2008; WHO, 2013) or family planning and reproductive health (Bhuyan et al., Reference Bhuyan, Jorgensen and Sharma2010). For instance, the Policy Implementation Assessment Tool was developed to guide assessments of national family planning and reproductive health policy implementation. This tool includes instructions on policy assessment at various levels, from stakeholder mapping to organising and analysing data (Bhuyan et al., Reference Bhuyan, Jorgensen and Sharma2010). It enables information to be gathered via multifaceted processes in a systematic, user-friendly manner. The tool consists of an interview guide divided into eight sections that focus on assessing the context, the process of implementation and the appropriateness of policy strategies in relation to their objectives. This tool could potentially be adjusted for mental health policies.
Limitations
We recognise that our search strategy was not able to capture all relevant studies, particularly those focused on the impact of policy implementation. Potentially valuable information could have been missed where studies were published as project reports on certain areas of policy or in academic journals without mentioning their relation to a specific policy.
There is a lack of information on the tools used in most studies included in our review; as such, we were unable to assess the quality of the evaluation methods. Instead, we provided information on whether studies described their tools sufficiently and whether they were interpretable.
Due to the broad scope of this review, we were unable to compare tools across contexts and applications. Further research is necessary to determine which tools are optimal for assessing the implementation of MHPPs and to develop recommendations and guidance on evaluation of MHPPs.
Conclusions and recommendations
Our review has highlighted substantial knowledge gaps in assessing the implementation of MHPPs. Our findings should contribute to policy dialogues on the development, implementation and assessment of implementation of national mental health strategies. Efforts should be made to consolidate available methods and tools into clear methodologies that would address various stages and objectives of implementation taking into consideration a variety of possible policy goals. Such a consolidated methodology might result in a checklist that would mirror each objective of MHPPs and that would allow for various contexts to be taken into account as well as for experiences and lessons from implementation and evaluation to be shared.
Based on our review, we recommend the following:
1. A strategy or plan for evaluating implementation needs to be an integral part of MHPPs, and it needs to specify responsibilities and funding.
2. Future evaluations of MHPPs implementation need to be more transparent in reporting details, especially on the tools and methodologies used, and, where possible, make these accessible to readers.
3. Since resources are constrained in all settings, partnerships need to be built to ensure high-quality evaluations. Such partnerships might include universities, research institutes and other organisations, both nationally and internationally.
4. Evaluations can be divided into smaller studies focused on specific aspects of MHPPs; however, findings from these studies should be brought together into comprehensive evaluation reports of MHPPs implementation. Both the smaller studies and the comprehensive evaluation reports should be published in peer-reviewed journals to ensure their accessibility and impact.
5. More research is needed to understand the current implementation of MHPPs so that lessons can be learned.
Open peer review
To view the open peer review materials for this article, please visit http://doi.org/10.1017/gmh.2023.3.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/gmh.2023.3.
Author contributions
A.A.A., P.W. and Y.Y. initiated, planned and designed the study. A.A.A. and H.T. conducted the literature review and prepared the first draft of the paper. A.K. and Z.G. participated in designing the study and contributed to writing and proofreading the draft. P.W., Y.Y. and L.L. supervised the whole study and critically revised the manuscript. M.R., C.R., A.M.T.I. and J.M. critically revised the manuscript and contributed to the final draft of the paper.
Financial support
This work was supported by the World Health Organisation Regional Office for Europe.
Competing interest
The authors declare none.