
EVALUATION OF PATIENT AND PUBLIC INVOLVEMENT INITIATIVES IN HEALTH TECHNOLOGY ASSESSMENT: A SURVEY OF INTERNATIONAL AGENCIES

Published online by Cambridge University Press:  10 November 2017

Laura Weeks
Affiliation:
Canadian Agency for Drugs and Technologies in Health (CADTH), lauraw@cadth.ca
Julie Polisena
Affiliation:
Canadian Agency for Drugs and Technologies in Health (CADTH)
Anna Mae Scott
Affiliation:
Centre for Research in Evidence Based Practice, Bond University
Anke-Peggy Holtorf
Affiliation:
Health Outcomes Strategies GmbH
Sophie Staniszewska
Affiliation:
University of Warwick
Karen Facey
Affiliation:
University of Edinburgh

Abstract

Objectives: Although there is increased awareness of patient and public involvement (PPI) among health technology assessment (HTA) organizations, evaluations of PPI initiatives are relatively scarce. Our objective as members of Health Technology Assessment International's (HTAi's) Patient and Citizen Involvement Group (PCIG) was to advance understanding of the range of evaluation strategies adopted by HTA organizations and their potential usefulness.

Methods: In March 2016, a survey was sent to fifty-four HTA organizations through the International Network of Agencies for Health Technology Assessment (INAHTA) and contacts of members of HTAi's PCIG. Respondents were asked about their organizational structure; how patients and members of the public are involved; whether and how PPI initiatives have been evaluated; and, if so, which facilitators and challenges to evaluation were found and how results were used and disseminated.

Results: Fifteen programs from twelve countries responded (response rate 27.8 percent); these programs involved patients (14/15) and members of the public (10/15) in HTA activities. Seven programs evaluated their PPI activities, including evaluations of participant satisfaction (5/7), process (5/7), and impact (4/7). Evaluation results were used to improve PPI activities, identify education and training needs, and direct strategic priorities. Facilitators and challenges revolved around the need for stakeholder buy-in, sufficient resources, senior leadership, and including patients in evaluations.

Conclusions: A small but diverse set of HTA organizations evaluate their PPI activities using a range of strategies that reflect the range of rationales and approaches to PPI in HTA. It will be important for HTA organizations to draw on evaluation theories and methods as they develop and refine these strategies.

Type
Policies
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright © Cambridge University Press 2017

Increasingly, health technology assessment (HTA) organizations are involving patients, members of the public, or both, in some aspect of their HTA processes (1-8). Patient and public involvement (PPI) includes a range of strategies used across the HTA and decision-making process with the goal of informing (e.g., information broadcasts, Web site), consulting (e.g., survey, focus group), or actively engaging with patients or members of the public for the purpose of research, policy, or program development (e.g., citizen jury, advisory committee participation) (9-12).

Although there may be increased awareness of PPI among HTA organizations, published evaluations of PPI initiatives are still relatively scarce (12). Where such evaluations do exist, more tend to focus on evaluating the impact of PPI (1;2;4) than on the process (13), and they tend to be circumscribed to a single HTA topic or program (1;2;4;14). In two surveys of member agencies of the International Network of Agencies for Health Technology Assessment (INAHTA), in 2005 (15) and 2010 (16), eight and six agencies, respectively, reported having evaluated their consumer involvement activities, although the definition of evaluation was broad and included limited activities such as noting the type of input, the number of submissions received, and the influence of input on HTA quality and relevance. The Patient and Citizen Involvement Group of Health Technology Assessment International (HTAi) leads an initiative to document and share good practice examples of PPI in HTA (17). This initiative has helped document a variety of strategies used across eleven international HTA programs and share perceptions of best practice. While it is possible that some descriptions are out of date, only three of the eleven programs that contributed good practice examples also reported evaluating their strategies or measuring their impact.

The lack of published evaluations of PPI initiatives in HTA limits the refinement of theory to guide best practices. Whom best to include in HTA, how best to recruit, consult, involve, or engage them, and with what support remain somewhat unanswered questions (4;18-20). We conducted a survey of international HTA agencies to address the need to better understand whether and how HTA programs are evaluating their PPI strategies and with what results, as well as perceived facilitators of and barriers to evaluation.

METHODS

A questionnaire (Supplementary File 1) was developed, informed by studies on the impact of PPI on health and social care research and their stakeholders (18;21;22). Questions sought information on: (i) HTA organizations, including their structure and jurisdiction; (ii) how patients and members of the public are involved in HTA processes (e.g., topic identification, appraising evidence) and HTA decision making (e.g., participation on committees, making recommendations); (iii) whether and how PPI strategies have been evaluated, and if so how the information was used and disseminated; (iv) lessons learned, including facilitators and challenges to PPI evaluations; and (v) specific details about completed evaluations, including the objectives, methods, and results.

In March 2016, we sent email invitations: (i) through the INAHTA Secretariat to its 52 members; (ii) to contacts within the eleven HTA organizations who responded to the HTAi initiative to collect good practice examples of PPI in HTA (one of which is not an INAHTA member); and, (iii) to seven personal contacts of the authors (one of which was not contacted through either of the two prior methods). In total, we directly invited fifty-four individual organizations to participate, and additionally pursued snowball sampling by encouraging participants to forward the questionnaire to their contacts who might be doing this type of work.

The invitation introduced the purpose of our survey, requested that only one questionnaire per organization be completed, and asked the recipient to forward the questionnaire to “the appropriate person” within their organization for completion. The questionnaire took approximately 20 to 30 minutes to complete and was hosted on the online SurveyMonkey platform (www.surveymonkey.com). Before distribution, a draft questionnaire was pilot tested by a staff member at the Canadian Agency for Drugs and Technologies in Health (CADTH) who was not involved in its development, and the resulting feedback was used to revise the questionnaire.

Potential respondents were given the option to request a structured telephone interview instead of completing the online questionnaire. During the two interviews that took place, a study investigator covered the same content as was in the online questionnaire, took detailed notes and developed a written account of the participant's responses, which became part of the same dataset as responses submitted online. Similarly, we asked online participants for permission to contact them to clarify any submitted responses, if necessary, and four participants were contacted for follow up.

Two investigators (J.P. and A.H.) reviewed all responses for clarity and completeness before analysis. Frequencies of responses were calculated for closed-ended questions, and responses to open-ended questions were summarized narratively.

This survey did not require formal approval from a research ethics board, as the focus was on HTA organization procedures rather than on the individuals completing the questionnaire (23), although we followed ethical practices for survey research. We provided sufficient information about the purpose and process of the survey to enable an informed decision about participation; indicated that any information provided would be used in a peer-reviewed journal publication, such that submission of a completed questionnaire implied informed consent for that purpose; and explicitly informed participants that their information would be shared publicly and that their organization and related information would be identifiable.

RESULTS

We received fifteen completed questionnaires (15/54; 27.8 percent response rate), representing twelve countries. Two organizations each were from the United Kingdom (National Institute for Health and Care Excellence [NICE], Scottish Medicines Consortium [SMC]), the Netherlands (Zorginstituut Nederland [ZiNL], Netherlands Organization for Health Research and Development [ZonMw]), and Taiwan (Center for Drug Evaluation [CDE], Clinical Effectiveness Group). The remaining respondents were based in Canada (Canadian Agency for Drugs and Technologies in Health [CADTH]), Colombia (Instituto de Evaluación Tecnológica en Salud [IETS]), France (Haute Autorité de Santé [HAS]), Germany (Gemeinsamer Bundesausschuss [G-BA]), Italy (Agenzia Nazionale per i Servizi Sanitari Regionali [AGENAS]), Luxembourg (Cellule d'expertise médicale [CEM]), Poland (Agency for Health Technology Assessment in Poland [AHTAPol]), Romania (Center of Health Care Quality and Control), and Sweden (Swedish Agency for Health Technology Assessment and Assessment of Social Services [SBU]). Table 1 describes participating HTA organizations, their stated reasons for involving patients and members of the public in HTA, and details of when in the HTA process patients and members of the public are involved.

Table 1. Description of Participating HTA Organizations

AGENAS, Agenzia Nazionale per i Servizi Sanitari Regionali; AHTAPol, Agency for Health Technology Assessment in Poland; CADTH, Canadian Agency for Drugs and Technologies in Health; CDE, Center for Drug Evaluation; CEM, Cellule d'expertise médicale; G-BA, Gemeinsamer Bundesausschuss; HAS, Haute Autorité de Santé; IETS, Instituto de Evaluación Tecnológica en Salud; NICE, National Institute for Health and Care Excellence; SBU, Swedish Agency for Health Technology Assessment and Assessment of Social Services; SMC, Scottish Medicines Consortium; ZiNL, Zorginstituut Nederland; ZonMw, Netherlands Organization for Health Research and Development.

Eleven respondents (73.3 percent) reported involving both patients and members of the public in the HTA process or HTA decision making, while three (20.0 percent) reported involving patients only. One organization (6.7 percent) reported not involving patients or members of the public and indicated PPI would be considered in future cases if a commissioned request required it. Organizations reported involving patients for a range of activities spanning both HTA processes and HTA decision making including, in order of the HTA process, participating in a working group or committee to provide opinions and perspectives (n = 10; 66.7 percent), identifying topics for assessment (n = 4; 26.7 percent), refining the scope of assessments (n = 7; 46.7 percent), identifying clinical outcomes (n = 5; 33.3 percent), reviewing protocols (n = 3; 20.0 percent), collecting data (n = 3; 20.0 percent), analyzing data (n = 1; 6.7 percent), writing reports (n = 1; 6.7 percent), reviewing draft reports (n = 7; 46.7 percent), appraising evidence (n = 5; 33.3 percent), making recommendations (n = 3; 20.0 percent), and helping to disseminate results (n = 8; 53.3 percent).

Members of the public were similarly reported to participate across the HTA process, including: participating in a working group or committee (n = 9; 60.0 percent), identifying topics for assessment (n = 3; 20.0 percent), refining the scope of assessments (n = 4; 26.7 percent), identifying clinical outcomes (n = 2; 13.3 percent), reviewing research protocols (n = 3; 20.0 percent), analyzing data (n = 1; 6.7 percent), writing reports (n = 1; 6.7 percent), reviewing draft reports (n = 7; 46.7 percent), appraising evidence (n = 4; 26.7 percent), making recommendations (n = 4; 26.7 percent), and helping to disseminate results (n = 2; 13.3 percent).

Evaluation of Patient and Public Involvement

Types and Frequency of Evaluation Activities

As outlined in Table 1, of the fourteen respondents who conduct PPI activities, seven organizations (50.0 percent) responded that they evaluate, or have evaluated, those activities and two (14.3 percent) reported that they planned to start the evaluation process for upcoming HTAs. Of the remaining five organizations, one commented that they have not conducted any evaluation to date due to a lack of resources, but otherwise no specific reasons were reported for not evaluating PPI activities.

Of the seven organizations that have conducted some evaluation work, three (42.9 percent) reported having conducted evaluations of participant satisfaction, process evaluations, and impact evaluations; one (14.3 percent) reported conducting both process and impact evaluations; one (14.3 percent) reported conducting process evaluations only; and two (28.6 percent) reported conducting evaluations of participant satisfaction only. The frequency of evaluation varied across respondents, the type of evaluation conducted, and the type of PPI activity.

Application and Dissemination of Evaluation Results

Five of the seven organizations who have conducted evaluation work responded to an open-ended question about how results of their evaluation activities are used within their programs. All five commented that results are used to inform changes to PPI activities with the overall goal of improving activities. Some examples are to ensure that patients’ perspectives are captured efficiently and reliably, identify education and training needs of participants (e.g., patients, patient groups, HTA staff), identify and address issues raised by participants in the process, and help to direct strategic priorities and plan for PPI activities for the upcoming year. HAS and IETS reported that the results of their evaluation activities, in this case a survey, are summarized and used to identify any issues or concerns. Relevant feedback is then generated and used to refine particular processes.

At NICE, evaluation results are used to guide the level and type of support provided by their Patient Involvement Program. At the SMC, continuous evaluation informs annual work planning for PPI activities under the direction of the PIN Advisory Group. As required, formal recommendations are prepared for the SMC Executive, who to date have enacted all such recommendations. CADTH commented that in addition to shaping process change over time, evaluation results have been used to illustrate the value of PPI activities both internally and externally.

Four organizations reported how evaluation results are shared with other organizations: all four reported sharing results at conferences (n = 4; 57.1 percent), three reported publishing them on their Web site (n = 3; 42.9 percent), and two reported publishing them in a newsletter (n = 2; 28.6 percent).

Changes Made to Patient and Public Involvement Activities

Five organizations described specific changes made to their PPI activities as a result of their evaluations, with varying levels of detail provided. IETS commented generally that, following their evaluation, channels of communication have been improved, with patients gaining access to HTA results at various and appropriate stages. At HAS, conducting and reflecting on evaluation results motivated their intention to develop documents to clarify the intent of PPI activities, along with guidance and tools to address specific issues, for example, desirable qualities for a patient representative and how to encourage participation among patient representatives on committees.

SMC identified numerous changes as a result of continuous evaluation, many focused on patient groups. Changes include streamlining the submission process, providing a document that summarizes background information on the technology under review, increasing transparency around conflict of interest declarations, training and education, standardizing presentations at committee meetings, and changes to the embargoed decision-making process that give patient groups advance warning. In addition, SMC's evaluation work motivated them to increase public awareness of their HTA activities, develop a proactive approach to identifying patient and caregiver representatives for each HTA, and establish Patient and Clinician Engagement mentors.

Past evaluations at NICE (24;25) prompted changes in response to a key finding that patient expert members found committee participation to be daunting. Accordingly, several changes to committee structure were implemented, including ensuring that support is provided before committee meetings, having the committee chair personally greet patient experts at meetings where possible, having a lay member sit next to them, and updating the patient expert submission form to improve clarity, provide guidance on completion, and distinguish forms for patient organizations from those for individual patients.

A 2012 evaluation of the patient input process for the Common Drug Review at CADTH (26) revealed that, at the time, CADTH's PPI program was equivalent to or more evolved than those of most other HTA programs, but it also identified several gaps. Resultant recommendations included ensuring that stakeholders are aligned on the purpose, value, and credibility of incorporating patient input and that CADTH learn from and apply methods used by its international counterparts. Accordingly, CADTH implemented several changes, including developing information sessions and awareness strategies, hiring a dedicated staff member, holding training sessions, and developing a process for individual patients and caregivers to provide input when a patient group does not exist. In addition, the CADTH Patient Community Liaison Forum was formed and annual stakeholder sessions were established to better understand patient groups' needs.

Lessons Learned from Evaluation of Patient and Public Involvement Activities

Challenges Faced during the Evaluation Process

Six organizations responded to an open-ended question about challenges faced during evaluation. Several were identified, with many repeated across organizations. Identified challenges include achieving stakeholder buy-in, managing conflicting stakeholder opinions, managing expectations of patient and caregiver representatives, resistance to change, and resource constraints for both the evaluation itself and implementation of any resultant recommendations. Additionally, variation in HTA processes across different programs (e.g., drugs, medical devices, medical procedures, rapid HTA, health economics) was identified as a challenge to developing an overall evaluation strategy. A further challenge results from the variation in the goals for PPI among different stakeholders, for example, patient groups, researchers, or committee members. Each stakeholder group may experience HTA involvement differently and have different interpretations of success.

Facilitators to the Evaluation Process

Three respondents (CADTH, NICE, SMC) identified specific facilitators of evaluation, two of which were having sufficient resources to conduct evaluation activities and implement any recommendations, and having the support of senior leadership in embracing any changes. NICE stated that their evaluations work best when patients are involved on the evaluation team (including patient groups, patient experts, and lay members), for example, to help design a questionnaire, interpret results, and define recommendations and implementation plans. Similarly, at the SMC, the PIN Advisory Group, which includes representation from public partners and patient and caregiver organizations, helps to ensure understanding of current experiences with SMC processes and advises on improvement initiatives that are both feasible and acceptable.

Insights on Evaluation of Patient and Public Involvement Activities

NICE and HAS shared further insights through an open-ended question on the evaluation of PPI activities. HAS advocated sharing among HTA organizations of, for example, PPI satisfaction questionnaires, challenges faced during the evaluation process, and cases where the inclusion of patient perspectives was helpful. NICE recommended setting explicit objectives and developing an evaluation process at the same time as PPI activities are established. They also suggested that patients and patient groups be involved in designing and executing the evaluation process and in applying the learnings in practice. Furthermore, NICE recommended that proposed changes be implemented using a phased approach, so that they remain manageable, and that expectations be managed as to what can be achieved or changed.

Future Plans for Evaluation of Patient and Public Involvement Activities

Six respondents described their future plans, each indicating an ongoing commitment to evaluation. At SMC, the PIN Advisory Group ensures a continuous focus on developing and strengthening PPI. NICE reported a current and ongoing evaluation of PPI across the organization (broader than HTA), and CADTH similarly reported a current evaluation as part of a requirement for formal evaluation every 5 years. In addition, HAS reported a current initiative to both develop and evaluate a process for PPI in rapid HTA. Finally, while ZonMw and SBU reported not yet having conducted formal evaluations, they are currently planning future evaluations of their PPI activities.

DISCUSSION

A primary goal of this survey was to identify approaches used by HTA organizations to evaluate their PPI initiatives, including perceived facilitators of and barriers to evaluation. We obtained responses from fifteen organizations from twelve countries, representing a 27.8 percent response rate. Consistent with the findings of recent reviews (8;11;12), patients (14/15) and members of the public (10/15) are involved in a wide range of HTA processes conducted by the organizations in our sample.

The results reveal that evaluation of PPI activities is occurring across a small but diverse set of the organizations that responded. Seven of the responding organizations reported having conducted evaluations, including evaluations of participant satisfaction, process, or impact. These results signal that HTA organizations are conducting evaluation activities more broadly than is represented in the published literature, which has focused predominantly on evaluating and describing the impact of PPI (1;2;4;12). Given our small sample size and low response rate (27.8 percent), however, the proportion of HTA organizations that both conduct and evaluate PPI activities remains unclear.

Approaches to evaluating PPI appear to vary widely, from resource-intensive strategies such as extensive interviews or document reviews to more streamlined ones such as regular surveying of participants. Regardless of the intensity of the strategy, a focus on evaluation is particularly notable in light of the considerable workload of HTA organizations, with many competing deadlines and finite resources. It is encouraging in this context to observe priority being given to evaluation activities, which ultimately aim to enhance efficiency and effectiveness.

Importantly, respondents outlined many specific changes, spanning a wide range of issues, that followed from their evaluation activities. Conducting evaluations and implementing resultant recommendations appears to have positively affected both the experience of participating in HTA from the perspective of patients and members of the public and the quality of patient and caregiver input into HTA. In specific instances, evaluation activities were also reported to increase awareness of PPI initiatives in HTA, thereby facilitating the proactive recruitment of future participants, and to illustrate the value of PPI both internally to an HTA organization and externally to stakeholders.

Through this survey, we were able to elicit insights and perspectives on the evaluation process that should be of value to those planning this sort of work in the future. Specifically, challenges noted by respondents included both general issues related to the evaluative process (e.g., achieving stakeholder buy-in, managing conflicting opinions, resistance to change) and general methodological issues (e.g., how to define success with wide-ranging goals, how to compare PPI in the context of rapid versus full HTA). Facilitators included the provision of adequate resources to both conduct evaluations and implement any recommendations, in addition to the support of senior leadership and the participation of patients and members of the public in the evaluative process. While no respondent explicitly commented that methodological development needs to occur, this seems implied in the elements identified as challenging the evaluation process.

The reported facilitators of and challenges to evaluation of PPI activities in HTA are not unlike those reported in the broader evaluation literature. A foundation of that literature is the development and documentation of program theories, for example through the use of logic models or theories of change. A program theory should outline the inputs, activities, outputs, and short- and long-term outcomes intended for a program. A logic model, for example, can help make explicit the expected relationships between these program elements (27). Logic models can help design programs, facilitate accountability to a stated plan, and guide program evaluation. Critical to the development of a program theory is the involvement of all relevant stakeholders to ensure buy-in regarding inputs, activities, and program goals, in particular how program goals will be measured to define success. Stakeholder involvement should persist throughout the evaluation cycle, including data collection, analysis, and the development and implementation of recommendations.

Many of these concepts were mentioned implicitly or explicitly by respondents to our survey, although without reference to formal program evaluation theory or methods. For example, NICE recommended that organizations develop explicit objectives for PPI at the outset, and ideally also develop an evaluation plan at the same time as the PPI activities are established. NICE also remarked that their evaluations are more productive when patients or members of the public are part of the evaluation team. These reflections speak to the need to plan PPI programs and evaluation activities simultaneously, and also to be specific in terms of stated goals and how those goals should be measured. They also suggest that developing a greater understanding of evaluation theory and methods could be an important step forward for organizations engaged in PPI.

Of note, most organizations in our sample reported multiple reasons for implementing PPI activities. While it is widely acknowledged that PPI activities are grounded in a broad set of goals, including enhancing the relevance of assessments, strengthening the evidentiary contribution, complementing clinical and researcher expertise, and enhancing the openness and inclusiveness of the decision process (1;12), these broad-ranging objectives may complicate evaluation (20), as each objective would require its own set of anticipated and measurable outcomes.

First, broad agreement among stakeholders is required regarding how to evaluate whether often vaguely articulated objectives have been achieved. Second, tailored approaches might be needed to collect data against which to measure success for each distinct objective. What is important is that the goals for involving patients and members of the public are prespecified and measurable, that an evaluation plan gathers data targeted at those goals, and that there is consensus among relevant stakeholders regarding how to define success. The concept of evaluability assessment might also be relevant as a precursor to evaluation. Evaluability assessments could be used to assess and ensure that PPI programs are ready for evaluation, with sufficient logic or theory to support committed resources and activities resulting in the achievement of measurable objectives, and with sufficient stakeholder buy-in to both conduct an evaluation and implement resultant recommendations (28).

Limitations

The completeness of this survey is limited, as it is based on only fifteen responses from twelve countries, representing seven organizations that both conduct and evaluate PPI activities. While we cannot be certain of the extent, it is likely that some organizations that evaluate their PPI activities did not respond to our questionnaire. Furthermore, the maximum number of responses received from any organization was two, and in most cases there was one response per country or organization. Given the possibility that different PPI or evaluation strategies are used by different programs or groups within a given country or organization, there is further reason to believe that our results do not reflect all experiences with evaluation. While we attempted to contact respondents to clarify submitted responses, in the end we spoke directly to seven of the fifteen respondents. We therefore did not verify reported data from eight represented HTA organizations, which raises the potential for inaccurate or out-of-date data, especially given the evolving nature of PPI activities.

Finally, in our questions relating to evaluation strategies, we did not specifically ask whether approaches to evaluating patient involvement differed from those used when involving members of the public, primarily because we did not want to add further questions to an already long questionnaire. It is likely that the breadth of evaluation experience will expand over the coming years, and we hope this report may encourage evaluation among those who have not yet established a process.

CONCLUSIONS

Our survey identified international HTA organizations that have developed and conducted initiatives to evaluate their PPI activities. The strategies described span evaluations of process, impact, and satisfaction, with varying time and resource requirements. Few explicit references to evaluation theory were noted, although respondents appeared to acknowledge established facilitators of program evaluation, including the need for explicit, measurable objectives and the inclusion of a range of stakeholders, among them patients and members of the public, on evaluation teams.

There is a continued interest in the evaluation of PPI activities through HTAi, and a recently published book (Reference Facey, Hansen and Single29) focused on patient involvement in HTA contains a chapter on this topic, with a proposed evaluation framework. It will be important for HTA organizations to share their approaches and experiences with evaluation and perhaps to test this framework.

SUPPLEMENTARY MATERIAL

Supplementary File 1: https://doi.org/10.1017/S0266462317000976

CONFLICTS OF INTEREST

The authors have nothing to disclose.

REFERENCES

1. Abelson J, Bombard Y, Gauvin FP, Simeonov D, Boesveld S. Assessing the impacts of citizen deliberations on the health technology process. Int J Technol Assess Health Care. 2013;29:282-289.
2. Berglas S, Jutai L, MacKean G, Weeks L. Patients' perspectives can be integrated in health technology assessments: An exploratory analysis of CADTH Common Drug Review. Res Involve Engag. 2016;7:2.
3. Cleemput I, Christiaens W, Kohn L, Leonard C, Daue F, Denis A. Acceptability and perceived benefits and risks of public and patient involvement in health care policy: A Delphi survey in Belgian stakeholders. Value Health. 2015;18:477-483.
4. Dipankui MT, Gagnon MP, Desmartis M, Legare F, Piron F, Gagnon J, et al. Evaluation of patient involvement in a health technology assessment. Int J Technol Assess Health Care. 2015;31:166-170.
5. Wortley S, Wale J, Grainger D, Murphy P. Moving beyond the rhetoric of patient input in health technology assessment deliberations. Aust Health Rev. 2016 [Epub ahead of print].
6. Lopes E, Street J, Carter D, Merlin T. Involving patients in health technology funding decisions: Stakeholder perspectives on processes used in Australia. Health Expect. 2016;19:331-344.
7. Menon D, Stafinski T. Role of patient and public participation in health technology assessment and coverage decisions. Expert Rev Pharmacoecon Outcomes Res. 2011;11:75-89.
8. Hailey D, Werko S, Bakri R, Cameron A, Gohlen B, Myles S, et al. Involvement of consumers in health technology assessment activities by INAHTA agencies. Int J Technol Assess Health Care. 2013;29:79-83.
9. Gauvin FP, Abelson J, Giacomini M, Eyles J, Lavis JN. "It all depends": Conceptualizing public involvement in the context of health technology assessment agencies. Soc Sci Med. 2010;70:1518-1526.
10. Gagnon MP, Desmartis M, Lepage-Savary D, Gagnon J, St-Pierre M, Rhainds M, et al. Introducing patients' and the public's perspectives to health technology assessment: A systematic review of international experiences. Int J Technol Assess Health Care. 2011;27:31-42.
11. Whitty JA. An international survey of the public engagement practices of health technology assessment organizations. Value Health. 2013;16:155-163.
12. Abelson J, Wagner F, DeJean D, Boesveld S, Gauvin FP, Bean S, et al. Public and patient involvement in health technology assessment: A framework for action. Int J Technol Assess Health Care. 2016;32:256-264.
13. Lopes E, Street J, Carter D, Merlin T. Involving patients in health technology funding decisions: Stakeholder perspectives on processes used in Australia. Health Expect. 2016;19:331-344.
14. Oliver S, Milne R, Bradburn J, Buchanan P, Kerridge L, Walley T, et al. Involving consumers in a needs-led research programme: A pilot project. Health Expect. 2001;4:18-28.
15. Hailey D, Nordwall M. Survey on the involvement of consumers in health technology assessment programs. Int J Technol Assess Health Care. 2006;22:497-499.
16. Hailey D, Werko S, Bakri R, Cameron A, Gohlen B, Myles S, et al. Involvement of consumers in the HTA activities of INAHTA members: Report on a survey [Internet]. Edmonton, AB: INAHTA; 2011. http://www.inahta.org/wp-content/uploads/2014/04/INAHTA_Survey_Consumer-Involvement_2011.pdf (accessed January 10, 2017).
17. Patient Involvement and Education Working Group. Good practice examples of patient and public involvement in health technology assessment [Internet]. Edmonton, AB: HTAi; 2015. www.htai.org/fileadmin/HTAi_Files/ISG/PatientInvolvement/EffectiveInvolvement/Good_Practice_Examples_Feb_2015.doc (accessed January 10, 2017).
18. Staniszewska S, Brett J, Mockford C, Barber R. The GRIPP checklist: Strengthening the quality of patient and public involvement reporting in research. Int J Technol Assess Health Care. 2011;27:391-399.
19. Staniszewska S. Patient and public involvement in health services and health research: A brief overview of evidence, policy and activity. J Res Nurs. 2009;14:295-298.
20. OHTAC Public Engagement Subcommittee. Public engagement for health technology assessment at Health Quality Ontario - final report from the Ontario Health Technology Advisory Committee Public Engagement Subcommittee [Internet]. Toronto, ON: Queen's Printer for Ontario; 2015. http://www.hqontario.ca/Portals/0/documents/evidence/special-reports/report-subcommittee-20150407-en.pdf (accessed August 15, 2016).
21. Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, et al. Mapping the impact of patient and public involvement on health and social care research: A systematic review. Health Expect. 2014;17:637-650.
22. Brett J, Staniszewska S, Mockford C, Herron-Marx S, Hughes J, Tysall C, et al. A systematic review of the impact of patient and public involvement on service users, researchers and communities. Patient. 2014;7:387-395.
23. Panel on Research Ethics [Internet]. Ottawa, ON: Government of Canada. TCPS 2 (2014) - the latest edition of Tri-Council Policy Statement: Ethical conduct for research involving humans; 2014. http://www.pre.ethics.gc.ca/eng/policy-politique/initiatives/tcps2-eptc2/Default/ (accessed January 10, 2017).
24. Amis L, Livingstone H. 605. The views of patients and carers involved in the development of NICE technology appraisals [Internet]. Abstract presented at: HTA in integrated care for a patient centered system. 9th HTAi Annual meeting; 2012; Bilbao, ES. http://www.htai.org/fileadmin/HTAi_Files/Conferences/2012/2012_HTAi_Bilbao_Oral_Presentations.pdf (accessed August 11, 2017).
25. Amis L, Livingstone H. Increasing visibility of patient, carer and citizen involvement in HTAs - An evaluation of the 'lay leads' pilot project at NICE [Internet]. Abstract presented at: HTA in integrated care for a patient centered system. 9th HTAi Annual meeting; 2012; Bilbao, ES. http://www.htai.org/fileadmin/HTAi_Files/Conferences/2012/2012_HTAi_Bilbao_Poster_Presentations.pdf (accessed August 11, 2017).
26. CADTH patient input process review: Findings and recommendations [PowerPoint presentation on the Internet]. Montreal, QC: SECOR; 2013. https://www.cadth.ca/sites/default/files/pdf/2012_SECOR_Patient-Input-Review_e.pdf (accessed August 15, 2016).
27. Patton MQ. Utilization-focused evaluation: The new century text. 3rd ed. Thousand Oaks, CA: Sage Publishing; 1997.
28. Hare J, Guetterman T. Evaluability assessment: Clarifying organizational support and data availability. J Multidiscip Eval. 2014;10:9-25.
29. Facey K, Hansen HP, Single ANV, eds. Patient involvement in health technology assessment. Singapore: Springer ADIS; 2017.
Table 1. Description of Participating HTA Organizations
