
Empowering the Participant Voice (EPV): Design and implementation of collaborative infrastructure to collect research participant experience feedback at scale

Published online by Cambridge University Press:  06 February 2024

Rhonda G. Kost*
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Alex Cheng
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Joseph Andrews
Affiliation:
Wake Forest School of Medicine, Clinical and Translational Science Institute, Winston-Salem, NC, USA
Ranee Chatterjee
Affiliation:
Department of Medicine, Duke University School of Medicine, Duke Clinical Translational Science Institute, Durham, NC, USA
Ann Dozier
Affiliation:
Department of Public Health Sciences, School of Medicine and Dentistry, University of Rochester, Rochester, NY, USA
Daniel Ford
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Natalie Schlesinger
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Carrie Dykes
Affiliation:
Clinical and Translational Science Institute, University of Rochester, Rochester, NY, USA
Issis Kelly-Pumarol
Affiliation:
Wake Forest School of Medicine, Clinical and Translational Science Institute, Winston-Salem, NC, USA
Nan Kennedy
Affiliation:
Vanderbilt Institute for Clinical and Translational Research, Vanderbilt University Medical Center, Nashville, TN, USA
Cassie Lewis-Land
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Sierra Lindo
Affiliation:
Duke Clinical Translational Science Institute, Durham, NC, USA
Liz Martinez
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Michael Musty
Affiliation:
Duke Clinical Translational Science Institute, Durham, NC, USA
Jamie Roberts
Affiliation:
Duke Cancer Institute, Durham, NC, USA
Roger Vaughan
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Lynne Wagenknecht
Affiliation:
Wake Forest School of Medicine, Clinical and Translational Science Institute, Winston-Salem, NC, USA
Scott Carey
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Cameron Coffran
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
James Goodrich
Affiliation:
Duke University School of Medicine, Duke Office of Clinical Research, Durham, NC, USA
Pavithra Panjala
Affiliation:
Clinical and Translational Science Institute, University of Rochester, Rochester, NY, USA
Sameer Cheema
Affiliation:
Duke University School of Medicine, Duke Office of Clinical Research, Durham, NC, USA
Adam Qureshi
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Ellis Thomas
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Lindsay O’Neill
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Eva Bascompte-Moragas
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Paul Harris
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
*Corresponding author: R. G. Kost, MD; Email: kostr@rockefeller.edu

Abstract

Empowering the Participant Voice (EPV) is an NCATS-funded six-CTSA collaboration to develop, demonstrate, and disseminate a low-cost infrastructure for collecting timely feedback from research participants, fostering trust, and providing data for improving clinical translational research. EPV leverages the validated Research Participant Perception Survey (RPPS) and the popular REDCap electronic data-capture platform. This report describes the development of infrastructure designed to overcome identified institutional barriers to routinely collecting participant feedback using RPPS and demonstration use cases. Sites engaged local stakeholders iteratively, incorporating feedback about anticipated value and potential concerns into project design. The team defined common standards and operations, developed software, and produced a detailed planning and implementation Guide. By May 2023, 2,575 participants diverse in age, race, ethnicity, and sex had responded to approximately 13,850 survey invitations (18.6%); 29% of responses included free-text comments. EPV infrastructure enabled sites to routinely access local and multi-site research participant experience data on an interactive analytics dashboard. The EPV learning collaborative continues to test initiatives to improve survey reach and optimize infrastructure and process. Broad uptake of EPV will expand the evidence base, enable hypothesis generation, and drive research-on-research locally and nationally to enhance the clinical research enterprise.

Type
Special Communication
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided that no alterations are made and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use and/or adaptation of the article.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Introduction

Understanding the perceptions and experiences of study participants can help research teams improve recruitment, informed consent, diversity, retention, and other challenging aspects of clinical translational research and drive meaningful improvements for participants [1–3]. Participant input is essential in assessing whether the informed consent process is effective, whether communications are respectful and culturally sensitive, whether unaddressed language barriers exist, and what factors drive participants to leave studies prematurely or decline to join future studies. Whether participants feel valued is measurable and is highly correlated with their views of their research experiences [4,5].

The Association for Accreditation of Human Research Protection Programs (AAHRPP) [6] requires that organizations have policies to measure and improve the quality and effectiveness of their Human Research Protection Program. A recent study of accredited institutions found that few employed measures of participant-centered outcomes, e.g., the effectiveness of consent, in assessing the quality of their programs [7]. The Consortium to Advance Ethics Review Oversight recently issued recommendations, including prioritizing assessments directly related to participant protection outcomes (e.g., quality of the informed consent process and understanding, … and overall participant experience in research) [7]. However, few institutions regularly collect research participant experience data in ways that can be compiled or compared [8], thereby neglecting a valuable opportunity to engage participants at scale as partners in the research process.

Applying a scientific approach to measuring and responding to participant experiences requires robust tools, an evidence base, engagement of stakeholders, hypothesis testing, representative sampling, and evaluation of measurable impact. Participant feedback, collected with appropriate standards and privacy protections, can be studied longitudinally and used for comparisons across studies, departments, and institutions to identify better practices. The Research Participant Perception Survey (RPPS), designed with extensive participant input, asks participants about aspects of their research experience, including respect, partnership, informed consent, trust, feeling valued, overall experience, and others. The RPPS measures are participant-centered and statistically reliable, as demonstrated through psychometric analyses and multiple fieldings [3–5,9–11].

Over the past decade, Rockefeller University, the NIH Clinical Research Center, and Johns Hopkins University have used the RPPS to collect participant feedback to enhance research conduct. RPPS response rates range from 20% to 65% [5,9,11]. Requests to use the RPPS indicate robust interest, but institutional barriers have limited the survey’s uptake. Of attendees polled at a 2019 Trial Innovation Network webinar [12], 70% said having real-time feedback from participants about research participation would be valuable, and 70% reported no program at their institution to collect participant feedback. Attendees felt uncertain about selecting the right survey (35%) and worried that the effort or cost would be too high (51%) or results would not be timely (21%). Fewer than 10% wanted to invent a survey, and most agreed that access to a short, validated survey (65%), integrated analysis tools (55%), mobile-friendly app (65%), and low-cost/free infrastructure (60%) would facilitate collecting timely feedback. Sites asked how RPPS users could benchmark with peer institutions [12].

To address these challenges, in 2020, the Rockefeller University led the creation of a consortium with Duke University, Wake Forest University Health Sciences, Johns Hopkins University, the University of Rochester, and Vanderbilt University Medical Center (VUMC) to obtain NCATS funding for the Empowering the Participant Voice (EPV) project. The EPV project leverages the validated Research Participant Perception Survey (RPPS) [5] as its core instrument and REDCap [13], a widely used data-capture platform designed for clinical and translational research, to support a collaborative survey and data management system. This report describes methods and progress in fulfilling the first two EPV-specific aims: the development of effective, low-cost infrastructure to collect participant experience data using RPPS and REDCap, and its implementation in demonstration projects at collaborating sites.

Developing the infrastructure

Design principles and values

The EPV initiative embraced explicit principles and values: engaging institutional and community stakeholders throughout the project, building a learning collaborative, respecting institutional autonomy and priorities, minimizing selection bias, aiming for actionability, designing for ease of use, evaluation, and broad dissemination. The sites agreed to use a common core of RPPS questions to maintain survey validity and comparability. Each site had autonomy over other details of local survey implementation (use case), custom questions and variables, local findings, and action plans.

Interdisciplinary EPV RPPS/REDCap team

The EPV principal investigator (PI) and site PIs formed the RPPS Steering Committee (RSC) to define the core questions, variables, common data elements to be collected, processes, standards, and dashboard design requirements to achieve the project aims. Each site secured leadership buy-in and assembled an interdisciplinary project team, including investigators and staff with expertise in translational research, participant engagement, research informatics, REDCap software deployment, and the RPPS tool. Project managers, the technical lead, software developers, and others joined the RSC. The results of the RSC’s operational and administrative planning, together with technical setup and implementation planning, formed the first draft of the Implementation Guide.

Input from key stakeholders

EPV directed sites to engage stakeholders throughout the project, to understand expectations and concerns, and to develop the local use case and implementation plan. The engagement of institutional stakeholders was vital to overcoming technical, regulatory, resource, and social challenges within the institutions. Site teams used town halls, grand rounds, and small groups to engage investigators, research leadership, regulatory professionals, patients/participants/advocates, community members, and others. Standing community advisory boards and patient/faculty advisory committees were engaged to leverage existing institutional support and integrate participant feedback into ongoing operations [3–5,9].

During early planning, sites reported that stakeholders said RPPS feedback would be valuable to build participant trust, evaluate the effectiveness of informed consent, tailor approaches for specific groups or protocols, and improve the experiences of underrepresented groups. The data would enable the identification of high and low-performing teams and best practices, establish benchmarks, and build a participant-centered evidence base for improving research processes.

The RSC developed a Longitudinal Stakeholder Tracking survey to capture the themes of discussions with local stakeholders (Supplemental Appendix A). Through May of 2023, sites reported 96 meetings attended by various combinations of local stakeholders identified by their roles. Personal demographics were not collected. Meetings included institutional leadership (43% of meetings), IRB/Privacy professionals (30%), Investigators (56%), research coordinators/managers (47%), community members (15%), research participants/patients (8%), community partners/liaisons (3%), and others (26%). One or more community/participant/patient stakeholders were present at 23% of meetings. Stakeholder meetings ranged in size: 1–10 stakeholders (65%), 11–15 (18%), 26–50 (12%), and more than 50 (4%). A summary of the themes of discussions with stakeholders and related actions and impacts is provided in Table 1. Feedback from stakeholders is ongoing.

Table 1. Feedback provided by stakeholders engaged locally at participating sites throughout the design and implementation phases of the project

Stakeholders included institutional and community members, such as institutional and research leadership, investigators and faculty, privacy/IRB staff, research coordinators and nurses, patients, research participants, community representatives and liaisons, and others. Comments and themes were not linked to specific individuals during reporting.

C Themes raised at an engagement meeting that included one or more community members/advocates/research participants/patients/patient representatives. Community meetings ranged in attendance from 5 to >50.

* Attendee roles and affiliations were not tabulated at stakeholder meetings held in the first 6 months of the project.

Developing use cases

In planning for project implementation, sites weighed critical operational choices: whether the scope of fielding surveys should reach across all or most studies at the institution (enterprise-wide fielding) or would be implemented study-by-study (study-level fielding); frequency of surveying; selection of the sample as a census of all eligible participants, a random sample, or a targeted group; the timing of the survey relative to an individual’s study participation, e.g., shortly after signing consent (post-consent), after completing participation (end-of-study), more than once for long studies (annual), or at an undefined time (unspecified); the survey distribution platform (email, portal, or via text [SMS]); project team membership; how to optimize data extraction from local systems; which additional local variables to track, e.g., department codes or study identifiers for analyzing study-level data; whether to return results to investigators; and whether to add custom questions. RSC members discussed the pros and cons and distilled their conclusions into Key Considerations for sites adopting EPV infrastructure in the EPV Implementation Guide. Each site formulated its site-specific use case reflecting those considerations in alignment with regulatory and institutional policies and local initiatives.
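To make these choices concrete, a site’s use case can be thought of as a small configuration record. The sketch below is illustrative only; the field names and values are hypothetical and are not part of the EPV Implementation Guide.

```python
# Hypothetical sketch of a site-specific EPV use-case configuration.
# Each site records its actual choices in its local implementation plan;
# the keys and values here are illustrative stand-ins, not EPV variables.
site_use_case = {
    "scope": "enterprise-wide",           # or "study-level"
    "survey_frequency": "quarterly",      # bimonthly | quarterly | semiannual
    "sampling": "census",                 # census | random sample | targeted group
    "survey_timing": "post-consent",      # post-consent | end-of-study | annual | unspecified
    "distribution": ["email", "portal"],  # email | portal | SMS
    "local_variables": ["department_code", "study_identifier"],
    "return_results_to_investigators": True,
    "custom_questions": [],
}
```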

Designing infrastructure and process

An overall schematic for implementing the EPV infrastructure is shown in Supplemental Figure S1.

Data and standards

The EPV/RPPS infrastructure, hosted on the REDCap platform, was developed through close collaboration between the RSC and technical team (VUMC), mindful that the capacity to benchmark would require the ability to compare “apples to apples.” English and Spanish versions of the RPPS-Short [11] survey, with updates to gender, ethnicity, and remote consent questions, formed the core survey (Supplemental Appendix B). The team assigned project variables to the survey questions, encoded definitions for describing survey scope, cohort sampling, and the timing of the survey during study participation, and defined participant and study descriptors. Participant descriptors include the stage of study participation and email address (used to determine eligibility and send a survey), research study code, and demographics (age, sex, gender, race, and ethnicity). Study descriptors include disease domain (MeSH code) and optional locally defined variables (e.g., department) for local tracking. These descriptors are linked to the participant’s anonymized survey record, and provide filters for the data analysis, including characterizing non-responders. Formulas for response and completion rates were defined: surveys with at least one question answered were classified as either complete (>80% of core questions answered), partial (50%–80% answered), or break-off (<50% of core questions answered) responses [14]. These decisions were encoded into data collection tools, software, and dashboards as described in the Implementation Guide.
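As a minimal sketch of these completion definitions, assuming a simple count of answered core questions (the function and boundary handling below are ours, not the EPV software’s):

```python
def classify_response(core_answered: int, core_total: int) -> str:
    """Classify a returned survey using the completion definitions above:
    complete (>80% of core questions answered), partial (50%-80%),
    break-off (<50%); surveys with no questions answered are non-responses."""
    if core_answered == 0:
        return "non-response"
    pct = 100 * core_answered / core_total
    if pct > 80:
        return "complete"
    if pct >= 50:
        return "partial"
    return "break-off"

# Example: 20 of 22 core questions answered -> "complete"
print(classify_response(20, 22))

# Response rate = surveys with at least one question answered / invitations sent,
# e.g., 2,575 responses / 13,850 invitations is approximately 18.6%.
```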

Flow of data

Participant and study descriptors are extracted by site programmers from institutional databases and uploaded into the local REDCap project. Local data practices govern how identifiers are removed before sharing data with teams. Using the REDCap survey function, sites send invitations and personalized survey links to participants via email, patient portals, or SMS accounts with a locally customized message. Survey responses populate the site’s local REDCap project database in real time. De-identified local project data syncs nightly to the Data Coordinating Center (DCC) and is aggregated in the EPV Consortium database and dashboard. A Reciprocal Data Use Agreement, developed using the Federal Demonstration Partnership Collaborative Data Transfer and Use Agreement template [15], governs data transfer and use between the sites and DCC. Locally defined variables and participant comments are not aggregated. The flow of data from site data sources into the local EPV/REDCap project is illustrated in Supplemental Figure S2. The technical installation details of the software are found in the EPV Implementation Guide.
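For illustration, a site-side upload of participant and study descriptors into the local REDCap project could use REDCap’s standard record-import API call. The sketch below uses a hypothetical endpoint and field names; actual variable names, de-identification steps, and credentials follow the EPV data dictionary and local data practices.

```python
import json
import requests

# Hypothetical records extracted from institutional databases; the field names
# are illustrative stand-ins for the variables defined in the EPV data dictionary.
records = [
    {
        "record_id": "1001",
        "participant_email": "participant@example.org",
        "study_code": "IRB-2023-0456",
        "participation_stage": "post-consent",
        "age": "54",
        "sex": "F",
    },
]

payload = {
    "token": "YOUR_REDCAP_API_TOKEN",  # issued by the local REDCap administrator
    "content": "record",
    "action": "import",
    "format": "json",
    "type": "flat",
    "overwriteBehavior": "normal",
    "data": json.dumps(records),
    "returnContent": "count",
    "returnFormat": "json",
}

# Post to the local REDCap API endpoint; survey invitations are then sent with
# REDCap's survey distribution tools according to the site's use case.
response = requests.post("https://redcap.example.edu/api/", data=payload)
response.raise_for_status()
print(response.json())  # e.g., {"count": 1}
```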

Developing the At-a-Glance Dashboard external module

The EPV collaborators and technical team designed the dashboard to facilitate rapid analysis and visualization of data, including response and completion rates, and calculated scores for survey results. Dashboard design was iteratively influenced by feedback from stakeholders regarding ease of use, clarity, and analytics.

Survey responses are analyzed using Top-Box scores (percent with the optimal answer) [9], and displayed in total or filtered data columns in the dashboard. A difference of 10 percentage points or more between a filtered column and the total score can be used informally as the minimum important difference generally worthy of attention or action (Supplemental Appendix C). Formal statistical analyses of local dashboard data can be pursued by downloading de-identified data and any local variables from the REDCap database and using third-party software (e.g., SAS, STATA, R) to conduct analyses of interest. Complete descriptive data can be viewed using standard REDCap report views. Sites have access to their own data on their local dashboard and to the EPV Consortium dashboard displaying results from all contributing sites in aggregate or filtered by site (blinded) for benchmarking. The technical team at VUMC built and maintained the Dashboard, releasing enhanced versions in collaboration with the RSC. The dashboard has many features to streamline analyses (Fig. 1). A Dashboard demonstration video showcases the analytic and filtering features of the Dashboard [16]; a hands-on test dashboard is available to the public.
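As a rough sketch of Top-Box scoring and the informal 10-percentage-point screening rule, assuming a downloaded de-identified export (the data frame, column names, and values below are hypothetical):

```python
import pandas as pd

# Hypothetical export: 1 = respondent gave the optimal ("top box") answer
# to a perception question, 0 = any other answer.
df = pd.DataFrame({
    "site":    ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "top_box": [1,   1,   0,   1,   1,   1,   1,   0,   0,   1],
})

total_score = 100 * df["top_box"].mean()              # analogous to the "Total" column
by_site = 100 * df.groupby("site")["top_box"].mean()  # analogous to filtered columns

# Flag filtered scores differing from the total by >= 10 percentage points,
# the informal minimum important difference described above.
flagged = by_site[(by_site - total_score).abs() >= 10]
print(round(total_score, 1), by_site.round(1).to_dict(), flagged.round(1).to_dict())
```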

Figure 1. At-a-Glance Dashboard features – visual analytics and filters for RPPS data. Dropdown menus display choices among the survey perception questions (shown) or response and completion rates. The middle menu filters the survey results (e.g., age, sex, race, etc.). Blue “i” icons display definitions and scoring information. Response data are displayed as Top Box scores with conditional formatting from high (green) to low (red) scores. The “Total” column contains aggregate scores; filtered results populate the columns to the right.

Demonstration use cases

The EPV project survey and data aggregation activities were reviewed and approved or deemed Exempt by the Institutional Review Boards at each site before retrieving participant data or surveying.

Use case implementation: administration of the survey

The demonstration goal of the project was the implementation of the local use cases. Measures of success included the number of surveys fielded, response rates, respondent demographics, ongoing stakeholder engagement, and a revised Dashboard and Implementation Guide.

In November 2021, sites began small-scale test fieldings of the survey; by May 2022, all sites were surveying 200–6,000 participants per fielding at bimonthly, quarterly, or semiannual intervals. Several sites piloted initiatives during early fielding to increase response rates. Use case configurations and early optimization efforts are shown in Table 2.

Table 2. Empowering the Participant Voice infrastructure and use case implementation at five participating sites

Results of survey implementation

From November 2021 to May 1, 2023, five EPV sites sent 13,850 surveys to participants and received 2,575 responses, of which 99% were complete (>80% of questions answered), for an overall response rate of 18.6%. Survey response rates differed among sites (15%–31%) (Table 2). Site A, which sent surveys at the study level, returned the highest response rate. Compensation and telephone outreach piloted at several sites were effective in increasing response rates; the use of SMS was not.

Respondents were diverse in age, race, gender, and ethnicity (Table 3), though minority populations were underrepresented overall; e.g., 11% of respondents were Black compared to 14% of the US population [17]. However, the representativeness of minority populations varied across sites. At the highest end of representativeness, Black participants made up 22% of respondents at two sites, and Latino/a individuals comprised 19% of respondents at another site. The characteristics collected from all survey recipients add context: the racial/ethnic diversity of the participants who were eligible to receive a survey (of which the respondents are a subset) also varied considerably across sites, to some extent limiting the possible number of responses from minority groups (Supplemental Table S1). Efforts to increase engagement and representativeness are underway. Two sites have disseminated local results on public-facing websites (Table 2).

Table 3. Characteristics of individuals returning the research participant perception survey, total and range across sites, February 2022–April 2023

* Individual site data are not shown to prevent inadvertent site identification.

Open text comments

Respondents engaged with the survey. Sites received comments from 15% to 33% (mean 29%) of respondents and discussed comment themes at RSC meetings. Themes identified at more than one site included: (1) gratitude and praise for the research team or study-specific issues; (2) dissatisfaction with unexpected out-of-pocket costs from participating in research; (3) unacceptable delays in receiving compensation; and (4) offense taken at the gender question response options (“Male and transgender male,” “Female and transgender female,” “Prefer not to answer”). In response, the RSC revised the options to: “Man,” “Woman,” “None of these describe me,” or “Prefer not to answer.” Sites also received informative positive and negative comments about study-specific issues or interactions and determined any local responses.

The goal of this project was to deliver a working infrastructure that could help sites collect RPPS feedback from their participants. Analyzing survey findings, acting on them, and evaluating the impact are the next stage of conducting clinical translational science using participant experience data. Those performance improvement activities require additional institutional buy-in, participant engagement, infrastructure, and process, and are the subject of ongoing research.

Deliverables and dissemination

Infrastructure for adoption

The infrastructure for EPV/RPPS can be downloaded from the EPV website after contacting project leadership. Components include (1) the data dictionary for the RPPS-Short survey and data collection forms (.XML file); (2) external modules for the At-a-Glance Dashboard and Cross-Project Piping (REDCap external module repository [18]); and (3) a comprehensive EPV Implementation Guide. Designed for leadership, project managers, and technical staff, the Guide discusses considerations with which all new sites grapple and provides estimates of effort and clear recommendations. The technical section provides step-by-step instructions for installing the software components and importing the data needed to field the survey, along with details regarding data analytics, scoring, and analysis. The infrastructure is compatible with sending multilingual surveys using REDCap Multilingual Management functionality (REDCap version ≥ 12.0). Programming scripts for fielding RPPS in Spanish can be downloaded. The EPV team continues to evaluate and implement ways to streamline infrastructure and enhance value. The website links to the current technical change log.

Discussion

The EPV team designed and tested new EPV/RPPS/REDCap infrastructure that enabled five sites to collect, analyze, and benchmark participant feedback at scale, with standards assuring that data are compilable and comparable. The inclusion of participant characteristics and dashboard filters enables subgroup analyses responsive to recent federal guidance for increasing health equity by disaggregating data to understand the experiences of different groups [19]. The infrastructure and instructions are disseminated through a public website, free of charge, for adoption by a wider community of users; the EPV Learning Collaborative welcomes new members. The RPPS measures aspects of participation that are meaningful to participants, providing an evidence base to drive iterative improvements to the clinical research enterprise.

Sites continue to work with stakeholders to test initiatives to increase responses. Financial incentives to return the surveys were successful, but are expensive to sustain. Minority populations were underrepresented among respondents overall, but not at all sites, and outliers deserve study. Sites implemented community partners’ suggestions, testing ways to increase the diversity of responses, although the approaches tested so far have not proven effective. Trust may be an issue. Individuals who are unpersuaded of the trustworthiness of an institution tend to be wary of surveys [20]. Engagement requires the integration of multiple approaches to capture a broad population, and sites continue to explore ways to leverage engagement resources effectively. Stakeholders counseled that even limited feedback from underrepresented groups should be analyzed and solutions pursued while exploring ways to increase response rates in parallel.

Survey data serves as a valuable complement to interviews and other qualitative story-telling [21] and offers a measure of whether improvements defined by a small group translate to benefits for a larger participant population. All sites have planned and/or initiated the return of survey results to the public through presentations and websites. One could envision a virtuous cycle where transparent and accountable return of results to investigators and the participant communities fosters trust over time, and increases participants’ willingness to answer the survey.

The EPV project fulfills many NCATS values: engaging stakeholders in all phases of research, maintaining a participant-centered focus, and creating and disseminating tools for others to adopt. It helped sites generate evidence and incorporated analytics that will be instrumental in identifying and addressing disparities in research. The infrastructure sets the stage for sites to act on participant data, conduct research-on-research to solve problems and accelerate research, engage in CTSA-CTSA collaborations, and leverage common infrastructure to overcome barriers to advance science.

With EPV infrastructure working and RPPS data in hand, some teams still found it challenging to activate the resources (including CTSA-supported cores) to act on findings from participant feedback, despite leadership support for the project. The clinical research enterprise lacks centralized quality improvement infrastructure and expertise paralleling what hospitals use to measure and improve the patient care experience [22]. Guidance from AAHRPP [6,7] to measure the effectiveness of human research protections, and from the FDA [23] to elicit participant preferences, has gained increased attention. Further, NCATS has called on its awardees to conduct clinical translational science [24] as a platform for quality improvement in research. These complementary charges from multiple agencies could incentivize clinical research organizations to create an infrastructure for quality improvement in research, which could unleash the power of participant feedback. RPPS measures are tools for evaluation but cannot, in isolation, change institutional culture or practice. Overcoming the multi-step barriers to conducting clinical and translational science, using RPPS data and EPV/REDCap infrastructure, will enable institutions to realize the power of the participant voice to enhance the clinical research enterprise.

Dissemination and the learning collaborative

EPV infrastructure is being disseminated broadly through poster presentations [25,26], webinars [27], return-of-results webpages [28,29], and the EPV project website [30]. As of August 2023, two additional CTSA hubs have implemented the full EPV/RPPS infrastructure (early adopters), and others are exploring adoption. Aggregate responses have doubled. The EPV learning collaborative has welcomed early adopters to project team and technical calls and provided guidance on implementing their use cases. Dissemination and broad adoption of EPV infrastructure will grow the RPPS evidence base, enhancing opportunities to learn from increasingly representative participant feedback.

Limitations

The average response rate (19%) is lower than optimal. Sites have more work to do socializing RPPS with teams and participants. Sites and practices that produced higher response rates are worthy of study. Sharing practices, testing hypotheses, and deepening engagement may increase response rates over time. Groups underrepresented in research were underrepresented among RPPS respondents. The diversity of respondents differed across sites. High-performing outliers merit more study. As a quantitative measure, the RPPS captures whether, but not why, a research experience was good or bad. The RPPS is a tool to score and benchmark important dimensions of the research experience. Measuring is the first step in evidence-driven quality improvement. Organizations can use the data, leveraging other institutional resources, to prioritize and effect change.

Summary and Conclusion

The EPV/RPPS/REDCap infrastructure proved effective at enabling sites to collect, analyze, and visualize participant feedback and to benchmark within and across institutions. The RPPS measures are meaningful to participants, responsive to AAHRPP standards [6], and provide an evidence base to drive iterative improvements to the clinical research enterprise. The infrastructure and instructions are disseminated on a public website, free of charge, for adoption by a wider community of users. Institutional implementation of the EPV/RPPS is worthy of consideration, even with limited resources. EPV activities may be most effective when embedded with initiatives related to outreach, community engagement, human research protection programs, research resource cores, and/or any local organizational structure that has the agency to lead, implement change, and harvest the impact.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.19.

Acknowledgments

The authors would like to thank the following individuals for their thoughtful input and support during the project: Barry S. Coller MD, James Krueger MD Ph.D., and Maija Neville Williams MPH.

Author contributions

R. Kost conceived the project; designed, led, conducted, and analyzed the multi-site research project; and wrote the first draft of the manuscript. A. Cheng led the development and implementation of the technical infrastructure and wrote technical sections of the manuscript. P. Harris contributed to technical design, strategy, and writing. J. Andrews, R. Chatterjee, A. Dozier, and D. Ford each led the local configuration of conduct and data collection at their respective sites, contributed to project design, data collection, and analysis, and contributed to writing. N. Schlesinger, C. Dykes, I. Kelly-Pumarol, C. Lewis-Land, S. Lindo, L. Martinez, M. Musty, J. Roberts, L. Wagenknecht, and L. O’Neill contributed to project design, local implementation, data collection, and analysis. N. Kennedy contributed to development of key deliverables, implementation analysis, and manuscript drafting. A. Qureshi and R. Vaughan provided statistical support and analysis throughout the project and contributed to writing. C. Coffran, S. Carey, J. Goodrich, P. Panjala, and S. Cheema provided site-based technical expertise throughout implementation and data collection and contributed to writing. E. Thomas and E. Bascompte-Moragas provided programming, software development, and technical expertise throughout the project and contributed to technical refinement and writing.

Funding statement

This work was supported in part by a Collaborative Innovation Award from the National Center for Advancing Translational Sciences #U01TR003206 to the Rockefeller University, and by Clinical Translational Science Awards UL1TR001866 (Rockefeller University), UL1TR002553 (Duke University), UL1TR003098 (Johns Hopkins University), UL1TR002001 (University of Rochester), UL1TR002243 (Vanderbilt University), and UL1TR001420 (Wake Forest University Health Sciences). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

References

Joosten YA, Israel TL, Williams NA, et al. Community engagement studios: a structured approach to obtaining meaningful input from stakeholders to inform research. Acad Med. 2015;90(12):1646–1650.
Zuckerman-Parker M, Shank G. The town hall focus group: a new format for qualitative research methods. Qual Rep. 2008;13:630–635.
Kost RG, Lee LM, Yessis J, Coller BS, Henderson DK; Research Participant Perception Survey Focus Group Subcommittee. Assessing research participants’ perceptions of their clinical research experiences. Clin Transl Sci. 2011;4(6):403–413.
Kost RG, Lee LM, Yessis J, Wesley RA, Henderson DK, Coller BS. Assessing participant-centered outcomes to improve clinical research. N Engl J Med. 2013;369(23):2179–2181.
Kost RG, Lee LN, Yessis JL, et al. Research participant-centered outcomes at NIH-supported clinical research centers. Clin Transl Sci. 2014;7(6):430–440.
Association for Accreditation of Human Research Protection Programs (AAHRPP). https://www.aahrpp.org. Accessed December 10, 2023.
Fernandez Lynch H, Taylor HA. How do accredited organizations evaluate the quality and effectiveness of their human research protection programs? AJOB Empir Bioeth. 2023;14(1):23–37.
Signorell A, Saric J, Appenzeller-Herzog C, et al. Methodological approaches for conducting follow-up research with clinical trial participants: a scoping review and expert interviews. Trials. 2021;22(1):961.
Yessis JL, Kost RG, Lee LM, Coller BS, Henderson DK. Development of a research participants’ perception survey to improve clinical research. Clin Transl Sci. 2012;5(6):452–460.
Kelly-Pumarol IJ, Henderson PQ, Rushing JT, Andrews JE, Kost RG, Wagenknecht LE. Delivery of the research participant perception survey through the patient portal. J Clin Transl Sci. 2018;2(3):163–168.
Kost RG, de Rosa JC. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Transl Sci. 2018;2(1):31–37.
Kost RG. Putting real-time participant feedback into the hands of investigators. Trial Innovation Network Collaboration Webinar. https://trialinnovationnetwork.org/elements/network-events/?key-element=18344&category=archives; https://trialinnovationnetwork.org/wp-content/uploads/2019/02/TIN-Webinar-Particpant-Experience-Kost.pdf. Accessed February 25, 2019.
Obeid JS, McGraw CA, Minor BL, et al. Procurement of shared data instruments for research electronic data capture (REDCap). J Biomed Inform. 2013;46(2):259–265.
American Association for Public Opinion Research (AAPOR). Standards Definitions, 10th edition, p. 11. https://aapor.org/wp-content/uploads/2023/05/Standards-Definitions-10th-edition.pdf. Accessed December 26, 2023.
The Federal Demonstration Partnership. http://www.thefdp.org. Accessed May 29, 2023.
Exploring EPV for your institution: At-a-Glance Dashboard demo video. https://www.rockefeller.edu/research/epv/joining-epv/#:~:text=At%2Da%2DGlance%20Dashboard%20demo%20video. Accessed February 20, 2024.
United States Census Bureau. https://data.census.gov/. Accessed December 28, 2023.
Harris PA, Taylor R, Minor BL, et al. The REDCap consortium: building an international community of software platform partners. J Biomed Inform. 2019;95:103208.
The White House. FACT SHEET: Biden-Harris Administration releases recommendations for advancing use of equitable data. April 22, 2022. https://www.whitehouse.gov/briefing-room/statements-releases/2022/04/22/fact-sheet-biden-harris-administration-releases-recommendations-for-advancing-use-of-equitable-data/. Accessed January 1, 2024.
O’Hare WP. Differential Undercounts in the U.S. Census: Who is Missed? New York: Springer, 2019:46–47, 125.
Pluye P, Hong QN. Combining the power of stories and the power of numbers: mixed methods research and mixed studies reviews. Annu Rev Public Health. 2014;35(1):29–45.
CDER Patient-Focused Drug Development. US Food and Drug Administration. https://www.fda.gov/drugs/development-approval-process-drugs/cder-patient-focused-drug-development. Accessed December 23, 2023.
PAR-21-293: Clinical and Translational Science Award (UM1 Clinical Trial Optional). https://grants.nih.gov/grants/guide/pa-files/PAR-21-293.html. Accessed December 26, 2024.
Kost RG, Andrews J, Chatterjee R, et al. 471 Empowering the participant voice (EPV): participant feedback to improve the clinical research enterprise. J Clin Transl Sci. 2022;6(s1):93–93.
Kost RG, Andrews J, Chatterjee R, et al. 143 Wouldn’t you like to know what your research study participants are thinking? A collaboration for empowering the participant voice. J Clin Transl Sci. 2023;7(s1):43–44.
Training Events - Trial Innovation Network. https://trialinnovationnetwork.org/elements/network-events/?category=archives. Accessed May 29, 2023.
Johns Hopkins Institute for Clinical and Translational Research. Research Participant Satisfaction Survey. 2016. https://ictr.johnshopkins.edu/community-engagement/research-participant-satisfaction-survey/. Accessed January 3, 2024.
Empowering the Participant Voice Public Report. University of Rochester. https://www.urmc.rochester.edu/research/health-research/empowering-the-participant-voice-public-report.aspx. Accessed January 3, 2024.
Empowering the Participant Voice. The Rockefeller University. January 19, 2021. http://www.rockefeller.edu/research/epv. Accessed October 12, 2023.