
Evaluating governance in a clinical and translational research organization

Published online by Cambridge University Press:  13 February 2024

Ingrid Philibert*
Affiliation:
Great Plains IDeA CTR, University of Nebraska Medical Center, Omaha, NE, USA
Amanda Fletcher
Affiliation:
Great Plains IDeA CTR, University of Nebraska Medical Center, Omaha, NE, USA
Katrina M. Poppert Cordts
Affiliation:
Great Plains IDeA CTR, University of Nebraska Medical Center, Omaha, NE, USA
Matthew Rizzo
Affiliation:
Great Plains IDeA CTR, University of Nebraska Medical Center, Omaha, NE, USA
*
Corresponding author: I. Philibert, PhD, MA, MBA; Email: ingrid.philibert@unmc.edu

Abstract

Institutional Development Awards for Clinical and Translational Research (IDeA-CTR) networks, funded by NIH/NIGMS, aim to advance CTR infrastructure to address historically unmet state and regional health needs. Success depends on the response to actionable feedback to IDeA-CTR leadership from network partners and governance groups through annual surveys, interviews, and governance body recommendations. The Great Plains IDeA-CTR applied internal formative meta-evaluation to evaluate dispositions of 172 governance recommendations from 2017 to 2021. Results provided insights to improve the classification and quality of recommendations, credibility of evaluation processes, responsiveness to recommendations, and communications and governance in a complex CTR network comprising multiple coalitions.

Type
Brief Report
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Background

Institutional Development Award (IDeA) Clinical and Translational Research (CTR) network awards from the National Institute of General Medical Sciences (NIGMS) aim to advance institutional infrastructure and personnel to address critical unmet clinical and public health needs across historically underserved states and regions [1]. Meeting this aim depends on systems thinking [2], informed by successive cycles of strategic self-assessment that provide actionable feedback for IDeA-CTR formation and growth across a network of regional partners and a headquarters institution. Governance groups and sound principles of tracking and evaluation (T&E) provide input for effective administration and management of IDeA-CTR network goals, projects, and tasks.

The Great Plains (GP) IDeA-CTR network was established in 2016, with headquarters at the University of Nebraska Medical Center. CTR partners included (1) institutions across Nebraska and the Dakotas that receive technology and research resources and share equipment and expertise with the network; (2) investigators who enhance their CTR skills through scholar and pilot grants, mentoring, and other core services (e.g., biostatistics, informatics); and (3) patients and communities standing to benefit from CTR focus on regional health priorities. Governance is aided by an external advisory committee of national CTR experts, an internal advisory committee of research leaders at participating institutions, a community advisory board made up of members of academia and community organizations, and a Steering Committee including CTR core directors and partner institution representatives.

Effective tracking and evaluation of NIH-funded programs is key to enhancing capacity [3]. We explored the application of internal formative meta-evaluation (IFME) to track and evaluate the GP IDeA-CTR's disposition of recommendations from the initial funding period (2017–2021), in an effort to improve our IDeA-CTR evaluation process, as outlined below.

Methods

Meeting IDeA-CTR/NIGMS goals depends on a cycle of feedback, operationalized by a T&E Core through annual assessments of the partners, investigators, patients and communities, and governance groups specified above, which provides critical feedback to the IDeA-CTR's Principal Investigator (PI), administrative core, and other cores. The T&E core developed and disseminated five annual governance reports between 2017 and 2021. Each report comprised quantitative data on governance effectiveness using a modified internal coalition evaluation (ICE) survey [4,5], narrative comments on the survey, governance interviews, and classification of the disposition of recommendations. The modified ICE tool has 18 questions spanning Social Vision, Efficient Practices, Knowledge and Training, Relationships, Participation, and Activities [4,5] (see supplemental information for details). The ICE instrument is among the tools highlighted by the National Academy of Medicine's report on Assessing Meaningful Community Engagement [6] and by the National Institutes of Health's HEAL Initiative [7].
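To make construct-level scoring concrete, the sketch below shows one plausible way to aggregate 18 survey items into the six ICE construct scores. The item-to-construct mapping and the 1–5 Likert scaling here are illustrative assumptions for this sketch, not the published instrument.

```python
# Illustrative sketch only: aggregates one respondent's ICE item ratings into
# construct-level mean scores. The item-to-construct mapping and the 1-5
# Likert scale are assumptions for demonstration, not the published instrument.
from statistics import mean

# Hypothetical mapping of the 18 items to the six ICE constructs (3 items each).
CONSTRUCT_ITEMS = {
    "Social Vision": [1, 2, 3],
    "Efficient Practices": [4, 5, 6],
    "Knowledge and Training": [7, 8, 9],
    "Relationships": [10, 11, 12],
    "Participation": [13, 14, 15],
    "Activities": [16, 17, 18],
}

def construct_scores(responses):
    """Mean rating per construct from one respondent's item ratings (1-5)."""
    return {
        construct: mean(responses[item] for item in items)
        for construct, items in CONSTRUCT_ITEMS.items()
    }

# Example: a respondent who rates every item a 4.
respondent = {item: 4 for item in range(1, 19)}
print(construct_scores(respondent))
```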

Meta-evaluation appraises the evaluation process or the resulting outcomes to enhance evaluation quality and credibility [8–10]. This includes summative meta-evaluations by external reviewers that focus on outcomes. In contrast, formative meta-evaluations are conducted by internal evaluators and focus on the evaluation process [10,11]. We deployed Stufflebeam's internal formative meta-evaluation (IFME) approach for its focus on member engagement and timely communication and reporting [8], to assess: (1) strengths and opportunities for improvement in responses to governance recommendations; and (2) longitudinal evidence of coalition-building.

T&E staff collaborated with all GP IDeA-CTR cores to update information on CTR responses to 172 total recommendations and to classify these as (1) enacted/completed as recommended; (2) enacted by alternate approach; (3) scheduled for action upon CTR renewal; or (4) infeasible.
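As a minimal illustration of this classification scheme (not the T&E Core's actual tooling), the four disposition categories and the tally reported under Results could be represented as follows:

```python
# Minimal sketch, not the T&E Core's actual tooling: tallies recommendation
# dispositions in the four categories described above and reports percentages.
from collections import Counter
from enum import Enum

class Disposition(Enum):
    ENACTED = "enacted/completed as recommended"
    ALTERNATE = "enacted by alternate approach"
    SCHEDULED = "scheduled for action upon CTR renewal"
    INFEASIBLE = "infeasible"

def summarize(dispositions):
    """Print the count and percentage share of each disposition category."""
    counts = Counter(dispositions)
    total = len(dispositions)
    for category in Disposition:
        n = counts[category]
        print(f"{category.value}: {n} ({100 * n / total:.1f}%)")

# Counts reported in Results: 120 + 8 + 37 + 7 = 172 recommendations.
records = (
    [Disposition.ENACTED] * 120
    + [Disposition.ALTERNATE] * 8
    + [Disposition.SCHEDULED] * 37
    + [Disposition.INFEASIBLE] * 7
)
summarize(records)
```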

Results

Figure 1 shows the disposition of all 172 recommendations after IFME. Of these, 120 (69.8%) were enacted; 8 (4.7%) were implemented through alternative approaches that met the recommendation's intent; 37 (21.5%) were scheduled for action upon grant renewal; and 7 (4.1%) were deemed infeasible due to resource demands, implementation challenges, or limited anticipated benefit with low potential for leveraging future research (e.g., a data repository for CTR-funded projects was judged infeasible because data varied widely across projects).

Figure 1. Disposition of all governance recommendations 2017–2021.

Strengths and opportunities

Results highlighted CTR strengths in responding to governance input during network evolution, notwithstanding external challenges such as the global SARS-CoV-2 (COVID-19) pandemic. In total, 91.3% of the recommendations (157 of 172) were implemented or scheduled for implementation in the renewal, reflecting both CTR responsiveness to governance recommendations and the actionable quality of the recommendations received. Responsiveness is also reflected in successive improvements in satisfaction with governance reported on all dimensions of the longitudinal data from the ICE survey [3,4] for 2016–2017 and 2020–2021 (Fig. 2).

Figure 2. ICE scores 2017–2020 – steering committee, funded faculty and staff.

Use of IFME (through the ICE tool and governance interviews) allowed the CTR to address a small drop in scores for Social Vision, Efficient Practices, and Relationships between 2017 and 2018. The drops were attributed to slower-than-anticipated network development after the initial year; scores rebounded in 2019, with continued growth in subsequent years.

Use of IFME improved the display of information in communications with governance advisory bodies. Tabular data in the 2020–2021 governance report efficiently presented the disposition of responses to all recommendations in one place, closing the feedback loop on all recommendations from the initial funding period. IFME also enabled refinements in the evaluation process, including splitting recommendations into multiple components for analysis and action, and making explicit situations where implementation met a governance recommendation's intent through an alternative approach. An example is the response to the recommendation that unfunded pilot applicants meet with their reviewers: the alternative approach gives applicants the option to discuss their application with biostatistics, epidemiology, and research design (BERD) consultants or with relevant experts through research studios. Implementation of a recommendation through an alternative approach is reported in the governance report, giving governance groups the opportunity to ask questions and receive clarifying information.

Evaluation of coalition building

IFME helped us examine how the number and character of annual recommendations related to the six ICE constructs for measuring effective coalitions (Social Vision, Efficient Practices, Knowledge and Training, Relationships, Participation, and Activities; Table 1). Earlier years were characterized by a larger number of recommendations to help the PI and the administrative core establish governance and management infrastructure and education and training programs, with a focus on the ICE domains of Knowledge and Training, and Relationships. From 2019 to 2021, recommendations were fewer, more complex, and focused on actions in the domains of Participation and Activities, reflecting a maturing network. Recommendations for Participation included the active engagement of partner institutions and support of investigators from different member sites. Recommendations for Activities included the use of de-identified EHR data and the development of a precision medicine program. Actions in the Activities domain also showed a growing focus on the needs of historically and currently underserved populations in the region, including tribal members and rural residents. GP IDeA-CTR initiatives to benefit these groups included expansion of the practice-based research network (PBRN) to 89 sites that help translate research into care to improve the health of communities. An enacted governance recommendation created a vetting process for PBRN study proposals that tasks the PBRN's board of directors with ensuring that research conducted across the network benefits the GP IDeA-CTR, the PBRN, and local communities.

Table 1. Categorizing governance recommendations using the ICE attributes of effective coalitions

As shown in Table 1, the ICE construct Efficient Practices was a theme throughout all five years, including repositioning network assets and resources to address changing needs and priorities, such as joining the National COVID Cohort Collaborative (N3C) in 2019–2020 [12].

Analysis of score differences between the CTR's advisory boards and its steering committee, the latter having significant representation from network members, showed that advisory board scores were slightly lower than those for the steering committee, although the differences were not statistically significant. Trend analysis showed improvement in scores for both groups since the initial survey in 2017, with Shared Social Vision showing the highest score for both groups, and Efficient Practices and Participation showing the largest gains for both groups.
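The report does not specify the statistical test behind the significance claim above. As a hedged sketch only, a between-group comparison of construct-level means might look like the following, using Welch's t-test and made-up scores:

```python
# Illustrative sketch only: the article does not specify the test used, and
# the scores below are invented. Compares hypothetical construct-level ICE
# means for the advisory boards vs. the steering committee (Welch's t-test).
from scipy import stats

advisory_scores = [3.6, 4.1, 3.8, 4.2, 3.5, 4.0]  # hypothetical means, six constructs
steering_scores = [3.9, 4.3, 4.0, 4.4, 3.7, 4.2]  # hypothetical means, six constructs

result = stats.ttest_ind(advisory_scores, steering_scores, equal_var=False)
# With these made-up scores the small difference is not significant (p > 0.05).
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```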

Discussion

Demonstrating responsiveness to feedback in an IDeA-CTR requires effective communication that closes network feedback loops [13] with partners, investigators, patients, communities, and governance groups. Doing so can enhance the relevance and credibility of feedback by reducing "noise" and by spacing feedback to allow recommendations to be considered before additional input is provided [14].

Classifying recommendations as "implemented using an alternative approach" in meta-evaluation helps improve classification and terminology for reporting, and clarifies reports on implementation responsiveness, communication of results, and comparisons across IDeA-CTRs through a shared taxonomy of evaluation [15].

Our approach to classifying how the GP IDeA-CTR responded to governance recommendations distinguished between recommendations to improve administrative and governance functions; CTR processes, capabilities, and capacity; and governance evaluation. Meta-evaluation in another CTR used "evaluation utility metrics" (EUMs) to classify whether recommendations were fully or partially adopted, and whether they led to actionable changes in program processes or products [16]. Reflecting on the level of influence of a recommendation is germane to all IDeA-CTRs. Generalization is challenging because findings depend on the system, culture, and context in which governance recommendations were generated [16].

IFME offers new ideas for program evaluation across a complex CTR network. Our use of IFME identified the efficiency and timeliness of communications with governance groups as opportunities, in line with the NIGMS funding announcement (PAR-20-175: IDeA Program Infrastructure for Clinical and Translational Research (IDeA-CTR), U54 Clinical Trial Optional), which tasked IDeA-CTRs with assessing short- and long-term aims, including implementation of specific program activities (process) and plans for documenting accomplishments for each budget period and the total award period (outcomes) [1]. A limitation of IFME is that it focuses more on the process than the outcomes of evaluations. Future refinements can include summative meta-evaluation, focusing more on the influence of recommendations and their ultimate impact on course corrections and success.

Conclusion

We used IFME to analyze five years of governance recommendations and the CTR's responses, and described their evolution in a maturing network. Our findings may help other IDeA programs, funders, and the public gain insights into the value of meta-evaluation in appraising governance in programs serving multiple CTR teams and partners.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2024.25.

Acknowledgments

The authors acknowledge the contributions of Emily Frankel, Heather Braddock, and Alfred Jerrod Anzalone, core coordinators for the Great Plains IDeA CTR, and Paul Estabrooks, who served as Director of the Community Engagement and Outreach core, in reviewing and helping develop consensus on the disposition of the 172 governance recommendations from the first funding period.

The authors also thank all members of the Great Plains IDeA CTR Advisory Boards and Steering Committee.

Author contributions

IP designed the study. IP, AF, and KMPC completed additional data analyses. IP wrote the initial draft of the manuscript. KMPC and MR critically reviewed and revised the manuscript. All authors reviewed and edited sections of the manuscript and approved the final manuscript.

Funding statement

The project was supported by a grant from the National Institutes of Health/National Institute of General Medical Sciences (NIGMS U54 GM115458).

Competing interests

Authors IP, AF, KMPC, and MR are employed by the University of Nebraska Medical Center.

Author MR is chair of the not-for-profit American Brain Coalition. The remaining authors have no relevant conflicts of interest to disclose.

Parts of this study were presented at the American Evaluation Association Conference, November 7–12, 2022 (New Orleans, LA).

References

1. National Institutes of Health, National Institute of General Medical Sciences. PAR-20-175: IDeA Program Infrastructure for Clinical and Translational Research (IDeA-CTR) (U54 Clinical Trial Optional) (nih.gov). Accessed July 5, 2023.
2. Meadows DH. Thinking in Systems. White River Junction, VT: Chelsea Green Publishing; 2015.
3. Trochim WM, Rubio DM, Thomas VG; Evaluation Key Function Committee of the CTSA Consortium. Evaluation guidelines for the clinical and translational science awards (CTSAs). Clin Transl Sci. 2013;6(4):303–309. doi: 10.1111/cts.12036.
4. Cramer ME, Atwood JR, Stoner JA. A conceptual model for understanding effective coalitions involved in health promotion programming. Public Health Nurs. 2006;23(1):67–73. doi: 10.1111/j.0737-1209.2006.230110.x.
5. Cramer ME, Atwood JR, Stoner JA. Measuring community coalition effectiveness using the ICE instrument. Public Health Nurs. 2006;23(1):74–87. doi: 10.1111/j.0737-1209.2006.230111.x.
6. National Academy of Medicine. Internal Coalition Effectiveness Instrument (nam.edu). Accessed December 4, 2023.
7. National Institutes of Health. Patient and Community Engagement | NIH HEAL Initiative. Accessed December 4, 2023.
8. Stufflebeam DL. Meta-evaluation. J Multidiscip Eval. 2010;7(15):99–158.
9. Cooksy LJ, Caracelli VJ. Metaevaluation in practice: selection and application of criteria. J Multidiscip Eval. 2009;6(11):1–15.
10. Sturges KM, Howley C. Responsive meta-evaluation: a participatory approach to enhancing evaluation quality. Am J Eval. 2017;38(1):126–137. doi: 10.1177/1098214016630405.
11. Harnar MA, Hillman JA, Endres CL, Snow JZ. Internal formative meta-evaluation: assuring quality in evaluation practice. Am J Eval. 2020;41(4):603–613. doi: 10.1177/1098214020924471.
12. National COVID Cohort Collaborative (N3C) (cd2h.org). Accessed May 5, 2023.
13. Ossenberg C, Henderson A, Mitchell M. What attributes guide best practice for effective feedback? A scoping review. Adv Health Sci Educ Theory Pract. 2019;24(2):383–401.
14. Renger R, Basson MD, Hart G, et al. Lessons learned in evaluating the infrastructure of a centre for translational research (CTR). Eval J Australas. 2020;20(1):6–22. doi: 10.1177/1035719x20909910.
15. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139. doi: 10.1186/1748-5908-8-139.
16. Renger R, Renger J, Van Eck RN, Basson MD, Renger J. Evaluation utility metrics (EUMs) in reflective practice. Can J Program Eval. 2022;37(1):142–154. doi: 10.3138/cjpe.72386.