Introduction
Community engagement (CE) has been a cornerstone of translational science and research on health disparities for over twenty years. Academic entities, including multi-core centers and health-focused institutes, particularly those funded by the National Institutes of Health (NIH), are increasingly asked to include community partners and other stakeholders in their structure to develop sustainable and effective programs addressing health equity [Reference Duran, Oetzel and Magarati1,Reference Braveman, Arkin, Orleans, Proctor and Plough2]. More recently, CE has been promoted to enhance the broader connection between academic health centers and the communities they serve, extending beyond research to include teaching and clinical care missions [Reference Wilkins and Alberti3].
Evaluations of study-specific community-academic research partnerships have generally found that the deeper the perceived quality of group partnership processes, the stronger and broader the short- and long-term impact and outcomes [Reference Wallerstein, Oetzel and Sanchez-Youngman4–Reference Lucero7]. The increasingly validated surveys used to evaluate partnership processes (e.g., leadership structure, decision-making, communication) and structural factors (e.g., who is involved, what is their role), have deepened the science of CE by identifying how investment in group function contributes to process and ultimately to research outcomes [Reference Oetzel, Boursaw and Magarati8]. For example, evaluations indicate that participatory processes, such as shared power and decision-making between community and academic collaborators, promote equity, and foster trust in the research enterprise [Reference Braveman, Arkin, Orleans, Proctor and Plough2,Reference Oetzel, Wallerstein and Duran9,Reference Reopell, Nolan and Gray10]. Additionally, evaluation approaches have identified that strong, authentic, and long-lasting partnerships help ensure that research products are culturally sensitive and community-relevant, and enhance their quality, efficacy, and sustainability [Reference Oetzel, Wallerstein and Duran9,Reference Pearson, Duran and Oetzel11]. This research has advanced the science of CE, specifically regarding the relationship between community involvement and research outcomes.
In addition to assessing community-centered outcomes, a key purpose of CE evaluation is to identify areas for training and support to deepen participatory group processes and increase the impact of CE academic entities [Reference Eder, Evans and Funes12]. Equity-centered, community-driven evaluation requires mixed-methods approaches that incorporate both structured and narrative-based inquiry [Reference Wallerstein and Duran13]. Furthermore, when communities are involved in shaping the evaluation process, research becomes both a means and a product of relationship-building, cultural alignment, and accountability [Reference Mahmood, McEachan and Lockyer14]. Inclusive methods are particularly effective in surfacing structural inequities and power differentials, offering not only reflection but also a pathway toward actionable change [Reference Agnello, Anand-Kumar and An15].
While the number of tools and approaches, including validated surveys, is increasing, fewer tools are available for evaluating the work of multi-core centers or across institutional missions. The expanding scope of CE suggests a need for community-engaged evaluation approaches that assess the quality and community-centered impact of advisory bodies across institutional missions and outcomes, and that provide means for improvement. A recent survey was developed specifically to assess academic health system-level institutional facilitators and barriers to community-engaged research [Reference Dickson, Kuhlemeier and Adsul16]. However, this survey did not address CE across education and clinical care missions. Furthermore, it was designed primarily as a quantitative assessment of institutions rather than as a means of identifying actionable narratives through qualitative inquiry.
While we expect the participatory processes within CE research and larger academic entities to be similar, the outcomes may be different, given that the work of academic entities may be broader than the focused work of a CE research project. For example, community members are often motivated to participate on advisory boards because they perceive opportunities to improve how institutions function or to impact policies or procedures related to the health and well-being of their communities [Reference Park, Frank and Likumahuwa-Ackman17]. Therefore, evaluation approaches must consider how the academic entity is structured and operates in terms of the experiences of community partners to assure mutually beneficial and equitable processes, and community-centered outcomes.
Within current frameworks for CE evaluation, there is a gap in tools that are tailored to the specific contexts, objectives, and challenges of academic entities that consider ways that the structure, organization, and processes of the entity contribute to, or undermine engagement with community perspectives, priorities, and power-sharing. This study describes the development and piloting of a mixed-methods tool, including an adapted survey and follow-up community inquiry to evaluate CE partnerships at multiple levels and across missions (research center and project; education; and clinical care). Consistent with our CE approach, we leveraged the Community of Practice (CoP) infrastructure established within the CE core of a National Institute of Minority Health and Health Disparities-funded P50 center through a collaborative process that included representatives from multiple community-academic advisory boards (Boards). The tool is intended to evaluate the advisory and joint decision-making capacity that community members experience within these groups and their perceptions of the groups’ success in achieving community-centered outcomes. It was designed to assess engagement and examine relationships, power dynamics, and pathways for lasting institutional change. This evaluation tool is intended to be used by advisory groups across academic health centers so they can understand and improve how community-academic partnerships function and achieve community-centered outcomes that create opportunities for growth.
Methods
The University of Minnesota Institutional Review Board determined that this study does not constitute human subjects research.
Procedures
Using a CoP approach, we brought together community and academic representatives of multiple Boards to create an evaluation tool. Key characteristics of the CoP approach include developing a shared purpose, fostering shared learning, building trust, promoting mutual engagement, and committing to the process [Reference Wenger, McDermott and Snyder18,Reference Wenger-Trayner, Wenger-Trayner, Reid and Bruderlein19]. CoPs can have different goals and be structured and organized differently, but all include three main components: domain, community, and practice [Reference Wenger, McDermott and Snyder18,Reference Wenger-Trayner, Wenger-Trayner, Reid and Bruderlein19]. Domain is the topic of the CoP and the group’s focus. Community is who needs to be part of the group. Practice is what the group will work on together. For this CoP, the domain topic was evaluating Boards, and the practice was developing an evaluation tool. For community, this was an invite-only CoP, meaning participants were invited due to their experience as a member of an institutional board focusing on clinical care, research, or education initiatives within academic health centers, or expertise in crafting evaluation tools. Equal numbers of academic and community partners were invited to participate. Academic members came from multiple departments and programs. Community members represented various advisory boards across the Medical School. Final membership is described in Table 1.
Table 1. Characteristics of community of practice participants*

* The discrepancy between the number of surveys and the number of unique respondents is due to two participants being representatives of multiple groups. They completed the survey for each group to which they belong. They hold the same role for each group.
The CoP series consisted of seven meetings from March to October 2023. The group first developed a shared definition of the work goals. To create a common understanding of community-engaged practices, the group identified the positions of the various participating Boards on the continuum of CE [Reference M.King, Tchouankam and Keeler20] and reached a consensus on the ideal placement, which was the “collaborate” categorization. While the group identified an aspirational placement of “shared leadership within the group,” they recognized institutional constraints on full co-leadership, given their structures. The group defined “collaborate” as a bidirectional partnership characterized by trust, in which all partners are active in all discussions and group decisions [Reference M.King, Tchouankam and Keeler20].
The academic lead and staff conducted a literature review and presented results to the group. The group determined that no existing evaluation tool met the intended purpose of evaluating group processes and outcomes across institutional advisory boards. The group reviewed established evaluation tools for partnership processes within community-engaged research projects and found that, while the themes addressed were similar, the tools would need to be adapted. The tools reviewed included the Goodman Quantitative CE Measure [Reference Goodman, Sanders Thompson and Johnson6], Engage for Equity CE Survey [Reference Wallerstein, Oetzel and Sanchez-Youngman4,Reference Oetzel, Zhou and Duran5], and the Trust Typology Model [Reference Lucero7]. We then worked together across multiple meetings to review and adapt the questions to be relevant for institutional advisory boards. Where existing surveys did not address specific prioritized topics, the group developed new questions based on their experiences within Boards and knowledge of participatory research. In the end, we created a 35-question quantitative survey addressing five areas of quality community-academic group process (Depth of Involvement, Shared Values, Leadership Practices, Community-Centeredness, and Decision-making), and community-centered outcomes for clinical care, research, and education. See Table 2 for all questions. Branching logic differentiated outcomes across boards focused on each area. Because one prior tool used a 7-point Likert scale and community members preferred more response options for some questions, the survey uses both 5- and 7-point Likert scales, as specified in Table 2.
Table 2. Final survey developed by community of practice

* Adapted from Engage for Equity Community Engagement Survey [Reference Wallerstein, Oetzel and Sanchez-Youngman4,Reference Oetzel, Zhou and Duran5].
** Adapted from Goodman Quantitative Community Engagement Measure [Reference Goodman, Sanders Thompson and Johnson6].
*** Adapted from the Trust Typology Model [Reference Lucero7].
**** Questions were developed by the members of the Community of Practice.
# Agreement Likert scale: Completely disagree; Mostly disagree; Slightly disagree; Neither agree nor disagree; Slightly agree; Mostly agree; Completely agree.
$Quality Likert scale: Poor; Fair; Good; Very good; Excellent.
+Frequency Likert scale: Never; Rarely; Sometimes; Often; Always.
We piloted the survey within a large, NIH-funded center that included a community-academic steering committee, CE core co-led by a community-academic coalition, and three studies that included community-academic advisory groups. One of the groups represented a long-standing community-academic research partnership, while others came together for the current study.
The survey was emailed to the 59 total community and academic members of the five advisory groups via REDCap [Reference Harris, Taylor, Thielke, Payne, Gonzalez and Conde21]. Based on survey results, qualitative questions were developed by community evaluator SLX and reviewed by the team to gain clarification and dig deeper into areas of potential growth or discrepancies between respondents. Consistent with our intention to identify specific examples for improvement from community members, we conducted three storytelling (focus group) sessions. Community members who completed the survey were invited by email to participate in a storytelling session. Eight community members representing five partnerships across the center (C2DREAM) participated. Sample questions included: (1) How effectively does your group “center” community perspectives into its processes; (2) How do you perceive power dynamics within the group and/or between academic and community partners; and (3) Do you feel that decision-making processes within your group are collaborative? Each storytelling session was conducted virtually and recorded. Notes were taken during each session, and recordings were transcribed for clarity. With the storytelling sessions, we intended to capture actionable qualitative results on areas of group process improvement and specific steps institutions can take to strengthen advisory board engagement.
Analysis
Scores were developed from Likert responses within each section of the quantitative survey. Each response was divided by the maximum of its 5- or 7-point scale to normalize across scales, with reverse coding as needed, and the normalized responses were then averaged within each survey section, yielding a score on a 0-to-1 range. As an initial check of internal consistency, Cronbach’s alpha was calculated for each group process scale using the normalized scores. Although this pilot survey was not powered for statistical assessment, the bivariate significance of associations between Likert responses to individual questions and respondent characteristics was assessed using Pearson chi-square tests. Similarly, the bivariate significance of associations between continuous normalized mean scores and respondent characteristics was assessed using ANOVA. Given the multiplicity of tests, a p-value of 0.01 was selected to identify the significant associations highlighted here. Individual items with p-values below 0.10 were treated as marginally significant associations to consider for further exploration through storytelling group questions.
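The scoring procedure described above can be sketched in code. This is an illustrative reconstruction, not the authors' analysis code: the function names and the assumption that responses are coded 1 through the scale maximum are ours.

```python
import numpy as np

def normalize_likert(responses, scale_max, reverse=False):
    """Map Likert responses (assumed coded 1..scale_max) to the 0-1 range.

    Reverse-coded items are flipped first so that a higher normalized
    score always represents a more positive perception.
    """
    r = np.asarray(responses, dtype=float)
    if reverse:
        r = (scale_max + 1) - r
    return r / scale_max

def section_score(normalized_items):
    """Average normalized item scores within one survey section."""
    return float(np.mean(normalized_items))

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for a (respondents x items) matrix of item scores."""
    X = np.asarray(item_matrix, dtype=float)
    k = X.shape[1]
    sum_item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of respondent totals
    return k / (k - 1) * (1.0 - sum_item_var / total_var)
```

For example, a response of 4 on a 5-point scale normalizes to 0.8, and a section score is simply the mean of its normalized items; mixing 5- and 7-point items within a section is possible because both are mapped to the same 0-1 range first.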
The qualitative portion of our evaluation utilized participatory evaluation methods in the design and implementation of the evaluation [22,Reference Sette23]. The team co-created questions utilized in community storytelling sessions to further explore the meaning of the quantitative results, particularly in survey topic areas that highlighted different perceptions across respondent groups [Reference Salm24]. Storytelling sessions were conducted via a meeting platform and verbatim notes were taken by two individuals and combined. Community evaluator SLX analyzed the qualitative data from the storytelling sessions using a deductive analytic approach, summarized the results, and identified themes within each topic area.
Results
Quantitative survey results
For the pilot survey, of the 59 surveys sent out, 23 were completed, representing 22 unique respondents (one individual completed a survey representing membership on two boards). The responses are weighted toward those representing two of the five advisory groups (Table 3). Both academic and community partner roles are represented among respondents (39% community respondents). About half of the respondents have three or more years of experience with their group, with slightly more than half having participated in more than six meetings over the last 12 months. Results of the trust typology assessment indicate that approximately 70% of respondents described trust at the proxy or reflective level (see Table 2 for typology).
Table 3. Characteristics of pilot survey respondents

Respondents’ evaluations of the four areas of community-engaged group processes were positive, with means ranging from 0.848 (SD = 0.112) for shared leadership to 0.894 (SD = 0.151) for shared values, suggesting that respondents largely agreed strongly with positive evaluations of how the group operated (Table 4). Cronbach’s alphas for each group process scale were high (above 0.90) except for leadership (0.694), suggesting moderate to strong internal consistency of these measures. Respondents’ assessment of research-focused outcomes (see Table 2 for questions) was slightly less positive (mean = 0.787, SD = 0.142), suggesting there was less agreement on the impact of the group on these outcomes.
Table 4. Distribution of Likert response scores* by survey section

* Likert responses were expressed as a fraction of the maximum response (5- or 7-point scale), with reverse coding as needed so that a higher score represents a more positive perception, and then averaged across questions within the section to produce a score.
We also assessed bivariate associations between individual survey items and variables identifying advisory group, respondent role (community, faculty, staff), tenure, and frequency of involvement. Only one statistically significant association emerged: a stronger belief in the group’s ability to resolve conflicts was associated with attending more meetings (p = 0.004). None of those who attended 0–3 meetings in the last year agreed that the group worked together to resolve conflicts when they occurred, whereas 57.2% of those who attended 7–49 meetings and 66.7% of those who attended 50 or more meetings completely agreed.
Outcomes with marginal significance (p-value of ≤ 0.10), considered for further exploration through storytelling sessions, included those associated with a specific advisory group (questions regarding shared partnership values, and research outcomes), respondent role (staff may have greater comfort in group decision-making, and community may feel pressured to conform in decision-making), and frequency of involvement (questions regarding shared partnership values, collaborative decision-making, and education and research initiatives).
Qualitative action-oriented results
A total of eight individuals, representing five partnerships, participated in three storytelling sessions. Qualitative results indicated that participants widely agreed that the center prioritized CE through structured mechanisms such as advisory councils, steering committees, and training programs. Community partners indicated that their impact was greater when they actively shaped decisions rather than just providing input. One participant shared how a community-driven idea for a cardiovascular health app became a reality, demonstrating that when institutions listen to communities, engagement leads to real outcomes. Another example highlighted research team responsiveness: the team simplified presentations, added cultural considerations, and made materials more practical. Community members described the long-term value of including their expertise. One said, “We’re not just here to advise – we bring solutions. When we are truly included, the work is better, and the community benefits.”
Five thematic, action-oriented outcomes emerged (Table 5), identifying barriers to engagement and cultural nuances. Stories highlighted institutional dominance over final funding, decision-making, and research priorities as barriers. They emphasized the need for more equitable payment mechanisms to minimize financial strain on community partners and for hiring community members in research roles rather than relying solely on stipends. Additionally, they identified a need for greater transparency in decision-making to ensure advisory board contributions are fully acted upon.
Table 5. Institutional recommendations for strengthening equity-centered practices

Participants also shared barriers stemming from the lived realities of participating. Community partners must consider their reputation and association with the project, as they will continue to live in the community beyond the project’s duration. This is coupled with the emotional labor required to participate in a project aimed at improving disparities in their community.
The participants also identified communication barriers with academic partners, specifically an overreliance on emails and lack of follow-up on decisions. They suggested institutionalizing formalized decision documentation and communication channels to enhance transparency and accountability. Community respondents recommended expanding streamlined, multimodal communication strategies that utilize visual displays of information and are translated into multiple languages. Finally, participants emphasized that not seeing academic partners in community-driven spaces was a barrier to engagement, making the partnership feel transactional rather than relational.
Discussion
This study describes the participatory development and piloting of an equity-focused evaluation tool, including a survey and community storytelling, to address a gap in measuring the effectiveness of institutional advisory groups that span academic missions and applying the results [Reference Oetzel, Boursaw and Magarati8,Reference Oetzel, Wallerstein and Duran9]. The survey moves towards the goal of comparing across groups and exploration of associations between how groups function and their institutional impact [Reference Cargo and Mercer25,Reference Wiggins, Chanchien Parajón, Coombe, Wallerstein, Duran, Oetzel and Minkler26]. While multiple tools exist to evaluate CE within research projects, including those we adapted from [Reference Wallerstein, Oetzel and Sanchez-Youngman4,Reference Goodman, Sanders Thompson and Johnson6,Reference Lucero7,Reference Jagosh, Macaulay and Pluye27], our tool contributes to the literature in a number of important ways. First, we evaluated partnership processes and experiences within advisory groups across academic health or research center functions to assess community-centered outcomes. Second, our multi-methods participatory evaluation approach goes beyond measuring collaboration to building capacity for collaborative learning and adaptation, centering relational dynamics and power-sharing as evaluative endpoints. In doing so, this approach operationalizes equity as a continual, measurable practice – not just an intention – advancing the science of CE toward deeper accountability and impact.
In our pilot survey, we found that community-academic advisory groups within an NIH-funded center were generally positive about how the groups functioned and identified high levels of trust on the trust typology scale. As expected among these respondents with long-standing partnerships, the intensity of engagement was related to enhanced perceptions of positive group functioning and of the ability to manage conflict [Reference Allen, Salsberg and Knot28]. For this and the marginal outcomes, results may also suggest reverse causality, whereby those who perceive positive outcomes (e.g., the ability to resolve conflicts, and the marginal associations with the ability to reach consensus and promote decisions that affect communities) tend to engage more deeply through more frequent or persistent meeting attendance.
While quantitative measures provided a useful starting point, results suggested that the survey missed relational aspects of engagement. For example, higher community respondent scores related to pressure to conform to group decisions suggested the need to explore how power-sharing and decision-making function in practice and relate to outcomes. Respondents in storytelling sessions reiterated their overall satisfaction, but highlighted actionable challenges in power-sharing, decision-making, funding equity, and trust-building. For example, while survey responses suggested high levels of trust, storytelling sessions highlighted that trust is an ongoing process rather than a fixed outcome and pointed to specific actions, such as increasing meetings in community settings and community-responsive engagement strategies, that could be undertaken to improve trust, particularly in novice partnerships. Paired with qualitative and participatory methods, such as storytelling sessions, co-designed assessment tools offer a richer understanding of how partnerships function for those most affected. They also reveal gaps between institutional perceptions and community realities.
Additional outcomes suggest that participatory evaluation processes have utility. Participatory evaluation distributes power so that community members are not just being evaluated – they co-create the assessment itself [Reference Wiggins, Chanchien Parajón, Coombe, Wallerstein, Duran, Oetzel and Minkler26]. This approach prioritizes participatory and qualitative methods to capture the lived experiences of both academic and community partners. Integrating structured survey data with open-ended discussions, the evaluation highlighted not just whether engagement was occurring, but how it was experienced. Community partners shared where inequities persist, how trust is built or broken, and what structures support or hinder meaningful participation. Addressing these barriers will allow advisory boards to be true spaces for shared governance, not just symbolic representation. By centering participatory evaluation as both a process and a product, this equity-centered evaluation model extends beyond conventional mixed methods to transformative engagement, making the evaluation itself an intervention that reshapes how knowledge is produced and utilized.
Participatory and qualitative approaches were central to producing meaningful, context-specific insights. The collaborative development of the survey tool – with input from both community and academic partners – functioned as both a process measure and an outcome in itself, reflecting the values of co-creation and shared ownership. Insights from storytelling sessions added necessary depth to the survey findings, revealing power dynamics, barriers to engagement, and strategies for more equitable collaboration. This mixed-methods design did more than triangulate results; it expanded the scope of what counted as evidence by incorporating experiential knowledge and community-defined indicators of success. These findings affirm that community-engaged analysis contributes to more actionable and trustworthy knowledge.
Limitations of the study include our small sample size within a single center. However, even given the small sample and the overrepresentation of two of the groups, our findings captured differences in outcomes across groups with differing levels of experience, which is reassuring regarding the survey’s ability to capture variation. Our questions were largely drawn from the Engage for Equity CE Survey [Reference Oetzel, Boursaw and Magarati8,Reference Boursaw, Oetzel and Dickson29], which has been validated over a number of iterations of development. While our adaptation built on this prior work, and our CoP group reviewed and edited all items over multiple cycles for clarity and community comprehension and to ensure that the questions reflected real concerns and experiences, we did not perform formal validity testing. CoP members’ input was crucial in identifying expected outcomes that aligned with community priorities and conceptualizations of impact. Furthermore, while individual items did show variability in responses, our scales for community-engaged group processes were largely positive, despite differing perceptions from the storytelling sessions. This may represent social desirability bias, though this seems unlikely given that the survey was anonymous and participants were willing to share concerns in group storytelling sessions. An additional limitation is that clinical outcomes were not evaluated, as the center does not have a clinical component. Future research should validate the use of this tool across a larger set of advisory groups, inclusive of those with a clinical focus.
For the qualitative portion of the toolkit, despite the small number of participants, the depth and consistency of participants’ reflections offer evidence of thematic saturation, a recognized marker of rigor in qualitative research [Reference M.King, Tchouankam and Keeler20]. Within a participatory and experiential evaluation, repeated patterns across diverse individuals – especially when drawn from structurally marginalized communities – can yield important insights. The convergence of lived experiences across multiple storytelling sessions in this evaluation highlights systemic dynamics rather than isolated anecdotes, challenges dominant paradigms that overlook the analytic power of fewer, yet deeply engaged, voices, and reinforces calls for methodological pluralism in equity-centered research [Reference Wallerstein, Oetzel and Sanchez-Youngman4,Reference Salm24].
Moving forward, the tool will be implemented annually across the institution, with results informing group process capacity-building and troubleshooting. Survey components will be reviewed across outcomes, with the process housed in the institution’s CTSA-supported CE team. This multi-method approach positions evaluation as a vehicle for learning and institutional transformation, guiding capacity-building efforts based on annual results.
This study demonstrates the value of participatory evaluation in institutional learning and transformation. By embedding lived experience into both the content and process of evaluation, equity becomes measurable and actionable. Future efforts should prioritize iterative, community-driven approaches that build institutional capacity, foster authentic collaboration, and move beyond inclusion toward true power-sharing.
Author contributions
Michele L. Allen: Conceptualization, Data curation, Formal analysis, Funding acquisition, Investigation, Supervision, Writing-original draft, Writing – review and editing; Yasamin Graff: Conceptualization, Investigation, Writing – original draft, Writing – review and editing; Caroline Carlin: Data curation, Formal analysis, Writing – original draft, Writing – review and editing; Antonia Apolinario-Wilcoxon: Investigation, Writing – review and editing; Paulette Baukol: Investigation, Writing – review and editing; Kristin Boman: Investigation, Writing – review and editing; LaPrincess C. Brewer: Conceptualization, Investigation, Writing – review and editing; Roli Dwivedi: Investigation, Writing – review and editing; Milton Eder: Investigation, Writing – review and editing; Susan Gust: Investigation, Writing-review and editing; Mikow Hang: Conceptualization, Project administration, Writing – review and editing; Walter Novillo: Investigation, Writing – review and editing; Luis Ortega: Investigation, Writing – review and editing; Shannon Pergament: Investigation, Writing-review and editing; Chris Pulley: Investigation, Writing – review and editing; Rebecca Shirley: Investigation, Writing – review and editing; Sida Ly-Xiong: Conceptualization, Data curation, Formal analysis, Investigation, Writing – original draft, Writing-review and editing.
Funding statement
Research reported in this publication was supported by the National Institute of Minority Health and Health Disparities and the National Center for Advancing Translational Science of the National Institutes of Health under award numbers P50MD017342 and UM1TR004405. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Competing interests
None.