Introduction
Clinical and translational science (CTS) encompasses multistage scientific investigation, from fundamental discoveries in the laboratory, clinic, and community to interventions translated into new treatments and approaches for improving the health of individuals and populations [1]. The National Center for Advancing Translational Sciences (NCATS) in the United States initiated the Clinical and Translational Science Awards (CTSA) program in 2006 and has invested about half a billion dollars annually in a national network of more than 50 medical research institutions (also called “hubs”) [2]. CTSA hubs vary in size, goals, priorities, services, and geographic location, but all aim to accelerate the translation of scientific discoveries into improved patient care.
Evaluating a CTSA program hub is essential, since hubs expend massive public funding annually [3] and spend considerable time and resources building the CTS pipeline. However, it is complicated and challenging to demonstrate that a CTSA hub is “well implemented, efficiently managed, adequately resourced and demonstrably effective,” as stated in CTSA-specific evaluation guidelines [Reference Trochim, Rubio and Thomas4]. Therefore, CTSA evaluators have explored an array of feasible evaluation approaches, measures, and models, including common metrics [Reference Rubio, Blank and Dozier5], logic models [6], a return-on-investment model [Reference Grazier, Trochim, Dilts and Kirk7], Developmental Evaluation and the Context Input Process Product Model [Reference Zhang, Zeller and Griffith8], the payback framework [Reference Rollins, Llewellyn, Ngaiza, Nehl, Carter and Sands9], and a mixed-methods approach combining logic models and expert panel evaluation [Reference Wooten, Rose, Ostir, Calhoun, Ameredes and Brasier10]. Bibliometrics [Reference Yu, Van and Patel11–Reference Sayavedra, Hogle and Moberg15] and social network analysis (SNA) [Reference Bian, Xie, Topaloglu, Hudson, Eswaran and Hogan16–Reference Vacca, McCarty, Conlon and Nelson18] have also been explored for their feasibility in charting the research outcomes, collaboration, and impact of CTSA-supported activities.
Bibliometrics has been widely used to outline the research landscape and disclose the direct outcome and impact of scientific investigations through quantitatively analyzing a chosen group of publications. In biomedical and health sciences, bibliometrics is a core method to evaluate research impact [Reference Milat, Bauman and Redman19]. The National Institutes of Health (NIH) requires each CTSA program hub to track and report the annual publication count. A CTSA consortium-led evaluation workgroup also identified shared interests in using publication analysis to assist in assessing annual programs of individual CTSAs [Reference Frechtling, Raue, Michie, Miyaoka and Spiegelman20]. Recent evaluation studies by CTSAs also confirmed the validity and feasibility of bibliometrics as a critical approach to CTSA-supported translational research evaluation [Reference Llewellyn, Carter, DiazGranados, Pelfrey, Rollins and Nehl12,Reference Schneider, Kane and Rainwater14]. For example, bibliometrics has been applied to assess (1) an individual CTSA program hub [Reference Yu, Van and Patel11], a group of CTSA program hubs [Reference Schneider, Kane and Rainwater14], overall CTSA consortium or a specific program across CTSAs [Reference Llewellyn, Carter, DiazGranados, Pelfrey, Rollins and Nehl12,Reference Llewellyn, Carter, Rollins and Nehl13,Reference Sayavedra, Hogle and Moberg15,Reference Qua, Yu, Patel, Dave, Cornelius and Pelfrey21]; (2) research productivity and citation impact using both basic publication/citation counts and advanced citation impact indicators (e.g., iCite’s relative citation ratio, Elsevier’s Field-Weighted-Citation-Impact, and Web of Science’s Category Normalized Citation Impact) [Reference Yu, Van and Patel11–Reference Sayavedra, Hogle and Moberg15,Reference Qua, Yu, Patel, Dave, Cornelius and Pelfrey21]; (3) interdisciplinary or inter-CTSA collaborations [Reference Yu, Van and Patel11,Reference Llewellyn, Carter, DiazGranados, Pelfrey, Rollins and 
Nehl12]; and (4) research areas aligned with the translational spectrum [Reference Yu, Van and Patel11,Reference Llewellyn, Carter, DiazGranados, Pelfrey, Rollins and Nehl12].
Another CTS evaluation approach, SNA, focuses on the patterns of interaction between social entities [Reference Bian, Xie and Hudson22]. It is particularly suited to understanding the multidisciplinary collaborations and team science essential to CTS’s success [Reference Nagarajan, Lowery and Hogan23,Reference Luke, Carothers and Dhand24]. Several CTSA programs have produced use cases of applying SNA to evaluate the impact of their supported translational teams, using grants, publications, and surveys to measure and visualize temporal evolution and cross-discipline collaboration patterns. For example, SNA was applied to grant data to compare biomedical research collaborations before and after CTSA awards [Reference Nagarajan, Peterson, Lowe, Wyatt, Tracy and Kern17] or to identify “influential” researchers and “potential new collaborations” [Reference Bian, Xie, Topaloglu, Hudson, Eswaran and Hogan16]. Publications were either used as the sole data source to extend SNA to bibliometric network analysis by examining coauthorship [Reference Yu, Van and Patel11,Reference Sorensen, Seary and Riopelle25] or combined with grants to explore CTSA-supported research collaboration patterns [Reference Vacca, McCarty, Conlon and Nelson18,Reference Luke, Carothers and Dhand24]. In addition, a couple of CTSA hubs used survey data to investigate collaboration networks at the macro- (i.e., entire network) and meso-levels (e.g., across departments) [Reference Vacca, McCarty, Conlon and Nelson18,Reference Dozier, Martina and O’Dell26], or to design a program that creates collaborations between previously unconnected researchers [Reference Vacca, McCarty, Conlon and Nelson18]. Therefore, CTSA evaluators have experience adopting both SNA and bibliometrics to understand the scale and scope of their supported teamwork, identify missing connections, connect researchers, and improve team effectiveness.
Finally, with increasing social media usage in scholarly communication, research enterprise stakeholders (e.g., sponsors, researchers, and evaluators) have pressed for alternative metrics, also known as altmetrics, to improve the evaluation of research output [27]. Altmetrics is complementary to citation-based metrics for research impact evaluation, tracking immediate online attention within the scientific community such as usage (e.g., downloads, views), mentions (e.g., news, blogs, Wikipedia), and social media (e.g., Twitter/Facebook) [Reference Akers28]. Researchers have extensively applied altmetrics to measure or identify the social impact of health sciences research [Reference Giustini, Axelrod, Lucas and Schroeder29,Reference Punia, Aggarwal, Honomichl and Rayi30]. In addition, quite a few studies have explored the correlation between traditional citation measures (e.g., citation counts) and altmetrics [Reference Giustini, Axelrod, Lucas and Schroeder29–Reference Luc, Archer and Arora33]. Two recent CTSA evaluation studies reported using both bibliometrics and altmetrics to assess the short- and long-term impact of translational research [Reference Llewellyn, Weber, Fitzpatrick and Nehl34] and to explore the association between those measures [Reference Llewellyn, Weber and Nehl35], further validating the potential of using both bibliometric and altmetric measures for CTSA evaluations.
Therefore, building on our previous study [Reference Yu, Van and Patel11], which assessed the bibliometrics approach for publications citing the North Carolina Translational and Clinical Sciences Institute (NC TraCS), the CTSA hub for the University of North Carolina at Chapel Hill (UNC-CH), from 2008 to April 2017, this study took a mixed-metrics approach by applying bibliometrics, SNA, and altmetrics to an expanded publication year range (i.e., 2008–March 2021). In particular, we provide insights into the potential influence of several programmatic changes at NC TraCS, including the creation of two new programs (i.e., Inclusive Science and Team Science), the inclusion of a required Community and Participant Engagement Plan for all our pilot grant applications, targeted pilot grant Requests for Applications (RFAs) focused on addressing health equity, and the creation of a formal partnership with North Carolina Agricultural and Technical State University (NC A&T), the largest Historically Black College and University (HBCU) in North Carolina. During this period (2017–2021), the world, nation, and UNC-CH were also impacted by and responded to the COVID-19 pandemic, resulting in potential changes in our supported research output.
In addition to the metrics and measures in our 2017 pilot study, this study examined two new metrics for measuring bibliometric topics (i.e., Topic Prominence Percentile and Approximate Potential to Translate). In particular, we investigated NC TraCS-supported research topics pertinent to health disparities. We explored the following research questions (RQs):
RQ1: How have the research productivity and impact of the NC TraCS-supported CTS enterprise at UNC-CH changed since 2017?
RQ2: How has the research collaboration catalyzed by NC TraCS-supported research changed since 2017?
RQ3: (a) How are NC TraCS-supported research topics ranked by prominence and translational potential, and (b) how do these research topics address health disparities?
RQ4: Is there any relationship between bibliometric, altmetric, and research topic measures?
Methods
Data Sample
We included NC TraCS-supported publications from September 1, 2008, to the most recent NIH annual progress report period, ending March 23, 2021, resulting in a total of 1154 eligible publications. We define NC TraCS-supported publications as those in which the authors acknowledged and cited the NC TraCS grant as their research support. The bibliographic records of the 1154 publications were retrieved from the NC TraCS account at PubMed/the National Center for Biotechnology Information (NCBI), downloaded from PubMed in Medline format, and used as the master data file for analysis.
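A Medline-format export can be parsed with a short script. The following is a minimal, stdlib-only sketch (illustrative, not the code used in this study) that turns a PubMed Medline export into one dictionary per record, keeping field tags such as PMID, TI (title), AB (abstract), AU (author), and MH (MeSH term):

```python
# Minimal sketch of parsing a PubMed Medline-format export.
# Each record becomes a {tag: [values]} dictionary; blank lines
# separate records, and lines starting with six spaces continue
# the previous field's value.

def parse_medline(text):
    """Parse Medline-format text into a list of {tag: [values]} records."""
    records, current, last_tag = [], {}, None
    for line in text.splitlines():
        if not line.strip():                 # blank line ends a record
            if current:
                records.append(current)
                current, last_tag = {}, None
            continue
        if line.startswith("      ") and last_tag:   # continuation line
            current[last_tag][-1] += " " + line.strip()
        else:
            tag, _, value = line.partition("- ")     # e.g. "PMID- 12345"
            tag = tag.strip()
            current.setdefault(tag, []).append(value.strip())
            last_tag = tag
    if current:                              # flush the final record
        records.append(current)
    return records
```

The resulting list of dictionaries can then serve as the master data file for the downstream matching and screening steps.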
Data Tools
This study used the following tools to collect and analyze publication data, consistent with what we used in the 2017 study. However, we adopted a few additional bibliometric and topic measures recently made available by the tools below.
Elsevier Scopus covers a broader spectrum of research publications across disciplines than its counterpart, Web of Science [Reference Pranckutė36]. UNC-CH maintains an active subscription to Scopus, giving us access to citation impact measures (e.g., citation counts and comparative citation impact ratios), the SciVal Topic Prominence Percentile (STPP) [37], and PlumX metrics [38]. The citation impact data in this study were gathered in the same manner as in the 2017 study, by searching and matching the citation fields (i.e., PMID, DOI, and title) of the PubMed-exported publication records in Scopus. We added STPP and PlumX metrics to this study (defined below), making this the first CTSA evaluation to utilize these new Scopus metrics and data sources. While the STPP of each matched Scopus citation was collected through Web Scraper [39], the PlumX metrics were collected via the Scopus API [40].
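The searching-and-matching step can be pictured as a cascade that tries the PMID first, then the DOI, then a normalized title. The sketch below is hypothetical: the field names (`pmid`, `doi`, `title`) are illustrative and do not reflect the actual Scopus API schema.

```python
# Hypothetical sketch of matching PubMed records to Scopus records
# by PMID, then DOI, then a punctuation-insensitive title comparison.

def normalize_title(title):
    """Lowercase and strip non-alphanumerics for fuzzy title matching."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

def match_records(pubmed_records, scopus_records):
    """Return {pmid: matched Scopus record} using a PMID/DOI/title cascade."""
    by_pmid = {r["pmid"]: r for r in scopus_records if r.get("pmid")}
    by_doi = {r["doi"].lower(): r for r in scopus_records if r.get("doi")}
    by_title = {normalize_title(r["title"]): r
                for r in scopus_records if r.get("title")}
    matches = {}
    for rec in pubmed_records:
        hit = (by_pmid.get(rec.get("pmid"))
               or by_doi.get((rec.get("doi") or "").lower())
               or by_title.get(normalize_title(rec.get("title") or "")))
        if hit:
            matches[rec["pmid"]] = hit
    return matches
```

Records that fail all three lookups are left unmatched for manual review rather than guessed at.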
NIH iCite [41]: Since the NIH Office of Portfolio Analysis developed and validated the Relative Citation Ratio (RCR) [Reference Hutchins, Yuan, Anderson and Santangelo42] as an article-level comparative citation impact indicator, RCR has been frequently used by evaluators and researchers to assess the impact of research publications supported by public funds, including CTSAs [Reference Yu, Van and Patel11,Reference Llewellyn, Carter, Rollins and Nehl13,Reference Sayavedra, Hogle and Moberg15]. iCite has since launched a Translation module and a new publication-level metric, the Approximate Potential to Translate (APT). While the Translation module compares how close two analysis groups of articles are to human clinical application, the APT score predicts future translational progress in biomedical research.
VOSviewer (Version 1.6.16) [Reference Van Eck and Waltman43]: As a specialized bibliometric and network analysis application with excellent usability, VOSviewer has been widely used for analyzing and visualizing coauthorship networks across disciplines [44], including in our pilot study.
Data Measures
Bibliometrics
We continued to use validated bibliometric measures from previous studies and other CTSA bibliometric evaluation reports, including publication counts, citation counts, average cites per year, and field- and time-normalized comparative citation ratios at the article level, such as the Field-Weighted Citation Impact (FWCI), Citation Benchmarking (CB), and Relative Citation Ratio (RCR) [Reference Yu, Van and Patel11,Reference Llewellyn, Carter, Rollins and Nehl13–Reference Sayavedra, Hogle and Moberg15,Reference Frechtling, Raue, Michie, Miyaoka and Spiegelman20] (Table 1).
*New measures adopted in this study compared to measures in the 2017 pilot study; TraCS, Translational and Clinical Sciences Institute; CTSA, Clinical and Translational Science Award; FWCI, Field-Weighted Citation Impact; CB, Citation Benchmarking; RCR, Relative Citation Ratio; NC, North Carolina; HBCU, Historically Black Colleges & Universities; STPP, SciVal Topic Prominence Percentile; APT, Approximate Potential to Translate.
Collaboration network analysis
The collaboration measures included both intra-organization collaboration (e.g., the UNC unit collaboration network) and inter-organization collaboration (i.e., inter-NC CTSA collaboration, inter-UNC system collaboration, and collaborations between NC TraCS and HBCUs). Our pilot study found that approximately half of the NC TraCS-supported publications were generated in collaboration with researchers at other CTSA hub institutions [Reference Yu, Van and Patel11]. Therefore, this study explores more granular-level collaborations between NC TraCS and local institutions in North Carolina and HBCUs across the nation.
Altmetrics
Altmetric.com and PlumX are the two major commercial altmetrics data providers [Reference Ortega45]. We chose PlumX metrics from Scopus as additional measures for this study. Five comprehensive article-level PlumX metrics were exported via the Scopus API on March 30, 2021: citations (e.g., clinical citations, patent citations), usage (e.g., abstract views, downloads), captures (e.g., bookmarks, reference manager saves), mentions (e.g., blog posts, news, Wikipedia mentions), and social media (e.g., tweets, Facebook).
Topic measures
The topic measures in this study employed two new metrics: STPP and APT. Elsevier developed STPP, an article-level metric that shows the current momentum or visibility of a topic [37]. It is calculated by weighting three metrics for a publication clustered in a topic: citation count, Scopus view count, and average CiteScore (Scopus’s journal impact metric). A high STPP means the topic has high momentum, is likely to be well funded, and thus has higher grant success rates. The APT score is generated by a machine learning algorithm that considers the citations a publication receives from clinical articles and its citation network [41]. In addition to an average APT score for predicting topic translational potential, the Translation module in iCite provides the average number of articles in an analysis group across three categories (i.e., Human, Animal, and Molecular/Cellular Biology), classified based on Medical Subject Heading (MeSH) terms. Compared with the topic clustering measure in our 2017 pilot study, which extracted key terms from titles and abstracts and demonstrated the translational phases qualitatively, we believe the two new topic metrics (STPP and APT) can capture the impact and translational themes of supported publications more quantitatively.
In addition, this study focused on health disparities addressed by NC TraCS-supported publications. Words, terms, and their variations related to health disparity and inequality were searched against the citation fields (i.e., title, abstract, author keywords, and MeSH terms) of our master dataset (N = 1154), including health disparity (disparities), health equity/inequity, rural health/communities/hospitals, healthcare accessibility, African Americans, Hispanic, Latino/Latinos, race/racial, racism, ethnicity, underserved, minority, people of color, poverty, socioeconomic factors, and population health. Retrieved publications were manually screened for relevance to the topic analysis.
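As an illustration of this screen, the sketch below (using a hypothetical subset of the full term list) flags a record when any disparity-related term appears in its title, abstract, author-keyword, or MeSH fields; flagged records would then go to manual review, as described above.

```python
import re

# Illustrative keyword screen for health-disparity topics.
# DISPARITY_TERMS is a hypothetical subset of the full list used in the study;
# Medline field tags: TI = title, AB = abstract, OT = author keywords, MH = MeSH.
DISPARITY_TERMS = [
    r"health\s+disparit(?:y|ies)",
    r"health\s+(?:in)?equit(?:y|ies)",
    r"rural\s+(?:health|communities|hospitals)",
    r"african\s+americans?",
    r"underserved",
    r"minority",
    r"socioeconomic\s+factors",
]
PATTERN = re.compile("|".join(DISPARITY_TERMS), re.IGNORECASE)

def flag_disparity(record, fields=("TI", "AB", "OT", "MH")):
    """Return True if any searched field mentions a disparity-related term."""
    text = " ".join(v for f in fields for v in record.get(f, []))
    return bool(PATTERN.search(text))
```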
Correlation measure
The correlations between bibliometric measures (i.e., citation counts, FWCI, CB, RCR), altmetrics (i.e., PlumX Captures, Citations, Usage, Mentions, and Social Media), and topic measures (i.e., STPP) were assessed with Spearman’s rho correlation coefficients [Reference Giustini, Axelrod, Lucas and Schroeder29,Reference Punia, Aggarwal, Honomichl and Rayi30]. This study is the first CTSA evaluation to formally investigate these correlations using the data sources provided by Scopus, contributing to the growing literature on this topic [Reference Richardson, Park, Echternacht and Bell46–Reference Llewellyn and Nehl49].
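For reference, Spearman’s rho is the Pearson correlation of the two variables’ average ranks, with ties receiving the mean of their rank positions. A stdlib-only sketch of the statistic (the same quantity SPSS reports, computed here by hand rather than with a statistics package):

```python
# Spearman's rho as Pearson correlation of average (tie-adjusted) ranks.

def _ranks(values):
    """Return 1-based average ranks; tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                          # extend over the tied block
        avg = (i + j) / 2 + 1               # mean rank for the block
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation of two equal-length sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Because it operates on ranks, the statistic captures any monotonic association between, say, FWCI and a PlumX count, without assuming the heavily skewed raw distributions are normal.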
Data Analysis
This study compared data measures from 2017 [Reference Yu, Van and Patel11] and 2021. We used Microsoft Excel and SPSS for quantitative statistics and testing. First, we generated descriptive statistics and visualizations to demonstrate the longitudinal change in NC TraCS-supported publications across all three measures (bibliometrics, collaboration, and topics) from 2008 to 2021. Second, we conducted paired-samples t-tests to detect any statistically significant differences in citation impact (i.e., FWCI, CB) between our 2017 pilot and 2021 analyses. Third, we calculated Spearman’s rho correlation coefficients in SPSS to test the statistical significance of the correlations. Fourth, we constructed a co-occurrence Medical Subject Headings (MeSH) term network map for the health disparity topic by identifying the most frequently occurring MeSH terms.
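The counting step behind such a co-occurrence map can be sketched as follows: for each record, every unordered pair of its MeSH terms is counted once, and the most frequent pairs become the weighted edges of the network (a simplified sketch, not the network tool’s actual implementation):

```python
from collections import Counter
from itertools import combinations

# Count how often each pair of MeSH terms co-occurs within a publication
# record; the top pairs become the weighted edges of a co-occurrence map.

def mesh_cooccurrence(records, top_n=10):
    """Return the top_n most frequent MeSH term pairs across records."""
    pair_counts = Counter()
    for rec in records:
        terms = sorted(set(rec.get("MH", [])))      # dedupe within a record
        pair_counts.update(combinations(terms, 2))  # unordered term pairs
    return pair_counts.most_common(top_n)
```

Sorting each record’s terms first guarantees that (“Aging”, “HIV Infections”) and (“HIV Infections”, “Aging”) are counted as the same pair.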
Results
Bibliometric Measures
Comparison of Research Productivity and Citation Impact
The average number of NC TraCS-supported publications increased from 82 per year (identified in the 2017 pilot study) to 87 per year in this study (Table 2). We excluded the year 2021 because it represents only partial output (January to March). Since 2017, both research productivity and citation counts have continued to grow annually (Supplement Figure 1). In addition, the total citation count of NC TraCS-supported articles increased from 24,010 as of April 20, 2017 [Reference Yu, Van and Patel11] to 53,560 as of April 27, 2021. The average cites per NC TraCS-supported publication also improved, from 33 in 2017 to 48 in 2021.
*For both the pilot study and the current study, when calculating the average number of publications per year and the average cites per year, we excluded the partial output of a year (i.e., January to March). For example, in the pilot study, the scholarly output and citation counts of the year 2017 were excluded. RCR, Relative Citation Ratio; SEM, standard error of the mean.
The paired t-test shows that there was no significant difference in FWCI scores between 2017 (Mean = 3.89, Standard Deviation (SD) = 14.48) and 2021 (M = 4.01, SD = 19.93); t(639) = −0.432, P = .666. However, there was a significant difference in CB scores between 2017 (M = 74.81, SD = 21.10) and 2021 (M = 77.27, SD = 17.35); t(604) = −5.850, P < .001. Even though publication productivity increased 1.5-fold since our pilot study, 86.57% of these publications were still above the average CB (50th percentile) in 2021 (Supplement Figure 2), slightly more than the 83% reported in the pilot study.
Collaboration Measures
UNC unit collaboration
Compared with our pilot study, NC TraCS-supported research collaboration has grown in the total number of supported authors, most published authors, the total number of UNC units, and the average number of coauthors of the most published authors. Notably, the collaboration network of the most published UNC authors (each with >5 publications) across internal units (Supplementary Figure 3) shows two additional UNC units (i.e., the UNC School of Social Work and the Renaissance Computing Institute) compared to our 2017 pilot study. However, during the latest 30-month period (09/01/2018–02/28/2021), the average number of coauthors of each most published author decreased to 4, from 6 in the previous 30-month period examined (03/01/2016–08/31/2018) (Table 3).
Note: UNC, University of North Carolina at Chapel Hill.
NC TraCS-supported local collaboration
Across the 1154 publications, NC TraCS-supported researchers coauthored with local researchers from 61 organizations in North Carolina, including the two other CTSAs (i.e., Duke University and Wake Forest University) and 8 of the 16 schools in the UNC system (e.g., North Carolina State University, East Carolina University, UNC-Charlotte, UNC-Greensboro) (Table 4).
Note: NC, North Carolina; UNC, University of North Carolina at Chapel Hill; NC A&T University, North Carolina Agricultural and Technical State University.
NC TraCS-HBCU collaboration
NC TraCS-supported researchers also collaborated with researchers at four HBCUs and one CTSA in which an HBCU participates (i.e., the Georgetown-Howard Universities Center for Clinical & Translational Science (GHUCCTS)). These collaborations produced five coauthored publications with North Carolina Central University, two each with North Carolina A&T State University and Howard University, and one with Meharry Medical College (Supplementary Table 1).
Altmetrics Measures
The PlumX metric scores of the 1154 NC TraCS-supported publications are summarized in Supplementary Table 2, and the publications with the highest PlumX scores are listed in Supplementary Table 3. Notably, among 52,269 altmetric citations, there are 287 clinical and 182 patent citations. The 619,722 total usages (e.g., clicks, views, downloads) include 322,158 abstract views and 261,976 full-text views. Of 108,360 total captures, 74,334 are Mendeley saves. In addition, these publications were mentioned 118 times in blogs and 1068 times in news; they appeared on Facebook 16,481 times and were tweeted 8,908 times.
Topic Measures
SciVal Topic Prominence Percentile (STPP)
Ninety-six percent of NC TraCS-supported publications (N = 1,074) have an above-average STPP (>50th percentile), and 64% of the total publications (N = 730) fall in the 90th–99th STPP percentile (Fig. 1).
Approximate Potential to Translate (APT)
The average APT that NIH iCite generated for the 1154 NC TraCS-supported publications is 54.2%, meaning the likelihood that an NC TraCS-supported article will be cited by a clinical article is 54.2%. In addition, the average “Human” score for NC TraCS-supported articles is 0.80, the average “Animal” score is 0.06, and the average “Mol/Cell” score is 0.11. Overall, NC TraCS-supported research is human and human health-oriented (Fig. 2). According to the Translation module, 524 papers have already been cited by a clinical article.
Focal topics
A total of 177 NC TraCS-supported publications addressed health disparity issues. Fig. 3 presents a co-occurrence network map of the MeSH terms associated with these publication records, illustrating six characteristics of these studies. 1) Regarding populations and demographics, 127 studies focused on females, 108 on males, and 58 on both; middle-aged (79 publications) and adult (75) populations were studied more than other age groups, such as adolescents (30) or those aged 80 and over (23). These studies also focused on populations in North Carolina (30), African Americans (31), the continental African ancestry group (6), and rural populations (14). 2) Regarding diseases and health symptoms, HIV infection is the disease most often addressed in association with health disparities (18), followed by breast neoplasms (9). 3) Regarding treatment, 14 studies reported treatment outcomes, 6 investigated antihypertensive agents, and 5 studied highly active antiretroviral therapy. 4) Regarding research methods, cross-sectional studies were the most frequently employed (24), followed by surveys/questionnaires (22) and cohort studies (17). 5) Risk factors (16) and socioeconomic factors (13) were the two most prominent variables in these studies. 6) Regarding patient–healthcare interaction, NC TraCS-supported research covered a range of topics, including health knowledge, attitudes, and practice (14), patient education (11), healthcare disparities (9), and physician–patient relations (9).
Correlation Measure
Spearman’s rho testing shows that (1) FWCI, cites (i.e., Scopus citation counts), CB, RCR, PlumX-Citations, PlumX-Captures, and PlumX-Social Media counts are all positively correlated with one another (p < .05); and (2) STPP is positively correlated with FWCI, cites, CB, RCR, PlumX-Citations, PlumX-Mentions, and PlumX-Social Media. However, the correlations between STPP and PlumX-Captures (p > .05), STPP and PlumX-Usage (p > .05), and PlumX-Usage and PlumX-Mentions (p > .05) are not statistically significant.
Discussion
Bibliometrics
The results of the bibliometric measures show that NC TraCS-supported publications have continued to grow in productivity and citation impact since our 2017 pilot study. The total number of NC TraCS-supported publications in 2021 is about 1.5 times that measured in 2017, boosting the average number of publications per year from 82 (measured in 2017) to 87. Regarding citation influence, the average cites per year and the mean RCR improved from 33 and 2.26 in 2017 to 48 and 2.58 in 2021, respectively. Notably, we identified a statistically significant difference in the CB (a time-normalized citation impact measure) of NC TraCS-supported articles between 2017 and 2021, indicating these articles achieved a higher citation benchmark in 2021. In addition, slightly more publications were above the average CB in 2021 (86.57%) than in 2017 (83%), showing that most NC TraCS-supported papers have been cited more than the average paper from a similar time and field worldwide.
Collaboration Network
Our collaboration measures illustrate the enlarged scale of research collaboration that NC TraCS-supported research has continued to catalyze since 2017. The number of supported authors doubled in 4 years, and the number of UNC units involved in the most published authors’ collaboration network increased from 7 (2017) to 10 (2021). Additionally, the most published authors had persistently worked with more researchers in each 30-month period examined, reaching an average of six coauthors between March 1, 2016, and August 31, 2018. However, this growing trend was interrupted in the last period examined (September 1, 2018, to February 28, 2021), which could be ascribed to the impact of the COVID-19 pandemic and warrants future study.
In particular, NC TraCS-supported research has reached out to local and national HBCUs. For example, the number of supported publications with NC Central University and NC A&T University has doubled since 2017, and researchers at two more HBCUs (Howard University and Meharry Medical College) participated in NC TraCS-supported research projects. The collaboration with HBCUs can be ascribed to the formalized partnership between UNC-CH and NC A&T University and the new Inclusive Science program, which places particular emphasis on groups that have been historically underrepresented in research or who experience significant health disparities in NC.
Altmetrics
In 2017, we could not conduct an altmetrics analysis because many of the included articles did not have sufficient PlumX data. By 2021, however, all included articles had received altmetric attention. We observed a variety of altmetric types (e.g., clinical citations, patent citations, usage, blog and news mentions, tweets, and Facebook posts) and identified, via PlumX measures, a few star papers whose scores highlight their clinical and social impact (Supplementary Table 3).
Topics
The two new topic measures we adopted enabled us to analyze the impact and prominence of supported research topics quantitatively. Measured by STPP, 64% of NC TraCS-supported scientific publications are in the 90th–99th prominence percentile (Fig. 1), indicating extremely high momentum or visibility in their scientific fields worldwide. Measured by APT, on average, slightly more than half of the publications are likely to be cited by clinical articles, directly contributing to improving human health. Consistent with the APT, the iCite Translation module also shows that NC TraCS-supported publications cover all three translational categories (i.e., Human, Animal, Molecular/Cellular) with a concentration on “Human.” Therefore, we can affirm NC TraCS’s continuing effort in supporting the mission of NCATS.
Furthermore, about 15% of NC TraCS-supported publications promote or support health equity and community health by focusing on minority populations (e.g., African Americans, Hispanic Americans), people of color, underserved communities, and patients in rural areas. Notably, the MeSH terms “North Carolina” and “Community-based participatory research” showed high co-occurrence, reflecting local stakeholder and community engagement. These focal topics are highly consistent with NC TraCS’s modification of its pilot grant applications to require a community and participant engagement plan in every application.
Correlations
Our correlation testing results are consistent with previous studies showing that traditional citation counts are positively correlated with altmetrics scores (e.g., Altmetric, PlumX metrics) [Reference Maggio, Leroux, Meyer and Artino31–Reference Luc, Archer and Arora33]. However, this study went further by testing and identifying positive correlations between comparative citation ratios (e.g., FWCI, RCR) and PlumX metrics, including Citations, Mentions, Captures, and Social Media. In addition, the topic measure STPP is positively correlated with the citation measures and with PlumX metrics (i.e., Citations, Mentions, and Social Media). This is the first CTSA evaluation to use Scopus measures and data sources to explore the correlations between (1) advanced citation measures and altmetrics and (2) a new topic measure and both citation and altmetric measures.
In 2017, NC TraCS submitted a CTSA application in response to a new program announcement from NCATS that emphasized increasing inclusivity and health equity and facilitating team science. Our 2017 study provided evidence for advancing our CTSA programming in several areas, and NC TraCS responded with a set of new programs to address these priorities, including the Inclusive Science program, Team Science, the Community and Participant Engagement Plan, and a formalized partnership with the largest local HBCU. The progress identified since 2017 suggests that these new programs are effective. During the past 4 years, new bibliometric measures, evaluation indicators, and applications have been developed and introduced to the CTSA community, offering additional perspectives for examining CTSA research performance, especially at the individual program hub level. It is important to keep tracking and measuring the success and impact of CTSA-supported translational science with this growing set of metrics and tools.
Our findings have several implications. First, the inclusion of collaboration network analysis allowed us to see where the growth in volume is happening. For example, we are seeing more UNC units represented in NC TraCS-supported publications. As evaluators, we can then assess whether the new UNC units are the same units where the programs have focused their efforts. Similarly, we have partnered with NC A&T University over the past 4 years to increase its research productivity and NIH funding portfolio by providing direct services. We expect an increase in the number of publications with coauthors from NC A&T University over time.
Second, researchers and CTSA institutions are increasingly using social media (e.g., Twitter) to promote and broadly disseminate their research outputs so that more people can benefit from publicly funded scientific discovery. It is therefore becoming important to consider altmetrics as a complement to citation-based measurement of research impact.
Third, with the focus on increasing the translation of findings among CTSAs, it is vital for CTSA evaluators to understand the translational potential of supported research. Therefore, we included STPP and iCite’s Translation module to verify that our CTSA-supported translational research is diverse in scope, addresses a wide spectrum of translational categories, and has high translational potential.
Fourth, we understand that commercial application and database subscriptions are too diverse across CTSA institutions to standardize on one or two methods of assessing research productivity. Thus, the approach we have taken allows an institution to tailor its approach to assessing the growth and impact of its CTSA efforts.
Finally, the mixed approach and its findings helped NC TraCS sharpen its program foci. Our evaluation team meets with the CTSA leadership regularly to discuss findings, inform the strategic direction of initiatives (e.g., pilot award funding, team science opportunities), and increase our focus on community engagement and on supporting projects that enhance health equity and other areas of high priority in our current portfolio of publications.
Conclusion
We expanded our 2017 pilot study and adopted a mixed-metrics approach (bibliometrics, SNA, and altmetrics) to evaluate the research impact of a CTSA program. We described the changes in research productivity, citation impact, and research collaborations. We assessed CTSA-supported research topics for prominence, the extent to which they address health disparities, and their potential to translate into improved human health. We also observed positive correlations between citation measures and altmetrics of CTSA-supported publications. We suggest researchers and institutions utilize social media to disseminate their research output widely to the public. Lastly, we encourage other CTSA programs to take a similar mixed-metrics approach to monitor and assess their programs over time and to share their processes and experiences with the CTSA community so that we can advance translational science evaluation together.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/cts.2022.530.
Acknowledgments
The project described was supported by the National Center for Advancing Translational Sciences (NCATS), National Institutes of Health (NIH), through Grant Award Number UL1TR002489. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Disclosures
The authors have no conflicts of interest to declare.