INTRODUCTION
Evidence supporting the benefit of palliative care in oncology has been mounting, with demonstrated improvements in patient and caregiver quality of life and depression, patient symptoms, caregiver burden, patient survival, hospice use, and patient and family satisfaction (Temel et al., 2010; Greer et al., 2012; Zimmermann et al., 2014; Bakitas et al., 2009a, 2009b, 2015; Davis et al., 2015; Dionne-Odom et al., 2015). Despite professional recommendations and a growing evidence base, the level of integration of early concurrent palliative care into oncology care for advanced cancer remains poor, even in health systems with established palliative care teams (Wentlandt et al., 2012; Smith et al., 2012). While most U.S. hospitals with more than 300 beds report providing palliative care services, "substantial pockets" of limited availability persist, especially in geographically isolated regions (e.g., rural and local community settings) (Morrison & Meier, 2015).
ENABLE (Educate, Nurture, Advise, Before Life Ends) (Bakitas et al., 2009a, 2009b; Dionne-Odom et al., 2015, 2016a, 2016b) is an early concurrent oncology palliative care telehealth model designed to integrate seamlessly with usual oncology care at the time of an advanced cancer diagnosis, especially for rural-dwelling patients. ENABLE includes (1) an initial in-person palliative care consultation and (2) a series of structured phone-based coaching sessions for patients and their family caregivers. Trained palliative care nurse coaches facilitate the phone sessions using a guidebook called "Charting Your Course" (CYC). The topical areas include problem solving, symptom management, self-care, communication and decision-making, and life outlook and review (Steinhauser et al., 2008, 2009). After completion of the phone sessions, nurse coaches follow up monthly with patients and caregivers to reinforce prior content and address new issues. Two randomized controlled trials of ENABLE have demonstrated benefits for patients' symptoms, mood, and survival and for caregivers' mood and burden (Bakitas et al., 2009b, 2015; Dionne-Odom et al., 2015).
Recognizing the limited uptake of early palliative care, we developed a four-year study, funded by the American Cancer Society, to implement ENABLE in four rural-serving community cancer centers in Alabama and South Carolina. Our study has been guided by RE-AIM (reach, effectiveness, adoption, implementation, and maintenance), a widely recognized implementation framework used to evaluate the success and public health impact of translating evidence-based interventions into real-world practice (Glasgow et al., 2001; National Cancer Institute, 2016). During the implementation process, we recognized a gap: there were no tools to measure whether implementation had actually taken place. This paper describes the study's toolkit development and testing phases. We believe that this is the first time that the RE-AIM framework has been adapted to measure implementation of early palliative care.
METHOD
Our study aims were: (1) to assess palliative care practices and prepare cancer centers for organizational change; (2) to tailor and implement ENABLE at each site; and (3) to evaluate ENABLE implementation using the RE-AIM framework. A virtual learning community implementation model was employed to foster shared learning, knowledge, and peer support among participating sites. A "learning community," similar to a learning collaborative, is a network of individuals or organizations with shared goals and attitudes who provide peer support, communicate regularly to promote collaboration, learning, and knowledge sharing, and make a long-term commitment to sustainability (Bond et al., 2016).
Setting and Subjects
Four rural-serving community cancer centers participated in this implementation study: Gibbs Cancer Center (Spartanburg, South Carolina); the University of South Alabama Mitchell Cancer Institute (Mobile, Alabama); the Birmingham Veterans Affairs Medical Center; and the University of Alabama at Birmingham (UAB) Division of Gynecologic Oncology. The institutional review board (IRB) of the UAB Coordinating Center and those of the other participating organizations approved this study.
RESULTS
We describe the relevant background and rationale, development/testing, domains/items, and scoring for the four measures developed for the implementation toolkit.
The ENABLE RE-AIM Self-Assessment Tool
Background and Rationale
The RE-AIM domains of "reach," "effectiveness," "adoption," "implementation," and "maintenance" are critically important in evaluating implementation of any new evidence-based intervention (Glasgow et al., 2001; National Cancer Institute, 2016). RE-AIM evaluates a new program's impact on population health, in contrast to effectiveness testing, which examines participant outcomes under ideal research conditions. Interventions must "reach" a target population that is able and willing to participate. It must be feasible for healthcare institutions and clinicians to initiate and "adopt" the program in practice settings with existing resources, personnel, and levels of expertise. Sites must also be able to "implement" the program's essential elements as tested in the research setting. Finally, programs must be "maintained" at the individual and institution/community levels for as long as they are relevant. These five domains interact to determine the overall impact of a population-based program (Gaglio & Glasgow, 2012); a program that performs poorly on one or two RE-AIM components may have low overall public health impact.
Development and Testing
We adapted the RE-AIM framework into open- and closed-ended items to evaluate community cancer centers' implementation of ENABLE. The tool was conceived as a way to evaluate institutions' goals, strengths, challenges, and benchmarks for success annually. It is not currently designed as a scale instrument (i.e., to tabulate numeric scores that assess the level of an underlying construct). Two coordinating center investigators (L.Z., J.N.D.O.) reviewed RE-AIM framework references and drafted items. Participating implementation site investigators reviewed tool drafts and provided written and verbal feedback to enhance face and content validity. The site principal investigators (PIs) (i.e., administrator or physician champion) received the finalized draft of the tool by email to complete and return prior to the baseline site visit (two to three months before formally rolling out ENABLE). Coordinating center staff reviewed responses with site personnel during individual and group interviews with site investigators, senior leadership (e.g., department chairs, hospital administrators), administrative support personnel, clinicians performing in-person palliative care assessments, and clinicians delivering ENABLE CYC sessions. Following the site visit, RE-AIM responses and notes were entered into a Research Electronic Data Capture (REDCap) database. Site visit interviews and focus groups were recorded, and notes were taken during all interviews. Qualitative analysis will be performed after the final site visits.
Domains/Items
The ENABLE RE-AIM Self-Assessment Tool (see Table 1) comprises 50 open- and closed-ended items that assess "reach" (21 items), "adoption" (11 items), "implementation" (14 items), and "maintenance" (4 items). "Reach" items assess the number or proportion and representativeness of program participants. Program "effectiveness" for patients and caregivers is not assessed by this tool but rather by several validated instruments utilized in prior studies to examine quality of life, symptoms, and mood (Tables 2A and 2B) (Zimet et al., 1990; Oxman & Hull, 2001; Glasgow et al., 2005; Schmittdiel et al., 2008; Steinhauser et al., 2004; Lyons et al., 2009; Bjelland et al., 2002; Montgomery et al., 1985, 2000; Bakas et al., 2006). "Adoption" items evaluate the number, proportion, and representativeness of settings and institutional staff who support and deliver the program. "Implementation" items assess the site's fidelity to the essential elements of the ENABLE intervention as it was tested in randomized controlled trials. "Maintenance" items examine the extent to which the ENABLE program has become part of the institution's routine organizational practices and policies (National Cancer Institute, 2016).
Scoring
Changes to and trajectories of quantitative item responses within each RE-AIM domain will be assessed at each site visit (baseline and years 1, 2, and 3). For example, to measure "reach," we will calculate the numerator/denominator ratio (e.g., participants enrolled over those eligible) at each timepoint to capture change over time. We will then examine qualitative comments for additional insights that can help explain the quantitative trends. This mixed-methods approach will be used for each RE-AIM self-assessment domain. At the conclusion of the study, these results will be shared with the sites; we will then conduct a final assessment of the tool's usefulness and incorporate feedback into a modified version for inclusion in the final toolkit.
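To make the "reach" computation concrete, the following is a minimal sketch in Python of how the ratio could be tracked across site visits; the field names and counts are hypothetical, not study data.

```python
# Minimal sketch of the "reach" calculation: participants enrolled
# divided by the eligible target population at each site visit.
# All names and counts here are hypothetical, not study data.

visits = {
    "baseline": {"enrolled": 14, "eligible": 102},
    "year_1":   {"enrolled": 38, "eligible": 120},
    "year_2":   {"enrolled": 61, "eligible": 118},
}

def reach(counts: dict) -> float:
    """Reach ratio = enrolled participants / eligible population."""
    return counts["enrolled"] / counts["eligible"]

for visit, counts in visits.items():
    print(f"{visit}: reach = {reach(counts):.1%}")
```

Examining these ratios across the four site visits alongside the qualitative comments is the mixed-methods pairing described above.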
The ENABLE General Organizational Index (GOI–ENABLE)
Background and Rationale
This tool assesses institutional "readiness," including infrastructure support, supervision, and adequately trained personnel. Originally developed to facilitate implementation of mental health best practices, the General Organizational Index (GOI) assesses institutional operating characteristics across 12 domains that are essential to the uptake and maintenance of any evidence-based practice (Arons & English, 2002; Dartmouth College, 2002). The rationale for this tool is that programs with strength in these areas are expected to be more successful at achieving program implementation and target outcomes. We believe that this is the first tool to formally evaluate institutional readiness for early concurrent oncology palliative care.
Development and Testing
Similar to the ENABLE RE-AIM Self-Assessment Tool, this fidelity tool provides a structured way to assess organizational capacity to implement the ENABLE program. As suggested by the original GOI developers, it is necessary to tailor the GOI for each specific evidence-based practice (Arons & English, 2002; Dartmouth College, 2002). Thus, initial tailoring of the GOI–ENABLE was performed by one coordinating center member (J.N.D.O.), and subsequent iterations were reviewed and edited by other team members (L.Z., M.A.B.) and by participating site investigators. The GOI–ENABLE was used at the initial and subsequent site visits.
Domains/Items
As shown in Table 3, the GOI–ENABLE tool includes 12 tailored domains, including program philosophy, eligibility/identification, penetration, in-person palliative care assessment, individualized initial palliative care treatment plan, individualized follow-up, training, supervision, process monitoring, outcome monitoring, quality assurance, and choice in services provided. Each domain is rated on a face-valid 5-point scale, where 1 represents no implementation and 5 represents full implementation.
Scoring
The ratings for each domain are determined by two or more external fidelity assessors, who use multiple sources of information (semistructured interviews with key informants [e.g., staff, patients, and caregivers], meeting observations, and review of program materials and other documentation) to adjudicate scores. Guidance on discriminating high and low ratings is uniquely defined for each domain and is based on concrete, observable evidence of the practice element. Fidelity assessors independently assign initial scores for each domain and then decide final scores through discussion and consensus. In addition to individual domain scores, a total mean score is calculated. Higher scores represent a higher likelihood of successfully implementing ENABLE with high fidelity and sustainability.
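As an illustration of this scoring process, here is a minimal Python sketch; the domain names are an abbreviated subset, the ratings are hypothetical, and the assessors' consensus discussion is only simulated by flagging and averaging.

```python
# Sketch of GOI-ENABLE scoring: external assessors independently rate
# each of the 12 domains from 1 (no implementation) to 5 (full
# implementation), reconcile disagreements, and compute a total mean
# score. Domains shown are a subset; all ratings are hypothetical.
from statistics import mean

DOMAINS = ["program_philosophy", "eligibility", "penetration", "training"]

assessor_a = {"program_philosophy": 4, "eligibility": 5, "penetration": 3, "training": 4}
assessor_b = {"program_philosophy": 4, "eligibility": 4, "penetration": 3, "training": 4}

def consensus(domain: str) -> int:
    a, b = assessor_a[domain], assessor_b[domain]
    if a != b:
        # In practice, assessors discuss discrepancies and agree on a
        # final score; rounding the average stands in for that step.
        print(f"  discrepancy on {domain}: {a} vs. {b} -> discuss to consensus")
    return round((a + b) / 2)

final_scores = {d: consensus(d) for d in DOMAINS}
print("domain scores:", final_scores)
print("total mean score:", mean(final_scores.values()))
```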
At the conclusion of the study, we will graph each site's progress over time in each domain and provide feedback on domains still in need of strengthening. Change scores from baseline will be computed, and the association between these change scores and the change in the total number of patients and caregivers participating in the ENABLE program each year will be estimated. From a tool development perspective, this type of analysis can help establish construct (criterion) validity. We will modify the GOI–ENABLE based on user experiences and feedback from fidelity assessors and sites to produce the final version for inclusion in the toolkit.
The ENABLE Implementation Cost Tool
Background and Rationale
This tool was developed to collect information from sites to evaluate overall implementation costs. Because no such tool existed for early concurrent oncology palliative care prior to our work, we developed an ENABLE-specific tool to capture the time site staff spent implementing the program and participating in activities led by the study coordinating center, as well as other administrative costs.
Development and Testing
The coordinating center developed the cost tool through an iterative process among the investigators, the coordinating center economist (M.P.), and site staff. Using a template from a prior institutional project (Pisu et al., 2016), we identified potential program costs and sought site feedback on the feasibility of completing the tool. Teams pilot-tested the tool for one to two months before it was finalized and launched. The tool is a Microsoft Excel® spreadsheet that each site's key contact submits monthly to the coordinating center, which maintains the cost data in REDCap. Sites either adopt the tool as developed or integrate it into their own participant-tracking systems.
Domains/Items
The ENABLE Implementation Cost Tool (Table 4) consists of three logs: the contact log, the administrative/meeting log, and the materials/costs log. Clinicians use the contact log to track time spent on patient and caregiver contacts, site meetings, and other administrative duties; a drop-down menu simplifies entry coding. The administrative/meeting log records time spent by staff other than clinicians (the site physician champion, other nursing staff, and administrators) on program-related duties such as personnel training to implement ENABLE. Finally, the materials/costs log tracks costs for any program-related teaching or marketing materials purchased. A sketch of the three logs as simple record types appears below.
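This sketch (Python; the field names are illustrative assumptions, since the deployed tool is an Excel workbook) shows one way the three logs could be represented:

```python
# Illustrative record types for the three cost-tool logs. Field names
# are assumptions for this sketch; the deployed tool is an Excel
# workbook with drop-down menus for activity coding.
from dataclasses import dataclass

@dataclass
class ContactLogEntry:            # completed by clinicians
    date: str
    activity: str                 # e.g., "patient contact", "site meeting"
    minutes: int

@dataclass
class AdminMeetingLogEntry:       # completed by non-clinician staff
    date: str
    role: str                     # e.g., "physician champion", "administrator"
    activity: str                 # e.g., "ENABLE personnel training"
    minutes: int

@dataclass
class MaterialsCostEntry:
    date: str
    item: str                     # teaching or marketing material purchased
    cost_usd: float
```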
Scoring
Salary and other cost data are added to the ENABLE Implementation Cost Tool to calculate implementation costs. We will separate time spent on research activities from actual program implementation costs. Time will be valued using the average salaries of staff according to the titles associated with specific activities; that is, clinician time will be valued using average clinician salaries. Materials and other resources will be valued using site-specific expenses. We will calculate total and per-patient-served implementation costs, by site and overall. Sensitivity analyses will be conducted to obtain a range of possible costs that vary depending on salary levels, time spent on various ENABLE activities, or different uptake of ENABLE patient activities (e.g., number of follow-up calls).
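The following sketch (Python) illustrates the per-site calculation just described; every salary, minute count, and patient count is a hypothetical assumption, not study data.

```python
# Sketch of the implementation-cost calculation: logged minutes per
# staff title are valued at assumed average hourly rates, materials
# costs are added, and the total is divided by patients served.
# Every figure below is hypothetical.

HOURLY_RATE = {"nurse_coach": 45.0, "administrator": 38.0}  # assumed averages

logged_minutes = {"nurse_coach": 2400, "administrator": 600}
materials_cost = 850.0
patients_served = 40

personnel_cost = sum(
    minutes / 60 * HOURLY_RATE[title] for title, minutes in logged_minutes.items()
)
total_cost = personnel_cost + materials_cost

print(f"total implementation cost: ${total_cost:,.2f}")
print(f"cost per patient served:   ${total_cost / patients_served:,.2f}")
```

A sensitivity analysis of the kind described above would rerun this calculation while varying the assumed rates, logged time, or activity uptake.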
Oncology Clinicians' Perceptions of Early Concurrent Palliative Oncology Care
Background and Rationale
Despite professional guidelines, not all oncology staff are familiar with or supportive of early palliative care, and their perceptions may affect program integration. Because we were unable to locate tools that specifically assessed these perspectives, we created this survey.
Development and Testing
Tool development began with a literature review of studies and tools related to perceptions of, and referral barriers to, oncology palliative care (Bradley et al., 2002; Cherny & Catane, 2003; Cherny & Palliative Care Working Group of the European Society for Medical Oncology, 2011; Fox et al., 2007; Johnson et al., 2008; Metzger et al., 2013; Ogle et al., 2002; Sheetz & Bowman, 2008; Ward et al., 2009; Wotton et al., 2005). Based on this review, a palliative care physician team member (D.B.) extracted relevant items, edited them, and developed additional items. The initial draft survey was reviewed for clarity and face validity by three team members (L.Z., J.N.D.O., M.A.B.), and a subsequent draft was reviewed by eight oncology clinicians using a standardized scoring rubric with a 4-point Likert-type scale ranging from not relevant (1) to very relevant and succinct (4) (Lynn, 1986). Items were revised based on reviewer feedback, with priority given to items that received an average content validity score of 3 or greater. Once the final survey was complete, each site lead sent a web-based (Qualtrics®) link to their institution's oncology clinicians to complete online. Across all sites, 62 clinicians were invited to participate, 46 consented and responded, and 42 provided complete data (response rate = 68%).
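For illustration, here is a minimal Python sketch of the content-validity screen just described; the items and ratings shown are hypothetical.

```python
# Sketch of the content-validity screen: eight clinician reviewers
# rate each draft item from 1 (not relevant) to 4 (very relevant and
# succinct); items averaging 3 or greater are prioritized for
# retention. All ratings below are hypothetical.
from statistics import mean

ratings = {
    "item_01": [4, 3, 4, 3, 4, 4, 3, 4],
    "item_02": [2, 3, 2, 2, 3, 2, 2, 3],
}

for item, scores in ratings.items():
    avg = mean(scores)
    verdict = "retain/revise" if avg >= 3 else "rework or drop"
    print(f"{item}: mean relevance = {avg:.2f} -> {verdict}")
```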
Domains/Items
Table 5 presents the survey's 39 items and response options. Although responses to some items may covary, the tool is not currently designed as a scale instrument.
Scoring
The survey item scores will be summed into a total score and used to characterize how accepting of and positive toward early palliative care in general respondents are. The survey will be readministered at all four sites upon study completion. This tool may offer insights into changes in perceptions among oncologists at sites implementing the ENABLE program.
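A minimal sketch of this scoring approach follows (Python); the item values are hypothetical, as is the assumption that higher responses are coded as more positive.

```python
# Sketch of the survey's total-score computation: responses to the
# 39 items are summed into a total, assuming (hypothetically) that
# higher item values are coded as more positive perceptions.

responses = [4, 5, 3, 4, 4]  # one value per item; 39 values in the full survey

total = sum(responses)
print(f"total perception score: {total}")

# Readministering the survey at study completion allows a simple
# pre/post comparison per site (hypothetical site totals):
baseline_total, followup_total = 132, 148
print(f"change from baseline: {followup_total - baseline_total:+d}")
```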
DISCUSSION
To our knowledge, this is the first study to develop and pilot test a toolkit to facilitate and measure implementation of early concurrent oncology palliative care. The four instruments cover the essential aspects of program implementation according to the RE-AIM framework. The overall study goal was to develop implementation processes and tools to assist community cancer centers in implementing national guidelines that recommend integration of early concurrent palliative care for individuals newly diagnosed with advanced cancer. As in much of health services research, valid measures are critical to determining implementation success, and the biggest challenge we faced was the lack of appropriate measures for assessing implementation of a palliative care program. We have made substantial progress in assessing the implementation of the ENABLE program, but much work remains to be done.
As we embarked on creating the tools and implementation toolkit, we encountered a number of challenges and lessons learned. First, we found variations in initial site readiness. One site had limited personnel resources, including a lack of dedicated program staff and palliative care specialists, reflecting the shortage of palliative care specialists in the United States (Bui, 2012; Kamal et al., 2015; Lupu & American Academy of Hospice and Palliative Medicine Workforce Task Force, 2010). Second, clinicians who had planned to devote time to implementing ENABLE often faced increasing clinical responsibilities. The lack of program staff and dedicated effort delayed program launch at some sites by several months compared to sites that had already identified staff champions to implement the ENABLE program. Third, our research grant provided sites with only meager start-up funds and did not cover salary support, which made it difficult to secure adequate space, staff, and time to launch the program.
The UAB Coordinating Center was also greatly hampered by administrative barriers. First, IRB interpretations of implementation research in medical practice were inconsistent. The lack of a standardized IRB approach across sites (Patel et al., 2013) led to multiple, prolonged IRB application revisions. Given this struggle with gaining IRB approval, future work might be enhanced by standardized IRB procedures, additional institutional support, and improved clinician knowledge prior to the start of the program. Relatedly, we encountered difficulties with centralized data collection, as several sites had data-sharing restrictions; data collection had to be adapted so that only deidentified data were shared with the coordinating center. Creating a firewall-protected REDCap database (Harris et al., 2009) for all data collection helped mitigate this issue.
Although we attempted to capture a diverse range of institutions, only four sites participated in this effort, which may limit the generalizability of the knowledge gained. Some of the instruments may be difficult to implement in other settings and may require modification. Future use of these tools will include more geographically diverse populations and settings. We also recognize that using our instruments is labor-intensive, both for the research team and for the sites completing them. An automated electronic data collection tool would significantly reduce this burden.
Furthermore, the instruments rely on self-reported data, which may or may not accurately depict the program. This manuscript presents the development and initial use of the instruments; additional validation will be conducted upon conclusion of the study, including evaluation of the feasibility of their use in clinical practice.
CONCLUSIONS
Measuring whether program implementation and dissemination have occurred is a critical and developing area for early concurrent oncology palliative care, and it requires systematic collaboration among key stakeholders: patients and families, clinicians, program administrators, and implementation scientists. Our study outputs advance the field by offering methods and measures for implementing early concurrent oncology palliative care. Despite the challenges described above, we believe that the work reported here will help community cancer centers overcome barriers to implementing early concurrent palliative care. Going forward, the remaining goals are to finalize the ENABLE Early Palliative Care Implementation Toolkit, to establish program sustainability at the four community cancer centers, and to develop a larger implementation study that uses the toolkit and compares different implementation methods.
ACKNOWLEDGMENTS
We gratefully acknowledge the efforts of our partnering community cancer centers and their staff, including: Brian Bell, M.D., Steve Corso, M.D., Melissa Poulnott, L.P.N., Sara Ferguson, R.N., B.S.N., Andrew M. Fischer, M.Div., Nancy Anderson, M.Div., Noel Kinard, M.S.W., Chad Dingman, M.S.W., and Melody Carnes, Li.S.W.–C.P. at Spartanburg Regional Medical Center; Jennifer Scalici, M.D., and Leigh Minchew, D.N.P., at Mitchell Cancer Institute; Kerri Bevis, M.D., Elizabeth Kvale, M.D., Gabrielle Rocque, M.D., and Amanda Erba, B.S.N., R.N., from the Division of Gynecologic Oncology at the University of Alabama at Birmingham; and Amos Bailey, M.D. (recently relocated to the University of Colorado in Denver), Neal Steil, M.D., Alfreida Hogan, D.N.P., and Will Callans, M.P.H., at the Birmingham Veterans Affairs Medical Center.
AUTHOR DISCLOSURES
The authors hereby declare that they have no competing financial interests to disclose.
SOURCES OF SUPPORT
This study was funded by an American Cancer Society Research Scholar Grant (no. RSG PCSM-124668; PI: Bakitas). J. Nicholas Dionne-Odom, Ph.D., R.N., has been supported by NIH/NINR (no. 1K99NR015903), the National Cancer Institute (no. 2R25CA047888-24), and the National Palliative Care Research Center. Imatullah Akyar, Ph.D., R.N., was supported by The Scientific and Technological Research Council of Turkey (TÜBİTAK).