Introduction
The cost of care for people with Parkinson’s disease (PwPD) is growing with the aging population. A recent study estimated the annual cost of care for PwPD in the US at $51.9 billion, with $25.4 billion attributable to direct medical costs [Reference Yang, Hamilton and Kopil1]. Additionally, the ongoing COVID pandemic has highlighted the deficiencies of a clinical model that requires clinical visits to be performed in person [Reference Dorsey, Bloem and Okun2]. While care from a neurologist improves outcomes in PwPD [Reference Willis, Schootman, Evanoff, Perlmutter and Racette3], access to specialty care is a significant issue. In Arkansas, a primarily rural state, 3 of the 4 movement disorders-trained neurologists practice at a single institution, and approximately 75% of our clinic’s population reside in designated medically underserved areas (MUAs). The driving limitations imposed by motor and cognitive impairment in PwPD [Reference Ranchet, Devos and Uc4] limit patients’ independence and make them reliant on spouses, younger family members, or community contacts to drive them to clinic visits. The long distances traveled and the time involved to obtain medical care place additional social and psychological burden on patients. Travel distance is also often reported to us as a barrier to enrollment in research protocols. Objective, secure, and reliable methods of tracking disease progression in the home setting could improve access to care and mitigate some of the costs of care [Reference Dorsey, Bloem and Okun2,Reference Bloem, Henderson and Dorsey5], even though they may not completely replace in-person care [Reference Mulroy, Menozzi, Lees, Lynch, Lang and Bhatia6].
Although the COVID pandemic overall led to widely increased utilization of telemedicine for clinical evaluations in movement disorders [Reference Larson, Schneider and Simuni7], several concerns have been raised regarding the adoption of more widespread digital technology in general for clinical care [Reference Mulroy, Menozzi, Lees, Lynch, Lang and Bhatia6,Reference Hassan, Mari and Gatto8]. People residing in areas with limited cellular or high-speed Internet connectivity or those with low socioeconomic status may have limited access to care [Reference Eberly, Kallan and Julien9,Reference Reed, Huang and Graetz10], thereby widening the so-called digital divide. This is an important issue to consider in Arkansas, where a significant percentage of the state does not have broadband internet access [11]. Additionally, in older adults such as PwPD, the need to use unfamiliar devices could produce frustration rather than improved patient outcomes.
Studies have explored the use of telemedicine for clinical health and research visits in PwPD [Reference Beck, Beran and Biglan12–Reference Cubo, Gabriel-Galan and Martinez16], but the cohorts have primarily been selected from highly motivated individuals who recently participated in clinical trials [Reference Tarolli, Andrzejewski, Zimmerman and etal15] or had indicated interest in research participation by signing up in a registry [Reference Beck, Beran and Biglan12–Reference Dorsey, Wagner and Bull14]. Additionally, only a limited set of assessments was performed in these studies. Our goal in this study was therefore to determine the feasibility of performing a comprehensive set of objective assessments used in routine clinical care and research studies in a population of PwPD residing in a rural state. We also wanted to determine whether the ability to perform assessments differed between participants residing in designated MUAs and those who did not. Lastly, we wanted to determine whether the results of assessments performed remotely were comparable to in-person assessments, and we used two different methods of comparison. Some PwPD develop freezing of gait over the course of their disease progression [Reference Fahn17]. Along with other groups, we have shown that PwPD with freezing of gait have differences in disease and gait features outside of the actual episodes of gait freezing [Reference Amboni, Cozzolino, Longo, Picillo and Barone18–Reference Virmani, Pillai and Glover31]. We therefore performed a subgroup analysis using the presence or absence of freezing to split groups and compared the results with differences previously reported in these sub-phenotypes of PwPD. We also compared the results of in-home and in-person assessments in those participants who had previously participated in in-person research studies in our lab.
Materials and Methods
Standard Protocol Approvals, Registrations, and Patient Consents
Participants were recruited from the Movement Disorders Clinic (MDC) at the University of Arkansas for Medical Sciences (UAMS) after the approval of the UAMS institutional review board (IRB#261021). All subjects met UK brain bank criteria for the diagnosis of Parkinson’s disease based on evaluation by a Movement Disorders trained neurologist (TV or ML).
While this project was partly conceived prior to and during the early stages of the COVID pandemic, all participants were enrolled and performed study assessments during the COVID pandemic, between November 2020 and July 2021. Participants were approached during their regularly scheduled clinic visits, and the study was explained in more detail to those who indicated interest. Consent forms were provided to participants for review, after which they were contacted to determine if they would participate. Those agreeing to participate were mailed a research packet approximately one week prior to their scheduled research visit with information regarding setup for the visit, administrative forms (including written informed consent forms), and forms requiring written responses. A pre-paid envelope was provided for returning the completed forms. Informed consent was obtained over video from all participants before study assessments were performed, and written signatures were obtained via screen capture and subsequent return of signed forms.
All participants were evaluated at home via the web-based televideo service Doxy.me®, with Doximity® as a backup. The Doxy.me platform allowed multiple users to connect to a single session (i.e., patient, clinician, and research assistant), allowed screen sharing, and allowed patient invitation to the visit via an emailed link (allowing computer use) or a cellular text message link. At the time, Doximity did not have multi-user capability and only allowed participant invitation via cellular text message; it was therefore used as a backup when the Doxy.me platform did not connect for a particular user. Only one participant required the Doximity platform for their visit.
Instruments were created in the Research Electronic Data Capture database (REDCap). To minimize enrollment bias, the designation of medically underserved area (MUA) status of participants, using their home addresses, was performed after study visit completion. Distance to travel was calculated using the participant’s home address and the address for the UAMS MDC.
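The manuscript does not specify how distance to travel was computed from the two addresses. After geocoding, one common approach is the great-circle (haversine) distance between coordinates; the sketch below illustrates that approach under that assumption, in pure Python, with illustrative central-Arkansas coordinates rather than actual participant data:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (latitude, longitude) points."""
    r = 3958.8  # mean Earth radius, miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Illustrative coordinates only (central Arkansas area), not actual addresses.
distance = haversine_miles(34.75, -92.32, 35.22, -91.73)
```

Road distance from a mapping service would typically exceed this straight-line figure, so the haversine value is a lower bound on actual travel.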
Study Assessments
The assessments performed in the study, including mode and reasons for collection, are outlined in Table 1 and discussed in more detail below.
Abbreviations: EHR: electronic health record; ESS: Epworth Sleepiness scale; MD: movement disorders neurologist; MoCA: Montreal Cognitive Assessment; N-FOG-Q: New Freezing of gait questionnaire; P: participant; PDQ-39: Parkinson’s disease Quality of Life questionnaire-39; RA: research assistant; RBD-Q: REM sleep disorder questionnaire; TUG: Timed-up-and-go; UPDRS: Unified Parkinson’s Disease Rating Scale.
1 Routine clinical care assessment.
2 Research assessment.
Standard of care assessments
A previously validated modified version of the Unified Parkinson’s disease Rating Scale (UPDRS) [Reference Abdolahi, Scoglio, Killoran, Dorsey and Biglan32], which excludes the motor assessments of tone (UPDRS item 22) and balance (UPDRS item 30), was utilized. Data were entered directly into REDCap during the video examination (by TV and ML). Medications and allergies were recorded during participant interview or from the UAMS electronic medical record. Orthostatic vitals were obtained by participants who had blood pressure cuffs at home, with explicit written and verbal instructions on correct performance.
Freezing of gait determination and quantification
The video for the new freezing of gait questionnaire (N-FOGQ) [Reference Nieuwboer, Rochester and Herman33] was shown to all participants via screenshare, and the questionnaire was completed by the neurologists. Participants with a score of 0 on item 1 of the N-FOGQ were assigned to the non-freezers group and those with a score of 1 to the freezers group.
Cognitive function
Cognitive function was assessed using the Montreal Cognitive Assessment (MoCA) [Reference Nasreddine, Phillips and Bedirian34], which has previously been remotely administered [Reference Abdolahi, Bull and Darwin35–Reference Lindauer, Seelye and Lyons37], including in PwPD [Reference Stillerova, Liddle, Gustafsson, Lamont and Silburn38]. For our study, the visuospatial sections of the MoCA were performed as follows. Participants were shown an image of the “trails” task on their screens and asked to verbalize the sequence rather than draw lines on paper. Participants still needed to visualize the sequence and report it correctly. Participants were shown the “cube” image by screenshare and asked to draw the figure on a blank sheet of paper. Instructions were provided for clock drawing as they would be in person, and participants drew the clock on the provided blank sheet of paper. Immediately after cube and clock drawing, participants held up their drawings to their camera for screen capture and real-time scoring. The written hard copies were also mailed back to us. The animal pictures were screenshared, and participants were asked to name them. Because visuospatial and executive function impairment is prominent in PwPD, the blind MoCA, which excludes these important distinguishing domains, was not used.
Handwriting analysis
Handwriting samples were obtained with a Pilot G2 ballpoint pen mailed to participants, on a pre-printed sheet that included instructions and spaces to write the sentence “There are earthquakes in California” three times and to draw Archimedes spirals with the right and left hand. Participants were asked to angle their camera to allow the examiner to visualize the writing and ensure spirals were drawn without resting the arms on the table. A screen capture of the writing samples was obtained immediately to verify correct performance of the tasks. The hard-copy writing sheets were mailed to us and were subsequently scanned, digitized, and used for spiral analysis. Image processing code was written in the Python programming language. The OpenCV (Open Source Computer Vision) and NumPy libraries were used to obtain the spiral length, width, and the total distance traveled by the pen tip during spiral drawing. The spiral area was approximated as an ellipse using the formula A = π(L/2)(W/2), where L and W are the measured spiral length and width.
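The study’s OpenCV/NumPy pipeline is not reproduced here, but the geometric step is straightforward once the pen path has been extracted from the scan as ordered (x, y) points. A minimal pure-Python sketch under that assumption (the function name and inputs are illustrative, not the study’s actual code):

```python
import math

def spiral_measures(points):
    """Bounding-box width/height, pen-tip path length, and ellipse-approximated
    area for an ordered list of (x, y) pen-path points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    # Total distance traveled by the pen tip: sum of consecutive segment lengths.
    path_len = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    # Area approximated as an ellipse whose axes span the bounding box.
    area = math.pi * (width / 2) * (height / 2)
    return width, height, path_len, area
```

In practice the point list would come from contour extraction on the binarized scan (e.g., OpenCV’s contour functions), with pixel units converted to millimeters using the scan resolution.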
Gait measurement
Gait was assessed using the Timed-up-and-go test (TUG). Participants had pre-visit instructions to measure a 10-ft distance, place a chair at one end, and a tape marker at the other end. During the visit, participants were asked to either prop their device or have family members or caregivers hold their device, so that the camera showed the walking area. Participants were instructed when to start and were timed from start until they sat back in the chair. The average of three trials was used for analysis. Participants were allowed to use their assistive devices such as a cane or walker if they routinely utilized these at home.
Speech/voice analysis
Voice samples were collected using a secure voicemail that automatically digitized the sample into a *.wav file. Participants were asked to say the sound Ahh for approximately 3 seconds and then read the Rainbow passage [Reference Fairbanks and Pronovost39] aloud. The *.wav files were imported into Audacity®, a free open-source package, to remove background noise (Effect → Noise Reduction) and to split each *.wav file into two separate files for the Ahh sound and the Rainbow passage. Parselmouth [Reference Jadoul, Thompson and DeBoer40] (a Python library for acoustical analysis) was used to perform preliminary analysis of basic voice features. The Ahh sound has previously been used in voice analysis in PwPD [Reference Little, McSharry, Hunter, Spielman and Ramig41]. The Rainbow passage was additionally used as it has been suggested to be phonetically balanced and allows evaluation of more complex speech patterns. Pitch extraction used floor and ceiling values of 75 Hz and 300 Hz, respectively, for males, and 100 Hz and 600 Hz for females. Several features that have previously been shown to be affected in PwPD were extracted from the waveforms and analyzed [Reference Mekyska, Janousova and Gomez-Vilda42]. The mean fundamental frequency (f0) describes the pitch of sound, while the standard deviation in f0 describes the variability in pitch. Local “jitter” is defined as the cycle-to-cycle variation in f0 and provides a measure of the extent of variation in voice range and vocal tremor [Reference Mekyska, Janousova and Gomez-Vilda42]. Shimmer describes the amplitude variation of the sound wave in each vocal cycle, thereby providing insight into the hypophonia and variations in voice amplitude that can occur in PwPD [Reference Mekyska, Janousova and Gomez-Vilda42].
The mean harmonics-to-noise ratio (HNR) quantifies the ratio of tonal (harmonic) energy to noise energy in the voice; reduced HNR reflects increased noise, commonly due to incomplete vocal cord closure [Reference Mekyska, Janousova and Gomez-Vilda42].
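The cycle-to-cycle definitions above can be made concrete. A minimal sketch of local jitter and shimmer computed from per-cycle periods and peak amplitudes (illustrative values; Praat/Parselmouth apply additional voicing constraints not shown here):

```python
def local_jitter(periods):
    """Mean absolute difference between consecutive glottal periods,
    divided by the mean period (the 'local jitter' definition)."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """The same cycle-to-cycle measure applied to per-cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Illustrative cycles: periods in ms, amplitudes in arbitrary units.
j = local_jitter([10.0, 11.0, 10.0])   # alternating periods -> nonzero jitter
s = local_shimmer([1.0, 1.0, 1.0])     # identical amplitudes -> 0.0
```

In the study these quantities came from Parselmouth’s Praat bindings operating on the extracted pitch cycles; the sketch only shows the underlying arithmetic.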
Other scales
REDCap survey instruments were developed for participants to complete the self-filled Parkinson’s disease quality of life scale-39 (PDQ-39) [Reference Peto, Jenkinson, Fitzpatrick and Greenhall43] and two sleep scales, the Epworth sleepiness scale (ESS) [Reference Johns44] and the REM sleep behavior disorder questionnaire (RBD-Q) [Reference Stiasny-Kolster, Mayer, Schafer, Moller, Heinzel-Gutenbrunner and Oertel45].
Telemedicine surveys
At the completion of their visit, participants were asked to complete a survey to gauge their satisfaction with the visit and their perception of audio-video quality. Optionally, they were asked to provide their annual income and estimated costs to attend in-person visits. The research team also completed a survey to document audio-video quality, perceived issues, and time taken to perform assessments over telemedicine compared to in-person research assessments. The total time for each visit was also recorded.
In-person Comparison Group
A group of 26 participants enrolled in this telemedicine study had previously undergone in-person research assessments, pre-COVID pandemic, as part of an IRB-approved protocol (UAMS IRB#203231). This protocol allowed use of their data for analysis in future studies. The results of the previously administered in-person UPDRS, MoCA, RBD-Q, PDQ-39 scores, and TUG times in these individuals were compared to the equivalent remotely administered assessments. For the in-person UPDRS, only the components that could be administered remotely were used.
Statistical Analysis
Statistical analysis was performed using SPSS version 25 (IBM). Normality was tested using the Shapiro-Wilk test for each assessment. Normally distributed variables included age at enrollment, N-FOG-Q, spiral width and height, spiral pen distance traveled, Ahh sound mean f0 and HNR, and Rainbow passage (RP) local shimmer and HNR (Table 2). To assess group differences between MUA and non-MUA groups and freezers and non-freezers, one-way analysis of variance (ANOVA) was used for normally distributed variables, and the non-parametric Mann-Whitney U-test was used for non-normally distributed variables. A Benjamini-Hochberg correction was applied to the groups of voice and handwriting analysis features independently. Kendall’s tau-b correlation coefficients, with a Benjamini-Hochberg correction, were used to determine the associations between disease measures (age, disease duration, motor and total UPDRS scores, H&Y scores, MoCA, average TUG time, and PDQ-39 scores) and voice analysis features. Intraclass correlation coefficients (ICC) were calculated, and Bland-Altman plots were used to determine the association of the in-home assessments with in-person assessments in the subset of participants who had participated in prior IRB-approved research in our lab (n = 26). The ICC allowed comparison of our results to the historical literature that validated these assessments, while the Bland-Altman plots allowed easier determination of whether repeated results significantly differed from one another, along with a clearer picture of the variability of the data set. For survey results, Pearson’s chi-square was used to determine statistical difference between the MUA and non-MUA groups for questions with responses classified as nominal variables (Table 3 Items 3, 4, 6, 7 & Table 4 Items 5, 6, 10, 11), while the Mann-Whitney U-test was used for responses classified as ordinal variables (remaining Table 3 and 4 Items).
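The Benjamini-Hochberg step-up procedure applied to the voice and handwriting feature families can be sketched in a few lines of pure Python (q = 0.05 is shown as an illustrative false discovery rate, not necessarily the study’s setting):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return a list of booleans: True where the null is rejected at FDR q."""
    m = len(pvals)
    # Rank p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # ... and reject the hypotheses at ranks 1..k.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject
```

For example, `benjamini_hochberg([0.01, 0.5, 0.02, 0.03])` rejects the first, third, and fourth hypotheses but not the second, because 0.5 exceeds its step-up threshold of 0.05.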
A generalized linear repeated measures analysis was used, with the interaction between location (in-person vs in-home) and group (MUA vs non-MUA) as the variable of interest, to determine whether participants’ self-reliance differed.
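The Bland-Altman comparison reduces to a few quantities: the bias (mean paired difference), the 95% limits of agreement (bias ± 1.96 SD), and the 95% confidence interval of the bias, with agreement supported when zero falls inside that interval. A minimal pure-Python sketch with illustrative paired scores, not study data (the 1.96 multiplier is a normal approximation to the exact t-interval):

```python
import math

def bland_altman(a, b):
    """Bias, 95% limits of agreement, and 95% CI of the bias for paired scores."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))  # sample SD
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    se = sd / math.sqrt(n)
    ci = (bias - 1.96 * se, bias + 1.96 * se)
    return bias, loa, ci

# Illustrative in-person vs in-home score pairs.
bias, loa, ci = bland_altman([20, 25, 30, 22], [21, 24, 32, 22])
agree = ci[0] <= 0 <= ci[1]  # zero inside the bias CI -> no systematic difference
```

This is the criterion used for the PDQ-39 result reported later: when zero lies outside the bias confidence interval, the two modes of assessment differ systematically.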
Abbreviations: BP: blood pressure; HNR: harmonics to noise ratio; HR: heart rate; MA: more affected; MoCA: Montreal Cognitive Assessment; MAO-I: monoamine oxidase inhibitors; MUA: medically underserved area; N-FOG-Q: New Freezing of gait questionnaire; PDQ-39: Parkinson’s disease Quality of Life questionnaire-39; PwPD: People with Parkinson’s disease; RBD-Q: REM sleep disorder questionnaire; TUG: Timed-up-and-go; UAMS: University of Arkansas for Medical Sciences; UPDRS: Unified Parkinson’s Disease Rating Scale.
* p < 0.05 between MUA and non-MUA by ANOVA.
# p < 0.05 between MUA and non-MUA by Mann-Whitney test.
Abbreviations: MUA: medically underserved area; PwPD: People with Parkinson’s disease.
^ p < 0.05 repeated measures analysis for self-reliance between in-person and telemedicine visit.
# p < 0.05 Mann-Whitney U-test.
Abbreviations: ESS: Epworth sleepiness scale; MoCA: Montreal Cognitive Assessment; MUA: medically underserved area; N-FOG-Q: New Freezing of gait questionnaire; PDQ-39: Parkinson’s disease Quality of Life questionnaire-39; PwPD: People with Parkinson’s disease; RBD: REM sleep behavior disorder; TUG: Timed-up-and-go; UPDRS: Unified Parkinson’s Disease Rating Scale.
Data Sharing
The data from this manuscript will be available in a public repository on acceptance of the manuscript. To support optimal data integration for analysis, we combined all study data into a single collection using the Arkansas Research Image Enterprise System (ARIES) [Reference Sharma, Tarbox and Kurc46,Reference Bona, Kemp and Cox47]. ARIES supports integration of multimedia data, including sound files, and extracts from both the REDCap database and the UAMS Arkansas Research Clinical Data Repository (AR-CDR) [Reference Baghal, Zozus, Baghal, Al-Shukri and Prior48,Reference Syed, Syed and Syeda49]. All ARIES data are de-identified using an integrated utility [Reference Syed, Syed and Syeda49] to facilitate use by the study team and potential reuse by other researchers. Data for a given patient can be added using a secure record linkage mechanism connecting ARIES data instances with data pulled from the AR-CDR.
Results
The demographics and assessment results of all participants are shown in Table 2. Based on their home address, 20 participants resided in MUAs (Fig. 1). Participants in the MUA and non-MUA groups had similar disease duration, modified UPDRS motor and total scores, N-FOG-Q scores, and gender distribution. The MUA participants were younger at enrollment. On the TUG, walking speed was similar in both groups, based on similar times to walk a 10-ft distance. Cognition, assessed using the MoCA, was similar between groups. Sleep quality was variable, with similar rates of REM sleep behavior disorder based on the RBD-Q scores but greater daytime sleepiness in the MUA group based on the ESS scores. This could partially be due to higher dopamine agonist usage in the MUA group, as those on agonists had higher RBD-Q scores (4.3 ± 2.8 not on agonist, 6.7 ± 2.8 on agonist, p = 0.015 ANOVA) and trended toward higher ESS scores (7.0 ± 4.3 not on agonist, 9.7 ± 6.1 on agonist, p = 0.080 ANOVA). Quality of life scores (PDQ-39) were worse in those residing in MUAs, but the difference was not statistically significant. MUA participants resided significantly further away from UAMS than non-MUA participants (Table 2).
The hard-copy Archimedes spiral drawings mailed back by participants were digitized and analyzed using in-house Python scripts. Consistent with the similar motor UPDRS scores and Timed-Up-and-Go (TUG) times, there were no differences in spiral measures between the MUA and non-MUA groups (Table 2).
Voice samples were successfully collected using digital voicemail, with no notable differences in the quality of samples obtained from those residing in MUAs. Preliminary voice acoustical analysis using Parselmouth [Reference Jadoul, Thompson and DeBoer40] showed no differences in standard voice features between MUA and non-MUA participants. After application of a Benjamini-Hochberg correction for multiple comparisons, RP duration was inversely correlated with MoCA scores (−0.402, p = 0.0001).
Comparison to Prior In-person Research Results
Twenty-six participants had previously performed in-person research assessments, approximately 1.4 ± 0.4 years before their in-home visits and prior to the COVID pandemic. To determine the validity of our in-home measurements, we compared results between the two modes of participation in these individuals. Bland-Altman plots (Fig. 2) showed agreement between these measures, except for the PDQ-39, where the 0 line lies outside the 95% confidence intervals. There was significant correlation in all measures (Supplementary Table 1), with ICCs ranging from 0.757 (motor UPDRS) to 0.870 (RBD-Q).
Subgroup Analysis of Freezers and Non-freezers
As an additional test of validity of our in-home assessments, we split our study population into freezers (n = 17) and non-freezers (n = 33) using item 1 of the N-FOG-Q, and results are provided in Table 5. Consistent with previous findings [Reference Shah, Pillai and Williams28], freezers had longer disease duration (non-freezers 7.3 ± 4.1 vs freezers 12.9 ± 6.7 years), higher total UPDRS scores (21.3 ± 9.1 vs 29.1 ± 11.9), and worse quality of life scores on the PDQ-39 (19.7 ± 12.2 vs 42.7 ± 27.0).
Abbreviations: BP: blood pressure; HNR: harmonics to noise ratio; HR: heart rate; MAO-I: monoamine oxidase inhibitors; MoCA: Montreal Cognitive Assessment; N-FOG-Q: New Freezing of gait questionnaire; PDQ-39: Parkinson’s disease Quality of Life questionnaire-39; RBD-Q: REM sleep disorder questionnaire; TUG: Timed-up-and-go; UPDRS: Unified Parkinson’s Disease Rating Scale.
# p < 0.05 between freezers and non-freezers by Mann-Whitney U-test.
^ Not significant after Benjamini-Hochberg correction.
Participant Satisfaction Survey
In a post-study survey (Table 3), 82% of participants reported the ability to participate in research as a positive feature of their in-home visit, although when asked what they disliked, 28% responded that they preferred in-person visits. Only 6% disagreed when asked whether they would be more likely to participate in telemedicine research in the future. A greater percentage of participants reported relying on themselves for in-home vs in-person visits (80% vs 64%, respectively), and this difference was greater in the MUA group (85% vs 50%, respectively). While household income was lower in MUA participants compared to non-MUA participants (32% vs 13%, respectively, self-reporting <$50,000/year), costs to attend traditional in-person visits were higher for MUA participants (47% vs 8%, respectively, reporting >$75 to attend). Level of education was lower in the MUA group (Table 2).
Visit Quality
To document technical difficulties from the research administration standpoint, the research team completed post-visit surveys (Table 4). The components of the visit considered to be routine clinical care or research assessments (see Table 1, superscript 1 – routine clinical care, superscript 2 – research assessments) were graded separately. Only 14% of participants required >5 minutes of setup for administration of research assessments. While 42% required >5 minutes of additional time to help understand, locate, or complete the research assessments compared with our prior experience administering these assessments in person (Table 4), this was not significantly different between the MUA and non-MUA participants. The total visit time was also similar in both groups. Audio-video quality was rated great for 60% of visits, and there was no significant difference in audio-video quality between the MUA and non-MUA groups (Table 4).
Discussion
Due to the COVID-19 pandemic, clinical care was forced to adapt rapidly to restrictions on travel and in-person interaction intended to decrease the spread of COVID-19. This led to increased interest in remote assessments of neurologic disorders [Reference Mirelman, Dorsey, Brundin and Bloem50]. In this pilot study, we assessed the feasibility of enrolling PwPD residing in a predominantly rural state in the United States in a home-based telemedicine study. We developed digital data collection instruments for validated tools assessing motor and non-motor features of PD and deployed them in this study. People were willing and able to participate in telemedicine-based research, with 40% of our participants residing in MUAs that historically have limited participation in research and telemedicine [Reference Patel, Rose, Barnett, Huskamp, Uscher-Pines and Mehrotra51]. Participants enrolled from MUAs were matched for age, disease duration, and disease severity. The quality of audio-video connectivity, even in rural areas, was adequate to implement the routine clinical and research assessments performed on PwPD. Using both a subgroup analysis of freezers and non-freezers and historical in-person research evaluations previously performed on a subgroup of participants, we showed that remote assessment results were valid and reproduced previous findings. We were also able to add new assessments that we had not previously performed, such as voice analysis, and to reproduce the results of other groups.
A few studies have used remote visits to determine the feasibility of research participation. Dorsey et al. [Reference Dorsey, Wagner and Bull14] enrolled participants from the Fox Trial Finder registry and performed MoCA and modified MDS-UPDRS measures. As in our study, overall patient and neurologist satisfaction was high. While the number of participants was three times that in our study, those participants were highly motivated for research participation, having enrolled in a registry indicating interest, and therefore represented a different population from our rural cohort, in which 40% resided in MUAs and 50% had not previously participated in research. In a more recent study, Tarolli et al. [Reference Tarolli, Andrzejewski, Zimmerman and etal15] showed that remote evaluations of PwPD after participation in a clinical trial were also feasible, with results showing good correlation with in-person visits. The UPDRS has also been successfully implemented previously in small clinical studies: via telemedicine alone [Reference Beck, Beran and Biglan12], in comparison with in-person assessments [Reference Cubo, Gabriel-Galan and Martinez16], and for repeated monitoring in the setting of a continuous care facility [Reference Barbour, Arroyo, High, Fichera, Staska-Pier and McMahon52].
A subset of participants in our study had previously completed IRB-approved in-person research studies, on average 1.5 years prior to the telemedicine visit. The ICCs between in-home and prior in-person research visits (0.757 UPDRS III and 0.825 MoCA) were better than those previously reported by Tarolli et al. [Reference Tarolli, Andrzejewski, Zimmerman and etal15] (0.51 UPDRS III and 0.62 MoCA) and Cubo et al. [Reference Cubo, Gabriel-Galan and Martinez16] (0.63 UPDRS III), whose paired assessments were performed closer together in time. In the initial validation studies for the assessments, the correlation coefficients ranged from 0.68 for the social subscale of the PDQ-39 [Reference Peto, Jenkinson, Fitzpatrick and Greenhall43] to 0.92 for the UPDRS [Reference Siderowf, McDermott and Kieburtz53] and MoCA [Reference Nasreddine, Phillips and Bedirian34]. It is likely that disease progression and superimposed functional limitations related to COVID isolation account for some of the variability between our two measurements, especially in quality of life. The 1.5-year interval between assessments in our cohort would be expected to decrease correlation (ICC) and widen mean difference measures (Bland-Altman plots) because participants progress at different rates: faster-progressing participants would show greater differences between the two assessments and slower-progressing individuals lesser differences, increasing scatter and lowering correlation coefficients. In our studies, both motor and cognitive assessments were administered while participants were in the levodopa medicated state, which could mask significant disease progression.
In support of this, in a prior study of disease progression in PwPD with and without freezing, the group without freezing, similar to the majority of participants in the current study, had an average decline of 1 point/year on the motor UPDRS and 0.4 points/year on the MoCA when longitudinally assessed in the levodopa medicated state [Reference Glover, Pillai, Doerhoff and Virmani54]. We cannot exclude the possibility that participants with slower disease progression enrolled in this study, although this should not affect the results, as the MUA and non-MUA groups were well matched for disease duration, Hoehn and Yahr scores, MoCA scores, and motor UPDRS scores, with an average disease duration of 9 years and Hoehn and Yahr scores of 2. Repeating this study, once pandemic restrictions allow short-interval serial in-person and home-based assessments, would likely provide stronger correlations.
Studies of clinical virtual visits have also been performed on a highly motivated population of PwPD who visited a website and submitted interest in participating [Reference Beck, Beran and Biglan12,Reference Korn, Wagle Shukla and Katz13]. While there was no improvement in quality of life with in-home over in-person visits [Reference Beck, Beran and Biglan12,Reference Korn, Wagle Shukla and Katz13], satisfaction was high among both neurologists and patients in both studies. Other studies using surveys alone also show high satisfaction with clinical virtual visits compared to office visits [Reference Hanson, Truesdell, Stebbins, Weathers and Goetz55,Reference Spear, Auinger, Simone, Dorsey and Francis56]. In our study, in addition to 75% participant satisfaction, importantly, participants felt that they were more self-reliant with the in-home visits compared to historical in-person visits.
The MoCA has been previously used for remote administration in diverse populations [Reference DeYoung and Shenal36], including a small study in PwPD [Reference Abdolahi, Bull and Darwin35]. The ability to perform reliable cognitive testing remotely, however, remains a concern: scores could be “improved” by help from people outside the camera field or by taking notes on memory items. To ensure participants performed their own assessments, we had them angle their camera toward the writing table during drawing tasks (cube and clock) and requested that they not write down the items to be recalled. Their microphones remained on so we could hear if caregivers provided assistance. Among our instruments, the MoCA caused the greatest difficulty in administration, albeit in only 18% of participants, primarily in the visuospatial domain. In the 26 participants with prior MoCA scores, correlation with the prior administration was high.
There are concerns about remote visits widening the digital divide [Reference Shalash, Spindler and Cubo57], and our study aimed to provide evidence that remote assessments can be reliable, even in an elderly population living in MUAs in a rural state, thereby improving healthcare equity. Research participation also helps empower people by allowing them to contribute to the search for better outcomes for their disease. It is important to note that we did not target recruitment efforts toward enrollment of MUA participants, yet 40% of participants were from such areas. This remains lower than expected based on the demographics of our clinic population, in which 75% reside in MUAs. Videoconferencing quality is another concern commonly expressed about remote visits, and only limited small studies have evaluated it [Reference Stillerova, Liddle, Gustafsson, Lamont and Silburn58]. We rated audio-video quality similarly in the MUA and non-MUA populations while blinded to participants’ residence status. Because PD onset is commonly in later life, elderly patients may be less comfortable with the new technology needed for telemedicine visits, increasing their reliance on others for medical care. Contrary to this, participants in our study felt more self-reliant with the in-home visits, and the majority felt they were more likely to participate in future telemedicine research, independent of their residence location.
About a third of participants reported that they preferred in-person visits to in-home visits. In a post hoc analysis (Supplementary Table 2), we found no significant differences in age, disease duration, motor UPDRS scores, or TUG times based on this response that might suggest less severely affected participants were more willing to travel for in-person visits. Distance traveled for in-person visits was also similar between those preferring in-person visits and those who did not indicate this preference. However, participants who indicated a preference for in-person visits did appear to have higher annual incomes, with 64% reporting incomes >$100,000 and none <$50,000, compared to 44% and 28%, respectively, among those who did not indicate this preference. Further exploration of this income disparity in the MUA population and its relation to visit-type preference is warranted, as it suggests that lower-income, underserved populations are willing to utilize telemedicine. Those who preferred in-person visits were possibly less reliant on others for those visits and were less likely to indicate willingness to participate in future telemedicine research. It must be noted, however, that this preference for in-person visits is based on a questionnaire completed after a research visit that lasted on average 1.4 hours; responses might differ after a 30-minute clinical visit. These data are self-reported and must be interpreted with caution.
The medically underserved population in our study had lower self-reported annual income, lower education levels, and longer travel distances to attend in-person visits. They also reported higher costs to attend in-person visits, which could reflect expenses for travel, meals purchased along the way, and hotel accommodations needed to break up the journey. Incorporation of telemedicine into clinical and research practice can help reduce these healthcare-associated costs. Additionally, telemedicine in this underserved population could greatly improve ease of access to clinical care, and network-based models focused on patient-centered care have been proposed [Reference Bloem, Henderson and Dorsey5].
One limitation of our study was that, due to cost, we were unable to incorporate remote sensors for objective evaluation of limb bradykinesia and gait [Reference Del Din, Kirk, Yarnall, Rochester and Hausdorff59]. Future addition of properly validated, inexpensive, and reliable sensors [Reference Joshi, Bronstein and Keener60,Reference Evers, Raykov and Krijthe61] for objective remote monitoring of motor function in rural and underserved areas could further extend our results. Delivery of Lee Silverman Voice Treatment [Reference Griffin, Bentley, Shanks and Wood62], exercise and physical therapy [Reference Landers and Ellis63], and cognitive behavioral therapy [Reference Kraepelien, Schibbye and Mansson64] via telemedicine has also been gaining momentum but needs to be tested in underserved populations. Due to the COVID pandemic, our in-person comparisons were limited to only half the participants and were performed 1.5 years prior to the telemedicine assessments. Disease progression could impact our results; however, as discussed in greater detail above, there was still significant agreement between the two assessments. These results therefore still provide evidence for the feasibility of conducting such assessments via telemedicine in a manner that tracks disease progression. Additionally, given the multiple measured features and the possible false discovery rate, caution should be taken not to overinterpret any statistical group differences.
In summary, we show that in-home telemedicine visits can be conducted in PwPD residing in MUAs, that the assessments performed show concordance with those performed in person, and that participants were not only satisfied with the visits and felt more self-reliant but also willing to participate in telemedicine-based research in the future. These results support the continued incorporation of remote assessments into research studies and clinical care, in conjunction with current in-person care models.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/cts.2022.459
Acknowledgments
We would like to acknowledge the participants for their time and effort, without which this work would not have been possible.
Disclosures
The authors have no conflicts of interest to disclose in relation to this work.
Tuhin Virmani received grant support from the Parkinson’s Foundation (PF-JFA-1935), Translational Research Institute at the University of Arkansas for Medical Sciences (UAMS) through the National Center for Advancing Translational Sciences of the National Institutes of Health (Award number UL1TR003107) and the UAMS Clinician Scientist program, as well as salary support from the University of Arkansas for Medical Sciences.
Mitesh Lotia received salary support from the University of Arkansas for Medical Sciences.
Aliyah Glover received salary support from the above grants to TV.
Lakshmi Pillai received salary support from the above grants to TV.
Aaron Kemp received salary support from UAMS and the grants to TV and LLP and is a minority shareholder in NeuroComp Systems, Inc.
Phillip Farmer received salary support from UAMS.
Shorahbuddin Syed received salary support from grants to FP.
Linda Larson-Prior received salary support from UAMS, the National Institutes of Health, and the Arkansas Children’s Nutrition Center (USDA-ARS).
Fred Prior served as a member of the External Advisory Committees of the Arkansas Integrative Metabolic Research Center (AIMRC) COBRE grant and the Imaging Data Commons contract from the National Cancer Institute to Harvard University. He received grant funding from the NCI, NSF, and PCORI. He also received contract funding through Leidos Biomedical Research for the NCI Cancer Imaging Archive. He served as core director for the NCATS CTSA award to UAMS and the Data Coordinating and Operations Center for the IDeA States Pediatric Clinical Trials Network. He also received salary support from UAMS.