The psychometric rigor of unsupervised, smartphone-based assessments, and the factors that affect engagement with remote protocols, must be evaluated before such methods are used in clinical contexts. We evaluated the validity of a high-frequency, smartphone-based cognitive assessment protocol, including examining convergence and divergence with standard cognitive tests and investigating factors that may impact adherence and performance (i.e., time of day and anticipated receipt of feedback vs. no feedback).
Methods:
Cognitively unimpaired participants (N = 120, Mage = 68.8, 68.3% female, 87% White, Meducation = 16.5 years) completed 8 consecutive days of the Mobile Monitoring of Cognitive Change (M2C2), a mobile app-based testing platform, with brief morning, afternoon, and evening sessions. Tasks included measures of working memory, processing speed, and episodic memory. Traditional neuropsychological assessments included measures from the Preclinical Alzheimer’s Cognitive Composite battery.
Results:
Findings showed overall high compliance (89.3%) across M2C2 sessions. Average compliance was 90.2% for morning sessions, 77.9% for afternoon sessions, and 84.4% for evening sessions. There was evidence of faster reaction times among participants who expected to receive performance feedback. We observed excellent convergent and divergent validity in our comparison of M2C2 tasks and traditional neuropsychological assessments.
Conclusions:
This study supports the validity and reliability of self-administered, high-frequency cognitive assessment via smartphones in older adults. Insights into factors affecting adherence, performance, and protocol implementation are discussed.
Associative memory is impacted early in Alzheimer’s disease (AD). Poorer performance on associative memory tests has been related to greater amyloid and regional tau burden in preclinical AD. Our group previously examined the association of brain pathology and performance on the Free and Cued Selective Reminding Test (FCSRT) in Autosomal Dominant Alzheimer’s Disease (ADAD), finding that associative memory summary scores distinguished non-demented mutation carriers from non-carriers several years before clinical onset of cognitive impairment. In the current study, we examined whether FCSRT learning slopes were associated with brain pathology in a sample of ADAD carriers and non-carriers.
Participants and Methods:
There were 119 participants, including 57 non-demented carriers of the Colombian kindred with the Presenilin1 E280A mutation and 62 non-carrier family members (mean age = 36.3, 60% female). Participants were administered the Mini-Mental State Examination (MMSE), a measure of global cognitive status, and the FCSRT, which consists of three trials in which participants are asked to freely recall the same list of 16 items. It is a well-established measure known to be sensitive to early changes in AD. A subsample of 69 participants (32 carriers and 37 non-carriers) underwent positron emission tomography (PET) to measure in vivo cortical amyloid-beta (Pittsburgh compound B, PiB) and regional tau (Flortaucipir, FTP) burden in entorhinal and precuneus regions, which are among the earliest sites of tau accumulation in this ADAD population. Mann-Whitney U tests, Spearman correlations, and chi-square tests were used to examine group differences and relations among variables of interest. Learning slope was calculated by subtracting the number of items freely recalled in FCSRT Trial 1 from the number of items freely recalled in Trial 3.
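As a concrete illustration of the scoring and correlational analysis just described, the sketch below computes a per-participant learning slope (Trial 3 free recall minus Trial 1 free recall) and its Spearman correlation with a regional tau value. The column names and toy data are hypothetical placeholders, not study data.

```python
# Minimal sketch of the learning-slope scoring and Spearman correlation described above.
# Column names and the toy data are hypothetical placeholders, not the study's dataset.
import pandas as pd
from scipy.stats import spearmanr

df = pd.DataFrame({
    "fcsrt_trial1_free": [5, 7, 4, 9, 6],        # items freely recalled on Trial 1
    "fcsrt_trial3_free": [12, 14, 8, 16, 10],    # items freely recalled on Trial 3
    "precuneus_tau_suvr": [1.10, 1.05, 1.42, 1.01, 1.30],  # hypothetical FTP SUVRs
})

# Learning slope = Trial 3 free recall minus Trial 1 free recall
df["learning_slope"] = df["fcsrt_trial3_free"] - df["fcsrt_trial1_free"]

rho, p = spearmanr(df["learning_slope"], df["precuneus_tau_suvr"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```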
Results:
Compared to non-carriers, carriers had greater cortical amyloid-β and regional tau burden, lower MMSE scores (mean [SD]: carriers = 27.5 [2.7]; non-carriers = 28.8 [1.0]), and lower total immediate and delayed free and cued recall scores on the FCSRT (all p < .01). The groups did not differ on age, sex, or education level (all p > 0.05). In the whole sample and in carriers only, we found that higher MMSE scores were associated with higher learning slope, meaning faster learning (whole group ρ = 0.25, p = 0.006; carriers ρ = 0.30, p = 0.029). In the whole sample, we found that lower learning slope was associated with higher levels of amyloid (ρ = -.34, p = .006) and tau in the left, right, and bilateral precuneus region (ρ = -.43, p < .001; ρ = -.46, p < .001; ρ = -.45, p < .001). In carriers only, lower learning slope was associated with higher tau burden in the left, right, and bilateral precuneus specifically (ρ = -.43, p = .017; ρ = -.48, p = .008; ρ = -.46, p = .010, respectively). No significant associations were found in non-carriers.
Conclusions:
These findings suggest that learning curves on an associative memory test may be sensitive to preclinical pathological changes in AD, specifically within the precuneus, a brain region known to be involved in cue reactivity, episodic memory retrieval, and mental imagery strategies. Future studies with larger samples are warranted to further examine associations between the FCSRT learning curves and regional tau accumulation in individuals with ADAD.
The global prevalence of persons living with dementia will soon exceed 50 million. Most of these individuals reside in low- and middle-income countries (LMICs). In South Africa, one such LMIC, the physician-to-patient ratio of 9:10 000 severely limits the capacity of clinicians to screen, assess, diagnose, and treat dementias. One way to address this limitation is by using mobile health (mHealth) platforms to scale up neurocognitive testing. In this paper, we describe one such platform, a brief tablet-based cognitive assessment tool (NeuroScreen) that can be administered by lay health providers. It may help identify patients with cognitive impairment (related, for instance, to dementia) and thereby improve clinical care and outcomes. However, there is a lack of data regarding (a) the acceptability of this novel technology for delivery of neurocognitive assessments in LMIC-resident older adults, and (b) the influence of technology-use experience on NeuroScreen performance of LMIC-resident older adults. This study aimed to fill that knowledge gap, using a sample of cognitively impaired South African older adults.
Participants and Methods:
Participants were 60 older adults (63.33% female; 91.67% right-handed; age M = 68.90 years, SD = 9.42, range = 50-83), all recruited from geriatric and memory clinics in Cape Town, South Africa. In a single 1-hour session, they completed the entire NeuroScreen battery (Trail Making, Number Speed, Finger Tapping, Visual Discrimination, Number Span Forward, Number Span Backward, List Learning, List Recall) as well as a study-specific questionnaire assessing acceptability of NeuroScreen use and overall experience and comfort with computer-based technology. We summed across 11 questionnaire items to derive a single variable capturing technology-use experience, with higher scores indicating more experience.
Results:
Almost all participants (93.33%) indicated that NeuroScreen was easy to use. A similar proportion (90.00%) indicated they would be comfortable completing NeuroScreen at routine doctor's visits. Only 6.67% reported feeling uncomfortable using a tablet, despite about three-quarters (76.67%) reporting never having used a tablet with a touchscreen before. Almost one in five participants (18.33%) reported owning a computer, 10.00% a tablet, and 70.00% a smartphone. Correlations between test performance and technology-use experience were statistically significant (or strongly tended toward significance) for most NeuroScreen subtests that assessed higher-order cognitive functioning and required participants to manipulate the tablet themselves: Trail Making 2 (a measure of cognitive switching ability), r = .24, p = .05; Visual Discrimination A (complex processing speed [number-symbol matching]), r = .38, p = .002; Visual Discrimination B (pattern recognition), r = .37, p = .004; Number Speed (simple information processing speed), r = .36, p = .004. For the most part, there were no such significant associations when the NeuroScreen subtest required only verbal input from the participant (i.e., on the list learning and number span tasks).
Conclusions:
NeuroScreen, a tablet-based neurocognitive screening tool, appears feasible for use among older South Africans, even if they are cognitively impaired and have limited technological familiarity. However, test performance might be influenced by amount of technology-use experience; clinicians using the battery must consider this in their interpretations.
Approximately half of people living with HIV (PWH) experience HIV-associated neurocognitive disorders (HAND), yet HAND often goes undiagnosed. There is an ongoing need to find efficient, cost-effective ways to screen for HAND and monitor its progression in order to intervene earlier in its course and more effectively treat it. Prior studies that analyzed brief HAND screening tools have demonstrated that certain cognitive test pairs are sensitive to HAND cross-sectionally and outperform other screening tools such as the HIV Dementia Scale (HDS). However, few studies have examined optimal tests for longitudinal screening. This study aims to identify the best cognitive test pairs for detecting cognitive decline longitudinally.
Participants and Methods:
Participants were HIV+ adults (N=132; ages 25-68; 59% men; 92% Black) from the Temple/Drexel Comprehensive NeuroHIV Center cohort. Participants were currently well treated (98% on cART, 92% with undetectable viral load, and mean current CD4 count=686). They completed comprehensive neurocognitive assessments longitudinally (328 total visits, average follow-up time=4.9 years). Eighteen participants (14% of the cohort) demonstrated significant cognitive decline, defined as a decline in global cognitive z-score of 0.5 (SD) or more. In receiver operating characteristic (ROC) analyses, tests with an area under the curve (AUC) of greater than .7 were included in subsequent test pair analyses. Further ROC analyses examined the sensitivity and specificity of each test pair in detecting significant cognitive decline. Results were compared with the predictive ability of the Modified HIV Dementia Scale (MHDS).
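The two-stage screening logic described above (individual tests screened by AUC, then surviving tests evaluated as pairs) could be sketched as follows. The simulated data, variable names, and the logistic combination used to score each pair are illustrative assumptions; the abstract does not state how pair scores were formed.

```python
# Sketch of the two-stage ROC screening described above: individual test change scores
# are screened by AUC, then surviving tests are combined into pairs and re-evaluated.
# Simulated data and the logistic pairing are illustrative assumptions only.
import itertools
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 132
decliner = rng.binomial(1, 0.14, n)                      # 1 = significant global decline
tests = {name: rng.normal(-1.2 * decliner, 1.0) for name in
         ["gpd_change", "category_fluency_change", "coding_change",
          "letter_fluency_change", "tmt_b_change"]}

# Stage 1: keep tests whose individual AUC exceeds .7 (negate so larger = more decline)
kept = [name for name, x in tests.items() if roc_auc_score(decliner, -x) > 0.7]

# Stage 2: evaluate each pair of surviving tests with a simple logistic combination
for a, b in itertools.combinations(kept, 2):
    X = np.column_stack([tests[a], tests[b]])
    score = LogisticRegression().fit(X, decliner).predict_proba(X)[:, 1]
    print(a, "+", b, "AUC =", round(roc_auc_score(decliner, score), 2))
```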
Results:
The following test pairs demonstrated the best balance between sensitivity and specificity in detecting global cognitive decline: Grooved Pegboard dominant hand (GPD) and category fluency (sensitivity=.89, specificity=.60, AUC=.75, p<.001), GPD and Coding (sensitivity=.76, specificity=.70, AUC=.73, p<.001), letter fluency and Trail Making Test (TMT) B (sensitivity=.82, specificity=.63, AUC=.73, p<.001), and GPD and TMT B (sensitivity=.81, specificity=.64, AUC=.73, p<.001). Change in MHDS predicted significant decline no better than chance (sensitivity=.61, specificity=.47, AUC=.53, p=.65).
Conclusions:
Several cognitive test pairs, particularly those that include GPD, are sensitive to HIV-associated cognitive change, and far more sensitive and specific than the MHDS. Cognitive test pairs can serve as valid, rapid, cost-effective screening tools for detecting cognitive change in PWH, thereby better enabling early detection and intervention. Future research should validate the present findings in other cohorts and examine the implementation of test pair screenings in HIV care settings. Most of the optimal tests identified are consistent with the well-established impact of HAND on frontal-subcortical motor and executive networks. The utility of category fluency is somewhat unexpected as it places more demands on temporal semantic networks; future research should explore the factors driving this finding, such as the potential interaction of HIV with aging and neurodegenerative disease.
Cognitive screening tools such as the Montreal Cognitive Assessment (MoCA) play an essential role in the clinical evaluation of neuropsychological functions. Despite extensive investigation of the MoCA in English-speaking countries, as well as emerging adaptation work in a few Asian cultures, the evidence base for the utility of the Vietnamese MoCA (MoCA-V) is lacking. This poses a substantial challenge for current and future clinical practice in Vietnam, as the country continues to bear a large burden of brain-related disorders. This study examined the construct validity of the MoCA-V and identified a cut-off score for the determination of cognitive impairment in a prevalent neurological condition in Vietnam - traumatic brain injury (TBI).
Participants and Methods:
Participants included 129 neurologically healthy individuals and 80 patients with moderate-to-severe TBI. All participants completed the MoCA-V, along with other common neurocognitive measures such as the Trail Making Test (TMT) Parts A and B, Vietnamese Verbal Fluency Test, and Digit Span.
Results:
Pearson’s correlations revealed significant, moderate correlations between performance on the MoCA-V subdomains and more comprehensive cognitive measures. Performance on the MoCA-V Attention domain was correlated with both Digit Span Forward, r(110) = .453, p < .001, and Digit Span Backward, r(110) = .303, p = .001; performance on the MoCA-V Language domain was correlated with the Vietnamese Verbal Fluency Test, r(107) = .334, p < .001; and performance on the MoCA-V Executive Function domain was correlated with the TMT-B, r(108) = -.479, p = .022. Performance on the MoCA-V was also associated with age, r(127) = -.659, p < .001, and education, r(127) = .769, p < .001, consistent with the general effects of age and education on cognitive abilities. Finally, a cut-off score of 22.5 was identified for the detection of cognitive impairment in Vietnamese people with TBI (AUC = 0.811; 95% CI = .75-.87, p < .001).
Conclusions:
This study provides the first evidence for the construct validity and clinical utility of the MoCA-V. Future research is necessary to cross-validate study findings among other clinical populations. Lessons learned from the neuropsychological test translation and adaptation process will be discussed, particularly regarding the development of administration materials and test instructions (e.g., considerations for individuals with limited formal education, influences of colonialism in the development of test stimuli).
Subjective cognitive complaints are common in individuals with Parkinson’s disease and Essential Tremor. One scale often used to capture the type and severity of subjective cognitive concerns is the Cognitive Change Index-20 (CCI-20). Created by Saykin et al. (2006), the CCI-20 is a questionnaire that assesses perceived cognitive change in the memory, executive function, and language domains. Despite its multidomain structure, previous research has not empirically examined whether the CCI-20's underlying factor structure aligns with the cognitive domains proposed during its original development. Thus, the goal of the current study was to investigate the factor structure of the CCI-20 in individuals with movement disorders (Parkinson’s disease, Essential Tremor), who are known to experience varying degrees of cognitive sequelae as part of their disease progression.
Participants and Methods:
Participants included a convenience sample of 216 non-demented individuals with Parkinson’s disease (n=149) or Essential Tremor (n=67) who were seen at the University of Florida Fixel Institute Movement Disorders Center. All received the CCI-20 as part of a neuropsychological evaluation. The CCI-20 consists of 20 items, rated on a 5-point Likert scale, that ask about changes over the past 5 years in memory (12 items), executive function (5 items), and language (3 items). An exploratory factor analysis was conducted on CCI-20 scores using Promax rotation, with factor extraction based on scree plot visual inspection and Kaiser’s rule (eigenvalues >1.0). Cronbach’s alpha was used to assess internal consistency reliability. Finally, Spearman correlations determined associations between factors and mood measures of depression (Beck Depression Inventory-II, BDI-II), apathy (Apathy Scale, AS), and anxiety (State-Trait Anxiety Inventory, STAI).
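A compact sketch of this analysis pipeline (Kaiser's rule on eigenvalues, exploratory factor analysis with Promax rotation, and Cronbach's alpha) is given below. It relies on the third-party factor_analyzer package, and the simulated 216 × 20 item matrix is a stand-in for the CCI-20 data, not the study's dataset.

```python
# Sketch of the pipeline described above: Kaiser's rule, EFA with Promax rotation,
# and Cronbach's alpha. Requires the third-party `factor_analyzer` package.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(1)
X = rng.normal(size=(216, 20))            # placeholder for a 216 x 20 CCI-20 item matrix

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Kaiser's rule: retain factors with eigenvalues > 1.0
fa0 = FactorAnalyzer(rotation=None)
fa0.fit(X)
eigenvalues, _ = fa0.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())

fa = FactorAnalyzer(n_factors=n_factors, rotation="promax")
fa.fit(X)
print("retained factors:", n_factors)
print("loadings shape:", fa.loadings_.shape)
print("alpha (all 20 items):", round(cronbach_alpha(X), 2))
```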
Results:
Because the Parkinson’s disease and Essential Tremor groups did not statistically differ in their CCI-20 total scores, they were combined into a single group for analyses. The resulting 216 participants were well-educated (M=15.01±2.92 years), in their mid-60s (M=67.72±9.33 years), predominantly male (63%), and non-Hispanic White (93.6%). The factor analysis resulted in 3 factors: factor 1 included 8 memory items (items 1-4, 6, 10-12; loadings from .524 to .920); factor 2 included all executive function and language items (items 13-20; loadings from .605 to .824); and factor 3 included the four remaining memory items (items 5, 7-9; loadings from .628 to .810). Reliability of the 20 CCI items was good (α = .94), and reliability within each factor ranged from adequate (Factor 3, α = .78) to good (Factors 1 and 2, α = .90). All factors showed significant weak to moderate associations with BDI-II, AS, and STAI (state and trait) scores.
Conclusions:
The CCI-20 revealed three distinct dimensions of subjective cognitive complaints that did not correspond to the memory, executive function, and language domains. Rather, the CCI-20 decomposed into two different dimensions of memory complaints and one dimension of non-memory complaints. Mood symptoms played a significant role in driving all dimensions of subjective cognitive complaints. Future studies should confirm this triadic structure in a healthy older adult sample and explore the relationship between factors and objective cognitive performance beyond the contribution of mood. Funding: T32-AG061892; T32-NS082168.
The Montreal Cognitive Assessment (MOCA) is widely used as a mental status screening test to detect cognitive impairment in adults over 55 years of age. Scores on this test range from 0 to 30. One point is added for individuals with 12 or fewer years of education. This accommodation is based on the fact that low education may be a risk factor for dementia (Milani et al., 2018). However, studies suggest the one-point adjustment may not be sufficient to address the impact of low education on test performance (Malek-Ahmadi et al., 2015). The aim of this study is to compare the effects of educational achievement versus baseline verbal abilities on MOCA performance.
Participants and Methods:
Fifty patients (25 male; mean age=72.78, SD = 8.11; mean education=16.18, SD = 2.73) with cognitive concerns were referred to Massachusetts General Brigham. All underwent neuropsychological evaluation, including screening with the MOCA. Total MOCA scores were calculated. In this patient group, the MOCA scores ranged from 10 to 29 (mean=22, SD=5.129). Measures of literacy (Wechsler Test of Adult Reading or Test of Premorbid Functioning) were used to estimate baseline verbal abilities. Educational achievement was based on self-reported years of education.
Results:
Correlational analyses included the Total MOCA scores, measures of literacy, and years of education. Performance on the MOCA significantly correlated with measures of literacy, r(43) = .578, p < .001, and a stepwise regression analysis revealed that literacy predicted performance on the MOCA, R² = .041, F(3,139) = 9.172, p < .001. Years of education correlated with measures of literacy, r(44) = .494, p < .001, but not with performance on the MOCA.
Conclusions:
Findings suggest that education-adjusted scoring on the MOCA may not be sufficient to “level the playing field” in terms of MOCA performance. Years of education had less of an effect on the Total MOCA scores than did baseline verbal abilities. It may be the case that literacy has a more robust effect on MOCA performance due to the inherent verbal nature of the MOCA. Data from this study highlight the importance of considering a patient’s baseline verbal abilities in the interpretation of their MOCA performance.
To investigate differences in performance on a widely used cognitive screener between community-dwelling older adults from two disparate socioeconomic groups.
Participants and Methods:
Participants were part of a larger study of cognitive screening in healthy older adults. The total sample (N=79, 69.6% female, 19% White/Caucasian, 12.7% Asian, 43% Latino/a, 25.3% Black/African-American) consisted of community-dwelling adults (Mage=73.1 years [SD=7.2] and Meducation=14.3 years [SD=2.6]) who were initially recruited via social media, flyers, and general community announcements. An initial lack of ethnic minority participants led to a two-year commitment to reaching communities of color through visits to, and health-literacy programming at, local religious and community programs. Continuous contact with leaders/gatekeepers helped establish the credibility of the research study and forge a stronger sense of trust among ethnically diverse participants in the greater Houston, TX, area. Testing was initially conducted at the clinical study site. Due to low participation rates among people of color, greater effort was placed on tailored strategies to overcome economic and time constraints (i.e., schedule/time conflicts, lack of transportation, inability to pay for parking). To fit the priorities and needs of the participants, testing was also conducted at their homes (25.3%) and nearby religious and community centers (22.8%). In contrast to Caucasian participants, those identifying as Latino/a or Black were predominantly recruited and tested at their local community center (as requested by gatekeepers/participants) to increase access to the study. Median income estimates based on zip codes were used to stratify participants into low (L-SES) and high (H-SES) socioeconomic status groups.
Results:
Participants from the L-SES group had significantly lower total scores on the MoCA than their H-SES counterparts, t(77) = 2.837, p = 0.003, g = 0.696. The average MoCA total score for participants from the L-SES group was 2.64 points lower. The observed differences in MoCA total score when stratifying by ethnicity may be attributable to differences in education level and SES, which are known risk factors for cognitive impairment and will be further examined upon recruitment completion.
Conclusions:
Studies have found that ethnically diverse older adults not only encounter more barriers to accessing quality health care but also experience disparities in brain health research. Communities of color comprise a sizeable portion of our older adult population but have been traditionally underrepresented in clinical research, limiting the generalizability of research findings to clinical treatment. Socioeconomic deprivation has been identified as one of several barriers to research engagement for people of color, placing ethnic communities at increased risk for under- or misdiagnosis and limited access to medical intervention. Preliminary findings have implications for the recruitment of ethnically diverse groups in clinical research. Given the growing racial and ethnic diversity of the United States population, we must do our due diligence to increase understanding of participation and recruitment barriers for racial/ethnic minority individuals. Tailored community outreach and engagement strategies may be effective in improving the inclusion of ethnically diverse populations and facilitating recruitment and retention in clinical research studies.
Given the aging population, there are significant public health benefits to delaying the onset of Alzheimer’s disease (AD) in individuals at risk. However, adherence to health behaviors (e.g., diet, exercise, sleep hygiene) is low in the general population. The Health Belief Model proposes that beliefs such as perceived threat of disease, perceived benefits and barriers to behavior change, and cues to action are mediators of behavior change. The aim of this study was to gain additional information on current health behaviors and beliefs for individuals at risk for developing AD. This information can then be used to inform behavioral interventions and individualized strategies to improve health behaviors that may reduce AD risk or delay symptom onset.
Participants and Methods:
Surveys were sent to the Rhode Island AD Prevention Registry, which is enriched for at-risk, cognitively normal adults (i.e., majority with a family history and/or an APOE e4 allele). A total of 177 individuals participated in this study. Participants were 68% female; 93% Caucasian and non-Hispanic; mean age of 69.2; 74% with family history of dementia; 40% with subjective memory decline. The survey included measures from the Science of Behavior Change (SoBC) Research Network assessing specific health belief factors, including individual AD risk, perceived future time remaining in one’s life, generalized self-efficacy, deferment of gratification, and consideration of future consequences, as well as dementia risk awareness and a total risk score for dementia calculated from a combination of demographic, health, and lifestyle factors.
Results:
Participants who were older had higher scores for dementia risk (r=0.78), lower future time perspective (r=-0.33), and lower generalized self-efficacy (r=-0.31) (all at p<0.001). Higher education correlated with higher consideration of future consequences (r=-.31, p<0.001) and lower overall dementia risk score (r=-0.23, p=0.006). Of all scales examined, only generalized self-efficacy had a significant linear relationship to both frequency (r2=0.06) and duration (r2=0.08) of weekly physical activity (p<0.001). Total dementia risk score also had significant linear relationships (r2=0.19) with future time perspective (p<0.001) and generalized self-efficacy (p=0.48).
Conclusions:
Overall, individuals who rated themselves higher in self-efficacy were more likely to exercise more frequently and for a longer duration. Individuals who had lower overall risk for dementia due to both demographic and behavioral factors were more likely to endorse higher self-efficacy and more perceived time remaining in their lives. Increasing self-efficacy and targeting perceived future time limitations may be key areas to increase motivation and participation in behavioral strategies to reduce AD risk. Developing individual profiles based on these scales may further allow for individually tailored intervention opportunities.
Given the results of clinical trials of disease-modifying therapies for Alzheimer's disease and their mechanism of action, treatment must begin as early in the disease course as possible. To this end, there is a need for a tool that allows easy, periodic home assessment of memory change from the early stages of the disease. The purpose of this study was to establish a new method of memory evaluation that correlates well with the Logical Memory (LM) II subtest score of the WMS-R and that, at the same time, can be completed easily in a short time.
Participants and Methods:
Participants were 85 adults, including 12 with MCI, 8 with AD, and 65 older people with normal cognitive function. In the new method, 8-picture recall and 16-word recognition were assessed, and the index was calculated by summing the proportion of correct responses on the two tests (maximum score of 2). The correlation with the LM II score was examined.
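The arithmetic of the proposed index (proportion correct on 8-picture recall plus proportion correct on 16-word recognition, maximum 2) and its correlation with the LM II score could be sketched as follows; the scores listed are illustrative placeholders, not study data.

```python
# Minimal sketch of the proposed index: proportion correct on 8-picture recall plus
# proportion correct on 16-word recognition (maximum value 2), correlated with the
# WMS-R LM II score. The values below are illustrative placeholders.
from scipy.stats import pearsonr

picture_recall_correct = [7, 5, 8, 3, 6]         # out of 8 pictures
word_recognition_correct = [15, 12, 16, 9, 13]   # out of 16 words
lm2_scores = [28, 20, 32, 10, 24]                # WMS-R Logical Memory II

index = [p / 8 + w / 16 for p, w in zip(picture_recall_correct, word_recognition_correct)]
r, p = pearsonr(index, lm2_scores)
print(f"index values: {[round(i, 2) for i in index]}")
print(f"correlation with LM II: R = {r:.3f}, p = {p:.3f}")
```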
Results:
Our statistical analysis showed that 8-picture recall (R = 0.872, p < 0.001) and the index (R = 0.857, p < 0.001) were significantly correlated with the LM II score. The correlation between 16-word recognition and the LM II score was lower (R = 0.691, p < 0.001) than for the other two scores, possibly because recognition scores were inflated above true ability by false recognition of words that had not been presented.
Conclusions:
Our new method can predict the LM II score of the WMS-R in about one third of the time required by conventional methods. We named this index the Self-Assessment Memory Scale (SAMS) and plan to develop a digital tool to enable easy, self-accessible evaluation of recall.
The study aimed to develop, validate, and field test a cognitive screening tool for use in outpatient departments within health facilities in Uganda.
Participants and Methods:
In the rural eastern region of Uganda, we purposively selected twenty-three (23) health facilities and administered a scientifically derived cognitive screening tool to all eligible older persons. We assessed inter-rater reliability in all the health facilities using three raters. A diagnosis of dementia (DSM-IV) was classified as major cognitive impairment and was quality-checked by physiatrists who were blinded to the results of the screening assessment.
Results:
The area under the receiver operating characteristic (AUROC) curve in health facilities was 0.912. The inter-rater reliability was good (intra-class correlation coefficients of 0.692 to 0.734). The predictive accuracy of the tool in discriminating between dementia and other cognitive impairment was 0.892. In a regression model, the cognitive screening tool did not appear to be biased by age.
Conclusions:
The cognitive screening tool performed well among older persons and may prove useful for screening for dementia in other developing countries.
The Montreal Cognitive Assessment (MOCA) is a brief cognitive screener, widely used by providers to detect mild cognitive impairment (MCI). It is scored out of 30 points and assesses executive functioning, visuospatial skills, language, memory, attention, and orientation. Although the MOCA has been shown to have high sensitivity (90%) and specificity (87%) for detecting MCI, existing studies have primarily included participants who were already diagnosed with amnestic MCI via neuropsychological testing. Since several factors beyond the presence of MCI can contribute to low performance on the MOCA (e.g., premorbid IQ, fatigue, mood symptoms), over-reliance on the MOCA runs the risk of falsely identifying individuals as having cognitive impairment. The MOCA’s memory subtest raises particular concern, as there are several language-based tasks between the learning and delay trials, introducing the potential for interference effects. Thus, the MOCA’s ability to accurately identify those at risk for MCI in the community remains unclear. The objective of the present study was to evaluate: (1) the MOCA’s association with neuropsychological memory measures; and (2) its ability to distinguish between neurocognitive groups (intact vs. MCI vs. dementia).
Participants and Methods:
This study involved a retrospective analysis of fifty-one patients (M age=72.58 [7.90]; M education= 16.37 [16.37]) who underwent neuropsychological evaluation. Standardized scores for total list-learning (HVLT; CVLT-bf) were used to capture memory encoding; retention % scores were used to capture memory storage. MOCA scores included Total MOCA, MOCA-Orientation, and the MOCA Memory Index (MOCA-MEM). MOCA-MEM was calculated based on Julayanont et al. (2014): (Free Delayed Recall × 3) + (Category-Cued Recall × 2) + (Multiple-Choice-Cued Recall × 1). Bivariate correlations were conducted for the MOCA and neuropsychological test scores. Participants were divided into three diagnostic groups, classified by the neuropsychologist: (1) Cognitively Intact (CI; n=13); (2) MCI (n=26); and (3) Major Neurocognitive Disorder/Dementia (MNCD; n=11). Analysis of covariance was used to analyze differences between the cognitive groups on Total MOCA, MOCA-Orientation, and MOCA-MEM.
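The MOCA-MEM weighting described above can be written out directly; the example counts passed to the function are hypothetical, not patient data.

```python
# Sketch of the MoCA Memory Index weighting described above (after Julayanont et al., 2014):
# delayed words recalled freely are weighted 3, with a category cue 2, and with
# multiple-choice cues 1. Inputs are hypothetical counts (0-5 each).
def moca_memory_index(free_delayed: int, category_cued: int, multiple_choice_cued: int) -> int:
    """Return the MOCA-MEM index from the three delayed-recall counts."""
    return free_delayed * 3 + category_cued * 2 + multiple_choice_cued * 1

# Example: 2 words recalled freely, 2 with category cues, 1 with multiple choice -> 11
print(moca_memory_index(free_delayed=2, category_cued=2, multiple_choice_cued=1))
```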
Results:
Total MOCA correlated with word-list learning (r=.434, p=.004) and retention % (r=.306, p=.049). MOCA-MEM was correlated with word-list learning (r=.367, p=.042); it did not significantly correlate with retention %. MOCA-Orientation had the strongest correlation with retention % (r=.406, p=.009). Means of Total MOCA significantly differed between CI (25.31[2.56]), MCI (22.04[4.14]), and MNCD (15.44[4.13]). MOCA-MEM only differentiated CI (10[3.66]) and MNCD (5.71[2.14]); it did not differentiate MCI (6.94[3.13]) from either CI or MNCD.
Conclusions:
Our findings suggest that the MOCA has limitations in accurately classifying memory deficits in older adults. First, our study suggests that the MOCA-MEM reflects encoding rather than memory storage. Given that deficient encoding may be secondary to other cognitive deficits, such as attention and executive dysfunction, performance on MOCA-MEM cannot readily delineate the presence of an amnestic process. Second, the findings show that MOCA-MEM does not differentiate between patient groups with intact cognition versus MCI, nor those with MCI versus MNCD. These findings argue for the importance of neuropsychological evaluation in deciphering patterns of memory performance and the presence of an amnestic process.
Neuropsychological (NP) tests are increasingly computerized, which automates testing, scoring, and administration. These innovations are well-suited for use in resource-limited settings, such as low- to middle-income countries (LMICs), which often lack specialized testing resources (e.g., trained staff, forms, norms, equipment). Despite this, there is a dearth of research on their acceptability and usability, which could affect performance, particularly in LMICs with varying levels of access to computer technology. NeuroScreen is a tablet-based battery of tests assessing learning, memory, working memory, processing speed, executive functions, and motor speed. This study evaluated the acceptability and usability of NeuroScreen among two groups of LMIC adolescents with and without HIV from Cape Town, South Africa and Kampala, Uganda.
Participants and Methods:
Adolescents in Cape Town (n=131) and Kampala (n=80) completed NeuroScreen and questions about their use and ownership of, as well as comfort with, computer technology, and about their experiences completing NeuroScreen. Participants rated their comfort with and ease of use of computers, tablets, smartphones, and NeuroScreen on a Likert-type scale: (1) Very Easy/Very Comfortable to (6) Very Difficult/Very Uncomfortable. For analyses, responses of Somewhat Easy/Comfortable to Very Easy/Comfortable were collapsed to codify comfort and ease. Descriptive statistics assessed technology use and experiences of using the NeuroScreen tool. A qualitative question asked how participants would feel about completing NeuroScreen routinely in the future; responses were coded as positive, negative, or neutral (e.g., “I would enjoy it”). Chi-square tests assessed group differences.
Results:
South African adolescents were 15.42 years old on average, 50.3% male, and 49% HIV-positive. Ugandan adolescents were 15.64 years old on average, 50.6% male, and 54% HIV-positive. South African participants were more likely than Ugandan participants to have ever used a computer (71% vs. 49%; p<.005) or tablet (58% vs. 40%; p<.05), whereas smartphone use was similar (94% vs. 87%). South African participants reported higher rates of comfort using a computer (86% vs. 46%; p<.001) and smartphone (96% vs. 88%; p<.05) compared to Ugandan participants. Ugandan adolescents rated using NeuroScreen as easier than South African adolescents did (96% vs. 87%; p<.05). Regarding within-sample differences by HIV status, Ugandan participants with HIV were less likely to have used a computer than participants without HIV (70% vs. 57%; p<.05). The Finger Tapping test was rated as the easiest by both South African (73%) and Ugandan (64%) participants. Trail Making was rated as the most difficult test among Ugandan participants (37%); 75% of South African participants reported no tasks as difficult, followed by Finger Tapping as most difficult (8%). When asked about completing NeuroScreen at routine doctor’s visits, most South Africans (85%) and Ugandans (72%) responded positively.
Conclusions:
This study found that even with low prior tablet use and varying levels of comfort in using technology, South African and Ugandan adolescents rated NeuroScreen with high acceptability and usability. These data suggest that scaling up NeuroScreen in LMICs, where technology use might be limited, may be appropriate for adolescent populations. Further research should examine prior experience and comfort with tablets as predictors of NeuroScreen test performance.
With the emergence of the coronavirus 2019 pandemic, investigating the validity of tele-screenings for neuropsychological status has become increasingly necessary. While the telephone version of the Montreal Cognitive Assessment (MoCA-T) has been validated for use in patients with Parkinson’s and stroke/cerebrovascular disease, the clinical utility of this instrument in geriatric patients with other suspected cognitive disorders has yet to be determined. Thus, the present study aimed to examine the classification accuracy of the MoCA-T in a mixed clinical sample of patients with mild cognitive impairment (MCI) or dementia.
Participants and Methods:
Ninety-one older adults were administered the MoCA-T via videoconferencing technology as part of a comprehensive neurocognitive evaluation performed by a multidisciplinary treatment team within a dementia specialty clinic. Based on this evaluation, 51 (56.0%) patients were diagnosed with dementia, 27 (29.7%) with MCI, and 13 (14.3%) with no neurocognitive diagnosis (i.e., subjective cognitive complaints). In addition to MoCA-T total and item scores, we also computed subscale scores for between-group comparisons as the sum of items assessing orientation, language, attention/executive function, and memory. ANOVA/ANCOVA and ROC curve analyses were used to examine between-group differences on the MoCA-T and its psychometric properties in discriminating patients with MCI or dementia, respectively.
Results:
Participants had a mean age of 74.3 ± 8.7 years and mean education of 16 ± 2.9 years. Patients with dementia were significantly older than those with MCI and no diagnosis, but there were no other significant between-group differences in clinical characteristics. MoCA-T total [F(2,86)=28.5, p<0.001] and all subscale scores (p<0.01) differed significantly between groups and in the expected direction (dementia<MCI<no diagnosis), even after controlling for age. The only exception was language, for which there was initially a statistical trend (p=0.06) that reached significance (p<0.05) after controlling for age. In terms of individual items, abstraction, fluency, orientation to place/city, and category cued recall were the only items that did not differ significantly between groups. ROC curve analyses revealed -5 points to be the optimum cut-off for distinguishing cognitively normal individuals from patients with MCI (Sensitivity=0.67; Specificity=0.77; AUC=0.78), and a cut-off of -8 points optimally distinguished between patients with MCI and dementia (Sensitivity=0.77; Specificity=0.74; AUC=0.81).
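As an illustration of how an optimal screening cut-off can be read off an ROC curve, the sketch below uses Youden's J (sensitivity + specificity - 1); the abstract does not state which criterion was used, and the scores and labels are simulated rather than study data.

```python
# Sketch of ROC-based cut-off selection using Youden's J. The criterion, the simulated
# MoCA-T totals, and the group labels are illustrative assumptions, not study data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(2)
impaired = rng.binomial(1, 0.5, 200)
mocat_total = np.clip(np.round(rng.normal(18 - 4 * impaired, 3)), 0, 22)

# Lower MoCA-T totals indicate impairment, so use the negated score as the "risk" value.
fpr, tpr, thresholds = roc_curve(impaired, -mocat_total)
j = tpr - fpr                      # Youden's J at each threshold
best = int(np.argmax(j))
print("AUC:", round(roc_auc_score(impaired, -mocat_total), 2))
print("optimal cut-off (score <=):", -thresholds[best],
      "sensitivity:", round(tpr[best], 2), "specificity:", round(1 - fpr[best], 2))
```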
Conclusions:
The current study provides further evidence for the clinical utility of the MoCA-T as a screening instrument for neurocognitive disorders in older adults and extends prior work to include administration via videoconferencing technology. While previous studies have focused on the use of MoCA-T in specific patient populations, here, we demonstrate the validity of this screening tool in a mixed-clinical sample, which suggests its broader use in clinical settings for distinguishing between neurocognitive disorders, regardless of the underlying etiology.
Preclinical Alzheimer disease (AD) has been associated with subtle deficits in memory, attention, and spatial navigation (Allison et al., 2019; Aschenbrenner et al., 2015; Hedden et al., 2013). There is a need for a widely distributable screening measure for detecting preclinical AD. The goal of this study was to examine whether self- and informant-reported change in the relevant cognitive domains, measured by the Everyday Cognition Scale (ECog; Farias et al., 2008), could represent robust clinical tools sensitive to preclinical AD.
Participants and Methods:
Clinically normal adults aged 56-93 (n=371) and their informants (n=366) completed the memory, divided attention, and visuospatial abilities (which assesses spatial navigation) subsections of the ECog. Reliability and validity of these subsections were examined using Cronbach’s alpha and confirmatory factor analyses (CFA). The hypothesized CFA assumed a three-factor structure with each subsection representing a separate latent construct. Receiver operating characteristic (ROC) and area under the curve (AUC) analyses were used to determine the diagnostic accuracy of the ECog subsections in detecting preclinical AD, defined either by a cerebrospinal fluid (CSF) ptau181/Aβ42 ratio >0.0198 or by hippocampal volume in the bottom tertile of the sample. Hierarchical linear regression was used to examine whether ECog subsections predicted continuous AD biomarker burden when controlling for depressive symptomatology, which has previously been associated with subjective cognition (Zlatar et al., 2018). Lastly, we compared the diagnostic accuracy of ECog subsections and neuropsychological composites assessing the same or similar cognitive domains (memory, executive function, and visuospatial ability) in identifying preclinical AD.
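The biomarker group definitions and the AUC analysis described above could be sketched as follows; the data frame, column names, and the particular ECog subsection used are simulated placeholders, not study data.

```python
# Sketch of the biomarker group definitions and AUC analysis described above:
# preclinical AD is flagged when CSF ptau181/Abeta42 exceeds 0.0198, or when
# hippocampal volume falls in the bottom tertile. Data are simulated placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "ptau181": rng.uniform(10, 40, 371),
    "abeta42": rng.uniform(500, 1500, 371),
    "hippocampal_volume": rng.normal(7.0, 0.8, 371),
    "ecog_memory_self": rng.uniform(1, 4, 371),    # hypothetical ECog memory subsection score
})

df["csf_positive"] = (df["ptau181"] / df["abeta42"]) > 0.0198
tertile_cut = df["hippocampal_volume"].quantile(1 / 3)
df["hv_positive"] = df["hippocampal_volume"] <= tertile_cut

auc = roc_auc_score(df["csf_positive"], df["ecog_memory_self"])
print("AUC of self-reported memory for CSF-defined preclinical AD:", round(auc, 3))
```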
Results:
All self- and informant-reported subsections demonstrated appropriate reliability (α range = .71-.89). The three-factor CFA models were an adequate fit to the data and were significantly better than one-factor models (self-report χ²(3)=129.511, p<.001; informant-report χ²(3)=145.347, p<.001), suggesting that the subsections measured distinct constructs. Self-reported memory (AUC=.582, p=.007) and attention (AUC=.564, p=.036) were significant predictors of preclinical AD defined by the CSF ptau181/Aβ42 ratio. Self-reported spatial navigation (AUC=.592, p=.022) was a significant predictor of preclinical AD defined by hippocampal volume. Additionally, self-reported attention was a significant predictor of the CSF ptau181/Aβ42 ratio (p<.001) and self-reported memory was a significant predictor of hippocampal volume (p=.024) when controlling for depressive symptoms. Informant-reports were not significant predictors of preclinical AD (all ps>.074).
The AUC for objectively measured executive function showed a nonsignificant trend toward being higher than that for self-reported attention in detecting preclinical AD defined by the CSF ptau181/Aβ42 ratio (p=.084) and was significantly higher than that for self-reported attention in detecting preclinical AD defined by hippocampal volume (p<.001). For the memory and spatial navigation/visuospatial domains, the AUCs for self-reported and objective measures did not differ in detecting preclinical AD defined by either the CSF ptau181/Aβ42 ratio or hippocampal volume (ps>.129).
Conclusions:
Although the self-reported subsections produced significant AUCs, these were not high enough to indicate clinical utility based on existing recommendations (all AUCs<.60; Mandrekar, 2010). Nonetheless, there was evidence that self-reported cognitive change has promise as a screening tool for preclinical AD but there is a need to develop questionnaires with greater sensitivity to subtle cognitive change associated with preclinical AD.
Early detection of mild cognitive impairment (MCI) and dementia is crucial for initiation of treatment and access to appropriate care. While comprehensive neuropsychological assessment is often an intrinsic part of the diagnostic process, access to services may be limited and cannot be utilized effectively on a large scale. For these reasons, cognitive screening instruments are used as brief and cost-effective methods to identify individuals who require further evaluation. Novel technologies and automated software systems to screen for cognitive changes in older individuals are evolving as new avenues for early detection. The present study presents preliminary data on a new technology that uses automated linguistic analysis software to screen for MCI and dementia.
Participants and Methods:
Data were collected from 148 Spanish-speaking individuals recruited in Spain (MAge=74.4, MEducation=12.93, 56.7% female), of whom 78 were diagnosed as cognitively normal [CN; MMMSE = 28.51 (1.39)], 49 as MCI [MMMSE = 25.65 (2.94)], and 21 as all-cause dementia [MMMSE = 22.52 (2.06)]. Participants were recorded performing various verbal tasks [animal fluency, phonemic (F) fluency, Cookie Theft description, and the CERAD list-learning task]. Recordings were processed via text transcription and sound-signal processing techniques to capture neuropsychological variables and audio characteristics. Features from each task were used in the development of an algorithm (for that task) to compute a score between 0 and 1 (healthy to more impaired), and a fifth algorithm was constructed using audio characteristics from all tasks. These five classifiers were combined algorithmically to provide the final algorithm. Receiver operating characteristic (ROC) analysis was conducted to determine sensitivity and specificity of predicted algorithm performance [CN vs. impaired (MCI or dementia)] against clinical diagnoses, and additional general linear modeling was used to test whether age, sex, education, and multilingualism significantly predicted logistically transformed weighted algorithm scores.
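The scoring flow described above, per-task scores in the 0-1 range combined into a final score and then examined on the logit scale, could be sketched as follows. The simple average used to combine the five classifiers is only a stand-in, since the abstract does not state how they were actually combined, and the task scores shown are illustrative.

```python
# Sketch of the scoring flow described above: per-task 0-1 impairment scores, a combined
# final score, and a logit transform for group comparisons. The averaging step is an
# assumption standing in for the unspecified combination rule; values are illustrative.
import math

def logit(p: float, eps: float = 1e-6) -> float:
    """Log-odds transform of a probability, clipped away from 0 and 1."""
    p = min(max(p, eps), 1 - eps)
    return math.log(p / (1 - p))

task_scores = {                       # hypothetical per-task classifier outputs (0 = healthy)
    "animal_fluency": 0.82,
    "phonemic_fluency": 0.74,
    "cookie_theft": 0.91,
    "cerad_list_learning": 0.88,
    "audio_features_all_tasks": 0.79,
}

final_score = sum(task_scores.values()) / len(task_scores)
print("final score:", round(final_score, 3), "logit:", round(logit(final_score), 3))
```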
Results:
Scores were transformed to logit scores, with significant differences in mean logit scores between all groups (p < .001). Logit-inverse transformation of mean logit scores (possible range 0-1) resulted in values of 0.06 for CN, 0.90 for MCI, and 0.99 for all-cause dementia groups. ROC curve analyses revealed the algorithm obtained a total area under the curve of 0.92, with an overall accuracy of 86.8%, a sensitivity of 0.92, and specificity of 0.82. Age was identified as a significant predictor (beta = 0.22; p < 0.01) of algorithm output, whereas years of education (beta = -0.04; p = 0.64), sex (beta = 0.38; p = 0.02, did not survive correction for type-1 error), and multilingualism (beta = -0.24; p = 0.22) were non-significant.
Conclusions:
These findings provide initial support for the utility of an automated speech analysis algorithm to detect cognitive impairment quickly and efficiently in a Spanish-speaking population. Although sociodemographic variables were not included in the algorithm, age significantly predicted algorithm output, and should be further explored to determine if age-adjusted formulas would improve algorithm accuracy for younger versus older individuals. Additional research is needed to validate this novel methodology in other languages, as this may represent a promising cross-cultural screening method for MCI and dementia detection.
When assessing individuals from diverse backgrounds, APA ethical principles emphasize the consideration of language and culture when selecting appropriate measures. Research among hearing, English-speaking individuals has shown how the identification of cognitive deficits is compromised when language, culture, and educational background are not considered in the selection and administration of measures (Ardilla, 2007). Among the Deaf community in the US, a minority group with a unique culture and language (American Sign Language: ASL), there have been few attempts to adapt existing English cognitive measures. Complicating factors include limited research resources, given the small number of neuropsychologists and researchers who understand both the complexities of the measures and the linguistic and cultural factors within the Deaf population. The goal of the current project is to develop a culturally informed interpretation of a cognitive screening tool for appropriate use with older Deaf adults.
Participants and Methods:
Item selection was informed by MMSE data from Dean et al. (2009) and methods utilized by Atkinson et al. (2015). Item selection occurred through consultation with three neuropsychologists and graduate peers with either native signing abilities or demonstrated ASL fluency, as well as Deaf identities, cultural affiliation, and/or community engagement. Selection considered the potential for translation errors, particularly related to equivalence of translation from a spoken modality to a signed one. Items were categorized into the following domains: Orientation, Attention, Memory, Language, Executive Functioning, Visuospatial, and Performance Validity. Two native signers (Deaf interpreters) provided formal translation of the items. The measure was piloted with 20 deaf and hard of hearing (DHH) adult signers (age M=41.10, SD=5.50, range=31-48). Items were prerecorded to standardize administration and were shown to participants through the screen-share function of Zoom software.
Results:
The average score was 100.80 (SD=3.91) out of 105 possible points. Within the memory domain, some errors were noted, especially for word selection on delayed recall, which may be related to sign choice and dialect. Additionally, on culture-specific episodic memory items, 35% of participants were unable to provide a correct answer, with qualitative responses indicating this information may be more familiar to the subset of the Deaf community that had attended Gallaudet University in Washington, D.C. There was a significant positive relationship between ASL fluency, determined by the ASL-Comprehension Test, and performance on the cognitive screener (r(18)=.54, p=.01), while age of onset of deafness (r(18)=-.16, p=.51) and age of ASL acquisition (r(18)=.21, p=.37) were not significant.
Conclusions:
Results of this preliminary project yielded a measure that benefited from the inclusion of content experts in the field during the process of interpretation and translation. The pattern of correlations suggests the measure is appropriate for Deaf signers who are proficient in ASL, regardless of their age of ASL acquisition or age of onset of deafness. Further development of the measure should focus on items that address the diversity of the Deaf experience, as well as continued exploration of inclusive translation approaches.
Mayo Test Drive (MTD): Test Development through Rapid Iteration, Validation and Expansion is a web-based, multi-device (smartphone, tablet, personal computer) platform optimized for remote, self-administered cognitive assessment that includes a computer-adaptive word-list memory test (Stricker Learning Span; SLS; Stricker et al., 2022; Stricker et al., in press) and a measure of processing speed (Symbols Test; Wilks et al., 2021). Study aims were to determine the criterion validity of MTD by comparing the ability of the MTD raw composite and in-person administered cognitive measures to differentiate biomarker-defined groups in cognitively unimpaired (CU) individuals on the Alzheimer’s continuum.
Participants and Methods:
Mayo Clinic Study of Aging CU participants (N=319; mean age=71, SD=11, range=37-94; mean education=16, SD=2, range=6-20; 47% female) completed a brief remote cognitive assessment (∼0.5 months from in-person visit). Brain amyloid and brain tau PET scans were available within 3 years. Overlapping groups were formed for 1) those on the Alzheimer’s disease (AD) continuum (A+, n=110) or not (A-, n=209), and 2) those with biological AD (A+T+, n=43) or with no evidence of AD pathology (A-T-, n=181). Primary outcome variables were the MTD raw composite (SLS sum of trials + an accuracy-weighted Symbols response time measure), a Global-z (average of 9 in-person neuropsychological measures), and an in-person screening measure (Kokmen Short Test of Mental Status, STMS, which is similar to the MMSE). Linear model ANOVAs were used to investigate biomarker subgroup differences, and Hedges’ g effect sizes were derived, with and without adjusting for demographic variables (age, education, sex).
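The Hedges' g effect size used to compare biomarker groups could be sketched as below. The simulated composite scores and group sizes are placeholders; the sketch does not reproduce the MTD composite's accuracy-weighting, which is not detailed here.

```python
# Sketch of the group comparison described above: Hedges' g between a biomarker-positive
# and a biomarker-negative group on a composite score. Data and group labels are
# simulated placeholders, not MTD or Global-z values.
import numpy as np

def hedges_g(x: np.ndarray, y: np.ndarray) -> float:
    """Hedges' g: Cohen's d with the small-sample bias correction."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2))
    d = (x.mean() - y.mean()) / pooled_sd
    correction = 1 - 3 / (4 * (nx + ny) - 9)
    return d * correction

rng = np.random.default_rng(4)
composite_a_pos = rng.normal(-0.4, 1.0, 110)   # hypothetical A+ group composite scores
composite_a_neg = rng.normal(0.0, 1.0, 209)    # hypothetical A- group composite scores
print("Hedges' g:", round(hedges_g(composite_a_pos, composite_a_neg), 2))
```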
Results:
Remotely administered MTD raw composite showed comparable to slightly larger effect sizes compared to Global-z. Unadjusted effect sizes for MTD raw composite for differentiating A+ vs. A- and A+T+ vs. A-T- groups, respectively, were -0.57 and -0.84 and effect sizes for Global-z were -0.54 and -0.73 (all p’s<.05). Because biomarker positive groups were significantly older than biomarker negative groups, group differences were attenuated after adjusting for demographic variables, but MTD raw composite remained significant for A+T+ vs A-T- (adjusted effect size -0.35, p=.007); Global-z did not reach significance for A+T+ vs A-T- (adjusted effect size -0.19, p=.08). Neither composite reached significance for adjusted analyses for the A+ vs A- comparison (MTD raw composite adjusted effect size= -.22, p=.06; Global-z adjusted effect size= -.08, p=.47). Results were the same for an alternative MTD composite using traditional z-score averaging methods, but the raw score method is preferred for comparability to other screening measures. The STMS screening measure did not differentiate biomarker groups in any analyses (unadjusted and adjusted p’s>.05; d’s -0.23 to 0.05).
Conclusions:
Remotely administered MTD raw composite shows at least similar ability to separate biomarker-defined groups in CU individuals as a Global-z for person-administered measures within a neuropsychological battery, providing evidence of criterion validity. Both the MTD raw composite and Global-z showed greater ability to separate biomarker positive from negative CU groups compared to a typical screening measure (STMS) that was unable to differentiate these groups. MTD may be useful as a screening measure to aid early detection of Alzheimer’s pathological changes.
In the context of primary care, cognitive screenings are brief, non-diagnostic tests that clinicians can administer in order to provide appropriate referrals to neuropsychologists. Annual cognitive screening for adults over age 65 (“older adults”) can help monitor cognitive functioning over time and ensure more patients with cognitive impairments receive neuropsychological assessment and care earlier. Unfortunately, time constraints and lack of training present major barriers to cognitive screening in primary care, and less than half of cognitive impairment cases are identified in these settings. A remote cognitive screening mobile app has the potential to save primary care clinics time, particularly for the majority of older adults who are cognitively healthy. Moreover, a screening app well-validated for remote clinical use can replace the inadequate or nonexistent screening practices currently employed by many primary care clinics. In order to achieve their potential, remote smartphone-enabled cognitive screening paradigms must be acceptable and feasible for both patients and clinical end users. With this goal in mind, we describe the collaborative, human-centered design process and proposed implementation of MyCog Mobile (MCM), a self-administered cognitive screening app based on well-validated NIH Toolbox measures.
Participants and Methods:
We conducted foundational interviews with primary care clinicians (N=5) and clinic administrators (N=3) and created user journey maps of their existing and proposed cognitive screening workflows. We then conducted individual semi-structured interviews with healthy older adults (N=5) and participated in a community stakeholder panel of older adults and caregivers (N=11). Based on the data collected, we developed high-fidelity prototypes of the MCM app, which we iteratively tested and refined with the older adult interview participants. Older adults rated the usability of the prototypes on the Simplified System Usability Scale (S-SUS) and After Scenario Questionnaire (ASQ).
Results:
Clinicians and administrators were eager to use a well-validated remote screening app if it saved them time in their workflows and was fully integrated into their EHR. Clinicians prioritized easily interpretable score reports tied to automated best practice guidelines. Findings from interviews and user journey mapping further informed the details of the proposed implementation and core functionality of MCM. Older adult participants were motivated to complete a remote cognitive screener to ensure they were cognitively healthy, to save time during their in-person visit, and for privacy and comfort reasons. Older adults also identified several challenges to remote smartphone screening, which informed the user experience design of the MCM app. The average rating across prototype versions was 91 (SD 5.18) on the S-SUS and 6.13 (SD 8.40) on the ASQ, indicating above-average usability.
Conclusions:
Through our iterative, human-centered design process, we were able to develop a viable remote cognitive screening app and a proposed implementation for primary care settings, optimized for multiple stakeholders. Next steps include validating MCM in clinical and healthy populations, collaboratively developing best practice alerts for primary care EHRs with neuropsychologists, and piloting the finalized app in a community clinic. We hope the finalized MCM app will promote broader screening practices within primary care and improve early assessment and diagnosis of cognitive impairment for older adults.
A systematic approach is vital for adapting neuropsychological tests developed and validated in Western, monocultural, educated, English-speaking populations. However, rigorous and uniform methods are often not implemented when adapting neuropsychological tests and cognitive screening tools across different languages and cultures, and this has serious clinical implications. Our group has adapted the Addenbrooke’s Cognitive Examination (ACE) III for the Bengali-speaking population in India. We have taken a 'culture-specific’ approach to adaptation and illustrate this by describing the process of adapting the ACE III naming sub-test, with a focus on selecting culturally appropriate and psychometrically reliable items.
Participants and Methods:
Two studies were conducted in seven phases to adapt the ACE III naming test. Twenty-three items from the naming test in the English and the different Indian ACE-R versions were administered to healthy, literate, Bengali-speaking adults to determine image agreement, naming, and familiarity of the items. Eleven items were identified as outliers. We then included 16 culturally appropriate items that were semantically similar to items in the selected ACE-R versions, of which 3 were identified as outliers. The final corpus of 24 items was administered to 30 patients with mild cognitive impairment, Alzheimer’s disease, or vascular dementia and to 60 healthy controls matched for age and education, to determine which items in the corpus best discriminated patients from controls and to examine their difficulty levels.
Results:
The ACE III Bengali naming test, with an internal consistency of .76, included 12 psychometrically reliable, culturally relevant living and non-living items spanning high naming-high familiarity and high naming-low familiarity categories. Item difficulty ranged from .47 to .88, and all items had discrimination indices >.44.
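Classical item-analysis statistics like those reported above (item difficulty as the proportion of respondents naming an item correctly, and a discrimination index contrasting high and low total scorers) could be computed as sketched below. The abstract does not specify which discrimination index was used, and the response matrix is simulated rather than study data.

```python
# Sketch of classical item analysis: item difficulty as the proportion correct per item,
# and an upper-lower discrimination index contrasting the top and bottom ~27% of total
# scorers. The discrimination index choice and the data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
responses = rng.binomial(1, 0.7, size=(90, 24))   # 90 respondents x 24 candidate items

difficulty = responses.mean(axis=0)                # proportion correct per item

totals = responses.sum(axis=1)
upper = responses[totals >= np.quantile(totals, 0.73)]   # top ~27% of scorers
lower = responses[totals <= np.quantile(totals, 0.27)]   # bottom ~27% of scorers
discrimination = upper.mean(axis=0) - lower.mean(axis=0)

print("difficulty range:", round(difficulty.min(), 2), "-", round(difficulty.max(), 2))
print("min discrimination index:", round(discrimination.min(), 2))
```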
Conclusions:
A key question for test development/adaptation is whether to aim for culture-broad or culture-specific tests. Either way, a systematic approach to test adaptation will increase the likelihood that a test is appropriate for the linguistic/cultural context in which it is intended to be used. Adapting neuropsychological tests using a familiarity-driven approach helps reduce cultural bias at the content level. This, coupled with appropriate item-selection statistics, helps improve the validity of adapted tests and ensure cross-cultural comparability of test scores both across and within nations.