A variety of standardised and validated tools and measures can provide a wealth of information to further understand a patient’s complaints, to evaluate the impact of insomnia on their life, and to screen for other disorders. This chapter provides a detailed overview of measures that a clinician can incorporate into their assessment to confirm the presence of insomnia, aid in differential and/or comorbid diagnosis, and understand broader impacts of their sleep complaint. The chapter then proceeds to describe the value of the sleep diary in an initial assessment. Finally, it encourages the reader to consider outcomes that are measurable, clinically meaningful, and matter to the patient.
Posttraumatic stress disorder (PTSD) is one of the most serious and incapacitating mental disorders that can result from trauma exposure. Its exact prevalence is not known, as the literature reports very different results, ranging from 2.5% to 74%. The aim of this umbrella review is to provide an estimate of PTSD prevalence and to clarify whether the prevalence depends on the assessment method applied (structured interview v. self-report questionnaire) and on the nature of the traumatic event (interpersonal v. non-interpersonal). A systematic search of major databases and additional sources (Google Scholar, EBSCO, Web of Science, PubMed, Galileo Discovery) was conducted. Fifty-nine reviews met the criteria of this umbrella review. Overall PTSD prevalence was 23.95% (95% confidence interval [CI] 20.74–27.15), with no publication bias or significant small-study effects, but a high level of heterogeneity between meta-analyses. Sensitivity analyses revealed that these results did not change after removing meta-analyses that also included data from underage participants (23.03%, 95% CI 18.58–27.48), nor after excluding meta-analyses of low quality (24.26%, 95% CI 20.46–28.06). Regarding the impact of diagnostic instruments on PTSD prevalence, the results revealed no significant difference in PTSD prevalence when structured v. self-report instruments were applied (p = 0.0835). Finally, PTSD prevalence did not differ following events of intentional (25.42%, 95% CI 19.76–31.09) or non-intentional (22.48%, 95% CI 17.22–27.73) nature (p = 0.4598). The present umbrella review establishes a robust foundation for future research and provides valuable insights on PTSD prevalence.
While previous studies have reported high rates of documented suicide attempts (SAs) in the U.S. Army, the extent to which soldiers make SAs that are not identified in the healthcare system is unknown. Understanding undetected suicidal behavior is important in broadening prevention and intervention efforts.
Methods
Representative survey of U.S. Regular Army enlisted soldiers (n = 24 475). Reported SAs during service were compared with SAs documented in administrative medical records. Logistic regression analyses examined sociodemographic characteristics differentiating soldiers with an undetected SA v. documented SA. Among those with an undetected SA, chi-square tests examined characteristics associated with receiving a mental health diagnosis (MH-Dx) prior to SA. Discrete-time survival analysis estimated risk of undetected SA by time in service.
Results
Prevalence of undetected SA (unweighted n = 259) was 1.3%. Annual incidence was 255.6 per 100 000 soldiers, suggesting one in three SAs is undetected. In multivariable analysis, rank ⩾E5 (OR = 3.1 [95% CI 1.6–5.7]) was associated with increased odds of undetected v. documented SA. Females were more likely to have a MH-Dx prior to their undetected SA (Rao-Scott χ²(1) = 6.1, p = .01). Over one-fifth of undetected SAs resulted in at least moderate injury. Risk of undetected SA was greater during the first four years of service.
Conclusions
Findings suggest that substantially more soldiers make SAs than indicated by estimates based on documented attempts. A sizable minority of undetected SAs result in significant injury. Soldiers reporting an undetected SA tend to be higher ranking than those with documented SAs. Identifying undetected SAs will require additional approaches to finding individuals at risk.
Language proficiency is a critically important factor in research on bilingualism, but researchers disagree on its measurement. Validated objective measures exist, but investigators often rely exclusively on subjective measures. We investigated if combining multiple self-report measures improves prediction of objective naming test scores in 36 English-dominant versus 32 Spanish-dominant older bilinguals (Experiment 1), and in 41 older Spanish–English bilinguals versus 41 proficiency-matched young bilinguals (Experiment 2). Self-rated proficiency was a powerful but sometimes inaccurate predictor and better predicted naming accuracy when combined with years of immersion, while percent use explained little or no unique variance. Spanish-dominant bilinguals rated themselves more strictly than English-dominant bilinguals at the same objectively measured proficiency level. Immersion affected young more than older bilinguals, and non-immersed (English-dominant) more than immersed (Spanish-dominant) bilinguals. Self-reported proficiency ratings can produce spurious results, but predictive power improves when combined with self-report questions that might be less affected by subjective judgements.
This online study investigates how first (L1) and foreign language (LX) users, and naïve (L0) listeners of Mandarin perceive the valence and arousal level of a Chinese interlocutor in various communication modalities. The 1485 participants (651 L1, 292 LX, and 542 L0 Mandarin users) were presented with 12 recordings of a Chinese actor conveying emotional events in the visual-vocal-verbal, vocal-verbal, visual-only, or vocal-only modality. Valence and arousal perceptions were collected via the 2DAFS (Lorette, 2021). Disregarding the vocal-only modality, which led to neutral perceptions, bootstrapped regression models suggest that modality does not affect L1 users’ valence perceptions. LX and L0 users perceive markedly more neutral valence levels in the absence of visual cues, and in the case of positive stimuli, slightly lower arousal levels. This calls for a more nuanced conceptualisation of valence and arousal as universal features of emotions and stresses the significance of modality for intercultural communication.
COVID-19 caused a worldwide restructuring of daily life, necessitating a sharp increase in social distancing, telecommunication, and adherence to rapidly changing public health recommendations. Coping, a response to stressors, is a protective mechanism that increases resiliency against uncertainty and decreased social connectedness. This disruption of daily life has prompted unanticipated and unique research opportunities and allowed researchers to consider whether individuals' primary style of coping with the pandemic is associated with cognition. Previous research has found that problem coping, or action-oriented approaches to a stressor, is the most adaptive coping strategy. Emotion-based coping, like venting or humor, varies and depends on the stressor. Avoidant coping, like denial or ignoring the stressor, is generally considered maladaptive (Carver, 1977) and may lead to increased psychosocial disturbance. Executive functioning, responsible for planning, organizing, inhibition, and self-management, is theorized to be most impacted by the social and psychological effects of COVID-19 (Pollizi et al., 2021). While some research has examined this question in working-parent and older adult populations, we seek to understand this relationship in emerging adults, whose frontal lobes, responsible for executive functioning, are still developing. The present study seeks to examine the association between coping with the COVID-19 pandemic and executive functioning.
Participants and Methods:
College students (N = 440; M = 19.30 years old, SD = 1.42, 76% female) across seven US universities completed self-report questionnaires on SONA, which included Barkley's Deficits in Executive Functioning, Short Form (BDEFS-SF; Barkley, 2011) and the Brief Coping Orientation to Problems Experienced Inventory adapted for coping with the COVID-19 pandemic (Brief COPE; Carver, 1989). Items on the BDEFS-SF were summed to create a global executive functioning score. Items on the Brief COPE were combined to create three factors: emotional, avoidant, and problem-focused (Dias et al., 2011).
Results:
Stepwise linear regression was used to assess whether coping style predicted executive functioning. Results indicate that the use of emotional coping (β = 0.19, p < .001) and avoidant coping (β = 0.33, p < .001) predicted higher scores on the BDEFS (greater deficits in executive functioning). Additionally, the use of problem coping (β = -0.27, p < .001) predicted lower BDEFS scores (better executive functioning), with the overall model explaining 16.37% of the variance.
Conclusions:
Results from this study confirm that COVID-19 coping styles are associated with executive functioning. Specifically, emotional coping and avoidant coping predicted decreased executive functioning, a pattern also supported in non-pandemic samples. The use of problem-focused coping predicted increased executive functioning, indicating that this may be a protective form of coping with the pandemic. Because tasks necessary for daily life, such as planning, organizing, and judgment, rely on executive functioning, maladaptive coping with COVID-19 may impede college students' daily functioning necessary for successful engagement in schoolwork, emotion regulation, and activities of daily living. This research begins to address the gap in knowledge regarding the relationship between coping with the COVID-19 pandemic and executive functioning. This knowledge can be used in future crises to promote the use of problem-focused coping and mitigate the self-observed deficits in executive functioning demonstrated in this population.
Consideration of individual differences in recovery after concussion has become a focus of concussion research. Sex and racial/ethnic identity as they may affect reporting of concussion symptoms have been studied at single time points but not over time. Our objective was to investigate the factors of self-defined sex and race/ethnicity in reporting of lingering concussion symptoms in a large sample of adolescents.
Participants and Methods:
Concussed, symptomatic adolescents (n = 849; Female = 464, Male = 385) aged 13-18 years were evaluated within 30 days of injury at a North Texas Concussion Registry (ConTex) clinic. Participants were grouped by self-defined race/ethnicity into three groups: Non-Hispanic Caucasian (n = 570), Hispanic Caucasian (n = 157), and African American (n = 122). Measures collected at the initial visit included medical history, injury-related information, and the Sport Concussion Assessment Tool-5 Symptom Evaluation (SCAT-5SE). At a three-month follow-up, participants completed the SCAT-5SE. Pearson’s chi-square analyses examined differences in categorical measures of demographics, medical history, and injury characteristics. Prior to analysis, statistical assumptions were examined, and log base 10 transformations were performed to address issues of unequal group variances and nonnormal distributions. A three-way repeated measures ANOVA (Sex × Race/Ethnicity × Time) was conducted to examine total severity scores on the SCAT-5SE. Bonferroni post-hoc tests were performed to determine specific group differences. SPSS V28 was used for analysis with p < 0.05 for significance. Data reported below have been back-transformed.
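The log-transform-then-back-transform step described above can be sketched as follows. The scores here are hypothetical, not ConTex data, and the +1 offset is an assumed convention for handling zero symptom scores; back-transforming a mean computed on the log scale yields a geometric-mean-like summary rather than the arithmetic mean.

```python
import numpy as np

# Hypothetical symptom-severity scores (not study data); the +1 offset
# guards against log10(0) on a zero-inflated severity scale.
scores = np.array([0.0, 2.0, 5.0, 11.0, 23.0])
log_scores = np.log10(scores + 1)

# Analyses (e.g., the ANOVA) run on the log scale; summary statistics
# are then back-transformed for reporting.
mean_log = log_scores.mean()
back_transformed_mean = 10 ** mean_log - 1
```

Note that the back-transformed mean sits below the raw arithmetic mean, which is expected for right-skewed data and is why the reported group means should be read as geometric-style summaries.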
Results:
A significant Time by Race/Ethnicity interaction was found for SCAT-5SE scores reported at the initial visit and three-month follow-up (F(2, 843) = 7.362, p < 0.001). At the initial visit, the race/ethnicity groups reported similar levels of concussion symptom severity. At three-month follow-up, African Americans reported the highest level of severity of lingering symptoms (M = 3.925, 95% CI [2.938–5.158]), followed by Hispanic Caucasians (M = 2.978, 95% CI [2.266–3.845]) and Non-Hispanic Caucasians, who were lowest (M = 1.915, 95% CI [1.626–2.237]). There were significant main effects for Time, Sex, and Race/Ethnicity. Average symptom levels were higher at the initial visit compared to three-month follow-up (F(1, 843) = 1531.526, p < 0.001). Females had higher average symptom levels compared to males (F(1, 843) = 35.58, p < 0.001). For Race/Ethnicity (F(2, 843) = 9.236, p < 0.001), Non-Hispanic Caucasians differed significantly from African Americans (p < 0.001) and Hispanic Caucasians (p = 0.021) in reported levels of concussion symptom severity.
Conclusions:
Data from a large sample of concussed adolescents supported a higher level of reported symptoms by females, but there were no significant differences in symptom reporting between sexes across racial/ethnic groups. Overall, at three months, African American and Hispanic Caucasian participants reported a higher level of lingering symptoms than Non-Hispanic Caucasians. To improve care, the differences between specific racial/ethnic groups during recovery merit exploration of the factors that may influence symptom reporting.
Eliciting perceived cognitive complaints is a routine part of a clinical neuropsychological evaluation, presumably because complaints are informative of underlying pathology. However, there is no strong empirical support that subjective cognitive impairment (SCI) is actually related to objective cognitive impairment as measured by neurocognitive tests. Instead, internalizing psychopathology is thought to predominately influence the endorsement of SCI. Specifically, individuals with greater symptoms of depression and anxiety, when accounting for comorbidities, have a higher disposition to overestimate their degree of cognitive impairment as compared to objective testing. Yet, there are few existing studies that have determined which factors influence both SCI and the discrepancy between subjective and objective cognitive impairment in general outpatient populations. The current study examined the relationship between subjective and objective cognitive impairment in a clinically diverse sample of outpatients. We additionally explored the associations between SCI and relevant intrapersonal factors including internalizing psychopathology, number of medical comorbidities, and demographics. Finally, we quantified the degree of discrepancy between subjective and objective impairment and examined this discrepancy in relation to the intrapersonal factors.
Participants and Methods:
The sample comprised 142 adult women and men (age range 18–79 years) seen in an outpatient neuropsychology clinic for a diverse range of referral questions. Scores on the cognition portion of the WHO Disability Assessment Schedule (WHODAS 2.0) were used to index SCI. A composite score from 14 measures across various domains of cognitive functioning served as an objective measure of cognitive functioning. Internalizing psychopathology was measured via a standardized composite of scores from screening measures of anxiety and depression. Medical comorbidities were indexed by the number of different ICD diagnostic categories documented in patients' medical records. Demographics included age, sex, race, and years of formal education. Objective-subjective discrepancy scores were computed by saving standardized residuals from a linear regression of neurocognitive test performance on the WHODAS 2.0 scores.
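The discrepancy-score computation described above, saving standardized residuals from a regression of objective performance on subjective complaint scores, can be sketched with hypothetical values. The variable names and data below are illustrative assumptions, not the study's:

```python
import numpy as np

# Illustrative data (not the study's): subjective complaint scores and
# an objective cognitive composite for five hypothetical patients.
whodas = np.array([10.0, 25.0, 40.0, 55.0, 70.0])   # higher = more complaints
objective = np.array([0.5, 0.1, -0.2, 0.3, -0.6])   # cognitive composite (z)

# Ordinary least squares of objective performance on WHODAS 2.0 scores.
slope, intercept = np.polyfit(whodas, objective, 1)
residuals = objective - (intercept + slope * whodas)

# Standardize the residuals: each value is a patient's objective-subjective
# discrepancy, i.e., performance relative to what their complaint level
# would predict.
discrepancy = (residuals - residuals.mean()) / residuals.std(ddof=1)
```

Because the regression includes an intercept, the residuals are uncorrelated with the WHODAS scores by construction, so the discrepancy score isolates the portion of objective performance not explained by subjective complaints.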
Results:
A hierarchical linear regression revealed that objective cognitive impairment was not significantly related to SCI (p > .05), explaining less than 2% of the variance in SCI ratings. Likewise, participants' demographics (age, sex, education, race) and number of comorbidities were not significantly related to their SCI ratings, explaining about 6% of the variance. However, participants' level of internalizing psychopathology was significantly associated with SCI (F[10, 131] = 4.99, p < .001), and explained approximately 20% of the variance in SCI ratings. Similarly, the degree of discrepancy between subjective and objective cognitive impairment was primarily influenced by internalizing psychopathology (F[9, 132] = 5.20, p < .001, R2 = 21%) and largely unrelated to demographics and number of comorbidities, which explained about 6% of the variance.
Conclusions:
These findings are consistent with prior research suggesting that SCI may be more indicative of the extent of internalizing psychopathology rather than actual cognitive impairment. Taken together, these results illuminate potential treatment and diagnostic implications associated with assessing perceived cognitive complaints during a neuropsychological evaluation.
Diagnostic criteria for mild cognitive impairment (MCI) include a report of cognitive decline from the patient or a close informant. It is therefore important to understand the relationship between self- and informant-rated cognition and actual patient performance. Furthermore, it is unknown whether the nature of the relationship between the patient and their informant impacts accuracy of subjective reports. This study aimed to determine the association between informant report, self-report, and objective cognitive performance based on relationship factors. We predicted that informant report would be more closely associated with objective performance than self-report after controlling for demographics and mood (Geriatric Depression Scale; mean = 1.4, SD = 2), especially among those who live with the participant and those who are spouses/partners.
Participants and Methods:
Participants (n = 338; age = 73.5 ± 6.7) of varying diagnoses and their respective informants were drawn from the longitudinal cohort of the Michigan Alzheimer’s Disease Research Center (MADRC). The majority of informants were spouses/significant others (55.6%), followed by other family members (23.7%) and non-family members (20.7%); 58.9% of informants lived with the participant. Both respondents completed the Cognitive Change Index (CCI) to rate the patient’s cognitive status (higher scores indicating worsening cognition) across three domains: memory (12 questions), language (1 question), and attention/executive functioning (7 questions). These domains were matched to objective cognitive performance measured using the MADRC neuropsychological battery. Executive functioning and attention were assessed using Number Span Test Forward and Backward (NSF, NSB) and Trail Making Test Part B and the Trail Making Test Part A and B ratio (TMTB, TMTB:A); memory was measured using Craft Story 21 (Immediate and Delayed), Hopkins Verbal Learning Test-Revised (HVLT-R) Total Recall and Delayed Recall, and Benson Complex Figure (BCF) Delayed Recall; and language was measured by the Controlled Oral Word Association Test (COWAT) and Animal fluency.
Results:
Linear regression adjusted for sex, race, and mood indicated that both patient (βP) and informant (βI) CCI ratings were significantly (p < .05) associated with objective cognitive performance. For every one-unit increase on executive CCI items, there was a significant decline in executive functioning (NSF: βP = βI = -0.09; NSB: βP = -0.14, βI = -0.13; TMTB: βP = 3.85, βI = 3.10 [% change]). Memory performance also declined per unit increase on CCI memory items (Craft Story 21 Immediate: βP = -0.32, βI = -0.37; Delayed: βP = -0.40, βI = -0.47; HVLT-R Total Recall: βP = -0.31, βI = -0.37; Delayed Recall: βP = -0.16, βI = -0.20; BCF Delayed Recall: βP = -0.18, βI = -0.23). Similarly, a one-unit increase on the single CCI language item was associated with a decline in COWAT (βP = -2.27, βI = -4.61) and Animal fluency (βP = -1.88, βI = -3.03). Effect modification by participant-informant relationship type or participant-informant cohabitation was not significant.
Conclusions:
Patient and informant ratings are associated with objective measures of cognition regardless of the relationship between informant and patient or if they live together. This study was limited by a well-educated sample (mean= 16.1 years of education, SD= 2.4 years) with relatively limited diversity among participant-informant relationships. Future studies should replicate analyses across a larger and more diverse sample.
Mild traumatic brain injury (mTBI) is an important public health problem, due to its high incidence and the failure of at least 20% of patients to successfully recover from injury. Cognitive symptoms, in particular, are an important area of research in mTBI, due to their association with return to work and referral to neuropsychological services. Understanding the predictors of cognitive symptoms may help to improve outcomes after mTBI. This study explored female sex, psychological distress, coping style and illness perceptions as potential predictors of cognitive symptoms following adult civilian mTBI.
Participants and Methods:
Sixty-nine premorbidly healthy adults with mTBI (mean age = 36.7, SD = 14.7, range = 18-60; 15 females) were recruited from trauma wards at two public hospitals in Australia and assessed 6-12 weeks following injury. Cognitive complaint was measured using a comprehensive 30-item scale (CCAMCHI) assessing mTBI-specific symptoms in the domains of processing speed, attention, memory and executive function. Participants additionally completed the following measures: Brief-COPE, Illness Perceptions Questionnaire-Revised, Inventory of Depressive Symptomatology, Beck Anxiety Inventory, and PTSD Checklist for DSM-5. The latter three measures were combined to create an index of psychological distress.
Results:
Bivariate nonparametric correlational analyses indicated that female sex (r[67] = .26, 95% CI [.14, .55], p = .03) and psychological distress (r[66] = .54, 95% CI [.40, .72], p < .001) were each significantly associated with cognitive symptom reporting following mTBI. Additionally, while none of the three coping style factors were associated with cognitive symptom reporting, seven of the eight dimensions of illness perceptions were associated with symptom reporting (|r| = .25–.58, ps < .05). In a linear regression model assessing the combined effects of each variable, female sex, greater psychological distress, and overall negative illness perceptions were each significant independent predictors of increased cognitive complaint (adj. R² = .47, F[4, 63] = 15.59, p < .001).
Conclusions:
These findings implicate female sex, psychological distress, and illness perceptions as key factors associated with cognitive symptom reporting after mTBI. This research suggests that these factors may be useful in clinical practice when considering early identification of individuals at risk of poor recovery. Specifically, this research implicates females, individuals with high psychological distress, and individuals with negative illness perceptions as important subgroups to consider for potential intervention after mTBI. Additionally, as psychological distress and illness perceptions are both potentially modifiable, this research suggests that these factors may be useful targets for intervention.
Self-regulation is typically operationalized in neuropsychological assessment through self-report scales and measures of attention and executive functioning. However, there have been mixed findings on the relationships between self-report measures and physiological and performance-based measures believed to represent self-regulation. Poorer self-regulation is related to an array of negative behavioral and health-related outcomes. Therefore, it is critical to understand the process of self-regulation and the relationships between measures neuropsychologists use to assess it. The current study aims to investigate the relationships between four purported measures of self-regulation: resting-state high-frequency heart rate variability (HRV; a stable individual difference variable that reflects parasympathetic capacity for adapting to changing environmental demands), behavioral performance on the Delis-Kaplan Executive Function System (D-KEFS) and the Conners Continuous Performance Test - 3rd Edition (CPT-3), and trait self-control on the Brief Self-Control Scale (BSCS). It was hypothesized that physiological and behavioral self-regulation variables would predict the BSCS, such that higher resting HRV and better performance on the cognitive measures would predict higher self-reported self-control.
Participants and Methods:
Thirty-five healthy adults (Age M = 29.80, SD = 8.52, 45.7% female) recruited from the community completed the BSCS, CPT-3, and D-KEFS as part of a larger battery. Participants also completed a 10-minute eyes-open resting condition during electrocardiogram recording. High-frequency power (0.15–0.4 Hz) was extracted and used to operationalize resting HRV. Linear regression was used to test the predictive relationships between the BSCS total score, resting HRV, CPT-3 scores, and a residualized executive functioning score from the D-KEFS that controls for non-executive lower-order cognitive processes.
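The high-frequency power extraction can be illustrated with a synthetic example. The study's actual ECG processing pipeline is not described, so the 4 Hz resampling rate and the pure 0.25 Hz respiratory oscillation below are assumptions chosen for demonstration, not the study's parameters:

```python
import numpy as np

# Synthetic, evenly resampled RR-interval series at 4 Hz with a 0.25 Hz
# oscillation, mimicking the respiratory (parasympathetic) HF component.
fs = 4.0                                          # resampling rate (Hz)
t = np.arange(0, 300, 1 / fs)                     # 5-minute segment
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * t)    # RR intervals (s)

# One-sided periodogram of the mean-removed series.
x = rr - rr.mean()
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs * len(x))
psd[1:-1] *= 2                                    # fold negative frequencies

# Integrate power in the HF band (0.15-0.4 Hz).
band = (freqs >= 0.15) & (freqs <= 0.4)
hf_power = psd[band].sum() * (freqs[1] - freqs[0])
```

For a pure 0.25 Hz sinusoid of amplitude A, the recovered HF power equals the signal variance A²/2, which is a quick sanity check on the scaling of any periodogram-based HRV pipeline.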
Results:
Regression analyses indicated that neither the D-KEFS composite, the CPT-3 indices, nor resting HRV were related to the BSCS. Resting HRV predicted the CPT-3 Hit Reaction Time (HRT; B = -2.97, p < .05) and HRT Standard Deviation (HRT SD; B = -4.55, p < .05). Resting HRV was unrelated to the D-KEFS executive composite score. CPT-3 performance variables and D-KEFS composite score were also unrelated to one another.
Conclusions:
Results showed that the BSCS was unrelated to resting HRV, CPT-3, and D-KEFS performance. However, higher resting HRV was related to faster and more consistent responding on the CPT-3. These findings contradict previous research showing associations between the BSCS and performance on executive functioning measures. The relationship between resting HRV and reaction time on the CPT-3 is generally consistent with literature that suggests that higher resting HRV is associated with better cognitive performance. Although the association between resting HRV and executive functioning was not significant in this modest sample, it was comparable to that reported in a recent meta-analysis. Overall, despite limitations related to the small sample size, the results raise questions regarding the construct validity of common neuropsychological indices of self-regulation. Further research is needed to clarify the nature of the self-regulation construct and the relation of neuropsychological measures of behavioral self-regulation to physiological and self-report indices.
Engagement in activities that promote overall brain health and well-being is often a key step in reducing risks to cognitive health in older adults. Given that higher health literacy has been found to be associated with healthier lifestyles, it is unsurprising that it has been the focus of many studies and programs aimed at improving the health outcomes of older adults. An equally important factor to consider when it comes to such efforts is the role of moderating variables in the relationship between health literacy and engagement in healthy behaviors. The present study examined the moderating effect of self-efficacy, a variable that has been shown to be positively associated with both health literacy and health behaviors. We hypothesized that increased self-efficacy will strengthen the relationship between health literacy and healthy activity engagement in a sample of community-living older adults.
Participants and Methods:
Forty-nine older adults (age: M = 64.35, SD = 8.00; education: M = 16.39, SD = 2.37; 87.76% female) completed a health literacy measure (Newest Vital Sign; NVS), a self-efficacy questionnaire (General Self-Efficacy Scale; GSE), and a lifestyle behaviors questionnaire (Healthy Aging Activity Engagement Scale; HAAE). The NVS is a performance-based measure in which participants are asked to interpret the verbal and numerical information of a nutrition label to make health-related decisions. The GSE is a self-report measure that evaluates one’s belief in their ability to handle challenges, solve problems, and accomplish goals. The HAAE is a self-report measure that assesses one’s engagement in healthy activities across multiple health domains.
Results:
To examine whether self-efficacy moderates the relationship between health literacy and healthy activity engagement, a moderation analysis was conducted using Hayes’ PROCESS macro for SPSS with age and education included in the model as covariates. The results revealed no significant interaction between health literacy and self-efficacy, b = 0.23, p = .59, 95% CI [-0.60 to 1.05].
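The moderation test above amounts to testing an interaction term between mean-centered predictors in an ordinary regression, which is what PROCESS does under the hood. A minimal sketch with synthetic data follows; the variable names and injected effect sizes are illustrative assumptions, not study estimates:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 49  # matching the sample size reported above

# Synthetic standardized scores (not the study's data).
literacy = rng.normal(size=n)                     # NVS (health literacy)
efficacy = rng.normal(size=n)                     # GSE (self-efficacy)
# Outcome built with a known interaction of 0.5 plus small noise.
engagement = (0.4 * literacy + 0.3 * efficacy
              + 0.5 * literacy * efficacy
              + 0.1 * rng.normal(size=n))         # HAAE

# Moderation = the product of mean-centered predictors in OLS.
lit_c = literacy - literacy.mean()
eff_c = efficacy - efficacy.mean()
X = np.column_stack([np.ones(n), lit_c, eff_c, lit_c * eff_c])
coefs, *_ = np.linalg.lstsq(X, engagement, rcond=None)
b_interaction = coefs[3]   # the coefficient the moderation test examines
```

Centering the predictors changes the interpretation of the main-effect coefficients but leaves the interaction coefficient itself unchanged, so the recovered estimate should sit near the injected value of 0.5.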
Conclusions:
Contrary to expectations, in the present sample, the degree of self-efficacy was not a condition under which level of health literacy exerted its influence on healthy activity engagement in older adults. Future studies with larger and more nationally representative samples are needed to explore self-efficacy and other potential moderating factors in order to identify individual characteristics that support older adults’ adoption and engagement in health-promoting behaviors.
The behavioral assessment of executive functions has become increasingly common in clinical practice, with self-report measures now among the most frequently administered instruments for the construct. These subjective measurements serve as an alternative to objective tests of executive functions, which have been criticized for poor ecological validity. Many behavioral measures of executive functions are now available, but there are some issues with those currently in use, in that many are lengthy, proprietary, and/or do not measure executive functions that align with a theoretical framework of the multidimensional construct. This study aimed to examine the psychometric properties of a new short questionnaire of executive functions designed to be concise, theoretically based, and ultimately freely available for use in research and clinical practice.
Participants and Methods:
Participants included 575 college undergraduate students who completed an online questionnaire to earn credit in psychology courses. They were, on average, 18.9 years old (SD = 1.0, range: 18-22), 82.4% female, and 78.8% White. All participants completed 20 self-report items on a four-point ordinal scale measuring five theorized executive function constructs of Planning, Inhibition, Working Memory, Shifting, and Emotional Control. The 20 items were analyzed using confirmatory factor analysis, and factor reliabilities were estimated using omega. As a validity analysis, correlations of the total score with measures of subjective cognition and ADHD symptoms were compared to correlations of the total score with measures of anxiety and depression, hypothesizing stronger correlations of executive functions with cognition and ADHD than with negative affect.
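Coefficient omega, used above for factor reliability, is computed from the standardized CFA loadings as ω = (Σλ)² / ((Σλ)² + Σ(1 − λ²)). A minimal sketch with hypothetical loadings (not the questionnaire's actual estimates):

```python
# McDonald's omega from standardized factor loadings, assuming no
# correlated residuals. Loadings below are hypothetical examples.
loadings = [0.70, 0.60, 0.65, 0.55]            # standardized loadings
uniquenesses = [1 - l ** 2 for l in loadings]  # residual variances

sum_l = sum(loadings)
omega = sum_l ** 2 / (sum_l ** 2 + sum(uniquenesses))
```

Omega treats the common-factor variance as a share of total variance; unlike alpha, it does not assume equal loadings across items, which matters for short scales like the one described here.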
Results:
The initial 20-item model did not fit well, χ2 = 1560.10, df = 160, p < .0001, CFI = 0.822, TLI = 0.788, RMSEA = 0.130 (90% CI: 0.124-0.136). The polychoric inter-item correlations were examined for high cross-factor correlations and low intra-factor correlations. This process resulted in the removal of one item from each factor. The modified model, inclusive of 15 items, presented with adequate fit to the data, χ2 = 470.56, df = 80, p < .0001, CFI = 0.936, TLI = 0.916, RMSEA = 0.097 (90% CI: 0.089-0.106). The total score had good reliability (ω = .82), whereas estimates for each factor ranged from .56 to .79. The total score showed a stronger correlation with ADHD symptoms (r = -.59) and subjective cognition (r = .59) than with depression (r = .46, z = 4.05, p < .001) and anxiety symptoms (r = .38, z = 6.29, p < .001).
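For reference, the omega reliability estimates reported here can be sketched from a factor's standardized loadings; the loadings below are hypothetical illustrations, not values from this study.

```python
# McDonald's omega: reliability of a factor computed from its standardized
# loadings, omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of
# unique variances). Loadings below are made up for illustration.

def mcdonalds_omega(loadings):
    common = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)  # unique variance per item
    return common / (common + error)

loadings = [0.7, 0.6, 0.65]  # hypothetical standardized loadings for one factor
print(round(mcdonalds_omega(loadings), 3))
```

Higher and more uniform loadings push omega toward 1; weak loadings inflate the unique-variance term and pull it down, which is consistent with the lower reliabilities observed for the short factors here.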
Conclusions:
These preliminary findings provided modest psychometric support for this short 15-item self-report questionnaire of executive functions. The questionnaire had acceptable fit and evidence for validity, in that the total executive function score had a stronger correlation with subjective cognitive complaints and ADHD symptoms than negative affect. The reliability of some individual factors fell below conventional cutoffs for acceptable reliability, indicating a need for further refinement of this new questionnaire.
Individuals tend to overestimate their abilities in areas where they are less competent, a cognitive bias known as the Dunning-Kruger effect. Research shows that the Dunning-Kruger effect occurs both in persons with traumatic brain injury and in healthy comparison participants. Walker and colleagues (2017) suggested that these deficits in cognitive awareness may be due to brain injury. Confrontational naming tasks (e.g., the Boston Naming Test) are used to evaluate language abilities. The Cordoba Naming Test (CNT) is a 30-item confrontational naming task developed to be administered in multiple languages. Hardy and Wright (2018) conditionally validated a measure of perceived mental workload called the NASA Task Load Index (NASA-TLX), finding that workload ratings on the NASA-TLX increased with increased task demands on a cognitive task. The purpose of the present study was to determine whether the Dunning-Kruger effect occurs in a Latinx population and to identify possible factors driving individuals to overestimate their abilities on the CNT. We predicted the low-performance group would report better CNT performance but underperform on the CNT compared to the high-performance group.
Participants and Methods:
The sample consisted of 129 Latinx participants with a mean age of 21.07 (SD = 4.57). Participants were neurologically and psychologically healthy. Participants completed the CNT and the NASA-TLX in English. The NASA-TLX examines perceived workload (e.g., performance), and it was used in the present study to evaluate possible factors driving individuals to overestimate their abilities on the CNT. Participants completed the NASA-TLX after completing the CNT. The CNT raw scores were used to divide the sample into two groups: low-performance (CNT raw score <17) and high-performance (CNT raw score 18+). A series of ANCOVAs, controlling for gender and years of education completed, was used to evaluate CNT performance and CNT perceived workloads.
Results:
We found that the low-performance group reported better performance on the CNT compared to the high-performance group, p = .021, ηp2 = .04. However, the high-performance group outperformed the low-performance group on the CNT, p < .001, ηp2 = .53. Additionally, results revealed that the low-performance group reported higher temporal demand and effort levels on the CNT compared to the high-performance group, ps < .05, ηp2s = .05.
Conclusions:
As we predicted, the low-performance group overestimated their CNT performance compared to the high-performance group. The current data suggest that the Dunning-Kruger effect occurs in healthy Latinx participants. We also found that temporal demand and effort may be influencing awareness of CNT performance in the low-performance group compared to the high-performance group. The present study thus points to subjective features that may influence confrontational naming task performance more in low-performance than in high-performance individuals on the CNT. Current literature shows that bilingual speakers underperform on confrontational naming tasks compared to monolingual speakers. Future studies should investigate whether the Dunning-Kruger effect occurs in Latinx English monolingual speakers compared to Spanish-English bilingual speakers on the CNT.
Illness perception, or the ways in which individuals understand and cope with injury, has been extensively studied in the broader medical literature and has been found to have important associations with clinical outcomes across a wide range of medical conditions. However, there is a dearth of knowledge regarding how perceptions of traumatic brain injury (TBI) influence outcome and recovery following injury, especially in military populations. The purpose of this study was to examine relationships between illness perception, as measured via symptom attribution, and neurobehavioral and neurocognitive outcomes in Veterans with TBI history.
Participants and Methods:
This cross-sectional study included 44 treatment-seeking Veterans (86.4% male, 65.9% white) with remote history of TBI (75.0% mild TBI). All Veterans were referred to the TBI Cognitive Rehabilitation Clinic at VA San Diego and completed a clinical interview, self-report questionnaires, and a neuropsychological assessment. A modified version of the Neurobehavioral Symptom Inventory (NSI) was administered to assess neurobehavioral symptom endorsement and symptom attribution. Symptom attribution was assessed by having participants rate whether they believe each NSI item was caused by TBI. A total symptom attribution score was computed, as well as the standard NSI total and symptom cluster scores (i.e., vestibular, somatic, cognitive, and affective symptom domains). Three cognitive composite scores (representing mean performance) were also computed, including memory, attention/processing speed, and executive functioning. Participants were excluded if they did not complete the NSI attribution questions or they failed performance validity testing.
Results:
Results showed that the symptoms most frequently attributed to TBI included forgetfulness (82%), poor concentration (80%), and slowed thinking (77%). There was a significant positive association between symptom attribution and the NSI total score (r = 0.62, p < .001), meaning that greater attribution of symptoms to TBI was significantly associated with greater symptom endorsement overall.
Symptom attribution was also significantly associated with all four NSI symptom domains (rs = 0.47-0.66; all ps < .001), with the strongest relationship emerging between symptom attribution and vestibular symptoms. Finally, linear regressions demonstrated that symptom attribution, but not symptom endorsement, was significantly associated with objective cognitive functioning. Specifically, greater attribution of symptoms to TBI was associated with worse memory (β = -0.33, p = .035) and attention/processing speed (β = -0.40, p = .013) performance.
Conclusions:
Results showed significant associations between symptom attribution and (1) symptom endorsement and (2) objective cognitive performance in Veterans with a remote history of TBI. Taken together, findings suggest that Veterans who attribute neurobehavioral symptoms to their TBI are at greater risk of experiencing poor long-term outcomes. Although more research is needed to understand how illness perception influences outcomes in this population, results highlight the importance of early psychoeducation regarding the anticipated course of recovery following TBI.
Given the aging population, there are significant public health benefits to delaying the onset of Alzheimer’s disease (AD) in individuals at risk. However, adherence to health behaviors (e.g., diet, exercise, sleep hygiene) is low in the general population. The Health Belief Model proposes that beliefs such as perceived threat of disease, perceived benefits and barriers to behavior change, and cues to action are mediators of behavior change. The aim of this study was to gain additional information on current health behaviors and beliefs for individuals at risk for developing AD. This information can then be used to inform behavioral interventions and individualized strategies to improve health behaviors that may reduce AD risk or delay symptom onset.
Participants and Methods:
Surveys were sent to the Rhode Island AD Prevention Registry, which is enriched for at-risk, cognitively normal adults (i.e., the majority have a family history of dementia and/or an APOE e4 allele). A total of 177 individuals participated in this study. Participants were 68% female and 93% Caucasian and non-Hispanic, with a mean age of 69.2; 74% had a family history of dementia and 40% reported subjective memory decline. The survey included measures from the Science of Behavior Change (SoBC) Research Network assessing specific health belief factors, including individual AD risk, perceived future time remaining in one’s life, generalized self-efficacy, deferment of gratification, and consideration of future consequences, as well as dementia risk awareness and a total dementia risk score calculated from a combination of demographic, health, and lifestyle behaviors.
Results:
Participants who were older had higher scores for dementia risk (r=0.78), lower future time perspective (r=-0.33), and lower generalized self-efficacy (r=-0.31) (all p<0.001). Education was correlated with consideration of future consequences (r=-.31, p<0.001) and with lower overall dementia risk score (r=-0.23, p=0.006). Of all scales examined, only generalized self-efficacy had a significant linear relationship with both frequency (r2=0.06) and duration (r2=0.08) of weekly physical activity (p<0.001). Total dementia risk score also had significant linear relationships (r2=0.19) with future time perspective (p<0.001) and generalized self-efficacy (p=0.48).
Conclusions:
Overall, individuals who rated themselves higher in self-efficacy were more likely to exercise more frequently and for a longer duration. Individuals who had lower overall risk for dementia due to both demographic and behavioral factors were more likely to endorse higher self-efficacy and more perceived time remaining in their lives. Increasing self-efficacy and targeting perceived future time limitations may be key areas to increase motivation and participation in behavioral strategies to reduce AD risk. Developing individual profiles based on these scales may further allow for individually tailored intervention opportunities.
The Functional Assessment of Cancer Therapy-Cognitive scale (FACT-Cog) is one of the most frequently used patient-reported outcome (PRO) measures of cancer-related cognitive impairment (CRCI) and of CRCI-related impact on quality of life (QOL). Previous studies using the FACT-Cog found that >75% of women with breast cancer (BCa) experience CRCI. Distress tolerance (DT) is a complex construct that encompasses both the perceived capacity (i.e., cognitive appraisal) and the behavioral act of withstanding uncomfortable/aversive/negative emotional or physical experiences. Low DT is associated with psychopathology and executive dysfunction. We previously found that women with BCa with better DT skills reported less CRCI on the FACT-Cog. However, this relationship has not been tested using a performance-based cognitive measure. Therefore, the aims of this study were to: (1) assess the relationship between the FACT-Cog and the Telephone Interview for Cognitive Status (TICS), a performance-based cognitive measure; and (2) test whether the association between DT and CRCI (using the FACT-Cog) was replicated with the TICS.
Participants and Methods:
Participants completed the Distress Tolerance Scale (DTS), the FACT-Cog, and the TICS after undergoing BCa surgery and prior to starting adjuvant therapy [101 women, age >50 years, M(SD)= 61.15(7.76), 43% White Non-Hispanic, 34.4% White Hispanic, 10.8% Black, with nonmetastatic BCa, 55.4% lumpectomy, 36.6% mastectomy; median 29 days post-surgery].
Results:
Although there was a significant correlation between the TICS total score and the FACT-Cog QOL subscale (r = 0.347, p < 0.001), the TICS total score was not correlated with scores on the FACT-Cog perceived cognitive impairment (CogPCI), perceived cognitive abilities (CogPCA), or comments from others (CogOth) subscales. However, the TICS memory item, a 10-word list immediate recall task, had a weak but statistically significant correlation with CogPCI (r = 0.237, p = 0.032), CogOth (r = 0.223, p = 0.044), and CogPCA (r = 0.233, p = 0.036). Next, the sample was divided based on participants’ scores on the TICS memory item (i.e., below vs. above the sample mean of 5.09). Results of independent-samples t-tests demonstrated significant differences in mean scores for CogPCI, t(80) = -2.09, p = 0.04, Mdiff = -7.65, Cohen’s d = 0.483, and CogQOL, t(80) = -2.57, p = 0.01, Mdiff = -2.38, Cohen’s d = 0.593. A hierarchical linear regression found that DTS subscale and total scores did not significantly predict performance on the TICS. However, DTS continued to be a significant predictor of poorer FACT-Cog PCI scores while controlling for TICS scores.
Conclusions:
We found a weak relationship between self-reported cognitive impairment and objective cognitive performance (TICS). However, greater self-reported PCI and its impact on QOL were found in participants who scored below the sample mean on a recall task from the TICS. Although perceived ability to tolerate distress continued to predict self-reported PCI on the FACT-Cog, it did not predict overall performance on the TICS. Therefore, responses on the FACT-Cog may be more representative of an individual’s ability to tolerate distress related to perceived CRCI than of actual overall cognitive ability or impairment.
Inconsistent relationships between subjective and objective performance have been found across various clinical groups. Discrepancies in these relationships across studies have been attributed to various factors such as patient characteristics (e.g., level of insight associated with cognitive impairment) and test characteristics (e.g., using too few measures to assess different cognitive domains). Although performance and symptom invalidity are common in clinical and research settings and have the potential to impact responding on testing, previous studies have not explored the role of performance and symptom invalidity on relationships between objective and subjective performance. Therefore, the current study examined the impact of invalidity on performance and symptom validity tests (PVTs and SVTs, respectively) on the relationship between subjective and objective cognitive functioning.
Participants and Methods:
Data were obtained from 299 Veterans (77.6% male; mean age = 48.8 years, SD = 13.5) assessed in a VA medical center epilepsy monitoring unit from 2008 to 2018. Participants completed a measure of subjective functioning (i.e., the Patient Competency Rating Scale), PVTs (i.e., Word Memory Test, Test of Memory Malingering, Reliable Digit Span), SVTs (i.e., Minnesota Multiphasic Personality Inventory-2-Restructured Form Response Bias Scale, Structured Inventory of Malingered Symptomatology), and neuropsychological measures assessing objective cognitive performance (e.g., Trail Making Test parts A and B). Pearson correlations were conducted between subjective functioning and objective cognitive performance in the following groups: (1) PVT and SVT valid, (2) PVT and SVT invalid, (3) PVT-only invalid, and (4) SVT-only invalid. Using Fisher’s r-to-z transformation, tests for differences between correlation coefficients were then conducted between the PVT and SVT valid vs. PVT and SVT invalid groups, and between the PVT-only invalid vs. SVT-only invalid groups.
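The Fisher r-to-z comparison of two independent correlations can be sketched as follows; the correlation values and group sizes in the example are hypothetical placeholders, not the study's actual group ns.

```python
# Fisher r-to-z test for the difference between two independent correlations:
# transform each r with atanh, divide the difference by the pooled standard
# error sqrt(1/(n1-3) + 1/(n2-3)), and look up a two-tailed normal p-value.
# The inputs below are illustrative, not this study's data.
import math

def fisher_z_diff(r1, n1, r2, n2):
    """Return (z, two-tailed p) for H0: the two population correlations are equal."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal p-value
    return z, p

# Hypothetical example: r = .310 in one group of 150, r = -.033 in one of 60.
z, p = fisher_z_diff(0.310, 150, -0.033, 60)
print(round(z, 2), round(p, 3))
```

Note how the test's power depends on both group sizes through the standard error term, which is why the exploratory contrasts with small invalid-only groups are harder to detect.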
Results:
Participants with fully valid PVT and SVT performances demonstrated generally stronger relationships between subjective and objective scores (rs = .058 to .310) compared to participants with both invalid PVT and SVT scores (rs = -.033 to .132). However, the only significant difference in the strengths of correlations between these groups was found on Trail Making Test Part B (p = .034). In separate exploratory analyses conducted due to low group sizes, those with invalid PVT scores only (fully valid SVT) demonstrated generally stronger relationships between subjective and objective scores (rs = -.101 to .741) compared to participants with invalid SVT scores only (fully valid PVT; rs = -.088 to .024). However, the only significant difference in the strengths of correlations between these groups was found on Trail Making Test Part A (p = .028).
Conclusions:
The present study suggests that at least some of the discrepancies in previous studies between subjective and objective cognitive performance may be related to performance and symptom validity. Specifically, very weak relationships between objective and subjective performance were found in participants who only failed SVTs, whereas relationships were stronger in those who only failed PVTs. Therefore, findings suggest that including measures of PVTs and SVTs in future studies investigating relationships between subjective and objective cognitive performance is critical to ensuring accuracy of conclusions that are drawn.
Perceived cognitive dysfunction is a common feature of late-life depression (LLD) that is associated with diminished quality of life and greater disability. Similar associations have been demonstrated in individuals with Hoarding Disorder. The degree to which hoarding behaviors (HB) are associated with greater perceived cognitive dysfunction and disability in individuals with concurrent LLD is not known.
Participants and Methods:
Participants with LLD (N=83) completed measures of hoarding symptom severity (Savings Inventory-Revised; SI-R) and were classified into two groups based on HB severity: LLD+HB, who exhibited significant HB (SI-R ≥ 41, n = 25), and LLD with low HB (SI-R < 41, n = 58). Additional measures assessed depression severity (Hamilton Depression Rating Scale; HDRS), perceived cognitive difficulties (Everyday Cognition Scale; ECOG), and disability (World Health Organization Disability Assessment Scale [WHODAS]-II-Short). Given the non-normal distribution of ECOG and WHODAS-II scores, non-parametric Wilcoxon-Mann-Whitney tests were used to assess group differences in perceived cognitive dysfunction and disability. A regression model assessed the extent to which perceived cognitive dysfunction was associated with hoarding symptom severity measured continuously, covarying for age, education, gender, and depression severity. A separate regression model assessed the extent to which disability scores were associated with perceived cognitive dysfunction and HB severity, covarying for demographics and depression severity.
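A minimal sketch of the Wilcoxon-Mann-Whitney U statistic underlying these group comparisons, using made-up scores rather than study data:

```python
# Wilcoxon-Mann-Whitney U: rank all observations together (midranks for ties),
# sum the ranks of one group, and subtract that group's minimum possible rank
# sum. The scores below are invented illustrations, not study data.

def midranks(values):
    """Ranks 1..n, with tied values sharing their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # average of ranks i+1..j+1
        i = j + 1
    return ranks

def mann_whitney_u(a, b):
    ranks = midranks(list(a) + list(b))
    r_a = sum(ranks[: len(a)])               # rank sum of group a
    return r_a - len(a) * (len(a) + 1) / 2   # U statistic for group a

high_hb = [30, 28, 35, 40]      # hypothetical ECOG scores, LLD+HB group
low_hb = [20, 22, 28, 25, 18]   # hypothetical ECOG scores, low-HB group
print(mann_whitney_u(high_hb, low_hb))
```

Because the test operates on ranks rather than raw scores, it is robust to the skewed ECOG and WHODAS-II distributions that motivated its use here.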
Results:
LLD+HB participants endorsed significantly greater perceived cognitive dysfunction (W = 1023, p = 0.003) and greater disability (W = 1006, p < 0.001) compared to LLD. Regression models accounting for demographic characteristics and depression severity revealed that greater HB severity was associated with greater perceived cognitive dysfunction (β = 0.009, t = 2.765, p = 0.007). Increased disability was associated with greater perceived cognitive dysfunction (β = 4.792, t(71) = 3.551, p = 0.0007), and the association with HB severity (β = 0.080, t(71) = 1.944, p = 0.056) approached significance after accounting for variance explained by depression severity and demographic covariates.
Conclusions:
Our results suggest that hoarding behaviors are associated with increased perceived cognitive dysfunction and greater disability in individuals with LLD. Screening for HB in individuals with LLD may help identify those at greater risk for poor cognitive and functional outcomes. Interventions that target HB and perceived cognitive difficulties may decrease risk for disability in LLD. However, longitudinal studies would be required to further evaluate these relationships.
The relation between depressed mood and functional difficulties in older adults has been demonstrated in studies using self-report measures and has been interpreted as evidence that low mood negatively impacts everyday functional abilities. However, few studies have directly examined the relation between mood and everyday function using performance-based tests. This study included a standardized, performance-based measure of everyday action (Naturalistic Action Task, NAT) to test the prediction that reported depression symptoms are associated with self-report and performance-based tests of everyday function. Associations of anxiety symptoms and motivation/grit with everyday function were also explored.
Participants and Methods:
Sixty-eight older adults without dementia were screened; 55 were recruited from the community (M age = 74.21, SD = 6.80, age range = 65 to 98) and completed self-report measures of depression symptoms (GDS), anxiety (GAI), motivation (Short Grit-S), and everyday functioning (FAQ). Participants also performed the NAT, which requires completion of a breakfast task and a lunch task and is scored for task accomplishment, errors (micro-errors, overt errors, motor errors), and total time. Additionally, an informant reported on the participant’s everyday function. Spearman correlations were performed, and results showing a medium effect size or greater are reported.
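The Spearman correlations used here reduce to Pearson correlations computed on midranks; a minimal sketch with invented scores (not study data):

```python
# Spearman's rho: convert each variable to midranks, then compute the
# ordinary Pearson correlation on the ranks. Data below are illustrative.

def midranks(xs):
    """Ranks 1..n, with tied values sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    rx, ry = midranks(x), midranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

gds = [2, 5, 1, 8, 4]   # hypothetical depression scores
faq = [3, 6, 2, 9, 4]   # hypothetical functional complaint scores
print(round(spearman(gds, faq), 2))
```

Working on ranks makes the coefficient sensitive to monotonic rather than strictly linear associations, a reasonable choice for ordinal questionnaire scores like the GDS and FAQ.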
Results:
Participant mood (GDS) was associated with self-reported function (FAQ; r = .45) but not with performance-based measures of everyday function (NAT). Self-reported anxiety and motivation were not meaningfully associated with either self-reported or performance-based everyday function. Participant self-report (FAQ) and informant report of the participant’s function (I-FAQ) supported the validity of performance-based assessment, as both were meaningfully associated with NAT performance (FAQ x NAT overt errors r = .34; I-FAQ x NAT micro-errors r = .34; I-FAQ x NAT motor errors r = .49).
Conclusions:
Mood, but not anxiety or motivation, was associated with self-reported everyday function but not with performance-based function. When considered alongside the meaningful relations between self/informant report of function and everyday task performance, results suggest that mood does not impact everyday functional abilities in community-dwelling older adults without dementia. We suggest that frameworks be reconceptualized to consider the potential for mild functional difficulties to negatively impact mood in older adults without dementia. Additionally, interventions and compensatory strategies designed to improve everyday function should examine their impact on mood outcomes.