We developed the Shell Game Task (SGT) as a novel Performance Validity Test (PVT). While most PVTs use a forced-choice paradigm with “memory” as the primary domain being assessed, the SGT is a face-valid measure of attention and working memory. We explored the accuracy of the SGT in detecting noncredible performance using a simulator-design study.
Participants and Methods:
Ninety-four university students were randomly assigned to either a best-effort (CON; n=49) or a simulated traumatic brain injury (TBI) (SIM; n=45) condition. Participants completed a full battery of neuropsychological tests to simulate an actual evaluation, including the Test of Memory Malingering (TOMM) and the SGT. The SGT displays three cups and a red ball on the screen. Participants watch as the ball is placed under one of the three cups; the cups are then shuffled. Participants are asked to track the cup that contains the ball and correctly identify its location. We created two difficulty levels (easy vs hard, 20 trials each) by varying the number of times the cups were shuffled. Participants were given feedback (correct vs incorrect) after each trial. At the conclusion of the study, participants were asked about their adherence to the study directions.
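To make the task design concrete, the following is a minimal sketch of a single SGT-style trial; the cup count matches the task, but the shuffle counts per difficulty level, the function names, and the feedback handling are illustrative assumptions, not the authors' implementation.

```python
import random

# Sketch of a shell-game trial as described above. The shuffle counts per
# difficulty level are assumed values, not the study's actual parameters.
N_CUPS = 3
SHUFFLES = {"easy": 4, "hard": 10}  # difficulty varies only the shuffle count

def run_trial(difficulty, rng=random):
    ball = rng.randrange(N_CUPS)             # ball placed under a random cup
    swaps = []
    for _ in range(SHUFFLES[difficulty]):
        a, b = rng.sample(range(N_CUPS), 2)  # shuffle: swap two cup positions
        swaps.append((a, b))
        if ball == a:                        # the ball moves with its cup
            ball = b
        elif ball == b:
            ball = a
    return swaps, ball                       # swaps to animate; correct cup

swaps, answer = run_trial("hard")
response = random.randrange(N_CUPS)          # stand-in for the participant's choice
print("correct" if response == answer else "incorrect")  # per-trial feedback
```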
Results:
Participants with missing data (CON=1; SIM=2) or who reported non-adherence to study directions (CON=2; SIM=1) were removed from analyses. Twenty-five percent of the SIM group and 0% of the CON group failed TOMM Trial 2 (<45), suggesting adequate manipulation of groups. Groups did not differ in age, gender, ethnicity, or education (all p’s>.05). Nine participants in each group had a history of concussion/TBI. TBI history was not significantly related to performance on the SGT in either group, although participants with TBI history tended to do better. Average performance on TOMM Trial 1 (36.62 vs 47.91, p<.001) and TOMM Trial 2 (37.50 vs 49.71, p<.001) was significantly lower in the SIM group. SGT performance was also significantly lower in the SIM group on SGT Total Correct (20.17 vs 24.65 of 40, p=.008) and SGT Easy (10.60 vs 13.52 of 20, p=.002), with a nonsignificant trend on SGT Hard (9.57 vs 11.13 of 20, p=.068). A mixed ANOVA showed a trend toward a significant group by SGT difficulty interaction (F(1,86)=3.41, p=.052, ηp²=.043), with a steeper decline from SGT Easy to SGT Hard for CON. ROC analyses suggested adequate but not ideal sensitivity/specificity: scores <8 on SGT Easy (sensitivity=26%; false-positive rate=11%), <7 on SGT Hard (sensitivity=26%; false-positive rate=7%), and <15 on SGT Total (sensitivity=24%; false-positive rate=9%).
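As an illustration of how such cutoff statistics are computed, the sketch below evaluates a candidate SGT Total cutoff against simulated scores; the data, distributions, and variable names are stand-ins, not the study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated stand-ins for SGT Total scores (1 = simulator, 0 = control);
# means loosely echo the reported group averages but are not the study data.
rng = np.random.default_rng(0)
sgt_total = np.concatenate([rng.normal(20, 6, 45), rng.normal(25, 5, 49)])
is_sim = np.concatenate([np.ones(45), np.zeros(49)])

# Lower scores flag noncredible performance, so negate the predictor so that
# higher values map to the positive (simulator) class for the AUC.
print(f"AUC = {roc_auc_score(is_sim, -sgt_total):.2f}")

# Sensitivity and false-positive rate at a candidate cutoff, e.g. SGT Total <15.
cutoff = 15
sensitivity = np.mean(sgt_total[is_sim == 1] < cutoff)     # simulators flagged
false_positive = np.mean(sgt_total[is_sim == 0] < cutoff)  # controls misflagged
print(f"<{cutoff}: sensitivity={sensitivity:.0%}, false-positive rate={false_positive:.0%}")
```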
Conclusions:
These preliminary data indicate the SGT may be able to detect malingered TBI. However, additional development of this measure is necessary. Further refinement of difficulty level may improve sensitivity/specificity (e.g., mean CON performance on SGT Easy trials was 13.52 of 20, suggesting the items may be too difficult). This study was limited to online administration due to COVID-19, which could have affected results; future studies should test in-person administration of the SGT. In addition, performance in clinical control groups (larger samples of individuals with mild TBI, ADHD) should be tested to better determine specificity for these preliminary cutoffs.
Typical evaluations of adult ADHD consist of behavioral self-report rating scales, measures of cognitive or intellectual functioning, and measures specifically designed to assess attention. Boone (2009) suggested that continuous monitoring of effort is essential throughout psychological assessment. However, relatively few studies have contributed to the malingering literature on the ADHD population. Many studies have reported the adequate use of symptom validity tests, which assess effortful performance, in ADHD evaluations (Jasinski et al., 2011; Sollman et al., 2010; Schneider et al., 2014). Because of the length of ADHD assessments, individuals are likely to become fatigued, which can impact their performance. This study investigates the eye-movement strategies used by a clinical ADHD population, non-ADHD participants, and malingering simulators during a common, simple visual search task.
Participants and Methods:
A total of 153 college students participated in this study. To be placed in the ADHD group, participants had to endorse four or more symptoms on the ASRS (N = 37); to be placed in the non-ADHD group, they had to endorse no ADHD symptoms (N = 43). Participants who met neither criterion were placed in an indeterminate group and excluded from the analysis. A total of 20 participants were instructed to feign symptoms of ADHD during the session. Twelve Spot the Difference images were used as the visual stimuli. Sticky by Tobii Pro (2020), an online self-service platform that combines survey questions with webcam-based eye tracking, was used to collect eye-movement data, allowing participants to view the images from their home computers.
Results:
Results indicated that participants classified as malingering had a significantly higher Visit Count (M = 17.16, SD = 4.99) than the ADHD (M = 12.53, SD = 43.92) and non-ADHD groups (M = 11.51, SD = 3.23). The classification analysis yielded a statistically significant Area Under the Curve (AUC = .784, SE = .067, p = .003, 95% CI = .652-.916). Optimal cutoffs suggest a sensitivity of 50% with a false-positive rate of 10%.
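For context, the sketch below shows one common way to obtain an AUC with a 95% confidence interval (a nonparametric bootstrap); the visit counts and group sizes are simulated stand-ins, and the study's actual interval may have been derived differently (e.g., from the reported standard error).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated visit counts (1 = malingering simulator, 0 = ADHD/non-ADHD);
# group sizes and distributions are assumptions, not the study data.
rng = np.random.default_rng(1)
visits = np.concatenate([rng.normal(17.2, 5.0, 20), rng.normal(12.0, 3.5, 80)])
label = np.concatenate([np.ones(20), np.zeros(80)])

auc = roc_auc_score(label, visits)  # higher visit count -> flagged as malingering

# Nonparametric bootstrap for a 95% CI on the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(label), len(label))
    if label[idx].min() == label[idx].max():
        continue  # a resample needs both classes for an AUC
    boot.append(roc_auc_score(label[idx], visits[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```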
Conclusions:
Results indicated that eye-tracking technology could help differentiate simulated malingerers from non-malingerers with ADHD. Eye-tracking research relates to a patchwork of fields far broader than the study of perceptual systems. Because eye movements are closely tied to attentional mechanisms, the study’s results can provide insight into the cognitive processes underlying malingered performance.
Previous investigations have demonstrated the clinical utility of the Delis-Kaplan Executive Function System (D-KEFS) Color-Word Interference Test (CWIT) as an embedded validity indicator in mixed clinical and traumatic brain injury samples. The present study sought to cross-validate previously identified indicators and cutoffs in a sample of adults referred for psychoeducational testing.
Participants and Methods:
Archival data from 267 students and community members self-referred for a psychoeducational evaluation at a university clinic in the South were analyzed. Referrals included assessment for attention-deficit/hyperactivity disorder, specific learning disorder, autism spectrum disorder, and other disorders (e.g., anxiety, depression). Individuals were administered subtests of the D-KEFS, including the CWIT, and several standalone and embedded performance validity indicators as part of the evaluation. Criterion measures included the b Test, Victoria Symptom Validity Test, Medical Symptom Validity Test, Dot Counting Test, and Reliable Digit Span. Individuals who failed no criterion measures were included in the credible group (n = 164), and individuals who failed two or more were included in the non-credible group (n = 31). Because a subset of the sample was seeking external incentives (e.g., accommodations), individuals who failed only one criterion measure were excluded (n = 72). Indicators of interest included each test condition examined separately, the inverted Stroop index (i.e., better performance on the interference trial than on the word-reading or color-naming trials), the inhibition and inhibition/switching composites, and the sum of all conditions.
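The criterion-group assignment amounts to counting PVT failures per examinee; a minimal sketch follows, in which the DataFrame, column names, and simulated failure flags are hypothetical rather than the archival data.

```python
import numpy as np
import pandas as pd

# Hypothetical pass/fail flags for the five criterion PVTs; column names and
# the simulated 10% failure rate are assumptions, not the archival dataset.
pvt_cols = ["b_test_fail", "vsvt_fail", "msvt_fail", "dct_fail", "rds_fail"]
rng = np.random.default_rng(2)
df = pd.DataFrame(rng.random((267, 5)) < 0.10, columns=pvt_cols)

fails = df[pvt_cols].sum(axis=1)
df["group"] = "excluded"                     # exactly 1 failure: ambiguous, dropped
df.loc[fails == 0, "group"] = "credible"     # 0 failures (n = 164 in the study)
df.loc[fails >= 2, "group"] = "noncredible"  # 2+ failures (n = 31 in the study)
analysis = df[df["group"] != "excluded"]
```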
Results:
Receiver operating characteristic (ROC) curves were significant for all four conditions (p < .001) and the inverted Stroop index (p = .032). However, only Conditions 2, 3, and 4 met minimally acceptable classification accuracy (AUC = .72-.81). ROC curves for the composite indicators were also significant (p < .001), with all three composites meeting minimally acceptable classification accuracy (AUC = .71-.80). At the previously identified cutoff of an age-corrected scaled score of 6 for all four conditions, specificity was high (.88-.91), with varying sensitivity (.23-.45). At the previously identified cutoff of .75 for the inverted Stroop index, specificity was high (.87) while sensitivity was low (.19). Composite indicators yielded high specificity (.88-.99) at previously established cutoffs, with sensitivity varying from low to moderate (.19-.48). Increasing the cutoffs (i.e., requiring a higher age-corrected scaled score to pass) for composite indicators increased sensitivity while maintaining high specificity. For example, increasing the total score cutoff from 18 to 28 raised sensitivity from .26 to .52 with specificity of .91.
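The cutoff adjustment described above can be viewed as a scan over candidate cutoffs that keeps specificity at or above a target (here, .90) while maximizing sensitivity; the sketch below uses simulated sum-of-conditions totals, not the clinic data.

```python
import numpy as np

# Simulated sum-of-conditions totals for the two criterion groups; group
# sizes match the study, but the score distributions are assumptions.
rng = np.random.default_rng(3)
credible = rng.normal(40, 8, 164)   # credible group (failed 0 criterion PVTs)
noncred = rng.normal(28, 8, 31)     # non-credible group (failed 2+)

for cutoff in range(18, 31):        # candidate "fail if total < cutoff" rules
    specificity = np.mean(credible >= cutoff)  # credible examinees passing
    sensitivity = np.mean(noncred < cutoff)    # non-credible examinees flagged
    if specificity >= 0.90:                    # keep false positives <= 10%
        print(f"<{cutoff}: sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```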
Conclusions:
While a cutoff of 6 resulted in high specificity for most conditions, the sum of all four conditions exhibited the strongest classification accuracy and appears to be the most robust indicator, consistent with previous research (Eglit et al., 2019). However, a cutoff of 28, rather than 18, may be most appropriate for psychoeducational samples. Overall, the results suggest that the D-KEFS CWIT can function as a measure of performance validity in addition to a measure of processing speed/executive functioning.