
Effort – What is it, How Should it be Measured?

Published online by Cambridge University Press:  06 June 2011

Erin D. Bigler*
Affiliation:
Department of Psychology and Neuroscience Center, Brigham Young University, Provo, Utah Department of Psychiatry and the Utah Brain Institute, University of Utah, Salt Lake City, Utah
*
Correspondence and reprint requests to: Erin D. Bigler, Department of Psychology and Neuroscience Center, 1190D SWKT, Brigham Young University, Provo, UT, 84602. E-mail: erin_bigler@byu.edu

(JINS, 2011, 17, 751–752)

Type
Letter to the Editor
Copyright
Copyright © The International Neuropsychological Society 2011

Victor, Boone, and Kulick (2010) make a bold statement with major implications for the entire field of neuropsychology in their critique of Libon et al. (2010) on complex regional pain syndrome (CRPS). Victor et al. state: "… in the absence of adequate consideration of compensation status and effort, such conclusions are likely inaccurate, and it is our belief that the practice of continuing to publish such papers harms the field through the perpetuation of misleading information. The clinical impact of studies neglecting these factors is potentially damaging" (p. 1151). Without discussion or dispute, every researcher and clinician in neuropsychology strives to make accurate inferences from valid data. But is it really the case that the Libon et al. findings cannot be interpreted without knowing "compensation status and effort"? While compensation status is straightforward to determine and report, what defines "effort" and how it should be measured are not.

The term "effort" has emerged in neuropsychological nomenclature as an indicator of response bias, often defined by performance on symptom validity tests (SVTs). The terms effort, SVT performance, and response bias are frequently used interchangeably. Yet even with all the contemporary SVT research, McGrath, Mitchell, Kim, and Hough (2010) recently reviewed the topic and concluded that "Despite many years of research, a sufficient justification for the use of bias indicators in applied settings remains elusive" (p. 450).

Position papers from neuropsychological organizations provide some important SVT guidelines (Bush et al., 2005; Heilbronner, Sweet, Morgan, Larrabee, & Millis, 2009), but offer no specifics as to which SVTs may be best for which neurological and neuropsychiatric conditions. Often, good versus poor effort is based on whether the selected SVT is performed above ("good effort") or below ("poor effort") an established cut-score. However, cut-scores create binary classifications with inherent limitations. For example, Dwyer's (1996) review of cut-score development concluded that cut-scores (a) always entail judgment, (b) inherently result in some misclassification, (c) impose artificial "pass/fail" dichotomies, and (d) that no "true" cut-scores exist (p. 360). The position papers mentioned above provide clear guidelines for malingering detection, especially in the presence of below-chance SVT performance, but most studies that examine "poor" effort find above-chance yet below cut-score performance.

In this context, what does poor effort mean? Answers to this question can only come from comprehensive studies that compare and contrast the major neurological and neuropsychiatric disorders by SVT measures. Furthermore, there are no systematic and comprehensive lesion-location SVT studies that demonstrate absence of a performance effect.

Specific to the Libon et al. study, these investigators appropriately used a cluster analysis technique to examine their clinically derived CRPS data, resulting in a three-group solution that made clinical sense. How would knowing SVT findings alter these clinical groupings? In fact, one cluster was deemed unimpaired on neuropsychological tests. Is an independent "effort" measure needed to declare one group "normal" when they have actually performed within expected ranges of normalcy on all tests? The Global group was impaired across the board, including poor memory performance, and also endorsed moderate-to-severe depression. Could these findings be a reflection of poor effort? Certainly, but how does one disentangle cognitive effort and all of the issues raised above merely by administering an SVT?

Acknowledgments

There are no financial or other relationships that could be interpreted as a conflict of interest affecting this manuscript. Medicolegal cases are seen by the author.

References


Bush, S.S., Ruff, R.M., Troster, A.I., Barth, J.T., Koffler, S.P., Pliskin, N.H., Silver, C.H. (2005). Symptom validity assessment: Practice issues and medical necessity. NAN Policy & Planning Committee. Archives of Clinical Neuropsychology, 20(4), 419–426. doi:10.1016/j.acn.2005.02.002
Dwyer, C.A. (1996). Cut scores and testing: Statistics, judgment, truth, and error. Psychological Assessment, 8(4), 360–362.
Heilbronner, R.L., Sweet, J.J., Morgan, J.E., Larrabee, G.J., Millis, S.R. (2009). American Academy of Clinical Neuropsychology Consensus Conference Statement on the neuropsychological assessment of effort, response bias, and malingering. The Clinical Neuropsychologist, 23(7), 1093–1129. doi:10.1080/13854040903155063
Libon, D.J., Schwartzman, R.J., Eppig, J., Wambach, D., Brahin, E., Peterlin, B.L., Kalanuria, A. (2010). Neuropsychological deficits associated with Complex Regional Pain Syndrome. Journal of the International Neuropsychological Society, 16(3), 566–573. doi:10.1017/S1355617710000214
McGrath, R.E., Mitchell, M., Kim, B.H., Hough, L. (2010). Evidence for response bias as a source of error variance in applied assessment. Psychological Bulletin, 136(3), 450–470. doi:10.1037/a0019216
Victor, T.L., Boone, K.B., Kulick, A.D. (2010). My head hurts just thinking about it. [Letter to the editor]. Journal of the International Neuropsychological Society, 16, 1151–1152. doi:10.1017/S1355617710000858