
What Would It Take for Electrophysiology to Become Clinically Useful?

Published online by Cambridge University Press:  07 November 2014

Abstract

In this article we discuss the procedures that should be followed in order to develop diagnostic tests, based on electrophysiologic parameters, that would be useful to practicing psychiatrists in their efforts to diagnose mental disorders. To do this, we start by giving an overview of receiver operating characteristic (ROC) analysis, with which a candidate diagnostic test can be characterized mathematically. This methodology allows for an estimate of the overall discriminating power of the diagnostic test and thus can be used to compare the performance of one diagnostic test with that of another. However, ROC analysis cannot be used for optimizing a given diagnostic test (ie, determining the "best" cutoff) and, because it does not incorporate Bayes' theorem, it cannot be used to compare the performances of diagnostic tests applied to different risk groups for the target disorder. Both of these problems can be solved by using concepts from information theory. The equation for information gain automatically takes Bayes' theorem into consideration and also provides an intrinsic criterion for finding the cutoff that best discriminates between individuals who do and those who do not have the target disorder. Neither ROC analysis nor information theory is model dependent; however, if the equation for the frequency distribution of the electrophysiologic variable is known for the disease-positive and disease-negative populations, the calculations are greatly simplified. Therefore, the assumption is often made that both of these distributions (or transformations of the distributions) are Gaussian. However, studies have shown that the results of ROC analysis are quite robust to deviations from normality of the underlying distributions.
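The approach the abstract describes can be illustrated with a short computation. The sketch below, which assumes hypothetical binormal score distributions (test scores Gaussian in both the disease-positive and disease-negative groups, with higher scores indicating disease) and an illustrative prevalence of 0.2, computes the binormal ROC area and then scans cutoffs for the one that maximizes information gain, I(D;T) = H(D) − H(D|T), which incorporates Bayes' theorem through the posterior probabilities. All parameter values here are invented for illustration, not taken from the article.

```python
import math

def h(p):
    """Binary entropy in bits; defined as 0 at p = 0 or p = 1."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def info_gain(cutoff, prev, mu_pos, sd_pos, mu_neg, sd_neg):
    """Information gain I(D;T) in bits for the binary test obtained by
    dichotomizing a binormal score at `cutoff` (higher score = test positive)."""
    sens = 1.0 - norm_cdf(cutoff, mu_pos, sd_pos)   # P(T+ | D+)
    fpr = 1.0 - norm_cdf(cutoff, mu_neg, sd_neg)    # P(T+ | D-)
    p_tpos = prev * sens + (1 - prev) * fpr         # P(T+)
    # Posterior disease probabilities after a positive / negative result (Bayes)
    post_pos = prev * sens / p_tpos if 0.0 < p_tpos else 0.0
    post_neg = prev * (1 - sens) / (1 - p_tpos) if p_tpos < 1.0 else 0.0
    # I(D;T) = prior uncertainty H(D) minus expected posterior uncertainty H(D|T)
    return h(prev) - (p_tpos * h(post_pos) + (1 - p_tpos) * h(post_neg))

# Hypothetical test: scores ~ N(1, 1) in D+ and N(0, 1) in D-; prevalence 0.2.
# For a binormal test, the full-curve ROC area is Phi(delta_mu / sqrt(var+ + var-)).
auc = norm_cdf((1.0 - 0.0) / math.sqrt(1.0**2 + 1.0**2))

# Unlike the ROC area, information gain singles out a best cutoff: scan for it.
best = max((info_gain(c / 100.0, 0.2, 1.0, 1.0, 0.0, 1.0), c / 100.0)
           for c in range(-300, 400))

print(f"ROC area = {auc:.3f}")
print(f"best cutoff = {best[1]:.2f}, information gain = {best[0]:.4f} bits")
```

Note that the cutoff maximizing information gain depends on the prevalence fed into Bayes' theorem, which is exactly why the same test can warrant different cutoffs in different risk groups, a distinction the ROC area alone cannot express.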

Type: Feature Articles

Copyright © Cambridge University Press 1999

