
Linking the Evidence: Intermediate Outcomes in Medical Test Assessments

Published online by Cambridge University Press: 23 January 2012

Lukas P. Staub
Affiliation: The University of Sydney (lukas.staub@ctc.usyd.edu.au)

Suzanne Dyer
Affiliation: The University of Sydney

Sarah J. Lord
Affiliation: The University of Sydney

R. John Simes
Affiliation: The University of Sydney

Abstract

Objectives: The aim of this study is to review how health technology assessments (HTAs) of medical tests incorporate intermediate outcomes into conclusions about the effectiveness of tests in improving health outcomes.

Methods: Systematic review of English-language test assessments in the HTA database from January 2005 to February 2010, supplemented by a search of the websites of members of the International Network of Agencies for Health Technology Assessment (INAHTA).

Results: A total of 149 HTAs from eight countries were assessed. Half evaluated tests for screening or diagnosis, a third for disease classification (including staging, prognosis, and monitoring), and a fifth for multiple purposes. In seventy-one HTAs (48 percent), only diagnostic accuracy was reported, while in seventeen (11 percent), evidence of health outcomes was reported in addition to accuracy. Intermediate outcomes, mainly the impact of test results on patient management, were considered in sixty-one HTAs (41 percent). Of these, forty-seven identified randomized trials or observational studies reporting intermediate outcomes. Neither the validity of these intermediate outcomes as surrogates for health outcomes nor the quality of this evidence was consistently discussed. Clear conclusions about whether the test was effective were included in approximately 60 percent of HTAs.
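The reported proportions follow directly from the counts above; as an illustrative consistency check only (counts taken from this abstract, percentages rounded to the reported whole numbers):

```python
# Consistency check of the proportions reported in the Results.
# Counts are taken from the abstract; percentages round to the
# reported values (48%, 11%, 41%).
total = 149
counts = {
    "diagnostic accuracy only": 71,
    "health outcomes in addition to accuracy": 17,
    "intermediate outcomes": 61,
}
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.0%}")
```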

Conclusions: Intermediate outcomes are frequently assessed in medical test HTAs, but the interpretation of this evidence is inconsistently reported. We recommend that reviewers explain the rationale for using intermediate outcomes, identify the assumptions required to link intermediate outcomes to patient benefits and harms, and assess the quality of included studies.
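The "linked evidence" reasoning behind this recommendation can be made concrete. The following Python sketch is illustrative only (it is not the authors' method, and all inputs are hypothetical): it chains estimates of test accuracy, an intermediate outcome (change in patient management), and a treatment effect into a projected health outcome, so that each linking assumption is stated explicitly rather than left implicit.

```python
# Minimal sketch of a linked-evidence calculation. All parameter names
# and values are hypothetical, for illustration only.

def linked_evidence_net_benefit(
    prevalence: float,            # pre-test probability of disease
    sensitivity: float,           # from diagnostic accuracy studies
    specificity: float,
    p_change_if_positive: float,  # intermediate outcome: P(management changes | positive test)
    benefit_if_treated: float,    # absolute risk reduction from changed management (diseased)
    harm_if_overtreated: float,   # absolute harm of changed management (non-diseased)
) -> float:
    """Expected net absolute benefit per patient tested.

    Linking assumptions made explicit:
      1. Only test-positive patients have their management changed.
      2. Trial-based treatment effects transfer to test-detected patients.
      3. Benefits and harms are additive on the absolute risk scale.
    """
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    benefit = true_pos * p_change_if_positive * benefit_if_treated
    harm = false_pos * p_change_if_positive * harm_if_overtreated
    return benefit - harm


if __name__ == "__main__":
    # Hypothetical inputs, for illustration only.
    net = linked_evidence_net_benefit(
        prevalence=0.10,
        sensitivity=0.90,
        specificity=0.85,
        p_change_if_positive=0.60,
        benefit_if_treated=0.20,
        harm_if_overtreated=0.02,
    )
    print(f"Projected net benefit per patient tested: {net:.3%}")
```

A calculation of this form makes visible exactly which quantities must be sourced from separate bodies of evidence, which is why the quality appraisal of each linked study matters.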

Type: Methods
Copyright © Cambridge University Press 2012

