Published online by Cambridge University Press: 23 December 2022
Clinical Decision Support Software (CDSS) can improve the quality and safety of care by providing patient-specific diagnostic and treatment recommendations. However, robust evaluation is required to ensure that the recommendations provided are clinically valid, up-to-date, and relevant to a specific clinical context. Most evaluation studies assess CDSS performance from the perspective of end-user requirements, but CDSS is only occasionally subject to stringent pre- and post-market evaluation, making it difficult to determine its safety and quality in practice. This study aimed to assess CDSS evaluation in Australia to identify gaps in evaluation approaches.
We conducted 11 semi-structured interviews with policymakers from committees involved in digital health activities in Australia. Data were thematically analyzed using both theory-based (deductive) and data-driven (inductive) approaches.
Our findings indicated that evaluating CDSS as a purely technical intervention has overly narrowed the assessment of benefits and risks by inadequately capturing the sociotechnical environment. Existing evaluation methods, which adopt a static view of the implemented system, cannot discern the impact of the dynamic clinical environment and rapidly evolving technology on CDSS performance. The timeframes of evaluation studies are also incongruent with fast software upgrade cycles: clinical practices and the software itself may have changed by the time an evaluation is complete. The regulation of software as a medical device depends on its intended use. CDSS are exempt from regulation because they only ‘produce advice’; however, this ignores the fact that a software update can transition them to specifying a diagnosis and treatment. There is no framework for continuous post-market monitoring, which is especially important given that a CDSS algorithm can change and affect patient management.
The sociotechnical environment is a significant factor influencing the impact of CDSS on clinical practice; evaluation approaches must therefore acknowledge the dynamic nature of clinical and organizational contexts. CDSS evaluation requires pragmatic, data-driven methodologies that account for the evolving landscape of clinical practice and its relationship to technology.