
Advancing infection prevention and control through artificial intelligence: a scoping review of applications, barriers, and a decision-support checklist

Published online by Cambridge University Press:  25 November 2025

Silvana Gastaldi*
Affiliation:
Department of Infectious Diseases, Epidemiology, Biostatistics and Mathematical Modeling Unit (EPI), Istituto Superiore di Sanità, Italy
Ermira Tartari
Affiliation:
Faculty of Health Sciences, University of Malta, Msida, Malta
Giovanni Satta
Affiliation:
Centre for Clinical Microbiology, University College London, London, UK
Benedetta Allegranzi
Affiliation:
Department of Communicable Diseases, Global Lead for Infection Prevention and Control, World Health Organization, Eastern Mediterranean Regional Office, Cairo, Egypt
*
Corresponding author: Silvana Gastaldi; Email: silvanagastaldi72@gmail.com

Abstract

Objective:

To examine how artificial intelligence (AI) has been applied to infection prevention and control (IPC) in healthcare, identify barriers and risks affecting implementation, and develop a structured checklist to support safe adoption.

Design:

Scoping review conducted in line with Joanna Briggs Institute methodology and reported according to PRISMA-ScR.

Methods:

PubMed, Scopus, and Web of Science were searched for primary studies (2014–2024) describing real-world AI applications for IPC. Studies reporting implementation experiences, outcomes, or risks were included. Data on study design, AI type, IPC function, integration level, barriers, and outcomes were extracted and synthesized thematically to derive a 41-item decision-support checklist.

Results:

Of 2,143 records screened, 100 studies met the inclusion criteria. Most were published since 2022, with the United States and China leading output. Machine learning dominated (75%), mainly for predictive analytics (53%), healthcare-associated infection (HAI) detection (13%), and hand hygiene monitoring (13%). Only 15% of tools were integrated into existing digital infrastructures. Barriers centred on data quality (45%), combined technical and data issues (16%), and combined economic and technical constraints (16%). Reported risks clustered around operational failures (35%), technical errors (33%), and data security (12%). Evidence was heavily skewed toward high-income countries, with limited prospective validation or implementation research.

Conclusions:

AI offers clear promise for IPC, particularly in early detection and compliance monitoring, but its translation into practice remains constrained by data fragmentation, limited integration, and uneven readiness across settings. Our evidence-informed checklist provides IPC teams with a structured tool to assess feasibility, governance, and resource needs before adoption, supporting safer and more sustainable innovation.

Information

Type
Original Article
Creative Commons
Creative Commons Licence: CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Society for Healthcare Epidemiology of America

Introduction

Healthcare-associated infections (HAIs) remain a major threat to patient safety and quality of care worldwide. 1 Conventional infection prevention and control (IPC) measures, such as active surveillance, outbreak investigation, and transmission-based precautions, are essential but often reactive and resource-intensive. 1

Artificial intelligence (AI) has emerged as a promising tool to enhance IPC by linking diverse data sources, including laboratory results, resistance profiles, patient movement, and compliance behaviors. 2,3,4 Applications include predictive analytics, infection detection, hand hygiene monitoring, and compliance auditing. While evidence demonstrates technical feasibility, translation into everyday practice remains limited due to fragmented data, poor integration, and organizational or ethical concerns. 5,6

This review synthesizes evidence on real-world AI applications for IPC, identifies barriers and risks, and develops an evidence-informed readiness checklist to support easier and safer adoption. The checklist aligns empirical findings with international governance frameworks, such as the WHO ethics guidance on AI in health 6 and the EU Artificial Intelligence Act. 7

Methods

This review followed the Joanna Briggs Institute (JBI) methodology for scoping reviews 8 and is reported in line with PRISMA-ScR guidelines. 9 We addressed two questions: (1) What are the documented applications, barriers, and real-world outcomes of AI in IPC? and (2) How can these findings inform a structured checklist for feasibility, risk, and organizational readiness?

Eligible studies were primary research (prospective cohorts, retrospective analyses, pilots, multicenter trials) describing AI applications in healthcare IPC with practical outcomes (eg, effectiveness, feasibility, workflow integration, risks). We excluded theoretical models, algorithm-only studies without real-world application, and non-peer-reviewed material such as editorials or conference abstracts. Digital health tools or Internet of Things (IoT) solutions were included only if they incorporated an AI component relevant to IPC functions. Searches covered PubMed, Scopus, and Web of Science (period Jan 2014–Dec 2024; last search Feb 2025) using a Population–Concept–Context framework. 8 No language restrictions were applied. Search terms are detailed in Supplementary Appendix A.

Records were screened in the Rayyan tool 10 by two reviewers (SG, ET), with adjudication by a third (GS). The PRISMA flow diagram (Figure 1) summarizes the selection. Data were extracted into a piloted template and charted in Excel. Variables included study characteristics, AI type, IPC application, implementation features, outcomes, barriers, and risks. Two reviewers (SG, ET) cross-validated entries.

Figure 1. PRISMA 2020 flow diagram for the scoping review.

Data synthesis combined descriptive and thematic analysis. Descriptive synthesis grouped studies by AI type and IPC function; thematic synthesis identified implementation barriers, risks, and facilitators. Building on this analysis, we developed a 41-item readiness checklist mapped to six domains: governance and policy, data quality and interoperability, technical and infrastructure, human and workflow, economic and resource, and risk and compliance. The full data set, including structured classifications by AI method, IPC focus, implementation features, and risks, is available in Supplementary Appendix B. Each study in the following tables retains the same numeric identifier as in Appendix B, allowing readers to cross-reference study characteristics with the full citation.

Results

Out of 2,143 identified records, 382 duplicates were removed. Of the remaining 1,761 titles and abstracts screened, 272 full-text articles were reviewed. A total of 100 studies met all inclusion criteria and were included in the final synthesis.

Of the 100 studies included in this review, only one (1%) was published before 2017. In contrast, 31% were published between 2017 and 2021, while more than two-thirds (68%) appeared between 2022 and 2024, indicating an acceleration in research activity in recent years.

Geographically, the evidence was mostly generated in high-income settings. The United States accounted for 35% of studies, primarily focused on predictive analytics. China contributed 17%, often exploring deep-learning approaches. European countries represented 19%, with emphasis on workflow integration and HAI surveillance. The Asia-Pacific region outside China added another 14%, and Latin America contributed 5%. A small number of studies (6%) originated from sub-Saharan Africa and the Middle East, reflecting emerging research capacity but limited scale.

Using the four-tier scheme applied to the 100 eligible papers (Experimental, Interventional/Implementation, Retrospective, and Validation; Table 1), almost half of the evidence base remained anchored in retrospective work (49%), which relied on existing health or laboratory data to build or test models.

Table 1. Distribution of standardized study design classifications

Experimental designs accounted for 30%, focusing on technical feasibility in controlled settings. Far fewer studies examined real-world use: 17% reported interventional or implementation pilots, while only 4% carried out independent validation before wider deployment.

Types of AI technology

A full breakdown of AI techniques and corresponding references is available in Table 2 and detailed in Supplementary Appendix B.

Table 2. Distribution of AI technology categories

Machine learning (ML) was the dominant technique, used in 75% of the 100 studies. These systems often relied on supervised algorithms such as logistic regression, random forests, or gradient boosting to support tasks like infection risk prediction and HAI surveillance based on electronic health record (EHR) or sensor data (see the IPC applications section below for details).
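
As an illustration of this supervised-learning pattern, the sketch below trains a gradient-boosting classifier on synthetic, EHR-style features to flag patients at elevated HAI risk. The features, outcome, and flagging threshold are hypothetical and assume only that scikit-learn and NumPy are available; no values are drawn from the included studies.

# Minimal sketch of supervised HAI risk prediction (illustrative, not from any included study).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical EHR-derived features: age, device-days, prior antibiotic courses, serum albumin
X = np.column_stack([
    rng.normal(65, 12, n),      # age (years)
    rng.poisson(3, n),          # central-line or catheter days
    rng.poisson(1, n),          # prior antibiotic courses
    rng.normal(3.5, 0.5, n),    # serum albumin (g/dL)
])
# Synthetic outcome loosely linked to device exposure, for demonstration only
risk = 0.05 + 0.04 * X[:, 1]
y = (rng.random(n) < np.clip(risk, 0, 0.9)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("AUROC on held-out data:", round(roc_auc_score(y_test, probs), 3))
# Patients above a locally chosen threshold would be flagged for IPC review
flagged = probs > 0.20
print("Patients flagged for review:", int(flagged.sum()))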

Deep learning approaches appeared in 21 studies (21%) and included, for example, convolutional neural networks (CNNs) for surface contamination detection, 60 video-based hand hygiene monitoring, 63 surgical site infection (SSI) detection, 96 and early detection of multidrug-resistant infections. 52

Generative AI was used in just four studies (4%). These projects piloted large language models (LLMs) for IPC education, 11 policy summarisation evaluated against expert consensus, 31 HAI surveillance, 41 and hand hygiene. 49

Distribution of IPC applications supported by AI

Among the 100 primary studies, nine distinct IPC application areas were identified (Table 3 and Figure 2).

Figure 2. Distribution of IPC applications.

Table 3. Distribution of IPC applications supported by AI

Predictive-analytics systems accounted for 53% of the studies. These papers typically trained supervised models such as logistic regression, gradient boosting, or recurrent networks on EHR data to forecast patient-level risks such as central line-associated bacteremia, 39,56 multidrug resistance (MDR), 35,52 or SSI. 12,38,74,107 In some cases, ward-level early-warning scores were generated by combining laboratory, vital-sign, and admission data streams. 56

Hand hygiene compliance monitoring was reported in 13 studies (13%), with applications ranging from video analytics using CNNs and badge-based proximity sensors 17,96 to alcohol-dispenser sensors linked to compliance dashboards 109 and optical microscopy with ML classification of hand contamination. 40
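
To illustrate the video-analytics building block behind several of these tools, the sketch below defines a small convolutional network that scores individual frames against a set of hand-rub steps. The architecture, input size, and seven-step output are assumptions for demonstration (PyTorch assumed available); it does not reproduce any published model, and in practice per-frame outputs would be aggregated over time and fed to a compliance dashboard.

# Illustrative frame-level classifier for hand-hygiene steps (architecture and class count are assumed).
import torch
import torch.nn as nn

class HandHygieneStepCNN(nn.Module):
    def __init__(self, num_steps: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_steps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of RGB frames with shape (N, 3, H, W)
        return self.classifier(self.features(x).flatten(1))

model = HandHygieneStepCNN()
dummy_frames = torch.randn(4, 3, 112, 112)   # four synthetic frames
step_logits = model(dummy_frames)
print(step_logits.shape)  # torch.Size([4, 7]): one score per hand-rub step per frame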

Thirteen studies (13%) focused on HAI detection, applying classification algorithms to microbiology, imaging, or pharmacy records to flag active infections such as SSI, 36,64,76,77,79 central line-associated bloodstream infection, 94 or urinary tract infections (UTIs). 86 Many of these tools operated continuously within clinical workflows. 60,98

Eight studies (8%) targeted HAI surveillance by aggregating time-series data for incidence-trend analysis. 25,54,108 Automated case-finding rules were often calibrated against conventional manual surveillance, 45 with several studies also using natural language processing (NLP) 29,34,72,108 or generative AI. 41 Outbreak-detection models were described in four studies (4%). Approaches ranged from supervised clustering of ward alerts 101 to whole-genome sequencing (WGS) pipelines that feed resistance-related single-nucleotide-polymorphism data into transmission-mapping algorithms. 42,43 One study integrated environmental IoT sensors to identify spatiotemporal hotspots. 15
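
To make the notion of an automated case-finding rule concrete, the sketch below applies a simple, hypothetical rule (positive urine culture while a urinary catheter has been in place for more than two days) to structured records using pandas. The column names, threshold, and rule are illustrative placeholders; real definitions would come from the local surveillance protocol, and flagged cases would still be confirmed manually.

# Illustrative automated case-finding over structured surveillance data (rule and fields are hypothetical).
import pandas as pd

records = pd.DataFrame({
    "patient_id":             [101, 102, 103, 104],
    "catheter_days":          [4, 1, 6, 0],
    "urine_culture_positive": [True, True, False, True],
})

# Hypothetical rule: flag possible catheter-associated UTI for manual IPC confirmation
candidates = records[
    (records["catheter_days"] > 2) & (records["urine_culture_positive"])
]
print(candidates[["patient_id", "catheter_days"]])
# Flagged cases would then be calibrated against conventional manual surveillance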

Less-common themes included education and training platforms (3%), in which generative or retrieval-augmented language models generated scenario-based learning modules 11 or computer vision based on human-AI collaboration supported personal protective equipment (PPE) monitoring; 44,55 environmental monitoring (3%), employing image recognition or air-quality sensors for surface-cleanliness verification; 27,69,106 and antimicrobial resistance (AMR) prediction (1%), in one case using plasmonic nanosensors. 110 Finally, decision-support dashboards (2%) were described in two studies, one evaluating ChatGPT's reliability in agreeing with expert statements 31 and one supporting risk stratification for UTIs. 59

Barrier profile

Four barrier categories were coded across the 100 primary studies, combining into nine composite categories (Table 4 and Figure 3).

Figure 3. Distribution of barrier categories.

Table 4. Barrier categories

Data-related barriers were the most common, reported in 45 studies (45%), with issues such as incomplete records, non-standard terminologies, and delayed feeds. 22,31,77,78 Barriers combining technical and economic elements (16%) were also frequent. Examples included high setup and maintenance costs for AI surveillance tools, 35 computational demands and complexity of real-time systems, 38 and substantial infrastructure requirements for whole-genome sequencing pipelines. 42,43 A further 16% of studies reported barriers blending integration challenges with data shortcomings. Purely technical problems, including outages, high computational requirements, and sensor failures, appeared in 9%. Human factors coupled with data gaps, such as low trust or alert fatigue, were noted in 5%. Less frequent were mixed technical-human (3%), economic-data (3%), stand-alone economic (2%), and economic-human (1%) barriers.

Risk taxonomy

Four risk categories were coded across the 100 primary studies, yielding eight composite categories (Table 5 and Figure 4). Risk reporting was highly granular, and many papers logged more than one category.

Figure 4. Distribution of risk categories.

Table 5. Risk categories

Operational issues were the most frequently documented risk type, recorded in 35 of 100 studies (35%). These papers described, for example, over-reliance on predictions without clinical confirmation 12,16 or potential misinterpretation if model outputs are not validated. 18

Technical failures followed closely, appearing in 33 studies (33%). Examples included the risk of model overfitting 71,73 and the risk of under-detection in systems with incomplete digital documentation. 77,86

Data-security concerns were reported in 12 studies (12%), typically focusing on patient or staff privacy regulations. 15,37,109

Combined technical-and-operational risks featured in nine studies (9%), while paired operational-and-data-security risks were noted in four studies (4%).

Pure human-factor risks, such as over-reliance on AI outputs 110 or diminished vigilance during downtimes, 102 were documented in four studies (4%). Two papers (2%) combined technical and data-security risks, 29,32 and one study (1%) reported a mixed human-and-data-security category. 49

Digital-integration status

Of the 100 studies reviewed, only 15 papers (15%) described AI tools already integrated into another digital layer of hospital or public health information systems.

Single studies (each 1%) detailed integration via mHealth apps, 13 computer-vision systems, 44 computational-fluid-dynamics models, 106 wearable devices, 109 and plasmonic nanosensors. 110

The remaining 85 studies (85%) reported prototypes without wider digital connectivity.

Checklist refinement and practical use

To help IPC teams assess readiness for adopting AI tools, we translated the findings of this review into a structured, evidence-informed checklist (Table 6). The checklist contains 41 items grouped into six domains reflecting the most common barrier clusters identified across the literature: Governance and Policy, Data Quality and Interoperability, Technical and Infrastructure, Human and Workflow, Risk and Compliance, and Economic and Resource.

Table 6. Structured readiness checklist for AI implementation in IPC

Each item was derived inductively from recurrent challenges and risks documented in the 100 included studies, ensuring that the checklist reflects real-world implementation experiences rather than theoretical considerations. To support practical use, two additional ratings were applied: maturity and priority.

By combining these two layers, IPC teams can gauge both how close their systems are to readiness and where to focus resources first. The maturity-priority framework thus transforms the checklist into a practical roadmap for planning, implementation, and risk mitigation.

The development of these scales followed a four-step, evidence-informed process:

  1. Thematic synthesis: reviewers inductively coded implementation barriers and risks from the included studies and organized them into the six checklist domains.

  2. Item drafting: recurring, actionable challenges within each domain were translated into checklist items.

  3. Maturity anchors: the four-level maturity scale was defined to reflect observable progression from non-existent capacity to embedded practice. This structure aligns with established digital health maturity concepts such as the Healthcare Information and Management Systems Society's (HIMSS) Electronic Medical Record Adoption Model (EMRAM) 111 and with implementation science frameworks on organizational readiness, such as the Consolidated Framework for Implementation Research (CFIR). 112

  4. Priority criteria: item priority was determined using risk-management principles (safety and regulatory criticality, dependency and sequencing, feasibility and resource burden), aligned with two normative governance frameworks central to this review: the WHO guidance on ethics and governance of AI in health 6 and the EU Artificial Intelligence Act, 7 which classifies AI applications by risk. This ensures that high-priority items correspond to safeguards required for high-risk use cases.

The maturity scale was designed as a four-point ordinal measure:

  • 0—Absent: there is no sign that an AI solution has been considered.

  • 1—Emergent: early steps are present (e.g., draft policy, small pilot, budget line), but the system is not yet operational.

  • 2—Functional: the system is operational but limited in scope or reliability.

  • 3—Mature: the element is fully embedded, routinely monitored, and continuously improved.

This progression mirrors patterns observed in the literature, moving from complete absence of capacity (eg, no data stewardship), through pilots and partial functionality, to fully institutionalized, monitored systems. A neutral midpoint was deliberately excluded to encourage decisive appraisal of whether an element is operational or still requires attention.

The priority scale distinguishes between:

  • High Priority items, which are essential for safe and effective AI use, such as complete data, model accuracy, cybersecurity, and legal compliance.

  • Medium Priority items, which facilitate adoption and sustainability over time, including vendor support, return-on-investment data, or stress-testing of alert volumes.
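
One simple way to combine the two layers is to score each checklist item by domain and priority and compare mean maturity across domains, so that gaps on high-priority safeguards stand out. The sketch below does this for a handful of hypothetical items; the items, domains, and scoring approach are illustrative and are not part of the published checklist.

# Illustrative maturity/priority scoring for the readiness checklist (items are hypothetical).
from collections import defaultdict

# Each item: (domain, priority, maturity on the 0-3 scale)
items = [
    ("Data Quality and Interoperability", "high",   1),
    ("Data Quality and Interoperability", "medium", 2),
    ("Governance and Policy",             "high",   0),
    ("Technical and Infrastructure",      "medium", 3),
    ("Risk and Compliance",               "high",   2),
]

scores = defaultdict(lambda: {"high": [], "medium": []})
for domain, priority, maturity in items:
    scores[domain][priority].append(maturity)

for domain, by_priority in scores.items():
    for priority, values in by_priority.items():
        if values:
            mean = sum(values) / len(values)
            print(f"{domain} [{priority} priority]: mean maturity {mean:.1f} / 3")
# Domains with low mean maturity on high-priority items would be addressed first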

Discussion

This review maps how AI has been applied in IPC across 100 studies and distills common barriers and risks into a readiness checklist. Applications ranged from prediction to surveillance and compliance monitoring, but most systems remain at early stages. Success depends not only on algorithmic accuracy but on the conditions enabling reliable use: high-quality data, seamless integration, and organizational readiness. Three key messages emerge: data quality drives performance; integration is the main bottleneck; and barriers span technical, economic, and organizational domains.

Data quality drives AI performance

According to our analysis, data completeness and quality influence AI system performance and applicability. Across nearly every IPC application area, the most effective systems were built on complete, high-fidelity data. When microbiology codes, admission timestamps, and vital signs were reliably captured, predictive models for HAI prevention, such as SSI and Clostridioides difficile risk prediction, routinely achieved high accuracy. 12,13,78 Conversely, nearly half of all studies reported stalled or degraded performance due to missing fields, non-standard coding, or delayed data streams. 77,100 From a practical standpoint, this suggests that data readiness must come before model adoption. Installing even the most sophisticated AI algorithm on top of fragmented or inconsistent inputs is unlikely to yield clinical value and may even introduce misleading noise.
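
In line with the principle that data readiness must precede model adoption, a practical first step is to profile the completeness of the fields a model will rely on. The sketch below (pandas, with invented field names and an arbitrary example threshold) reports the share of missing values per field so that gaps are visible before any algorithm is installed.

# Minimal data-completeness audit before AI adoption (field names and threshold are hypothetical).
import pandas as pd
import numpy as np

ehr_extract = pd.DataFrame({
    "admission_timestamp": ["2024-01-02", None, "2024-01-05", "2024-01-06"],
    "microbiology_code":   ["ESCCOL", "KLEPNE", None, None],
    "temperature_max":     [38.5, np.nan, 37.2, 39.1],
})

missing_share = ehr_extract.isna().mean().sort_values(ascending=False)
print(missing_share.round(2))
# Fields above a locally agreed missingness threshold (e.g., 20%) would need
# remediation or standardized coding before model training or deployment.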

Digital integration is the real bottleneck

The study design distribution highlighted a pronounced translation gap: while experimental (30%) and retrospective investigations (49%) dominate, fewer than one-fifth of studies evaluated real-world implementation (17%), and rigorous external validation remains exceptional (4%). External validation or device-level verification is the step that exposes hidden overfitting, data-mapping errors, and workflow frictions before large-scale rollout. The scarcity of such studies therefore highlights a critical translational gap: most AI-for-IPC tools remain unproven outside their development sandbox. Moving the field beyond proof of concept will require prospective, implementation-science designs that capture workflow integration, user acceptance, and downstream infection-control impact. The vast majority of tools remained confined to isolated research platforms, often lacking connectivity with EHR interfaces, IPC dashboards, or real-time data streams from environmental or wearable sensors. However, by curating case definitions into formats interpretable by AI agents, even complex surveillance tasks, such as the detection of HAIs, can be substantially automated. 25,29,34,41,45,54,72,108

Such frameworks could also be extended to support automated case-to-definition matching in outbreak investigations, reportable disease tracking, and clinical decision-making, particularly when embedded directly within EHRs. 15,42,43,101

Where digital integration did occur, for example, monitoring of PPE adherence or linking hand hygiene systems to IPC dashboards, teams were more likely to act on the outputs, leading to improved compliance and earlier intervention. 44,109

Failures were often linked to technical incompatibilities (eg, software version mismatches after EHR upgrades), infrastructure gaps such as the absence of real-time IoT data pipelines, and high setup costs, underscoring the need for integration with existing systems to maintain smooth workflows. 66,67,69 This highlights a key insight: accuracy is not enough; usefulness depends on connectivity.

Barriers compound across domains

Barriers rarely occur in isolation. Data gaps, fragile digital infrastructure, budget constraints, and users' skepticism often interact, reinforcing one another. For instance, dependency on EHR structure and data quality, together with high computational cost, contributed to model drift, which in turn produced irrelevant alerts that clinicians learned to ignore. 35,85

Human factors, like alert fatigue or distrust, became more acute when paired with poor feedback loops or missing context. 102 A simple AI model that functions transparently within a familiar dashboard may be more sustainable than a black-box tool that overloads staff with unexplained alerts.

Risk is not just technical, it’s operational

The risk landscape described across these studies suggests that operational vulnerabilities are at least as common as, and potentially more disruptive than, algorithmic failures. Some studies reported challenges with the explainability of ML models and with their integration into daily routines, owing to complexity 59 or misaligned response protocols. 31 These risks affect patient care directly and erode staff trust.

Operational and technical risks frequently co-occurred, particularly in complex systems dependent on high-quality data and computational power. 59 Although relatively few papers discussed it (12%), data security, including patient privacy, remains critical, especially as cloud-based and wireless IPC solutions scale across institutions.

Maturity levels vary across IPC application areas

Some IPC application areas appear closer to routine readiness. Predictive analytics (53%) and hand hygiene monitoring (13%) stand out: models often performed well, and compliance tools that provided immediate feedback were linked to tangible improvements. 12,13,17 One pilot study demonstrated that a human-AI collaboration system accurately monitored PPE donning and doffing procedures in a simulated setting, suggesting that such systems could serve as a substitute for, or enhancement of, in-person observers. 44,55 These systems are beginning to move beyond pilot status in select settings. One study reported that despite some usability concerns, particularly related to AI system design and EHR integration, most users expressed overall satisfaction with AI-based hand hygiene monitoring. 68 Importantly, the system was perceived to reduce HAIs and positively influence provider well-being, with younger and more experienced staff reporting greater satisfaction with AI use in direct patient care. 68

By contrast, AMR prediction through WGS 42,43 and sensor-based environmental monitoring 27,69,106 remain largely exploratory. Their value is conceptually clear, but technical complexity and cost remain barriers to widespread use.

For decision-makers, this suggests a tiered approach: focus first on AI tools with strong implementation evidence and realistic integration paths, while continuing to evaluate and pilot newer innovations with longer lead times.

AI tools already match or surpass conventional IPC approaches in prediction and compliance monitoring (eg, hand hygiene or PPE compliance) when high-quality data and robust integration are in place. However, data fragmentation, fragile interfaces, and unfunded maintenance obligations consistently limit scale-up. We also illustrate how these findings informed a readiness checklist that offers IPC leaders a concise, evidence-based instrument to assess feasibility, allocate resources, and mitigate risk before AI deployment.

Future directions

Few studies addressed explainability, equity, or sustainability. Post hoc explanation methods (eg, SHAP) were rare, 95 and almost none reported energy or carbon footprints, even though the Wash Ring project 109 showed that efficiency gains are possible. Future research should combine performance metrics with explainability, sustainability reporting, and validation in diverse, low-resource settings.

Limitations

Most included studies were single-center pilots from high-income countries (85%), limiting generalizability. Evidence on cost-effectiveness and patient outcomes was scarce. The proposed checklist is an initial synthesis that requires prospective validation and iterative refinement through consensus methods such as Delphi.

Conclusion

Artificial intelligence offers real opportunities to strengthen infection prevention and control, from improving prediction and surveillance to supporting compliance monitoring. Yet most applications remain at an early stage, with progress slowed by gaps in data quality, weak system integration, and uneven readiness across healthcare settings. The evidence-informed checklist developed in this review is intended as a practical guide for IPC teams, helping them assess maturity, set priorities, and align implementation with international governance frameworks. 6,7 Moving AI from promising pilots to routine practice will require reliable data, robust integration, and above all, clear accountability to ensure safe and effective use.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/ash.2025.10191.

Data availability statement

All data supporting the findings of this review are contained within the manuscript and its supplementary materials.

Acknowledgments

This review was made possible through the voluntary efforts and collaborative dedication of a multidisciplinary team committed to advancing infection prevention and control. We thank all contributing reviewers and technical advisors who supported the identification, screening, and synthesis of evidence. Special thanks to those who participated in the refinement of the readiness checklist and provided critical feedback on its practical relevance and usability.

Author contribution

SG conceptualized the study, coordinated the review process, led the data extraction and analysis and wrote the first draft. GS contributed to the development of the search strategy, supervised the evidence screening process, and supported synthesis and interpretation of findings. ET provided methodological guidance and critically revised the manuscript for intellectual content. BA offered strategic oversight, validated key findings, and provided substantial input to the paper draft. All authors contributed to the drafting and revision of the manuscript, approved the final version, and take responsibility for the accuracy and integrity of the work.

Financial support

No external funding was received for this work, and all contributions were provided on a voluntary, non-commercial basis.

Competing interests

All authors report no conflicts of interest relevant to this article.

Disclaimer

The opinions expressed in this article are those of the authors and do not reflect the official position of WHO or Istituto Superiore di Sanità (ISS). WHO and ISS take no responsibility for the information provided or the views expressed in this article.

References

1. World Health Organization. Global report on infection prevention and control, 2022. License: CC BY-NC-SA 3.0 IGO. https://iris.who.int/handle/10665/354489
2. Ellahham, S. Role of artificial intelligence (AI) in infection control. International Journal of Science and Research (IJSR) 2023;12:1242–1247. https://doi.org/10.21275/sr21929143532
3. Maddox, T. M., Rumsfeld, J. S., Payne, P. R. O. Questions for artificial intelligence in health care. JAMA 2019;321:31–32. https://doi.org/10.1001/jama.2018.18932
4. Hanna, J. J., Medford, R. J. Navigating the future: machine learning's role in revolutionizing antimicrobial stewardship and infection prevention and control. Curr Opin Infect Dis 2024;37:290–295. https://doi.org/10.1097/QCO.0000000000001028
5. Topol, E. J. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25:44–56. https://doi.org/10.1038/s41591-018-0300-7
6. World Health Organization. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models, 2024. License: CC BY-NC-SA 3.0 IGO. https://iris.who.int/handle/10665/375579
7. European Union. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 12 July 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Off J Eur Union 2024;L 1689:168.
8. Aromataris, E., Lockwood, C., Porritt, K., Pilla, B., Jordan, Z. (eds). JBI Manual for Evidence Synthesis, 2024. https://jbi-global-wiki.refined.site/space/MANUAL. https://doi.org/10.46658/JBIMES-24-01
9. Peters, M. D. J., Godfrey, C., McInerney, P., Munn, Z., Tricco, A. C., Khalil, H. Scoping reviews. In: JBI Manual for Evidence Synthesis, 2020. https://synthesismanual.jbi.global. https://doi.org/10.46658/JBIMES-24-09
10. Ouzzani, M., Hammady, H., Fedorowicz, Z., Elmagarmid, A. Rayyan - a web and mobile app for systematic reviews. Syst Rev 2016;5:210. https://doi.org/10.1186/s13643-016-0384-4
11. Jawanpuria, A., Behera, A. R., Dash, C., Rahman, M. H. U. ChatGPT in hospital infection prevention and control – assessing knowledge of an AI model based on a validated questionnaire. Eur J Clin Exp Med 2024;22:347–352. https://doi.org/10.15584/ejcem.2024.2.19
12. Gutierrez-Naranjo, J. M., et al. A machine learning model to predict surgical site infection after surgery of lower extremity fractures. Int Orthop 2024;48:1887–1896. https://doi.org/10.1007/s00264-024-06194-5
13. Ke, C., et al. Prognostics of surgical site infections using dynamic health data. J Biomed Inform 2017;65:22–33. https://doi.org/10.1016/j.jbi.2016.10.021
14. Zhang, Q., et al. Construct validation of machine learning for accurately predicting the risk of postoperative surgical site infection (SSI) following spine surgery. J Hosp Infect 2024;146:232–241. https://doi.org/10.1016/j.jhin.2023.09.024
15. Bhatia, M., Sood, S. K., Kaur, M. Artificial intelligence-inspired comprehensive framework for COVID-19 outbreak control. Artif Intell Med 2022;127:102288. https://doi.org/10.1016/j.artmed.2022.102288
16. Liang, Q., Zhao, Q., Xu, X., Zhou, Y., Huang, M. Early prediction of carbapenem-resistant gram-negative bacterial carriage in intensive care units using machine learning. J Glob Antimicrob Resist 2022;29:225–231. https://doi.org/10.1016/j.jgar.2022.03.019
17. Wang, T., Xia, J., Wu, T., Ni, H., Long, E., Li, J.-P. O., Zhao, L., Chen, R., Wang, R., Xu, Y., Huang, K., Lin, H. Handwashing quality assessment via deep learning: a modeling study for monitoring compliance and standards in hospitals and communities. Intell Med 2022;2:152–160. https://doi.org/10.1016/j.imed.2022.03.005
18. Rabhi, S., Jakubowicz, J., Metzger, M.-H. Deep learning versus conventional machine learning for detection of healthcare-associated infections in French clinical narratives. Methods Inf Med 2018;57. https://doi.org/10.1055/s-0039-1677692
19. Marschollek, M., Marquet, M., Reinoso Schiller, N., et al. RISK PRINCIPE: development of a risk-stratified infection prevention project integrating surveillance and prediction. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz 2024;67:685–692. https://doi.org/10.1007/s00103-024-03882-w
20. Nistal-Nuño, B. A neural network for prediction of risk of nosocomial infection at intensive care units: a didactic preliminary model. Einstein (São Paulo) 2020;18. https://doi.org/10.31744/einstein_journal/2020AO5480
21. Haghpanah, M. A., Tale Masouleh, M., Kalhor, A., Akhavan Sarraf, E. A hand rubbing classification model based on image sequence enhanced by feature-based confidence metric. Signal Image Video Process 2023;17:2499–2509. https://doi.org/10.1007/s11760-022-02467-x
22. Cho, Y., Lee, H. K., Kim, J., Yoo, K.-B., Choi, J., Lee, Y., Choi, M. Prediction of hospital-acquired influenza using machine learning algorithms: a comparative study. BMC Infect Dis 2024;24:466. https://doi.org/10.1186/s12879-024-09358-1
23. Chen, Y., Zhang, Y., Nie, S., Ning, J., Wang, Q., Yuan, H., Wu, H., Li, B., Hu, W., Wu, C. Risk assessment and prediction of nosocomial infections based on surveillance data using machine learning methods. BMC Public Health 2024;24:1780. https://doi.org/10.1186/s12889-024-19096-3
24. Li, J., Yan, Z. Machine learning model predicting factors for incisional infection following right hemicolectomy for colon cancer. BMC Surg 2024;24:279. https://doi.org/10.1186/s12893-024-02543-8
25. Rennert-May, E., Leal, J., MacDonald, M. K., Cannon, K., Smith, S., Exner, D., Larios, O. E., Bush, K., Chew, D. Validating administrative data to identify complex surgical site infections following cardiac implantable electronic device implantation: a comparison of traditional methods and machine learning. Antimicrob Resist Infect Control 2022;11:138. https://doi.org/10.1186/s13756-022-01174-z
26. Li, S., Zhang, Y., Lin, Y., Zheng, L., Fang, K., Wu, J. Development and validation of prediction models for nosocomial infection and prognosis in hospitalized patients with cirrhosis. Antimicrob Resist Infect Control 2024;13:85. https://doi.org/10.1186/s13756-024-01444-y
27. Hattori, S., Sekido, R., Leong, I. W., Tsutsui, M., Arima, A., Tanaka, M., Yokota, K., Washio, T., Kawai, T., Okochi, M. Machine learning-driven electronic identifications of single pathogenic bacteria. Sci Rep 2020;10:15525. https://doi.org/10.1038/s41598-020-72508-3
28. Soguero-Ruiz, C., Wang, F., Jenssen, R., et al. Data-driven temporal prediction of surgical site infection. Sci Rep 2020;10:62083.
29. Shi, J., Liu, S., Pruitt, L. C., et al. Using natural language processing to improve EHR structured data-based surgical site infection surveillance. Sci Rep 2023;10:2246816.
30. Zakur, Y. A., Mirashrafi, S. B., Flaih, L. R. A comparative study on association rule mining algorithms on the hospital infection control dataset. Baghdad Sci J 2023. P-ISSN: 2078-8665, E-ISSN: 2411-7986.
31. Chester, A. N., Mandler, S. I. A comparison of ChatGPT and expert consensus statements on surgical site infection prevention in high-risk pediatric spine surgery. J Pediatr Orthop 2025;45:e72–e75. https://doi.org/10.1097/BPO.0000000000002781
32. Shrimali, S., Teuscher, C. A novel deep learning-, camera-, and sensor-based system for enforcing hand hygiene compliance in healthcare facilities. IEEE Sens J 2023;23:13659–13670. https://doi.org/10.1109/JSEN.2023.3271297
33. Skube, S. J., Hu, Z., Simon, G. J., Wick, E. C., Arsoniadis, E. G., Ko, C. Y., Melton, G. B. Accelerating surgical site infection abstraction with a semi-automated machine-learning approach. Ann Surg 2022;276:180–185. https://doi.org/10.1097/SLA.0000000000004354
34. Tvardika, N., Kergourlay, I., Bittar, A., Segond, F., Darmoni, S., Metzger, M.-H. Accuracy of using natural language processing methods for identifying healthcare-associated infections. Int J Med Inform 2018;117:96–102. https://doi.org/10.1016/j.ijmedinf.2018.06.002
35. Kamruzzaman, M., Heavey, J., Song, A., et al. Improving risk prediction of methicillin-resistant Staphylococcus aureus using machine learning methods with network features: retrospective development study. JMIR AI 2024;3:e48067. https://doi.org/10.2196/48067
36. Wu, J.-M., Tsai, C.-J., Ho, T.-W., Lai, F., Tai, H.-C., Lin, M.-T. A unified framework for automatic detection of wound infection with artificial intelligence. Appl Sci 2020;10:5353. https://doi.org/10.3390/app10155353
37. Zhang, P., White, J., Schmidt, D., Dennis, T. Applying machine learning methods to predict hand hygiene compliance characteristics. IEEE Int Conf Med Imaging 2017:353–356. https://doi.org/10.1109/BHI.2017.7897278
38. Bartz-Kurycki, M. A., Green, C., Anderson, K. T., et al. Enhanced neonatal surgical site infection prediction model utilizing statistically and clinically significant variables in combination with a machine learning algorithm. Am J Surg 2018;216:764–777. https://doi.org/10.1016/j.amjsurg.2018.07.041
39. Beeler, C., Dbeibo, L., Kelley, K., et al. Assessing patient risk of central line-associated bacteremia via machine learning. Am J Infect Control 2018;46:986–991. https://doi.org/10.1016/j.ajic.2018.02.021
40. Claudinon, J., Steltenkamp, S., Fink, M., et al. A label-free optical detection of pathogens in isopropanol as a first step towards real-time infection prevention. Biosensors 2021;11:2. https://doi.org/10.3390/bios11010002
41. Wiemken, T. L., Carrico, R. M. Assisting the infection preventionist: use of artificial intelligence for healthcare-associated infection surveillance. Am J Infect Control 2024;52:625–629. https://doi.org/10.1016/j.ajic.2024.02.007
42. Sundermann, A. J., Chen, J., Miller, J. K., et al. Outbreak of Pseudomonas aeruginosa infections from a contaminated gastroscope detected by whole genome sequencing surveillance. Clin Infect Dis 2021;73:42. https://doi.org/10.1093/cid/ciaa1887
43. Sundermann, A. J., Chen, J., Miller, J. K., et al. Whole-genome sequencing surveillance and machine learning of the electronic health record for enhanced healthcare outbreak detection. Clin Infect Dis 2022;75:476–482. https://doi.org/10.1093/cid/ciab946
44. Kim, M. S., Park, B., Sippel, G. J., et al. Comparative analysis of personal protective equipment nonadherence detection: computer vision versus human observers. J Am Med Inform Assoc 2025;32:163–171. https://doi.org/10.1093/jamia/ocae262
45. Cho, S. Y., Kim, Z., Chung, D. R., et al. Development of machine learning models for the surveillance of colon surgical site infections. J Hosp Infect 2024;146:224–231. https://doi.org/10.1016/j.jhin.2023.03.025
46. Khan, R. U., Almakdi, S., Alshehri, M., et al. Probabilistic approach to COVID-19 data analysis and forecasting future outbreaks using a multi-layer perceptron neural network. Diagnostics 2022;12:2539. https://doi.org/10.3390/diagnostics12102539
47. Özakar, R., Gedikli, E. Evaluation of hand washing procedure using vision-based frame level and spatio-temporal level data models. Electronics 2023;12:2024.
48. Ying, H., Guo, B. W., Wu, H. J., Zhu, R. P., Liu, W. C., Zhong, H. F. Using multiple indicators to predict the risk of surgical site infection after ORIF of tibia fractures: a machine learning-based study. Front Cell Infect Microbiol 2023;13:1206393. https://doi.org/10.3389/fcimb.2023.1206393
49. Simioli, F., Annunziata, A., Coppola, A., et al. Artificial intelligence for training and reporting infection prevention measures in critical wards. Front Public Health 2024;12:1442188. https://doi.org/10.3389/fpubh.2024.1442188
50. Wang, J., Wang, G., Wang, Y., Wang, Y. Development and evaluation of a model for predicting the risk of healthcare-associated infections in ICU patients. Front Public Health 2024;12:1444176. https://doi.org/10.3389/fpubh.2024.1444176
51. Pasupuleti, D. Handwashing action detection system for an autonomous social robot. TENCON 2022, IEEE Region 10 International Conference, 2022. https://doi.org/10.1109/TENCON55691.2022.9977684
52. Gouareb, R., Bornet, A., Proios, D., Pereira, S. G., Teodoro, D. Detection of patients at risk of multidrug-resistant Enterobacteriaceae infection using graph neural networks: a retrospective study. Health Data Sci 2023;3:0099. https://doi.org/10.34133/hds.0099
53. Hopkins, B. S., Mazmudar, A., Driscoll, C., Sveta, M., Goergen, J., Kelsten, M., et al. Using artificial intelligence (AI) to predict postoperative surgical site infection: a retrospective cohort of 4046 posterior spinal fusions. Clin Neurol Neurosurg 2020;192:105718. https://doi.org/10.1016/j.clineuro.2020.105718
54. Lukasewicz Ferreira, S. A., Franco Meneses, A. C., Vaz, T. A., et al. Hospital-acquired infections surveillance: the machine-learning algorithm mirrors National Healthcare Safety Network definitions. Infect Control Hosp Epidemiol 2024;45:604–608. https://doi.org/10.1017/ice.2023.224
55. Segal, R., Bradley, W. P., Williams, D. L., et al. Human-machine collaboration using artificial intelligence to enhance the safety of donning and doffing personal protective equipment (PPE). Infect Control Hosp Epidemiol 2023;44:732–735. https://doi.org/10.1017/ice.2022.169
56. Montella, E., Ferraro, A., Sperlì, G., Triassi, M., Santini, S., Improta, G. Predictive analysis of healthcare-associated blood stream infections in the neonatal intensive care unit using artificial intelligence: a single center study. Int J Environ Res Public Health 2022;19:2498. https://doi.org/10.3390/ijerph19052498
57. Platt, L. S., Chen, X., Sabo-Attwood, T., Iovine, N., Brown, S., Pollitt, B. Improving infection prevention briefing through predictive predesign: a computational approach to architectural programming by evaluating socioecological risk factors. Arch Eng Des Manag 2024;20:776–788.
58. Cotia, A. L. F., Scorsato, A. P., Prado, M., et al. Integration of an electronic hand hygiene auditing system with electronic health records using machine learning to predict hospital-acquired infection in a health care setting. Am J Infect Control 2025;53:58–64. https://doi.org/10.1016/j.ajic.2024.09.012
59. Jakobsen, R. S., et al. A study on the risk stratification for patients within 24 hours of admission for risk of hospital-acquired urinary tract infection using Bayesian network models. Health Inform J 2024. https://doi.org/10.1177/14604582241234232
60. Abubeker, K. M., et al. Internet-of-things-assisted wireless body area network-enabled biosensor framework for detecting ventilator and hospital-acquired pneumonia. IEEE Sens J 2024. https://doi.org/10.1109/JSEN.2024.3361158
61. Lu, K., et al. Machine learning application for prediction of surgical site infection after posterior cervical surgery. Int Wound J 2024;21:e14607. https://doi.org/10.1111/iwj.14607
62. Liu, T., Bai, Y., Du, M., Gao, Y., Liu, Y. Susceptible-infected-removed mathematical model under deep learning in hospital infection control of novel coronavirus pneumonia. J Healthc Eng 2021;2021:1535046. https://doi.org/10.1155/2021/1535046
63. Kim, M., Choi, J., Jo, J.-Y., Kim, W.-J., Kim, S.-H., Kim, N. Video-based automatic hand hygiene detection for operating rooms using 3D convolutional neural networks. J Clin Monit Comput 2024;38:1187–1197. https://doi.org/10.1007/s10877-024-01179-6
64. Kiser, A. C., Shi, J., Bucher, B. T. An explainable long short-term memory network for surgical site infection identification. Surgery 2024;176:24–31. https://doi.org/10.1016/j.surg.2024.03.006
65. Mamlook, R. E. A., Wells, L. J., Sawyer, R. Machine-learning models for predicting surgical site infections using patient pre-operative risk and surgical procedure factors. Am J Infect Control 2023;51:544–550. https://doi.org/10.1016/j.ajic.2022.08.013
66. Haghpanah, M. A., Vali, S., Torkamani, A. M., et al. Real-time hand rubbing quality estimation using deep learning enhanced by separation index and feature-based confidence metric. Expert Syst Appl 2023;218:119588. https://doi.org/10.1016/j.eswa.2023.119588
67. Van Lissa, C. J., Stroebe, W., vanDellen, M. R., et al. Using machine learning to identify important predictors of COVID-19 infection prevention behaviors during the early phase of the pandemic. Patterns 2022;3:100482. https://doi.org/10.1016/j.patter.2022.100482
68. Lintz, J. Provider satisfaction with artificial intelligence-based hand hygiene monitoring system during the COVID-19 pandemic: study of a rural medical center. J Chiropr Med 2023;22:197–203. https://doi.org/10.1016/j.jcm.2023.03.004
69. Hu, D., Zhong, H., Li, S., Tan, J., He, Q. Segmenting areas of potential contamination for adaptive robotic disinfection in built environments. Build Environ 2020;184:107226. https://doi.org/10.1016/j.buildenv.2020.107226
70. Myall, A., Price, J. R., Peach, R. L., et al. Prediction of hospital-onset COVID-19 infections using dynamic networks of patient contact: an international retrospective cohort study. Lancet Digit Health 2022;4. https://doi.org/10.1016/S2589-7500(22)00093-0
71. Sun, C. L. F., Zuccarelli, E., Zerhouni, E. G. A., et al. Predicting COVID-19 infection risk and related risk drivers in nursing homes: a machine learning approach. JAMDA 2020;21:1533–1538. https://doi.org/10.1016/j.jamda.2020.08.030
72. dos Santos, R. P., Silva, D., Menezes, A., et al. Automated healthcare-associated infection surveillance using an artificial intelligence algorithm. Infect Prev Pract 2021;3:100167. https://doi.org/10.1016/j.infpip.2021.100167
73. Marra, A. R., Alzunitan, M., Abosi, O., et al. Modest Clostridioides difficile infection prediction using machine learning models in a tertiary care hospital. Diagn Microbiol Infect Dis 2020;98:115104. https://doi.org/10.1016/j.diagmicrobio.2020.115104
74. Chen, W., Lu, Z., You, L., Zhou, L., Xu, J., Chen, K. Artificial intelligence-based multimodal risk assessment model for surgical site infection (AMRAMS): development and validation study. JMIR Med Inform 2020;8. https://doi.org/10.2196/18186
75. Tunthanathip, T., Sae-heng, S., Oearsakul, T., et al. Machine learning applications for the prediction of surgical site infection in neurological operations. Neurosurg Focus 2019;47. https://doi.org/10.3171/2019.5.FOCUS19241
76. Zhu, Y., Simon, G. J., Wick, E. C., et al. Applying machine learning across sites: external validation of a surgical site infection detection algorithm. J Am Coll Surg 2021;232:963–971.e1. https://doi.org/10.1016/j.jamcollsurg.2021.03.026
77. Bucher, B. T., Shi, J., Ferraro, J. P., et al. Portable automated surveillance of surgical site infections using natural language processing: development and validation. Ann Surg 2020;272:629–636. https://doi.org/10.1097/SLA.0000000000004133
78. Chen, K. A., Joisa, C. U., Stem, J., et al. Predicting surgical site infection after colorectal surgery using machine learning. Dis Colon Rectum 2023;66:458–466. https://doi.org/10.1097/DCR.0000000000002559
79. Colborn, K. L., Zhuang, Y., Dyas, A. R., et al. Development and validation of models for detection of postoperative infections using structured electronic health records data and machine learning. Surgery 2023;173:464–471. https://doi.org/10.1016/j.surg.2022.10.026
80. Sanger, P. C., van Ramshorst, G. H., Mercan, E., et al. A prognostic model of surgical site infection using daily clinical wound assessment. J Am Coll Surg 2016;223:259–270.e2. https://doi.org/10.1016/j.jamcollsurg.2016.04.046
81. Sohn, S., Larson, D. W., Habermann, E. B., et al. Detection of clinically important colorectal surgical site infection using Bayesian network. J Surg Res 2017;209:168–173. https://doi.org/10.1016/j.jss.2016.09.058
82. Zachariah, P., Sanabria, E., Liu, J., et al. Novel strategies for predicting healthcare-associated infections at admission: implications for nursing care. Nurs Res 2020;69:399–403. https://doi.org/10.1097/NNR.0000000000000449
83. Yang, H., Tourani, R., Zhu, Y., et al. Strategies for building robust prediction models using data unavailable at prediction time. J Am Med Inform Assoc 2022;29:72–79. https://doi.org/10.1093/jamia/ocab229
84. Özdede, M., Zarakolu, P., Metan, G., et al. Predictive modeling of mortality in carbapenem-resistant Acinetobacter baumannii bloodstream infections using machine learning. J Investig Med 2023;72:684–696. https://doi.org/10.1177/10815589241258964
85. Ferrari, D., Arina, P., Edgeworth, J., et al. Using interpretable machine learning to predict bloodstream infection and antimicrobial resistance in ICU patients: early alert predictors based on EHR data. PLOS Digit Health 2024;3. https://doi.org/10.1371/journal.pdig.0000641
86. van der Werff, S. D., Thiman, E., Tanushi, H., et al. The accuracy of fully automated algorithms for surveillance of healthcare-associated urinary tract infections in hospitalized patients. J Hosp Infect 2021;110:139–147. https://doi.org/10.1016/j.jhin.2021.01.023
87. Panchavati, S., Zelin, N. S., Garikipati, A., et al. A comparative analysis of machine learning approaches to predict C. difficile infection in hospitalized patients. Am J Infect Control 2022;50:250–257. https://doi.org/10.1016/j.ajic.2021.11.012
88. da Silva, D. A., ten Caten, C. S., dos Santos, R. P., Fogliatto, F. S., Hsuan, J. Predicting the occurrence of surgical site infections using text mining and machine learning. PLoS One 2019;14. https://doi.org/10.1371/journal.pone.0226272
89. Møller, J. K., Sørensen, M., Hardahl, C., Pappalardo, F. Prediction of risk of acquiring urinary tract infection during hospital stay based on machine-learning: a retrospective cohort study. PLoS One 2021;16. https://doi.org/10.1371/journal.pone.0248636
90. Peng, H.-Y., Lin, Y.-K., Nguyen, P.-A., et al. Determinants of coronavirus disease 2019 infection by artificial intelligence technology: a study of 28 countries. PLoS One 2022;17. https://doi.org/10.1371/journal.pone.0272546
Zhuang, Y., Dyas, A., Meguid, R. A., et al. Preoperative prediction of postoperative infections using machine learning and electronic health record data. Ann Surg. 2024;279:720726. https://doi.org/10.1097/SLA.0000000000006106.CrossRefGoogle ScholarPubMed
Rafaqat, W., Fatima, H. S., Kumar, A., Khan, S., Khurram, M. Machine learning model for assessment of risk factors and postoperative day for superficial vs deep/organ-space surgical site infections. Surg Innov. 2023;30:455462. https://doi.org/10.1177/15533506231170933.CrossRefGoogle ScholarPubMed
Yeo, I., Klemt, C., Robinson, M. G., et al. The use of artificial neural networks for the prediction of surgical site infection following TKA. J Knee Surg. 2023;36:637643. https://doi.org/10.1055/s-0041-1741396.Google ScholarPubMed
Roimi, M., Neuberger, A., Shrot, A., et al. Early diagnosis of bloodstream infections in the intensive care unit using machine-learning algorithms. Intensive Care Med. 2020;46:454462. https://doi.org/10.1007/s00134-019-05876-8.CrossRefGoogle ScholarPubMed
Li, M. P., Liu, W. C., Wu, J. B., et al. Machine learning for the prediction of postoperative nosocomial pulmonary infection in patients with spinal cord injury. Eur Spine J. 2023;32:38253835. https://doi.org/10.1007/s00586-023-07772-8.CrossRefGoogle ScholarPubMed
Asif, S., Xu, X., Zhao, M., et al. ResMFuse-net: residual-based multilevel fused network with spatial-temporal features for hand hygiene monitoring. Appl Intell. 2024;54:36063628. https://doi.org/10.1007/s10489-024-05305-4.CrossRefGoogle Scholar
Petrosyan, Y., Thavorn, K., Smith, G., et al. Predicting postoperative surgical site infection with administrative data: a random forests algorithm. BMC Med Res Methodol. 2021;21:179. https://doi.org/10.1186/s12874-021-01369-9.CrossRefGoogle Scholar
Verberk, J. D. M., van der Werff, S. D., et al. Augmented value of using clinical notes in semi-automated surveillance of deep surgical site infections after colorectal surgery. Antimicrob Resist Infect Control. 2023;12:117. https://doi.org/10.1186/s13756-023-01316-x.CrossRefGoogle ScholarPubMed
Wang, M., Li, W., Hui, W., et al. Development and validation of machine learning-based models for predicting healthcare-associated bacterial/fungal infections among COVID-19 inpatients: a retrospective cohort study. Antimicrob Resist Infect Control. 2024;13:42. https://doi.org/10.1186/s13756-024-01392-7.CrossRefGoogle ScholarPubMed
Kim, D., Canovas-Segura, B. et al. Spatial-temporal simulation for hospital infection spread and outbreaks of clostridioides difficile. Sci Rep. 2023;13:20022. 10.1038/s41598-023-47296-1 10.1038/s41598-023-47296-1CrossRefGoogle ScholarPubMed
Atkinson, A., Ellenberger, B., Piezzi, V., et al. Extending outbreak investigation with machine learning and graph theory: benefits of new tools with application to a nosocomial outbreak of a multidrug-resistant organism. Infect Control Hosp Epidemiol. 2023;44:246252. https://doi.org/10.1017/ice.2022.66.CrossRefGoogle ScholarPubMed
Ötleş E, Balczewski, E. A., Keidan, M., et al. Clostridioides difficile infection surveillance in intensive care units and oncology wards using machine learning. Infect Control Hosp Epidemiol. 2023;44:17761781. 10.1017/ice.2023.54.Google Scholar
Savin, I., Ershova, K., Kurdyumova, N., et al. Healthcare-associated ventriculitis and meningitis in a neuro-ICU: incidence and risk factors selected by machine learning approach. J Crit Care. 2018;45:95104. https://doi.org/10.1016/j.jcrc.2018.01.022.CrossRefGoogle Scholar
Lyu, J. W., Zhang, X. D., Tang, J. W., et al. Rapid prediction of multidrug-resistant Klebsiella pneumoniae through deep learning analysis of SERS spectra. Microbiol Spectr. 2023;11. https://doi.org/10.1128/spectrum.04126-22.CrossRefGoogle Scholar
Prey, B. J., Colburn, Z. T., Williams, J. M., et al. The use of mobile thermal imaging and machine learning technology for the detection of early surgical site infections. Am J Surg. 2024;231:6064. https://doi.org/10.1016/j.amjsurg.2023.04.011.CrossRefGoogle ScholarPubMed
Lee, J. H., Shim, J. W., Lim, M. H., et al. Towards optimal design of patient isolation units in emergency rooms to prevent airborne virus transmission: from computational fluid dynamics to data-driven modeling. Comput Biol Med. 2024;173:108309. https://doi.org/10.1016/j.compbiomed.2024.108309.CrossRefGoogle ScholarPubMed
Fletcher, R. R., Schneider, G., Hedt-Gauthier, B., et al. Use of convolutional neural networks (CNN) and transfer learning for prediction of surgical site infection from color images. IEEE Eng Med Biol Soc. 2021;43:5047. https://doi.org/10.1109/EMBC.2021.9630430.Google Scholar
Flores-Balado, Á., Méndez, C.C., González, A.H., et al. Using artificial intelligence (AI) to reduce orthopedic surgical site infection surveillance workload: algorithm design, validation, and implementation in 4 Spanish hospitals. Am J Infect Control. 2023;51:12251229. 10.1016/j.ajic.2023.04.165.10.1016/j.ajic.2023.04.165CrossRefGoogle ScholarPubMed
Xu, W., Yang, H., Chen, J., et al. WashRing: An energy-efficient and highly accurate handwashing monitoring system via smart ring. IEEE Trans Mob Comput. 2024;23:971989. https://doi.org/10.1109/TMC.2022.3227299.CrossRefGoogle Scholar
Yu, T., Fu, Y., He, J., et al. Identification of antibiotic resistance in ESKAPE pathogens through plasmonic nanosensors and machine learning. ACS Nano. 2023;17:45514563. https://doi.org/10.1021/acsnano.2c10584.CrossRefGoogle ScholarPubMed
Healthcare Information and Management Systems Society (HIMSS). Electronic Medical Record Adoption Model (EMRAM): Criteria and Methodology. Chicago: HIMSS, 2021. https://www.himss.org/sites/hde/files/media/file/2021/06/04/himss-emram-criteria.pdf.Google Scholar
Damschroder, L. J., Aron, D. C., Keith, R. E. et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Sci. 2009; 4: 50. https://doi.org/10.1186/1748-5908-4-50.CrossRefGoogle Scholar
Figure 1. PRISMA 2020 flow diagram for the scoping review.
Table 1. Design standardized classification distribution
Table 2. AI technology categories distribution
Figure 2. IPC applications distribution.
Table 3. IPC applications supported by AI distribution
Figure 3. Barrier categories distribution.
Table 4. Barrier categories
Figure 4. Risk categories distribution.
Table 5. Risk categories
Table 6. Structured readiness checklist for AI implementation in IPC

Supplementary material: Gastaldi et al. supplementary material 1 (File, 22.3 KB)
Supplementary material: Gastaldi et al. supplementary material 2 (File, 72.1 KB)