
Incidence of surgical site infections cannot be derived reliably from point prevalence survey data in Dutch hospitals

Published online by Cambridge University Press:  09 January 2017

A. P. MEIJS*
Affiliation:
Centre for Infectious Disease Control, Department of Epidemiology and Surveillance,National Institute of Public Health and the Environment (RIVM), Bilthoven, The Netherlands
J. A. FERREIRA
Affiliation:
Department of Statistics, Informatics and Modelling, National Institute of Public Health and the Environment (RIVM), Bilthoven, The Netherlands
S. C. DE GREEFF
Affiliation:
Centre for Infectious Disease Control, Department of Epidemiology and Surveillance,National Institute of Public Health and the Environment (RIVM), Bilthoven, The Netherlands
M. C. VOS
Affiliation:
Department of Medical Microbiology and Infectious Diseases, Erasmus MC, Rotterdam, The Netherlands
M. B. G. KOEK
Affiliation:
Centre for Infectious Disease Control, Department of Epidemiology and Surveillance,National Institute of Public Health and the Environment (RIVM), Bilthoven, The Netherlands
*
*Author for correspondence: Ms. A. P. Meijs, Centre for Infectious Disease Control, Department of Epidemiology and Surveillance, National Institute of Public Health and the Environment (RIVM), PO Box 1, 3720 BA, Bilthoven, The Netherlands. (Email: anouk.meijs@rivm.nl)

Summary

Thorough studies of whether point prevalence surveys of healthcare-associated infections (HAIs) can be used to reliably estimate the incidence of surgical site infections (SSIs) are scarce. We examined this question using surveillance data from 58 hospitals that participated in two Dutch national surveillance programmes: HAI prevalence surveys and SSI incidence surveillance. First, we simulated daily prevalences of SSIs from incidence data. Subsequently, Rhame & Sudderth's formula was used to estimate SSI incidence from prevalence. Finally, we developed random-effects models to predict SSI incidence from prevalence and other relevant variables. The prevalences simulated from incidence data varied greatly from day to day. Incidences calculated with Rhame & Sudderth's formula were often below zero, owing to the large number of SSIs occurring post-discharge. Excluding these SSIs still resulted in poor correlation between calculated and observed incidence. The two models best predicting total incidence and incidence during the initial hospital stay both performed poorly (proportion of explained variance of 0·25 and 0·10, respectively). In conclusion, the incidence of SSIs cannot be reliably estimated from point prevalence data in Dutch hospitals by any of the applied methods. We therefore conclude that prevalence surveys are not a useful measure for gaining reliable insight into the incidence of SSIs.

Type
Original Papers
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Cambridge University Press 2017

INTRODUCTION

A surgical site infection (SSI) is a severe surgical complication and is among the most frequently reported types of healthcare-associated infection (HAI) [1]. SSIs are associated with increased morbidity and mortality, as well as prolonged hospital stay and a high number of hospital readmissions [2–4]. Active surveillance has proved to be an effective tool in infection control programmes [5–9], although the risk reduction attributed to surveillance varies [8, 10, 11]. The gold standard for SSI surveillance is prospective incidence surveillance [12, 13], which gives accurate and detailed information on the occurrence of new cases within a standardized follow-up period. However, to provide reliable estimates it typically requires prolonged data collection. Recently, an increasing number of countries, as well as the American and European Centers for Disease Control and Prevention (CDC and ECDC), have started to include prevalence surveys in their HAI surveillance programmes [14, 15]. Prevalence surveys measure the proportion of infections at one point in time (or over a short period) in all patients hospitalized at that time. They are attractive because they are less labour-intensive and cheaper than incidence surveillance. On the other hand, they are less specific, since they usually include all types of HAI. When performed regularly, however, they can be used to visualize trends in the occurrence of infections or to evaluate infection control programmes [16]. Despite the different objectives of the two surveillance methods, several studies have attempted to convert HAI prevalence into incidence.
For this purpose, several formulas have been developed [17–19], all based on the relationship between incidence and prevalence via the estimated duration of the infection. The most frequently used is that of Rhame & Sudderth [13, 20–23], which, however, has not been extensively applied or studied for calculating the incidence of SSIs [15, 21, 24].

In the Dutch national surveillance network on nosocomial infections (PREZIES), SSI incidence surveillance as well as biannual point prevalence surveys of all HAIs are carried out. Recently, it was proposed to use the results of the prevalence surveys to estimate SSI incidence, in order to substantially reduce the national burden of HAI surveillance. As the available evidence on using prevalence surveys for this purpose was insufficient for The Netherlands, the present study investigates whether SSI incidence can be reliably predicted from SSI prevalence data reported by Dutch hospitals.

METHODS

SSI incidence surveillance

The SSI incidence surveillance of PREZIES is an ongoing programme in which hospitals participate voluntarily. Information concerning patient and operation characteristics, risk factors for SSIs and the occurrence of an SSI is collected. The definitions used to diagnose SSIs are based on those of the ECDC and CDC [25–27]. As a large proportion of SSIs develop after discharge from the hospital, post-discharge surveillance is necessary to detect these infections. A more detailed description of the incidence surveillance has been published previously [8, 28, 29].

HAI prevalence surveys

PREZIES HAI prevalence surveys are performed twice a year, in March and October. All inpatients admitted to the hospital before the day of the survey and aged ⩾1 year are included. The collected data include patient characteristics as well as risk factors for HAIs and the presence of any type of HAI. The definitions used to diagnose SSIs related to the current hospital stay are the same as those used in the incidence surveillance. For SSIs present at admission (and therefore related to a previous hospitalization), diagnosis is based on patient history rather than on surveillance definitions, and infections are reported even when they have already been cured by the day of the survey. The methods used in the HAI prevalence survey have been described in detail elsewhere [30].

Data selection

We included data from hospitals participating simultaneously in both surveillance programmes in the years 2007–2011, linking cumulative SSI incidence to the prevalence in each hospital per year. Specialities with fewer than 20 surgeries in either surveillance programme were excluded. No outbreaks of SSIs were reported by the hospitals during the study period. Cumulative incidence was calculated by dividing the number of operations resulting in an SSI by the total number of operations performed per surveillance year. Prevalence of SSIs was calculated as the number of SSIs (active or under treatment and related to the current hospital stay) at the time of the survey, divided by the total number of surgical patients in the hospital at the time of the survey. When hospitals participated in two prevalence surveys in the same year, a combined prevalence was calculated using the total number of surgical patients hospitalized during both surveys. SSIs present at admission were excluded from the prevalence surveys, as their diagnosis is not based on surveillance definitions and because these patients did not necessarily undergo surgery during their readmission and may therefore be missing from the denominator (total number of surgical patients in the hospital).
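The incidence and prevalence calculations described above reduce to simple ratios. The sketch below illustrates them; all counts and function names are hypothetical, not taken from the PREZIES data:

```python
def cumulative_incidence(n_operations_with_ssi, n_operations):
    """Operations resulting in an SSI divided by all operations in the surveillance year."""
    return n_operations_with_ssi / n_operations

def combined_prevalence(ssi_counts, surgical_patient_counts):
    """Pool two surveys in one year: total SSIs over total surgical inpatients."""
    return sum(ssi_counts) / sum(surgical_patient_counts)

# Hypothetical hospital-year: 25 of 1000 operations led to an SSI;
# two surveys found 6 SSIs among 180 and 4 among 220 surgical inpatients.
incidence = cumulative_incidence(25, 1000)
prevalence = combined_prevalence([6, 4], [180, 220])
```

Note that the two quantities have different denominators (operations per year vs. patients present on the survey day), which is part of why converting one into the other is non-trivial.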

For our analyses, we produced three datasets aggregated at hospital level. The first, dataset I, included incidence based on all reported SSIs. Dataset II included incidence based on SSIs diagnosed during the initial hospital stay only. Finally, dataset III was based on prevalence data and included only SSIs diagnosed during the initial hospital stay.

Data analysis

First, we assessed the variability of daily point prevalence rates within 1 month. For this purpose, daily prevalence of SSIs was simulated from incidence data for each day of the month in which the prevalence survey might be performed. Only SSIs detected during the initial hospital stay were included (dataset II). Daily prevalence was calculated both at the hospital and the national level.
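The simulation above can be sketched as follows, assuming each incidence record carries admission, discharge and (optional) SSI-onset dates; the records and field names here are hypothetical:

```python
from datetime import date

# Hypothetical incidence-surveillance records: one per surgical patient, with
# admission and discharge dates and the SSI onset date (None if no SSI).
patients = [
    {"admit": date(2010, 3, 1), "discharge": date(2010, 3, 10), "ssi_onset": date(2010, 3, 5)},
    {"admit": date(2010, 3, 3), "discharge": date(2010, 3, 20), "ssi_onset": None},
    {"admit": date(2010, 3, 8), "discharge": date(2010, 3, 25), "ssi_onset": date(2010, 3, 12)},
]

def daily_prevalence(patients, day):
    """SSI prevalence on `day`: inpatients whose SSI has manifested by `day`,
    divided by all surgical inpatients present on `day` (None if no inpatients)."""
    inpatients = [p for p in patients if p["admit"] <= day < p["discharge"]]
    if not inpatients:
        return None
    with_ssi = [p for p in inpatients
                if p["ssi_onset"] is not None and p["ssi_onset"] <= day]
    return len(with_ssi) / len(inpatients)

# "Perform" the survey on every day of March 2010.
series = {d: daily_prevalence(patients, date(2010, 3, d)) for d in range(1, 32)}
```

Even in this toy example the simulated prevalence jumps between 0%, 50% and undefined depending on the survey day, which is the kind of day-to-day variability the analysis assesses.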

Second, we used the formula of Rhame & Sudderth to estimate annual SSI incidence from prevalence (dataset III) [17]. The relationship between incidence (I) and prevalence (P) in this method is

$$I = P \times \frac{LA}{LN - INT}.$$

Applied to our study, LA represents the mean length of hospitalization of all patients, LN is the mean length of hospitalization of patients who acquire an SSI, and INT is the mean interval between hospital admission and onset of the SSI.

Since these values could not be derived from the prevalence surveys, they were calculated based on the incidence surveillance. We included, for each hospital, only the specialities that were registered in both surveillance programmes. The values were then calculated taking into account the distribution of patients over the different specialities in the prevalence survey. The accuracy of Rhame & Sudderth's formula was evaluated by comparing the estimated incidence to the observed incidence. The analysis was performed twice, first using incidence based on all SSIs (dataset I) and thereafter including only SSIs detected during initial hospital stay (dataset II).
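The formula itself is a one-line calculation. The sketch below (with hypothetical values) also illustrates the failure mode discussed later: when the mean admission-to-onset interval (INT) exceeds the mean stay of infected patients (LN), as happens when most SSIs arise post-discharge, the denominator turns negative and so does the estimated incidence:

```python
def rhame_sudderth_incidence(prevalence, la, ln, int_):
    """Rhame & Sudderth: I = P * LA / (LN - INT).
    la   - mean length of stay of all patients
    ln   - mean length of stay of patients who acquire an SSI
    int_ - mean interval between admission and SSI onset"""
    return prevalence * la / (ln - int_)

# In-hospital scenario: onset precedes discharge on average -> sensible estimate.
est_in_hospital = rhame_sudderth_incidence(0.04, la=6.0, ln=15.0, int_=9.0)

# Post-discharge-dominated scenario: INT > LN -> negative "incidence".
est_post_discharge = rhame_sudderth_incidence(0.04, la=6.0, ln=8.0, int_=14.0)
```

With the first set of values the estimate is 4%; with the second, the same prevalence yields −4%, a value without epidemiological meaning.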

In the third part of our analyses we developed a linear model to predict SSI incidence from prevalence and other relevant variables, including LA, LN, INT, LN minus INT, hospital type, gender, median age and wound class. To account for assumed differences in infection risk between hospitals, hospital was included as a random effect. The model was fitted twice, using both the incidence of all SSIs (dataset I) and that of SSIs detected during the initial hospital stay only (dataset II). Because the relationship between incidence and the predictor variables was more likely to be multiplicative than additive, the prediction models adopted were linear on log-transformed variables, but the predictions of the log-incidence were transformed back to yield predictions of incidence 'on the normal scale'. The predictive performance of the models was assessed by leave-one-out cross-validation [31]. We quantified the performance of the models by the proportion of explained variance and by the distribution of the percent difference between the predicted and observed incidence:

$$\text{Percent difference} = \frac{\text{predicted} - \text{observed}}{\text{observed}} \times 100\%.$$

The analyses of the incidence prediction models were performed with R statistical software v. 3.0.1 (R Foundation, Austria). All other analyses were performed with SAS v. 9.3 (SAS Institute Inc., USA).
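The authors fitted their random-effects models in R. As an illustration only, the core of the evaluation procedure (log-transformed linear model, leave-one-out cross-validation, back-transformation and percent difference) can be sketched in Python on simulated data; the random effect is omitted here for brevity, so this is not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated hospital-year data (hypothetical): prevalence, mean length of
# stay (LA) and an observed incidence noisily related to prevalence.
n = 40
prevalence = rng.uniform(0.01, 0.08, n)
la = rng.uniform(4.0, 10.0, n)
incidence = prevalence * rng.uniform(0.5, 2.0, n)

# Linear model on log-transformed variables (fixed effects only).
X = np.column_stack([np.ones(n), np.log(prevalence), np.log(la)])
y = np.log(incidence)

# Leave-one-out cross-validation: refit without observation i, then predict it.
pred_log = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
    pred_log[i] = X[i] @ beta

predicted = np.exp(pred_log)   # back to the normal scale
observed = np.exp(y)
percent_diff = (predicted - observed) / observed * 100

# Proportion of explained variance of the cross-validated predictions (log scale).
explained = 1 - np.sum((y - pred_log) ** 2) / np.sum((y - y.mean()) ** 2)
```

Back-transforming before computing the percent difference matters: errors that look symmetric on the log scale become right-skewed on the normal scale, which is visible in the wide, asymmetric prediction intervals reported in the Results.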

RESULTS

Fifty-eight hospitals that participated in both surveillance programmes for 1–5 years were matched. Figure 1 shows the flowchart of the data inclusion process. Among the 90 337 included surgeries from the incidence surveillance (ranging from 4613 in 2007 to 37 246 in 2011), 2502 SSIs were observed, of which 838 (33%) were diagnosed during the initial hospitalization. We included 13 288 surgical patients from the prevalence surveys (ranging from 721 in 2007 to 4258 in 2011), of whom 517 had an SSI related to the current hospital stay at the time of the survey. Table 1 presents the number of surgical patients, SSIs, lengths of stay and patient characteristics per speciality. The results are presented separately for the dataset including all SSIs (dataset I) and the dataset including only SSIs diagnosed during the initial hospital stay (dataset II). The length of stay of the total patient population (LA) is the same in both datasets, but in dataset II the value of LN is higher and that of INT lower, as a result of selecting only SSIs diagnosed during admission.

Fig. 1. Flowchart of data inclusion. EG, Endocrine glands; ER, ears, nose and throat; EY, ophthalmology; UNK, unknown speciality.

Table 1. Number of surgical patients, SSIs and patient characteristics in the incidence surveillance and prevalence surveys, per speciality

BL, Blood and lymphatic system; BR, Breast; FGO, Female genital organs; GI, Gastrointestinal system; HCV, Heart and central vascular system; MGO, Male genital organs; MS, Musculoskeletal system; NS, Nervous system; OBS, Obstetrics; PV, Peripheral vascular system; RS, Respiratory system; RU, Renal and urinal system; SS, Skin and subcutaneous tissue.

INT, mean interval between hospital admission and onset of the SSI; IQR, interquartile range; LA, mean length of hospitalization of all patients; LN, mean length of hospitalization of patients who acquire an SSI; SSI, surgical site infection; UNK, Unknown.

Prevalence simulations

Daily prevalence simulated from incidence surveillance dataset II showed substantial variation during the month, both at hospital and national levels (Fig. 2). As expected, the daily variation in prevalence was larger for individual hospitals. Estimates from other hospitals and national survey months yielded results comparable to the ones presented.

Fig. 2. Daily prevalence simulated from incidence dataset II (a) at a single hospital in March 2010 and (b) at national level in October 2011.

Rhame & Sudderth method

Estimates of SSI incidence according to the Rhame & Sudderth method were in most cases lower than the incidence surveillance observations based on all reported SSIs (dataset I), and only five (6·1%) out of 82 estimates fell within the 95% confidence interval (CI) for the observed incidence (Fig. 3). The Spearman correlation coefficient between estimated and observed incidence was 0·22, indicating a very weak association. The results presented in Figure 4 are analogous to those of Figure 3 but are based on only those SSIs diagnosed during initial hospital stay (dataset II). Out of 70 estimates, 29 (41%) fell within the 95% CI for the observed incidence. The Spearman correlation coefficient between estimated and observed incidence was 0·35.

Fig. 3. Comparison of observed and estimated incidence of surgical site infections (SSIs) per year at hospital level, for all reported SSIs (dataset I). Estimated incidence was calculated using the Rhame & Sudderth method. One extreme pair of points (observed incidence 7·1%, estimated incidence 105·5%) is not displayed. The diagonal line represents the situation in which the observed and estimated incidence are equal.

Fig. 4. Comparison of observed and estimated incidence of surgical site infections (SSIs) per year at hospital level, for SSIs occurring during the initial hospital stay (dataset II). Estimated incidence was calculated using the Rhame & Sudderth method. Two extreme pairs of points (observed incidence 0·5%, estimated incidence 32·7%; and observed incidence 0·4%, estimated incidence 25·6%) are not displayed. The diagonal line represents the situation in which the observed and estimated incidence are equal.

Incidence prediction model

Figures 5 and 6 show the results of the models best predicting total SSI incidence (dataset I) and incidence of SSIs during initial hospital stay (dataset II), respectively. Variables included in the best performing models were prevalence, LA, LN, gender and hospital as random effect. The performance of the models was quantified on the log scale with the proportion of explained variance, which was 0·27 for total SSI incidence and 0·24 for SSIs during initial hospital stay. The mean percent difference in the model for incidence of all SSIs was 12%, with a 95% prediction interval ranging from –60% to 145%. For the model on SSIs during initial hospital stay only, the mean percent difference was 43%, with a 95% prediction interval ranging from –85% to 405%.

Fig. 5. (a) Comparison of observed and predicted surgical site infection (SSI) incidence and (b) distribution of the percentage prediction error, illustrating the performance of the model best predicting SSI incidence (dataset I). The diagonal line in (a) represents the situation in which the observed and predicted incidence are equal. The vertical dotted lines in (b) display the mean percentage prediction error (in bold) and its 95% prediction interval.

Fig. 6. (a) Comparison of observed and predicted surgical site infection (SSI) incidence and (b) distribution of the percentage prediction error, illustrating the performance of the model best predicting the incidence of SSIs occurring during the initial hospital stay (dataset II). The diagonal line in (a) represents the situation in which the observed and predicted incidence are equal. The vertical dotted lines in (b) display the mean percentage prediction error (in bold) and its 95% prediction interval.

DISCUSSION

This study showed that HAI prevalence survey data from Dutch hospitals are not suitable for reliably estimating SSI incidence, either with the existing formula of Rhame & Sudderth or with our newly developed prediction model including several predictive factors. A first indication of the limitations of prevalence data for estimating SSI incidence was given by the daily fluctuations in the simulated SSI prevalence within a 1-month period. As these analyses were based on data covering only a subset of specialities, the variability of the true (unselected) SSI prevalence is probably smaller, since variability decreases with larger numbers. Nevertheless, our results at the national level indicate that daily prevalence fluctuates substantially even when the number of patients is large.

When we estimated SSI incidence from prevalence with Rhame & Sudderth's formula, the correlation with the observed SSI incidence was poor. Moreover, the incidence estimates based on all SSIs greatly underestimated the incidence surveillance observations and were frequently smaller than zero. This could partly be explained by the fact that from the prevalence surveys we included only the SSIs that occurred during the initial hospital stay, and therefore underestimated the actual prevalence. In addition, however, the incidence estimates smaller than zero were caused by the assumption in the Rhame & Sudderth method that all infections occur in the hospital. The method thus assumes that patients have surgery and either stay in the hospital until they develop an infection, or go home without complications. The term 'LN − INT' in the formula, which describes the period in which patients are admitted to the hospital with an infection, is then a proxy for the duration of infection. However, when the majority of SSIs occur post-discharge and therefore do not influence the initial length of hospitalization, as was the case in our study, this term becomes meaningless. The Rhame & Sudderth method is therefore not applicable to the present situation in Dutch hospitals. When we applied the formula including only SSIs occurring during the initial hospitalization, a higher percentage of estimated incidences fell within the range of the incidence surveillance observations, but the correlation between the two was still poor. These results confirm that the formula is not adequate for estimating SSI incidence.

In the multilevel prediction models we could not find any combination of (risk) factors that reliably predicted SSI incidence. A possible explanation for the poor performance of the models is that incidence at the hospital level depends heavily on the type and number of reported specialities (as selected by the hospital). For instance, a high-risk surgery (e.g. colectomy) will increase the overall incidence, whereas a low-risk surgery (e.g. total hip replacement) has a reducing effect. Another explanation is that the included risk factors are not associated with SSI to the same extent in all specialities, so that the effect of a factor in one speciality may be cancelled out by its lack of effect in another. Taking these remarks into account, it might not be possible to predict SSI incidence at the hospital level at all. However, owing to the small numbers of patients and infections per speciality in the prevalence surveys, it was not possible to extend the model to speciality- or procedure-specific SSI incidence predictions.

A considerable number of studies have used the Rhame & Sudderth method to calculate HAI incidence from prevalence survey data [13, 20, 21, 23, 32–34], but only a few have rigorously investigated the applicability of the method for SSIs [21, 24]. In a report on the 2007 Scottish national HAI prevalence survey, SSI incidence was estimated per surgical category [24]. Because of small numbers, only three categories had sufficient data for comparison; the results were highly variable and included no reliable estimates. Gastmeier et al., however, did find promising results [21]. For eight hospitals combined, estimated SSI incidence and prevalence were within the 95% CI of the corresponding observed rate. However, their study differed from ours in several respects. First, it was performed as part of an infection control management study covering all HAIs, comprising three prevalence surveys and incidence surveillance of all patients discharged during an 8-week period. Second, only infections that developed during the hospital stay were reported, which does not adequately reflect actual surveillance results. In addition, Gastmeier et al. presented the incidence estimates for individual HAIs, including SSIs, only for all hospitals combined. These results indicate that combining the outcomes of repeated prevalence surveys may reduce variability. A study by Ustun et al. using the mean prevalence from weekly surveys also demonstrated a close relationship between prevalence and incidence [13]. If hospitals had performed several (or at least more than two) prevalence surveys per year, we might have found similar results.
Our study, however, focused on estimating SSI incidence from prevalence surveys in the current ongoing nationwide programme, reflecting daily practice, rather than from multiple surveys that were part of a single short-term study. To the best of our knowledge there are no published alternative methods to calculate or predict SSI incidence from data aggregated at hospital level. Other prediction models for SSIs exist, but these aim to predict the risk for individual patients [35–38].

This study has several important strengths. Data were derived from a national surveillance network that uses a standardized protocol with strict definitions to diagnose SSIs and mandatory on-site validation every 3–5 years, resulting in high-quality surveillance data. For this study we included only hospitals participating simultaneously in both incidence surveillance and prevalence surveys, which yielded the optimal link between the two.

There were also some limitations to this study. Although strict protocols were implemented, routine surveillance data are typically not collected for study purposes, and some important factors were therefore not collected in the ideal form. In the incidence surveillance the type of surgery was recorded, whereas in the prevalence surveys only the patient's speciality was reported. We accounted for this as far as possible by linking hospital data at the speciality level, to maximize comparability between incidence and prevalence. Furthermore, because dates of discharge were not available from the prevalence surveys, the estimates of LA, LN and INT were derived from the incidence surveillance. It is difficult to assess the impact this had on the results of Rhame & Sudderth's formula, especially for the analysis of total SSI incidence, where the SSIs that develop post-discharge led to meaningless values for the duration of infection ('LN − INT'). However, when only SSIs during the initial hospital stay were included, LA, LN and INT were most likely lower than if prevalence data had been used, since prevalence surveys are biased towards patients with longer lengths of stay. How this affected the outcomes depends strongly on the ratios between the values. The values of LA, LN and INT used might have influenced the performance of the prediction models as well, but their impact was probably limited. Finally, hospitals are free to choose which types of surgery to include in their incidence surveillance and may change their selection annually. The distribution of surgery types in the incidence surveillance therefore does not reflect the true distribution in the hospital, and the calculated SSI incidence at hospital level is based on only a selection of the specialities present in the hospital.

In conclusion, SSI incidence cannot be reliably predicted from SSI prevalence survey data using the currently available methods. This is caused by (i) the design of the prevalence surveys, which give only a snapshot of the current infection status of admitted patients, (ii) the large number of SSIs that occur after the initial hospitalization, and (iii) the infection duration of SSIs, which is difficult to estimate and cannot be reliably approximated by the 'time from infection until discharge'. We therefore conclude that prevalence surveys, as currently implemented in The Netherlands, are not a useful measure for estimating SSI incidence and cannot replace SSI incidence surveillance as a means of reducing the workload and expense of HAI surveillance. Although these findings are likely to be universal, further research should investigate whether they also apply to other countries with similar prevalence survey protocols [14, 15]. For infection prevention purposes, both types of surveillance will remain important and are in fact complementary. Prevalence surveys can give a first indication of areas of interest for infection control and are useful for visualizing trends, while incidence surveillance is better suited when more in-depth information is needed. When choosing a surveillance method, hospitals should always be aware of the value of both surveillance systems and keep in mind the goals they aim to achieve.

ACKNOWLEDGEMENTS

The authors thank all hospital staff who were involved in data collection. We also thank Hussein Jamaladin for his contribution to the initial design of the study analyses.

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

DECLARATION OF INTEREST

None.

REFERENCES

1. Klevens, RM, et al. Estimating health care-associated infections and deaths in U.S. hospitals, 2002. Public Health Reports 2007; 122: 160–166.
2. Kirkland, KB, et al. The impact of surgical-site infections in the 1990s: attributable mortality, excess length of hospitalization, and extra costs. Infection Control and Hospital Epidemiology 1999; 20: 725–730.
3. de Lissovoy, G, et al. Surgical site infection: incidence and impact on hospital utilization and treatment costs. American Journal of Infection Control 2009; 37: 387–397.
4. Coello, R, et al. Adverse impact of surgical site infections in English hospitals. Journal of Hospital Infection 2005; 60: 93–103.
5. Geubbels, ELPE, et al. Reduced risk of surgical site infections through surveillance in a network. International Journal for Quality in Health Care 2006; 18: 127–133.
6. Haley, RW, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. American Journal of Epidemiology 1985; 121: 182–205.
7. Rioux, C, Grandbastien, B, Astagneau, P. Impact of a six-year control programme on surgical site infections in France: results of the INCISO surveillance. Journal of Hospital Infection 2007; 66: 217–223.
8. Manniën, J, et al. Trends in the incidence of surgical site infection in the Netherlands. Infection Control and Hospital Epidemiology 2008; 29: 1132–1138.
9. Schroder, C, et al. Epidemiology of healthcare associated infections in Germany: nearly 20 years of surveillance. International Journal of Medical Microbiology 2015; 305: 799–806.
10. Brandt, C, et al. Reduction of surgical site infection rates associated with active surveillance. Infection Control and Hospital Epidemiology 2006; 27: 1347–1351.
11. Koch, AM, et al. Need for more targeted measures – only less severe hospital-associated infections declined after introduction of an infection control program. Journal of Infection and Public Health 2015; 8: 282–290.
12. Gravel, D, et al. Point prevalence survey for healthcare-associated infections within Canadian adult acute-care hospitals. Journal of Hospital Infection 2007; 66: 243–248.
13. Ustun, C, et al. The accuracy and validity of a weekly point-prevalence survey for evaluating the trend of hospital-acquired infections in a university hospital in Turkey. International Journal of Infectious Diseases 2011; 15: e684–e687.
14. Magill, SS, et al. Multistate point-prevalence survey of health care-associated infections. New England Journal of Medicine 2014; 370: 1198–1208.
15. European Centre for Disease Prevention and Control (ECDC). Point prevalence survey of healthcare-associated infections and antimicrobial use in European acute care hospitals. Stockholm, Sweden: ECDC, 2013 (http://www.ecdc.europa.eu/en/publications/Publications/healthcare-associated-infections-antimicrobial-use-PPS.pdf).
16. French, GL, et al. Repeated prevalence surveys for monitoring effectiveness of hospital infection control. Lancet 1989; 2: 1021–1023.
17. Rhame, FS, Sudderth, WD. Incidence and prevalence as used in the analysis of the occurrence of nosocomial infections. American Journal of Epidemiology 1981; 113: 1–11.
18. Freeman, J, Hutchison, GB. Prevalence, incidence and duration. American Journal of Epidemiology 1980; 112: 707–723.
19. Freeman, J, McGowan, JE. Day-specific incidence of nosocomial infection estimated from a prevalence survey. American Journal of Epidemiology 1981; 114: 888–901.
20. Delgado-Rodriguez, M, et al. A practical application of Rhame and Sudderth's formula on nosocomial infection surveillance. Revue d'Épidémiologie et de Santé Publique 1987; 35: 482–487.
21. Gastmeier, P, et al. Converting incidence and prevalence data of nosocomial infections: results from eight hospitals. Infection Control and Hospital Epidemiology 2001; 22: 31–34.
22. Rossello-Urgell, J, Rodriguez-Pla, A. Behavior of cross-sectional surveys in the hospital setting: a simulation model. Infection Control and Hospital Epidemiology 2005; 26: 362–368.
23. Berthelot, P, et al. Conversion of prevalence survey data on nosocomial infections to incidence estimates: a simplified tool for surveillance? Infection Control and Hospital Epidemiology 2007; 28: 633–636.
24. Reilly, J, et al. NHS Scotland national HAI prevalence survey. Final Report. Glasgow, Scotland: Health Protection Scotland, 2007.
25. Horan, TC, et al. CDC definitions of nosocomial surgical site infections, 1992: a modification of CDC definitions of surgical wound infections. Infection Control and Hospital Epidemiology 1992; 13: 606–608.
26. Horan, TC, Andrus, M, Dudeck, MA. CDC/NHSN surveillance definition of health care-associated infection and criteria for specific types of infections in the acute care setting. American Journal of Infection Control 2008; 36: 309–332.
27. European Centre for Disease Prevention and Control (ECDC). Surveillance of surgical site infections in European hospitals – HAISSI protocol. Version 1.02. Stockholm, Sweden: ECDC, 2012 (http://www.ecdc.europa.eu/en/publications/Publications/120215_TED_SSI_Protocol.pdf).Google Scholar
28. Geubbels, EL, et al. An operating surveillance system of surgical-site infections in The Netherlands: results of the PREZIES national surveillance network. Preventie van Ziekenhuisinfecties door Surveillance. Infection Control and Hospital Epidemiology 2000; 21: 311318.CrossRefGoogle Scholar
29. Koek, MB, et al. Post-discharge surveillance (PDS) for surgical site infections: a good method is more important than a long duration. Eurosurveillance 2015; 20: pii = 21042.Google ScholarPubMed
30. van der Kooi, TI, et al. Prevalence of nosocomial infections in The Netherlands, 2007–2008: results of the first four national studies. Journal of Hospital Infection 2010; 75: 168172.CrossRefGoogle ScholarPubMed
31. Mosteller, F, Tukey, JW. Data Analysis and Regression: a Second Course in Statistics. Reading, MA: Addison-Wesley, 1977.Google Scholar
32. Graves, N, et al. The prevalence and estimates of the cumulative incidence of hospital-acquired infections among patients admitted to Auckland District Health Board Hospitals in New Zealand. Infection Control and Hospital Epidemiology 2003; 24: 5661.CrossRefGoogle ScholarPubMed
33. Kanerva, M, et al. Estimating the annual burden of health care-associated infections in Finnish adult acute care hospitals. American Journal of Infection Control 2009; 37: 227230.CrossRefGoogle ScholarPubMed
34. Magill, SS, et al. Multistate point-prevalence survey of health care-associated infections. New England Journal of Medicine 2014; 370: 11981208.CrossRefGoogle ScholarPubMed
35. van Walraven, C, Musselman, R. The Surgical Site Infection Risk Score (SSIRS): a model to predict the risk of surgical site infections. PLoS ONE 2013; 8: e67167.CrossRefGoogle Scholar
36. Paryavi, E, et al. Predictive model for surgical site infection risk after surgery for high-energy lower-extremity fractures: development of the risk of infection in orthopedic trauma surgery score. Journal of Trauma and Acute Care Surgery 2013; 74: 15211527.CrossRefGoogle ScholarPubMed
37. Lee, MJ, et al. Predicting surgical site infection after spine surgery: a validated model using a prospective surgical registry. Spine Journal 2014; 14: 21122117.CrossRefGoogle ScholarPubMed
38. Raja, SG, Rochon, M, Jarman, JW. Brompton Harefield Infection Score (BHIS): development and validation of a stratification tool for predicting risk of surgical site infection after coronary artery bypass grafting. International Journal of Surgery 2015; 16: 6973.CrossRefGoogle ScholarPubMed

Fig. 1. Flowchart of data inclusion. EG, endocrine glands; ER, ear, nose and throat; EY, ophthalmology; UNK, unknown speciality.


Table 1. Number of surgical patients, SSIs and patient characteristics in the incidence surveillance and prevalence surveys, per speciality


Fig. 2. Daily prevalence simulated from incidence dataset II (a) at a single hospital in March 2010 and (b) at national level in October 2011.


Fig. 3. Comparison of observed and estimated incidence of surgical site infections (SSIs) per year at hospital level, for all reported SSIs (dataset I). Estimated incidence was calculated using the Rhame & Sudderth method. One extreme pair of points (observed incidence 7·1%, estimated incidence 105·5%) is not displayed. The diagonal line represents the situation in which the observed and estimated incidence are equal.
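For reference, the Rhame & Sudderth conversion used for the estimated incidences in Figs 3 and 4 is conventionally written as follows (notation follows common usage of ref. 17: P is the observed point prevalence, LA the mean length of stay of all patients, LN the mean length of stay of infected patients, and INT the mean interval from admission to infection onset):

```latex
I = P \times \frac{LA}{LN - INT}
```

Note that when many SSIs occur after discharge, LN − INT can become small or negative, which is consistent with the implausible (including below-zero) estimates reported in the text.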


Fig. 4. Comparison of observed and estimated incidence of surgical site infections (SSIs) per year at hospital level, for SSIs occurring during the initial hospital stay (dataset II). Estimated incidence was calculated using the Rhame & Sudderth method. Two extreme pairs of points (observed incidence 0·5%, estimated incidence 32·7%; and observed incidence 0·4%, estimated incidence 25·6%) are not displayed. The diagonal line represents the situation in which the observed and estimated incidence are equal.


Fig. 5. (a) Comparison of observed and predicted surgical site infection (SSI) incidence and (b) distribution of the percentage prediction error, illustrating the performance of the model that best predicted SSI incidence (dataset I). The diagonal line in (a) represents the situation in which the observed and predicted incidence are equal. The vertical dotted lines in (b) display the mean percentage prediction error (in bold) and its 95% prediction interval.


Fig. 6. (a) Comparison of observed and predicted surgical site infection (SSI) incidence and (b) distribution of the percentage prediction error, illustrating the performance of the model that best predicted the incidence of SSIs occurring during the initial hospital stay (dataset II). The diagonal line in (a) represents the situation in which the observed and predicted incidence are equal. The vertical dotted lines in (b) display the mean percentage prediction error (in bold) and its 95% prediction interval.