INTRODUCTION
Simulation modelling can provide insight into the epidemiology and control of infectious disease in animals and man and is widely adopted as a decision support tool in many disciplines. Historically, the most common approaches to simulating the transmission of infections in populations have been strongly mathematical, based on differential equations and matrix algebra. These models have been applied to many viral and parasitic diseases of man and animals [1, 2] and are increasingly being used to elaborate the epidemiology of human enteric pathogens derived from livestock [3–6].
Reliance on models that are heavily based on mathematical processes can limit the flexibility available for dealing with complexities found in some practical settings. Complexity typically occurs, for example, when interventions to control disease are superimposed over the natural cycle of infection and recovery. In particular, problems can arise when models attempt to mimic large populations, since the constituent members (herds or individual subjects) are very likely to be heterogeneous with respect to traits that influence infection, recovery and detection within a surveillance system. Under these circumstances, models based on a mathematical process are often not sufficiently flexible to reflect an understanding of the system under study. Moreover, models with a strong mathematical basis can sometimes lack intuitive appeal amongst the practitioners of disease control because the inner workings are either not transparent, not intelligible, or both. Examples of models of infectious disease that are useful because they combine statements of logic with mathematical processes are becoming more common [7–9]. Because there is such a broad diversity in the types of decisions facing veterinary and medical authorities, expansion of the range of techniques available for integrating modelling and disease control is much needed. Ideally, such models should be easily demonstrated to decision makers and be sufficiently flexible to evaluate a range of control measures that might be considered by policy makers.
In this paper, we describe the virtual hierarchy approach to simulating transmission of infection in a large and heterogeneous population. We did this by developing a model for studying Salmonella enterica subsp. enterica serovar Dublin (S. Dublin) infection in the population of Danish dairy cattle herds (here the level of interest is the herd). S. Dublin is primarily associated with cattle, causes disease and production loss in many countries, and is a problematic pathogen in dairy cattle production. The organism also infects man by the foodborne and direct contact routes and has a propensity to be rapidly invasive and cause high mortality [10]. Since 2002, Denmark has implemented a national surveillance scheme in cattle in an attempt to reduce the public health and economic impact of S. Dublin. The programme is based on periodic assessment of herd infection status by measurement of antibody to S. Dublin in bulk tank milk (BTM) by ELISA at 90-day intervals. In Denmark, all herds are continuously classified according to their risk of infection. Herds officially referred to as ‘level 1’ are at low risk of being infected (on average <1% probability that the herd is infected), ‘level 2’ herds are at higher risk (on average >80% probability that the herd is infected), and ‘level 3’ herds are those with culture-confirmed clinical salmonellosis [very few herds are level 3 (a maximum of about 15 at any one time), and these remain at level 3 for about 3–4 months]. Herds move from level 1 to level 2 classification if a concentration of antibody indicative of infection is detected, or for at least a 3-week period after the purchase of animals from a herd that is classified as level 2. Herds are promoted to level 1 when antibody concentrations decline in BTM following the elimination of infection. Herds classified as level 2 because of the purchase of animals from a level 2 herd can be promoted to level 1 if the next scheduled test for antibody in BTM following the purchase is negative. This system was introduced to discourage farmers from purchasing animals from high-risk herds, and its effects on trading patterns were dramatic within the first half year after the initiation of the surveillance programme. The BTM ELISA test and surveillance programme have been described in detail and evaluated elsewhere [11–13].
The aim of the current study was to develop a virtual hierarchy model of S. Dublin infection and control in the population of Danish dairy cattle herds by adapting knowledge of the pathogen, animal population and surveillance measures. The primary purpose of the model is to predict changes in the prevalence of herds infected with S. Dublin over time under different control strategies.
METHODS
Overview
The initial stage of modelling involves adapting and organizing existing knowledge of the epidemiology of the pathogen of interest, in this case the key features of the ecology of S. Dublin infection in cattle, to create a conceptual model of the pathogen at herd, regional and national levels. The conceptual model is a simplified account of the real world, obtained by considering the relationships between elements of the system that have a non-trivial influence on the occurrence of S. Dublin in Danish cattle herds. The second stage involves transforming the conceptual model into computer code to produce a simulation program that accepts various inputs (allowing experimentation with the model) and that provides outputs consisting of time-dependent estimates of the proportion of herds infected and the proportion of herds classified as high risk or infected (levels 2 and 3, hereafter collectively referred to as level 2). Finally, the third stage involves formulating a basis for the input assumptions by collecting and organizing existing knowledge (established facts and expert opinion), extracting and analysing data obtained from the surveillance of S. Dublin in Danish dairy cattle herds (serology and microbiology findings over time), and extracting data on herd demographics and patterns of movements of animals between herds from the Danish Cattle Database (DCD). Each of these stages of model development is described in detail below.
Model structure
The system under study can be represented as a hierarchy consisting of groups of dairy cattle managed within a common herd, followed by groups of herds located within a common geographic region with similar prevalence of S. Dublin and then groups of regions comprising the entire dairy cattle industry of Denmark (over 7000 herds located in seven regions). The objective of the model is to follow each of these elements of the hierarchy through time, by monitoring changes in each herd's S. Dublin true infection status and risk classification, and summarizing these traits at the regional and national level at the completion of each time step. In this model, the duration of the time step is a single day and we estimate the national and regional outcomes each day for the duration of an entire iteration. A single iteration may comprise any number of consecutive days, although for the purposes of informing policy on control of S. Dublin a maximum duration of 3650 days (10 years) is adequate. A complete simulation consists of multiple iterations, with the results collected at the end of each iteration and summarized descriptively at the end of the simulation to provide a picture of the variation in possible outcomes from the model when taking into account the stochasticity of the infection process.
The regions referred to in this model and their abbreviations used in the figures are listed in Table 1. These regions do not have an official status under Danish statutes but have been devised by workers in animal disease control as a useful system for classifying geographic location of herds within the country [11].
Herd-level infection and recovery
Central to the conceptual model is the infection–recovery cycle of herds exposed to S. Dublin. Instead of the ‘susceptible–immune–recovered’ (SIR) technique with subjects (herds) considered en masse, the current approach assumes that at each time step each herd exists in one of five non-overlapping time periods defined by the state of infectiousness and level of antibody in BTM. When arranged in their temporal order of occurrence these periods describe the infection–recovery cycle for herds (Fig. 1). Herds existing in the ‘true-negative period’ are those that are both free of infection with S. Dublin and have low levels of antibody in BTM. If a herd is exposed to a source of S. Dublin that leads to the spread of infection within that herd then in that time step it moves from the true-negative period to the ‘dissemination period’ – a phase where S. Dublin is being actively disseminated throughout the herd but as yet there are insufficient animals shedding the organism in faeces for the herd itself to be regarded as infectious, and antibody levels in BTM have not increased. At the conclusion of the dissemination period the herd enters the ‘antibody-lag period’ when a proportion of the herd (defined by within-herd prevalence) is actively shedding the pathogen and clinical signs of a new outbreak are usually evident. If any such ‘shedding’ animals are sold to a clean herd they may cause a new outbreak of S. Dublin. In the ‘antibody-lag period’ there has not yet been a detectable rise in the level of antibodies in BTM (the herd is effectively ‘false-negative’ if BTM is tested for antibodies in this period). Once the level of antibodies in BTM rises sufficiently high, the herd enters the true-positive period and it remains a source of infection for other herds if it participates in trading. Finally, at the end of the infection–recovery cycle, the herd enters the ‘antibody-fall period’ during which S. Dublin has been eliminated from the herd but antibody levels in BTM persist ensuring that the herd remains classified as level 2 if a test is scheduled. At the conclusion of the antibody-fall period, antibody in BTM reverts to normal (low) levels and the herd once again enters a true-negative period. Herds in the true-negative period stay there indefinitely until exposed to a source of infection upon which the cycle begins again.
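To make the cycle concrete, the five periods can be represented as an ordered per-herd state variable. The following minimal Python sketch (all names are ours, for illustration only, and not taken from the Delphi implementation described later) advances a single herd through the infection–recovery cycle one day at a time, given a user-supplied sampler for period durations:

```python
from enum import Enum, auto

class Period(Enum):
    TRUE_NEGATIVE = auto()   # free of infection, low BTM antibody
    DISSEMINATION = auto()   # infection spreading; herd not yet infectious
    ANTIBODY_LAG = auto()    # herd infectious, BTM antibody still low (false-negative)
    TRUE_POSITIVE = auto()   # herd infectious, BTM antibody high
    ANTIBODY_FALL = auto()   # infection cleared, BTM antibody still high

# temporal order of the cycle once a true-negative herd is exposed
CYCLE = [Period.DISSEMINATION, Period.ANTIBODY_LAG,
         Period.TRUE_POSITIVE, Period.ANTIBODY_FALL, Period.TRUE_NEGATIVE]

class Herd:
    def __init__(self):
        self.period = Period.TRUE_NEGATIVE
        self.days_left = 0                  # days remaining in the current period

    def expose(self, sample_duration):
        """Start a new outbreak in a susceptible (true-negative) herd."""
        if self.period is Period.TRUE_NEGATIVE:
            self.period = Period.DISSEMINATION
            self.days_left = sample_duration(self.period)

    def step(self, sample_duration):
        """Advance one day; move to the next period when the current one ends."""
        if self.period is Period.TRUE_NEGATIVE:
            return                          # herds wait here until exposed again
        self.days_left -= 1
        if self.days_left <= 0:
            nxt = CYCLE[CYCLE.index(self.period) + 1]
            self.period = nxt
            if nxt is not Period.TRUE_NEGATIVE:
                self.days_left = sample_duration(nxt)

    @property
    def infectious(self):
        # a herd can transmit infection during the antibody-lag and true-positive periods
        return self.period in (Period.ANTIBODY_LAG, Period.TRUE_POSITIVE)
```

Here sample_duration is any callable that returns a duration in days for a given period, for example by drawing from the probability distributions described under ‘Simulation inputs’ below.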
By incorporating the above infection–recovery cycle into the model the infection status, infectiousness (shedding) status and test status of each herd can be followed through time. Thus, for example, if animals are moved from a herd that is in either the antibody-lag period or the true-positive period there is a possibility that at least one of these animals can transmit S. Dublin to the purchasing herd. However, in this model such a movement would have no impact if there are no infected animals in the consignment or the receiving herd is itself already infected (in both cases no new outbreak would result). Similarly, it is a simple matter to know the infection classification of each herd by keeping track of the time steps during which they have high levels of BTM antibody.
Movement of infected cattle
Livestock trading is inevitably a complex issue owing to the many human, economic, regulatory and production influences that govern decisions to buy or sell. Consequently, the conceptual model adopted a simplification of the trading behaviour of Danish dairy herds by defining all herds according to the following three attributes: (a) number of days per year that livestock are purchased, (b) the number of cattle that are acquired per purchase event (assuming there is no more than one purchase event per day), and (c) the ‘buying behaviour’ of herds. Both attributes (a) and (b) can be described as probability distributions with density estimates obtained from analysis of data from the DCD that records the date of all movements in and out of all herds at the individual animal level. The third variable (c), ‘buying behaviour’, is a surrogate measure of one aspect of biosecurity and classifies each herd as either ‘closed’ (no purchases of cattle), ‘conservative’ (purchases are only made from S. Dublin level 1 herds) or ‘indiscriminate’ (herds that buy from either level 1 or level 2 herds). Each of these possible classifications is mutually exclusive allowing buying behaviour to be represented by a discrete probability distribution that is defined by an analysis of data on cattle movements within each of the seven regions (see below). At the beginning of each iteration all herds are assigned a buying behaviour by sampling from the discrete probability distribution for that region and this behaviour is retained by each herd until the end of the iteration such that in different iterations the same herd can have a different buying behaviour.
With each new time step in the model each herd is evaluated to see if it is required to purchase cattle by performing a single Bernoulli trial, with p (the probability of success) equal to the herd's pre-allocated probability of purchasing a consignment of cattle on any one day (see below). Because herds that are ‘closed’ are not permitted to buy animals they do not require purchasing to be simulated. Herds with a buying status that is ‘conservative’ are permitted to buy cattle from any herd that has a level 1 status in that same time period. Herds that are ‘indiscriminate’ buyers can buy cattle from any herd (regardless of the level). For all purchase events the source herd is chosen at random from a list of the eligible herds and the number of animals purchased is also a random value from the corresponding input probability distribution. Finally, the number of infected animals in the purchased consignment is made equal to nil if the source herd is free of infection while for infected herds it is a random variate from the binomial probability distribution having parameters p (within-herd prevalence of infection) and n (size of the consignment).
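As a hedged illustration, the daily purchase logic for one herd might look as follows in Python (the attribute and function names are our assumptions, not the model's actual code); the number of infected animals in a consignment from an infectious source herd is a binomial draw:

```python
import random

def simulate_purchase(buyer, herds, consignment_size, rng=random):
    """One daily purchase check for one herd (illustrative sketch).

    buyer.p_buy         -- pre-allocated daily probability of a purchase event
    buyer.behaviour     -- 'closed', 'conservative' or 'indiscriminate'
    consignment_size()  -- samples from the empirical consignment-size distribution
    Returns the number of infected animals entering the buying herd today.
    """
    if buyer.behaviour == 'closed' or rng.random() >= buyer.p_buy:
        return 0                                # closed herd, or no purchase event today
    if buyer.behaviour == 'conservative':
        eligible = [h for h in herds if h.level == 1 and h is not buyer]
    else:                                       # 'indiscriminate': any level
        eligible = [h for h in herds if h is not buyer]
    if not eligible:
        return 0
    source = rng.choice(eligible)               # source herd chosen at random
    n = consignment_size()                      # animals in the consignment
    if not source.infectious:
        return 0                                # uninfected source: clean consignment
    # infected animals ~ Binomial(n, within-herd prevalence of the source herd)
    return sum(rng.random() < source.prevalence for _ in range(n))
```

A return value greater than zero for a buyer in the true-negative period would then trigger the start of a new infection–recovery cycle in that herd, consistent with the rules described above.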
The model also includes an option to restrict the movement of animals between regions. With respect to S. Dublin no such restrictions are currently in place in Denmark although this might be introduced in the future and it is a common strategy for the control of livestock disease in many other animal health jurisdictions. Thus, the model includes an option to force herds seeking replacement animals to only obtain them from their own region instead of from any region.
Surveillance
In the Danish surveillance programme for S. Dublin, dairy herds are assessed for evidence of infection about every 90 days by assaying BTM for antibodies using an ELISA. Previous studies have documented a strong association between the level of BTM antibody response and both the within-herd prevalence of seropositive animals and infection in the herd [12, 13]. High or rapidly rising antibody is taken as an indication of infection and results in the herd being classified as level 2. ELISA results from up to four consecutive samples are used to assess whether reclassification to level 2 is required. Thus, herds may move from level 1 to level 2 if antibodies in BTM rise to a high level as evidenced by a single test, or if antibodies rise slowly and persist such that the mean of four consecutive tests exceeds the critical value. In the simulation model, each herd has its testing scheduled at a set interval (the default being 90 days). At each time step, each herd is queried to establish whether a test is scheduled for that day; if so, it is simply a matter of identifying which period of infection or recovery the herd is in. If the herd is in the true-positive period or the antibody-fall period then the surveillance test is simulated as positive, otherwise it is negative. Herds with a positive test are immediately allocated a level 2 status if they are not already level 2. Herds with a negative test are kept at level 1, or promoted to level 1 if they were level 2 before testing negative.
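In code, this test logic reduces to a lookup on the herd's current period. A minimal sketch continuing the Python examples above (the test_offset attribute and the 90-day default are our illustrative assumptions):

```python
def run_scheduled_test(herd, day, interval=90):
    """Simulate the BTM ELISA on a herd's scheduled test day (sketch).

    Given the herd's position in the infection-recovery cycle the outcome is
    deterministic: positive during the true-positive and antibody-fall
    periods, negative otherwise.
    """
    if day % interval != herd.test_offset:      # no test scheduled for this herd today
        return
    positive = herd.period in (Period.TRUE_POSITIVE, Period.ANTIBODY_FALL)
    herd.level = 2 if positive else 1           # immediate reclassification
```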
Start conditions
At the commencement of each iteration (t=0) the population of herds is established by deriving the infection status for each herd given its BTM antibody status on 31 December 2005. From here herd infection status at t=0 is simulated from estimates of positive predictive value and negative predictive value for the BTM ELISA (derivation of the estimates for predictive values is discussed below). The infection status of antibody-negative herds is thus the outcome of a Bernoulli trial with p equal to the negative predictive value (here p describes the probability that an antibody-negative herd is not infectious) and the infection status of antibody-positive herds is the outcome of a Bernoulli trial with p equal to the positive predictive value.
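Expressed as code, the start condition for one herd is a single Bernoulli draw against the regional predictive value; a sketch in the same illustrative style:

```python
import random

def initial_infection_status(antibody_positive, ppv, npv, rng=random):
    """Draw a herd's true infection status at t=0 from its BTM antibody status.

    ppv, npv: regional positive and negative predictive values of the BTM ELISA.
    """
    if antibody_positive:
        return rng.random() < ppv       # infected with probability PPV
    return rng.random() >= npv          # infected with probability 1 - NPV
```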
Software implementation
The conceptual model was encoded into software using an object-oriented programming language allowing rapid development of the interface through ‘drag and drop’ addition of visual components (e.g. memo boxes, edit boxes and labels) onto a form from a component palette (Borland® Delphi™ 7 for Windows®, Borland Software Corporation, Scotts Valley, CA, USA). The use of object-oriented code is critical to the development of the model because it enables orderly management of the hierarchy of objects (country, regions and herds) and their associated code for the manipulation of the correct data during simulations. Other strongly object-oriented programming languages such as C++ or C# could also be used to develop a similar model. Central to the construction of this model is reliance on a non-visual object referred to in Delphi as the TObjectList, which has the ability to own and manipulate a list of any other objects. In this model, three specialized descendants of the TObjectList were derived to represent each level of the hierarchy (TDenmark for the national level, TRegion and THerd). Only one instance of TDenmark was required and this held in its object list seven instances of TRegion (one for each region), with each TRegion object holding N_R instances of THerd (R = 1–7), where N_R is the number of herds in each region. Additional code was provided to each of the descendant classes of the TObjectList specific for its behaviour in the model. For example, THerd has a procedure called ‘THerd.AntibodyFallPeriod’ that defines the behaviour of any particular herd during the antibody-fall period, TRegion has a procedure called ‘TRegion.RegionStep’ for managing all the events that occur in a particular region within a single time step, and TDenmark has a procedure called ‘TDenmark.BuyFromL1’ for simulating the purchase of a consignment of cattle on behalf of any herd in any region with the source of cattle being any level 1 herd in any region. In addition to the code for managing the object hierarchy, additional code was written for input of fixed and stochastic assumptions, setting of simulation options and the output of simulation results as text and plots. Specialized routines for obtaining random variates from probability distributions were adapted from those used in an earlier model [14] and are largely based on the techniques outlined by Law & Kelton [15].
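The ownership structure described above is language-neutral; the following Python sketch paraphrases it (the class and method names echo the Delphi procedures, but the bodies are our simplified illustration, not the original code):

```python
class THerd:
    def __init__(self, region_code):
        self.region_code = region_code
        self.period = Period.TRUE_NEGATIVE   # see the infection-recovery sketch above
        self.level = 1

class TRegion:
    """Owns all herds of one region (analogue of a TObjectList descendant)."""
    def __init__(self, code, herds):
        self.code = code
        self.herds = list(herds)

    def region_step(self, day, step_herd):
        # analogue of TRegion.RegionStep: all within-region events for one day
        for herd in self.herds:
            step_herd(herd, day)             # infection-recovery update, testing, purchasing

class TDenmark:
    """Owns the seven TRegion objects; national-level events live here."""
    def __init__(self, regions):
        self.regions = list(regions)

    def day_step(self, day, step_herd):
        for region in self.regions:
            region.region_step(day, step_herd)
        # cross-region trade from level 1 herds (cf. TDenmark.BuyFromL1)
        # would also be coordinated at this level
```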
Simulation inputs
Prior to all simulations, default data on the population of dairy herds were loaded into the model. These data described the BTM ELISA test result, the classification status (level 1 or level 2), and the region of origin of each herd (n=7401) at 31 March 2004. The information was acquired from the DCD and edited using SAS analysis software version 9.1 (SAS Institute Inc., Cary, NC, USA), then loaded into the simulation model as a flat database file in ASCII format.
Probability distributions describing positive and negative predictive value for deriving each herd's infection status at t=0 from their BTM ELISA status at t=0 were generated for herds belonging to each of the seven regions. In short, the process involved extracting from the DCD, for the period 2001–2005, the distribution of BTM ELISA results from known infected and non-infected herds and correlations between consecutive ELISA tests for each herd. These findings act as inputs for a model that simulates both antibody measurements on herds at 90-day intervals and the surveillance classification levels that would result. Then the estimated predictive values for each region of interest at t=0 were derived. The process is fully described in a related study [12].
Time periods in the infection–recovery cycle are central to the functioning of the model. Information on the epidemiology of S. Dublin infection in cattle in Denmark is available from earlier work using repeated ELISA testing (sera, individual milk sampling and BTM) and faecal culture applied to 12 herds; referred to as the Kongeå project, its methodology and outcomes have been described previously [12, 16, 17]. Information used to inform decisions on probability distributions for each of the time periods in the infection–recovery cycle for herds consisted of evidence from the Kongeå project, theoretical knowledge of the ecology of Salmonella infection in individual cattle, and the combined experiences of the authors (each having had protracted involvement in field and research aspects of enteric pathogens in cattle).
The ‘dissemination period’ equates to the period of time for an outbreak to commence in herds following the introduction of a source of infection, so that such herds can be regarded as a potential source of infection. This time period is variable owing to differences in the amount of infection initially introduced, herd structure and contact dynamics, variation in the amount of shedding in individual animals, and the time of onset and duration of shedding in individuals. It is possible to estimate a theoretical minimum for the duration of the dissemination period by assuming that: (i) herds have an average size of 80 cows and 150 animals in total, (ii) at least 5% of animals must be infectious for the herd to be infectious to other herds, (iii) it takes on average 2 days for an animal to become infectious from the time they are exposed to the pathogen, (iv) individual animals are infectious for 12 days on average [18], and (v) each animal infects on average two other animals during its entire infectious period [16]. This means that after 2 days we could have three infectious animals, after 4 days we could have seven infectious animals and after 6 days we could have 15 infectious animals in the herd. However, this timing is highly unlikely because there is not free and unrestricted contact between all animals in a herd, the interval between first-generation cases and second-generation cases is not always as short as 2 days, and contacts do not all occur immediately after individuals become infectious. Thus, while cognisant of the above theoretical limit, we set the minimum dissemination period for herds to 14 days to be consistent with experience in the field, whereby herds rarely show signs of a new infection within 14 days of the introduction of carrier animals. A ‘most likely’ dissemination period of 30 days was adopted to be consistent with levels of contact that normally occur in Danish dairy herds and the typical appearance of signs of infection in herds after exposure to a source of contamination. However, in herds with limited contact between animals or groups of animals, or in herds with animals becoming infected on pasture, the dissemination period may well be longer. We therefore set the maximum possible duration of the dissemination period to 120 days.
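The arithmetic behind this theoretical minimum can be made explicit. Under assumptions (iii)–(v), each 2-day generation doubles the number of new cases, so the cumulative number of infectious animals after k generations is

\[ I_k \;=\; \sum_{j=0}^{k} 2^{j} \;=\; 2^{k+1} - 1, \qquad I_1 = 3,\; I_2 = 7,\; I_3 = 15. \]

The 5% threshold of assumption (ii) corresponds to 0·05 × 150 ≈ 8 animals, first exceeded in the third generation, i.e. after about 6 days; the adopted 14-day minimum therefore sits comfortably above this idealized lower bound.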
The ‘antibody-lag period’ is the time it takes for the concentration of antibodies in BTM to rise above the cut-off value used in the surveillance programme classification after dissemination of infection to a level of at least 5% infected animals in the herd. This rise in antibody is assessed from ELISA results from up to four consecutive tests. Experience from the field shows that this period can be quite short (about 2 weeks for infected cows to produce high antibody levels in serum [18]) if the infection spreads from within the lactating cow section of the herd. However, this period can also be much longer (up to 120 days) if the infection spreads first within the calf barn and the calves and the lactating cows are housed separately. We set the most likely antibody-lag period to 60 days.
The ‘antibody-fall period’ is the time for the antibody level in BTM to fall to levels low enough for the herd to enter the level 1 classification once there are no longer infectious animals present in the herd. We estimated the distribution of this period based on data acquired from eight dairy herds during a field study. The herds had blood samples collected from all young stock twice per year and milk samples collected every quarter from lactating cows for a period of 3½ years while managers attempted to eradicate the infection through hygiene control and test-and-cull strategies. Herds were considered free of infectious animals when there were no longer any signs of new infections in the young stock. From this time to when level 1 classification could be reached took between 0 and 810 days, with the most likely duration being around 180 days; however, this was difficult to estimate accurately owing to the fairly long testing intervals in the intervention herds. Based on the above, a beta-pert distribution with parameters 0 (minimum), 180 (most likely) and 810 (maximum) was used to represent the duration (days) of the antibody-fall period.
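For reference, a beta-pert distribution with parameters (minimum, mode, maximum) can be sampled as a scaled beta variate. A minimal Python sketch using the standard lambda = 4 parameterization (our choice of implementation, not necessarily that of the original software):

```python
import random

def beta_pert(minimum, mode, maximum, rng=random, lam=4.0):
    """Sample a beta-PERT(minimum, mode, maximum) variate (standard lambda=4 form)."""
    a = 1.0 + lam * (mode - minimum) / (maximum - minimum)
    b = 1.0 + lam * (maximum - mode) / (maximum - minimum)
    return minimum + (maximum - minimum) * rng.betavariate(a, b)

# antibody-fall period (days): minimum 0, most likely 180, maximum 810
fall_days = beta_pert(0.0, 180.0, 810.0)
```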
The ‘true-positive period’ is the time from when BTM antibody levels are high enough for the herd to be classified as level 2 until the herd clears the infection (the herd is infectious throughout this period). Estimation of this period is problematic because evidence of the demise of infection in herds is unobtainable due to the need for extensive and repeated faecal culture. Consequently, we used the BTM ELISA data from all herds to estimate the total duration of the high antibody period, which consists of the true-positive period plus the antibody-fall period, and then subtracted from this the estimate of the antibody-fall period (above). The subtraction of one probability distribution (antibody-fall period) from another (high antibody period) was performed by simulation, with only the non-negative simulation outputs retained for fitting to a suite of candidate parametric distributions using @Risk software (Palisade Corporation, NY, USA).
The high antibody period is not formally part of the model but was used above to derive the true-positive period. The duration of test-positive periods cannot be calculated directly from the surveillance programme data because the data are censored, most measurements having been made at 90-day intervals. Therefore an analysis of all the antibody measurements for all herds for the period 2003–2005 was performed as follows. If a herd had more than one test within 3 months, one value was selected at random, and all herds were then classified as test-positive or test-negative at each testing event using the surveillance programme criteria. If four sequential measurements for a herd spanned a period of >15 months (5 year-quarters), then the observations on that herd for that period were excluded. All such observations on consecutive quarters (n=72 144 from 7728 dairy herds) were then used to calculate the probability of changing from test-positive to test-negative status and vice versa. We then assumed these transitions followed a first-order Markov process, with the average duration of test-positive status equal to the inverse of the positive-to-negative transition rate. Finally, the distribution of the duration of test-positive days was obtained as an exponential distribution with the parameter (mean) equal to the average duration of those testing positive.
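The two preceding steps can be reproduced in a few lines. The Python sketch below (our reading of the method; the published fit was obtained with @Risk) converts the per-quarter positive-to-negative transition probability into a mean duration in days, samples the high antibody period from the implied exponential distribution, subtracts an antibody-fall sample drawn with the beta_pert function above, and keeps only the non-negative differences:

```python
import random

def true_positive_samples(p_pos_to_neg, n=100_000, rng=random):
    """Monte-Carlo derivation of the true-positive period (illustrative sketch).

    p_pos_to_neg: per-quarter probability of moving from test-positive to
    test-negative. Under a first-order Markov assumption the mean duration of
    test-positive status is 1/p quarters, i.e. roughly 90/p days.
    """
    mean_high_days = 90.0 / p_pos_to_neg
    samples = []
    while len(samples) < n:
        high = rng.expovariate(1.0 / mean_high_days)    # high antibody period (days)
        fall = beta_pert(0.0, 180.0, 810.0, rng)        # antibody-fall period (days)
        diff = high - fall
        if diff >= 0:                                   # retain non-negative outputs only
            samples.append(diff)
    return samples    # these would then be fitted to candidate parametric distributions
```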
The DCD keeps track of all movements of cattle between herds, the date of such movements, the identity of the origin and destination herds and the number of animals involved. Extensive manipulation of the database using SAS software was undertaken to estimate probability distributions for the following input assumptions: number of purchase events per herd per year (as an empirical discrete distribution describing count data), number of animals obtained at each purchase event (also as an empirical discrete distribution describing count data), and the purchasing behaviour of each herd (as an empirical discrete distribution describing categorical data).
Data describing the prevalence of individual cattle infected with S. Dublin within infected herds (within-herd prevalence) were obtained from the Kongeå project. In that work, faecal culture had been performed on multiple animals within infected herds on multiple occasions. We collated the results of 33 such samplings, expressed the data as the proportion of animals culture-positive, and then used this to derive an empirical probability distribution for entry into the model.
The environmental exposure probability (EEP) is a variable in the model that encompasses all exposures to infection other than those caused by contact with an infected animal. Exposure of livestock and man to enteric pathogens by the various environmental pathways is an insidious process that is difficult to describe and quantify accurately. Although the literature contains many qualitative data on Salmonella in the environment (for a summary see Murray [19]), it does not contain quantitative estimates of the frequency of transfer of Salmonella between cattle herds by environmental pathways. Some studies specific for S. Dublin also provide good qualitative evidence that transfer of S. Dublin does occur between cattle herds along environmental pathways, but again quantitative data suitable for incorporation in a simulation model are lacking [20–22]. To overcome this deficiency in knowledge of the probability of spread of infection between herds by environmental pathways, we performed a calibration exercise using the model to establish an order-of-magnitude estimate of the daily probability that a cattle herd will be exposed to S. Dublin from an environmental source. A value of EEP was obtained by searching for a value that provided an estimate of the percentage of level 2 herds consistent with that observed at the planned commencement of simulation experiments (1 January 2006) and which did not cause the model to behave in a manner likely to be implausible given existing knowledge of the system.
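In the model, the EEP therefore acts as a daily Bernoulli trial applied to every susceptible herd, and calibration reduces to scanning orders of magnitude of this constant. A minimal sketch continuing the earlier Python examples (sample_duration is the period-duration sampler from the infection–recovery sketch):

```python
import random

def environmental_exposure(herds, eep, sample_duration, rng=random):
    """Daily environmental challenge (sketch): each herd in the true-negative
    period is exposed with probability EEP, independently of animal movements."""
    for herd in herds:
        if herd.period is Period.TRUE_NEGATIVE and rng.random() < eep:
            herd.expose(sample_duration)    # begins a new infection-recovery cycle

# calibration idea: run the model with EEP in {1e-3, 1e-4, 1e-5, 1e-6} and keep
# the value whose simulated level 2 trend best matches recent surveillance data
```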
Experimentation with the model
Following the initial simulations to establish a value for EEP, the model was used to evaluate the current system of surveillance and control, and then various modifications representing specific decisions made to enhance the control of S. Dublin in Danish dairy cattle in the future. Full simulations involve 1000 iterations (trial and error had previously shown this number to be sufficient to describe the output distributions), with a descriptive graphical analysis performed on predictions of the percentage of herds classified as level 2 at t=3650 days and the percentage of herds infected at t=3650 days.
Scenario 1 is the base scenario and approximates the current management of S. Dublin in the Danish dairy cattle industry. It is used as a comparison for the intervention scenarios described immediately below. Inputs were defined as the default values described above, with EEP set to 10⁻⁵. In addition, herds were allowed to acquire replacement animals from any other herd regardless of region, taking into account only their simulated purchase policy, and BTM ELISA testing was performed at the usual 90-day interval.
Scenario 2 simulates the effect of restricting movement of cattle so that trade is confined to herds' own regions. This prevents high-prevalence regions from ‘exporting’ infection, thereby protecting low-prevalence regions from external sources of S. Dublin infection. In practice, there are many possible options for controlling animal movement between regions (e.g. some regions may have restrictions placed on them but not others, some regions may import but not export, etc.). In this scenario we merely wish to obtain a general appreciation of the extent of benefit from restricting movement between regions and so apply the restriction to all regions. This scenario is implemented by activating a switch option built into the model and software which forces herds seeking replacement animals to acquire them only from the herd's home region.
Scenario 3 evaluates aspects of herd-level biosecurity. In cattle production, the chance that a herd acquires an infectious agent from another herd can be reduced by restricting the number of animals that are traded, reducing the frequency of trading and adopting a policy of only obtaining replacement animals from herds regarded as ‘low risk’. Notwithstanding the possibility that such practices can have a deleterious economic impact, the benefits accrued from applying this approach to the control of S. Dublin do need to be quantified. The ‘enhanced biosecurity’ scenario therefore limits all herds to no more than 12 purchase events per year (by truncating the input distribution used in the base scenario at 12 purchase events per year) and limits the number of animals acquired at any one purchase to 12 (by truncating the base scenario inputs for this variable at 12 animals per trade). In addition, the distributions describing the purchasing policy of herds within each region were altered as follows: both the proportion of herds with an ‘indiscriminate’ purchasing policy and the proportion of herds with a ‘conservative’ purchasing policy were halved, with the remaining proportion assigned a purchasing policy of ‘closed’.
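These input modifications are simple operations on the base-scenario distributions. A hedged Python sketch, under our reading of ‘truncating’ (probability mass above the cap is reassigned to the cap; all names are illustrative):

```python
def truncate_at(empirical, cap):
    """Cap an empirical discrete distribution: mass above `cap` is moved to `cap`."""
    out = {}
    for value, prob in empirical.items():
        key = min(value, cap)
        out[key] = out.get(key, 0.0) + prob
    return out

def enhanced_biosecurity_inputs(region):
    """Scenario 3 input changes for one region (sketch)."""
    region.purchases_per_year = truncate_at(region.purchases_per_year, 12)
    region.animals_per_purchase = truncate_at(region.animals_per_purchase, 12)
    p = region.buying_behaviour   # e.g. {'closed': 0.2, 'conservative': 0.5, 'indiscriminate': 0.3}
    p['indiscriminate'] /= 2.0
    p['conservative'] /= 2.0
    p['closed'] = 1.0 - p['indiscriminate'] - p['conservative']
```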
In Scenario 4 we evaluated the gains from testing herds more frequently by reducing the interval between BTM ELISA tests to 30 days (the current practice reflected in the base scenario is a 90-day BTM test interval). Such a practice would be expected to improve the predictive values of the surveillance classification scheme.
Scenario 5 examines the effect of enhanced control of S. Dublin at the herd level. As the number of level 2 herds in the Danish dairy industry is falling, it might soon be feasible to direct more resources at herds as soon as they become level 2, with the aim of hastening the elimination of the pathogen and thus increasing the pace of industry-wide control of S. Dublin. The effect of such measures would be to reduce the duration of time that individual herds spend in the true-positive period – by responding quickly to reduce the spread and severity of infection within the herd. Thus, in this scenario we halved the mean of the exponential distribution used to model the true-positive period in the base scenario, so that this period was simulated as an exponential distribution with a mean of 338 days. Presently, there are no data available to discern whether or not this extent of improvement in control of S. Dublin within herds is possible. However, the aim of this simulation was merely to obtain a general understanding of whether further investigation of this approach should be pursued.
Finally, we created scenario 6 by combining all the features of scenarios 2, 3, 4 and 5 to provide some indication of the maximum possible reduction in prevalence that might occur with this composite approach.
RESULTS
Model inputs from data analysis
Predictive values for the BTM ELISA at t=0 that were calculated for each region are shown in Table 2. Probability distributions derived and used to estimate the duration of each time period within the infection–recovery cycle for all herds regardless of region are shown in Table 3. Empirical probability distributions used to estimate the number of purchase events per year for each herd and the number of animals acquired at each purchase event are shown in Figure 2. Further exploratory analysis (using plots of various class intervals of number of purchase events per year) failed to reveal any dependency between these variables (plots not shown). The descriptive analysis of purchasing behaviour of herds in various regions is shown in Figure 3 and reveals that regions vary substantially with respect to this trait. Within-herd prevalence of infection data recovered from the Kongeå project are plotted as a probability distribution function in Figure 4.
Model outputs
Outputs from the model (prevalence of infected herds and prevalence of level 2 herds) occur in two formats. First, as time-series plots of the outputs from a single iteration of the model. This provides a picture of the behaviour of the model through time and is useful for interactive comparisons using the software (e.g. Figs 5 and 6) for visualizing differences between iterations and the impact of stochastic effects. The second form of output is the results from full simulations (scenarios) consisting of predictions for both outcomes (prevalence of infection and prevalence of level 2) at a given number of days in the future and repeated for the number of iterations. The outputs are analysed using box plots for each region of Denmark and a national summary. The box plots for all simulation scenarios are arranged in two panels (one for prevalence of infection and the other for prevalence of level 2) to illustrate the variability between and within simulation scenarios and between and within regions (Figs 7 and 8).
Figure 5 gives outputs from single iterations of the model under the base scenario (in the form of time-series plots of percent of herds classified as level 2) with four different levels of environmental transmission (EEP input variable). This output graphically illustrates the importance of environmental transmission of S. Dublin in cattle, the key role of the EEP variable in the model, and why a level of EEP=10⁻⁵ was used in subsequent simulations. When EEP=10⁻³ the percent of level 2 herds increases markedly over a 3-year period in a manner that is completely inconsistent with surveillance system results for recent years. When EEP=10⁻⁴ the proportion of level 2 herds is virtually static over a 10-year period. While this is possibly consistent with a static-endemic pattern of disease, it is inconsistent with surveillance data from recent years showing the percent of level 2 herds gradually falling. In contrast, EEP has very little effect on the model at values <10⁻⁵ (Fig. 5d shows the behaviour for EEP=10⁻⁶, which is identical to the output whenever EEP<10⁻⁶). However, at a value of EEP=10⁻⁵ the resulting time-series plot is the most consistent with the downward trend in the proportion of level 2 herds experienced in recent years, and for these reasons EEP=10⁻⁵ was used as the level of environmental transmission in the other simulation scenarios.
An illustration of the behaviour of the model is given by time-dependent predictions of the proportion of herds classified as level 2 and the proportion of infected herds from a single iteration of the base scenario (Fig. 6). There is initially substantial variation between regions for both outcomes but the inter-regional variation diminishes with time. It appears that once the true prevalence of herds infected with S. Dublin falls below about 10% (from 1 to 3 years depending on region) further reductions are gradual. The prevalence of level 2 herds is almost always greater than the true prevalence, and the reduction in prevalence of level 2 ‘lags’ the fall in herd true prevalence. The predictions at t=3650 days from 1000 iterations of the base scenario are presented in Figures 7 and 8 (provided for comparison with the other scenarios). At t=3650 days there is a national median of 3·25% of herds infected and a median of 4% classified as level 2.
Comparison of scenarios
Outputs for the simulation of restricted regional trading (scenario 2, national median herd prevalence after 10 years of 3·38%) were derived assuming herds can only acquire replacement cattle from other herds located in the same region. Compared to the base scenario (scenario 1), this restriction of the geographic movement of cattle delivers a dramatic benefit to those regions that have an initial low prevalence of level 2 herds (especially EJ and ISL, but also NJN and NWJ to a lesser extent). However, a penalty for the gains made at year 10 by these initially ‘low-prevalence’ regions is that the remaining regions (NJS, SJ and WJ) have a higher prevalence of infection (and level 2) than is the case under free trading.
The various measures used to mimic enhanced biosecurity (scenario 3: less frequent trading of cattle, smaller consignments of cattle during trading, and less high-risk trading) were predicted to have a dramatic impact on control of S. Dublin in Danish dairy herds. For example, the national (median) herd prevalence at 10 years is predicted to be 0·1% (compared to 3·25% in the base scenario), and every region shows more than a tenfold reduction compared to the base scenario. Although increasing the frequency of testing to once in 30 days (scenario 4) does improve the predicted outcomes at 10 years (national median herd prevalence at 10 years of 1·55%), the amount of this improvement is much smaller than that obtained with enhanced biosecurity (scenario 3) or enhanced control within infected herds (scenario 5, national median herd prevalence at 10 years of 0·18%). Scenario 5 suggests a very pronounced benefit if herds that become level 2 can rapidly eliminate infection from their animals. Although the output from the composite strategy shows the greatest improvement over the base scenario compared to all other scenarios (national median herd prevalence at 10 years of 0%), both the prevalence of infected herds and the prevalence of level 2 herds for the composite strategy are only marginally lower than those for the enhanced biosecurity scenario (scenario 3). These results indicate that the herd biosecurity component of the composite strategy had a dominant effect on the model predictions for the latter scenario.
DISCUSSION
We have demonstrated how a virtual hierarchy of objects can be useful for predicting the spread of infection in populations in the presence of surveillance and intervention programmes of varying complexity. This approach is a major departure from traditional methods for modelling diseases as it explicitly simulates the infection and surveillance status of each individual element at each level of the hierarchy instead of dealing with elements en masse. By dealing with individual objects in computer memory it is possible to assign them any number of attributes for modelling the course of disease and the impact of interventions. Although this approach to modelling disease is highly extensible, the degree to which this advantage can be exploited is limited by the extent of knowledge and data available from the population in question. Fortunately, there is an extensive body of information in the DCD and from earlier studies on S. Dublin in Denmark that was extremely useful for informing the development of the present model. By using a hierarchical structure to manage information in the model we avoided the complexity that arises with other programming techniques and which has previously discouraged the development of similar models. Aside from providing a natural representation of the population, the hierarchical approach yields a specific advantage of being able to estimate differences in S. Dublin herd prevalence between regions and through time.
In practical terms this study has highlighted opportunities for hastening the elimination of S. Dublin from the Danish dairy industry. The model predicts that decisive progress is possible if the amount of time that herds are infected can be reduced and if biosecurity with regard to trade of animals can be improved. In contrast, more frequent testing of BTM for antibody to S. Dublin promises far less gain. There is also a strong indication that future control should be tailored to suit particular regions, given the predicted disparity in prevalence estimates between regions even after many years of a control programme. For example, region-specific programmes could target aspects of herd biosecurity, since these practices, as assessed in scenario 3, had a strong influence in reducing herd-level prevalence (compared to scenario 1) but might not be implementable on a nationwide scale because they would demand too many resources. Herd-level biosecurity could also be combined with ‘regional biosecurity’, where herd managers within low-prevalence regions are encouraged to only acquire replacement animals from low-prevalence regions. Comparison of the output for scenario 1 (base scenario) and scenario 2 (restricted regional movement) suggests that some such form of ‘regional biosecurity’ would do much to protect the progress already made with the control of S. Dublin in low-prevalence regions.
The measures adopted in national disease control programmes are usually arrived at after a range of interest groups make a joint consideration of scientific, practical, economic and social factors. For this reason it is presently difficult to suggest which particular combination of the scenarios we have evaluated should be implemented, despite our results demonstrating that some approaches have clear advantages over others. Useful comparisons of the economic consequences of different approaches to control of S. Dublin are available for the dairy industry in The Netherlands [23], but may not be directly relevant to Denmark. Moreover, further work is needed on the feasibility and affordability of the measures identified here as useful. For example, the extent to which herds can be more rapidly cleared of infection by reducing the spread of the pathogen within level 2 herds is not well quantified, nor is it clear what resources would be required to achieve this. Nevertheless, while such information is being sought, the model can still be used to address decision options. We envisage this would involve combining the output of this study with the findings from additional scenarios arrived at during consultation with stakeholders. The model has a modern software interface so that any recommended strategies that emerge from this process can be interactively demonstrated to interest groups in the process of finalizing research priorities and policy directions.
Ignorance about the ecology of S. Dublin as it occurs outside of bovine hosts dictates that there is much uncertainty in the way we modelled transmission of this pathogen between herds by environmental pathways. It is clear from the results in Figure 5 that the manner and amount of environmental transmission occurring in nature is critically important, both in a practical setting for preventing new outbreaks and with respect to the interpretation of output from the present model. Although we used a constant rate of transmission through environmental pathways, this is less intuitively appealing than making the risk of environmental transmission a function of the regional prevalence of infected herds, or of the prevalence of infected herds in the immediate geographic vicinity of each individual herd. A greater understanding of environmental transfer of S. Dublin between herds is therefore of pressing importance. However, obtaining quantitative descriptions of the environmental transfer of S. Dublin will probably require a new development in methodology. Analysis of risk factors is a quantitative approach that has been used to examine aspects of environmental transfer in the past [11], but the outputs from this methodology are a coarse measurement of association and so are poorly suited for use in a simulation model.
Other caveats apply to the findings from this work. We used a range of input variables, most of which stay fixed as the model steps through time, and this may not always be appropriate. For example, we did not model changes in the size and number of dairy herds despite the likelihood that this will occur during the present period of restructuring in the Danish dairy industry. Nor did we change the duration of the various intervals in the infection–recovery cycle with time, alter patterns of trading of live cattle with time, or change the within-herd prevalence of infection with time. To include such relationships in the model would have amounted to substantial speculation due to the paucity of information on these subjects.
Although the virtual hierarchy approach was very suited to this work, it may be less useful when simulations involve very large populations (millions) owing to the demands on computer memory and processing speed. Despite these shortfalls, we consider that the general approach of a virtual hierarchy model, and the specific example involving S. Dublin in dairy cattle, does offer a transparent and objective alternative to other decision-making processes that could be applied in the present setting.
In summary, we have demonstrated a virtual hierarchy model for improving the basis of decisions aimed at controlling pathogens in populations of herds. The example of S. Dublin in cattle in Denmark was shown to be well suited to this approach because of the extensive amount of surveillance data and supporting studies available. Model outputs predict that the future approach for the control of S. Dublin in the Danish cattle industry could be based on a combination of enhanced herd-level controls once new infections are detected, improved animal trading practices and regional biosecurity measures.
ACKNOWLEDGEMENTS
The research was funded as a project of the International EpiLab (66032-0150) and performed at the International EpiLab in Denmark. The authors thank Jørgen Nielsen for collecting data from the DCD for the project.
DECLARATION OF INTEREST
None.