We correlate the annual Wolf numbers W and their time derivatives W′ by shifting time fragments of W and W′ relative to each other. The most significant correlation (up to 0.874) occurs at a 3-year shift for fragments covering 14 years. For longer and shorter periods, the correlation coefficients are 0.771–0.855 at shifts of 2–3 years. The most significant 9-year shift corresponds to anti-correlation coefficients of −0.852/−0.824 for 14/11-year periods. The other periods are less significant. To evaluate predictive estimates, we use time-series fragments of W shifted back into the past. A forecast can be made from the leading graphs using the derived calibration factor. Test calculations show that the most effective calibration factor is the one calculated for the changing phase of the cycle. The best linear pairwise correlation coefficient of the approximation is 0.94.
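The shift-and-correlate procedure can be sketched as follows. The synthetic sunspot-like series, the 14-year window and the shift range below are illustrative assumptions, not the authors' data:

```python
import numpy as np

def shifted_correlation(w, dw, shift, window):
    """Pearson correlation between a `window`-year fragment of W
    and the fragment of W' shifted forward by `shift` years."""
    x = w[:window]
    y = dw[shift:shift + window]
    return np.corrcoef(x, y)[0, 1]

# Hypothetical annual series: a noisy ~11-year sunspot-like cycle.
rng = np.random.default_rng(0)
t = np.arange(60)
w = 80 + 60 * np.sin(2 * np.pi * t / 11) + rng.normal(0, 5, 60)
dw = np.gradient(w)  # annual derivative W'

# Scan candidate shifts for 14-year fragments, as in the abstract.
best = max(range(10), key=lambda s: abs(shifted_correlation(w, dw, s, 14)))
```

The shift maximising |correlation| then serves as the lead time over which W′ fragments "lead" W.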
Climate change is resulting in global changes to sea level and wave climates, which in many locations significantly increase the probability of erosion, flooding and damage to coastal infrastructure and ecosystems. Therefore, there is a pressing societal need to be able to forecast the morphological evolution of our coastlines over a broad range of timescales, spanning days to decades, facilitating more focused, appropriate and cost-effective management interventions and data-informed planning to support the development of coastal environments. A wide range of modelling approaches have been used with varying degrees of success to assess both the detailed morphological evolution and/or simplified indicators of coastal erosion/accretion. This paper presents an overview of these modelling approaches, covering the full range of the complexity spectrum and summarising the advantages and disadvantages of each method. A focus is given to reduced-complexity modelling approaches, including models based on equilibrium concepts, which have emerged as a particularly promising methodology for the prediction of coastal change over multi-decadal timescales. The advantages of stable, computationally efficient, reduced-complexity models must be balanced against the requirement for good generality and skill in diverse and complex coastal settings. Significant obstacles are also identified that limit the generic application of models at regional and global scales. Challenges include the accurate long-term prediction of model forcing time series in a changing climate, and accounting for processes that can largely be ignored in the short term but increase in importance over the long term. Further complications include coastal complexities such as headland bypassing, whose impacts are difficult to assess accurately, as well as complex structures and geology, mixed grain sizes, limited sediment supply, and sediment sources and sinks.
It is concluded that with present computational resources, data availability limitations and process knowledge gaps, reduced-complexity modelling approaches currently offer the most promising solution to modelling shoreline evolution on daily-to-decadal timescales.
Psychologists typically measure beliefs and preferences using self-reports, whereas economists are much more likely to infer them from behavior. Prediction markets appear to be a victory for the economic approach, having yielded more accurate probability estimates than opinion polls or experts for a wide variety of events, all without ever asking for self-reported beliefs. We conduct the most direct comparison to date of prediction markets to simple self-reports using a within-subject design. Our participants traded on the likelihood of geopolitical events. Each time they placed a trade, they first had to report their belief that the event would occur on a 0–100 scale. When previously validated aggregation algorithms were applied to self-reported beliefs, they were at least as accurate as prediction-market prices in predicting a wide range of geopolitical events. Furthermore, the combination of approaches was significantly more accurate than prediction-market prices alone, indicating that self-reports contained information that the market did not efficiently aggregate. Combining measurement techniques across behavioral and social sciences may have greater benefits than previously thought.
Hepatitis E is an increasingly serious worldwide public health problem that has attracted extensive attention. It is necessary to accurately predict the incidence of hepatitis E to better plan future medical care. In this study, we developed a Bi-LSTM model that incorporates meteorological factors to predict the prevalence of hepatitis E. The hepatitis E data used in this study were collected from January 2005 to March 2017 by the Jiangsu Provincial Center for Disease Control and Prevention. ARIMA, GBDT, SVM, LSTM and Bi-LSTM models are adopted in this study. The data from January 2009 to September 2014 are used as the training set to fit the models, and data from October 2014 to March 2017 are used as the testing set to evaluate the predictive accuracy of the different models. Model selection and evaluation are based on mean absolute per cent error (MAPE), root mean square error (RMSE) and mean absolute error (MAE). A total of 44 923 cases of hepatitis E were detected in Jiangsu Province from January 2005 to March 2017. The average monthly incidence rate is 0.35 per 100 000 persons in Jiangsu Province. Incorporating the meteorological factors of temperature, water vapour pressure and rainfall in combination into the Bi-LSTM model achieved the best performance in predicting the monthly incidence of hepatitis E, with an RMSE of 0.044, a MAPE of 11.88% and an MAE of 0.0377. The Bi-LSTM model with the meteorological factors of temperature, water vapour pressure and rainfall can fully extract the linear and non-linear information in the hepatitis E incidence data, and significantly improves interpretability, learning ability, generalisability and prediction accuracy.
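The three evaluation metrics used above have standard definitions; a minimal sketch (the toy incidence values are hypothetical, not the study's data):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in per cent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(100 * np.mean(np.abs((y_true - y_pred) / y_true)))

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

# Toy monthly incidence values (hypothetical, per 100 000 persons).
obs = [0.30, 0.35, 0.40, 0.33]
pred = [0.32, 0.34, 0.38, 0.35]
scores = (mape(obs, pred), rmse(obs, pred), mae(obs, pred))
```

MAPE is scale-free, while RMSE and MAE keep the units of the incidence series, which is why the study reports all three.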
This article examines whether professional forecasters form their expectations regarding the policy rate of the European Central Bank (ECB) consistently with the Taylor rule. In doing so, we assess micro-level data comprising individual forecasts for the ECB main refinancing operations rate as well as inflation and gross domestic product (GDP) growth for the Euro Area. Our results indicate that professionals indeed form their expectations in line with the Taylor rule. However, this connection has diminished over time, especially after the policy rate hit the zero lower bound. In addition, we find a relationship between forecasters’ disagreement regarding the policy rate of the ECB and disagreement on future GDP growth, which disappears when controlling for monetary policy shocks proxied by changes in the policy rate in the quarter the forecasts are made.
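For reference, the classic Taylor (1993) rule against which such expectations can be benchmarked takes the following form; the coefficient values below are Taylor's original ones, not necessarily those estimated in the article:

```python
def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0,
                a_pi=0.5, a_y=0.5):
    """Nominal policy rate implied by the classic Taylor rule
    (all arguments and the result in per cent): equilibrium real
    rate plus inflation, plus responses to the inflation gap and
    the output gap."""
    return r_star + inflation + a_pi * (inflation - pi_target) + a_y * output_gap
```

For example, with inflation at target and a closed output gap the rule prescribes the neutral nominal rate r* + π.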
Chapter V applies concepts from preceding chapters to investigate the future character of war and war’s future as a human activity. The chapter begins by exploring the challenge of future forecasting and strategy development, which is compounded by the accumulation of human and environmental effects that invalidate assumptions. The chapter asserts that forecasting may be improved by considering three sets of factors: history and trends, current circumstances, and theory. Next, it explores political, technological, and doctrinal developments that could impact war’s future character, like artificial intelligence and nanotechnology, and provides strategic advice for both high and low capacity groups. The chapter’s latter half uses the history-current circumstances-theory model to assess the feasibility and desirability of ending war forever. Using evidence from archaeology, anthropology, history, trends, and war and peace theories, the chapter concludes that war’s existence is inextricably linked to humanity, i.e., eliminating either eliminates both. It wraps up by offering practical suggestions for minimizing the potential for war.
Using monthly data from the 2013–2016 Ebola outbreak in West Africa, we compared two calibrations for data fitting: least squares (SSE) and weighted least squares (SWSE), with weights reciprocal to the number of new infections. To compare (in hindsight) forecasts for the final disease size (the actual value was observed at month 28 of the outbreak), we fitted Bertalanffy–Pütter growth models to truncated initial data (the first 11, 12, …, 28 months). The growth curves identified the epidemic peak at month 10, and the relative errors of the forecasts (asymptotic limits) were below 10% if 16 or more months were used; for SWSE the relative errors were smaller than for SSE. However, the calibrations differed insofar as for SWSE there were good-fitting models that forecasted reasonable upper and lower bounds, while SSE was biased, as the forecasts of good-fitting models systematically underestimated the final disease size. Furthermore, for SSE the normal-distribution hypothesis for the fit residuals was refuted, while the corresponding hypothesis for SWSE was not. We therefore recommend considering SWSE for epidemic forecasts.
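The two objective functions being compared can be written down directly; a minimal sketch (the residuals and case counts here are placeholders, not the Ebola data):

```python
import numpy as np

def sse(residuals):
    """Ordinary least-squares objective: sum of squared residuals."""
    r = np.asarray(residuals, float)
    return float(np.sum(r ** 2))

def swse(residuals, new_cases):
    """Weighted least-squares objective with weights reciprocal to
    the number of new infections, as described in the abstract."""
    r = np.asarray(residuals, float)
    w = 1.0 / np.asarray(new_cases, float)
    return float(np.sum(w * r ** 2))
```

The reciprocal weighting down-weights months with many new infections, which is what reduces the systematic underestimation of the final size seen under plain SSE.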
The UK is one of the epicenters of the coronavirus disease (COVID-19) pandemic. As of April 14, there had been 93 873 confirmed cases of COVID-19 in the UK and 12 107 deaths with confirmed infection. On April 14, it was reported that COVID-19 was the cause of more than half of the deaths in London.
Methods:
The present paper addresses the modeling and forecasting of the outbreak of COVID-19 in the UK. The modeling is accomplished through a 2-part time series model to study the numbers of confirmed cases and deaths. The forecast period was 46 days, from April 15 to May 30, 2020. All computations and simulations were conducted in Matlab R2015b, and the average curves and confidence intervals were calculated from 100 simulations of the fitted models.
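Constructing an average curve and a central confidence band from an ensemble of simulated trajectories can be sketched as follows; the Poisson-increment toy ensemble is an illustrative assumption, not the paper's fitted model:

```python
import numpy as np

def percentile_band(simulations, level=0.80):
    """Average curve and central `level` probability band from an
    ensemble of simulated trajectories (rows = simulations,
    columns = days)."""
    sims = np.asarray(simulations, float)
    lo, hi = np.percentile(sims, [50 * (1 - level), 50 * (1 + level)], axis=0)
    return sims.mean(axis=0), lo, hi

# 100 hypothetical simulated cumulative-case curves over 46 days.
rng = np.random.default_rng(1)
sims = np.cumsum(rng.poisson(3000, size=(100, 46)), axis=1)
mean_curve, lower, upper = percentile_band(sims, 0.80)
```

For an 80% band this takes the 10th and 90th percentiles of the ensemble at each day.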
Results:
According to the obtained model, we expect the cumulative number of confirmed cases to reach 282 000 (80% confidence interval: 242 000 to 316 500) by May 30, from 93 873 on April 14. In addition, over this period the number of daily new confirmed cases is expected to fall to the interval 1330 to 6450 with probability 0.80, with a point estimate of about 3100. Regarding deaths, our model establishes that the real case fatality rate of the pandemic in the UK approaches 11% (80% confidence interval: 8%–15%). Accordingly, we forecast that total deaths in the UK will rise to 35 000 (28 000–50 000 with probability 80%).
Conclusions:
A drawback of this study is the shortage of observations. To conduct a more exact study, the number of tests could be taken into account as an explanatory variable in addition to time.
The aim of this study was to assess the risks in confronting the coronavirus disease 2019 (COVID-19) pandemic and the effectiveness of the ongoing lockdowns in Italy, Germany, Spain, France, and the United States, using a simulation of China’s lockdown model, and to forecast cases until the plateau phase.
Methods:
Quantitative and qualitative historical data analysis. Total Risk Assessment (TRA) evaluation tool was used to assess the pre-pandemic stage risks, pandemic threshold fast responsiveness, and the ongoing performance until plateau. The Infected Patient Ratio (IPR) tool was developed to measure the number of patients resulting from 1 infector during the incubation period. Both IPR and TRA were used together to forecast inflection points, plateau phases, intensive care units’ and ventilators’ breakpoints, and the Total Fatality Ratio.
Results:
In Italy, Spain, France, Germany, and the United States, an inflection point is predicted within the first 15 days of April, with a plateau reached after another 30 to 80 days. Variations in the IPR drop are expected due to variations in lockdown timing in each country, the extent of adherence to it, and the number of tests performed in each.
Conclusions:
Both qualitative (TRA) and quantitative (IPR) tools can be used together for assessing and minimizing the pandemic risks and for more precise forecasting.
Influenza activity is subject to environmental factors. Accurate forecasting of influenza epidemics would permit timely and effective implementation of public health interventions, but it remains challenging. In this study, we aimed to develop random forest (RF) regression models incorporating meteorological factors to predict seasonal influenza activity in Jiangsu province, China. The coefficient of determination (R2) and mean absolute percentage error (MAPE) were employed to evaluate the models' performance. Three RF models with optimum parameters were constructed to predict influenza-like illness (ILI) activity and influenza A and B (Flu-A and Flu-B) positive rates in Jiangsu. The models for Flu-B and ILI presented excellent performance, with MAPEs <10%. The predicted values of the Flu-A model also matched the real trend very well, although its MAPE reached 19.49% in the test set. The lagged dependent variables were vital predictors in each model. Seasonality was more pronounced in the models for ILI and Flu-A. The modification effects of the meteorological factors and their lagged terms on prediction accuracy differed across the three models, while temperature always played an important role. Notably, atmospheric pressure made a major contribution to ILI and Flu-B forecasting. In brief, RF models performed well in influenza activity prediction. The impacts of meteorological factors on predictive models for influenza activity are type-specific.
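The "lagged dependent variables" that the abstract identifies as vital predictors are built by aligning shifted copies of the target series; a minimal sketch (the series values and lag choices are illustrative, not the Jiangsu data):

```python
import numpy as np

def lag_matrix(series, lags):
    """Build a design matrix of lagged dependent variables
    (e.g. ILI activity at t-1, t-2, ...) alongside the aligned
    target vector, ready for a forest-style regressor."""
    y = np.asarray(series, float)
    max_lag = max(lags)
    # One column per lag k, holding y shifted back by k steps.
    X = np.column_stack([y[max_lag - k:len(y) - k] for k in lags])
    target = y[max_lag:]
    return X, target

# Toy weekly ILI series with lags 1 and 2 as features.
ili = [1.0, 2.0, 3.0, 4.0, 5.0]
X, y = lag_matrix(ili, [1, 2])
```

Meteorological covariates (and their own lags) would then be appended as extra columns of `X` before fitting the random forest.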
Diagnosis, treatment, and prevention of vector-borne disease (VBD) in pets is one cornerstone of companion animal practices. Veterinarians are facing new challenges associated with the emergence, reemergence, and rising incidence of VBD, including heartworm disease, Lyme disease, anaplasmosis, and ehrlichiosis. Increases in the observed prevalence of these diseases have been attributed to a multitude of factors, including diagnostic tests with improved sensitivity, expanded annual testing practices, climatologic and ecological changes enhancing vector survival and expansion, emergence or recognition of novel pathogens, and increased movement of pets as travel companions. Veterinarians have the additional responsibility of providing information about zoonotic pathogen transmission from pets, especially to vulnerable human populations: the immunocompromised, children, and the elderly. Hindering efforts to protect pets and people is the dynamic and ever-changing nature of VBD prevalence and distribution. To address this deficit in understanding, the Companion Animal Parasite Council (CAPC) began efforts to annually forecast VBD prevalence in 2011. These forecasts provide veterinarians and pet owners with expected disease prevalence in advance of potential changes. This review summarizes the fidelity of VBD forecasts and illustrates the practical use of CAPC pathogen prevalence maps and forecast data in the practice of veterinary medicine and client education.
As a benchmark mortality model for forecasting future mortality rates and hedging longevity risk, the widely employed Lee–Carter model (Lee, R.D. and Carter, L.R. (1992) Modeling and forecasting U.S. mortality. Journal of the American Statistical Association, 87, 659–671) suffers from a restrictive constraint on the unobserved mortality index, imposed to ensure the model’s identification, and from possibly inconsistent inference. Recently, a modified Lee–Carter model (Liu, Q., Ling, C. and Peng, L. (2018) Statistical inference for Lee–Carter mortality model and corresponding forecasts. North American Actuarial Journal, to appear) removes this constraint, and a simple least squares estimator is consistent with a normal limit when the mortality index follows a unit root or near unit root AR(1) model with a nonzero intercept. This paper proposes a bias-corrected estimator for this modified Lee–Carter model, which is consistent and has a normal limit regardless of whether the mortality index is a stationary, near unit root, or unit root AR(1) process with a nonzero intercept. Applications to US mortality rates and a simulation study are provided as well.
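For reference, the Lee–Carter structure under discussion is the standard log-bilinear mortality model; the notation below is the conventional one rather than copied from the cited papers:

```latex
\log m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t},
\qquad
k_t = c + \phi\, k_{t-1} + e_t, \quad c \neq 0,
```

where $m_{x,t}$ is the death rate at age $x$ in year $t$, $a_x$ and $b_x$ are age profiles, and $k_t$ is the unobserved mortality index. The classical formulation imposes the identification constraints $\sum_x b_x = 1$ and $\sum_t k_t = 0$; the modified model removes the constraint on $k_t$ and lets $\phi$ be at or near unity, which is the unit root or near unit root AR(1) case the abstract refers to.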
Dengue fever is a disease with increasing incidence, now occurring in some regions that were not previously affected. Ribeirão Preto and São Paulo, municipalities in São Paulo state, Brazil, have been highlighted due to their high dengue incidences, especially after 2009 and 2013. The current study therefore aims to analyse the temporal behaviour of dengue cases in both municipalities and to forecast the number of disease cases in the out-of-sample period using time series models, especially the SARIMA model. We fitted SARIMA models that satisfactorily fit the dengue incidence data collected in the municipalities of Ribeirão Preto and São Paulo. However, the out-of-sample forecast confidence intervals are very wide, a fact that is usually omitted in several papers. Despite the high variability, health services can use these models to anticipate disease scenarios; however, the forecasts should be interpreted with prudence, since the magnitude of an epidemic may be underestimated.
Tick-borne encephalitis is a serious arboviral infection with unstable dynamics and profound inter-annual fluctuations in case numbers. A dependable predictive model has been sought since the discovery of the disease. The present study demonstrates that four superimposed cycles, approximately 2.4, 3, 5.4, and 10.4 years long, can account for three-fifths of the variation in the disease fluctuations over central Europe. Using harmonic regression, these cycles can be projected into the future, yielding forecasts of sufficient accuracy for up to 4 years ahead. For the years 2016–2018, this model predicts elevated incidence levels in most parts of the region.
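Harmonic regression of this kind amounts to ordinary least squares on sine and cosine columns, one pair per cycle; a minimal sketch (the year range is illustrative, and the observed case counts are left as a placeholder since the study's data are not reproduced here):

```python
import numpy as np

def harmonic_design(t, periods):
    """Design matrix for harmonic regression: an intercept column
    plus a sine and cosine column for each cycle period."""
    t = np.asarray(t, float)
    cols = [np.ones_like(t)]
    for p in periods:
        w = 2 * np.pi / p
        cols += [np.sin(w * t), np.cos(w * t)]
    return np.column_stack(cols)

# The four superimposed cycle lengths reported in the abstract.
periods = [2.4, 3.0, 5.4, 10.4]
years = np.arange(1980, 2016)  # hypothetical observation window
X = harmonic_design(years, periods)
# With observed annual counts `cases`, the fit and projection would be:
# beta, *_ = np.linalg.lstsq(X, cases, rcond=None)
# forecast = harmonic_design(np.arange(2016, 2019), periods) @ beta
```

Because each column is a fixed function of time, projecting the fit forward only requires evaluating the same design matrix at future years.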
It is well known that the energy for solar eruptions comes from magnetic fields in solar active regions. Magnetic energy storage and dissipation are regarded as important physical processes in the solar corona. With incomplete theoretical modeling of eruptions in the solar atmosphere, activity forecasting is mainly supported by statistical models. Continuous solar observations from space with high temporal and spatial resolution describe the evolution of activity in the solar atmosphere well and, combined with three-dimensional reconstruction of solar magnetic fields, make numerical short-term (hours to days) solar activity forecasting possible. In the current report, we propose the erupting frequency and main attack direction of solar eruptions as new forecast quantities and present the prospects for numerical short-term solar activity forecasting based on the magnetic topological framework of solar active regions.
Space weather processes are, in general, non-linear and time-varying. In such cases, ‘data-driven models’ such as neural network, fuzzy logic and genetic algorithm based models have proved promising for use in parallel with mathematical models based on first physical principles. In particular, with the recent developments in ‘big data’ systems, one of the urgent issues is the development of new signal processing techniques to extract manageable, representative data out of the relevant big data to be employed in the training, testing and validation phases of model construction. Since 1990, under the EU Framework Programme Actions, we have developed such models for nowcasting, forecasting, warning and filling data gaps in space weather applications, including the prediction of orbital spacecraft parameters. Typical, illustrative examples include the forecasting of the ionospheric critical frequency foF2 during disturbed conditions, such as solar storms and extreme events; GPS total electron content (TEC); the solar flare index during solar maximum; and the construction of solar EUV flux variations. The associated input data organisation and the typical errors, which have been within acceptable operational expectations, are summarised in terms of absolute values, per cent and RMS. The aim of the paper is to show that data-driven approaches are promising for the forecasting of space weather.
The transmission of haemorrhagic fever with renal syndrome (HFRS) is influenced by climatic, reservoir and environmental variables. The epidemiology of the disease was studied over a 6-year period in Changsha. Variables relating to climate, environment, rodent host distribution and disease occurrence were collected monthly and analysed using a time-series adjusted Poisson regression model. It was found that the density of the rodent host and multivariate El Niño Southern Oscillation index had the greatest effect on the transmission of HFRS with lags of 2–6 months. However, a number of climatic and environmental factors played important roles in affecting the density and transmission potential of the rodent host population. It was concluded that the measurement of a number of these variables could be used in disease surveillance to give useful advance warning of potential disease epidemics.
Even over a 75-year horizon, forecasts of PAYGO pension finances are misleadingly optimistic. Infinite horizon forecasts are necessary, but are they possible? We build on earlier stochastic forecasts of the US Social Security trust fund, which model key demographic and economic variables as historical time series and use the fitted models to generate Monte Carlo simulations of future fund performance. Using a 500-year stochastic projection, effectively infinite with discounting, we find a fund balance of −5.15 per cent of payroll, compared to the −3.5 per cent of the 2004 Trustees’ Report, probably reflecting different mortality projections. Our 95 per cent probability bounds are −10.5 and −1.3 per cent. Such forecasts, which reflect only ‘routine’ uncertainty, have many problems but nonetheless seem worthwhile.
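The discounted-simulation idea can be sketched in a few lines. Everything below is illustrative: the AR(1) cash-flow process, its parameters, and the discount rate are assumptions for the sketch, not the authors' fitted demographic and economic models:

```python
import numpy as np

def project_balance(n_years=500, n_sims=1000, discount=0.03, seed=0):
    """Monte Carlo sketch: simulate an AR(1) annual net cash flow
    (in per cent of payroll), discount over a 500-year horizon, and
    report the mean discounted balance with 95% probability bounds.
    All process parameters are hypothetical."""
    rng = np.random.default_rng(seed)
    phi, mu, sigma = 0.9, -3.5, 1.0  # illustrative AR(1) parameters
    flows = np.empty((n_sims, n_years))
    x = np.full(n_sims, mu)
    for t in range(n_years):
        x = mu + phi * (x - mu) + rng.normal(0, sigma, n_sims)
        flows[:, t] = x
    disc = (1 + discount) ** -np.arange(1, n_years + 1)
    pv = flows @ disc  # discounted sum per simulation
    return pv.mean(), np.percentile(pv, [2.5, 97.5])

balance, bounds = project_balance()
```

Because the discount factors shrink geometrically, extending the horizon beyond a few hundred years changes the present value negligibly, which is why a 500-year projection is "effectively infinite".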
We introduce a new index that explores the linkage between business-cycle fluctuations and deviations from long-run economic relationships. This index is virtually a measure of the distance between an attractor, a space spanned by the associated cointegrating vectors, and a point in the n-dimensional Euclidean space. The index is applied to U.S. quarterly data to demonstrate its association with an economy's vulnerability state. We find that the average of the index during expansions negatively correlates with the average contraction in output during recessions. A nonlinear error correction model based on a revised version of the index reveals a forecasting gain as compared to the linear error correction model.

The authors gratefully acknowledge helpful comments from two anonymous referees, Ming-Li Chen, Ching-Fan Chung, Jin-Lung Lin, and participants of the 2001 winter meeting of the Econometric Society in New Orleans and the workshop at the Institute of Economics, Academia Sinica. Yau received financial support from the National Science Council of Taiwan under grant NSC 89-2415-H-030-011.
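The geometric core of such an index, the Euclidean distance from a point to a subspace, can be computed with an orthogonal projection; a minimal sketch (the basis matrix and point here are toy values, not the paper's cointegrating vectors):

```python
import numpy as np

def distance_to_subspace(x, B):
    """Euclidean distance from point x to the subspace spanned by
    the columns of B, via the orthogonal projection residual."""
    x = np.asarray(x, float)
    B = np.asarray(B, float)
    P = B @ np.linalg.pinv(B)   # orthogonal projector onto span(B)
    return float(np.linalg.norm(x - P @ x))
```

Tracking this distance quarter by quarter, with `x` the vector of macro variables and `B` the estimated cointegrating directions, yields a scalar series measuring how far the economy sits from its long-run attractor.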