Lung cancer ranks among the leading causes of cancer mortality, according to the most recent World Health Organization report. Proton therapy offers a precise approach to treating lung cancer by delivering protons with high accuracy to the targeted site. However, inaccuracies in proton delivery can lead to increased toxicity in healthy tissues. This study aims to investigate the correlation between proton beam dose profiles in lung tumours and the scattered gamma particles.
Material and methods:
The study utilised the GATE simulation toolkit to model proton beam irradiation and an imaging system for prompt gamma imaging during proton therapy. An anthropomorphic non-uniform rational B-spline (NURBS)-based cardiac-torso (NCAT) phantom was employed to replicate lung tumours of various sizes. The imaging system comprised a multi-slit collimation system, CsI(Tl) scintillator arrays and a multichannel data acquisition system. Simulations were conducted to explore the relationship between prompt gamma detection and proton range for different tumour sizes.
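To make the range-monitoring idea concrete, the following minimal sketch (illustrative only, not the authors' code) estimates a proton-range proxy from a one-dimensional prompt-gamma depth profile by locating the 50% distal falloff of the detected counts; the profile shape and bin spacing are hypothetical.

```python
# Minimal sketch (not the authors' code): estimating a proton-range proxy
# from a 1-D prompt-gamma depth profile, assuming counts have already been
# binned per collimator slit position.
import numpy as np

def distal_falloff_position(depth_mm, counts, level=0.5):
    """Return the depth at which the profile falls to `level` of its
    maximum on the distal side of the peak (linear interpolation)."""
    counts = np.asarray(counts, dtype=float)
    i_peak = int(np.argmax(counts))
    threshold = level * counts[i_peak]
    for i in range(i_peak, len(counts) - 1):
        if counts[i] >= threshold > counts[i + 1]:
            # Linear interpolation between the two bracketing bins
            frac = (counts[i] - threshold) / (counts[i] - counts[i + 1])
            return depth_mm[i] + frac * (depth_mm[i + 1] - depth_mm[i])
    return depth_mm[-1]

# Hypothetical profile: gamma yield rising toward the peak, then a sharp falloff
depth = np.arange(0, 120, 2.0)               # mm
profile = np.exp(-((depth - 80) / 25) ** 2)  # toy shape, for illustration only
profile[depth > 80] *= np.exp(-(depth[depth > 80] - 80) / 4)
print(f"Estimated falloff depth: {distal_falloff_position(depth, profile):.1f} mm")
```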
Results:
Following 60 MeV proton irradiation of the NCAT phantom, the study examined the gamma energy spectrum, identifying peak intensities at energies of 2.31, 3.8, 4.44, 5.27 and 6.13 MeV. Adjusting the proton beam source to each tumour size achieved a target coverage rate of 98%. Optimal energies ranging from 77 to 91.5 MeV were determined for the varying tumour volumes, supported by dose distribution profiles and prompt gamma distribution illustrations.
Discussion:
The study evaluated the viability of utilising 2D gamma imaging with a multi-slit collimator scintillation camera for real-time monitoring of dose delivery during proton therapy for lung cancer. The findings indicated that this method is most suitable for small lung tumours (radius ≤ 12 mm) due to reduced gamma emission from larger tumours.
Conclusion:
While the study demonstrates promising results in range estimation using prompt gamma particles, challenges were encountered in accurately estimating the proton range for large tumours with this method.
The solid Earth's medium is heterogeneous over a wide range of scales. Seismological observations, including envelope broadening with increasing distance from an earthquake source and the excitation of long-lasting coda waves, provide a means of investigating velocity inhomogeneities in the lithosphere. These phenomena have been studied primarily using radiative transfer theory with random medium modelling. This book presents the mathematical foundations of scalar- and vector-wave scattering in random media, using the Born or Eikonal approximation, which are useful for understanding random inhomogeneity spectra and the scattering characteristics of the solid Earth. A step-by-step Monte Carlo simulation procedure is presented for synthesizing the propagation of energy density for impulsive radiation from a source in random media. Simulation results are then verified by comparison with analytical solutions and finite-difference simulations. Presenting the latest seismological observations and analysis techniques, this is a useful reference for graduate students and researchers in geophysics and physics.
This article introduces a comprehensive framework that effectively combines experience rating and exposure rating approaches in reinsurance for both short-tail and long-tail businesses. The generic framework applies to all nonlife lines of business and products, with an emphasis on nonproportional treaty business. The approach is based on three pillars that enable a coherent usage of all available information. The first pillar comprises an exposure-based generative model that emulates the generative process leading to the observed claims experience. The second pillar encompasses a standardized reduction procedure that maps each high-dimensional claim object to a few weakly coupled reduced random variables. The third pillar comprises calibrating the generative model with retrospective Bayesian inference. The derived calibration parameters are fed back into the generative model, and the reinsurance contracts covering future cover periods are rated by projecting the calibrated generative model to the cover period and applying the future contract terms.
In this manuscript, we address open questions raised by Dieker and Yakir (2014), who proposed a novel method of estimating (discrete) Pickands constants $\mathcal{H}^\delta_\alpha$ using a family of estimators $\xi^\delta_\alpha(T)$, $T>0$, where $\alpha\in(0,2]$ is the Hurst parameter, and $\delta\geq0$ is the step size of the regular discretization grid. We derive an upper bound for the discretization error $\mathcal{H}_\alpha^0 - \mathcal{H}_\alpha^\delta$, whose rate of convergence agrees with Conjecture 1 of Dieker and Yakir (2014) in the case $\alpha\in(0,1]$ and agrees up to logarithmic terms for $\alpha\in(1,2)$. Moreover, we show that all moments of $\xi_\alpha^\delta(T)$ are uniformly bounded and the bias of the estimator decays no slower than $\exp\{-\mathcal{C} T^{\alpha}\}$ as $T$ becomes large.
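For orientation, in one standard parametrisation (the paper's own conventions may differ slightly), the discrete Pickands constant appearing above can be written as
$$\mathcal{H}_\alpha^\delta=\lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}\exp\Big(\sup_{t\in\delta\mathbb{Z}\cap[0,T]}\big(\sqrt{2}\,B_{\alpha/2}(t)-t^{\alpha}\big)\Big),$$
where $B_{\alpha/2}$ is a standard fractional Brownian motion with $\operatorname{Var}\big(B_{\alpha/2}(t)\big)=t^{\alpha}$, and $\mathcal{H}_\alpha^0=\mathcal{H}_\alpha$ recovers the classical constant by taking the supremum over all of $[0,T]$.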
Qu, Dassios, and Zhao (2021) suggested an exact simulation method for tempered stable Ornstein–Uhlenbeck processes, but their algorithms contain some errors. This short note aims to correct their algorithms and conduct some numerical experiments.
The Istanbul metroplex airspace, home to Atatürk (LTBA), Sabiha Gökçen (LTFJ), and Istanbul (LTFM) international airports, is a critical hub for international travel, trade and commerce between Europe and Asia. The high air traffic volume and the proximity of multiple airports make air traffic management (ATM) a significant challenge. To better manage this complex air traffic, it is necessary to conduct detailed analyses of the capacities of these airports and the surrounding airspace. In this study, Monte Carlo simulation is used to determine the ultimate and practical capacities of the airports and surrounding airspace and to compare them to identify any differences or limitations. The traffic mix, runway occupancy time and traffic distribution at airspace entry points are treated as randomised variables that directly impact airport and airspace capacities and delays. The study aims to determine the current capacities of the runways and routes in the metroplex airspace and to project future capacities with the addition of new facilities. The results demonstrated that the actual bottleneck could be experienced in the airspace rather than on the runways, which have been the focus of the previous literature. Thus, this study will provide valuable insights for stakeholders in the aviation industry to effectively manage air traffic in the metroplex airspace and meet the growing demand.
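The following sketch illustrates the general flavour of such a capacity simulation (the parameters and separation rules are hypothetical, not those of the study): arrivals are generated with a randomised traffic mix, runway occupancy time and delivery imprecision, and hourly movement counts are averaged over many replications.

```python
# Illustrative sketch (hypothetical parameters, not the study's model):
# Monte Carlo estimate of ultimate runway arrival capacity, randomising
# the traffic mix and runway occupancy time (ROT).
import random

# Hypothetical wake-category mix and minimum in-trail separations (seconds)
MIX = {"Heavy": 0.25, "Medium": 0.65, "Light": 0.10}
MIN_SEPARATION_S = {"Heavy": 90, "Medium": 75, "Light": 70}

def simulate_hourly_arrivals(n_hours=10_000, seed=42):
    rng = random.Random(seed)
    counts = []
    for _ in range(n_hours):
        t, n = 0.0, 0
        while t < 3600:
            cat = rng.choices(list(MIX), weights=MIX.values())[0]
            rot = rng.uniform(45, 70)              # runway occupancy time, s
            gap = max(MIN_SEPARATION_S[cat], rot)  # runway must be free before next arrival
            t += rng.normalvariate(gap, 5)         # delivery imprecision
            n += 1
        counts.append(n)
    return sum(counts) / len(counts)

print(f"Estimated ultimate arrival capacity: {simulate_hourly_arrivals():.1f} movements/hour")
```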
Monte Carlo (MC) simulations of interlayer molecular structure in monolayer hydrates of Na-saturated Wyoming-type montmorillonites and vermiculite were performed. Detailed comparison of the simulation results with experimental diffraction and thermodynamic data for these clay-water systems indicated good semiquantitative to quantitative agreement. The MC simulations revealed that, in the monolayer hydrate, interlayer water molecules tend to increase their occupation of the midplane as layer charge increases. As the percentage of tetrahedral layer charge increases, water molecules are induced to interact with the siloxane surface O atoms through hydrogen bonding and Na+ counter-ions are induced to form inner-sphere surface complexes. These results suggest the need for careful diffraction experiments on a series of monolayer hydrates of montmorillonite whose layer charge and tetrahedral isomorphic substitution charge vary systematically.
Monte Carlo (MC) simulations of molecular structure in the interlayers of 2:1 Na-saturated clay minerals were performed to address several important simulation methodological issues. Investigation was focused on monolayer hydrates of the clay minerals because these systems provide a severe test of the quality and sensitivity of MC interlayer simulations. Comparisons were made between two leading models of the water-water interaction in condensed phases, and the sensitivity of the simulations to the size or shape of the periodically-repeated simulation cell was determined. The results indicated that model potential functions permitting significant deviations from the molecular environment in bulk liquid water are superior to those calibrated to mimic the bulk water structure closely. Increasing the simulation cell size or altering its shape from a rectangular 21.12 Å × 18.28 Å × 6.54 Å cell (about eight clay mineral unit cells) had no significant effect on the calculated interlayer properties.
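The interlayer simulations described above are typically Metropolis-type samplings; the generic sketch below shows the acceptance step for a trial displacement of a single interlayer molecule. The `energy_fn` placeholder is hypothetical: the published work uses specific clay-water interaction potentials not reproduced here.

```python
# Generic Metropolis Monte Carlo step (illustrative only; the published
# simulations use specific clay-water potential models not shown here).
import math
import random

K_B = 0.0019872041  # Boltzmann constant in molar units, kcal/(mol*K)

def metropolis_step(coords, energy_fn, temperature_k=300.0, max_disp=0.2, rng=random):
    """Attempt a random displacement of one molecule; accept with the
    Metropolis criterion exp(-dE / kT)."""
    i = rng.randrange(len(coords))
    old = coords[i]
    trial = tuple(x + rng.uniform(-max_disp, max_disp) for x in old)
    d_e = energy_fn(coords, i, trial) - energy_fn(coords, i, old)
    if d_e <= 0 or rng.random() < math.exp(-d_e / (K_B * temperature_k)):
        coords[i] = trial   # accept the move
        return True
    return False            # reject, keep the old configuration

# Toy usage: harmonic "energy" pulling molecules toward the interlayer midplane (z = 0)
def toy_energy(coords, i, pos):
    return 5.0 * pos[2] ** 2

water = [(0.0, 0.0, 0.3), (2.1, 1.4, -0.2), (4.2, 2.8, 0.1)]
accepted = sum(metropolis_step(water, toy_energy) for _ in range(1000))
print(f"acceptance rate: {accepted / 1000:.2f}")
```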
Chapter 6 demonstrates one way that RIO can be used for exploratory data analysis: identifying statistically significant interaction terms. We show how exploring the relationships among cases offers important insights into the relationships between variables.
The adoption of genomic technologies in the context of hospital-based health technology assessment presents multiple practical and organizational challenges.
Objective
This study aimed to assist the Instituto Português de Oncologia de Lisboa Francisco Gentil (IPO Lisboa) decision makers in analyzing which acute myeloid leukemia (AML) genomic panel contracting strategies had the highest value-for-money.
Methods
A tailored, three-step approach was developed, which included: mapping clinical pathways of AML patients, building a multicriteria value model using the MACBETH approach to evaluate each genomic testing contracting strategy, and estimating the cost of each strategy through Monte Carlo simulation modeling. The value-for-money of three contracting strategies – “Standard of care (S1),” “FoundationOne Heme test (S2),” and “New diagnostic test infrastructure (S3)” – was then analyzed through strategy landscape and value-for-money graphs.
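As a hedged illustration of the cost-estimation step, the sketch below propagates uncertainty in annual test volume and unit panel cost through a Monte Carlo simulation to obtain a cost distribution for a single contracting strategy; all figures are placeholders, not IPO Lisboa data.

```python
# Illustrative sketch (hypothetical figures, not IPO Lisboa data): Monte Carlo
# propagation of uncertainty in annual test volume and unit cost to obtain a
# cost distribution for one contracting strategy.
import numpy as np

rng = np.random.default_rng(7)
N = 100_000

annual_patients = rng.poisson(lam=120, size=N)              # newly diagnosed AML patients/year
tests_per_patient = rng.triangular(1.0, 1.5, 2.5, size=N)   # diagnosis plus monitoring
unit_cost = rng.normal(loc=3000, scale=400, size=N)         # cost per panel, EUR

annual_cost = annual_patients * tests_per_patient * unit_cost
print(f"Mean annual cost: EUR {annual_cost.mean():,.0f}")
print(f"90% interval: EUR {np.percentile(annual_cost, 5):,.0f} - {np.percentile(annual_cost, 95):,.0f}")
```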
Results
Implementing a larger gene panel (S2) and investing in a new diagnostic test infrastructure (S3) were shown to generate extra value but also to entail extra costs in comparison with the standard of care. The extra value is explained by the availability of additional genetic information that enables more personalized treatment and patient monitoring (S2 and S3), access to a broader range of clinical trials (S2), and more complete databases to potentiate research (S3).
Conclusion
The proposed multimethodology provided IPO Lisboa decision makers with comprehensive and insightful information regarding each strategy’s value-for-money, enabling an informed discussion on whether to move from the current Strategy S1 to other competing strategies.
A theoretically consistent structural model facilitates the definition and measurement of use and non-use benefits of ecosystem services. Unlike many previous approaches that utilize multiple stated choice situations, we apply this conceptual framework to a travel cost random utility model and a consequential single referendum contingent valuation research design to simultaneously estimate use and non-use willingness to pay for environmental quality improvement. We employ Monte Carlo generated data to evaluate the properties of key parameters and examine the robustness of this method of measuring use and non-use values associated with quality change. The simulation study confirms that this new method, combined with simulated revealed and stated preference data, can generally, but not always, be applied to identify use and non-use values of various ecosystems while ensuring consistency.
In this chapter, we discuss the use of simulations for clinical trials. Simulation in statistics generally refers to repeated analyses of randomly generated datasets with known properties. Clinical trial simulation is required to explore, compare, and characterise operating characteristics and statistical properties of adaptive and other innovative trials with complex designs. Clinical trial simulation is an important tool that allows for comparison of different design choices during the planning stage to enhance the quality and feasibility of the trial. While simulations are most frequently used in adaptive and other complex trial designs, they can be applied to fixed trial designs.
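A minimal example of the idea: estimate the power of a fixed two-arm design by repeatedly generating trial data under an assumed treatment effect and recording how often the analysis rejects the null hypothesis. The effect size, sample size and test below are illustrative assumptions, not recommendations.

```python
# Minimal sketch of clinical trial simulation: estimate the power of a fixed
# two-arm design by repeatedly generating data under an assumed treatment
# effect and recording how often a two-sample t-test rejects at alpha = 0.05.
import numpy as np
from scipy import stats

def simulated_power(n_per_arm=100, effect=0.35, sd=1.0, n_sims=10_000, alpha=0.05, seed=1):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = rng.normal(0.0, sd, n_per_arm)
        treated = rng.normal(effect, sd, n_per_arm)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sims

print(f"Estimated power: {simulated_power():.3f}")  # roughly 0.70 under these assumptions
```

Setting `effect=0.0` in the same routine estimates the type I error rate instead of power; adaptive designs simply replace the fixed analysis with the design's interim decision rules inside the loop.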
Quantifying multiscale hydraulic heterogeneity in aquifers and its effects on solute transport is the task of this chapter. Using spatial statistics, we explain how to quantify the spatial variability of hydraulic properties or parameters in the aquifer using the stochastic, or random field, concept. In particular, we discuss spatial covariance, variogram, statistical homogeneity, heterogeneity, isotropy, and anisotropy concepts. Field examples complement the discussion. We then present a highly parameterized heterogeneous media (HPHM) approach for simulating flow and solute transport in aquifers with spatially varying hydraulic properties resolved at our scales of interest and observation. However, our limited ability to collect the information needed for this approach motivates alternatives such as Monte Carlo simulation, zonation, and equivalent homogeneous media (EHM) approaches with macrodispersion. This chapter details the EHM approach with the macrodispersion concept.
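As an illustration of the spatial-statistics tools discussed, the sketch below computes an experimental semivariogram of log hydraulic conductivity along a one-dimensional transect; the synthetic data and lag bins are placeholders, not field values from the chapter.

```python
# Illustrative sketch: experimental semivariogram of log-hydraulic-conductivity
# samples along a 1-D transect, the basic statistic used to characterise
# spatial variability (synthetic data; not from the chapter).
import numpy as np

def semivariogram(x, values, lags, tol):
    """gamma(h) = 0.5 * mean[(Z(x+h) - Z(x))^2] over pairs within each lag bin."""
    x, values = np.asarray(x), np.asarray(values)
    d = np.abs(x[:, None] - x[None, :])             # pairwise separation distances
    dz2 = (values[:, None] - values[None, :]) ** 2  # squared increments
    gamma = []
    for h in lags:
        mask = (d > h - tol) & (d <= h + tol)
        gamma.append(0.5 * dz2[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

# Synthetic correlated field along a transect (exponential-like correlation)
rng = np.random.default_rng(0)
x = np.linspace(0, 200, 201)   # m
lnK = np.convolve(rng.normal(size=401),
                  np.exp(-np.abs(np.arange(-20, 21)) / 10), "same")[100:301]
print(semivariogram(x, lnK, lags=np.arange(5, 60, 5), tol=2.5))
```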
True and Error Theory (TET) provides a method to separate the variability of behavior into components due to changing true policy and to random error. TET is a testable theory that can serve as a statistical model, allowing one to evaluate substantive theories as nested, special cases. TET is more accurate descriptively and has theoretical advantages over previous approaches. This paper presents a freely available computer program in R that can be used to fit and evaluate both TET and substantive theories that are special cases of it. The program performs Monte Carlo simulations to generate distributions of test statistics and bootstrapping to provide confidence intervals on parameter estimates. Use of the program is illustrated by a reanalysis of previously published data testing whether what appeared to be violations of Expected Utility (EU) theory (Allais paradoxes) by previous methods might actually be consistent with EU theory.
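The bootstrap step can be illustrated generically (this is not the TET program itself): resample the observed responses with replacement, re-estimate the parameter of interest, and take percentiles of the resulting distribution as a confidence interval.

```python
# Generic nonparametric bootstrap (not the TET program itself): percentile
# confidence interval for a simple parameter estimated from choice data,
# here the proportion of preference reversals across two replications.
import numpy as np

def bootstrap_ci(data, estimator, n_boot=10_000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = len(data)
    estimates = np.array([estimator(data[rng.integers(0, n, n)]) for _ in range(n_boot)])
    lo, hi = np.percentile(estimates, [100 * (1 - level) / 2, 100 * (1 + level) / 2])
    return lo, hi

# Hypothetical data: 1 = participant reversed preference between replications
reversals = np.array([1] * 27 + [0] * 73)
print(bootstrap_ci(reversals, np.mean))  # e.g. roughly (0.18, 0.36)
```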
Edited by
Myles Lavan, University of St Andrews, Scotland; Daniel Jew, National University of Singapore; Bart Danon, Rijksuniversiteit Groningen, The Netherlands
This short chapter recapitulates the substantive advances made by the individual chapters in this volume before closing remarks on the difference between using probability to represent epistemic uncertainty and modelling variability, two exercises that are easily confused, and on the use of models to answer historical questions.
An intense debate has arisen among scholars concerning the financial sustainability of the grain funds that Greek and Roman cities used to cope with the instabilities of the grain market. In this paper, we apply a Monte Carlo simulation to model their financial dynamics. Due to the uncertainties pertaining to the scope of such funds (targeting urban dwellers only or including rural residents), our model takes into account two scenarios: ‘optimistic’ (urban only) and ‘pessimistic’ (both urban and rural). The analysis reaches several important findings: (1) For both scenarios, we witness a considerable rate of funds collapsing in their first 10 years of operation. After 10 years, however, the probability of failure displays very little change, as if there were a threshold over which the funds had accumulated enough capital to withstand shortages. (2) As expected, the survival rates are significantly higher for the optimistic scenario. (3) The withdrawals seem to have the most dramatic impact on the dynamics of the fund. Overall, while the grain funds do not appear to be sustainable in the urban-rural scenario, they show clear signs of sustainability in the urban-only scenario. The results invite reconsideration of the widespread view that grain funds were an inefficient and precarious response to food crises.
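A toy version of such a model (all figures hypothetical, not the authors' calibrated inputs) conveys the mechanism: a fund receives annual income, random shortage years trigger large withdrawals, and the year of collapse, if any, is recorded over many simulated runs.

```python
# Toy version of the kind of simulation described (all figures hypothetical,
# not the authors' calibrated model): a grain fund with annual income and
# random shortage years that trigger large withdrawals.
import random

def fund_survives(years=50, capital=100.0, income=10.0,
                  p_shortage=0.2, withdrawal=(30.0, 60.0), rng=random):
    for year in range(1, years + 1):
        capital += income
        if rng.random() < p_shortage:            # shortage year
            capital -= rng.uniform(*withdrawal)  # grain purchases / subsidies
        if capital < 0:
            return year                          # year of collapse
    return None                                  # survived the horizon

runs = [fund_survives() for _ in range(20_000)]
early = sum(1 for y in runs if y is not None and y <= 10) / len(runs)
ever = sum(1 for y in runs if y is not None) / len(runs)
print(f"collapse within 10 years: {early:.2%}; collapse within 50 years: {ever:.2%}")
```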
This chapter introduces the concepts and methods used by the other chapters in the volume, using the long-standing problem of estimating the land carrying capacity of classical Attica to illustrate the benefits of probabilistic modelling. We begin by surveying the development of techniques for managing uncertainty in ancient history (1.1) and past work on the specific problem of Attica’s land carrying capacity (1.2). The chapter then turns to theoretical questions about the nature of uncertainty and probability (1.3), introducing the ‘subjectivist’ conception of probability as degree of belief, a theoretical framework that makes probability a powerful tool for historians. We go on to discuss the procedure of using probability distributions to represent uncertainty about the actual value of a quantity such as average barley yield in ancient Attica (among other variables relevant to the problem of land carrying capacity) (1.4), the need to be aware of cognitive biases that distort our probability judgements (1.5), the use of Monte Carlo simulation to combine uncertainties (1.6), the potential problem of epistemic interdependence (1.7), the interpretation of the outputs of a Monte Carlo simulation (1.8), and the use of sensitivity analysis to identify the most important sources of uncertainty in a simulation. The appendix illustrates model code in R.
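A minimal sketch of the Monte Carlo combination of uncertainties described in (1.6), using placeholder distributions rather than the chapter's estimates: the supportable population is the product of cultivated area and net yield divided by per-capita consumption, each represented by a probability distribution.

```python
# Minimal sketch of combining uncertainties by Monte Carlo (placeholder
# distributions, not the chapter's estimates): population that Attica's
# grain production could support = area * yield * net fraction / consumption.
import numpy as np

rng = np.random.default_rng(2024)
N = 100_000

area_ha = rng.triangular(20_000, 35_000, 50_000, N)                  # cultivated area, hectares
yield_kg_ha = rng.lognormal(mean=np.log(650), sigma=0.25, size=N)    # barley yield, kg/ha
net_fraction = rng.uniform(0.65, 0.85, N)                            # after seed and losses
consumption_kg = rng.normal(175, 15, N)                              # per person per year

supported_population = area_ha * yield_kg_ha * net_fraction / consumption_kg
print(f"median: {np.median(supported_population):,.0f} people")
print(f"80% interval: {np.percentile(supported_population, 10):,.0f} - "
      f"{np.percentile(supported_population, 90):,.0f}")
```

Sensitivity analysis, as described in the chapter, amounts to asking which of these input distributions moves the output interval most when it is tightened or shifted.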
This chapter presents a new estimate of the value of coinage in circulation in the mid-second century CE Roman empire. More than 25 years ago, Richard Duncan-Jones revolutionized ancient economic history by offering a first projection through numismatic and statistical methods. At more than 20 bn sesterces, his estimate implied an anomalously high monetization ratio given past and current estimates of Roman GDP, an issue that economic historians have had to deal with ever since. In the first half, a review of the numismatic evidence points to a much smaller role for gold than posited by Duncan-Jones. The second half presents a new model of the money supply. It uses Monte Carlo simulation to estimate the value of centrally-minted precious metal coins produced annually under Hadrian, and then the total coinage in circulation ca 160 CE. Allowance for various uncertainties and other, minor components of the coinage suggests a money supply of around 16 bn sesterces, with less gold and more silver than expected. However, this is not quite low enough to explain away the monetization ratio, implying a higher GDP and more trade-oriented economy than currently thought. Two appendices contain lengthy but essential technical discussions of the assumptions in the estimate.
This chapter argues that wealth was probably not the primary barrier for Pompeiians to enter the Roman senate. One oddity about the historical record of Pompeii is that it reveals not a single certain senator in the imperial period. A previous reconstruction of the local distribution of income offers a possible explanation: it predicts that there were no Pompeian households with a senatorial income, suggesting that a lack of wealth kept the Pompeiians outside the senate. However, a new reconstruction of the top part of the Pompeian wealth distribution suggests the opposite. This reconstruction is based on combining the archaeological remains of the intramural housing stock with an econometric model which assumes that the distribution of elite wealth follows a distinct mathematical function – a power law. Even though this type of cliometric modelling is pervaded by uncertainties, with the help of probabilistic calculations it is possible to conclude that at least several Pompeian households held enough wealth to satisfy the senatorial census qualification, implying that wealth may not have been the primary barrier preventing Pompeiians from embarking on a senatorial career.
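The probabilistic step can be sketched as follows (the tail parameters are placeholders, not the chapter's fitted values): if elite household wealth follows a Pareto (power-law) tail, the expected number of households exceeding the senatorial census qualification of HS 1,000,000 follows directly from the tail's survival function.

```python
# Illustrative sketch (placeholder parameters, not the chapter's fit): if elite
# household wealth follows a Pareto (power-law) tail, how many households
# exceed the senatorial census threshold of HS 1,000,000?
import numpy as np

rng = np.random.default_rng(3)
n_sims = 50_000

n_elite = 600          # hypothetical number of elite households
x_min = 50_000         # hypothetical lower bound of the power-law tail, HS
alpha = 1.5            # hypothetical Pareto tail exponent
threshold = 1_000_000  # senatorial census qualification, HS

# P(wealth > threshold) for a Pareto(alpha) tail starting at x_min
p_exceed = (x_min / threshold) ** alpha
counts = rng.binomial(n_elite, p_exceed, size=n_sims)
print(f"expected households above threshold: {counts.mean():.1f}")
print(f"probability of at least one: {(counts > 0).mean():.2%}")
```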
Historians constantly wrestle with uncertainty, never more so than when attempting quantification, yet the field has given little attention to the nature of uncertainty and strategies for managing it. This volume proposes a powerful new approach to uncertainty in ancient history, drawing on techniques widely used in the social and natural sciences. It shows how probability-based techniques used to manage uncertainty about the future or the present can be applied to uncertainty about the past. A substantial introduction explains the use of probability to represent uncertainty. The chapters that follow showcase how the technique can offer leverage on a wide range of problems in ancient history, from the incidence of expropriation in the Classical Greek world to the money supply of the Roman empire.