We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump Lévy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance, and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving Lévy process, including the generalised hyperbolic, normal-gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure Lévy process, and to a Brownian-driven SDE in the case of the Lévy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for Lévy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo, expectation-maximisation, and sequential Monte Carlo.
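To make the construction concrete, here is a minimal Python sketch of a truncated generalised shot-noise simulation for one NVM case, the normal-gamma process. The gamma subordinator jumps are generated by inverting the tail of a dominating Lévy density and thinning, and each subordinator jump is converted into a conditionally Gaussian jump via the NVM mixture; the parameter names (nu, beta, mu_W, sigma_W), the truncation level, and the particular dominating density are illustrative choices rather than the paper's notation. The small jumps discarded by the truncation are exactly the residual that the Gaussian approximation above addresses.

```python
import numpy as np

rng = np.random.default_rng(0)

def normal_gamma_shot_noise(T=1.0, nu=2.0, beta=1.0, mu_W=0.0, sigma_W=1.0,
                            n_epochs=5000):
    """Truncated shot-noise (series) simulation of a normal-gamma Lévy process
    on [0, T]; the gamma subordinator has Lévy density nu * x**-1 * exp(-beta*x)."""
    # Epochs of a unit-rate Poisson process on the positive half-line.
    epochs = np.cumsum(rng.exponential(size=n_epochs))
    # Candidate jumps: invert the tail of the dominating density nu/(x*(1+beta*x)).
    x = 1.0 / (beta * np.expm1(epochs / (nu * T)))
    # Thinning: accept with probability nu(x)/Q(x) = (1 + beta*x) * exp(-beta*x) <= 1.
    keep = rng.uniform(size=n_epochs) < (1.0 + beta * x) * np.exp(-beta * x)
    x = x[keep]
    # NVM construction: conditionally Gaussian jump for each subordinator jump.
    jumps = mu_W * x + sigma_W * np.sqrt(x) * rng.standard_normal(x.size)
    # Independent uniform jump times on [0, T]; sort them to build the path.
    times = rng.uniform(0.0, T, size=x.size)
    order = np.argsort(times)
    return times[order], np.cumsum(jumps[order])

times, path = normal_gamma_shot_noise()
print(f"simulated {times.size} jumps; X(T) ~ {path[-1]:.3f}")
```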
The miniaturised conical cones used for stereotactic radiosurgery (SRS) make it challenging to measure the dosimetric data needed for commissioning a treatment planning system. This study aims to validate the dosimetric characteristics of the conical cone collimator manufactured by Varian using the Monte Carlo (MC) simulation technique.
Methods & Materials:
Percentage depth dose (PDD), tissue maximum ratio (TMR), lateral dose profile (LDP) and output factor (OF) were measured for cones with diameters of 5 mm, 7·5 mm, 10 mm, 12·5 mm, 15 mm and 17·5 mm using an EDGE detector, for a 6 MV flattening filter-free (FFF) beam from a TrueBeam linac. Similarly, MC modelling of the linac for the 6 MV FFF beam and simulation of the conical cones were performed in PRIMO. Subsequently, the measured beam data were validated by comparing them with the results obtained from the MC simulation.
Results:
The measured and MC-simulated PDDs and TMRs agreed within 3% except for the 5 mm diameter cone, for which the deviations were substantially higher. For the 5 mm cone, the maximum deviations at depths of 10 cm and 20 cm and at the range of the 50% dose were 4·05%, 7·52% and 5·52% for PDD and 4·04%, 7·03% and 5·23% for TMR, respectively. The measured LDPs for all cones showed close agreement with the MC LDPs except in the penumbra region, around the 80% and 20% levels of the dose profile. Measured and MC full-widths at half maximum of the dose profiles agreed with the nominal cone size within ±0·2 mm. Measured and MC OFs showed excellent agreement for cone sizes ≥10 mm; however, the deviation increases consistently as the cone size decreases.
Findings:
An MC model of conical cones for SRS has been presented and validated. Very good agreement was found between experimentally measured and MC-simulated data. The dosimetry dataset obtained in this study and validated against the MC model may be used to benchmark beam data measured for commissioning of SRS cone planning.
This paper presents a set of theoretical models that link a two-phase sequence of cooperative political integration and conflict to explore the reciprocal relationship between war and state formation. It compares equilibrium rates of state formation and conflict using a Monte Carlo simulation that generates comparative statics by altering the systemic distribution of ideology, population, tax rates, and war costs across polities. This approach supports three core findings. First, war-induced political integration is at least 2.5 times as likely to occur as integration to realize economic gains. Second, we identify mechanisms linking endogenous organizations to the likelihood of conflict in the system. For example, a greater domestic willingness to support public goods production facilitates the creation of buffer states that reduce the likelihood of a unique class of trilateral wars. These results suggest that the development of the modern administrative state has helped to foster peace. Third, we explore how modelling assumptions that set the number of actors in a strategic context can shape conclusions about war and state formation. We find that dyadic modelling restrictions tend to underestimate the likelihood of cooperative political integration and overestimate the likelihood of war relative to a triadic modelling context.
This chapter elaborates on the calibration and validation procedures for the model. First, we describe our calibration strategy in which a customised optimisation algorithm makes use of a multi-objective function, preventing the loss of indicator-specific error information. Second, we externally validate our model by replicating two well-known statistical patterns: (1) the skewed distribution of budgetary changes and (2) the negative relationship between development and corruption. Third, we internally validate the model by showing that public servants who receive more positive spillovers tend to be less efficient. Fourth, we analyse the statistical behaviour of the model through different tests: validity of synthetic counterfactuals, parameter recovery, overfitting, and time equivalence. Finally, we make a brief reference to the literature on estimating SDG networks.
We report a combined experimental and theoretical study of uranyl complexes that form on the interlayer siloxane surfaces of montmorillonite. We also consider the effect of isomorphic substitution on surface complexation since our montmorillonite sample contains charge sites in both the octahedral and tetrahedral sheets. Results are given for the two-layer hydrate with a layer spacing of 14.58 Å. Polarization-dependent X-ray absorption fine structure spectra are nearly invariant with the incident angle, indicating that the uranyl ions are oriented neither perpendicular nor parallel to the basal plane of montmorillonite. The equilibrated geometry from Monte Carlo simulations suggests that uranyl ions form outer-sphere surface complexes with the [O=U=O]2+ axis tilted at an angle of ~45° to the surface normal.
We performed Monte Carlo and molecular dynamics simulations to investigate the interlayer structure of a uranyl-substituted smectite clay. Our clay model is a dioctahedral montmorillonite with negative charge sites in the octahedral sheet only. We simulated a wide range of interlayer water content (0–260 mg H2O/g clay), but we were particularly interested in the two-layer hydrate that has been the focus of recent X-ray absorption experiments. Our simulation results for the two-layer hydrate of uranyl-montmorillonite yield a water content of 160 mg H2O/g clay and a layer spacing of 14.66 Å. Except at extremely low water content, uranyl cations are oriented nearly parallel to the surface normal in an outer-sphere complex. The first coordination shell consists of five water molecules with an average U-O distance of 2.45 Å, in good agreement with experimental data. At low water content, the cations can assume a perpendicular orientation to include surface oxygen atoms in the first coordination shell. Our molecular dynamics results show that complexes translate within the clay pore through a jump diffusion process, and that first-shell water molecules are exchangeable and interchangeable.
This work presents Atomistic Topology Operations in MATLAB (atom), an open source library of modular MATLAB routines which comprise a general and flexible framework for manipulation of atomistic systems. The purpose of the atom library is simply to facilitate common operations performed for construction, manipulation, or structural analysis. Due to the data structure used, atoms and molecules can be operated upon based on different chemical names or attributes, such as atom- or molecule-ID, name, residue name, charge, positions, etc. Furthermore, the Bond Valence Method and a neighbor-distance analysis can be performed to assign many chemical properties of inorganic molecules. Apart from reading and writing common coordinate files (.pdb, .xyz, .gro, .cif) and trajectories (.dcd, .trr, .xtc; binary formats are parsed via third-party packages), the atom library can also be used to generate topology files with bonding and angle information taking the periodic boundary conditions into account, and supports basic Gromacs, NAMD, LAMMPS, and RASPA2 topology file formats. Focusing on clay-mineral systems, the library supports CLAYFF (Cygan, 2004) but can also generate topology files for the INTERFACE forcefield (Heinz, 2005, 2013) for Gromacs and NAMD.
Advanced treatment modalities involve applying small fields that may be shaped by collimators or circular cones. In these techniques, high-energy photons produce unwanted neutrons, so it is necessary to characterise the neutron parameters involved.
Materials and methods:
Different parts of a Varian linac were simulated with MCNPX, and various neutron parameters were calculated. The results were then compared with photoneutron production in the same nominal fields created by circular cones.
Results:
The maximum neutron fluence for the 1 × 1, 2 × 2 and 3 × 3 cm² field sizes was 165, 40.4 and 19.78 (× 10⁶ cm⁻² Gy⁻¹), respectively. The maximum neutron equivalent doses were 17.1, 4.65 and 2.44 mSv per Gy of photon dose, and the maximum neutron absorbed doses reached 903, 253 and 131 µGy per Gy of photon dose for the same field sizes, respectively.
Conclusion:
Comparing the results with those in the presence of circular cones showed that circular cones reduce photoneutron production for the same nominal field sizes.
In this chapter, we overview recent developments of a simulation framework capable of capturing the highly nonequilibrium physics of the strongly coupled electron and phonon systems in quantum cascade lasers (QCLs). In mid-infrared (mid-IR) devices, both electronic and optical phonon systems are largely semiclassical and described by coupled Boltzmann transport equations, which we solve using an efficient stochastic technique known as ensemble Monte Carlo. The optical phonon system is strongly coupled to acoustic phonons, the dominant carriers of heat, whose dynamics and thermal transport throughout the whole device are described via a global heat-diffusion solver. We discuss the roles of nonequilibrium optical phonons in QCLs at the level of a single stage and the anisotropic thermal transport of acoustic phonons in QCLs, outline the algorithm for multiscale electrothermal simulation, and present data for a mid-IR QCL based on this framework.
Gambling in its modern form was invented in the nineteenth century. The resort casino, built in an environmentally or politically desirable location, attracted a wide range of people from around the world to an atmosphere of luxury, leisure, and cultural cultivation. Visitors to European casinos in the nineteenth century traveled there by steamship or by locomotive; they stayed in hotels and ate meticulously prepared foods; they listened to music performed by artists on tour; and caught up on global and regional affairs by reading newspapers from around the world. And they lost money in the gambling rooms. Built upon an existing network of health-conscious spa towns in the Rhineland, and then relocating to the Riviera in the 1860s, nineteenth-century casino life gave expression to bourgeois demands for leisure, luxury, and levity.
Chapter 7 starts out with a physics motivation, as well as a mathematical statement of the problems that will be tackled in later sections. Newton-Cotes integration methods are studied first ad hoc, via Taylor expansions, and second by building on the interpolation machinery of the previous chapter. Standard techniques like the trapezoid rule and Simpson’s rule are introduced, along with the Euler-Maclaurin summation formula. The error behavior is employed to produce an adaptive-integration routine and also, separately, to introduce the topic of Romberg integration. The theme of integration from interpolation continues when Gauss-Legendre quadrature is explicitly derived, including the integration abscissas, weights, and error behavior. Emphasis is placed on analytic manipulations that can help the numerical evaluation of integrals. The chapter then turns to Monte Carlo, namely stochastic integration: this is painstakingly introduced for one-dimensional problems, and then generalized to the real-world problem of multidimensional integration. The chapter is rounded out by a physics project, on variational Monte Carlo for many-particle quantum mechanics, and a problem set.
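As a minimal sketch of the stochastic-integration idea introduced in the chapter (not the book's own code), the following Python snippet estimates a one-dimensional integral by averaging uniform samples and reports the statistical error whose slow N^(-1/2) decay motivates moving to Monte Carlo only in many dimensions.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_integrate(f, a, b, n=100_000):
    """Plain Monte Carlo estimate of the integral of f over [a, b],
    with a one-sigma statistical error estimate."""
    x = rng.uniform(a, b, size=n)
    fx = f(x)
    estimate = (b - a) * fx.mean()
    error = (b - a) * fx.std(ddof=1) / np.sqrt(n)
    return estimate, error

# Example: integral of x^2 * exp(-x) on [0, 1]; exact value is 2 - 5/e ~ 0.16060.
est, err = mc_integrate(lambda x: x**2 * np.exp(-x), 0.0, 1.0)
print(f"{est:.5f} +/- {err:.5f}")
```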
This study compared dose metrics between tangent breast plans calculated with the historically standard collapsed cone (CC) algorithm and the more accurate Monte Carlo (MC) algorithm. The intention was to correlate plan quality metrics from the currently used CC algorithm with doses calculated using the more accurate MC algorithm.
Methods:
Thirteen clinically treated patients, whose plans had been calculated using the CC algorithm, were identified. These plans were copied and recalculated using the MC algorithm. Various target dose metrics were compared, along with the time necessary to perform each calculation. Special consideration was given to V105%, as this is increasingly being used as a predictor of skin toxicity and plan quality. Finally, both the CC and MC plans for 4 of the patients were delivered onto a dose measurement phantom used to analyse quality assurance (QA) pass rates. These pass rates, obtained using various evaluation criteria, were also compared.
Results:
Metrics such as the PTVeval D95% and V95% showed a variation of 6% or less between the CC and MC plans, while the PTVeval V100% showed variation of up to 20%. The PTVeval V105% showed a relative increase of up to 593% after being recalculated with MC. The time necessary to perform calculations was, on average, 76% longer for CC plans than for those recalculated using MC. On average, the QA pass rates using the 2%/2 mm and 3%/3 mm gamma criteria were lower for the CC plans (by 19·2% and 5·5%, respectively) than for those recalculated using MC.
Conclusion:
Our study demonstrates that MC-calculated PTVeval V105% values are significantly higher than those calculated using CC. PTVeval V105% is often used as a benchmark for acceptable plan quality and a predictor of acute toxicity. We have also shown that calculation times for MC are comparable to those for CC. Therefore, the criteria for acceptable PTVeval V105% should be redefined based on the more accurate MC calculations.
We first calibrate and then analyze our ABM using suites of Monte Carlo simulations, applied to a representative set of training cases of government formation in European parliamentary democracies. For each of the twenty training cases, we execute 1,000 model runs, randomizing model parameters for each run as follows. For each observable parameter, for each model run for each training case, we take the empirically observed value and perturb it with parameterized random noise. For unobservable model parameters, we randomly sample from the full range of possible values. The 1,000 runs for each case thus yield a distribution of model-predicted outcomes for that case. We calibrate unobservable model parameters by selecting the ranges of these parameters associated with empirically accurate model predictions. We analyze the (calibrated and uncalibrated) model by summarizing the mapping of model inputs into model outputs in the artificial data generated by the set of Monte Carlo simulations, using theoretically informed logistic regressions. This is the computational analogue of analyses based on deductive “comparative statics” generated by traditional formal theorists.
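A schematic Python sketch of the randomisation just described, with hypothetical names (run_abm, the observed/unobserved parameter dictionaries) standing in for the actual ABM: observed parameters are perturbed with parameterised noise, unobservable parameters are drawn from their full ranges, and the runs for each case yield a distribution of predicted outcomes.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_case(case, run_abm, noise_sd=0.05, n_runs=1000):
    """Monte Carlo exercise for one training case.

    `case` is assumed to carry two dicts: `observed` (empirically measured
    parameter values) and `unobserved_ranges` (lower/upper bounds for
    parameters that cannot be observed).  `run_abm` stands in for the model.
    """
    outcomes = []
    for _ in range(n_runs):
        params = {}
        # Perturb each observed parameter with multiplicative random noise.
        for name, value in case["observed"].items():
            params[name] = value * (1.0 + noise_sd * rng.standard_normal())
        # Sample each unobservable parameter uniformly from its full range.
        for name, (lo, hi) in case["unobserved_ranges"].items():
            params[name] = rng.uniform(lo, hi)
        outcomes.append(run_abm(**params))
    # The empirical distribution of outcomes feeds both the calibration step
    # and the logistic-regression summaries of the input-output mapping.
    return np.asarray(outcomes)
```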
A detailed analysis of management and performance fees for asset managers and investment funds is undertaken. While fund fees are considered a cost of capital for investors, the structuring of such fee mechanisms in a fund can also influence a fund manager’s decisions and investment strategy, thereby also influencing the investment performance of the investors’ funds. The study allows an assessment of the effect of fee structures and of the potential for asymmetric incentives that may promote adverse risk-taking behaviour by the fund manager, to the detriment of the investor or retiree who places a portion of their retirement savings into such a managed fund. As such, understanding the mechanism of fee charging, as well as pricing the fees correctly, is vital. An exploration of the application of actuarial distortion pricing methods for complete and incomplete market valuation is performed on a variety of path-dependent, option-like performance fee structures for various funds in the European and American markets. Furthermore, several scenario analyses and sensitivity studies are undertaken. The class of net asset value (NAV) models adopted is that of Lévy processes, and the pricing is performed via Monte Carlo techniques.
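For illustration only (the fee terms, NAV dynamics, and parameter values below are placeholders, and the actuarial distortion-pricing step is not reproduced), here is a short Monte Carlo sketch of a high-water-mark performance fee accrued on a net asset value path driven by a variance-gamma Lévy process, simulated by Brownian subordination. Swapping the NAV generator or the fee rule is enough to run the kind of scenario and sensitivity studies mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

def variance_gamma_nav(n_paths, n_steps, T=1.0, nav0=100.0,
                       sigma=0.2, theta=-0.05, kappa=0.3, r=0.02):
    """Simulate NAV paths under an exponential variance-gamma model by
    subordinating Brownian motion with a gamma clock (illustrative parameters)."""
    dt = T / n_steps
    # Martingale correction so the discounted NAV drifts at the risk-free rate.
    omega = np.log(1.0 - theta * kappa - 0.5 * sigma**2 * kappa) / kappa
    log_nav = np.full(n_paths, np.log(nav0))
    navs = [np.exp(log_nav)]
    for _ in range(n_steps):
        g = rng.gamma(shape=dt / kappa, scale=kappa, size=n_paths)  # gamma time step
        z = rng.standard_normal(n_paths)
        log_nav = log_nav + (r + omega) * dt + theta * g + sigma * np.sqrt(g) * z
        navs.append(np.exp(log_nav))
    return np.array(navs)                    # shape (n_steps + 1, n_paths)

def high_water_mark_fee(navs, fee_rate=0.20):
    """Performance fee accrued whenever the NAV sets a new running maximum
    (a simple high-water-mark structure, charged per observation date)."""
    hwm = np.maximum.accumulate(navs, axis=0)
    gains = np.diff(hwm, axis=0)             # increments above the high-water mark
    return fee_rate * gains.sum(axis=0)      # total fee per path

navs = variance_gamma_nav(n_paths=20_000, n_steps=252)
fees = high_water_mark_fee(navs)
print("Monte Carlo value of the fee leg:", np.exp(-0.02) * fees.mean())
```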
The study aimed to compare the dosimetric performance of the Acuros® XB (AXB) and anisotropic analytical algorithm (AAA) dose calculation algorithms for lung SBRT plans using Monte Carlo (MC) simulations.
Methods:
We compared the dose calculation algorithms AAA and AXB, in each of its dose reporting modes (dose to medium, AXB-Dm, and dose to water, AXB-Dw), as implemented in the Eclipse® (Varian Medical Systems, Palo Alto, CA) treatment planning system (TPS), against MC. The PRIMO code was used for the MC simulations. The TPS-calculated dose profiles obtained with a multi-slab heterogeneity phantom were compared to MC. A lung phantom with a tumour was used to validate the TPS algorithms using different beam delivery techniques. 2D gamma values obtained from Gafchromic film measurements in the tumour isocentre plane were compared with the TPS algorithms and MC. For clinical plan validation, ten VMAT SBRT plans generated in the TPS with each algorithm were recalculated with the PRIMO MC system using identical beam parameters. A dose–volume histogram (DVH)-based plan comparison and a 3D global gamma analysis were performed.
Results:
AXB demonstrated better agreement with MC and film measurements in the lung phantom validation, with good agreement in PDD, profiles and gamma analysis. AAA showed an overestimated PDD, a significant difference in dose profiles and a lower gamma pass rate near the field borders. With AAA, there was a dose overestimation at the periphery of the tumour. For clinical plan validation, AXB demonstrated higher agreement with MC than AAA.
Conclusions:
AXB provided better agreement with MC than AAA in the phantom and clinical plan evaluations.
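The 2D and 3D gamma analyses mentioned in the methods above reduce, in essence, to the gamma-index computation sketched below for a one-dimensional profile (purely illustrative; the clinical analyses are performed with dedicated 2D/3D tools, and the 2%/2 mm criterion here is just an example).

```python
import numpy as np

def gamma_pass_rate(dose_eval, dose_ref, coords, dose_crit=0.02, dist_crit=2.0):
    """Global gamma analysis of a 1D dose profile.

    dose_crit : dose-difference criterion as a fraction of the maximum
                reference dose (e.g. 0.02 for a 2% global criterion).
    dist_crit : distance-to-agreement criterion in mm.
    """
    norm = dose_crit * dose_ref.max()            # global dose normalisation
    gammas = np.empty(dose_eval.size)
    for i, (d, x) in enumerate(zip(dose_eval, coords)):
        dose_term = (d - dose_ref) / norm        # dose difference to every ref point
        dist_term = (x - coords) / dist_crit     # distance to every ref point
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return np.mean(gammas <= 1.0)                # fraction of points passing

# Toy usage: identical evaluated and reference profiles pass 100%.
x = np.linspace(-50, 50, 201)
profile = np.exp(-(x / 30.0) ** 2)
print(gamma_pass_rate(profile, profile, x))      # -> 1.0
```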
Standing as the first unified textbook on the subject, Liquid Crystals and Their Computer Simulations provides a comprehensive and up-to-date treatment of liquid crystals and of their Monte Carlo and molecular dynamics computer simulations. Liquid crystals have a complex physical nature, and, therefore, computer simulations are a key element of research in this field. This modern text develops a uniform formalism for addressing various spectroscopic techniques and other experimental methods for studying phase transitions of liquid crystals, and emphasises the links between their molecular organisation and observable static and dynamic properties. Aided by the inclusion of a set of Appendices containing detailed mathematical background and derivations, this book is accessible to a broad and multidisciplinary audience. Primarily intended for graduate students and academic researchers, it is also an invaluable reference for industrial researchers working on the development of liquid crystal display technology.
Chapter 6 explains the CUDA random number generators provided by the cuRAND library. The CUDA XORWOW generator was found to be the fastest generator in the cuRAND library. The classic calculation of pi by generating random numbers inside a square is used as a test case for the various possibilities on both the host CPU and the GPU. A kernel using separate generators for each thread is able to generate about 10¹² random numbers per second and is about 20 000 times faster than the simplest host CPU version running on a single core. The inverse transform method for generating random numbers from any distribution is explained. A 3D Ising model calculation is presented as a more interesting application of random numbers. The Ising example has a simple interactive GUI based on OpenCV.
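A CPU-side Python sketch of two of the ideas the chapter exercises on the GPU, the pi-in-a-square estimator and the inverse transform method, offered purely as a language-neutral illustration rather than the book's CUDA kernels.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Pi by sampling the unit square: the fraction of uniform points (x, y) in
# [0, 1)^2 with x^2 + y^2 < 1 estimates pi/4.
n = 10_000_000
x, y = rng.random(n), rng.random(n)
pi_estimate = 4.0 * np.count_nonzero(x * x + y * y < 1.0) / n
print("pi ~", pi_estimate)

# Inverse transform method: if U ~ Uniform(0, 1) and F is a target CDF,
# then F^{-1}(U) has distribution F.  Example: exponential with rate lam.
lam = 2.0
u = rng.random(n)
exp_samples = -np.log1p(-u) / lam       # F^{-1}(u) = -ln(1 - u) / lam
print("sample mean ~", exp_samples.mean(), "(expected 1/lam =", 1.0 / lam, ")")
```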
The study provides comparative risk analyses of Australia’s three Victorian dairy regions. Historical data were used to identify business risk and financial viability. Multivariate distributions were fitted to the historical price, production, and input costs using copula models, capturing non-linear dependence among the variables. Monte Carlo simulation methods were then used to generate cash flows for a decade. The factors that most influenced profitability were identified using sensitivity analysis. The dairies in the Northern region have faced water reductions, whereas those in Gippsland and the South West have more positive indicators. Our analysis summarizes long-term risks and net farm profits by utilizing survey data in a probabilistic manner.
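A minimal Python sketch of the copula-plus-Monte-Carlo step (the study's actual copula families, marginal fits, and units may differ): correlated uniforms are drawn through a Gaussian copula, pushed through illustrative marginal distributions for milk price, production, and input cost, and combined into a cash-flow draw.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def simulate_cash_flows(n_sims=10_000):
    # Illustrative dependence structure between price, production and input cost.
    corr = np.array([[ 1.0,  0.3, -0.2],
                     [ 0.3,  1.0, -0.1],
                     [-0.2, -0.1,  1.0]])
    # Gaussian copula: correlated normals mapped to uniforms via the normal CDF.
    z = rng.multivariate_normal(mean=np.zeros(3), cov=corr, size=n_sims)
    u = stats.norm.cdf(z)
    # Illustrative (hypothetical) marginals standing in for fitted distributions.
    price = stats.lognorm(s=0.15, scale=6.5).ppf(u[:, 0])              # $/kg milk solids
    production = stats.norm(loc=500_000, scale=60_000).ppf(u[:, 1])    # kg milk solids
    input_cost = stats.gamma(a=20, scale=3_000_000 / 20).ppf(u[:, 2])  # $ total
    return price * production - input_cost

cash = simulate_cash_flows()
print("P(net cash flow < 0) ~", (cash < 0).mean())
```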
A common tool in the practice of Markov chain Monte Carlo (MCMC) is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or is intractable. A limited set of quantitative tools exists to assess the relative accuracy and efficiency of such approximations. We derive a set of tools for such analysis based on the Hilbert space generated by the stationary distribution we intend to sample, $L_2(\pi)$. Our results apply to approximations of reversible chains which are geometrically ergodic, as is typically the case for applications to MCMC. The focus of our work is on determining whether the approximating kernel will preserve the geometric ergodicity of the exact chain, and whether the approximating stationary distribution will be close to the original stationary distribution. For reversible chains, our results extend the results of Johndrow et al. (2015) from the uniformly ergodic case to the geometrically ergodic case, under some additional regularity conditions. We then apply our results to a number of approximate MCMC algorithms.
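For orientation, the standard characterisation of geometric ergodicity for a reversible chain in $L_2(\pi)$ reads as follows (a textbook statement, not the paper's specific result). With $P$ viewed as a self-adjoint operator on the subspace $L_{2,0}(\pi)$ of zero-mean functions,
\[
  \text{$P$ is geometrically ergodic} \iff \|P\|_{L_{2,0}(\pi)} = 1 - \gamma \ \text{for some spectral gap } \gamma > 0,
\]
in which case
\[
  \|P^{n} f - \pi(f)\|_{L_2(\pi)} \le (1-\gamma)^{n}\, \|f - \pi(f)\|_{L_2(\pi)} \quad \text{for all } f \in L_2(\pi),\ n \ge 1 .
\]
Results of the kind described above ask when an approximating kernel $\hat{P}$ retains a strictly positive gap, and how far its stationary distribution $\hat{\pi}$ can be from $\pi$.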
As the feature size of crystalline materials gets smaller, the ability to correctly interpret geometrical sample information from electron backscatter diffraction (EBSD) data becomes more important. This paper uses the notion of transition curves, associated with line scans across grain boundaries (GBs), to correctly account for the finite size of the excitation volume (EV) in the determination of the geometry of the boundary. Various metrics arising from the EBSD data are compared to determine the best experimental proxy for actual numbers of backscattered electrons that are tracked in a Monte Carlo simulation. Consideration of the resultant curves provides an accurate method of determining GB position (at the sample surface) and indicates a significant potential for error in determining GB position using standard EBSD software. Subsequently, simple criteria for comparing experimental and simulated transition curves are derived. Finally, it is shown that the EV is too shallow for the curves to reveal subsurface geometry of the GB (i.e., GB inclination angle) for most values of GB inclination.