In this paper, we explore the optimal risk sharing problem in the context of peer-to-peer insurance. Using the criterion of minimizing total variance, we find that the optimal risk sharing strategy takes a linear form. Although linear risk sharing strategies have been examined in the literature, our study uncovers a significant finding: to minimize total variance, the linear strategy should be applied to the residual risks rather than to the original risks, as is commonly done in existing studies. By comparing with existing models, we demonstrate the advantage of the linear residual risk sharing model in variance reduction and robustness. Furthermore, we develop and study a number of new models that incorporate constraints reflecting desirable properties required by the market. With these constraints, the optimal strategies turn out to favor market development, for instance by incentivizing participation and guaranteeing fairness. Finally, we consider a related model that establishes the connection among multiple optimization problems and provides insights on how to extend the models to a more general setup.
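To make the flavor of residual sharing concrete, here is a small simulation sketch; the uniform pro-rata weights, gamma loss distributions, and all parameters are illustrative assumptions, not the paper's optimal rule.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims = 5, 100_000

# Heterogeneous individual losses (gamma-distributed, chosen for illustration).
shapes = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
losses = rng.gamma(shapes, 1.0, size=(n_sims, n))   # X_i
means = shapes * 1.0                                # E[X_i]

# Linear *residual* risk sharing with uniform weights (an assumed, not optimal,
# choice): each participant pays E[X_i] plus an equal share of the pooled residual.
residuals = losses - means                          # X_i - E[X_i]
payments = means + residuals.sum(axis=1, keepdims=True) / n

print("total variance, no sharing:      ", losses.var(axis=0).sum())
print("total variance, residual sharing:", payments.var(axis=0).sum())
```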
We propose a multi-level method to increase the accuracy of machine learning algorithms for approximating observables in scientific computing, particularly those that arise in systems modelled by differential equations. The algorithm relies on judiciously combining a large number of computationally cheap training data on coarse resolutions with a few expensive training samples on fine grid resolutions. Theoretical arguments for lowering the generalisation error, based on reducing the variance of the underlying maps, are provided, and numerical evidence indicating significant gains over the underlying single-level machine learning algorithms is presented. Moreover, we also apply the multi-level algorithm in the context of forward uncertainty quantification and observe a considerable speedup over competing algorithms.
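A minimal two-level sketch of this idea, under assumed ingredients (mock "coarse" and "fine" observables, noisy training data, and plain polynomial regression as the learner): the coarse map is learned from many cheap samples, and only the fine-minus-coarse correction is learned from the few expensive ones.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(1)
noise = 0.1                                   # observation noise level (assumed)

# Mock observables: 'fine' is the quantity of interest; 'coarse' is a cheap,
# slightly biased stand-in for a coarse-resolution solver.
fine = lambda x: np.sin(2 * np.pi * x)
coarse = lambda x: fine(x) + 0.1 * x * (1.0 - x)

# Many cheap coarse samples, few expensive fine samples.
x_c = rng.uniform(0, 1, 2000)
x_f = rng.uniform(0, 1, 50)
y_c = coarse(x_c) + noise * rng.standard_normal(x_c.size)
y_f = fine(x_f) + noise * rng.standard_normal(x_f.size)

surrogate = Polynomial.fit(x_c, y_c, deg=10)                   # learn the coarse map
correction = Polynomial.fit(x_f, y_f - surrogate(x_f), deg=2)  # learn fine - coarse

x_t = np.linspace(0, 1, 500)
ml_pred = surrogate(x_t) + correction(x_t)            # multi-level prediction
sl_pred = Polynomial.fit(x_f, y_f, deg=10)(x_t)       # single-level, fine data only

rmse = lambda p: np.sqrt(np.mean((p - fine(x_t)) ** 2))
print("multi-level RMSE: ", rmse(ml_pred))
print("single-level RMSE:", rmse(sl_pred))
```

The correction is smoother and smaller than the observable itself, so it can be fit with a low-complexity model from few samples; this is the variance-reduction mechanism the theory formalizes.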
There are two ways of speeding up Markov chain Monte Carlo algorithms: (a) construct more complex samplers that use gradient and higher-order information about the target and (b) design a control variate to reduce the asymptotic variance. While the efficiency of (a) as a function of dimension has been studied extensively, this paper provides the first results linking the efficiency of (b) with dimension. Specifically, we construct a control variate for a d-dimensional random walk Metropolis chain with an independent, identically distributed target using the solution of the Poisson equation for the scaling limit in [30]. We prove that the asymptotic variance of the corresponding estimator is bounded above by a multiple of $\log(d)/d$ over the spectral gap of the chain. The proof hinges on large deviations theory, optimal Young's inequality and Berry–Esseen-type bounds. Extensions of the result to non-product targets are discussed.
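The general mechanics of approach (b) can be sketched with a toy example; the hand-picked control variate below (the identity function, for a one-dimensional standard normal target) is purely illustrative and is not the paper's Poisson-equation construction. The coefficient targets the stationary covariance rather than the asymptotic variance, a further simplification.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random walk Metropolis chain targeting N(0,1).
n, x = 200_000, 0.0
chain = np.empty(n)
for i in range(n):
    prop = x + rng.normal(scale=2.38)                 # Gaussian random walk proposal
    if np.log(rng.uniform()) < 0.5 * (x * x - prop * prop):   # N(0,1) log-ratio
        x = prop
    chain[i] = x

f = np.exp(chain)      # functional of interest; E[f] = e^{1/2} under N(0,1)
g = chain              # control variate with known stationary mean E[g] = 0

# Estimate the optimal linear coefficient theta = Cov(f, g) / Var(g) from the chain.
theta = np.cov(f, g)[0, 1] / g.var()
print("true value:              ", np.exp(0.5))
print("plain estimate:          ", f.mean())
print("control-variate estimate:", (f - theta * g).mean())
```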
Brown–Resnick processes are max-stable processes that are associated with Gaussian processes. Their simulation is often based on the corresponding spectral representation, which is not unique. We study to what extent simulation accuracy and efficiency can be improved by minimizing the maximal variance of the underlying Gaussian process. Such a minimization is a difficult mathematical problem that also depends on the geometry of the simulation domain. We extend Matheron's (1974) seminal contribution in two directions: (i) making his description of a minimal maximal variance explicit for convex variograms on symmetric domains, and (ii) proving that the same strategy also reduces the maximal variance for a large class of nonconvex variograms representable through a Bernstein function. A simulation study confirms that our inexpensive modification can lead to substantial improvements among Gaussian representations. We also compare it with three other established algorithms.
The calculation of multivariate normal probabilities is of great importance in many statistical and economic applications. In this paper we propose a spherical Monte Carlo method, supported by both theoretical analysis and numerical simulation. We start by writing the multivariate normal probability via an inner radial integral and an outer spherical integral, using the spherical transformation. For the outer spherical integral, we apply an integration rule obtained by randomly rotating a predetermined set of well-located points. To find the desired set, we derive an upper bound for the variance of the Monte Carlo estimator and propose a set related to the kissing number problem in sphere packings. For the inner radial integral, we employ the idea of antithetic variates and identify conditions under which variance reduction is guaranteed. Extensive Monte Carlo simulations of example probabilities confirm these claims.
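The antithetic-variates idea for the radial part can be illustrated in isolation; the chi-square radial law and the monotone integrand below are assumptions chosen for a minimal sketch, not the paper's actual integrand.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
d, n = 5, 100_000

# Toy radial integrand: monotone in the radius, so antithetic pairing helps.
h = lambda r: np.exp(-r)

u = rng.uniform(size=n)
r = np.sqrt(chi2.ppf(u, df=d))              # radius via inverse CDF of chi^2_d
r_anti = np.sqrt(chi2.ppf(1.0 - u, df=d))   # antithetic radius from 1 - U

plain = h(np.sqrt(chi2.ppf(rng.uniform(size=2 * n), df=d)))  # same budget: 2n draws
anti = 0.5 * (h(r) + h(r_anti))                              # n antithetic pairs

# Compare the variance of the *estimator* (the mean) at equal sampling budget.
print("plain:      mean %.5f  est. var %.3e" % (plain.mean(), plain.var() / (2 * n)))
print("antithetic: mean %.5f  est. var %.3e" % (anti.mean(), anti.var() / n))
```

Because h is monotone and U and 1-U are negatively dependent, the pair averages are negatively correlated, which is exactly the condition under which antithetic variates guarantee variance reduction.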
The multi-level Monte Carlo method proposed by Giles (2008) approximates the expectation of some functionals applied to a stochastic process with optimal order of convergence for the mean-square error. In this paper a modified multi-level Monte Carlo estimator is proposed with significantly reduced computational costs. As the main result, it is proved that the modified estimator reduces the computational costs asymptotically by a factor of $(p/\alpha)^2$ if weak approximation methods of orders $\alpha$ and $p$ are applied in the case of computational costs growing with the same order as the variances decay.
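For reference, the standard (unmodified) multi-level estimator can be sketched as follows for a geometric Brownian motion discretized by Euler-Maruyama; the payoff, step counts, and per-level sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
T, x0, mu, sigma = 1.0, 1.0, 0.05, 0.2
f = lambda x: np.maximum(x - 1.0, 0.0)          # observable (a call-type payoff)

def euler_pair(m, level):
    """Coupled coarse/fine Euler-Maruyama paths sharing Brownian increments."""
    n_fine = 2 ** level
    dt = T / n_fine
    dw = np.sqrt(dt) * rng.standard_normal((m, n_fine))
    xf = np.full(m, x0)
    for k in range(n_fine):                      # fine path, step dt
        xf = xf + mu * xf * dt + sigma * xf * dw[:, k]
    if level == 0:
        return f(xf), np.zeros(m)
    xc = np.full(m, x0)
    for k in range(0, n_fine, 2):                # coarse path, step 2*dt
        xc = xc + mu * xc * (2 * dt) + sigma * xc * (dw[:, k] + dw[:, k + 1])
    return f(xf), f(xc)

# Telescoping sum: E[f_L] = E[f_0] + sum_l E[f_l - f_{l-1}].
samples = [100_000, 20_000, 5_000, 1_250]        # fewer samples at finer levels
estimate = 0.0
for level, m in enumerate(samples):
    pf, pc = euler_pair(m, level)
    estimate += (pf - pc).mean()
print("MLMC estimate of E[f(X_T)]:", estimate)
```

The coupling (shared Brownian increments) makes the level differences small in variance, which is why most samples can be placed on the cheap coarse levels.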
Electron probe microanalysis (EPMA) is based on the comparison of characteristic intensities induced by monoenergetic electrons. When the electron beam ionizes inner atomic shells and these ionizations cause the emission of characteristic X-rays, secondary fluorescence can occur, originating from ionizations induced by X-ray photons produced by the primary electron interactions. As detectors are unable to distinguish the origin of these characteristic X-rays, Monte Carlo simulation of radiation transport becomes an essential tool in the study of this fluorescence enhancement. In this work, characteristic secondary fluorescence enhancement in EPMA has been studied by using the splitting routines offered by PENELOPE 2008 as a variance reduction alternative. This approach is controlled by a single parameter, NSPLIT, which represents the desired number of X-ray photon replicas. The dependence of the uncertainties associated with secondary intensities on NSPLIT was studied as a function of the accelerating voltage and the sample composition in a simple binary alloy in which this effect becomes relevant. The efficiencies achieved for the simulated secondary intensities show a remarkable improvement with increasing NSPLIT; although in most cases an NSPLIT value of 100 is sufficient, some less likely enhancements may require stronger splitting to increase the efficiency associated with the simulation of secondary intensities.
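Stripped of the transport physics, splitting replaces each secondary photon by NSPLIT replicas carrying weight 1/NSPLIT; the schematic toy below (a bare Bernoulli detection model, not PENELOPE) shows the variance mechanics.

```python
import numpy as np

rng = np.random.default_rng(5)
n_primaries, nsplit, p_detect = 200_000, 100, 1e-3

# Each primary emits one characteristic photon; a photon scores with a small
# probability (a crude stand-in for the secondary-fluorescence channel).

# Analogue simulation: one photon per primary, score 0 or 1.
analogue = (rng.uniform(size=n_primaries) < p_detect).astype(float)

# Splitting: NSPLIT replica photons per primary, each with weight 1/NSPLIT;
# the per-primary score is the weighted sum of replica detections.
hits = rng.binomial(nsplit, p_detect, size=n_primaries)
split = hits / nsplit

print("analogue : mean %.2e  var %.2e" % (analogue.mean(), analogue.var()))
print("splitting: mean %.2e  var %.2e" % (split.mean(), split.var()))
```

Both estimators are unbiased; the per-primary variance drops by roughly a factor of NSPLIT, at the cost of tracking NSPLIT times more photons, which matches the efficiency trade-off discussed in the abstract.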
We consider an r-component system having an arbitrary binary monotone structure function. We suppose that shocks occur according to a point process and that, independent of what has already occurred, each new shock is one of r different types, with respective probabilities p1, …, pr. We further suppose that there are given integers n1, …, nr such that component i fails (and remains failed) when there have been a total of ni type-i shocks. Letting L be the time at which the system fails, we are interested in using simulation to estimate E[L], E[L2], and P(L > t). We show how to accomplish this efficiently when the point process is (i) a Poisson process, (ii) a renewal process, and (iii) a Hawkes process.
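A naive direct-simulation baseline for case (i) helps fix the setup; the Poisson rate, the series structure function, and the shock quotas below are assumptions for illustration (the paper's estimators improve on this).

```python
import numpy as np

rng = np.random.default_rng(6)
r, rate = 3, 1.0
p = np.array([0.5, 0.3, 0.2])          # shock-type probabilities p_1, ..., p_r
n_fail = np.array([4, 3, 2])           # component i fails after n_i type-i shocks

def failure_time():
    """Series system: fails as soon as any component reaches its quota."""
    counts = np.zeros(r, dtype=int)
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # Poisson inter-shock time
        i = rng.choice(r, p=p)             # type of the new shock
        counts[i] += 1
        if counts[i] == n_fail[i]:
            return t

ls = np.array([failure_time() for _ in range(50_000)])
t0 = 5.0
print("E[L]   ~", ls.mean())
print("E[L^2] ~", (ls ** 2).mean())
print("P(L>t) ~", (ls > t0).mean(), "at t =", t0)
```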
The Asmussen–Kroese Monte Carlo estimators of $P(S_n > u)$ and $P(S_N > u)$ are known to work well in rare event settings, where $S_N$ is the sum of independent, identically distributed heavy-tailed random variables $X_1,\dots,X_N$ and $N$ is a nonnegative, integer-valued random variable independent of the $X_i$. In this paper we show how to improve the Asmussen–Kroese estimators of both probabilities when the $X_i$ are nonnegative. We also apply our ideas to estimate the quantity $E[(S_N-u)^+]$.
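For orientation, the original Asmussen-Kroese estimator of $P(S_n > u)$ (the starting point the paper improves upon) conditions on the last summand being the maximum; a sketch with Pareto summands, all parameters assumed.

```python
import numpy as np

rng = np.random.default_rng(7)
n, u, alpha, n_sims = 10, 1000.0, 1.5, 100_000

# Pareto(alpha) on [1, inf): survival function and inverse-CDF sampler.
sf = lambda x: np.where(x > 1.0, x ** -alpha, 1.0)
sample = lambda size: rng.uniform(size=size) ** (-1.0 / alpha)

x = sample((n_sims, n - 1))                 # the first n-1 summands
m = x.max(axis=1)                           # M_{n-1}: their maximum
s = x.sum(axis=1)                           # S_{n-1}: their sum

# Asmussen-Kroese: Z = n * P(X_n > max(M_{n-1}, u - S_{n-1}) | X_1,...,X_{n-1}).
z = n * sf(np.maximum(m, u - s))

crude = (sample((n_sims, n)).sum(axis=1) > u).astype(float)
print("AK estimate:    %.4e  (std %.1e)" % (z.mean(), z.std() / np.sqrt(n_sims)))
print("crude estimate: %.4e  (std %.1e)" % (crude.mean(), crude.std() / np.sqrt(n_sims)))
```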
We study asymptotically optimal simulation algorithms for approximating the tail probability $P(e^{X_1}+\cdots+e^{X_d}>u)$ as $u\to\infty$. The first algorithm proposed is based on conditional Monte Carlo and assumes that $(X_1,\dots,X_d)$ has an elliptical distribution, with very mild assumptions on the radial component. This algorithm is applicable to a large class of models in finance, as we demonstrate with examples. In addition, we propose an importance sampling algorithm for an arbitrary dependence structure that is shown to be asymptotically optimal under mild assumptions on the marginal distributions, requiring essentially only that we can simulate $(X_1,\dots,X_d)\mid X_j > b$ efficiently for large $b$. Extensions that allow us to handle portfolios of financial options are also discussed.
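The conditional Monte Carlo mechanism can be seen in a deliberately stripped-down special case (two independent standard normal exponents, so the last coordinate can be integrated out in closed form); the paper's elliptical machinery and importance sampler go well beyond this sketch.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
u, n = 50.0, 100_000

x2 = rng.standard_normal(n)

# Conditional Monte Carlo: integrate out X_1 analytically given X_2.
# P(e^X1 + e^X2 > u | X2) = 1 if e^X2 >= u, else P(X1 > log(u - e^X2)).
rem = u - np.exp(x2)
cond = np.where(rem <= 0.0, 1.0, norm.sf(np.log(np.maximum(rem, 1e-300))))

x1 = rng.standard_normal(n)
crude = (np.exp(x1) + np.exp(x2) > u).astype(float)

print("conditional MC: %.3e  (sample var %.2e)" % (cond.mean(), cond.var()))
print("crude MC:       %.3e  (sample var %.2e)" % (crude.mean(), crude.var()))
```

The conditional estimator replaces a rare 0/1 indicator by a smooth probability, which is the source of its variance reduction in the tail.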
We present an efficient approach for reducing the statistical uncertainty associated with direct Monte Carlo simulations of the Boltzmann equation. As with previous variance-reduction approaches, the resulting relative statistical uncertainty in hydrodynamic quantities (statistical uncertainty normalized by the characteristic value of the quantity of interest) is small and independent of the magnitude of the deviation from equilibrium, making the simulation of arbitrarily small deviations from equilibrium possible. In contrast to previous variance-reduction methods, the method presented here is able to substantially reduce variance with very little modification to the standard DSMC algorithm. This is achieved by introducing an auxiliary equilibrium simulation which, via an importance weight formulation, uses the same particle data as the non-equilibrium (DSMC) calculation; subtracting the equilibrium from the non-equilibrium hydrodynamic fields drastically reduces the statistical uncertainty of the latter because the two fields are correlated. The resulting formulation is simple to code and provides considerable computational savings for a wide range of problems of practical interest. It is validated by comparing our results with DSMC solutions for steady and unsteady, isothermal and non-isothermal problems; in all cases very good agreement between the two methods is found.
The waste-recycling Monte Carlo (WRMC) algorithm introduced by physicists is a modification of the (multi-proposal) Metropolis–Hastings algorithm which makes use of all the proposals in the empirical mean, whereas the standard (multi-proposal) Metropolis–Hastings algorithm uses only the accepted proposals. In this paper we extend the WRMC algorithm to a general control variate technique and exhibit the optimal choice of the control variate in terms of the asymptotic variance. We also give an example showing that, contrary to the intuition of physicists, the WRMC algorithm can have an asymptotic variance larger than that of the Metropolis–Hastings algorithm. However, in the particular case of the Metropolis–Hastings algorithm called the Boltzmann algorithm, we prove that the WRMC algorithm is asymptotically better than the Metropolis–Hastings algorithm. The same property holds for the multi-proposal Metropolis–Hastings algorithm. In this framework we consider a linear parametric generalization of WRMC, and we propose an estimator of the explicit optimal parameter using the proposals.
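In its simplest single-proposal form, waste recycling scores the conditional expectation of the next state over the accept/reject randomness; a sketch for a standard normal target (a simplified illustration, not the paper's optimal control-variate version).

```python
import numpy as np

rng = np.random.default_rng(9)
n, x = 100_000, 0.0
f = lambda y: y ** 2                     # estimate E[X^2] = 1 under N(0,1)

std_sum, wr_sum = 0.0, 0.0
for _ in range(n):
    prop = x + rng.normal(scale=2.0)
    a = min(1.0, np.exp(0.5 * (x * x - prop * prop)))   # acceptance probability
    # Waste recycling: score both the proposal and the current state, weighted
    # by the acceptance probability (the conditional mean of the next score).
    wr_sum += a * f(prop) + (1.0 - a) * f(x)
    if rng.uniform() < a:
        x = prop
    std_sum += f(x)                      # standard MH scores only the kept state

print("standard MH estimate:    ", std_sum / n)
print("waste-recycling estimate:", wr_sum / n)
```

As the abstract cautions, this recycling does not always lower the asymptotic variance; the paper characterizes when it does.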
We introduce adaptive-simulation schemes for estimating performance measures for stochastic systems based on the method of control variates. We consider several possible methods for adaptively tuning the control-variate estimators, and describe their asymptotic properties. Under certain assumptions, including the existence of a ‘perfect control variate’, all of the estimators considered converge faster than the canonical rate of $n^{-1/2}$, where n is the simulation run length. Perfect control variates for a variety of stochastic processes can be constructed from ‘approximating martingales’. We prove a central limit theorem for an adaptive estimator; a similar estimator converges at rate $n^{-1}$. An exponential rate of convergence is also possible under suitable conditions.
We consider the simulation of transient performance measures of highly reliable fault-tolerant computer systems. The most widely used mathematical tools for modeling the behavior of these systems are Markov processes. We deal primarily with the simulation of the mean time to failure (MTTF) and the reliability, R(t), of the system at time t. Variance reduction techniques are used to reduce the simulation time; we combine two of them, importance sampling and conditioning. The resulting hybrid algorithm achieves a significant reduction in simulation time and yields stable estimates.
A recent study on the GI/G/1 queue derives the Maclaurin series for the moments of the waiting time and the delay with respect to certain parameters. By the same approach, we obtain an identity on the moments of the transient delay of the M/G/1 queue. This identity allows us to better understand the transient behavior of the process. We apply the identity, together with other established results, to study the convergence rate and stochastic concavity of the transient delay process, and to derive bounds and approximations for the moments. Our approximation and bound both have simple closed forms and are asymptotically exact as either the traffic intensity goes to zero or the process approaches stationarity. The performance of the approximation is illustrated by numerical experiments for several M/G/1 queues. It is interesting to note that our results can also be used to achieve variance reduction in simulation.
Likelihood ratios are used in computer simulation to estimate expectations with respect to one law from simulation of another. This importance sampling technique can be implemented with either the likelihood ratio at the end of the simulated time horizon or with a sequence of likelihood ratios at intermediate times. Since a likelihood ratio process is a martingale, the intermediate values are conditional expectations of the final value and their use introduces no bias.
We provide conditions under which using conditional expectations in this way brings guaranteed variance reduction. We use stochastic orderings to get positive dependence between a process and its likelihood ratio, from which variance reduction follows. Our analysis supports the following rough statement: for increasing functionals of associated processes with monotone likelihood ratio, conditioning helps. Examples are drawn from recursively defined processes, Markov chains in discrete and continuous time, and processes with Poisson input.
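A minimal numerical illustration of the martingale point, under an assumed setup: i.i.d. normals are sampled with a shifted mean, the functional depends only on the first step, and the intermediate likelihood ratio is then the conditional expectation of the final one, so both estimators are unbiased but the intermediate one has smaller variance.

```python
import numpy as np

rng = np.random.default_rng(10)
n_steps, mu, n_sims = 10, 1.0, 100_000

# Estimate P(X_1 > 2) under P = N(0,1)^n by sampling under Q = N(mu,1)^n.
x = mu + rng.standard_normal((n_sims, n_steps))
f = (x[:, 0] > 2.0).astype(float)               # depends on the first step only

step_lr = np.exp(-mu * x + 0.5 * mu * mu)       # per-step dP/dQ
full = f * step_lr.prod(axis=1)                 # likelihood ratio at the horizon
early = f * step_lr[:, 0]                       # LR at the time f is determined
# E_Q[L_n | X_1] = L_1 (martingale), so both estimators are unbiased for E_P[f].

print("true value:          ", 0.02275, "(= P(N(0,1) > 2), approx.)")
print("horizon-LR  mean/var:", full.mean(), full.var())
print("early-LR    mean/var:", early.mean(), early.var())
```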
We derive conditions under which the increments of a vector process are associated — i.e. under which all pairs of increasing functions of the increments are positively correlated. The process itself is associated if it is generated by a family of associated and monotone kernels. We show that the increments are associated if the kernels are associated and, in a suitable sense, convex. In the Markov case, we note a connection between associated increments and temporal stochastic convexity.
Our analysis is motivated by a question in variance reduction: assuming that a normalized process and its normalized compensator converge to the same value, which is the better estimator of that limit? Under some additional hypotheses we show that, for processes with conditionally associated increments, the compensator has smaller asymptotic variance.
The problem of Monte Carlo estimation of M(t) = E[N(t)], the expected number of renewals in [0, t] for a renewal process with known interarrival time distribution F, is considered. Several unbiased estimators which compete favorably with the naive estimator, N(t), are presented and studied. An approach to reduce the variance of the Monte Carlo estimator is developed and illustrated.
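One classical variance-reduced choice is the conditional (smoothed) estimator that replaces each renewal indicator by its conditional probability given the previous epoch; the sketch below assumes exponential interarrivals purely so that F and the exact answer M(t) = λt are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(11)
lam, t, n_sims = 1.0, 5.0, 50_000
F = lambda s: 1.0 - np.exp(-lam * s)       # interarrival CDF (exponential here)

naive, smoothed = [], []
for _ in range(n_sims):
    s, y, count = 0.0, F(t), 0             # F(t) is the term for S_0 = 0
    while True:
        s += rng.exponential(1.0 / lam)
        if s > t:
            break
        count += 1
        y += F(t - s)                      # P(next renewal by t | current epoch)
    naive.append(count)
    smoothed.append(y)

naive, smoothed = np.array(naive), np.array(smoothed)
print("exact M(t) = lambda*t =", lam * t)
print("naive N(t):       mean %.4f  var %.4f" % (naive.mean(), naive.var()))
print("conditional est.: mean %.4f  var %.4f" % (smoothed.mean(), smoothed.var()))
```

Unbiasedness follows from summing P(S_k <= t | S_{k-1}) = F(t - S_{k-1}) over k, whose expectation telescopes to M(t).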
Suppose two alternative designs for a stochastic system are to be compared. These two systems can be simulated independently or dependently. This paper presents a method for comparing two regenerative stochastic processes in a dependent fashion using common random numbers. A set of sufficient conditions is given that guarantees that the dependent simulations will produce a variance reduction over independent simulations. Numerical examples for a variety of simple stochastic models are included which illustrate the variance reduction achieved.
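The mechanics in their simplest form: comparing two service rates of a single queue via the Lindley recursion, driving both designs with the same uniforms through inversion (the queueing model and all parameters are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(12)
n_cust, n_reps, lam = 200, 2000, 1.0

def mean_wait(mu, u_arr, u_svc):
    """Mean wait over n_cust customers; Lindley recursion, inversion sampling."""
    inter = -np.log(u_arr) / lam           # interarrival times
    service = -np.log(u_svc) / mu          # service times
    w, total = 0.0, 0.0
    for a, s in zip(inter, service):
        w = max(0.0, w + s - a)            # waiting time of the next customer
        total += w
    return total / n_cust

diffs_crn, diffs_ind = [], []
for _ in range(n_reps):
    u1, u2 = rng.uniform(size=(2, n_cust))
    u3, u4 = rng.uniform(size=(2, n_cust))
    d1 = mean_wait(1.25, u1, u2)
    diffs_crn.append(d1 - mean_wait(1.5, u1, u2))   # common random numbers
    diffs_ind.append(d1 - mean_wait(1.5, u3, u4))   # independent streams

print("CRN:         mean %.4f  var %.2e" % (np.mean(diffs_crn), np.var(diffs_crn)))
print("independent: mean %.4f  var %.2e" % (np.mean(diffs_ind), np.var(diffs_ind)))
```

Inversion makes each design's output monotone in the shared uniforms, inducing the positive correlation that makes the variance of the difference smaller than under independent simulation.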