In this manuscript, we address open questions raised by Dieker and Yakir (2014), who proposed a novel method of estimating (discrete) Pickands constants $\mathcal{H}^\delta_\alpha$ using a family of estimators $\xi^\delta_\alpha(T)$, $T>0$, where $\alpha\in(0,2]$ is the Hurst parameter, and $\delta\geq0$ is the step size of the regular discretization grid. We derive an upper bound for the discretization error $\mathcal{H}_\alpha^0 - \mathcal{H}_\alpha^\delta$, whose rate of convergence agrees with Conjecture 1 of Dieker and Yakir (2014) in the case $\alpha\in(0,1]$ and agrees up to logarithmic terms for $\alpha\in(1,2)$. Moreover, we show that all moments of $\xi_\alpha^\delta(T)$ are uniformly bounded and the bias of the estimator decays no slower than $\exp\{-\mathcal{C}T^{\alpha}\}$ as $T$ becomes large.
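As a rough illustration only: assuming the Dieker–Yakir representation in which $\mathcal{H}^{\delta}_{\alpha}$ is approximated by $\mathbb{E}[\sup_t e^{Z(t)} / (\delta \sum_t e^{Z(t)})]$ over the grid $\delta\mathbb{Z}\cap[-T,T]$, with $Z(t)=\sqrt{2}B_{\alpha/2}(t)-|t|^{\alpha}$ for fractional Brownian motion $B_{\alpha/2}$, a naive Monte Carlo sketch (Cholesky-based fBm sampling; all names and tolerances are our own, not the authors' implementation) could look as follows.

```python
import numpy as np

def xi_estimate(alpha=1.0, T=2.0, delta=0.05, n_rep=2000, seed=0):
    """Crude Monte Carlo sketch of a Dieker-Yakir-type estimator of the
    discrete Pickands constant (illustrative, not the authors' code)."""
    rng = np.random.default_rng(seed)
    t = np.arange(-T, T + delta / 2, delta)               # grid delta*Z on [-T, T]
    # fBm covariance with Hurst H = alpha/2:
    # Cov(B(s), B(u)) = (|s|^alpha + |u|^alpha - |s - u|^alpha) / 2.
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (np.abs(s)**alpha + np.abs(u)**alpha - np.abs(s - u)**alpha)
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(len(t)))  # jitter: B(0) = 0 row
    est = np.empty(n_rep)
    for i in range(n_rep):
        B = L @ rng.standard_normal(len(t))               # fBm sample on the grid
        Z = np.sqrt(2.0) * B - np.abs(t)**alpha           # drifted field
        w = np.exp(Z - Z.max())                           # stabilized exponentials
        est[i] = w.max() / (delta * w.sum())              # sup e^Z / (delta*sum e^Z)
    return est.mean(), est.std() / np.sqrt(n_rep)
```

The abstract's results concern exactly the two error sources visible here: the bias from truncating to $[-T,T]$ (decaying no slower than $\exp\{-\mathcal{C}T^{\alpha}\}$) and the discretization error from the grid step $\delta$.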
It is known that the simple slice sampler has robust convergence properties; however, the class of problems where it can be implemented is limited. In contrast, we consider hybrid slice samplers, which are easily implementable and in which another Markov chain approximately samples the uniform distribution on each slice. Under appropriate assumptions on the Markov chain on the slice, we give a lower bound and an upper bound of the spectral gap of the hybrid slice sampler in terms of the spectral gap of the simple slice sampler. An immediate consequence of this is that the spectral gap and geometric ergodicity of the hybrid slice sampler can be concluded from the spectral gap and geometric ergodicity of the simple version, which is very well understood. These results indicate that robustness properties of the simple slice sampler are inherited by (appropriately designed) easily implementable hybrid versions. We apply the developed theory and analyze a number of specific algorithms, such as stepping-out and shrinkage slice sampling, hit-and-run slice sampling on a class of multivariate targets, and an easily implementable combination of both procedures on multidimensional bimodal densities.
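Since the analysis covers the stepping-out and shrinkage procedures, a compact sketch of Neal's (2003) univariate slice sampler with both may help fix ideas; the function names and the bimodal example target below are our own illustrative choices.

```python
import numpy as np

def slice_sample(logpdf, x0, n, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage (Neal 2003);
    `logpdf` is the log target density, known up to an additive constant."""
    rng = rng or np.random.default_rng()
    x, out = x0, np.empty(n)
    for i in range(n):
        logy = logpdf(x) + np.log(rng.random())  # slice level: log(u * f(x))
        l = x - w * rng.random()                 # randomly placed initial bracket
        r = l + w
        while logpdf(l) > logy:                  # stepping out: grow the bracket
            l -= w
        while logpdf(r) > logy:
            r += w
        while True:                              # shrinkage: sample, shrink on miss
            x1 = rng.uniform(l, r)
            if logpdf(x1) > logy:
                x = x1
                break
            if x1 < x:
                l = x1
            else:
                r = x1
        out[i] = x
    return out

# Example: a bimodal target f(x) proportional to exp(-(x^2 - 4)^2 / 2).
draws = slice_sample(lambda x: -0.5 * (x * x - 4.0)**2, x0=0.1, n=5000)
```

The hybrid samplers studied in the paper replace the exact uniform draw on the slice by a Markov chain update, which is precisely what the shrinkage loop above performs.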
Qu, Dassios, and Zhao (2021) suggested an exact simulation method for tempered stable Ornstein–Uhlenbeck processes, but their algorithms contain some errors. This short note aims to correct their algorithms and conduct some numerical experiments.
In this paper we consider the filtering of partially observed multidimensional diffusion processes that are observed regularly at discrete times. This is a challenging problem which requires the use of advanced numerical schemes based upon time-discretization of the diffusion process and then the application of particle filters. Perhaps the state-of-the-art method for moderate-dimensional problems is the multilevel particle filter of Jasra et al. (SIAM J. Numer. Anal. 55 (2017), 3068–3096). This is a method that combines multilevel Monte Carlo and particle filters. The approach in that article is based intrinsically upon an Euler discretization method. We develop a new particle filter based upon the antithetic truncated Milstein scheme of Giles and Szpruch (Ann. Appl. Prob. 24 (2014), 1585–1620). We show empirically for a class of diffusion problems that, for $\epsilon>0$ given, the cost to produce a mean squared error (MSE) of $\mathcal{O}(\epsilon^2)$ in the estimation of the filter is $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$. In the case of multidimensional diffusions with non-constant diffusion coefficient, the method of Jasra et al. (2017) requires a cost of $\mathcal{O}(\epsilon^{-2.5})$ to achieve the same MSE.
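For orientation, a baseline bootstrap particle filter built on an Euler–Maruyama discretization (the ingredient that multilevel particle filters refine) is sketched below; it is not the antithetic truncated Milstein coupling developed in the paper, and the one-dimensional OU example and all names are illustrative.

```python
import numpy as np

def euler_bootstrap_pf(ys, x0, drift, sigma, obs_loglik, dt, substeps, N, rng=None):
    """Bootstrap particle filter for a scalar diffusion observed at discrete
    times, with Euler-Maruyama propagation between observations."""
    rng = rng or np.random.default_rng()
    x = np.full(N, x0, dtype=float)
    means = []
    for y in ys:
        for _ in range(substeps):                # propagate with Euler steps
            x = x + drift(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(N)
        logw = obs_loglik(y, x)                  # reweight by the observation
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))              # filter-mean estimate
        x = x[rng.choice(N, size=N, p=w)]        # multinomial resampling
    return np.array(means)

# Example: OU signal dX = -X dt + dW, observations y_k = x_k + N(0, 0.25) noise.
rng = np.random.default_rng(1)
est = euler_bootstrap_pf(rng.standard_normal(50) * 0.5, 0.0,
                         lambda x: -x, lambda x: np.ones_like(x),
                         lambda y, x: -0.5 * (y - x)**2 / 0.25,
                         dt=0.02, substeps=50, N=500)
```

The multilevel idea couples such filters across discretization levels; replacing Euler with the antithetic truncated Milstein scheme is what improves the cost from $\mathcal{O}(\epsilon^{-2.5})$ toward $\mathcal{O}(\epsilon^{-2}\log(\epsilon)^2)$.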
There has been substantial interest in developing Markov chain Monte Carlo algorithms based on piecewise deterministic Markov processes. However, existing algorithms can only be used if the target distribution of interest is differentiable everywhere. The key to adapting these algorithms so that they can sample from densities with discontinuities is to define appropriate dynamics for the process when it hits a discontinuity. We present a simple condition for the transition of the process at a discontinuity which can be used to extend any existing sampler for smooth densities, and give specific choices for this transition which work with popular algorithms such as the bouncy particle sampler, the coordinate sampler, and the zigzag process. Our theoretical results extend and make rigorous arguments that have been presented previously, for instance constructing samplers for continuous densities restricted to a bounded domain, and we present a version of the zigzag process that can work in such a scenario. Our novel approach to deriving the invariant distribution of a piecewise deterministic Markov process with boundaries may be of independent interest.
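As a concrete instance of a piecewise deterministic sampler on a smooth target (the setting these results extend), a one-dimensional zigzag process for $N(0,1)$ can be simulated exactly, since the switching rate $\lambda(x,\theta)=\max(0,\theta x)$ has an explicitly invertible integrated rate; targets with discontinuities additionally require the boundary transitions described above. Names below are illustrative.

```python
import numpy as np

def zigzag_gaussian(T, x0=0.0, theta0=1.0, rng=None):
    """One-dimensional zigzag process targeting N(0,1): linear flight at
    speed theta in {-1,+1}, with velocity flips at rate max(0, theta * x)."""
    rng = rng or np.random.default_rng()
    x, theta, t = x0, theta0, 0.0
    skeleton = [(t, x, theta)]
    while t < T:
        E = rng.exponential()
        a = theta * x                     # directional derivative of U(x) = x^2/2
        # Solve  int_0^tau max(0, a + s) ds = E  for the next event time tau.
        tau = -a + np.sqrt(max(a, 0.0)**2 + 2.0 * E)
        t += tau
        x += theta * tau                  # deterministic flight between events
        theta = -theta                    # flip the velocity at the event
        skeleton.append((t, x, theta))
    return skeleton                       # event skeleton; paths are linear between
```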
We develop a novel Monte Carlo algorithm for the vector consisting of the supremum, the time at which the supremum is attained, and the position at a given (constant) time of an exponentially tempered Lévy process. The algorithm, based on the increments of the process without tempering, converges geometrically fast (as a function of the computational cost) for discontinuous and locally Lipschitz functions of the vector. We prove that the corresponding multilevel Monte Carlo estimator has optimal computational complexity (i.e. of order $\varepsilon^{-2}$ if the mean squared error is at most $\varepsilon^2$) and provide its central limit theorem (CLT). Using the CLT we construct confidence intervals for barrier option prices and various risk measures based on drawdown under the tempered stable (CGMY) model calibrated/estimated on real-world data. We provide non-asymptotic and asymptotic comparisons of our algorithm with existing approximations, leading to rule-of-thumb principles guiding users to the best method for a given set of parameters. We illustrate the performance of the algorithm with numerical examples.
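To make the complexity claim concrete, the textbook multilevel Monte Carlo skeleton is shown below: a telescoping sum of coupled level differences, with per-level sample sizes decaying across levels. The toy coupling stands in for the paper's construction from the increments of the process without tempering; all names are illustrative.

```python
import numpy as np

def mlmc(sample_diff, L, N, rng=None):
    """Generic MLMC estimator: sum over levels l of the sample mean of
    coupled differences f(X^l) - f(X^{l-1}) (level 0 contributes f(X^0))."""
    rng = rng or np.random.default_rng()
    return sum(sample_diff(l, N[l], rng).mean() for l in range(L + 1))

# Toy usage: estimate E[X^2], X ~ N(0,1), via "approximations" X_l = X + 2^{-l} Z;
# sharing (X, Z) across the two levels is the coupling that shrinks variance.
def diff(l, n, rng):
    X, Z = rng.standard_normal(n), rng.standard_normal(n)
    fine = (X + 2.0**-l * Z)**2
    return fine if l == 0 else fine - (X + 2.0**-(l - 1) * Z)**2

print(mlmc(diff, L=6, N=[4000 // 2**l + 100 for l in range(7)]))
```

The optimal $\mathcal{O}(\varepsilon^{-2})$ complexity claimed in the abstract corresponds to the regime where the level variances decay fast enough that the total cost is dominated by the coarsest levels.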
We construct a class of non-reversible Metropolis kernels as a multivariate extension of the guided-walk kernel proposed by Gustafson (Statist. Comput. 8, 1998). The main idea of our method is to introduce a projection that maps a state space to a totally ordered group. By using Haar measure, we construct a novel Markov kernel termed the Haar mixture kernel, which is of interest in its own right. This is achieved by inducing a topological structure on the totally ordered group. Our proposed method, the $\Delta$-guided Metropolis–Haar kernel, is constructed by using the Haar mixture kernel as a proposal kernel. For logistic regression and a discretely observed stochastic process, the proposed non-reversible kernel is at least 10 times better than the random-walk Metropolis kernel and the Hamiltonian Monte Carlo kernel in terms of effective sample size per second.
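For reference, Gustafson's original one-dimensional guided walk, which the paper extends to a multivariate, Haar-measure-based construction, admits a very short sketch; the target and names below are placeholders.

```python
import numpy as np

def guided_walk(logpdf, x0, n, scale=1.0, rng=None):
    """Gustafson's (1998) guided walk: propose moves in the current direction
    p in {-1,+1} and flip p only on rejection, suppressing backtracking."""
    rng = rng or np.random.default_rng()
    x, p, out = x0, 1.0, np.empty(n)
    lp = logpdf(x)
    for i in range(n):
        y = x + p * scale * abs(rng.standard_normal())  # directed half-normal step
        ly = logpdf(y)
        if np.log(rng.random()) < ly - lp:              # usual Metropolis accept
            x, lp = y, ly
        else:
            p = -p                                      # flip direction on rejection
        out[i] = x
    return out

draws = guided_walk(lambda x: -0.5 * x * x, x0=0.0, n=10000)  # N(0,1) target
```

The projection onto a totally ordered group in the paper plays the role that the sign $p$ plays here, which is what makes a multivariate analogue possible.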
The problem of optimally scaling the proposal distribution in a Markov chain Monte Carlo algorithm is critical to the quality of the generated samples. Much work has gone into obtaining such results for various Metropolis–Hastings (MH) algorithms. Recently, acceptance probabilities other than MH have been employed in problems with intractable target distributions, but few resources are available on tuning Gaussian proposal distributions in this setting. We obtain optimal scaling results for a general class of acceptance functions, which includes Barker’s and lazy MH. In particular, optimal values for Barker’s algorithm are derived and found to be significantly different from those obtained for the MH algorithm. Our theoretical conclusions are supported by numerical simulations indicating that when the optimal proposal variance is unknown, tuning to the optimal acceptance probability remains an effective strategy.
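A minimal random-walk sampler contrasting the MH and Barker acceptance functions may be useful: with a symmetric proposal, Barker's rule accepts a move from $x$ to $y$ with probability $\pi(y)/(\pi(x)+\pi(y))$. The target and scale below are placeholders; the optimal tuning values are those derived in the paper, not anything this sketch chooses.

```python
import numpy as np

def rw_sampler(logpdf, x0, n, scale, acc="mh", rng=None):
    """Random-walk sampler with Metropolis-Hastings or Barker acceptance."""
    rng = rng or np.random.default_rng()
    x, lp, out = x0, logpdf(x0), np.empty(n)
    for i in range(n):
        y = x + scale * rng.standard_normal()
        ly = logpdf(y)
        if acc == "mh":
            a = 1.0 if ly >= lp else np.exp(ly - lp)   # min(1, pi(y)/pi(x))
        else:
            a = 1.0 / (1.0 + np.exp(lp - ly))          # pi(y) / (pi(x) + pi(y))
        if rng.random() < a:
            x, lp = y, ly
        out[i] = x
    return out

draws = rw_sampler(lambda x: -0.5 * x * x, 0.0, 10000, scale=2.4, acc="barker")
```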
We prove polynomial ergodicity for the one-dimensional Zig-Zag process on heavy-tailed targets and identify the exact order of polynomial convergence of the process when targeting Student distributions.
In this article we consider a Monte-Carlo-based method to filter partially observed diffusions observed at regular and discrete times. Given access only to Euler discretizations of the diffusion process, we present a new procedure which can return online estimates of the filtering distribution with no time-discretization bias and finite variance. Our approach is based upon a novel double application of the randomization methods of Rhee and Glynn (Operat. Res. 63, 2015) along with the multilevel particle filter (MLPF) approach of Jasra et al. (SIAM J. Numer. Anal. 55, 2017). A numerical comparison of our new approach with the MLPF, on a single processor, shows that similar errors are possible for a mild increase in computational cost. However, the new method scales strongly to arbitrarily many processors.
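The debiasing ingredient can be illustrated by the single-term Rhee–Glynn estimator: randomize the level of a converging sequence of approximations and reweight by the level probability. The toy increments below stand in for coupled particle-filter levels and are purely illustrative.

```python
import numpy as np

def single_term(diff, p, rng):
    """Rhee-Glynn single-term estimator: draw level L with P(L=l) = p[l] and
    return diff(L) / p[L]; unbiased for the telescoping sum of E[diff(l)]."""
    l = rng.choice(len(p), p=p)
    return diff(l, rng) / p[l]

# Toy usage: Y_l = X + 2^{-l} Z converges to Y = X; coupled increments telescope.
def diff(l, rng):
    X, Z = rng.standard_normal(), rng.standard_normal()
    return X + Z if l == 0 else (2.0**-l - 2.0**-(l - 1)) * Z

rng = np.random.default_rng(0)
p = np.array([2.0**-(l + 1) for l in range(20)])
p[-1] += 1.0 - p.sum()                      # truncated geometric level law
est = np.mean([single_term(diff, p, rng) for _ in range(10000)])  # approx E[X] = 0
```

The paper applies such randomization twice, which is what removes the time-discretization bias entirely while keeping the variance finite.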
We present a new and straightforward algorithm that simulates exact sample paths for a generalized stress-release process. The computation of the exact law of the joint inter-arrival times is detailed and used to derive this algorithm. Furthermore, the martingale generator of the process is derived; it yields theoretical moments which generalize some results of [3] and are used to demonstrate the validity of our simulation algorithm.
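For the classical stress-release model (conditional intensity $\lambda(t)=\exp\{\alpha+\beta X(t)\}$ with $X(t)=x_0+\rho t-\sum_{t_i\le t}Y_i$), exact simulation by inverting the integrated intensity is short enough to sketch below; the generalized process of the paper requires the exact joint inter-arrival law instead, so treat this parametrization as illustrative only.

```python
import numpy as np

def stress_release(T, x0=0.0, rho=1.0, alpha=0.0, beta=1.0, drop_mean=1.0, rng=None):
    """Exact simulation of the classical stress-release process on [0, T] by
    inversion: between events the integrated intensity is available in closed
    form, so each inter-arrival time solves Lambda(u) = Exp(1) exactly."""
    rng = rng or np.random.default_rng()
    t, x, events = 0.0, x0, []
    while True:
        E = rng.exponential()
        # Solve exp(alpha + beta*x) * (exp(beta*rho*u) - 1) / (beta*rho) = E for u.
        u = np.log1p(E * beta * rho * np.exp(-alpha - beta * x)) / (beta * rho)
        t += u
        if t > T:
            return np.array(events)
        x += rho * u - rng.exponential(drop_mean)   # stress builds, then drops
        events.append(t)
```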
We consider a continuous Gaussian random field living on a compact set $T\subset \mathbb{R}^{d}$. We are interested in designing an asymptotically efficient estimator of the probability that the integral of the exponential of the Gaussian process over $T$ exceeds a large threshold $u$. We propose an Asmussen–Kroese conditional Monte Carlo type estimator and discuss its asymptotic properties according to the assumptions on the first and second moments of the Gaussian random field. We also provide a simulation study to illustrate its effectiveness and compare its performance with the importance sampling type estimator of Liu and Xu (2014a).
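To convey the Asmussen–Kroese idea in its original i.i.d. setting (the paper adapts it to the dependent, continuous setting of an integrated exponentiated Gaussian field), the classic conditional Monte Carlo estimator for a heavy-tailed sum conditions on everything except the largest summand; names and the lognormal example below are illustrative.

```python
import numpy as np
from math import erf, log, sqrt

def asmussen_kroese(tail, sampler, n, u, n_rep=10000, rng=None):
    """Asmussen-Kroese estimator of P(X_1 + ... + X_n > u) for i.i.d. X_i:
    Z = n * P(X_n > max(M_{n-1}, u - S_{n-1}) | X_1, ..., X_{n-1})."""
    rng = rng or np.random.default_rng()
    z = np.empty(n_rep)
    for i in range(n_rep):
        x = sampler(n - 1, rng)                    # X_1, ..., X_{n-1}
        thresh = max(x.max() if n > 1 else -np.inf, u - x.sum())
        z[i] = n * tail(thresh)                    # exact conditional tail
    return z.mean(), z.std() / np.sqrt(n_rep)

# Example: standard lognormal summands; tail(t) = 1 - Phi(log t) for t > 0.
tail = lambda t: 0.5 * (1.0 - erf(log(t) / sqrt(2.0))) if t > 0 else 1.0
p, se = asmussen_kroese(tail, lambda m, rng: rng.lognormal(size=m), n=10, u=50.0)
```

Conditioning out the largest term is what keeps the relative error of such estimators controlled as the threshold $u$ grows, which is the efficiency notion studied in the paper.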
We study weighted ensemble, an interacting particle method for sampling distributions of Markov chains that has been used in computational chemistry since the 1990s. Many important applications of weighted ensemble require the computation of long time averages. We establish the consistency of weighted ensemble in this setting by proving an ergodic theorem for time averages. As part of the proof, we derive explicit variance formulas that could be useful for optimizing the method.
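A minimal sketch of the weighted-ensemble mechanics for time averages, under the simplest allocation rule (each occupied bin resamples back to its current particle count): the weights always sum to one, resampling conserves each bin's weight, and the running average accumulates $\sum_i w_i f(x_i)$. All names and the binning rule are our own illustrative choices.

```python
import numpy as np

def weighted_ensemble(step, f, bin_edges, x0, N, T, rng=None):
    """Weighted ensemble for time averages: evolve, re-bin, and resample
    within bins so each bin's total weight is conserved exactly."""
    rng = rng or np.random.default_rng()
    x = np.full(N, x0, dtype=float)
    w = np.full(N, 1.0 / N)
    avg = 0.0
    for _ in range(T):
        x = step(x, rng)                         # one Markov step for all particles
        avg += np.sum(w * f(x)) / T              # running ergodic average
        idx = np.digitize(x, bin_edges)          # assign particles to bins
        new_x, new_w = [], []
        for b in np.unique(idx):
            sel = np.flatnonzero(idx == b)
            wb = w[sel].sum()                    # bin weight, conserved below
            k = len(sel)                         # simplest allocation: keep count
            kids = rng.choice(sel, size=k, p=w[sel] / wb)
            new_x.extend(x[kids]); new_w.extend([wb / k] * k)
        x, w = np.array(new_x), np.array(new_w)
    return avg
```

The variance formulas in the paper quantify how the choice of bins and allocations affects the fluctuations of exactly this kind of running average.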
We investigate properties of random mappings whose core is composed of derangements as opposed to permutations. Such mappings arise as the natural framework for studying the Screaming Toes game described, for example, by Peter Cameron. This mapping differs from the classical case primarily in the behaviour of the small components, and a number of explicit results are provided to illustrate these differences.
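A quick simulation clarifies the 'derangement core' framing: in the Screaming Toes game each of $n$ people looks at a uniformly chosen other person's feet, giving a fixed-point-free random mapping whose cyclic part is a derangement; two people scream when they look at each other (a 2-cycle). The implementation details below are our own.

```python
import numpy as np

def screaming_toes(n, rng=None):
    """Sample a uniform random mapping f with no fixed points and return
    (f, its cyclic core elements, the mutual 'screaming' pairs)."""
    rng = rng or np.random.default_rng()
    f = rng.integers(0, n - 1, size=n)
    f[f >= np.arange(n)] += 1           # skip i itself, so f(i) != i
    g = np.arange(n)
    for _ in range(n):                  # f^n maps every point into the core
        g = f[g]
    core = np.unique(g)                 # cyclic elements; f permutes this set
    screams = [(i, int(f[i])) for i in range(n) if f[f[i]] == i and i < f[i]]
    return f, core, screams
```

Restricted to `core`, f is a permutation without fixed points, i.e. a derangement, which is exactly the structural difference from classical random mappings highlighted above.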
In this paper an exact rejection algorithm for simulating paths of the coupled Wright–Fisher diffusion is introduced. The coupled Wright–Fisher diffusion is a family of multivariate Wright–Fisher diffusions that have drifts depending on each other through a coupling term and that find applications in the study of networks of interacting genes. The proposed rejection algorithm uses independent neutral Wright–Fisher diffusions as candidate proposals, which are only needed at a finite number of points. Once a candidate is accepted, the remainder of the path can be recovered by sampling from neutral multivariate Wright–Fisher bridges, for which an exact sampling strategy is also provided. Finally, the algorithm’s complexity is derived and its performance demonstrated in a simulation study.
A common tool in the practice of Markov chain Monte Carlo (MCMC) is to use approximating transition kernels to speed up computation when the desired kernel is slow to evaluate or is intractable. A limited set of quantitative tools exists to assess the relative accuracy and efficiency of such approximations. We derive a set of tools for such analysis based on the Hilbert space $L_2(\pi)$ generated by the stationary distribution $\pi$ we intend to sample from. Our results apply to approximations of reversible chains which are geometrically ergodic, as is typically the case for applications to MCMC. The focus of our work is on determining whether the approximating kernel will preserve the geometric ergodicity of the exact chain, and whether the approximating stationary distribution will be close to the original stationary distribution. For reversible chains, our results extend the results of Johndrow et al. (2015) from the uniformly ergodic case to the geometrically ergodic case, under some additional regularity conditions. We then apply our results to a number of approximate MCMC algorithms.
Coupling-from-the-past (CFTP) methods have been used to generate perfect samples from finite Gibbs hard-sphere models, an important class of spatial point processes consisting of spheres with centers in a bounded region, distributed as a homogeneous Poisson point process (PPP) conditioned so that the spheres do not overlap. We propose an alternative importance-sampling-based rejection methodology for the perfect sampling of these models. We analyze the asymptotic expected running time complexity of the proposed method when the intensity of the reference PPP increases to infinity while the (expected) sphere radius decreases to zero at varying rates. We further compare the performance of the proposed method analytically and numerically with that of a naive rejection algorithm and of popular dominated CFTP algorithms. Our analysis relies upon identifying large deviations decay rates of the non-overlapping probability of spheres whose centers are distributed as a homogeneous PPP.
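The naive rejection baseline referred to above is simple enough to state: propose an unconditioned PPP configuration and accept only if no two spheres overlap. The unit-square window and names are illustrative; the paper's method replaces this with an importance-sampling-based proposal whose acceptance probability decays far more slowly.

```python
import numpy as np

def hard_spheres_naive_rejection(lam, r, rng=None):
    """Perfect sampling of a hard-sphere Gibbs model on [0,1]^2 by naive
    rejection: draw PPP(lam) centers, accept if all pairwise distances >= 2r."""
    rng = rng or np.random.default_rng()
    trials = 0
    while True:
        trials += 1
        n = rng.poisson(lam)
        pts = rng.random((n, 2))                  # centers, uniform given n
        if n < 2:
            return pts, trials
        d2 = np.sum((pts[:, None, :] - pts[None, :, :])**2, axis=-1)
        d2[np.diag_indices(n)] = np.inf           # ignore self-distances
        if d2.min() >= (2.0 * r)**2:              # no overlap: exact sample
            return pts, trials
```

The expected number of `trials` is the reciprocal of the non-overlap probability, whose large-deviations decay rate is precisely what the paper's analysis identifies.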
There are two types of tempered stable (TS) based Ornstein–Uhlenbeck (OU) processes: (i) the OU-TS process, the OU process driven by a TS subordinator, and (ii) the TS-OU process, the OU process with TS marginal law. They have various applications in financial engineering and econometrics. In the literature, only the second type under the stationary assumption has an exact simulation algorithm. In this paper we develop a unified approach to exactly simulate both types without the stationary assumption. It is mainly based on the distributional decomposition of stochastic processes with the aid of an acceptance–rejection scheme. As the inverse Gaussian distribution is an important special case of TS distribution, we also provide tailored algorithms for the corresponding OU processes. Numerical experiments and tests are reported to demonstrate the accuracy and effectiveness of our algorithms, and some further extensions are also discussed.