The embedding problem for Markov chains asks whether a stochastic matrix $\mathbf{P}$ can arise as the transition matrix from time 0 to time 1 of a continuous-time Markov chain. When the chain is homogeneous, this amounts to asking whether $\mathbf{P}=\exp(\mathbf{Q})$ for a rate matrix $\mathbf{Q}$ with zero row sums and non-negative off-diagonal elements, called a Markov generator. It is known that a Markov generator may not exist, and need not be unique when it does. This paper addresses finding $\mathbf{Q}$ under the assumption that the process makes at most one jump per unit time interval, and focuses on the problem of aligning the conditional one-jump transition matrix from time 0 to time 1 with $\mathbf{P}$. We derive a formula for this matrix in terms of $\mathbf{Q}$ and establish that for any $\mathbf{P}$ with non-zero diagonal entries, a unique such $\mathbf{Q}$, called the ${\unicode{x1D7D9}}$-generator, exists. We compare the ${\unicode{x1D7D9}}$-generator with the one-jump rate matrix of Jarrow, Lando, and Turnbull (1997), showing which is the better approximate Markov generator of $\mathbf{P}$ in some practical cases.
We consider the performance of Glauber dynamics for the random cluster model with real parameter $q > 1$ and temperature $\beta > 0$. Recent work by Helmuth, Jenssen, and Perkins detailed the ordered/disordered transition of the model on random $\Delta$-regular graphs for all sufficiently large $q$ and obtained an efficient sampling algorithm for all temperatures $\beta$ using cluster expansion methods. Despite this major progress, the performance of natural Markov chains, including Glauber dynamics, is not yet well understood on the random regular graph, partly because of the non-local nature of the model (especially at low temperatures) and partly because of severe bottleneck phenomena that emerge in a window around the ordered/disordered transition. Nevertheless, it is widely conjectured that the bottleneck phenomena that impede mixing from worst-case starting configurations can be avoided by initialising the chain more judiciously. Our main result establishes this conjecture for all sufficiently large $q$ (with respect to $\Delta$). Specifically, we consider the mixing time of Glauber dynamics initialised from the two extreme configurations, the all-in and the all-out, and obtain a pair of fast mixing bounds which cover all temperatures $\beta$, including in particular the bottleneck window. Our result is inspired by the recent approach of Gheissari and Sinclair for the Ising model, who obtained a similarly flavoured mixing-time bound on the random regular graph for sufficiently low temperatures. To cover all temperatures in the random cluster model, we appropriately refine the structural results of Helmuth, Jenssen and Perkins about the ordered/disordered transition and show spatial mixing properties ‘within the phase’, which are then related to the evolution of the chain.
We study the mixing time of the single-site update Markov chain, known as the Glauber dynamics, for generating a random independent set of a tree. Our focus is on obtaining optimal convergence results for arbitrary trees. We consider the more general problem of sampling from the Gibbs distribution in the hard-core model, where independent sets are weighted by a parameter $\lambda > 0$; the special case $\lambda = 1$ corresponds to the uniform distribution over all independent sets. Previous work of Martinelli, Sinclair and Weitz (2004) obtained optimal mixing time bounds for the complete $\Delta$-regular tree for all $\lambda$. However, Restrepo, Stefankovic, Vera, Vigoda, and Yang (2014) showed that for sufficiently large $\lambda$ there are bounded-degree trees where optimal mixing does not hold. Recent work of Eppstein and Frishberg (2022) proved a polynomial mixing time bound for the Glauber dynamics for arbitrary trees, and more generally for graphs of bounded tree-width.
We establish an optimal bound on the relaxation time (i.e., inverse spectral gap) of $O(n)$ for the Glauber dynamics for unweighted independent sets on arbitrary trees. We stress that our results hold for arbitrary trees, with no dependence on the maximum degree $\Delta$. Interestingly, our results extend (far) beyond the uniqueness threshold, which is of order $\lambda = O(1/\Delta)$. Our proof approach is inspired by recent work on spectral independence. In fact, we prove that spectral independence holds with a constant independent of the maximum degree for any tree, but this does not imply mixing for general trees, as the optimal mixing results of Chen, Liu, and Vigoda (2021) only apply to bounded-degree graphs. We instead utilize the combinatorial nature of independent sets to directly prove approximate tensorization of variance via a non-trivial inductive proof.
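For intuition, the relaxation time of this dynamics can be computed exactly on a toy instance by listing all independent sets and diagonalizing the transition matrix. The sketch below uses the 3-vertex path with $\lambda = 1$ as our own illustrative choice (it is not an example from the paper):

```python
import numpy as np
from itertools import product

def glauber_matrix(edges, n, lam):
    """Transition matrix of single-site (heat-bath) Glauber dynamics for the
    hard-core model on a small graph with vertices 0..n-1: pick a uniform
    vertex, then resample its occupancy conditionally on its neighbours."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    states = [s for s in product((0, 1), repeat=n)
              if all(not (s[u] and s[v]) for u, v in edges)]
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for v in range(n):
            blocked = any(s[w] for w in adj[v])
            p_occ = 0.0 if blocked else lam / (1.0 + lam)
            for bit, pr in ((1, p_occ), (0, 1.0 - p_occ)):
                if pr == 0.0:
                    continue
                t = list(s)
                t[v] = bit
                P[index[s], index[tuple(t)]] += pr / n
    return P, states

# path on 3 vertices, lambda = 1: uniform over the 5 independent sets
P, states = glauber_matrix([(0, 1), (1, 2)], 3, 1.0)
eigs = np.sort(np.linalg.eigvals(P).real)[::-1]
relaxation_time = 1.0 / (1.0 - eigs[1])  # inverse spectral gap
```

Since each heat-bath update is a projection, the chain is positive semidefinite, so the relaxation time here is at least 1 and is governed by the second-largest eigenvalue.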
We consider linear-fractional branching processes (one-type and two-type) with immigration in varying environments. For $n\ge0$, let $Z_n$ count the number of individuals of the $n$th generation, which excludes the immigrant who enters the system at time $n$. We call $n$ a regeneration time if $Z_n=0$. For both the one-type and two-type cases, we give criteria for the finiteness or infiniteness of the number of regeneration times. We then construct some concrete examples to exhibit the strange phenomena caused by the varying environments. For example, it may happen that the process is extinct but there are only finitely many regeneration times. We also study the asymptotics of the number of regeneration times in these examples.
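Regeneration times are easy to observe in simulation. The sketch below is our own illustration with a homogeneous geometric offspring law (a linear-fractional special case) and one immigrant per generation, rather than the paper's varying-environment setting:

```python
import random

def geom(p, rng):
    """Geometric offspring on {0, 1, 2, ...}: failures before the first success."""
    k = 0
    while rng.random() >= p:
        k += 1
    return k

def regeneration_times(horizon, p, rng):
    """Track Z_n (generation sizes, excluding the immigrant entering at time n)
    and return the regeneration times n <= horizon at which Z_n = 0."""
    z = 0
    regen = []
    for n in range(1, horizon + 1):
        parents = z + 1  # previous generation plus the immigrant of time n - 1
        z = sum(geom(p, rng) for _ in range(parents))
        if z == 0:
            regen.append(n)
    return regen
```

With $p = 0.6$ the mean offspring number is $2/3 < 1$, so the process is subcritical and regeneration times keep recurring.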
For a partially specified stochastic matrix, we consider the problem of completing it so as to minimize Kemeny’s constant. We prove that for any partially specified stochastic matrix for which the problem is well defined, there is a minimizing completion that is as sparse as possible. We also find the minimum value of Kemeny’s constant in two special cases: when the diagonal has been specified and when all specified entries lie in a common row.
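As background, Kemeny's constant of a fully specified irreducible stochastic matrix can be computed from its spectrum: with the convention $m_{jj}=0$ for mean first passage times, $K = \sum_{i\ge 2} 1/(1-\lambda_i)$ over the non-unit eigenvalues. A small illustrative sketch (this is not the paper's completion procedure):

```python
import numpy as np

def kemeny_constant(P):
    """Kemeny's constant of an irreducible stochastic matrix P, computed as
    the sum of 1/(1 - lambda) over the eigenvalues other than 1."""
    w = np.linalg.eigvals(P)
    drop = np.argmin(np.abs(w - 1.0))  # discard the eigenvalue 1
    rest = np.delete(w, drop)
    return float(np.real(np.sum(1.0 / (1.0 - rest))))
```

For the two-state chain with off-diagonal entries $a$ and $b$, the eigenvalues are $1$ and $1-a-b$, so this formula gives $K = 1/(a+b)$.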
We review criteria for comparing the efficiency of Markov chain Monte Carlo (MCMC) methods with respect to the asymptotic variance of estimates of expectations of functions of state, and show how such criteria can justify ways of combining improvements to MCMC methods. We say that a chain on a finite state space with transition matrix $P$ efficiency-dominates one with transition matrix $Q$ if for every function of state it has lower (or equal) asymptotic variance. We give elementary proofs of some previous results regarding efficiency dominance, leading to a self-contained demonstration that a reversible chain with transition matrix $P$ efficiency-dominates a reversible chain with transition matrix $Q$ if and only if none of the eigenvalues of $Q-P$ are negative. This allows us to conclude that modifying a reversible MCMC method to improve its efficiency will also improve the efficiency of a method that randomly chooses either this or some other reversible method, and that improving the efficiency of a reversible update for one component of state (as in Gibbs sampling) will improve the overall efficiency of a reversible method that combines this and other updates. It also explains how antithetic MCMC can be more efficient than independent and identically distributed sampling. Finally, we establish conditions that can guarantee that a method is not efficiency-dominated by any other method.
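The eigenvalue criterion is straightforward to check numerically for two reversible chains on the same state space. Below is a minimal sketch with a hypothetical pair of chains, both reversible with respect to the uniform distribution on two states (our own example, chosen so that $Q-P$ is symmetric):

```python
import numpy as np

def efficiency_dominates(P, Q, tol=1e-10):
    """For reversible chains P and Q with the same stationary distribution,
    P efficiency-dominates Q iff no eigenvalue of Q - P is negative
    (the eigenvalues of Q - P are real in this setting)."""
    return bool(np.all(np.linalg.eigvals(Q - P).real >= -tol))

# two chains reversible w.r.t. the uniform distribution on two states
P = np.array([[0.5, 0.5], [0.5, 0.5]])  # mixes in one step
Q = np.array([[0.9, 0.1], [0.1, 0.9]])  # a lazier chain
```

Here $Q - P$ has eigenvalues $0$ and $0.8$, so $P$ efficiency-dominates $Q$, while the reverse comparison fails.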
We consider a Markov control model with Borel state space, compact metric action space, and transitions assumed to have a density function with respect to some probability measure satisfying certain continuity conditions. We study the optimization problem of maximizing the probability of visiting some subset of the state space infinitely often, and we show that there exists an optimal stationary Markov policy for this problem. We endow the set of stationary Markov policies and the family of strategic probability measures with suitable topologies (namely, the narrow topology for Young measures and the $ws^\infty$-topology, respectively) to obtain compactness and continuity properties, which allow us to obtain our main results.
Eaton (1992) considered a general parametric statistical model paired with an improper prior distribution for the parameter and proved that if a certain Markov chain, constructed using the model and the prior, is recurrent, then the improper prior is strongly admissible, which (roughly speaking) means that the generalized Bayes estimators derived from the corresponding posterior distribution are admissible. Hobert and Robert (1999) proved that Eaton’s Markov chain is recurrent if and only if its so-called conjugate Markov chain is recurrent. The focus of this paper is a family of Markov chains that contains all of the conjugate chains that arise in the context of a Poisson model paired with an arbitrary improper prior for the mean parameter. Sufficient conditions for recurrence and transience are developed and these are used to establish new results concerning the strong admissibility of non-conjugate improper priors for the Poisson mean.
In this paper, we introduce a slight variation of the dominated-coupling-from-the-past (DCFTP) algorithm of Kendall, for bounded Markov chains. It is based on the control of a (typically non-monotonic) stochastic recursion by another (typically monotonic) one. We show that this algorithm is particularly suitable for stochastic matching models with bounded patience, a class of models for which the steady-state distribution of the system is in general unknown in closed form. We first show that the Markov chain of this model can easily be controlled by an infinite-server queue. We then investigate the particular case where patience times are deterministic, and this control argument may fail. In that case we resort to an ad-hoc technique that can also be seen as a control (this time, by the arrival sequence). We then compare this algorithm to the primitive coupling-from-the-past (CFTP) algorithm and to control by an infinite-server queue, and show how our perfect simulation results can be used to estimate and compare, for instance, the loss probabilities of various systems in equilibrium.
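For readers unfamiliar with the underlying technique, the classical monotone CFTP scheme of Propp and Wilson (which DCFTP refines with a dominating process) can be sketched in a few lines. The lazy reflected random walk below is a hypothetical toy target with uniform stationary distribution, not the matching model of the paper:

```python
import random

def update(x, u, n):
    """Monotone update on {0, ..., n}: up w.p. 1/3, down w.p. 1/3, else hold.
    If x <= y then update(x, u, n) <= update(y, u, n) for the same u."""
    if u < 1 / 3:
        return min(x + 1, n)
    if u < 2 / 3:
        return max(x - 1, 0)
    return x

def cftp(n, rng):
    """Monotone coupling from the past: run the top and bottom chains from
    ever-earlier start times with shared randomness until they coalesce at
    time 0; the common value is an exact stationary sample."""
    T = 1
    us = []
    while True:
        # extend the shared randomness backwards in time
        us = [rng.random() for _ in range(T - len(us))] + us
        lo, hi = 0, n
        for u in us:
            lo, hi = update(lo, u, n), update(hi, u, n)
        if lo == hi:
            return lo
        T *= 2
```

Because the update is monotone, coalescence of the two extreme chains (started from $0$ and $n$, much like the all-in/all-out controls in the paper's setting) sandwiches every other trajectory.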
The purpose of this study is to present a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain when taking its infinite-level limit, where the upper boundary level goes to infinity. This study is carried out using the fundamental deviation matrix, which is a block-decomposition-friendly solution to the Poisson equation of the deviation matrix. The fundamental deviation matrix provides a difference formula for the respective stationary distributions of the finite-level chain and the corresponding infinite-level chain. The difference formula plays a crucial role in the derivation of the main result of this paper, and the main result is used, for example, to derive an asymptotic formula for the loss probability in the MAP/GI/1/N queue.
Inaccuracy and information measures based on cumulative residual entropy are quite useful and have received considerable attention in many fields, such as statistics, probability, and reliability theory. In particular, many authors have studied cumulative residual inaccuracy between coherent systems based on system lifetimes. In a previous paper (Bueno and Balakrishnan, Prob. Eng. Inf. Sci. 36, 2022), we discussed a cumulative residual inaccuracy measure for coherent systems at the component level, that is, based on the common, stochastically dependent component lifetimes observed under a non-homogeneous Poisson process. In this paper, using a point process martingale approach, we extend this concept to a cumulative residual inaccuracy measure between non-explosive point processes and then specialize the results to Markov occurrence times. If the processes satisfy the proportional risk hazard process property, then the measure determines the Markov chain uniquely. Several examples are presented, including birth-and-death processes and pure birth processes, and the results are then applied to coherent systems at the component level subject to Markov failure and repair processes.
For an $n$-element subset $U$ of $\mathbb{Z}^2$, select $x$ from $U$ according to harmonic measure from infinity, remove $x$ from $U$, and start a random walk from $x$. If the walk first enters the rest of $U$ by stepping from $y$, add $y$ to the set. Iterating this procedure constitutes the process we call harmonic activation and transport (HAT).
HAT exhibits a phenomenon we refer to as collapse: informally, the diameter shrinks to its logarithm over a number of steps which is comparable to this logarithm. Collapse implies the existence of a stationary distribution for HAT, where configurations are viewed up to translation, and the exponential tightness of the diameter at stationarity. Additionally, collapse produces a renewal structure with which we establish that the center-of-mass process, properly rescaled, converges in distribution to two-dimensional Brownian motion.
To characterize the phenomenon of collapse, we address fundamental questions about the extremal behavior of harmonic measure and escape probabilities. Among $n$-element subsets of $\mathbb{Z}^2$, what is the least positive value of harmonic measure? What is the probability of escape from the set to a distance of, say, $d$? Concerning the former, examples abound for which the harmonic measure is exponentially small in $n$. We prove that it can be no smaller than exponential in $n \log n$. Regarding the latter, the escape probability is at most the reciprocal of $\log d$, up to a constant factor. We prove it is always at least this much, up to an $n$-dependent factor.
We study the $R_\beta$-positivity and the existence of zero-temperature limits for a sequence of infinite-volume Gibbs measures $(\mu_{\beta}(\!\cdot\!))_{\beta \geq 0}$ at inverse temperature $\beta$ associated to a family of nearest-neighbor matrices $(Q_{\beta})_{\beta \geq 0}$ reflected at the origin. We use a probabilistic approach based on the continued fraction theory previously introduced in Ferrari and Martínez (1993) and sharpened in Littin and Martínez (2010). Some necessary and sufficient conditions are provided to ensure (i) the existence of a unique infinite-volume Gibbs measure for large but finite values of $\beta$, and (ii) the existence of weak limits as $\beta \to \infty$. Several application examples are revisited to put the main results of this work in context.
Motivated by applications to COVID dynamics, we describe a model of a branching process in a random environment $\{Z_n\}$ whose characteristics change when crossing upper and lower thresholds. This introduces a cyclical path behavior involving periods of increase and decrease leading to supercritical and subcritical regimes. Even though the process is not Markov, we identify subsequences at random time points $\{(\tau_j, \nu_j)\}$—specifically the values of the process at crossing times, viz. $\{(Z_{\tau_j}, Z_{\nu_j})\}$—along which the process retains the Markov structure. Under mild moment and regularity conditions, we establish that the subsequences possess a regenerative structure and prove that the limiting normal distributions of the growth rates of the process in the supercritical and subcritical regimes decouple. Using this, we establish limit theorems concerning the length of the supercritical and subcritical regimes and the proportion of time the process spends in these regimes. As a byproduct of our analysis, we explicitly identify the limiting variances in terms of functionals of the offspring distribution, threshold distribution, and environmental sequences.
In this paper, we consider the friendship paradox in the context of random walks and paths. Among our results, we give an equality connecting long-range degree correlation, degree variability, and the degree-wise effect of additional steps for a random walk on a graph. Random paths are also considered, as well as applications to acquaintance sampling in the context of core-periphery structure.
We show that the total variation mixing time is not quasi-isometry invariant, even for Cayley graphs. Namely, we construct a sequence of pairs of Cayley graphs with maps between them that twist the metric in a bounded way, while the ratio of the two mixing times goes to infinity. The Cayley graphs serving as an example have unbounded degrees. For non-transitive graphs, we construct bounded degree graphs for which the mixing time from the worst starting point for one graph is asymptotically smaller than the mixing time from the best starting point of the random walk on a network obtained by increasing some of the edge weights from 1 to $1+o(1)$.
Suppose that a system is affected by a sequence of random shocks that occur over certain time periods. In this paper we study the discrete censored $\delta$-shock model, $\delta \ge 1$, for which the system fails whenever no shock occurs within a $\delta$-length time period from the last shock, by supposing that the interarrival times between consecutive shocks are described by a first-order Markov chain (as well as under the binomial shock process, i.e., when the interarrival times between successive shocks have a geometric distribution). Using the Markov chain embedding technique introduced by Chadjiconstantinidis et al. (Adv. Appl. Prob. 32, 2000), we study the joint and marginal distributions of the system’s lifetime, the number of shocks, and the number of periods in which no shocks occur, up to the failure of the system. The joint and marginal probability generating functions of these random variables are obtained, and several recursions and exact formulae are given for the evaluation of their probability mass functions and moments. It is shown that the system’s lifetime follows a Markov geometric distribution of order $\delta$ (a geometric distribution of order $\delta$ under the binomial setup) and also that it follows a matrix-geometric distribution. Some reliability properties are also given under the binomial shock process, by showing that a shift of the system’s lifetime random variable follows a compound geometric distribution. Finally, we introduce a new mixed discrete censored $\delta$-shock model, for which the system fails when no shock occurs within a $\delta$-length time period from the last shock, or the magnitude of the shock is larger than a given critical threshold $\gamma > 0$. Similarly, for this mixed model, we study the joint and marginal distributions of the system’s lifetime, the number of shocks, and the number of periods in which no shocks occur, up to the failure of the system, under the binomial shock process.
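The binomial-shock special case is easy to simulate directly, which is handy for sanity-checking formulae. The sketch below is our own toy reading of the model: time 0 is treated as a shock epoch, interarrival times are i.i.d. geometric on $\{1,2,\dots\}$ with success probability $p$, and the system fails $\delta$ periods after the last shock once an interarrival exceeds $\delta$:

```python
import random

def lifetime(delta, p, rng):
    """Simulate one lifetime of the discrete censored delta-shock model under
    the binomial shock process: shocks arrive with geometric interarrival
    times; failure occurs delta periods after the last shock when the next
    interarrival exceeds delta (time 0 counts as a shock epoch)."""
    t = 0  # time of the last shock
    while True:
        gap = 1
        while rng.random() >= p:  # geometric interarrival on {1, 2, ...}
            gap += 1
        if gap > delta:
            return t + delta      # no shock within delta periods: failure
        t += gap                  # shock arrives; the clock restarts
```

Every simulated lifetime is at least $\delta$, consistent with the lifetime following a (shifted) geometric distribution of order $\delta$ in this setup.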
Random walks on graphs are an essential primitive for many randomised algorithms and stochastic processes. It is natural to ask how much can be gained by running $k$ random walks independently and in parallel. Although the cover time of multiple walks has been investigated for many natural networks, the problem of finding a general characterisation of multiple cover times for worst-case start vertices (posed by Alon, Avin, Koucký, Kozma, Lotker and Tuttle in 2008) remains open. First, we improve and tighten various bounds on the stationary cover time when $k$ random walks start from vertices sampled from the stationary distribution. For example, we prove an unconditional lower bound of $\Omega((n/k) \log n)$ on the stationary cover time, holding for any $n$-vertex graph $G$ and any $1 \leq k = o(n\log n)$. Secondly, we establish the stationary cover times of multiple walks on several fundamental networks up to constant factors. Thirdly, we present a framework characterising worst-case cover times in terms of stationary cover times and a novel, relaxed notion of mixing time for multiple walks called the partial mixing time. Roughly speaking, the partial mixing time only requires a specific portion of all random walks to be mixed. Using these new concepts, we establish (or recover) the worst-case cover times for many networks, including expanders, preferential attachment graphs, grids, binary trees and hypercubes.
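The stationary cover time is easy to explore experimentally: run $k$ independent lazy walks from independent stationary (here uniform) starts and count steps until every vertex has been visited. The cycle below is our own toy choice of network:

```python
import random

def cover_time_multi(n, k, rng):
    """Steps until k independent lazy random walks on the n-cycle, started
    from independent uniform (stationary) vertices, have jointly visited
    every vertex."""
    pos = [rng.randrange(n) for _ in range(k)]
    visited = set(pos)
    steps = 0
    while len(visited) < n:
        steps += 1
        for i in range(k):
            r = rng.random()
            if r < 1 / 4:
                pos[i] = (pos[i] + 1) % n   # step clockwise
            elif r < 1 / 2:
                pos[i] = (pos[i] - 1) % n   # step anticlockwise
            # else: lazy hold
            visited.add(pos[i])
    return steps
```

On small cycles, averaging over repeated runs shows the expected speed-up: four walks cover the cycle markedly faster than one.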
Under the assumption that sequences of graphs equipped with resistances, associated measures, walks and local times converge in a suitable Gromov-Hausdorff topology, we establish asymptotic bounds on the distribution of the $\varepsilon$-blanket times of the random walks in the sequence. The precise nature of these bounds ensures convergence of the $\varepsilon$-blanket times of the random walks if the $\varepsilon$-blanket time of the limiting diffusion is continuous at $\varepsilon$ with probability 1. This result enables us to prove annealed convergence in various examples of critical random graphs, including critical Galton-Watson trees and the Erdős-Rényi random graph in the critical window. We highlight that proving continuity of the $\varepsilon$-blanket time of the limiting diffusion relies on the scale invariance of a finite measure that gives rise to realizations of the limiting compact random metric space, and therefore we expect our results to hold for other examples of random graphs with a similar scale invariance property.
Let $G$ be a finite group, let $H, K$ be subgroups of $G$, and let $H \backslash G / K$ denote the double coset space. If $Q$ is a probability on $G$ which is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$), then the random walk driven by $Q$ on $G$ projects to a Markov chain on $H \backslash G / K$. This allows analysis of the lumped chain using the representation theory of $G$. Examples include coagulation-fragmentation processes and natural Markov chains on contingency tables. Our main example projects the random transvections walk on $GL_n(q)$ onto a Markov chain on $S_n$ via the Bruhat decomposition. The chain on $S_n$ has a Mallows stationary distribution and interesting mixing-time behavior. The projection illuminates the combinatorics of Gaussian elimination. Along the way, we give a representation of the sum of transvections in the Hecke algebra of double cosets, which describes the Markov chain as a mixture of Metropolis chains. Some extensions and examples of double coset Markov chains with $G$ a compact group are discussed.