Providing a graduate-level introduction to discrete probability and its applications, this book develops a toolkit of essential techniques for analysing stochastic processes on graphs, other random discrete structures, and algorithms. Topics covered include the first and second moment methods, concentration inequalities, coupling and stochastic domination, martingales and potential theory, spectral methods, and branching processes. Each chapter expands on a fundamental technique, outlining common uses and showing them in action on simple examples and more substantial classical results. The focus is predominantly on non-asymptotic methods and results. All chapters provide a detailed background review section, plus exercises and signposts to the wider literature. Readers are assumed to have undergraduate-level linear algebra and basic real analysis, while prior exposure to graduate-level probability is recommended. This much-needed broad overview of discrete probability could serve as a textbook or as a reference for researchers in mathematics, statistics, data science, computer science and engineering.
Motivated by applications to COVID dynamics, we describe a model of a branching process in a random environment $\{Z_n\}$ whose characteristics change when crossing upper and lower thresholds. This introduces a cyclical path behavior involving periods of increase and decrease leading to supercritical and subcritical regimes. Even though the process is not Markov, we identify subsequences at random time points $\{(\tau_j, \nu_j)\}$, namely the values of the process at crossing times $\{(Z_{\tau_j}, Z_{\nu_j})\}$, along which the process retains the Markov structure. Under mild moment and regularity conditions, we establish that these subsequences possess a regenerative structure and prove that the limiting normal distributions of the growth rates of the process in the supercritical and subcritical regimes decouple. We also establish limit theorems concerning the lengths of the supercritical and subcritical regimes and the proportion of time the process spends in each regime. As a byproduct of our analysis, we explicitly identify the limiting variances in terms of functionals of the offspring distribution, the threshold distribution, and the environmental sequences.
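A minimal simulation sketch of the threshold-switching dynamics described above, assuming Poisson offspring and arbitrary illustrative values for the offspring means and thresholds (none of these choices are taken from the paper):

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative assumptions: Poisson offspring, fixed switching thresholds.
    m_sup, m_sub = 1.3, 0.8    # supercritical / subcritical offspring means
    upper, lower = 500, 50     # switching thresholds
    z, regime = 10, "sup"      # initial population, initial regime
    path, regimes = [z], [regime]

    for _ in range(200):
        if z == 0:
            break
        mean = m_sup if regime == "sup" else m_sub
        z = rng.poisson(mean * z)        # sum of z i.i.d. Poisson(mean) offspring counts
        if regime == "sup" and z >= upper:
            regime = "sub"               # crossed the upper threshold
        elif regime == "sub" and z <= lower:
            regime = "sup"               # crossed the lower threshold
        path.append(z)
        regimes.append(regime)

    print("final size:", path[-1])
    print("fraction of time supercritical:", sum(r == "sup" for r in regimes) / len(regimes))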
We consider a stochastic SIR (susceptible $\rightarrow$ infective $\rightarrow$ removed) model in which the infectious periods are modulated by a collection of independent and identically distributed Feller processes. Each infected individual is associated with one of these processes, the trajectories of which determine the duration of his infectious period, his contamination rate, and his type of removal (e.g. death or immunization). We use a martingale approach to derive the distribution of the final epidemic size and severity for this model and provide some general examples. Next, we focus on a single infected individual facing a given number of susceptibles, and we determine the distribution of his outcome (number of contaminations, severity, type of removal). Using a discrete-time formulation of the model, we show that this distribution also provides us with an alternative, more stable method to compute the final epidemic outcome distribution.
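As a rough illustration, the sketch below estimates the final-size distribution for the simplest special case, a Markovian SIR model with exponentially distributed infectious periods; the rates are arbitrary illustrative values, and the martingale machinery of the paper is not used here:

    import numpy as np

    rng = np.random.default_rng(1)

    def final_size_distribution(s0=10, i0=1, beta=2.0, gamma=1.0, runs=20_000):
        """Monte Carlo final-size distribution of a Markovian SIR model
        (exponential infectious periods; beta and gamma are illustrative)."""
        n = s0 + i0
        counts = np.zeros(s0 + 1)
        for _ in range(runs):
            s, i = s0, i0
            while i > 0 and s > 0:
                infection = beta * s * i / n
                removal = gamma * i
                if rng.random() < infection / (infection + removal):
                    s, i = s - 1, i + 1      # a contamination
                else:
                    i -= 1                   # a removal
            counts[s0 - s] += 1              # final size = susceptibles ever infected
        return counts / runs

    print(final_size_distribution())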
The theory is generalized here to include marked point processes (MPPs) on the real line with a general mark space. We define and interpret MPP differentials and integrals. The compensator and intensity of an MPP are discussed carefully. We present the relevant predictability concept for MPP integrands, and the connection between MPP integrals and martingales is discussed in detail.
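In the simplest unmarked special case, a homogeneous Poisson process $N$ with rate $\lambda$ has intensity $\lambda$ and compensator $\Lambda(t) = \lambda t$, so $N(t) - \lambda t$ is a martingale; the sketch below checks that its mean at a fixed time is near zero (rate and horizon are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(2)

    lam, t, runs = 3.0, 5.0, 100_000
    n_t = rng.poisson(lam * t, size=runs)   # N(t) for each replication
    m_t = n_t - lam * t                     # M(t) = N(t) - compensator
    print("mean of M(t):", m_t.mean())      # should be close to 0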
The quantitative analysis of probabilistic programs answers queries involving the expected values of program variables and expressions involving them, as well as bounds on the probabilities of assertions. In this chapter, we will present the use of concentration of measure inequalities to reason about such bounds. First, we will briefly present and motivate standard concentration of measure inequalities. Next, we survey approaches to reason about quantitative properties using concentration of measure inequalities, illustrating these on numerous motivating examples. Finally, we discuss currently open challenges in this area for future work.
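For instance, one of the standard inequalities typically surveyed in this setting is Hoeffding's bound; the sketch below applies it to a toy probabilistic program that sums fair coin flips and compares the bound with a Monte Carlo estimate (the program and parameters are illustrative, not taken from the chapter):

    import numpy as np

    rng = np.random.default_rng(3)

    # Toy probabilistic program: x = sum of n fair coin flips in {0, 1},
    # i.e. x has a Binomial(n, 1/2) distribution.
    n, threshold, runs = 100, 65, 100_000
    x = rng.binomial(n, 0.5, size=runs)

    # Hoeffding: P(x - n/2 >= t) <= exp(-2 t^2 / n) for increments bounded in [0, 1].
    t = threshold - n / 2
    bound = np.exp(-2 * t**2 / n)
    empirical = (x >= threshold).mean()

    print(f"Hoeffding bound : {bound:.5f}")
    print(f"Monte Carlo est.: {empirical:.5f}")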
For a spectrally negative self-similar Markov process on $[0,\infty)$ with an a.s. finite overall supremum, we provide, in tractable detail, a kind of conditional Wiener–Hopf factorization at the maximum of the absorption time at zero, the conditioning being on the overall supremum and the jump at the overall supremum. In a companion result the Laplace transform of this absorption time (on the event that the process does not go above a given level) is identified under no other assumptions (such as the process admitting a recurrent extension and/or hitting zero continuously), generalizing some existing results in the literature.
Stochastic clearing theory has widespread applications in the context of supply chain and service operations management. Historical application domains include bulk service queues, inventory control, and transportation planning (e.g., vehicle dispatching and shipment consolidation). In this paper, motivated by a fundamental application in shipment consolidation, we revisit the notion of service performance for stochastic clearing system operation. More specifically, our goal is to evaluate and compare the service performance of alternative operational policies for clearing decisions, as quantified by a measure of timely service referred to as average order delay ($AOD$). All stochastic clearing systems are subject to service delay due to the inherent clearing practice, and $AOD$ can be thought of as a benchmark for evaluating timely service. Although stochastic clearing theory has a long history, the existing literature on the analysis of $AOD$ as a service measure has several limitations. Hence, we extend the previous analysis by proposing a more general method for an analytical derivation of $AOD$ under any renewal-type clearing policy, including but not limited to the alternative shipment consolidation policies in the previous literature. Our proposed method utilizes a new martingale point of view and lends itself to a generic analytical characterization of $AOD$, leading to a complete comparative analysis of alternative renewal-type clearing policies. We thereby also close gaps in the literature on shipment consolidation via a complete set of analytically provable results regarding $AOD$ that were previously only illustrated through numerical tests.
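As a concrete illustration of $AOD$ under one renewal-type policy, the sketch below simulates a quantity-based consolidation policy (dispatch as soon as $q$ orders have accumulated) with Poisson order arrivals; the closed-form value $(q-1)/(2\lambda)$ used for comparison is the standard result for this particular policy, and all parameter values are illustrative:

    import numpy as np

    rng = np.random.default_rng(4)

    def average_order_delay(q=5, lam=1.0, cycles=100_000):
        """AOD under a quantity-based clearing policy: orders arrive as a
        Poisson process with rate lam and are dispatched once q accumulate."""
        total_delay = 0.0
        for _ in range(cycles):
            arrivals = np.cumsum(rng.exponential(1 / lam, size=q))
            total_delay += np.sum(arrivals[-1] - arrivals)   # each order waits until the clearing epoch
        return total_delay / (q * cycles)

    print("simulated AOD :", average_order_delay())
    print("closed form   :", (5 - 1) / (2 * 1.0))            # (q - 1) / (2 * lam) for this policy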
We discuss a continuous-time Markov branching model in which each individual can trigger an alarm according to a Poisson process. The model is stopped when a given number of alarms is triggered or when there are no more individuals present. Our goal is to determine the distribution of the state of the population at this stopping time. In addition, the state distribution at any fixed time is also obtained. The model is then modified to take into account the possible influence of death cases. All distributions are derived using probability-generating functions, and the approach followed is based on the construction of families of martingales.
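A minimal sketch of one special case (binary splitting, with arbitrary illustrative rates), obtained by simulation rather than via the probability-generating-function and martingale approach of the paper:

    import numpy as np

    rng = np.random.default_rng(5)

    def population_at_kth_alarm(z0=5, birth=1.0, death=0.8, alarm=0.3, k=10):
        """Binary-splitting Markov branching process in which every individual
        triggers alarms at rate `alarm`; returns the population size when the
        k-th alarm sounds, or 0 if the population dies out first."""
        z, alarms = z0, 0
        total = birth + death + alarm
        while z > 0 and alarms < k:
            u = rng.random()                 # embedded jump chain of the process
            if u < birth / total:
                z += 1                       # an individual splits in two
            elif u < (birth + death) / total:
                z -= 1                       # an individual dies
            else:
                alarms += 1                  # an alarm is triggered
        return z

    samples = [population_at_kth_alarm() for _ in range(20_000)]
    print("mean population at the stopping time:", np.mean(samples))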
In this introductory survey, we provide an overview of the major developments of algorithmic randomness with an eye towards the historical development of the discipline. First we give a brief introduction to computability theory and the underlying mathematical concepts that later appear in the survey. Next we selectively cover four broad periods in which the primary developments in algorithmic randomness occurred: (1) the mid-1960s to mid-1970s, in which the main definitions of algorithmic randomness were laid out and the basic properties of random sequences were established; (2) the 1980s through the 1990s, which featured intermittent and important work from a handful of researchers; (3) the 2000s, during which there was an explosion of results as the discipline matured into a fully-fledged subbranch of computability theory; and (4) the early 2010s, in which ties between algorithmic randomness and other subfields of mathematics were discovered. The aim of this survey is to provide a point of entry for newcomers to the field and a useful reference for practitioners.
This paper generalizes the Kunita–Watanabe decomposition of an $L^2$ space. The generalization comes from using nonlinear stochastic integrals where the integrator is a family of continuous martingales bounded in $L^2$. This result is also the solution of an optimization problem in $L^2$. First, the martingales are assumed to be stochastic integrals. Then, to obtain the general result, it is shown that the regularity of the family of martingales with respect to its spatial parameter is inherited by the integrands in the integral representation of the martingales. Finally, an example is given showing how the results of this paper, together with the Clark–Ocone formula, can be applied to polynomial functions of Brownian integrals.
We consider a stochastic evolutionary model for a phenotype developing amongst $n$ related species with unknown phylogeny. The unknown tree is modelled by a Yule process conditioned on $n$ contemporary nodes. The trait value is assumed to evolve along lineages as an Ornstein-Uhlenbeck process. As a result, the trait values of the $n$ species form a sample with dependent observations. We establish three limit theorems for the sample mean corresponding to three domains for the adaptation rate. In the case of fast adaptation, we show that for large $n$ the normalized sample mean is approximately normally distributed. Using these limit theorems, we develop novel confidence interval formulae for the optimal trait value.
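A simulation sketch of the model (Yule tree with unit speciation rate, exact Ornstein-Uhlenbeck transitions along lineages); the parameter values are illustrative, and the sketch only examines the empirical distribution of the sample mean:

    import numpy as np

    rng = np.random.default_rng(6)

    def yule_ou_sample_mean(n=200, alpha=2.0, theta=0.0, sigma=1.0, x0=1.0):
        """Sample mean of the trait values at the n tips of a Yule tree
        (speciation rate 1), the trait evolving along lineages as an
        Ornstein-Uhlenbeck process; all parameter values are illustrative."""
        traits = np.array([x0])
        while len(traits) < n:
            k = len(traits)
            dt = rng.exponential(1.0 / k)            # time to the next speciation
            decay = np.exp(-alpha * dt)              # exact OU transition over dt
            sd = sigma * np.sqrt((1 - decay**2) / (2 * alpha))
            traits = theta + (traits - theta) * decay + sd * rng.normal(size=k)
            traits = np.append(traits, traits[rng.integers(k)])   # one lineage splits
        return traits.mean()

    means = [yule_ou_sample_mean() for _ in range(1000)]
    print("mean of sample means:", np.mean(means), " sd:", np.std(means))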
Counting processes and their compensators are introduced at a heuristic level. The martingale property of stochastic integrals with respect to a compensated counting process leads to moment estimates and asymptotic normal distributions for statistics arising in multiple-state, non-parametric, and semi-parametric models. The place of survival models in actuarial education is discussed.
A general theory is developed for the projection of martingale-related processes onto smaller filtrations to which they are not even adapted. Martingales, supermartingales, and semimartingales retain their nature, but the case of local martingales is more delicate, as illustrated by an explicit case study for the inverse Bessel process. This has implications for the concept of No Free Lunch with Vanishing Risk in finance.
The trading strategy of ‘buy-and-hold for a superior stock and sell-at-once for an inferior stock’, as suggested by conventional wisdom, has long been prevalent on Wall Street. In this paper, two rationales are provided to support this trading strategy from a purely mathematical standpoint. Adopting the standard binomial tree model (the CRR model for short, first introduced in Cox, Ross and Rubinstein (1979)) to model the stock price dynamics, we look for the optimal stock selling rule(s) so as to maximize (i) the chance that an investor sells a stock precisely at its ultimate highest price over a fixed investment horizon [0,T]; and (ii) the expected ratio of the selling price of a stock to its ultimate highest price over [0,T]. We show that both problems have exactly the same optimal solution, which can literally be interpreted as ‘buy-and-hold or sell-at-once’ depending on the value of p (the probability that the stock price goes up at each step): when p > ½, selling the stock at the last time step N is the optimal selling strategy; when p = ½, a selling time is optimal if the stock is sold either at the last time step or at the time step when the stock price reaches its running maximum price; and when p < ½, time 0, i.e. selling the stock at once, is the unique optimal selling time.
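A Monte Carlo sketch of objective (i), comparing 'sell at the last step' with 'sell at once' for different values of p; it tracks the net number of up-moves (assuming u = 1/d, so the price is monotone in that count), and the number of steps and runs are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(7)

    def prob_sell_at_max(p, n_steps=50, runs=100_000):
        """P(selling price equals the ultimate maximum) under two simple rules in a
        CRR tree with up-probability p; we track the net number of up-moves, which
        determines the price when u = 1/d (an illustrative assumption)."""
        moves = np.where(rng.random((runs, n_steps)) < p, 1, -1)
        walk = np.hstack([np.zeros((runs, 1), dtype=int), np.cumsum(moves, axis=1)])
        highest = walk.max(axis=1)
        sell_at_end = (walk[:, -1] == highest).mean()   # hold until the last step N
        sell_at_once = (walk[:, 0] == highest).mean()   # sell immediately at time 0
        return sell_at_end, sell_at_once

    for p in (0.6, 0.5, 0.4):
        end, now = prob_sell_at_max(p)
        print(f"p = {p}: sell-at-end {end:.3f}, sell-at-once {now:.3f}")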
Motivated by the observation that the gain-loss criterion, while offering economically meaningful prices of contingent claims, is sensitive to the reference measure governing the underlying stock price process (a situation referred to as ambiguity of measure), we propose a gain-loss pricing model robust to shifts in the reference measure. Using a dual representation property of polyhedral risk measures, we obtain a one-step, gain-loss-criterion-based theorem of asset pricing under ambiguity of measure, and illustrate its use.
We study the Galois tower generated by iterates of a quadratic polynomial $f$ defined over an arbitrary field. One question of interest is to find the proportion $a_n$ of elements at level $n$ that fix at least one root; in the global field case these correspond to unramified primes in the base field that have a divisor at level $n$ of residue class degree one. We thus define a stochastic process associated to the tower that encodes root-fixing information at each level. We develop a uniqueness result for certain permutation groups, and use this to show that for many $f$ each level of the tower contains a certain central involution. It follows that the associated stochastic process is a martingale, and convergence theorems then allow us to establish a criterion for showing that $a_n$ tends to 0. As an application, we study the dynamics of the family $x^2 + c \in\overline{\mathbb{F}}_p[x]$, and this in turn is used to establish a basic property of the $p$-adic Mandelbrot set.
We consider a risk model with two independent classes of insurance risks. We assume that the two independent claim counting processes are, respectively, Poisson and Sparre Andersen processes with generalized Erlang(2) claim inter-arrival times. The Laplace transform of the non-ruin probability is derived from a system of integro-differential equations. Explicit results can be obtained when the initial reserve is zero and the claim severity distributions of both classes belong to the $K_n$ family of distributions. A relation between the ruin probability and the distribution of the supremum before ruin is identified. Finally, the Laplace transform of the non-ruin probability of a perturbed Sparre Andersen risk model with generalized Erlang(2) claim inter-arrival times is derived when the compound Poisson process converges weakly to a Wiener process.
The Kesten–Stigum theorem for the one-type Galton–Watson process gives necessary and sufficient conditions for mean convergence of the martingale formed by the population size normed by its expectation. Here, the approach to this theorem pioneered by Lyons, Pemantle and Peres (1995) is extended to certain kinds of martingales defined for Galton–Watson processes with a general type space. Many examples satisfy stochastic domination conditions on the offspring distributions, and suitable domination conditions combine nicely with general conditions for mean convergence to produce moment conditions, like the $X \log X$ condition of the Kesten–Stigum theorem. A general treatment of this phenomenon is given. The application of the approach to various branching processes is indicated. However, the main reason for developing the theory is to obtain martingale convergence results in a branching random walk that do not seem readily accessible with other techniques. These results, which are natural extensions of known results for martingales associated with binary branching Brownian motion, form the main application.
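A sketch of the classical one-type setting, simulating the normalized population $W_n = Z_n / m^n$ for Poisson offspring (which satisfies the $X \log X$ condition), so the empirical mean of $W_n$ should stay near 1; the offspring mean and horizon are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(8)

    # One-type Galton-Watson process with Poisson(m) offspring, m > 1.
    # Poisson offspring has E[X log X] finite, so by Kesten-Stigum
    # W_n = Z_n / m^n converges in mean and E[W_n] remains 1.
    m, generations, runs = 1.5, 20, 5_000
    w = np.empty(runs)
    for r in range(runs):
        z = 1
        for _ in range(generations):
            z = rng.poisson(m * z) if z > 0 else 0
        w[r] = z / m**generations

    print("mean of W_n (should be near 1):", w.mean())
    print("survival fraction             :", (w > 0).mean())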
A Markov chain model for a battle between two opposing forces is formulated, which is a stochastic version of one studied by F. W. Lanchester. Solutions of the backward equations for the final state yield martingales and stopping identities, but a more powerful technique is a time-reversal analogue of a known method for studying urn models. A general version of a remarkable result of Williams and McIlroy (1998) is proved.
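A simulation sketch of the standard stochastic square-law version of such a battle (whether this matches the paper's exact formulation is an assumption; rates and initial strengths are illustrative):

    import numpy as np

    rng = np.random.default_rng(9)

    def prob_a_wins(a0=20, b0=15, alpha=1.0, beta=1.0, runs=50_000):
        """Stochastic Lanchester square law: B suffers casualties at rate
        alpha * (size of A) and A at rate beta * (size of B); the battle
        ends when one side is eliminated."""
        wins = 0
        for _ in range(runs):
            a, b = a0, b0
            while a > 0 and b > 0:
                # embedded jump chain: the next casualty falls on B with this probability
                if rng.random() < alpha * a / (alpha * a + beta * b):
                    b -= 1
                else:
                    a -= 1
            wins += (b == 0)
        return wins / runs

    print("P(A wins):", prob_a_wins())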
Let $(X_n)$ be a sequence of independent, identically distributed random variables, with common distribution function $F$, possibly discontinuous. We use martingale arguments to connect the number of upper records from $(X_n)$ with sums of minima of related random variables. From this relationship we derive a general strong law for the number of records for a wide class of distributions $F$, including geometric and Poisson.
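For comparison, in the classical continuous case each new observation is a record with probability $1/k$, so the record count grows like $\log n$; the sketch below checks this for exponential data (the discrete distributions covered by the paper, such as geometric and Poisson, are not illustrated here):

    import numpy as np

    rng = np.random.default_rng(10)

    n = 100_000
    x = rng.exponential(size=n)                       # continuous F
    prev_max = np.concatenate(([-np.inf], np.maximum.accumulate(x)[:-1]))
    n_records = int(np.sum(x > prev_max))             # strict upper records
    print("records:", n_records, " log n:", round(float(np.log(n)), 2))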