This chapter introduces detailed mathematical modelling for diffusion-based molecular communication systems. Mathematical and physical aspects of diffusion are covered, such as the Wiener process, drift, first arrival time distributions, the effect of concentration, and Fick’s laws. Simulation of molecular communication systems is also discussed.
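As a rough illustration of the kind of model this chapter covers, the following sketch simulates molecules released at a transmitter that drift and diffuse toward an absorbing receiver, and estimates the first arrival time. All parameter values (distance, drift, diffusion coefficient) are illustrative, not taken from the chapter; with positive drift the first arrival time follows an inverse Gaussian law with mean d/v.

```python
import random
import math

def first_arrival_times(d=1.0, D=0.5, v=1.0, dt=1e-3, n=2000, seed=1):
    """Monte Carlo first-arrival times of molecules released at the origin,
    drifting with velocity v and diffusing with coefficient D toward an
    absorbing receiver at distance d (1-D advection-diffusion)."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)  # per-step diffusion noise
    times = []
    for _ in range(n):
        x, t = 0.0, 0.0
        while x < d:  # absorb at the receiver
            x += v * dt + sigma * rng.gauss(0.0, 1.0)
            t += dt
        times.append(t)
    return times

times = first_arrival_times()
mean_t = sum(times) / len(times)
# For these parameters the theoretical mean arrival time is d / v = 1.
```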
We present a closed-form solution to a discounted optimal stopping zero-sum game in a model based on a generalised geometric Brownian motion with coefficients depending on its running maximum and minimum processes. The optimal stopping times forming a Nash equilibrium are shown to be the first times at which the original process hits certain boundaries depending on the running values of the associated maximum and minimum processes. The proof is based on the reduction of the original game to the equivalent coupled free-boundary problem and the solution of the latter problem by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are partially determined as either unique solutions to the appropriate system of arithmetic equations or unique solutions to the appropriate first-order nonlinear ordinary differential equations. The results obtained are related to the valuation of the perpetual lookback game options with floating strikes in the appropriate diffusion-type extension of the Black–Merton–Scholes model.
We are interested in the law of the first passage time of an Ornstein–Uhlenbeck process to time-varying thresholds. We show that this problem is connected to the laws of the first passage time of the process to members of a two-parameter family of functional transformations of a time-varying boundary. For specific values of the parameters, these transformations appear in a realisation of a standard Ornstein–Uhlenbeck bridge. We provide three different proofs of this connection. The first is based on a similar result for Brownian motion, the second uses a generalisation of the so-called Gauss–Markov processes, and the third relies on the Lie group symmetry method. We investigate the properties of these transformations and study the algebraic and analytical properties of an involution operator which is used in constructing them. We also show that these transformations map the space of solutions of Sturm–Liouville equations into the space of solutions of the associated nonlinear ordinary differential equations. Lastly, we interpret our results through the method of images and give new examples of curves with explicit first passage time densities.
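The object of study above, first passage of an Ornstein–Uhlenbeck process to a time-varying threshold, has no elementary closed form in general, but can be approximated by simulation. A minimal Euler–Maruyama sketch, with an arbitrary linear boundary and illustrative parameters of my choosing:

```python
import random
import math

def ou_first_passage(theta=1.0, mu=0.0, sigma=1.0, x0=0.0,
                     boundary=lambda t: 1.0 + 0.1 * t,
                     dt=2e-3, t_max=10.0, n=500, seed=2):
    """Euler-Maruyama estimate of first-passage times of an OU process
    dX = theta*(mu - X) dt + sigma dW to a moving boundary b(t);
    paths that have not crossed by t_max are recorded as infinity."""
    rng = random.Random(seed)
    sdt = math.sqrt(dt)
    taus = []
    for _ in range(n):
        x, t = x0, 0.0
        while t < t_max and x < boundary(t):
            x += theta * (mu - x) * dt + sigma * sdt * rng.gauss(0.0, 1.0)
            t += dt
        taus.append(t if t < t_max else math.inf)
    return taus

taus = ou_first_passage()
hit_frac = sum(1 for t in taus if t < math.inf) / len(taus)
```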
In this paper, we consider a joint drift rate control and two-sided impulse control problem in which the system manager adjusts the drift rate as well as the instantaneous relocation for a Brownian motion, with the objective of minimizing the total average state-related cost and control cost. The system state can be negative. Assuming that instantaneous upward and downward relocations have different cost structures, each consisting of a setup cost and a variable cost, we prove that the optimal control policy takes an $\left\{ {\!\left( {{s^{\ast}},{q^{\ast}},{Q^{\ast}},{S^{\ast}}} \right),\!\left\{ {{\mu ^{\ast}}(x)\,:\,x \in [ {{s^{\ast}},{S^{\ast}}}]} \right\}} \right\}$ form. Specifically, the optimal impulse control policy is characterized by a quadruple $\left( {{s^{\ast}},{q^{\ast}},{Q^{\ast}},{S^{\ast}}} \right)$, under which the system state is immediately relocated upward to ${q^{\ast}}$ once it drops to ${s^{\ast}}$ and immediately relocated downward to ${Q^{\ast}}$ once it rises to ${S^{\ast}}$; the optimal drift rate control policy depends solely on the current system state and is characterized by a function ${\mu ^{\ast}}\!\left( \cdot \right)$ for the system state in $[ {{s^{\ast}},{S^{\ast}}}]$. By analyzing an associated free boundary problem consisting of an ordinary differential equation and several free boundary conditions, we obtain these optimal policy parameters and show the optimality of the proposed policy using a lower-bound approach. Finally, we numerically investigate the effect of the system parameters on the optimal policy parameters and on the system's optimal long-run average cost.
We consider De Finetti’s control problem for absolutely continuous strategies with control rates bounded by a concave function and prove that a generalized mean-reverting strategy is optimal in a Brownian model. In order to solve this problem, we need to deal with a nonlinear Ornstein–Uhlenbeck process. Despite the level of generality of the bound imposed on the rate, an explicit expression for the value function is obtained up to the evaluation of two functions. This optimal control problem has, as special cases, those solved in Jeanblanc-Picqué and Shiryaev (1995) and Renaud and Simard (2021) when the control rate is bounded by a constant and a linear function, respectively.
We establish higher moment formulae for Siegel transforms on the space of affine unimodular lattices as well as on certain congruence quotients of $\mathrm {SL}_d({\mathbb {R}})$. As applications, we prove functional central limit theorems for lattice point counting for affine and congruence lattices using the method of moments.
This chapter is dedicated to the elementary problem of interactions between a single particle and the surrounding fluid. First, we explore the drag force, the most common interaction, showing how this force is derived and applied in practice. This topic is then expanded by introducing the Basset and added-mass forces, both crucial for unsteady cases such as accelerating particles. Next, the lift forces (Magnus and Saffman) are presented, which may cause particle motion in the lateral direction. To some extent, this is associated with the next issue explained in the chapter: the torque acting on a particle. The following sections pay attention to other interactions: Brownian motion, rarefied gases, and the thermophoretic force. These interactions play a role for tiny particles, down to the nanoscale. Ultimately, we consider heat effects when the particle and fluid have different temperatures; this last section scrutinises convective and radiative heat transfer.
In the classical gambler’s ruin problem, the gambler plays an adversary with initial capitals z and $a-z$, respectively, where $a>0$ and $0< z < a$ are integers. At each round, the gambler wins or loses a dollar with probabilities p and $1-p$. The game continues until one of the two players is ruined. For even a and $0<z\leq {a}/{2}$, the family of distributions of the duration (total number of rounds) of the game indexed by $p \in [0,{\frac{1}{2}}]$ is shown to have monotone (increasing) likelihood ratio, while for ${a}/{2} \leq z<a$, the family of distributions of the duration indexed by $p \in [{\frac{1}{2}}, 1]$ has monotone (decreasing) likelihood ratio. In particular, for $z={a}/{2}$, in terms of the likelihood ratio order, the distribution of the duration is maximized over $p \in [0,1]$ by $p={\frac{1}{2}}$. The case of odd a is also considered in terms of the usual stochastic order. Furthermore, as a limit, the first exit time of Brownian motion is briefly discussed.
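The stochastic ordering of the duration described above is easy to observe empirically. A minimal simulation sketch (capitals a = 10, z = 5 chosen for illustration): for the fair game with z = a/2 the expected duration is z(a − z) = 25, and moving p away from 1/2 shortens the game.

```python
import random

def ruin_duration(a, z, p, rng):
    """Rounds until ruin in the classical gambler's ruin: the gambler
    starts with z, the adversary with a - z, win probability p per round."""
    t = 0
    while 0 < z < a:
        z += 1 if rng.random() < p else -1
        t += 1
    return t

rng = random.Random(3)
a, z = 10, 5
fair = [ruin_duration(a, z, 0.5, rng) for _ in range(20000)]
biased = [ruin_duration(a, z, 0.8, rng) for _ in range(20000)]
mean_fair = sum(fair) / len(fair)
mean_biased = sum(biased) / len(biased)
# For p = 1/2 and z = a/2 the expected duration is z * (a - z) = 25;
# the duration is stochastically largest at p = 1/2.
```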
In this chapter we present dynamical systems and their probabilistic description. We distinguish between system descriptions with discrete and continuous state-spaces as well as discrete and continuous time. We formulate examples of statistical models including Markov models, Markov jump processes, and stochastic differential equations. In doing so, we describe fundamental equations governing the evolution of the probability of dynamical systems. These equations include the master equation, Langevin equation, and Fokker–Planck equation. We also present sampling methods to simulate realizations of a stochastic dynamical process such as the Gillespie algorithm. We end with case studies relevant to chemistry and physics.
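A minimal sketch of the Gillespie algorithm mentioned above, applied to an illustrative birth–death process (X → X+1 at constant rate, X → X−1 at rate proportional to X; the rates are chosen for the example, not taken from the chapter):

```python
import random

def gillespie_birth_death(birth=1.0, death=0.1, x0=0, t_end=200.0, seed=4):
    """Gillespie (stochastic simulation) algorithm for a birth-death process:
    X -> X+1 at rate `birth`, X -> X-1 at rate `death * X`."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    path = [(t, x)]
    while t < t_end:
        total = birth + death * x          # total event rate
        t += rng.expovariate(total)        # exponential waiting time
        if rng.random() < birth / total:   # choose which reaction fires
            x += 1
        else:
            x -= 1
        path.append((t, x))
    return path

path = gillespie_birth_death()
# time-average of X after a burn-in period
burn, num, den = 50.0, 0.0, 0.0
for (t0, v0), (t1, _) in zip(path, path[1:]):
    if t0 >= burn:
        num += v0 * (t1 - t0)
        den += t1 - t0
avg_x = num / den
# The stationary distribution is Poisson(birth/death), mean 10.
```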
The Brownian bridge or Lévy–Ciesielski construction of Brownian paths almost surely converges uniformly to the true Brownian path. We focus on the uniform error. In particular, we show constructively that at level N, at which there are $d=2^N$ points evaluated on the Brownian path, the uniform error and its square, and the uniform error of geometric Brownian motion, have upper bounds of order $\mathcal {O}(\sqrt {\ln d/d})$, matching the known orders. We apply the results to an option pricing example.
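The construction studied above can be sketched directly: the midpoint (Brownian bridge) refinement fills in dyadic points level by level, each midpoint drawn as the average of its neighbours plus independent Gaussian noise with the bridge's conditional variance. The check on the quadratic variation below is a generic sanity check, not the error analysis of the paper.

```python
import random
import math

def brownian_levy_ciesielski(N, T=1.0, seed=5):
    """Levy-Ciesielski / midpoint construction of a Brownian path on
    [0, T], evaluated at d = 2**N dyadic points (plus the origin)."""
    rng = random.Random(seed)
    d = 2 ** N
    path = [0.0] * (d + 1)
    path[d] = math.sqrt(T) * rng.gauss(0.0, 1.0)   # level 0: the endpoint
    step = d
    while step > 1:
        half = step // 2
        for i in range(half, d, step):
            left, right = path[i - half], path[i + half]
            # bridge midpoint: conditional variance is (interval length)/4
            var = (half / d) * T / 2.0
            path[i] = 0.5 * (left + right) + math.sqrt(var) * rng.gauss(0.0, 1.0)
        step = half
    return path

path = brownian_levy_ciesielski(N=8)
qv = sum((b - a) ** 2 for a, b in zip(path, path[1:]))
# The quadratic variation of the finest increments should be close to T = 1.
```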
This chapter first overviews the types of coupling and particle concentration descriptors, and then considers aspects of one-way coupling, with the special case of Brownian motion. The remainder of the chapter considers two-way coupling, three-way coupling, and four-way coupling.
We prove existence and uniqueness for the inverse-first-passage time problem for soft-killed Brownian motion using rather elementary methods relying on basic results from probability theory only. We completely avoid the relation to a suitable partial differential equation via a suitable Feynman–Kac representation, which was previously one of the main tools.
Let ${\mathrm{d}} X(t) = -Y(t) \, {\mathrm{d}} t$, where Y(t) is a one-dimensional diffusion process, and let $\tau(x,y)$ be the first time the process (X(t), Y(t)), starting from (x, y), leaves a subset of the first quadrant. The problem of computing the probability $p(x,y)\,:\!=\, \mathbb{P}[X(\tau(x,y))=0]$ is considered. The Laplace transform of the function p(x, y) is obtained in important particular cases, and it is shown that the transform can at least be inverted numerically. Explicit expressions for the Laplace transform of $\mathbb{E}[\tau(x,y)]$ and of the moment-generating function of $\tau(x,y)$ can also be derived.
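The probability p(x, y) can be checked numerically by direct simulation. The sketch below makes illustrative choices not fixed by the abstract: Y is taken to be a standard Brownian motion and the region is the open first quadrant, so the process exits either through X = 0 or through Y = 0.

```python
import random
import math

def exit_through_x(x, y, dt=1e-3, n=2000, seed=6):
    """Monte Carlo estimate of p(x, y) = P[X(tau) = 0] for dX = -Y dt,
    taking (illustratively) Y a standard Brownian motion and tau the exit
    time of (X, Y) from the open first quadrant."""
    rng = random.Random(seed)
    sdt = math.sqrt(dt)
    hits = 0
    for _ in range(n):
        X, Y = x, y
        while X > 0.0 and Y > 0.0:
            X -= Y * dt                      # dX = -Y dt
            Y += sdt * rng.gauss(0.0, 1.0)   # Y is Brownian motion
        if X <= 0.0:
            hits += 1
    return hits / n

p_est = exit_through_x(0.5, 1.0)
```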
This chapter presents salient concepts needed to understand the vast literature on the dynamics of charged macromolecules. Starting from a description of hydrodynamic interactions, the dynamics of folded proteins, colloids, flexible polyelectrolytes, and DNA are described. For flexible macromolecules, the models of Rouse, Zimm, reptation, and the entropic barrier are developed in increasing order of complexity. Building on this groundwork, the phenomena of the ordinary–extraordinary transition, electrophoretic mobility, and the topologically frustrated dynamical state are explained.
We consider the random splitting and aggregating of Hawkes processes. We present the random splitting schemes using the direct approach for counting processes, as well as the immigration–birth branching representations of Hawkes processes. From the second scheme, it is shown that random split Hawkes processes are again Hawkes. We discuss functional central limit theorems (FCLTs) for the scaled split processes from the different schemes. On the other hand, aggregating multivariate Hawkes processes may not necessarily be Hawkes. We identify a necessary and sufficient condition for the aggregated process to be Hawkes. We prove an FCLT for a multivariate Hawkes process under a random splitting and then aggregating scheme (under certain conditions, transforming into a Hawkes process of a different dimension).
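The immigration–birth representation invoked above also underlies the standard way to simulate a Hawkes process. A univariate sketch via Ogata's thinning algorithm with an exponential kernel (all parameter values illustrative; the multivariate splitting/aggregation schemes of the paper are not reproduced here):

```python
import random
import math

def simulate_hawkes(mu=0.5, alpha=0.8, beta=2.0, t_end=400.0, seed=7):
    """Ogata thinning for a univariate Hawkes process with intensity
    mu + sum over past events t_i of alpha * exp(-beta * (t - t_i)).
    Stationarity requires the branching ratio alpha / beta < 1."""
    rng = random.Random(seed)
    events = []
    t, excitation = 0.0, 0.0  # excitation = self-exciting part of the intensity
    while True:
        lam_bar = mu + excitation       # upper bound: intensity only decays between events
        w = rng.expovariate(lam_bar)
        excitation *= math.exp(-beta * w)  # decay over the waiting time
        t += w
        if t > t_end:
            break
        if rng.random() * lam_bar < mu + excitation:  # accept with prob lambda(t)/lam_bar
            events.append(t)
            excitation += alpha
    return events

events = simulate_hawkes()
rate = len(events) / 400.0
# Stationary mean intensity is mu / (1 - alpha/beta) = 0.5 / 0.6.
```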
We study approximations for the Lévy area of Brownian motion which are based on the Fourier series expansion and a polynomial expansion of the associated Brownian bridge. Comparing the asymptotic convergence rates of the Lévy area approximations, we see that the approximation resulting from the polynomial expansion of the Brownian bridge is more accurate than the Kloeden–Platen–Wright approximation, whilst still only using independent normal random vectors. We then link the asymptotic convergence rates of these approximations to the limiting fluctuations for the corresponding series expansions of the Brownian bridge. Moreover, and of interest in its own right, the analysis we use to identify the fluctuation processes for the Karhunen–Loève and Fourier series expansions of the Brownian bridge is extended to give a stand-alone derivation of the values of the Riemann zeta function at even positive integers.
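The quantity being approximated above can be sanity-checked by brute force: the Lévy area of planar Brownian motion at time 1 has mean 0 and variance 1/4. The sketch below uses a plain Itô-sum discretisation (not the Fourier or polynomial expansions compared in the paper) with step and sample counts chosen for illustration.

```python
import random
import math

def levy_area_sample(m, rng):
    """One sample of the Levy area of planar Brownian motion on [0, 1],
    via an Ito-sum discretisation with m time steps."""
    dt = 1.0 / m
    sdt = math.sqrt(dt)
    w1 = w2 = 0.0
    area = 0.0
    for _ in range(m):
        d1, d2 = sdt * rng.gauss(0.0, 1.0), sdt * rng.gauss(0.0, 1.0)
        area += 0.5 * (w1 * d2 - w2 * d1)  # increment of (1/2)(W1 dW2 - W2 dW1)
        w1 += d1
        w2 += d2
    return area

rng = random.Random(8)
samples = [levy_area_sample(400, rng) for _ in range(2000)]
mean_a = sum(samples) / len(samples)
var_a = sum(a * a for a in samples) / len(samples)
# The exact Levy area at time 1 has mean 0 and variance 1/4.
```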
At the mesoscale, the fluctuating phenomena are described using the theory of stochastic processes. Depending on the random variables, different stochastic processes can be defined. The properties of stationarity, reversibility, and Markovianity are defined and discussed. The classes of discrete- and continuous-state Markov processes are presented including their master equation, their spectral theory, and their reversibility condition. For discrete-state Markov processes, the entropy production is deduced and the network theory is developed, allowing us to obtain the affinities on the basis of the Hill–Schnakenberg cycle decomposition. Continuous-state Markov processes are described by their master equation, as well as stochastic differential equations. The spectral theory is also considered in the weak-noise limit. Furthermore, Langevin stochastic processes are presented in particular for Brownian motion and their deduction is carried out from the underlying microscopic dynamics.
The Chow–Robbins game is a classical, still partly unsolved, stopping problem introduced by Chow and Robbins in 1965. You repeatedly toss a fair coin; after each toss, you decide whether to take the fraction of heads so far as your payoff or to continue. As a more general stopping problem this reads $V(n,x) = \sup_{\tau }\mathbb{E} \left [ \frac{x + S_\tau}{n+\tau}\right]$, where S is a random walk. We give a tight upper bound for V when S has sub-Gaussian increments, using the analogous continuous-time problem with a standard Brownian motion as the driving process. For the Chow–Robbins game we also give a tight lower bound, and use both bounds to compute, on the integers, the complete continuation and stopping sets of the problem for $n\leq 489\,241$.
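The value function of the Chow–Robbins game can be approximated by plain backward induction with a finite horizon, which yields a lower bound on V. This is only a naive truncation sketch, not the tight bounds constructed in the paper; the horizon 400 is an arbitrary illustrative choice.

```python
def chow_robbins_values(horizon):
    """Backward induction for V(n, x): value with x heads after n tosses,
    truncated at `horizon` by accepting the payoff x/n there. The
    truncation makes this a lower bound on the true value function."""
    V = {(horizon, x): x / horizon for x in range(horizon + 1)}
    for n in range(horizon - 1, 0, -1):
        for x in range(n + 1):
            # continue: one more fair toss adds a head with probability 1/2
            cont = 0.5 * (V[(n + 1, x + 1)] + V[(n + 1, x)])
            V[(n, x)] = max(x / n, cont)
    return V

V = chow_robbins_values(400)

def should_stop(n, x):
    # stop when the current payoff already attains the (approximate) value
    return x / n >= V[(n, x)]
```

For example, after one toss showing heads the payoff 1 cannot be improved, so you stop; after heads-then-tails the fraction 1/2 is worth less than continuing.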
We consider the near-critical Erdős–Rényi random graph G(n, p) and provide a new probabilistic proof of the fact that, when p is of the form $p=p(n)=1/n+\lambda/n^{4/3}$ and A is large, where $\mathcal{C}_{\max}$ is the largest connected component of the graph. Our result allows A and $\lambda$ to depend on n. While this result is already known, our proof relies only on conceptual and adaptable tools such as ballot theorems, whereas the existing proof relies on a combinatorial formula specific to Erdős–Rényi graphs, together with analytic estimates.
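The near-critical scaling can be observed directly by simulation: at $p = 1/n + \lambda/n^{4/3}$ the largest component lives on the $n^{2/3}$ scale. A minimal sketch using the principle of deferred decisions, so each potential edge is sampled at most once during the exploration (n and $\lambda$ chosen for illustration):

```python
import random

def largest_component(n, p, rng):
    """Size of the largest connected component of G(n, p), explored
    component by component; an edge (v, u) is sampled when v is processed
    and u is still unvisited, so each pair is examined at most once."""
    unvisited = set(range(n))
    best = 0
    while unvisited:
        stack = [unvisited.pop()]   # root of a new component
        size = 1
        while stack:
            stack.pop()             # process one vertex of the component
            nbrs = [u for u in unvisited if rng.random() < p]
            unvisited.difference_update(nbrs)
            stack.extend(nbrs)
            size += len(nbrs)
        best = max(best, size)
    return best

n, lam = 2000, 1.0
p = 1.0 / n + lam / n ** (4.0 / 3.0)
cmax = largest_component(n, p, random.Random(9))
scaled = cmax / n ** (2.0 / 3.0)  # |C_max| is of order n^{2/3} near criticality
```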