We solve the non-discounted, finite-horizon optimal stopping problem of a Gauss–Markov bridge by using a time-space transformation approach. The associated optimal stopping boundary is proved to be Lipschitz continuous on any closed interval that excludes the horizon, and it is characterized by the unique solution of an integral equation. A Picard iteration algorithm is discussed and implemented to exemplify the numerical computation and geometry of the optimal stopping boundary for some illustrative cases.
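The Picard scheme mentioned above is a fixed-point iteration on an integral equation. As a minimal sketch of how such an iteration behaves (run here on a toy Volterra equation with known solution $e^t$, not on the paper's boundary equation), one might write:

```python
import numpy as np

def picard_volterra(n_steps=200, n_iter=30):
    """Picard iteration for the toy Volterra equation
    x(t) = 1 + int_0^t x(s) ds on [0, 1], whose exact solution is
    x(t) = exp(t). Illustrative stand-in for the boundary integral
    equation in the paper, which is not reproduced here."""
    t = np.linspace(0.0, 1.0, n_steps + 1)
    dt = t[1] - t[0]
    x = np.ones_like(t)  # initial guess x_0(t) = 1
    for _ in range(n_iter):
        # trapezoidal cumulative integral of the current iterate
        integral = np.concatenate(([0.0], np.cumsum((x[1:] + x[:-1]) * dt / 2)))
        x = 1.0 + integral  # Picard update x_{k+1} = T[x_k]
    return t, x

t, x = picard_volterra()
print(abs(x[-1] - np.e))  # small: the iterates converge to exp(t) on the grid
```

The same contraction-mapping principle underlies the boundary computation: iterate the integral operator from an initial guess until successive iterates agree to tolerance.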
We present a closed-form solution to a discounted optimal stopping zero-sum game in a model based on a generalised geometric Brownian motion with coefficients depending on its running maximum and minimum processes. The optimal stopping times forming a Nash equilibrium are shown to be the first times at which the original process hits certain boundaries depending on the running values of the associated maximum and minimum processes. The proof is based on the reduction of the original game to the equivalent coupled free-boundary problem and the solution of the latter problem by means of the smooth-fit and normal-reflection conditions. We show that the optimal stopping boundaries are partially determined as either unique solutions to the appropriate system of arithmetic equations or unique solutions to the appropriate first-order nonlinear ordinary differential equations. The results obtained are related to the valuation of the perpetual lookback game options with floating strikes in the appropriate diffusion-type extension of the Black–Merton–Scholes model.
We study a signaling game between an employer and a potential employee, where the employee has private information regarding their production capacity. At the initial stage, the employee communicates a salary claim, after which the true production capacity is gradually revealed to the employer as the unknown drift of a Brownian motion representing the revenues generated by the employee. Subsequently, the employer has the possibility to choose a time to fire the employee in case the estimated production capacity falls short of the salary. In this setup, we use filtering and optimal stopping theory to derive an equilibrium in which the employee provides a randomized salary claim and the employer uses a threshold strategy in terms of the conditional probability for the high production capacity. The analysis is robust in the sense that various extensions of the basic model can be solved using the same methodology, including cases with positive firing costs, incomplete information about an individual’s own type, as well as an additional interview phase.
Random bridges have gained significant attention in recent years due to their potential applications in various areas, particularly in information-based asset pricing models. This paper explores the influence of the pinning point’s distribution on the memorylessness and stochastic dynamics of the bridge process. We introduce Lévy bridges with random length and random pinning points, and analyze their Markov property. Our study demonstrates that the Markov property of Lévy bridges depends on the nature of the distribution of their pinning points. By Lebesgue’s decomposition theorem, the law of any random variable can be decomposed into absolutely continuous, discrete, and singular continuous parts with respect to the Lebesgue measure. We show that the Markov property holds when the pinning points’ law has no absolutely continuous part. Conversely, the Lévy bridge fails to exhibit Markovian behavior when the pinning point’s law has an absolutely continuous part.
We investigate an optimal stopping problem for the expected value of a discounted payoff on a regime-switching geometric Brownian motion under two constraints on the possible stopping times: stopping is allowed only at exogenous random times, and only during a specific regime. The main objectives are to show that an optimal stopping time exists and is of threshold type, and to derive expressions for the value functions and the optimal threshold. To this end, we solve the corresponding variational inequality and show that its solution coincides with the value functions. Some numerical results are also presented. Furthermore, we investigate some asymptotic behaviors.
We consider the propagation of a stochastic SIR-type epidemic in two connected populations: a relatively small local population of interest which is surrounded by a much larger external population. External infectives can temporarily enter the small population and contribute to the spread of the infection inside this population. The rules for entry of infectives into the small population as well as their length of stay are modeled by a general Markov queueing system. Our main objective is to determine the distribution of the total number of infections within both populations. To do this, the approach we propose consists of deriving a family of martingales for the joint epidemic processes and applying classical stopping time or convergence theorems. The study then focuses on several particular cases where the external infection is described by a linear branching process and the entry of external infectives obeys certain specific rules. Some of the results obtained are illustrated by numerical examples.
In the classical gambler’s ruin problem, the gambler plays an adversary with initial capitals z and $a-z$, respectively, where $a>0$ and $0< z < a$ are integers. At each round, the gambler wins or loses a dollar with probabilities p and $1-p$. The game continues until one of the two players is ruined. For even a and $0<z\leq {a}/{2}$, the family of distributions of the duration (total number of rounds) of the game indexed by $p \in [0,{\frac{1}{2}}]$ is shown to have monotone (increasing) likelihood ratio, while for ${a}/{2} \leq z<a$, the family of distributions of the duration indexed by $p \in [{\frac{1}{2}}, 1]$ has monotone (decreasing) likelihood ratio. In particular, for $z={a}/{2}$, in terms of the likelihood ratio order, the distribution of the duration is maximized over $p \in [0,1]$ by $p={\frac{1}{2}}$. The case of odd a is also considered in terms of the usual stochastic order. Furthermore, as a limit, the first exit time of Brownian motion is briefly discussed.
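The likelihood-ratio monotonicity can be probed numerically. The sketch below (parameters $a=6$, $z=3$, $p_1=1/4$, $p_2=1/2$ chosen for illustration) computes the exact duration distribution by forward dynamic programming in rational arithmetic and checks that the ratios $\mathbb{P}_{p_2}(T=n)/\mathbb{P}_{p_1}(T=n)$ are increasing in $n$:

```python
from fractions import Fraction

def duration_pmf(a, z, p, n_max):
    """P(T = n), n = 1..n_max, for the gambler's ruin duration T,
    by forward dynamic programming over the gambler's fortune
    (absorbing barriers at 0 and a), in exact rational arithmetic."""
    p = Fraction(p)
    probs = {z: Fraction(1)}
    pmf = []
    for _ in range(n_max):
        new, absorbed = {}, Fraction(0)
        for s, q in probs.items():
            for s2, w in ((s + 1, p), (s - 1, 1 - p)):
                if s2 in (0, a):
                    absorbed += q * w
                else:
                    new[s2] = new.get(s2, Fraction(0)) + q * w
        pmf.append(absorbed)
        probs = new
    return pmf  # pmf[n-1] = P(T = n)

f = duration_pmf(6, 3, Fraction(1, 4), 60)  # p_1 = 1/4
g = duration_pmf(6, 3, Fraction(1, 2), 60)  # p_2 = 1/2
ratios = [g[n] / f[n] for n in range(60) if f[n] > 0]
print(all(r1 <= r2 for r1, r2 in zip(ratios, ratios[1:])))  # True: increasing MLR
```

Exact rationals avoid the floating-point noise that would otherwise blur the comparison of likelihood ratios in the far tail.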
Candidates arrive sequentially for an interview process which results in them being ranked relative to their predecessors. Based on the ranks available at each time, a decision mechanism must be developed that selects or dismisses the current candidate in an effort to maximize the chance of selecting the best. This classical version of the ‘secretary problem’ has been studied in depth, mostly using combinatorial approaches, along with numerous other variants. We consider a particular new version where, during reviewing, it is possible to query an external expert to improve the probability of making the correct decision. Unlike existing formulations, we consider experts that are not necessarily infallible and may provide suggestions that can be faulty. For the solution of our problem we adopt a probabilistic methodology and view the querying times as consecutive stopping times which we optimize with the help of optimal stopping theory. For each querying time we must also design a mechanism to decide whether or not we should terminate the search at the querying time. This decision is straightforward under the usual assumption of infallible experts, but when experts are faulty it has a far more intricate structure.
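For context, the classical no-expert benchmark admits a closed-form success probability for the cutoff rule that rejects the first $r-1$ candidates and then accepts the first candidate who beats all predecessors. The sketch below recovers the well-known optimum for $n=100$; it is the baseline that querying mechanisms improve upon, not the paper's own algorithm:

```python
def secretary_success(n, r):
    """Exact success probability of the classical cutoff rule:
    reject the first r-1 candidates, then accept the first one
    that is best so far (no expert queries, no faulty advice)."""
    if r == 1:
        return 1.0 / n  # accept the very first candidate outright
    return (r - 1) / n * sum(1.0 / (k - 1) for k in range(r, n + 1))

n = 100
best_r = max(range(1, n + 1), key=lambda r: secretary_success(n, r))
print(best_r, round(secretary_success(n, best_r), 5))  # cutoff near n/e, success near 1/e
```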
We extend the classical setting of an optimal stopping problem under full information to include problems with an unknown state. The framework allows the unknown state to influence (i) the drift of the underlying process, (ii) the payoff functions, and (iii) the distribution of the time horizon. Since the stopper is assumed to observe the underlying process and the random horizon, this is a two-source learning problem. Assigning a prior distribution for the unknown state, standard filtering theory can be employed to embed the problem in a Markovian framework with one additional state variable representing the posterior of the unknown state. We provide a convenient formulation of this Markovian problem, based on a measure change technique that decouples the underlying process from the new state variable. Moreover, we show by means of several novel examples that this reduced formulation can be used to solve problems explicitly.
We prove existence and uniqueness for the inverse-first-passage time problem for soft-killed Brownian motion using rather elementary methods relying on basic results from probability theory only. We completely avoid the relation to a suitable partial differential equation via a suitable Feynman–Kac representation, which was previously one of the main tools.
We define and study properties of implied volatility for American perpetual put options. In particular, we show that if the market prices are derived from a local volatility model with a monotone volatility function, then the corresponding implied volatility is also monotone as a function of the strike price.
Given a spectrally negative Lévy process, we predict, in an $L_1$ sense, the last passage time of the process below zero before an independent exponential time. This optimal prediction problem generalises [2], where the infinite-horizon problem is solved. Using an argument similar to that in [24], we show that this optimal prediction problem is equivalent to solving an optimal prediction problem in a finite-horizon setting. Surprisingly (unlike the infinite-horizon problem), an optimal stopping time is based on a curve that is killed at the moment the mean of the exponential time is reached. That is, an optimal stopping time is the first time the process crosses above a non-negative, continuous, and non-increasing curve depending on time. This curve and the value function are characterised as a solution of a system of nonlinear integral equations which can be understood as a generalisation of the free boundary equations (see e.g. [21, Chapter IV.14.1]) in the presence of jumps. As an example, we numerically calculate this curve in the Brownian motion case and for a compound Poisson process with exponential-sized jumps perturbed by a Brownian motion.
The existence of moments of first downward passage times of a spectrally negative Lévy process is governed by the general dynamics of the Lévy process, i.e. whether it is drifting to $+\infty$, $-\infty$, or oscillating. Whenever the Lévy process drifts to $+\infty$, we prove that the $\kappa$th moment of the first passage time (conditioned to be finite) exists if and only if the $(\kappa+1)$th moment of the Lévy jump measure exists. This generalizes a result shown earlier by Delbaen for Cramér–Lundberg risk processes. Whenever the Lévy process drifts to $-\infty$, we prove that all moments of the first passage time exist, while for an oscillating Lévy process we derive conditions for non-existence of the moments, and in particular we show that no integer moments exist.
In this paper we study a class of optimal stopping problems under g-expectation, that is, the cost function is described by the solution of backward stochastic differential equations (BSDEs). Primarily, we assume that the reward process is $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable with $\mu>\mu_0$ for some critical value $\mu_0$. This integrability is weaker than $L^p$-integrability for any $p>1$, so it covers a comparatively wide class of optimal stopping problems. To reach our goal, we introduce a class of reflected backward stochastic differential equations (RBSDEs) with $L\exp\bigl(\mu\sqrt{2\log\!(1+L)}\bigr)$-integrable parameters. We prove the existence, uniqueness, and comparison theorem for these RBSDEs under Lipschitz-type assumptions on the coefficients. This allows us to characterize the value function of our optimal stopping problem as the unique solution of such RBSDEs.
In this paper, we study the optimal multiple stopping problem under filtration-consistent nonlinear expectations. The reward is given by a set of random variables satisfying some appropriate assumptions, rather than a process that is right-continuous with left limits. We first construct the optimal stopping time for the single stopping problem, which is no longer given by the first hitting time of a process. We then prove by induction that the value function of the multiple stopping problem can be interpreted as the one for the single stopping problem associated with a new reward family, which allows us to construct the optimal multiple stopping times. If the reward family satisfies some strong regularity conditions, we show that the reward family and the value functions can be aggregated by some progressive processes. Hence, the optimal stopping times can be represented as hitting times.
We derive closed-form solutions to some discounted optimal stopping problems related to the perpetual American cancellable dividend-paying put and call option pricing problems in an extension of the Black–Merton–Scholes model. The cancellation times are assumed to occur when the underlying risky asset price process hits some unobservable random thresholds. The optimal stopping times are shown to be the first times at which the asset price reaches stochastic boundaries depending on the current values of its running maximum and minimum processes. The proof is based on the reduction of the original optimal stopping problems to the associated free-boundary problems and the solution of the latter problems by means of the smooth-fit and modified normal-reflection conditions. We show that the optimal stopping boundaries are characterised as the maximal and minimal solutions of certain first-order nonlinear ordinary differential equations.
We solve non-Markovian optimal switching problems in discrete time on an infinite horizon, when the decision-maker is risk-aware and the filtration is general, and establish existence and uniqueness of solutions for the associated reflected backward stochastic difference equations. An example application to hydropower planning is provided.
The Chow–Robbins game is a classical, still partly unsolved, stopping problem introduced by Chow and Robbins in 1965. You repeatedly toss a fair coin; after each toss you decide whether to take the current fraction of heads as your payoff or to continue. As a more general stopping problem this reads $V(n,x) = \sup_{\tau }\mathbb{E} \left [ \frac{x + S_\tau}{n+\tau}\right]$, where $S$ is a random walk. We give a tight upper bound for $V$ when $S$ has sub-Gaussian increments, using the analogous continuous-time problem driven by a standard Brownian motion. For the Chow–Robbins game we also give a tight lower bound, and use the two bounds to compute, on the integers, the complete continuation and stopping sets of the problem for $n\leq 489\,241$.
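A standard way to obtain a crude lower bound for such a game is backward induction on a truncated horizon: force stopping at toss $N$ and recurse. The sketch below (truncation at $N=1000$, chosen for illustration; this is not the authors' tight bound) approximates the value of the game before the first toss, whose true value is known numerically to be about 0.7929535:

```python
def chow_robbins_lower(N=1000):
    """Lower bound on the Chow–Robbins value by backward induction:
    force stopping at toss N (terminal payoff = fraction of heads),
    then recurse v(n,h) = max(h/n, 0.5*(v(n+1,h) + v(n+1,h+1))).
    Truncation makes this a lower bound for the untruncated game."""
    v = [h / N for h in range(N + 1)]  # values at the forced horizon n = N
    for n in range(N - 1, 0, -1):
        v = [max(h / n, 0.5 * (v[h] + v[h + 1])) for h in range(n + 1)]
    return 0.5 * (v[0] + v[1])  # expected value before the first toss

v0 = chow_robbins_lower()
print(v0)  # a lower bound: at most the true value, about 0.7929535
```

Increasing $N$ can only raise this bound, since any strategy feasible in the shorter game remains feasible in the longer one.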
For the gambler’s ruin problem with two players starting with the same amount of money, we show the playing time is stochastically maximized when the games are fair.
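One easy-to-check consequence (for expected durations, rather than the full stochastic order proved in the paper): with both players starting at $z=a/2$, the expected playing time is largest at $p=1/2$, where it equals $z(a-z)$. A sketch in exact rational arithmetic, with $a=10$ chosen for illustration:

```python
from fractions import Fraction

def expected_duration(a, p):
    """Expected gambler's-ruin duration started from z = a/2, via the
    recurrence D(z) = 1 + p*D(z+1) + (1-p)*D(z-1), D(0) = D(a) = 0.
    D is affine in the unknown D(1), so two forward sweeps determine
    it exactly in rational arithmetic."""
    p, q = Fraction(p), 1 - Fraction(p)

    def sweep(d1):
        D = [Fraction(0), d1]
        for z in range(1, a):
            D.append((D[z] - 1 - q * D[z - 1]) / p)
        return D

    Da0, Da1 = sweep(Fraction(0))[a], sweep(Fraction(1))[a]
    d1 = -Da0 / (Da1 - Da0)  # enforce the boundary condition D(a) = 0
    return sweep(d1)[a // 2]

a = 10
print(expected_duration(a, Fraction(1, 2)))  # z(a-z) = 25 at the fair game
print(float(expected_duration(a, Fraction(3, 10))))  # strictly smaller
```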