Random bridges have gained significant attention in recent years due to their potential applications in various areas, particularly in information-based asset pricing models. This paper explores the influence of the pinning point’s distribution on the memorylessness and stochastic dynamics of the bridge process. We introduce Lévy bridges with random length and random pinning points, and analyze their Markov property. Our study demonstrates that the Markov property of Lévy bridges depends on the nature of the distribution of their pinning points. By Lebesgue’s decomposition theorem, the law of any random variable can be decomposed into absolutely continuous, discrete, and singular continuous parts with respect to the Lebesgue measure. We show that the Markov property holds when the pinning point’s law has no absolutely continuous part. Conversely, the Lévy bridge fails to be Markovian when the pinning point’s law has an absolutely continuous part.
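For reference, the Lebesgue decomposition invoked above can be written out explicitly; the notation below is chosen here and is not taken from the paper. Writing $\mu$ for the law of the pinning point and $\lambda$ for Lebesgue measure,

$$ \mu = \mu_{\mathrm{ac}} + \mu_{\mathrm{d}} + \mu_{\mathrm{sc}}, \qquad \mu_{\mathrm{ac}} \ll \lambda, \quad \mu_{\mathrm{d}} \text{ purely atomic}, \quad \mu_{\mathrm{sc}} \perp \lambda \text{ and atomless}. $$

The paper’s dichotomy is then whether $\mu_{\mathrm{ac}} = 0$ (the bridge is Markov) or $\mu_{\mathrm{ac}} \neq 0$ (it is not).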
We discuss a variant, named ‘Rattle’, of the product replacement algorithm. Rattle is a Markov chain, that returns a random element of a black box group. The limiting distribution of the element returned is the uniform distribution. We prove that, if the generating sequence is long enough, the probability distribution of the element returned converges unexpectedly quickly to the uniform distribution.
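As a rough illustration of the kind of Markov chain involved, the Python sketch below performs classic product replacement steps on a generating tuple and folds a random entry into an accumulator that is returned as the "random" element. The accumulator rule here is a guess made for illustration only; the precise Rattle update is the one defined in the paper, and the group operations (permutations of four points) are placeholder choices.

import random

def prod_replacement_step(gens, mul, inv):
    """One product replacement move: replace a random slot i by
    gens[i] * gens[j]^(+/-1) for a random j != i."""
    i, j = random.sample(range(len(gens)), 2)
    h = gens[j] if random.random() < 0.5 else inv(gens[j])
    gens[i] = mul(gens[i], h)
    return gens

def rattle_like_walk(gens, identity, mul, inv, steps=200):
    """Hypothetical accumulator variant (sketch only): run product
    replacement on the tuple and fold a random entry into an
    accumulator, which is returned as the 'random' element."""
    gens = list(gens)
    acc = identity
    for _ in range(steps):
        gens = prod_replacement_step(gens, mul, inv)
        acc = mul(acc, random.choice(gens))
    return acc

# Demo with the symmetric group S_4, elements as tuples (images of 0..3).
def mul(p, q):        # composition: (p*q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

def inv(p):
    out = [0] * len(p)
    for x, px in enumerate(p):
        out[px] = x
    return tuple(out)

identity = (0, 1, 2, 3)
gens = [(1, 0, 2, 3), (1, 2, 3, 0)]   # a transposition and a 4-cycle generate S_4
print(rattle_like_walk(gens, identity, mul, inv))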
With the increasing prevalence of big data and sparse data, and rapidly growing data-centric approaches to scientific research, students must develop effective data analysis skills at an early stage of their academic careers. This detailed guide to data modeling in the sciences is ideal for students and researchers keen to develop their understanding of probabilistic data modeling beyond the basics of p-values and fitting residuals. The textbook begins with basic probabilistic concepts; models of dynamical systems and likelihoods are then presented to build the foundation for Bayesian inference, Monte Carlo samplers, and filtering. Modeling paradigms are then seamlessly developed, including mixture models, regression models, hidden Markov models, state-space models and Kalman filtering, continuous time processes, and uniformization. The text is self-contained and includes practical examples and numerous exercises. This would be an excellent resource for courses on data analysis within the natural sciences, or as a reference text for self-study.
Traditionally, depression phenotypes have been defined based on interindividual differences that distinguish between subgroups of individuals expressing distinct depressive symptoms, often using cross-sectional data. Alternatively, depression phenotypes can be defined based on intraindividual differences, differentiating between transitory states of distinct symptom profiles that a person transitions into or out of over time. Such within-person phenotypic states are less examined, despite their potential significance for understanding and treating depression.
Methods:
The current study used intensive longitudinal data of youths (N = 120) at risk for depression. Clinical interviews (at baseline, 4, 10, 16, and 22 months) yielded 90 weekly assessments. We applied a multilevel hidden Markov model to identify intraindividual phenotypes of weekly depressive symptoms for at-risk youth.
Results:
Three intraindividual phenotypes emerged: a low-depression state, an elevated-depression state, and a cognitive-physical-symptom state. Youth had a high probability of remaining in the same state over time. Furthermore, probabilities of transitioning from one state to another did not differ by age or ethnoracial minority status; however, girls were more likely than boys to transition from the low-depression state to either the elevated-depression state or the cognitive-physical-symptom state. Finally, these intraindividual phenotypes and their dynamics were associated with comorbid externalizing symptoms.
Conclusion:
Identifying these states as well as the transitions between them characterizes how symptoms of depression change over time and provides potential directions for intervention efforts.
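The Methods above specify a multilevel hidden Markov model; as a much simpler stand-in, the Python sketch below fits a single-level three-state Gaussian HMM with the hmmlearn package to pooled weekly scores. The data are synthetic placeholders, the pooling ignores the person-level (multilevel) structure of the actual analysis, and the state count is the only detail carried over from the abstract.

# Illustrative single-level HMM on weekly symptom scores (sketch only).
# The paper fits a multilevel HMM; hmmlearn has no multilevel variant,
# so this ignores person-level random effects and simply pools subjects
# by passing per-person sequence lengths.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
n_youth, n_weeks = 120, 90                      # dimensions taken from the abstract
X = rng.normal(size=(n_youth * n_weeks, 1))     # placeholder weekly symptom scores
lengths = [n_weeks] * n_youth                   # one sequence per participant

model = hmm.GaussianHMM(n_components=3,         # three latent symptom states
                        covariance_type="diag",
                        n_iter=200, random_state=0)
model.fit(X, lengths)

print("State means:", model.means_.ravel())
print("Transition matrix:\n", model.transmat_)  # diagonal ~ probability of staying in a state
states = model.predict(X, lengths)              # most likely state sequence per week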
Bisimulation is a concept that captures behavioural equivalence of states in a variety of types of transition systems. It has been widely studied in a discrete-time setting. The core of this work is to generalise the discrete-time picture to continuous time by providing a notion of behavioural equivalence for continuous-time Markov processes. In Chen et al. [(2019). Electronic Notes in Theoretical Computer Science 347, 45–63.], we proposed two equivalent definitions of bisimulation for continuous-time stochastic processes where the evolution is a flow through time: the first one as an equivalence relation and the second one as a cospan of morphisms. In Chen et al. [(2020). Electronic Notes in Theoretical Computer Science.], we developed the theory further: we introduced different concepts that correspond to different behavioural equivalences and compared them to bisimulation. In particular, we studied the relation between bisimulation and symmetry groups of the dynamics. We also provided a game interpretation for two of the behavioural equivalences. The present work unifies the cited conference presentations and gives detailed proofs.
Matryoshka dolls, the traditional Russian nesting figurines, are known worldwide for each doll’s encapsulation of a sequence of smaller dolls. In this paper, we exploit the structure of a new sequence of nested matrices we call matryoshkan matrices in order to compute the moments of one-dimensional polynomial processes, a large class of Markov processes. We characterize the salient properties of matryoshkan matrices that allow us to compute these moments in closed form at a specific time without computing the entire path of the process. This simplifies the computation of the polynomial process moments significantly. Through our method, we derive explicit expressions for both transient and steady-state moments of this class of Markov processes. We demonstrate the applicability of this method through explicit examples such as shot noise processes, growth–collapse processes, ephemerally self-exciting processes, and affine stochastic differential equations from the finance literature. We also show that we can derive explicit expressions for the self-exciting Hawkes process, for which finding closed-form moment expressions has been an open problem since its introduction in 1971. In general, our techniques can be used for any Markov process for which the infinitesimal generator of an arbitrary polynomial is itself a polynomial of equal or lower order.
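The abstract’s closing remark is the key mechanism: when the infinitesimal generator maps a polynomial to a polynomial of equal or lower order, the monomial moments satisfy a lower-triangular linear ODE system that a matrix exponential solves in closed form. The Python sketch below illustrates this principle on an Ornstein-Uhlenbeck process (an affine SDE); it is a minimal special case, not the paper’s matryoshkan-matrix construction, and the parameter values are arbitrary.

# Closed-form moments of an OU process dX = -theta*X dt + sigma dW
# via the generator acting on monomials (a minimal illustration of the
# polynomial-process idea; the paper's matryoshkan-matrix machinery is
# more general and is not reproduced here).
import numpy as np
from scipy.linalg import expm

theta, sigma, x0, N = 1.5, 0.8, 2.0, 4      # example parameters (arbitrary)

# Row n of G encodes d/dt E[X^n] = -theta*n*E[X^n] + 0.5*sigma^2*n*(n-1)*E[X^{n-2}]
G = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    G[n, n] = -theta * n
    if n >= 2:
        G[n, n - 2] = 0.5 * sigma**2 * n * (n - 1)

m0 = np.array([x0**n for n in range(N + 1)], dtype=float)  # deterministic start
t = 1.0
moments = expm(t * G) @ m0
print("E[X_t^n], n=0..4:", moments)

# Sanity check against the known OU mean: E[X_t] = x0 * exp(-theta * t)
print("closed-form mean:", x0 * np.exp(-theta * t))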
The signature of a path can be described as its full non-commutative exponential. Following T. Lyons, we regard its expectation, the expected signature, as a path space analogue of the classical moment generating function. The logarithm thereof, taken in the tensor algebra, defines the signature cumulant. We establish a universal functional relation in a general semimartingale context. Our work exhibits the importance of Magnus expansions in the algorithmic problem of computing expected signature cumulants and further offers a far-reaching generalization of recent results on characteristic exponents dubbed diamond and cumulant expansions with motivations ranging from financial mathematics to statistical physics. From an affine semimartingale perspective, the functional relation may be interpreted as a type of generalized Riccati equation.
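In symbols (notation chosen here rather than taken from the paper): writing $\mathrm{Sig}(X)_{0,T}$ for the signature of $X$ in the tensor algebra $T((\mathbb{R}^d))$, the expected signature is $\mu_{0,T} = \mathbb{E}\big[\mathrm{Sig}(X)_{0,T}\big]$ and the signature cumulant is its tensor-algebra logarithm,

$$ \kappa_{0,T} = \log \mu_{0,T} = \sum_{k \ge 1} \frac{(-1)^{k-1}}{k}\,\big(\mu_{0,T} - 1\big)^{\otimes k}, $$

in direct analogy with the scalar relation between moment generating functions and cumulant generating functions.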
In pioneering work in the 1950s, S. Karlin and J. McGregor showed that probabilistic aspects of certain Markov processes can be studied by analyzing orthogonal eigenfunctions of associated operators. In the decades since, many authors have extended and deepened this surprising connection between orthogonal polynomials and stochastic processes. This book gives a comprehensive analysis of the spectral representation of the most important one-dimensional Markov processes, namely discrete-time birth-death chains, birth-death processes and diffusion processes. It brings together the main results from the extensive literature on the topic with detailed examples and applications. Also featuring an introduction to the basic theory of orthogonal polynomials and a selection of exercises at the end of each chapter, it is suitable for graduate students with a solid background in stochastic processes as well as researchers in orthogonal polynomials and special functions who want to learn about applications of their work to probability.
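For orientation, the Karlin–McGregor spectral representation alluded to takes, for a discrete-time birth-death chain, the classical form (notation chosen here, not taken from the book)

$$ P^n(i,j) = \pi_j \int_{-1}^{1} x^n\, Q_i(x)\, Q_j(x)\, \mathrm{d}\psi(x), $$

where $\psi$ is the spectral measure, $(Q_j)$ are the orthogonal polynomials associated with the chain, and $\pi_j$ are the potential coefficients; analogous integral representations hold for birth-death processes and diffusions.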
Whether enjoying the lucid prose of a favourite author or slogging through some other writer’s cumbersome, heavy-set prattle (full of parentheses, em dashes, compound adjectives, and Oxford commas), readers will notice stylistic signatures not only in word choice and grammar but also in punctuation itself. Indeed, visual sequences of punctuation from different authors produce marvellously different (and visually striking) sequences. Punctuation is a largely overlooked stylistic feature in stylometry, the quantitative analysis of written text. In this paper, we examine punctuation sequences in a corpus of literary documents and ask the following questions: Are the properties of such sequences a distinctive feature of different authors? Is it possible to distinguish literary genres based on their punctuation sequences? Do the punctuation styles of authors evolve over time? Are we on to something interesting in trying to do stylometry without words, or are we full of sound and fury (signifying nothing)?
In our investigation, we examine a large corpus of documents from Project Gutenberg (a digital library with many possible editorial influences). We extract punctuation sequences from each document in our corpus and record the number of words that separate punctuation marks. Using such information about punctuation-usage patterns, we attempt both author and genre recognition, and we also examine the evolution of punctuation usage over time. Our efforts at author recognition are particularly successful. Among the features that we consider, the one that seems to carry the most explanatory power is an empirical approximation of the joint probability of the successive occurrence of two punctuation marks. In our conclusions, we suggest several directions for future work, including the application of similar analyses for investigating translations and other types of categorical time series.
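As a concrete illustration of the two features just described, the Python sketch below records, for a given text, the number of words between successive punctuation marks and an empirical joint probability over successive pairs of marks. The tokenization, punctuation inventory, and example sentence are assumptions made here, not the paper’s preprocessing.

# Sketch of the punctuation features described above: inter-punctuation
# word counts and an empirical joint probability of successive marks.
# Corpus handling, tokenization details, and the mark set are assumptions.
import re
from collections import Counter
from itertools import pairwise

MARKS = ".,;:!?—()\"'"                      # assumed punctuation inventory
TOKEN = re.compile(r"[" + re.escape(MARKS) + r"]|\w+")

def punctuation_features(text):
    tokens = TOKEN.findall(text)
    marks, gaps, words_since = [], [], 0
    for tok in tokens:
        if tok in MARKS:
            marks.append(tok)
            gaps.append(words_since)       # words since the previous mark
            words_since = 0
        else:
            words_since += 1
    bigrams = Counter(pairwise(marks))     # successive-mark pairs
    total = sum(bigrams.values()) or 1
    joint = {pair: c / total for pair, c in bigrams.items()}
    return gaps, joint

gaps, joint = punctuation_features(
    "Call me Ishmael. Some years ago (never mind how long) I went to sea; it was cold, very cold.")
print(gaps)
print(joint)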
This paper investigates the random horizon optimal stopping problem for measure-valued piecewise deterministic Markov processes (PDMPs). This is motivated by population dynamics applications, when one wants to monitor some characteristics of the individuals in a small population. The population and its individual characteristics can be represented by a point measure. We first define a PDMP on a space of locally finite measures. Then we define a sequence of random horizon optimal stopping problems for such processes. We prove that the value function of the problems can be obtained by iterating some dynamic programming operator. Finally we prove via a simple counter-example that controlling the whole population is not equivalent to controlling a random lineage.
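The dynamic programming iteration is easiest to see in a toy finite-state setting. The Python sketch below iterates the operator $V \mapsto \max(g, \beta P V)$ for an optimal stopping problem with a geometric random horizon on a three-state chain; the measure-valued PDMP framework of the paper is far more general, and all inputs here are made up for illustration.

# Toy illustration of iterating a dynamic-programming operator for an
# optimal stopping problem on a finite-state discrete-time chain with a
# geometric (random) horizon, NOT the measure-valued PDMP setting of the
# paper. P, g, and beta are made-up inputs.
import numpy as np

P = np.array([[0.7, 0.3, 0.0],      # transition matrix of the toy chain
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])
g = np.array([1.0, 0.0, 2.5])       # reward collected upon stopping
beta = 0.9                          # survival probability of the geometric horizon

V = np.zeros(3)
for _ in range(500):                # iterate T V = max(g, beta * P V)
    V_new = np.maximum(g, beta * P @ V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print("value function:", V)
print("stop immediately in state i?", V <= g + 1e-12)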
In 1973, Williams [D. Williams, On Rényi's ‘record’ problem and Engel's series, Bull. London Math. Soc. 5 (1973), 235–237] introduced two interesting discrete Markov processes, namely C-processes and A-processes, which are related to record times in statistics and Engel's series in number theory, respectively. Moreover, he showed that these two processes share the same classical limit theorems, such as the law of large numbers, central limit theorem and law of the iterated logarithm. In this paper, we consider the large deviations for these two Markov processes, which indicate that there is a difference between C-processes and A-processes in the context of large deviations.
We study the distribution of 2-Selmer ranks in the family of quadratic twists of an elliptic curve $E$ over an arbitrary number field $K$. Under the assumption that ${\rm Gal}(K(E[2])/K) \cong S_3$, we show that the density (counted in a nonstandard way) of twists with Selmer rank $r$ exists for all positive integers $r$, and is given via an equilibrium distribution, depending only on a single parameter (the ‘disparity’), of a certain Markov process that is itself independent of $E$ and $K$. More generally, our results also apply to $p$-Selmer ranks of twists of two-dimensional self-dual ${\bf F}_p$-representations of the absolute Galois group of $K$ by characters of order $p$.
This paper deals with robust guaranteed cost control for a class of linear uncertain descriptor systems with state delays and jumping parameters. The transition of the jumping parameters in the systems is governed by a finite-state Markov process. Based on stability theory for stochastic differential equations, a sufficient condition on the existence of robust guaranteed cost controllers is derived. In terms of the LMI (linear matrix inequality) approach, a linear state feedback controller is designed to stochastically stabilise the given system with a cost function constraint. A convex optimisation problem with LMI constraints is formulated to design the suboptimal guaranteed cost controller. A numerical example demonstrates the effectiveness of the proposed design approach.
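To give a flavour of the LMI machinery (and nothing more), the Python sketch below uses CVXPY to check feasibility of the basic Lyapunov inequality $A^{\mathsf T}P + PA \prec 0$, $P \succ 0$ for a made-up system matrix. The guaranteed-cost conditions for delayed Markov-jump descriptor systems derived in the paper involve considerably richer LMIs that are not reproduced here.

# A minimal LMI feasibility check (Lyapunov inequality A^T P + P A < 0)
# using CVXPY; this only illustrates the LMI machinery, not the paper's
# guaranteed-cost controller design for delayed Markov-jump descriptor
# systems.
import numpy as np
import cvxpy as cp

A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])          # made-up stable system matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),
               A.T @ P + P @ A << -eps * np.eye(n)]
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()

print("status:", prob.status)
print("P =\n", P.value)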
Expected suprema of a function f observed along the paths of a nice Markov process define an excessive function, and in fact a potential if f vanishes at the boundary. Conversely, we show under mild regularity conditions that any potential admits a representation in terms of expected suprema. Moreover, we identify the maximal and the minimal representing function in terms of probabilistic potential theory. Our results are motivated by the work of El Karoui and Meziou (2006) on the max-plus decomposition of supermartingales, and they provide a singular analogue to the non-linear Riesz representation in El Karoui and Föllmer (2005).
We develop a discrete-time approximation technique for the time-cost trade-off problem in PERT networks. It is assumed that the activity durations are independent random variables with generalized Erlang distributions, in which the mean duration of each activity is a non-increasing function of the amount of resource allocated to it. It is also assumed that the amount of resource allocated to each activity is controllable. We then construct an optimal control problem with three conflicting objective functions. Since this optimal control problem cannot be solved to optimality, a discrete-time approximation technique is applied to the original multi-objective optimal control problem, using the goal attainment method. To show the advantages of the proposed technique, we also develop a Simulated Annealing (SA) algorithm for the problem and compare the discrete-time approximation results against those of the SA and a genetic algorithm.
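For readers unfamiliar with the metaheuristic used for comparison, the Python sketch below shows a generic simulated-annealing skeleton. The encoding of resource allocations, the neighbourhood structure, and the objective used in the paper's SA are problem-specific and not reproduced; everything here, including the toy one-dimensional objective, is illustrative.

# Generic simulated-annealing skeleton (illustration only; the paper's SA
# uses a problem-specific encoding of resource allocations in the PERT
# network, which is not reproduced here).
import math
import random

def simulated_annealing(objective, neighbour, x0, T0=1.0, cooling=0.995, steps=5000):
    x, fx = x0, objective(x0)
    best, fbest, T = x, fx, T0
    for _ in range(steps):
        y = neighbour(x)
        fy = objective(y)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fy <= fx or random.random() < math.exp(-(fy - fx) / max(T, 1e-12)):
            x, fx = y, fy
            if fy < fbest:
                best, fbest = y, fy
        T *= cooling
    return best, fbest

# toy usage: minimise a 1-D function by jittering the current point
best, val = simulated_annealing(lambda x: (x - 3.0) ** 2,
                                lambda x: x + random.uniform(-0.5, 0.5),
                                x0=0.0)
print(best, val)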
This article provides entropic inequalities for binomial-Poisson distributions, derived from the two-point space. They appear as local inequalities of the M/M/∞ queue. They describe in particular the exponential dissipation of Φ-entropies along this process. This simple queueing process appears as a model of “constant curvature”, and plays for the simple Poisson process the role played by the Ornstein-Uhlenbeck process for Brownian motion. Some of the inequalities are recovered by semi-group interpolation. Additionally, we explore the behaviour of these entropic inequalities under a particular scaling, which sees the Ornstein-Uhlenbeck process as a fluid limit of M/M/∞ queues. Proofs are elementary and rely essentially on the development of a “Φ-calculus”.
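For reference, the Φ-entropy mentioned above is conventionally defined, for a convex function $\Phi$ and a reference measure $\mu$, by

$$ \mathrm{Ent}^{\Phi}_{\mu}(f) = \int \Phi(f)\, \mathrm{d}\mu - \Phi\!\left(\int f\, \mathrm{d}\mu\right), $$

and "exponential dissipation along the process" refers to estimates of the form $\mathrm{Ent}^{\Phi}_{\mu}(P_t f) \le e^{-ct}\, \mathrm{Ent}^{\Phi}_{\mu}(f)$ for the semigroup $(P_t)$ of the queue and some rate $c > 0$ (notation chosen here, not taken from the article).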
This paper discusses the asymptotic behavior of distributions of state variables of Markov processes generated by first-order stochastic difference equations. It studies the problem in a context that is general in the sense that (i) the evolution of the system takes place in a general state space (i.e., a space that is not necessarily finite or even countable); and (ii) the orbits of the unperturbed, deterministic component of the system converge to subsets of the state space which can be more complicated than a stationary state or a periodic orbit, that is, they can be aperiodic or chaotic. The main result of the paper is a proof that, under certain conditions on the deterministic attractor and the stochastic perturbations, the Markov process describing the dynamics of a perturbed deterministic system possesses a unique, invariant, and stochastically stable probability measure. Some simple economic applications are also discussed.
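In symbols (notation chosen here, not the paper's): the system is $X_{t+1} = f(X_t, \varepsilon_{t+1})$ with i.i.d. shocks $(\varepsilon_t)$, inducing a transition kernel $P(x, A)$; a probability measure $\mu$ is invariant when $\mu(A) = \int P(x, A)\, \mu(\mathrm{d}x)$ for every measurable set $A$, and stochastic stability is understood, roughly, as convergence of the distributions of $X_t$ to $\mu$ irrespective of the initial distribution.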
Arbitrage-free prices u of European contracts on risky assets whose log-returns are modelled by Lévy processes satisfy a parabolic partial integro-differential equation (PIDE) $\partial_t u + {\mathcal{A}}[u] = 0$. This PIDE is localized to bounded domains and the error due to this localization is estimated. The localized PIDE is discretized by the θ-scheme in time and a wavelet Galerkin method with N degrees of freedom in log-price space. The dense matrix for ${\mathcal{A}}$ can be replaced by a sparse matrix in the wavelet basis, and the linear systems in each implicit time step are solved approximately with GMRES in linear complexity. The total work of the algorithm for M time steps is bounded by $O(MN(\log N)^2)$ operations and $O(N\log N)$ memory. The deterministic algorithm gives optimal convergence rates (up to logarithmic terms) for the computed solution in the same complexity as finite difference approximations of the standard Black–Scholes equation. Computational examples for various Lévy price processes are presented.
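The overall structure (implicit θ-scheme in time, sparse operators, iterative GMRES solves) can be seen in miniature in the Python sketch below, which uses finite differences on the plain heat equation rather than the wavelet Galerkin discretization of the Lévy PIDE; all sizes and the initial condition are arbitrary.

# Theta-scheme time stepping with sparse matrices and GMRES for a toy
# 1-D diffusion problem u_t = u_xx (NOT the wavelet Galerkin PIDE of the
# paper; this only illustrates the implicit time-stepping and iterative
# solver structure the abstract describes).
import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import gmres

N, M, theta = 200, 50, 0.5                  # space dofs, time steps, Crank-Nicolson
x = np.linspace(0.0, 1.0, N)
h, dt = x[1] - x[0], 1.0 / M

# second-difference operator with homogeneous Dirichlet boundaries
L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
I = identity(N)
A_impl = (I - dt * theta * L).tocsc()       # left-hand side, solved each step
A_expl = (I + dt * (1 - theta) * L)         # explicit part

u = np.exp(-100 * (x - 0.5) ** 2)           # initial condition (a bump)
for _ in range(M):
    rhs = A_expl @ u
    u, info = gmres(A_impl, rhs)            # iterative sparse solve
    assert info == 0                        # 0 means GMRES converged

print("max of solution after diffusion:", u.max())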
We present a transformation for stochastic matrices and analyze the effects of using it in stochastic comparison with the strong stochastic (st) order. We show that unless the given stochastic matrix is row diagonally dominant, the transformed matrix provides better st bounds on the steady state probability distribution.
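For readers unfamiliar with the st order: a probability vector $p$ on an ordered finite state space is dominated by $q$ in the strong stochastic order precisely when every tail sum of $p$ is at most the corresponding tail sum of $q$. The Python helper below checks this condition; the matrix transformation that is the subject of the paper is not reproduced.

# Check the strong stochastic (st) order between two probability vectors
# on {0, 1, ..., n-1}: p <=_st q  iff  sum_{i>=k} p_i <= sum_{i>=k} q_i
# for every k.  (The paper's matrix transformation is not reproduced here.)
import numpy as np

def st_leq(p, q, tol=1e-12):
    p, q = np.asarray(p, float), np.asarray(q, float)
    tail_p = np.cumsum(p[::-1])[::-1]        # tail sums P(X >= k)
    tail_q = np.cumsum(q[::-1])[::-1]
    return bool(np.all(tail_p <= tail_q + tol))

p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(st_leq(p, q))   # True: q puts more mass on larger states
print(st_leq(q, p))   # False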