We consider the problem of obtaining effective representations for the solutions of linear, vector-valued stochastic differential equations (SDEs) driven by non-Gaussian pure-jump Lévy processes, and we show how such representations lead to efficient simulation methods. The processes considered constitute a broad class of models that find application across the physical and biological sciences, mathematics, finance, and engineering. Motivated by important relevant problems in statistical inference, we derive new, generalised shot-noise simulation methods whenever a normal variance-mean (NVM) mixture representation exists for the driving Lévy process, including the generalised hyperbolic, normal-gamma, and normal tempered stable cases. Simple, explicit conditions are identified for the convergence of the residual of a truncated shot-noise representation to a Brownian motion in the case of the pure Lévy process, and to a Brownian-driven SDE in the case of the Lévy-driven SDE. These results provide Gaussian approximations to the small jumps of the process under the NVM representation. The resulting representations are of particular importance in state inference and parameter estimation for Lévy-driven SDE models, since the resulting conditionally Gaussian structures can be readily incorporated into latent variable inference methods such as Markov chain Monte Carlo, expectation-maximisation, and sequential Monte Carlo.
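As a minimal sketch of the kind of generalised shot-noise construction described here, the following uses a Bondesson-type series for a gamma subordinator and then forms a normal variance-mean (variance-gamma) increment. This is an illustrative stand-in, not the paper's exact construction, and all parameter values are arbitrary:

```python
import numpy as np

def gamma_shot_noise(alpha, beta, T, n_terms, rng):
    """Truncated generalised shot-noise series for a Gamma(alpha, beta)
    subordinator on [0, T] (a Bondesson-type representation; the truncated
    sum of jumps is approximately Gamma(alpha*T, beta) distributed)."""
    gammas = np.cumsum(rng.exponential(size=n_terms))  # unit-rate Poisson epochs
    jumps = rng.exponential(size=n_terms) * np.exp(-gammas / (alpha * T)) / beta
    times = rng.uniform(0.0, T, size=n_terms)          # i.i.d. uniform jump times
    return times, jumps

def nvm_increment(alpha, beta, mu, sigma, T, n_terms, rng):
    """Normal variance-mean (NVM) mixture over the gamma subordinator,
    giving a variance-gamma (normal-gamma) increment on [0, T]."""
    _, jumps = gamma_shot_noise(alpha, beta, T, n_terms, rng)
    g = jumps.sum()                                    # subordinator value at T
    return mu * g + sigma * np.sqrt(g) * rng.standard_normal()

rng = np.random.default_rng(0)
samples = np.array([nvm_increment(2.0, 1.0, 0.5, 1.0, 1.0, 500, rng)
                    for _ in range(4000)])
# E[X_T] = mu * alpha * T / beta = 1.0 for these parameters.
```

Conditioning on the realised subordinator value `g` exposes the conditionally Gaussian structure that the abstract highlights for latent-variable inference.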
We extend the classical setting of an optimal stopping problem under full information to include problems with an unknown state. The framework allows the unknown state to influence (i) the drift of the underlying process, (ii) the payoff functions, and (iii) the distribution of the time horizon. Since the stopper is assumed to observe the underlying process and the random horizon, this is a two-source learning problem. Assigning a prior distribution for the unknown state, standard filtering theory can be employed to embed the problem in a Markovian framework with one additional state variable representing the posterior of the unknown state. We provide a convenient formulation of this Markovian problem, based on a measure change technique that decouples the underlying process from the new state variable. Moreover, we show by means of several novel examples that this reduced formulation can be used to solve problems explicitly.
We introduce a variant of Shepp’s classical urn problem in which the optimal stopper does not know whether sampling from the urn is done with or without replacement. By considering the problem’s continuous-time analog, we provide bounds on the value function and, in the case of a balanced urn (with an equal number of each ball type), an explicit solution is found. Surprisingly, the optimal strategy for the balanced urn is the same as in the classical urn problem. However, the expected value upon stopping is lower due to the additional uncertainty present.
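For context, the value function of Shepp's classical urn problem (sampling without replacement, which the stopper here is uncertain about) satisfies a simple dynamic-programming recursion; a minimal sketch, with minus-balls worth -1 and plus-balls worth +1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def urn_value(m, p):
    """Optimal expected total of the balls drawn in Shepp's urn problem,
    with m minus-balls (worth -1) and p plus-balls (worth +1), sampling
    without replacement.  Stopping immediately yields 0."""
    if m + p == 0:
        return 0.0
    cont = 0.0
    if p > 0:
        cont += (p / (m + p)) * (1.0 + urn_value(m, p - 1))
    if m > 0:
        cont += (m / (m + p)) * (-1.0 + urn_value(m - 1, p))
    return max(0.0, cont)
```

For the balanced urn, `urn_value(1, 1)` is 1/2 and `urn_value(2, 2)` is 2/3, matching the classical values; the paper's variant, in which the sampling mode itself is unknown, is not implemented here.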
In this article we consider the estimation of the log-normalization constant associated to a class of continuous-time filtering models. In particular, we consider ensemble Kalman–Bucy filter estimates based upon several nonlinear Kalman–Bucy diffusions. Using new conditional bias results for the mean of the aforementioned methods, we analyze the empirical log-scale normalization constants in terms of their $\mathbb{L}_n$-errors and $\mathbb{L}_n$-conditional bias. Depending on the type of nonlinear Kalman–Bucy diffusion, we show that these are bounded above by terms such as $\mathsf{C}(n)\left[t^{1/2}/N^{1/2} + t/N\right]$ or $\mathsf{C}(n)/N^{1/2}$ ($\mathbb{L}_n$-errors) and $\mathsf{C}(n)\left[t+t^{1/2}\right]/N$ or $\mathsf{C}(n)/N$ ($\mathbb{L}_n$-conditional bias), where $t$ is the time horizon, $N$ is the ensemble size, and $\mathsf{C}(n)$ is a constant that depends only on $n$, not on $N$ or $t$. Finally, we use these results for online static parameter estimation for the above filtering models and implement the methodology for both linear and nonlinear models.
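As an illustration of the ensemble idea (a discrete-time relative of the Kalman–Bucy diffusions analysed above, not the paper's method itself), here is a minimal stochastic (perturbed-observation) ensemble Kalman analysis step for a scalar state; all parameter values are illustrative:

```python
import numpy as np

def enkf_update(ensemble, y, H, R, rng):
    """Stochastic (perturbed-observation) ensemble Kalman analysis step for
    a scalar state with observation y = H*x + noise, noise variance R."""
    P = np.var(ensemble, ddof=1)               # forecast ensemble variance
    K = P * H / (H * P * H + R)                # Kalman gain
    perturbed = y + rng.normal(0.0, np.sqrt(R), size=ensemble.size)
    return ensemble + K * (perturbed - H * ensemble)

rng = np.random.default_rng(1)
prior = rng.normal(0.0, 1.0, size=500)         # forecast ensemble
y = 2.0                                        # observation, with H = 1
posterior = enkf_update(prior, y, H=1.0, R=1e-4, rng=rng)
```

With a very informative observation (small `R`), the gain is close to one and the analysis ensemble collapses onto the observation, as expected.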
Both sequential Monte Carlo (SMC) methods (a.k.a. ‘particle filters’) and sequential Markov chain Monte Carlo (sequential MCMC) methods constitute classes of algorithms which can be used to approximate expectations with respect to (a sequence of) probability distributions and their normalising constants. While SMC methods sample particles conditionally independently at each time step, sequential MCMC methods sample particles according to a Markov chain Monte Carlo (MCMC) kernel. Introduced over twenty years ago in [6], sequential MCMC methods have attracted renewed interest recently as they empirically outperform SMC methods in some applications. We establish an $\mathbb{L}_r$-inequality (which implies a strong law of large numbers) and a central limit theorem for sequential MCMC methods and provide conditions under which errors can be controlled uniformly in time. In the context of state-space models, we also provide conditions under which sequential MCMC methods can indeed outperform standard SMC methods in terms of asymptotic variance of the corresponding Monte Carlo estimators.
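To make the normalising-constant estimation concrete, here is a bootstrap SMC sketch (not the sequential MCMC scheme analysed in the paper) on a scalar linear-Gaussian state-space model, where the Kalman filter provides the exact log normalising constant for comparison; the model and parameters are illustrative:

```python
import numpy as np

def kalman_loglik(ys, a, q, r):
    """Exact log-likelihood (log normalising constant) of the scalar
    linear-Gaussian model x' = a*x + N(0, q), y = x + N(0, r), x0 ~ N(0, 1)."""
    m, P, ll = 0.0, 1.0, 0.0
    for y in ys:
        mp, Pp = a * m, a * a * P + q                        # predict
        S = Pp + r                                           # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (y - mp) ** 2 / S)
        K = Pp / S                                           # update
        m, P = mp + K * (y - mp), (1 - K) * Pp
    return ll

def smc_loglik(ys, a, q, r, N, rng):
    """Bootstrap particle filter estimate of the same log normalising
    constant, with multinomial resampling at every step."""
    x = rng.normal(0.0, 1.0, size=N)
    ll = 0.0
    for y in ys:
        x = a * x + rng.normal(0.0, np.sqrt(q), size=N)      # propagate
        logw = -0.5 * (np.log(2 * np.pi * r) + (y - x) ** 2 / r)
        ll += np.log(np.mean(np.exp(logw)))
        w = np.exp(logw - logw.max())
        x = x[rng.choice(N, size=N, p=w / w.sum())]          # resample
    return ll

rng = np.random.default_rng(2)
a, q, r = 0.9, 0.5, 0.5
xs, ys = [rng.normal(0, 1)], []
for _ in range(20):
    xs.append(a * xs[-1] + rng.normal(0, np.sqrt(q)))
    ys.append(xs[-1] + rng.normal(0, np.sqrt(r)))
exact = kalman_loglik(ys, a, q, r)
estimate = smc_loglik(ys, a, q, r, N=2000, rng=rng)
```

A sequential MCMC method would replace the conditionally independent propagate/resample step by moves of an MCMC kernel targeting the same sequence of distributions.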
Iterative Filtering (IF) is an alternative to the Empirical Mode Decomposition (EMD) algorithm for the decomposition of non-stationary and non-linear signals. Recently, IF was proved in [3] to be convergent for any L2 signal, and its stability was also demonstrated through examples. Furthermore, the so-called Fokker–Planck (FP) filters were introduced in [3]; they are smooth at every point and have compact support. Based on those results, in this paper we introduce the Multidimensional Iterative Filtering (MIF) technique for the decomposition and time-frequency analysis of non-stationary high-dimensional signals. We present the extension of FP filters to higher dimensions and prove convergence results under general sufficient conditions on the filter shape. Finally, we illustrate the promising performance of the MIF algorithm, equipped with high-dimensional FP filters, when applied to the decomposition of two-dimensional signals.
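A simplified one-dimensional sketch of the iterative filtering idea: repeatedly subtracting a local average isolates the fastest oscillation as the first intrinsic mode function. A triangular mask is used here as a crude stand-in for the smooth, compactly supported FP filters of the paper, and the window width is chosen by hand:

```python
import numpy as np

def moving_average(s, half_width):
    """Local average with a triangular (Bartlett) mask -- an illustrative
    stand-in for the smooth, compactly supported FP filters."""
    w = np.bartlett(2 * half_width + 1)
    w /= w.sum()
    return np.convolve(s, w, mode="same")

def first_imf(s, half_width, n_iter=10):
    """Iterative filtering: repeatedly subtract the local average so that
    only the fastest oscillation survives (a simplified 1-D sketch)."""
    imf = s.copy()
    for _ in range(n_iter):
        imf = imf - moving_average(imf, half_width)
    return imf

t = np.linspace(0.0, 1.0, 512)
fast = np.sin(2 * np.pi * 40 * t)        # high-frequency component
slow = np.sin(2 * np.pi * 3 * t)         # low-frequency component
s = fast + slow
imf = first_imf(s, half_width=12)
residual = s - imf                       # exact by construction
```

Away from the boundaries (where the "same"-mode convolution distorts the signal), the extracted mode closely tracks the fast component, while the residual carries the slow trend.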
Following the approach of standard filtering theory, we analyse investor valuation of firms when these are modelled as geometric-Brownian state processes that are privately and partially observed, at random (Poisson) times, by agents. Tasked with disclosing forecast values, agents can purposefully withhold their observations; explicit filtering formulae are derived for downgrading the valuations in the absence of disclosures. The analysis is conducted for both a solitary firm and m co-dependent firms.
Nonlinear filtering is investigated in a system where both the signal system and the observation system are under non-Gaussian Lévy fluctuations. Firstly, the Zakai equation is derived, and it is further used to derive the Kushner-Stratonovich equation. Secondly, by a filtered martingale problem, uniqueness for strong solutions of the Kushner-Stratonovich equation and the Zakai equation is proved. Thirdly, under some extra regularity conditions, the Zakai equation for the unnormalized density is also derived in the case of α-stable Lévy noise.
We extend the Kalman-Bucy filter to the case where both the system and observation processes are driven by finite-dimensional Lévy processes. Whereas the process driving the system dynamics is square-integrable, that driving the observations is not; it does, however, remain integrable. The main result is that the components of the observation noise that have infinite variance make no contribution to the filtering equations. The key technique used is approximation by processes having bounded jumps.
Onwards from the mid-twentieth century, the stochastic filtering problem has caught the attention of thousands of mathematicians, engineers, statisticians, and computer scientists. Its applications span the whole spectrum of human endeavour, including satellite tracking, credit risk estimation, human genome analysis, and speech recognition. Stochastic filtering has engendered a surprising number of mathematical techniques for its treatment and has played an important role in the development of new research areas, including stochastic partial differential equations, stochastic geometry, rough paths theory, and Malliavin calculus. It also spearheaded research in areas of classical mathematics, such as Lie algebras, control theory, and information theory. The aim of this paper is to give a brief historical account of the subject concentrating on the continuous-time framework.
Nonlinear filtering problems arise in many applications, such as communications and signal processing. Commonly used numerical methods include the Kalman filter and the particle filter. In this paper a novel numerical algorithm is constructed based on samples of the current state obtained by solving the state equation implicitly. Numerical experiments demonstrate that our algorithm is more accurate than the Kalman filter and more stable than the particle filter.
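The abstract does not spell out the algorithm, but the key ingredient — solving a state equation implicitly to obtain samples — can be sketched with a backward (implicit) Euler step solved by fixed-point iteration; the drift and parameters below are illustrative:

```python
def implicit_euler_step(x, dt, drift, noise, n_fixed_point=50):
    """One backward (implicit) Euler step for dX = drift(X) dt + dW:
    solve x_new = x + drift(x_new) * dt + noise by fixed-point iteration
    (a contraction whenever |drift'| * dt < 1)."""
    x_new = x
    for _ in range(n_fixed_point):
        x_new = x + drift(x_new) * dt + noise
    return x_new

# For the linear drift f(x) = -a*x the implicit step has the closed form
# (x + noise) / (1 + a*dt), which the fixed-point iteration reproduces.
a, dt = 1.0, 0.1
x1 = implicit_euler_step(1.0, dt, lambda z: -a * z, noise=0.0)
```

Implicit steps of this kind are unconditionally stable for stiff linear drifts, which is one plausible source of the stability advantage reported over the explicit-propagation particle filter.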
A recursive scheme is proposed for identifying a single input single output (SISO) Wiener-Hammerstein system, which consists of two linear dynamic subsystems and a sandwiched nonparametric static nonlinearity. The first linear block is assumed to be a finite impulse response (FIR) filter and the second an infinite impulse response (IIR) filter. By letting the input be a sequence of mutually independent Gaussian random variables, the recursive estimates for coefficients of the two linear blocks and the value of the static nonlinear function at any fixed given point are proven to converge to the true values with probability one as the data size tends to infinity. The static nonlinearity is identified in a nonparametric way and no structural information is directly used. A numerical example is presented that illustrates the theoretical results.
The problem of detecting an abrupt change in the distribution of an arbitrary, sequentially observed, continuous-path stochastic process is considered and the optimality of the CUSUM test is established with respect to a modified version of Lorden's criterion. We apply this result to the case in which a random drift emerges in a fractional Brownian motion, and we show that the CUSUM test optimizes Lorden's original criterion when a fractional Brownian motion with Hurst index H adopts a polynomial drift term with exponent H+1/2.
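In discrete time, the CUSUM test reduces to a one-line recursion on log-likelihood-ratio increments; a minimal sketch (the continuous-path, fractional-Brownian setting of the paper is not reproduced here):

```python
def cusum_alarm(llr_increments, threshold):
    """CUSUM stopping rule: accumulate log-likelihood-ratio increments,
    clip the statistic below at zero, and alarm once it reaches threshold.
    Returns the 0-based alarm index, or None if no alarm is raised."""
    g = 0.0
    for k, z in enumerate(llr_increments):
        g = max(0.0, g + z)
        if g >= threshold:
            return k
    return None
```

Clipping at zero restarts the statistic whenever the data look pre-change, which is what makes CUSUM optimal with respect to Lorden-type worst-case delay criteria.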
Our focus in this work is to investigate an efficient state estimation scheme for a singularly perturbed stochastic hybrid system. As stochastic hybrid systems have been used recently in diverse areas, the importance of correct and efficient estimation of such systems cannot be overemphasized. The framework of nonlinear filtering provides a suitable ground for on-line estimation. With the help of intrinsic multiscale properties of a system, we obtain an efficient estimation scheme for a stochastic hybrid system.
We study Markov measures and p-adic random walks with the use of states on the Cuntz algebras Op. Via the Gelfand–Naimark–Segal construction, these come from families of representations of Op. We prove that these representations reflect self-similarity especially well. In this paper, we consider a Cuntz–Krieger type algebra where the adjacency matrix depends on a parameter q (the case q = 1 recovers the Cuntz–Krieger algebra). This is ongoing work generalizing a construction of certain measures associated with random walks on graphs.
We consider Monte Carlo methods for the classical nonlinear filtering problem. The first method is based on a backward pathwise filtering equation and the second method is related to a backward linear stochastic partial differential equation. We study convergence of the proposed numerical algorithms. The methods considered offer several advantages: the capability, in principle, to solve filtering problems of large dimensionality, reliable error control, and a recursive structure. Their efficiency is achieved through numerical procedures that use effective numerical schemes and variance reduction techniques. The results obtained are supported by numerical experiments.
While the convergence properties of many sampling selection methods can be proven, one particular selection method introduced in Baker (1987), closely related to ‘systematic sampling’ in statistics, has so far been treated only empirically. The main motivation of this paper is to begin a formal study of its convergence properties, since in practice it is by far the fastest selection method available. We show that convergence results for the systematic sampling selection method are related to properties of peculiar Markov chains.
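The selection method in question is short enough to state in full; a sketch of systematic resampling in the style of Baker (1987), with the single uniform draw passed in explicitly for reproducibility:

```python
import numpy as np

def systematic_resample(weights, u):
    """Systematic sampling selection (Baker 1987): a single uniform draw
    u in [0, 1) stratifies the unit interval into N equal slots, giving N
    ancestor indices with offspring counts proportional to the weights
    (assumed normalised to sum to one)."""
    weights = np.asarray(weights, dtype=float)
    N = weights.size
    positions = (u + np.arange(N)) / N
    idx = np.searchsorted(np.cumsum(weights), positions, side="right")
    return np.minimum(idx, N - 1)   # guard against round-off at the top end
```

Its speed advantage comes from using one random number and one O(N) pass, versus N draws for multinomial selection; the price, as the paper notes, is that the resulting ancestor indices are no longer conditionally independent, which complicates the convergence analysis.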
We study the exponential utility indifference valuation of a contingent claim B in an incomplete market driven by two Brownian motions. The claim depends on a nontradable asset stochastically correlated with the traded asset available for hedging. We use martingale arguments to provide upper and lower bounds, in terms of bounds on the correlation, for the value VB of the exponential utility maximization problem with the claim B as random endowment. This yields an explicit formula for the indifference value b of B at any time, even with a fairly general stochastic correlation. Earlier results with constant correlation are recovered and extended. The reason why all this works is that, after a transformation to the minimal martingale measure, the value VB enjoys a monotonicity property in the correlation between tradable and nontradable assets.
We show that a characterization of scaling functions for multiresolution analyses given by Hernández and Weiss and that a characterization of low-pass filters given by Gundy both hold for multivariable multiresolution analyses.
Particle filters are Monte Carlo methods that aim to approximate the optimal filter of a partially observed Markov chain. In this paper, we study the case in which the transition kernel of the Markov chain depends on unknown parameters: we construct a particle filter for the simultaneous estimation of the parameter and the partially observed Markov chain (adaptive estimation) and we prove the convergence of this filter to the correct optimal filter, as time and the number of particles go to infinity. The filter presented here generalizes Del Moral's Monte Carlo particle filter.