1 Introduction and main results
Write $\mathcal {T} := T\mathopen {(\mkern -3mu(}\mathbb {R}^d\mathclose {)\mkern -3mu)} = \Pi _{k \ge 0} (\mathbb {R}^d)^{\otimes k}$ for the tensor series over $\mathbb {R}^d$ , equipped with a concatenation product, elements of which are written indifferently as
$$ \begin{align*} \mathbf{x} = (\mathbf{x}^{(0)}, \mathbf{x}^{(1)}, \dots) = \sum_{k \ge 0} \mathbf{x}^{(k)}, \qquad \mathbf{x}^{(k)} \in (\mathbb{R}^d)^{\otimes k}. \end{align*} $$
The affine subspace $\mathcal {T}_0$ (respectively, $\mathcal {T}_1$ ) with scalar component $\mathbf {x}^{(0)} = 0$ (respectively, $ = 1$ ) has a natural Lie algebra (respectively, formal Lie group) structure, with the commutator Lie bracket $[\mathbf {y}, \mathbf {x}] = \mathbf {y}\mathbf {x} - \mathbf {x}\mathbf {y}$ for $\mathbf {x}, \mathbf {y} \in \mathcal {T}_0$ , where $\mathbf {x}\mathbf {y}$ stands for the concatenation product.Footnote 1
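These objects are also easy to manipulate on a computer. The following is a minimal sketch (our own illustration, not code from the paper): a truncated tensor series is stored as a dictionary from words (tuples of letters in $\{1,\dots ,d\}$ ) to real coefficients, with concatenation product and commutator bracket; the truncation level N is an arbitrary choice.

```python
# Minimal sketch: truncated tensor series over R^d as {word: coefficient},
# where a word is a tuple of letters in {1, ..., d}.
N = 4  # truncation level (arbitrary choice for this illustration)

def concat(x, y, n=N):
    """Concatenation product xy, discarding all tensor levels > n."""
    z = {}
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            if len(w1) + len(w2) <= n:
                w = w1 + w2
                z[w] = z.get(w, 0.0) + c1 * c2
    return z

def add(x, y, s=1.0):
    """Componentwise x + s*y."""
    z = dict(x)
    for w, c in y.items():
        z[w] = z.get(w, 0.0) + s * c
    return z

def bracket(x, y, n=N):
    """Commutator Lie bracket [x, y] = xy - yx on T_0."""
    return add(concat(x, y, n), concat(y, x, n), -1.0)

e1, e2 = {(1,): 1.0}, {(2,): 1.0}
print(bracket(e1, e2))  # {(1, 2): 1.0, (2, 1): -1.0}, i.e. e_12 - e_21
```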
Let further $\mathscr {S} = \mathscr {S} (\mathbb {R}^d)$ , respectively, $\mathscr {S}^c= \mathscr {S}^c(\mathbb {R}^d)$ , denote the class of càdlàg,Footnote 2 respectively, continuous, d-dimensional semimartingales on some filtered probability space $(\Omega , (\mathcal {F}_t)_{t \ge 0}, \mathbb {P})$ . We recall that this is the somewhat decisive class of stochastic processes that allows for a reasonable stochastic integration theory. Classic texts include [Reference Revuz and Yor59, Reference Le Gall43] (continuous semimartingales) and [Reference Jacod and Shiryaev36, Reference Protter57] (càdlàg semimartingales); a concise introduction for readers with no background in stochastic analysis is [Reference Aït-Sahalia and Jacod2], Chapter 1.Footnote 3 (Readers with no background in probability may also focus in a first reading on deterministic semimartingales; these are precisely càdlàg paths of finite variation on compacts.) Following Lyons [Reference Lyons45], for a continuous semimartingale $X\in \mathscr {S}^c$ , the signature given as the formal sum of iterated Stratonovich integrals,
$$ \begin{align*} \mathrm{Sig}(X)_{s,t} := \Big( 1, \int_s^t \circ \mathrm{d} X_{u}, \int_s^t \! \int_s^{u_2} \circ \mathrm{d} X_{u_1} \otimes \circ \mathrm{d} X_{u_2}, \dots \Big), \end{align*} $$
for $0 \le s \le t$ defines a random element in $\mathcal {T}_1$ and, as a process, a formal $\mathcal {T}_1$ -valued semimartingale. By regarding the d-dimensional semimartingale X as a $\mathcal {T}_0$ -valued semimartingale ( $X \leftrightarrow \mathbf {X} = (0,X,0,\dots $ )), we see that the signature of $X\in \mathscr {S}^c$ satisfies the Stratonovich stochastic differential equation
(1.1) $$ \begin{align} \mathrm{d} S = S \circ \mathrm{d} \mathbf{X}, \qquad S_0 = 1. \end{align} $$
In the general case of $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0)$ with possibly nontrivial higher-order semimartingale components $\mathbf {X} = (0, \mathbf {X}^{(1)}, \mathbf {X}^{(2)}, \dots )$ , the solution to equation (1.1) is also known as the Lie group valued stochastic exponential (or development), with classical references [Reference McKean51, Reference Hakim-Dowek and Lépingle31]; the càdlàg case [Reference Estrade20] is consistent with the geometric or Marcus [Reference Marcus49, Reference Marcus50, Reference Kurtz, Pardoux and Protter41, Reference Applebaum4, Reference Friz and Shekhar25] interpretation of equation (1.1)Footnote 4 with jump behavior $S_t = e^{\Delta \mathbf {X}_t} S_{t-}$ . From a stochastic differential geometry point of view, one aims for an intrinsic understanding of equation (1.1) valid for arbitrary Lie groups. For instance, if $\mathbf {X}$ takes values in any Lie subalgebra $\mathcal {L} \subset \mathcal {T}_0$ , then S takes values in the group $\mathcal {G} = \exp \mathcal {L}$ . In the case of a d-dimensional semimartingale X, the minimal choice is the free Lie algebra $\mathrm {Lie}\mathopen {(\mkern -3mu(}\mathbb {R}^d\mathclose {)\mkern -3mu)}$ spanned by $\mathbb {R}^d$ (see, for example, [Reference Reutenauer58]), and the resulting Lie algebra structure of iterated integrals (both in the smooth and Stratonovich semimartingale case) is well-known. The extrinsic linear ambient space $\mathcal {T} \supset \exp {\mathcal {L}}$ will be important to us. Indeed, writing $S_t=\mathrm {Sig}(\mathbf {X})_{0,t}$ for the (unique, global) $\mathcal {T}_1$ -valued solution of equation (1.1) driven by $\mathcal {T}_0$ -valued $\mathbf {X}$ , started at $S_0 = 1$ , we define, whenever $\mathrm {Sig} (\mathbf {X})_{0, T}$ is (componentwise) integrable, the expected signature and signature cumulants (SigCum)
$$ \begin{align*} \boldsymbol{\mu}(T) := \mathbb{E}\big( \mathrm{Sig}(\mathbf{X})_{0,T} \big), \qquad \boldsymbol{\kappa}(T) := \log \boldsymbol{\mu}(T). \end{align*} $$
Already when $\mathbf {X}$ is deterministic, and sufficiently regular to make equation (1.1) meaningful, this leads to an interesting (ordinary differential) equation for $\boldsymbol {\kappa }$ with accompanying (Magnus) expansion, well understood as an effective computational tool [Reference Iserles, Munthe-Kaas, Nørsett and Zanna34, Reference Blanes, Casas, Oteo and Ros6]. The importance of the stochastic case $\mathbf {X} = \mathbf {X}(\omega )$ , with expectation and logarithm thereof, was developed by Lyons and coauthors; see [Reference Lyons45] and references therein, with a variety of applications, ranging from machine learning to numerical algorithms on Wiener space known as cubature [Reference Lyons and Victoir47]; signature cumulants were named and first studied in their own right in [Reference Bonnier and Oberhauser7]. The joint appearance of cumulants and Magnus-type expansions is also seen in non-commutative probability [Reference Celestino, Ebrahimi-Fard, Patras and Perales10], although the methods and aims appear quite different.Footnote 5
In the special case of $d=1$ and $\mathbf {X}=(0,X,0,\dots )$ , where X is a scalar semimartingale, $\boldsymbol {\mu } (T)$ and $\boldsymbol {\kappa } (T)$ are nothing but the sequence of moments and cumulants of the real valued random variable $X_T-X_0$ . When $d> 1$ , the expected signature / cumulants provide an effective way to describe the process X on $[0,T]$ ; see [Reference LeJan and Qian44, Reference Lyons45, Reference Chevyrev and Lyons13]. The question arises how to compute them. If one takes $\mathbf {X}$ as d-dimensional Brownian motion, the signature cumulant $\boldsymbol {\kappa }(T)$ equals $(T/2) \mathbf {I}_d$ , where $\mathbf {I}_d$ is the identity $2$ -tensor over $\mathbb {R}^d$ . This is known as Fawcett’s formula [Reference Lyons and Victoir47, Reference Friz and Hairer24]. Loosely speaking, and postponing precise definitions, our main result is a vast generalization of Fawcett’s formula.
Theorem 1.1 FunctEqu $\mathscr {S}$ -SigCum
For sufficiently integrable $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0)$ , the (time-t) conditional signature cumulant $ \boldsymbol {\kappa }_t (T) \equiv \boldsymbol {\kappa }_t := \log \mathbb {E}_t (\mathrm {Sig} (\mathbf {X})_{t, T})$ is the unique solution of the functional equation
where all integrals are understood in an Itô– and Riemann–Stieltjes sense, respectively,Footnote 6 and $\operatorname {\mathrm {ad}}{\mathbf {x}}=[\mathbf {x}, \cdot ]:\mathcal {T}_0\to \mathcal {T}_0$ denotes the adjoint operator associated to $\mathbf {x} \in \mathcal {T}_0$ . The functions $H,G,Q$ are defined in equation (4.1) below; see also Section 2 for further notation.
As displayed in Figures 1 and 2, this theorem has an avalanche of consequences on which we now comment.
○ Equation (1.2) allows one to compute $\boldsymbol {\kappa }^{(n)} \in (\mathbb {R}^d)^{\otimes n}$ as a function of $\boldsymbol {\kappa }^{(1)},\dotsc ,\boldsymbol {\kappa }^{(n-1)}$ . (This remark applies mutatis mutandis to all special cases seen as vertices in Figure 1.) The resulting expansions, displayed in Figure 2, are of computational interest. In particular, our approach allows us, in some special cases, either to derive closed expressions for the conditional cumulant series $\boldsymbol {\kappa }_t$ or to characterize it as the unique solution of a certain parabolic PDE (see Section 6). In any case, this provides a means of computation that can in principle be more efficient than the naïve Monte Carlo approach. Even when such a concrete form of $\boldsymbol {\kappa }_t$ is not available, the recursive nature of the expressions for each homogeneous component can be useful in some numerical scenarios.
○ The most classical consequence of equation (1.2) appears when $\mathbf {X}$ is a deterministic continuous semimartingale: that is, in particular, the components of $\mathbf {X}$ are continuous paths of finite variation, which also covers the absolutely continuous case with integrable componentwise derivative $\dot {\mathbf {X}}$ . In this case all bracket terms and the final jump-sum disappear. What remains is a classical differential equation due to [Reference Hausdorff32], here in backward form:
(1.3) $$ \begin{align} - \mathrm{d} \boldsymbol{\kappa}_t (T) = H(\operatorname{\mathrm{ad}}{\boldsymbol{\kappa}_{t}})\mathrm{d}\mathbf{X}_t. \end{align} $$The accompanying expansion is then precisely the Magnus expansion [Reference Magnus48, Reference Iserles and Nørsett33, Reference Iserles, Munthe-Kaas, Nørsett and Zanna34, Reference Blanes, Casas, Oteo and Ros6]. By taking $\mathbf {X}$ continuous and piecewise linear on two adjacent intervals, say $[0,1)\cup [1,2)$ , one obtains the Baker–Campbell–Hausdorff formula (see, e.g., [Reference Miller52, Theorem 5.5])(1.4) $$ \begin{align} \begin{aligned} \boldsymbol{\kappa}_0(2)=\log\bigl(\exp(\mathbf{x}_1)\exp(\mathbf{x}_2) \bigr) &=: \operatorname{BCH}(\mathbf{x}_1,\mathbf{x}_2)\\ &=\mathbf{x}_2+\int_0^1\Psi(\exp(\operatorname{\mathrm{ad}} t\mathbf{x}_1)\circ\exp(\operatorname{\mathrm{ad}}\mathbf{x}_2))(\mathbf{x}_1)\,\mathrm dt, \end{aligned} \end{align} $$with$$\begin{align*}\Psi(z):=\frac{\ln(z)}{z-1}=\sum_{n\ge 0}\frac{(-1)^n}{n+1}(z-1)^n. \end{align*}$$It is also instructive to let $\mathbf {X}$ be piecewise constant on these intervals, with $\Delta \mathbf {X}_1 =\mathbf {x}_1, \Delta \mathbf {X}_2 = \mathbf {x}_2$ , in which case equation (1.2) reduces to the first equality in equation (1.4). Such jump variations of the Magnus expansion are discussed in Section 5.1. (A numerical check of the first terms of equation (1.4) is sketched at the end of this list.)
○ Write $\pi _{\mathrm {Sym}}: \mathcal {T} \to \mathcal {S}$ for the canonical projection to the extended symmetric algebra $\mathcal {S}$ , the linear space identified with symmetric tensor series, and define the $\mathcal {S}$ -valued semimartingale $\hat {\mathbf {X}} := \pi _{\mathrm {Sym}}(\mathbf {X})$ and symmetric signature cumulants $\hat {\boldsymbol {\kappa }}(T) := \log (\mathbb {E}_{\cdot }(\mathrm {Sig}(\hat {\mathbf {X}})_{\cdot , T})) = \pi _{\mathrm {Sym}}(\boldsymbol {\kappa }(T))$ (see Section 2.3.1 for more detail). Then equation (1.2), in its projected and commutative form becomes (see also Section 5.2)
(1.5) $$ \begin{align} \begin{aligned} \qquad \text{FunctEqu }\mathscr{S}\text{-Cum:} \quad \hat{\boldsymbol{\kappa}}_t (T) & = \mathbb{E}_t\bigg\{ \hat{\mathbf{X}}_{t,T} + \frac{1}{2} \left\langle (\hat{\mathbf{X}}+ \hat{\boldsymbol{\kappa}})^{c} \right\rangle_{t,T}\\ &\qquad + \sum_{t < u \le T}\bigg(\exp \Big( \Delta \hat{\mathbf{X}}_u + \Delta \hat{\boldsymbol{\kappa}}_u \Big) - 1 - (\Delta \hat{\mathbf{X}}_u + \Delta \hat{\boldsymbol{\kappa}}_u ) \bigg) \bigg\}, \end{aligned} \end{align} $$where $\exp \colon \mathcal {S}_0 \to \mathcal {S}_1$ is defined by the usual power series. First-level tensors are trivially symmetric, and therefore equation (1.5) applies to an $\mathbb {R}^d$ -valued semimartingale X via the canonical embedding $\hat {\mathbf {X}} = (0, X, 0, \dots ) \in \mathscr {S}(\mathcal {S}_0)$ . More interestingly, the case $\hat {\mathbf {X}} = (0,aX,b\langle X \rangle , 0, \dotsc )$ for a d-dimensional continuous martingale X can be seen to underlie the expansions of [Reference Friz, Gatheral and Radoičić23], which improves and unifies previous results [Reference Lacoin, Rhodes and Vargas42, Reference Alos, Gatheral and Radoičić3] treating $(a,b)=(1,0)$ and $(a,b)=(1,-1/2)$ , respectively. Following Gatheral and coworkers, equation (1.5) and subsequent expansions involve ‘diamond’ products of semimartingales, given, whenever well-defined, by$$ \begin{align*}(A \diamond B)_t(T) := \mathbb{E}_t \big( \left\langle A^c, B^c \right\rangle_{t,T} \big). \end{align*} $$We note that equation (1.5) induces recursive formulae for cumulants, dubbed Diamond expansions in Figure 2, previously discussed in [Reference Lacoin, Rhodes and Vargas42, Reference Alos, Gatheral and Radoičić3, Reference Friz, Gatheral and Radoičić23, Reference Fukasawa and Matsushita28], together with a range of applications, from quantitative finance (including rough volatility models [Reference Abi Jaber, Larsson and Pulido1, Reference Gatheral and Keller-Ressel29]) to statistical physics: in [Reference Lacoin, Rhodes and Vargas42], the authors rely on such formulae to compute the cumulant function of log-correlated Gaussian fields, more precisely approximations thereof, which underlies the Sine-Gordon model and is a key ingredient in their renormalization procedure. With regard to the existing (‘commutative’) literature, our algebraic setup is ideally suited to work under finite moment assumptions; we are able to deal with jumps, not treated in [Reference Lacoin, Rhodes and Vargas42, Reference Alos, Gatheral and Radoičić3]. Equation (1.5), the commutative shadow of equation (1.2), should be compared with Riccati’s ordinary differential equation from affine process theory [Reference Duffie, Filipović and Schachermayer19, Reference Cuchiero, Filipović, Mayerhofer and Teichmann16, Reference Keller-Ressel, Schachermayer and Teichmann40]. A systematic comparison would lead us too far astray from our main object of study; nevertheless, we illustrate the connection in Remark 6.9. Of course, our results, in particular equations (1.2) and (1.5), are not restricted to affine semimartingales. In turn, expected signatures and cumulants, and subsequently all our statements about these, require moments, which are not required for the Riccati evolution of the characteristic function of affine processes.
Of recent interest, explicit diamond expansions have been obtained for ‘rough affine’ processes, non-Markov by nature, with a cumulant generating function characterized by Riccati Volterra equations; see [Reference Abi Jaber, Larsson and Pulido1, Reference Gatheral and Keller-Ressel29, Reference Friz, Gatheral and Radoičić23]. It is remarkable that analytic tractability remains intact when one passes to path space and considers signature cumulants, as we illustrate in Section 6.3.
○ Finally, we mention Signature-SDEs [Reference Arribas, Salvi and Szpruch5], tractable classes of stochastic differential equations that can be studied from an infinite dimensional affine and polynomial perspective [Reference Cuchiero, Svaluto-Ferro and Teichmann18]. Calibration of such models hinges on the efficient computation of expected signatures, which is the very purpose of this paper.
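As announced in the Magnus/BCH item above, the first terms of equation (1.4) are easy to verify by machine. The following self-contained sketch (our illustration; the dictionary representation of truncated tensor series is an implementation choice, not the paper's code) computes $\log (\exp (\mathbf {x}_1)\exp (\mathbf {x}_2))$ in the step-3 truncated tensor algebra and compares levels one and two with $\mathbf {x}_1+\mathbf {x}_2+\frac {1}{2}[\mathbf {x}_1,\mathbf {x}_2]$ .

```python
from math import factorial

N = 3  # step-3 truncation

def concat(x, y):
    z = {}
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            if len(w1) + len(w2) <= N:
                w = w1 + w2
                z[w] = z.get(w, 0.0) + c1 * c2
    return z

def add(x, y, s=1.0):
    z = dict(x)
    for w, c in y.items():
        z[w] = z.get(w, 0.0) + s * c
    return z

def texp(x):
    """exp on T_0: 1 + x + x^2/2! + ..., truncated at level N."""
    out, p = {(): 1.0}, {(): 1.0}
    for k in range(1, N + 1):
        p = concat(p, x)
        out = add(out, p, 1.0 / factorial(k))
    return out

def tlog(y):
    """log on T_1: (y-1) - (y-1)^2/2 + ..., truncated at level N."""
    u = add(y, {(): 1.0}, -1.0)
    out, p = {}, {(): 1.0}
    for k in range(1, N + 1):
        p = concat(p, u)
        out = add(out, p, (-1.0) ** (k + 1) / k)
    return out

x, y = {(1,): 1.0}, {(2,): 1.0}
bch = tlog(concat(texp(x), texp(y)))
# Levels 1 and 2 must match x + y + [x, y]/2:
expected = {(1,): 1.0, (2,): 1.0, (1, 2): 0.5, (2, 1): -0.5}
assert all(abs(bch.get(w, 0.0) - c) < 1e-12 for w, c in expected.items())
print(bch)  # level-3 terms carry the familiar coefficients 1/12 and -1/6
```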
We conclude this introduction with some remarks on convergence. As explained, this work contains generalizations of cumulant type recursions, previously studied in [Reference Alos, Gatheral and Radoičić3, Reference Lacoin, Rhodes and Vargas42, Reference Friz, Gatheral and Radoičić23], the interest therein being the algorithmic computation of cumulants. Basic facts about analytic functions show that classical moment- and cumulant-generating functions, for random variables with finite moments of all orders, have a radius of convergence $\rho \ge 0$ , directly related to the growth of the corresponding sequence. Convergence, in the sense $\rho> 0$ , implies that the moment problem is well-posed. That is, the moments (equivalently: cumulants) determine the law of the underlying random variable. (See also [Reference Friz, Gatheral and Radoičić23] for a related discussion in the context of diamond expansions.) The point of view taken here is to work directly on this space of sequences, which is even more natural in the non-commutative setting, as already seen in the deterministic setting of [Reference Magnus48]. While convergence of expected signature or signature cumulant series is not directly an interesting question,Footnote 7 understanding their growth most certainly is: in a celebrated paper [Reference Chevyrev and Lyons13], it was shown that under a growth condition of the expected signature, the ‘expected signature problem’ is well-posed; that is, the expected signature (equivalently: signature cumulants) determines the law of the random signature. With this in mind, it is conceivable that Theorem 1.1 will prove useful toward controlling the growth of signature cumulants (and hence expected signatures).
2 Preliminaries
2.1 The tensor algebra and tensor series
Denote by $T({\mathbb {R}^d})$ the tensor algebra over ${\mathbb {R}^d}$ : that is,
$$ \begin{align*} T({\mathbb{R}^d}) := \bigoplus_{k \ge 0} ({\mathbb{R}^d})^{\otimes k}, \end{align*} $$
elements of which are finite sums (also known as tensor polynomials) of the form
(2.1) $$ \begin{align} \mathbf{x} = \sum_{k \ge 0} \mathbf{x}^{(k)} = \sum_{w \in \mathcal{W}_d} \mathbf{x}^{w} e_w, \end{align} $$
with $\mathbf {x}^{(k)} \in ({\mathbb {R}^d})^{\otimes k}, \mathbf {x}^w \in \mathbb {R}$ and linear basis vectors $e_w := e_{i_1}\dotsm e_{i_k}\in ({\mathbb {R}^d})^{\otimes k}$ , where w ranges over all words $w=i_1\dotsm i_k\in \mathcal {W}_d$ over the alphabet $\{1,\dots ,d\}$ . Note $\mathbf {x}^{(k)} = \sum _{|w|=k} \mathbf {x}^w e_w$ , where $|w|$ denotes the length of a word w. The element $e_\emptyset = 1 \in ({\mathbb {R}^d})^{\otimes 0} \cong \mathbb {R}$ is the neutral element of the concatenation (also known as a tensor) product, which is obtained by linear extension of $e_we_{w'}=e_{ww'}$ , where $ww' \in \mathcal {W}_d$ denotes concatenation of two words. We thus have, for $\mathbf {x},\mathbf {y} \in T({\mathbb {R}^d})$ ,
$$ \begin{align*} \mathbf{x}\mathbf{y} = \sum_{w \in \mathcal{W}_d} \Big( \sum_{w = u v} \mathbf{x}^{u} \mathbf{y}^{v} \Big) e_w, \end{align*} $$
where the inner sum runs over all splittings of a word w into a concatenation of two words $u v$ .
This extends naturally to infinite sums, also known as tensor series, elements of the ‘completed’ tensor algebra
$$ \begin{align*} \mathcal{T} = T\mathopen {(\mkern -3mu(}\mathbb {R}^d\mathclose {)\mkern -3mu)} := \prod_{k \ge 0} ({\mathbb{R}^d})^{\otimes k}, \end{align*} $$
which are written as in equation (2.1), but now as formal infinite sums with identical notation and multiplication rules; the resulting algebra $\mathcal {T}$ obviously extends $T(\mathbb {R}^d)$ . For any $n\in {\mathbb {N}_{\ge 1}}$ , define the projection to tensor levels by
$$ \begin{align*} \pi_{n} \colon \mathcal{T} \to ({\mathbb{R}^d})^{\otimes n}, \quad \mathbf{x} \mapsto \mathbf{x}^{(n)}, \qquad \text{and write } \mathbf{x}^{(0,n)} := (\mathbf{x}^{(0)}, \mathbf{x}^{(1)}, \dots, \mathbf{x}^{(n)}). \end{align*} $$
Denote by $\mathcal {T}_0$ and $\mathcal {T}_1$ the subspaces of tensor series starting with $0$ and $1$ , respectively; that is, $\mathbf {x} \in \mathcal {T}_0$ (respectively, $\mathcal {T}_1$ ) if and only if $\mathbf {x}^\emptyset =0$ (respectively, $\mathbf {x}^\emptyset =1$ ). Restricted to $\mathcal {T}_0$ and $\mathcal {T}_1$ , respectively, the exponential and logarithm in $\mathcal {T}$ , defined by the usual series,
$$ \begin{align*} \exp(\mathbf{x}) := \sum_{k \ge 0} \frac{\mathbf{x}^{k}}{k!}, \qquad \log(1 + \mathbf{x}) := \sum_{k \ge 1} \frac{(-1)^{k+1}}{k} \mathbf{x}^{k}, \end{align*} $$
are globally defined and inverse to each other. The vector space $\mathcal {T}_0$ becomes a Lie algebra with the commutator bracket
$$ \begin{align*} [\mathbf{x}, \mathbf{y}] := \mathbf{x}\mathbf{y} - \mathbf{y}\mathbf{x}, \qquad \mathbf{x}, \mathbf{y} \in \mathcal{T}_0. \end{align*} $$
Define the adjoint operator associated to a Lie algebra element $\mathbf {y} \in \mathcal {T}_0$ by
$$ \begin{align*} \operatorname{\mathrm{ad}} \mathbf{y} := [\mathbf{y}, \cdot\,] \colon \mathcal{T}_0 \to \mathcal{T}_0. \end{align*} $$
The exponential image $\mathcal {T}_1=\exp (\mathcal {T}_0)$ is a Lie group, at least formally so. We refrain from equipping the infinite-dimensional $\mathcal {T}_1$ with a differentiable structure, which is not necessary in view of the ‘locally finite’ nature of the group law $(\mathbf {x},\mathbf {y}) \mapsto \mathbf {x} \mathbf {y}$ .
Let $(a_k)_{k\ge 0}$ be a sequence of real numbers and $\mathbf {x}\in \mathcal {T}_0$ . Then we can always define a linear operator on $\mathcal {T}_0$ by
$$ \begin{align*} \mathbf{y} \mapsto \sum_{k \ge 0} a_k (\operatorname{\mathrm{ad}} \mathbf{x})^{k} (\mathbf{y}), \end{align*} $$
where $(\operatorname {\mathrm {ad}}\mathbf {x})^{0} = \mathrm {Id}$ is the identity operator and $(\operatorname {\mathrm {ad}} \mathbf {x})^{n} = \operatorname {\mathrm {ad}}\mathbf {x} \circ (\operatorname {\mathrm {ad}} \mathbf {x})^{n-1}$ for any $n\in {\mathbb {N}_{\ge 1}}$ . Indeed, there is no convergence issue due to the graded structure, as can be seen by projecting to some tensor level $n\in {\mathbb {N}_{\ge 1}}$ :
$$ \begin{align*} \Big( \sum_{k \ge 0} a_k (\operatorname{\mathrm{ad}} \mathbf{x})^{k} (\mathbf{y}) \Big)^{(n)} = \sum_{k=0}^{n-1} a_k \sum_{\ell} \big( \operatorname{\mathrm{ad}} \mathbf{x}^{(l_2)} \cdots \operatorname{\mathrm{ad}} \mathbf{x}^{(l_{k+1})} \big) \big( \mathbf{y}^{(l_1)} \big), \end{align*} $$
where the inner summation in the right-hand side is over the finite set of multi-indices $\ell = (l_1, \dotsc , l_{k+1})\in ({\mathbb {N}_{\ge 1}})^{k+1}$ with $l_1 + \dotsb + l_{k+1} = n$ . In the following we will simply write $(\operatorname {\mathrm {ad}}\mathbf {x}\operatorname {\mathrm {ad}}\mathbf {y}) \equiv (\operatorname {\mathrm {ad}}\mathbf {x} \circ \operatorname {\mathrm {ad}}\mathbf {y})$ for the composition of adjoint operators. Further, when $\ell = (l_1)$ is a multi-index of length one, we will use the notation $(\operatorname {\mathrm {ad}}\mathbf {x}^{(l_2)} \cdots \operatorname {\mathrm {ad}}\mathbf {x}^{(l_{k+1})}) \equiv \mathrm {Id}$ . Note also that the iteration of adjoint operations can be explicitly expanded in terms of left- and right-multiplication, as follows:
(2.3) $$ \begin{align} (\operatorname{\mathrm{ad}} \mathbf{x})^{n} (\mathbf{y}) = \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k}\, \mathbf{x}^{k}\, \mathbf{y}\, \mathbf{x}^{n-k}. \end{align} $$
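The left/right-multiplication expansion in equation (2.3) is easily checked numerically. A sketch (our illustration, same dictionary representation as in the introduction) compares the $n$ -fold iteration of $\operatorname {\mathrm {ad}}\mathbf {x}$ with the binomial formula:

```python
from math import comb

N = 5  # room for (ad x)^3 applied to a level-one y

def concat(x, y):
    z = {}
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            if len(w1) + len(w2) <= N:
                z[w1 + w2] = z.get(w1 + w2, 0.0) + c1 * c2
    return z

def add(x, y, s=1.0):
    z = dict(x)
    for w, c in y.items():
        z[w] = z.get(w, 0.0) + s * c
    return z

def bracket(x, y):
    return add(concat(x, y), concat(y, x), -1.0)

def power(x, k):
    p = {(): 1.0}
    for _ in range(k):
        p = concat(p, x)
    return p

x, y, n = {(1,): 1.0, (2,): 0.5}, {(2,): 1.0}, 3

lhs = y
for _ in range(n):              # (ad x)^n (y) by iterated brackets
    lhs = bracket(x, lhs)

rhs = {}                        # sum_k (-1)^(n-k) C(n,k) x^k y x^(n-k)
for k in range(n + 1):
    term = concat(concat(power(x, k), y), power(x, n - k))
    rhs = add(rhs, term, (-1.0) ** (n - k) * comb(n, k))

assert all(abs(lhs.get(w, 0.0) - rhs.get(w, 0.0)) < 1e-12
           for w in set(lhs) | set(rhs))
print("expansion (2.3) verified for n =", n)
```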
For a word $w \in \mathcal {W}_d$ with $|w|>0$ , we define the directional derivative for a function $f\colon \mathcal {T} \to \mathbb {R}$ by
$$ \begin{align*} \partial_w f(\mathbf{x}) := \lim_{\varepsilon \downarrow 0} \frac{f(\mathbf{x} + \varepsilon e_w) - f(\mathbf{x})}{\varepsilon} \end{align*} $$
for any $\mathbf {x} \in \mathcal {T}$ such that the right-hand derivative exists.
2.2 The outer tensor algebra
Denote by $\mathrm {m}: T({\mathbb {R}^d})\otimes T({\mathbb {R}^d}) \to T({\mathbb {R}^d})$ the multiplication (concatenation) map of the tensor algebra. Note that $\mathrm {m}$ is linear and, due to the non-commutativity of the tensor product, not symmetric. The map can naturally be extended to a linear map $\mathrm {m}: T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d}) \to \mathcal {T}$ , where $T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d})$ is the following graded algebra:
$$ \begin{align*} T({\mathbb{R}^d}) \overline{\otimes} T({\mathbb{R}^d}) := \prod_{n \ge 0} \bigoplus_{i+j=n} ({\mathbb{R}^d})^{\otimes i} \otimes ({\mathbb{R}^d})^{\otimes j}. \end{align*} $$
Note that there is a natural linear embedding
$$ \begin{align*} T({\mathbb{R}^d}) \otimes T({\mathbb{R}^d}) \hookrightarrow T({\mathbb{R}^d}) \overline{\otimes} T({\mathbb{R}^d}). \end{align*} $$
We will of course refrain from explicitly denoting the embedding and simply regard $\mathbf {x} \otimes \mathbf {y}$ as an element in $T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d})$ . We emphasize that here $\otimes $ does not denote the (inner) tensor product in $\mathcal {T}$ , for which we did not reserve a symbol, but it denotes another (outer) tensor product. We can lift two linear maps $g, f\colon \mathcal {T}\to \mathcal {T}$ to a linear map $g \odot f: T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d}) \to \mathcal {T}$ defined by
$$ \begin{align*} (g \odot f)(\mathbf{x} \otimes \mathbf{y}) := g(\mathbf{x})\, f(\mathbf{y}). \end{align*} $$
In particular, for all $\mathbf {x},\mathbf {y}\in \mathcal {T}$ , it holds that
(2.4) $$ \begin{align} (\mathrm{Id} \odot \mathrm{Id})(\mathbf{x} \otimes \mathbf{y}) = \mathrm{m}(\mathbf{x} \otimes \mathbf{y}) = \mathbf{x}\mathbf{y}. \end{align} $$
2.3 Some quotients of the tensor algebra
2.3.1 The symmetric tensor algebra
The symmetric algebra over ${\mathbb {R}^d}$ , denoted by $S({\mathbb {R}^d})$ , is the quotient of $T({\mathbb {R}^d})$ by the two-sided ideal I generated by $\{xy-yx:x,y\in {\mathbb {R}^d}\}$ . A linear basis of $S({\mathbb {R}^d})$ is then given by $\{ \hat {e}_v \}$ over non-decreasing words, $v=(i_1,\dotsc ,i_n) \in \widehat {\mathcal {W}}_d$ , with $1 \le i_1 \le \dots \le i_n \le d, n \ge 0$ . Every $\hat {\mathbf {x}} \in S(\mathbb {R}^d)$ can be written as a finite sum,
$$ \begin{align*} \hat{\mathbf{x}} = \sum_{v \in \widehat{\mathcal{W}}_d} \hat{\mathbf{x}}^{v} \hat{e}_v, \qquad \hat{\mathbf{x}}^{v} \in \mathbb{R}, \end{align*} $$
and we have an immediate identification with polynomials in d commuting indeterminates. The canonical projection
$$ \begin{align*} \pi_{\mathrm{Sym}} \colon T({\mathbb{R}^d}) \twoheadrightarrow S({\mathbb{R}^d}), \quad e_w \mapsto \hat{e}_{\hat{w}}, \end{align*} $$
where $\hat {w}\in \widehat {\mathcal {W}}_d$ denotes the non-decreasing reordering of the letters of the word $w\in \mathcal {W}_d$ , is an algebra epimorphism, which extends to an epimorphism $\pi _{\mathrm {Sym}}: \mathcal {T}\twoheadrightarrow \mathcal {S}$ , where $\mathcal {S} = S \mathopen {(\mkern -3mu(} \mathbb {R}^d\mathclose {)\mkern -3mu)}$ is the algebra completion; elements of $\mathcal {T}$ (respectively, $\mathcal {S}$ ) are identifiable with formal series in d non-commuting (respectively, commuting) indeterminates. As a vector space, $\mathcal {S}$ can be identified with symmetric formal tensor series. Denote by $ \mathcal {S}_0$ and $\mathcal {S}_1$ the affine spaces given by those $\hat {\mathbf {x}}\in \mathcal {S}$ with $ \hat {\mathbf {x}}^\emptyset =0$ and $ \hat {\mathbf {x}}^\emptyset =1$ , respectively. The usual power series in $\mathcal {S}$ define $\widehat {\exp {}}\colon \mathcal {S}_0 \to \mathcal {S}_1$ with inverse $\widehat {\log {}}\colon \mathcal {S}_1 \to \mathcal {S}_0$ , and we have
$$ \begin{align*} \pi_{\mathrm{Sym}} \circ \exp = \widehat{\exp{}} \circ \pi_{\mathrm{Sym}}, \qquad \pi_{\mathrm{Sym}} \circ \log = \widehat{\log{}} \circ \pi_{\mathrm{Sym}}. \end{align*} $$
We shall abuse notation in what follows and write $\exp $ (respectively, $\log $ ), instead of $\widehat {\exp }$ (respectively, $ \widehat {\log }$ ).
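In the dictionary representation of our earlier sketches, $\pi _{\mathrm {Sym}}$ amounts to sorting the letters of each word and merging coefficients; in particular, commutators are annihilated. A two-line illustration (ours, not from the paper):

```python
def pi_sym(x):
    """Project a {word: coefficient} dictionary onto the symmetric algebra
    by reordering each word non-decreasingly (e_w -> e_{w-hat})."""
    z = {}
    for w, c in x.items():
        v = tuple(sorted(w))
        z[v] = z.get(v, 0.0) + c
    return z

# The bracket e_12 - e_21 projects to zero:
print(pi_sym({(1, 2): 1.0, (2, 1): -1.0}))  # {(1, 2): 0.0}
```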
2.3.2 The (step-n) truncated tensor algebra
For $n\in {\mathbb {N}}$ , the subspace
$$ \begin{align*} \mathcal{I}_n := \{ \mathbf{x} \in \mathcal{T} : \mathbf{x}^{(0)} = \dots = \mathbf{x}^{(n)} = 0 \} \end{align*} $$
is a two-sided ideal of $\mathcal {T}$ . Therefore, the quotient space $\mathcal {T}/\mathcal {I}_n$ has a natural algebra structure. We denote the projection map by $\pi _{(0,n)}$ . We can identify $\mathcal {T}/\mathcal {I}_n$ with
$$ \begin{align*} \mathcal{T}^{n} := \bigoplus_{k=0}^{n} ({\mathbb{R}^d})^{\otimes k}, \end{align*} $$
equipped with truncated tensor product
$$ \begin{align*} (\mathbf{x}\mathbf{y})^{(k)} = \sum_{i+j=k} \mathbf{x}^{(i)} \mathbf{y}^{(j)}, \qquad 0 \le k \le n. \end{align*} $$
The sequence of algebras $(\mathcal {T}^n:n\ge 0)$ forms an inverse system with limit $\mathcal {T}$ . There are also canonical inclusions $\mathcal {T}^k\hookrightarrow \mathcal {T}^{n}$ for $k\le n$ ; in fact, this forms a direct system with limit $T({\mathbb {R}^d})$ . The usual power series in $\mathcal {T}^{n}$ define $\exp _n\colon \mathcal {T}^n_0 \to \mathcal {T}^n_1$ with inverse $\log _n\colon \mathcal {T}^n_1 \to \mathcal {T}^n_0$ ; we may again abuse notation and write $\exp $ and $\log $ when no confusion arises. (In the proofs in Section 7.2, we will stick to the $\exp _n$ notation to emphasize the presence of truncation.) As before, $\mathcal {T}^n_0$ has a natural Lie algebra structure, and $\mathcal {T}^n_1$ (now finite dimensional) is a bona fide Lie group.
We equip $T(\mathbb {R}^d)$ with the norm
where $|\cdot |_{({\mathbb {R}^d})^{\otimes k}}$ is the Euclidean norm on $({\mathbb {R}^d})^{\otimes k}\cong \mathbb {R}^{d^k}$ , which makes it a Banach space. The same norm makes sense in $\mathcal {T}^n$ , and the definition is consistent in the sense that $|a|_{\mathcal {T}^k} = |a|_{\mathcal {T}^n}$ for any $a \in \mathcal {T}^{n}$ and $k \ge n$ , and $|a|_{\mathcal {T}^n} = |a|_{({\mathbb {R}^d})^{\otimes n}}$ for any $a \in ({\mathbb {R}^d})^{\otimes n}$ . We therefore drop the index whenever possible and write simply $|a|$ .
2.4 Semimartingales
Let $\mathscr {D}$ be the space of adapted càdlàg processes $X\colon \Omega \times [0,T) \to \mathbb {R}$ with $T\in (0,\infty ]$ defined on some filtered probability space $(\Omega , (\mathcal {F}_t)_{0 \le t \le T}, \mathbb {P})$ . The space of semimartingales $\mathscr {S}$ is given by the processes $X\in \mathscr {D}$ that can be decomposed as
$$ \begin{align*} X = X_0 + M + A, \end{align*} $$
where $M \in \mathscr {M}_{\mathrm {loc}}$ is a càdlàg local martingale and $A\in \mathscr {V}$ is a càdlàg adapted process of locally bounded variation, both started at zero. Recall that every $X \in \mathscr {S}$ has a well-defined continuous local martingale part denoted by $X^c\in \mathscr {M}^c_{\mathrm {loc}}$ . The quadratic variation process of X is then given by
$$ \begin{align*} [X]_t = \left\langle X^{c} \right\rangle_t + \sum_{0 < s \le t} (\Delta X_s)^2, \end{align*} $$
where $\left \langle \cdot \right \rangle $ denotes the (predictable) quadratic variation of a continuous semimartingale. Covariation square brackets $[X,Y]$ , respectively angle brackets $\left \langle X^{c}, Y^{c} \right \rangle $ , for another real-valued semimartingale Y, are defined by polarization. For $q \in [1, \infty )$ , write $\mathcal {L}^{q} = L^{q}(\Omega , \mathcal {F}, \mathbb {P})$ ; then a Banach space $\mathscr {H}^q \subset \mathscr {S}$ is given by those $X\in \mathscr {S}$ with $X_0 = 0$ and
$$ \begin{align*} \Vert X \Vert_{\mathscr{H}^q} := \inf_{X = M + A} \bigg\Vert\, [M]_{T}^{1/2} + \int_0^T |\mathrm{d} A_s| \,\bigg\Vert_{\mathcal{L}^q} < \infty, \end{align*} $$
with the infimum taken over all decompositions as above.
Note that for a local martingale $M \in \mathscr {M}_{\mathrm {loc}}$ , it holds that (see [Reference Protter57, Ch. V, p. 245])
$$ \begin{align*} \Vert M \Vert_{\mathscr{H}^q} = \big\Vert\, [M]_{T}^{1/2} \big\Vert_{\mathcal{L}^q}. \end{align*} $$
For a process $X\in \mathscr {D}$ , we define
$$ \begin{align*} \Vert X \Vert_{\mathscr{S}^q} := \Big\Vert \sup_{0 \le t < T} |X_t| \Big\Vert_{\mathcal{L}^q} \end{align*} $$
and define the space $\mathscr {S}^q\subset \mathscr {S}$ of semimartingales $X\in \mathscr {S}$ such that $\Vert {X}\Vert _{\mathscr {S}^q}< \infty $ . Note that there exists a constant $c_q>0$ depending on q such that (see [Reference Protter57, Ch. V, Theorem 2])
$$ \begin{align*} \Vert X \Vert_{\mathscr{S}^q} \le c_q \Vert X \Vert_{\mathscr{H}^q}. \end{align*} $$
We view d-dimensional semimartingales, $X= \sum _{i=1}^d X^i e_i \in \mathscr {S} (\mathbb {R}^d)$ , as special cases of tensor series valued semimartingales $\mathscr {S} (\mathcal {T}\,)$ of the form
$$ \begin{align*} \mathbf{X} = \sum_{w \in \mathcal{W}_d} \mathbf{X}^{w} e_w, \end{align*} $$
with each component $\mathbf {X}^w$ a real-valued semimartingale. This extends mutatis mutandis to the spaces of $\mathcal {T}$ -valued adapted càdlàg processes $\mathscr {D}(\mathcal {T}\,)$ , martingales $\mathscr {M}(\mathcal {T}\,)$ and adapted càdlàg processes with finite variation $\mathscr {V}(\mathcal {T}\,)$ . Note also that we typically deal with $\mathcal {T}_0$ -valued semimartingales, which amounts to having only words with length $|w| \ge 1$ . Standard notions such as continuous local martingale $\mathbf {X}^{c}$ and jump process $\Delta \mathbf {X}_t = \mathbf {X}_t - \mathbf {X}_{t^-}$ are defined componentwise.
Brackets: Now let $\mathbf {X}$ and $\mathbf {Y}$ be $\mathcal {T}$ -valued semimartingales. We define the (non-commutative) outer quadratic covariation bracket of $\mathbf {X}$ and $\mathbf {Y}$ by
Similarly, define the (non-commutative) inner quadratic covariation bracket by
$$ \begin{align*} [ \mathbf{X}, \mathbf{Y} ] := \sum_{w_1, w_2 \in \mathcal{W}_d} [ \mathbf{X}^{w_1}, \mathbf{Y}^{w_2} ]\; e_{w_1} e_{w_2}; \end{align*} $$
for continuous $\mathcal {T}$ -valued semimartingales $\mathbf {X}, \mathbf {Y}$ , this coincides with the predictable quadratic covariation
$$ \begin{align*} \left\langle \mathbf{X}, \mathbf{Y} \right\rangle := \sum_{w_1, w_2 \in \mathcal{W}_d} \left\langle \mathbf{X}^{w_1}, \mathbf{Y}^{w_2} \right\rangle e_{w_1} e_{w_2}. \end{align*} $$
As usual, we may write $[ \mathbf{X} ] \equiv [ \mathbf{X}, \mathbf{X} ]$
and $\left \langle \mathbf {X} \right \rangle \equiv \left \langle \mathbf {X}, \mathbf {X} \right \rangle $ .
$\mathscr {H}$ -spaces: The definition of $\mathscr {H}^{q}$ -norm naturally extends to tensor valued martingales. More precisely, for $\mathbf {X}^{(n)} \in \mathscr {S}(({\mathbb {R}^d})^{\otimes n})$ with $n\in {\mathbb {N}_{\ge 1}}$ and $q\in [1,\infty )$ , we define
where the infimum is taken over all possible decompositions $\mathbf {X}^{(n)} = \mathbf {M} + \mathbf {A}$ with $\mathbf {M}\in \mathscr {M}_{\mathrm {loc}}(({\mathbb {R}^d})^{\otimes n})$ and $\mathbf {A}\in \mathscr {V}(({\mathbb {R}^d})^{\otimes n})$ , where
with the supremum taken over all partitions of the interval $[0,T]$ . One may readily check that
Further define the following subspace $\mathscr {H}^{q,N} \subset \mathscr {S}(\mathcal {T}_0^{N})$ of homogeneously q-integrable semimartingales
where for any $\mathbf {X} \in \mathscr {S}(\mathcal {T\,}^{N})$ , we define
Note that $\vert \mkern -2.5mu\vert \mkern -2.5mu\vert {\cdot }\vert \mkern -2.5mu\vert \mkern -2.5mu\vert _{\mathscr {H}^{q,N}}$ is sub-additive and positive definite on $\mathscr {H}^{q, N}$ , and it is homogeneous under dilation in the sense that
We also introduce the following subspace of $\mathscr {S}(\mathcal {T}\,)$ :
Note that if $\mathbf {X}\in \mathscr {S}(\mathcal {T}\,)$ such that $\vert \mkern -2.5mu\vert \mkern -2.5mu\vert {\mathbf {X}^{(0,N)}}\vert \mkern -2.5mu\vert \mkern -2.5mu\vert _{\mathscr {H}^{1,N}} < \infty $ for all $N\in {\mathbb {N}_{\ge 1}}$ , then it also holds that $\mathbf {X}\in \mathscr {H}^{\infty -}(\mathcal {T}\,)$ .
Stochastic integrals: We are now going to introduce a notation for the stochastic integration with respect to tensor valued semimartingales. Denote by $\mathcal {L}(\mathcal {T}; \mathcal {T}) = \{ f: \mathcal {T} \to \mathcal {T}\;|\; f \text { is linear}\}$ the space of endomorphisms of $\mathcal {T}$ , and let $\mathbf {F}: \Omega \times [0,T] \to \mathcal {L}(\mathcal {T}; \mathcal {T})$ with $(t, \omega ) \mapsto \mathbf {F}_t(\omega; \cdot )$ be such that it holds
where $\mathcal {I}_n\subset \mathcal {T}$ was introduced in Section 2.3.2, consisting of series with tensors of level $n+1$ and higher. In this case, we can define the stochastic Itô integral (and then analogously the Stratonovich/Marcus integral) of $\mathbf {F}$ with respect to $\mathbf {X}\in \mathscr {S}(\mathcal {T}\,)$ by
For example, let $\mathbf {Y}, \mathbf {Z} \in \mathscr {D}(\mathcal {T}\,)$ , and define $\mathbf {F} := \mathbf {Y}\,\mathrm {Id}\,\mathbf {Z}$ : that is, $\mathbf {F}_t(\mathbf {x}) = \mathbf {Y}_t \, \mathbf {x} \, \mathbf {Z}_t$ , the concatenation product from the left and right, for all $\mathbf {x}\in \mathcal {T}$ . Then we see that $\mathbf {F}$ indeed satisfies the conditions in equations (2.6) and (2.7), and we have
Another important example is given by $\mathbf {F} = (\operatorname {\mathrm {ad}} \mathbf {Y})^{k}$ for any $\mathbf {Y}\in \mathscr {D}(\mathcal {T}_0)$ and $k\in {\mathbb {N}}$ . Indeed, we immediately see that $\mathbf {F}$ satisfies the condition in equation (2.7); and recalling from equation (2.3) that the iteration of adjoint operations can be expanded in terms of left- and right-multiplication, we also see that $\mathbf {F}$ satisfies equation (2.6). More generally, let $(a_k)_{k=0}^{\infty }\subset \mathbb {R}$ , and let $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0)$ ; then the following integral
is well defined in the sense of equation (2.9). The definition of the integral with integrands of the form $\mathbf {F}: \Omega \times [0,T] \to \mathcal {L}(T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d}); \mathcal {T})$ with respect to processes $\mathbf {X} \in \mathscr {S}(T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d}))$ is completely analogous.
Quotient algebras: All of this extends in a straightforward way to the case of semimartingales in the quotient algebras of Section 2.3: that is, the symmetric and truncated algebras. In particular, $\mathbf {X}$ and $\mathbf {Y}$ in $\mathscr {S}(\mathcal {S})$ have well-defined continuous local martingale parts denoted by $\mathbf {X}^c,\mathbf {Y}^c$ , respectively, with inner (predictable) quadratic covariation given by
Write $\mathcal {S}^N$ for the truncated symmetric algebra, linearly spanned by $\{ \hat {e}_{w}: w \in \widehat {\mathcal {W}}_d, |w| \le N\}$ and $\mathcal {S}^N_0$ for those elements with zero scalar entry. In complete analogy with non-commutative setting discussed above, we then write $\widehat {\mathscr {H}}^{q,N} \subset \mathscr {S}(\mathcal {S}^N_0)$ for the corresponding space of homogeneously q-integrable semimartingales.
2.5 Diamond products
We extend the notion of the diamond product introduced in [Reference Alos, Gatheral and Radoičić3] for continuous scalar semimartingales to our setting. Denote by $\mathbb {E}_t$ the conditional expectation with respect to the sigma algebra $\mathcal {F}_t$ . Then we have the following:
Definition 2.1. For $\mathbf {X}$ and $\mathbf {Y}$ in $\mathscr {S}(\mathcal {T}\,)$ , define
whenever the $\mathcal {T}$ -valued quadratic covariation that appears on the right-hand side is integrable. Similar to the previous section, we also define an outer diamond, for $\mathbf {X},\mathbf {Y}\in \mathscr {S}(\mathcal {T}\,)$ , by
This definition extends immediately to semimartingales with values in the quotient algebras of Section 2.3. In particular, given $\hat {\mathbf {X}}$ and $\hat {\mathbf {Y}}$ in $\mathscr {S}(\mathcal {S})$ , we have
where the last expression is given in terms of diamond products of scalar semimartingales.
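For scalar continuous martingales, the diamond product is a conditional forward covariation and is straightforward to estimate by simulation. A short numpy sketch (our illustration; correlation, horizon and discretization parameters are arbitrary choices) checking $(A \diamond B)_0(T) = \mathbb {E} \left \langle A, B \right \rangle _{0,T} = \rho T$ for two correlated Brownian motions:

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths, rho = 1.0, 400, 10000, 0.3
dt = T / n_steps

dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
dA = dW[:, :, 0]                                     # increments of A
dB = rho * dA + np.sqrt(1 - rho ** 2) * dW[:, :, 1]  # increments of B

covar = (dA * dB).sum(axis=1)   # realized bracket <A, B>_{0,T}, per path
print(covar.mean())             # ~ (A diamond B)_0(T) = rho * T = 0.3
```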
Lemma 2.2. Let $p,q,r\in [1,\infty )$ such that $1/p + 1/q + 1/r < 1$ , and let $X\in \mathscr {M}_{\mathrm {loc}}^{c}(({\mathbb {R}^d})^{\otimes l})$ , $Y\in \mathscr {M}_{\mathrm {loc}}^{c}(({\mathbb {R}^d})^{\otimes m})$ , and $Z\in \mathscr {D}(({\mathbb {R}^d})^{\otimes n})$ with $l,m,n\in {\mathbb {N}}$ , such that $X\in \mathscr {H}^{p}$ , $Y\in \mathscr {H}^{q}$ and $Z \in \mathscr {S}^{r}$ . Then it holds that for all $0 \le t \le T$
Proof. Using the Kunita-Watanabe inequality (Lemma 7.1) we see that the expectation on the right-hand side is well defined. Further note that it follows from Emery’s inequality (Lemma 7.3) and Doob’s maximal inequality that the local martingale
is a true martingale. Recall the definition of the diamond product, and observe that the difference of the left- and right-hand sides of the above equation is a conditional expectation of a martingale increment and is hence zero.
2.6 Generalized signatures
We now give the precise meaning of equation (1.1): that is, $\mathrm d\mathbf {S}=\mathbf {S}\,{\circ \mathrm d}\mathbf {X}$ , or component-wise, for every word $w\in \mathcal {W}_d$ ,
where the driving noise $\mathbf {X}$ is a $\mathcal {T}_0$ -valued semimartingale, so that $\mathbf {X}^{\emptyset } \equiv 0$ . Following [Reference Marcus49, Reference Marcus50, Reference Estrade20, Reference Kurtz, Pardoux and Protter41, Reference Friz and Shekhar25, Reference Bruned, Curry and Ebrahimi-Fard8], the integral meaning of this equation, started at time s from $\mathbf {s} \in \mathcal {T}_1$ , for times $t \ge s$ , is given by
leaving the component-wise version to the reader. We have
Proposition 2.3. Suppose $\mathbf {X}$ takes values in $\mathcal {T}_0$ . For every $s \ge 0$ and $\mathbf {s} \in \mathcal {T}_1$ , equation (2.11) has a unique global solution on $\mathcal {T}_1$ starting from $\mathbf {S}_s=\mathbf {s}$ .
Proof. Note that $\mathbf {S}$ solves equation (2.11) iff $\mathbf {s}^{-1} \mathbf {S}$ solves the same equation started from $1 \in \mathcal {T}_1$ . We may thus take $\mathbf {s} = 1$ without loss of generality. The graded structure of our problem, and more precisely that $\mathbf {X} = (0,X,\mathbb {X},\dots )$ in equation (2.11) has no scalar component, shows that the (necessarily) unique solution is given explicitly by iterated integration, as may be seen explicitly when writing out $\mathbf {S}^{(0)} \equiv 1$ , $\mathbf {S}^{(1)}_t = \int _s^t \mathrm d X = X_{s,t} \in \mathbb {R}^d$ ,
and so on. (In particular, we do not need to rely on abstract existence, uniqueness results for Marcus SDEs [Reference Kurtz, Pardoux and Protter41] or Lie group stochastic exponentials [Reference Hakim-Dowek and Lépingle31].)
Definition 2.4. Let $\mathbf {X}$ be a $\mathcal {T}_0$ -valued semimartingale defined on some interval $[s,t]$ . Then
is defined to be the unique solution to equation (2.11) on $[s,t]$ , such that $\mathrm {Sig}(\mathbf {X})_{s,s}=1$ .
The following can be seen as a (generalized) Chen relation.
Lemma 2.5. Let $\mathbf {X}$ be a $\mathcal {T}_0$ -valued semimartingale on $[0,T]$ and $0 \le s \le t \le u \le T$ . Then the following identity holds with probability one, for all such $s,t,u$ :
(2.12) $$ \begin{align} \mathrm{Sig}(\mathbf{X})_{s,u} = \mathrm{Sig}(\mathbf{X})_{s,t}\, \mathrm{Sig}(\mathbf{X})_{t,u}. \end{align} $$
Proof. Call $\Phi _{t \leftarrow s} \mathbf {s} := \mathbf {S}_t$ the solution to equation (2.11) at time $t \ge s$ , started from $\mathbf {S}_s = \mathbf {s}$ . By uniqueness of the solution flow, we have $ \Phi _{u \leftarrow t} \circ \Phi _{t \leftarrow s} = \Phi _{u \leftarrow s}. $ It now suffices to remark that, thanks to the multiplicative structure of equation (2.11), we have $ \Phi _{t \leftarrow s} \mathbf {s} = \mathbf {s} \mathrm {Sig}(\mathbf {X})_{s,t}$ .
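For a piecewise linear path, Lemma 2.5 can be checked by hand: each linear piece with increment $\mathbf {x}_i$ has signature $\exp (\mathbf {x}_i)$ , and the level-two iterated integrals over two pieces decompose into within-piece terms $\mathbf {x}_i\mathbf {x}_i/2$ and the cross term $\mathbf {x}_1\mathbf {x}_2$ . A sketch (our illustration, dictionary representation as before):

```python
from math import factorial

N = 2  # compare level-2 components

def concat(x, y):
    z = {}
    for w1, c1 in x.items():
        for w2, c2 in y.items():
            if len(w1) + len(w2) <= N:
                z[w1 + w2] = z.get(w1 + w2, 0.0) + c1 * c2
    return z

def add(x, y, s=1.0):
    z = dict(x)
    for w, c in y.items():
        z[w] = z.get(w, 0.0) + s * c
    return z

def texp(x):
    out, p = {(): 1.0}, {(): 1.0}
    for k in range(1, N + 1):
        p = concat(p, x)
        out = add(out, p, 1.0 / factorial(k))
    return out

x1 = {(1,): 1.0, (2,): -0.3}  # increment of the first linear piece
x2 = {(1,): 0.2, (2,): 1.5}   # increment of the second linear piece

chen = concat(texp(x1), texp(x2))  # Sig_{0,2} = Sig_{0,1} Sig_{1,2}

# Direct level-2 iterated integrals of the two-piece path:
# within-piece terms x_i x_i / 2 plus the cross term x_1 x_2.
direct = add(concat(x1, x2),
             add(concat(x1, x1), concat(x2, x2)), 0.5)
for w in direct:
    assert abs(chen.get(w, 0.0) - direct[w]) < 1e-12
print("Chen relation verified at level 2")
```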
3 Expected signatures and signature cumulants
3.1 Definitions and existence
Throughout this section, let $\mathbf {X} \in \mathscr {S}(\mathcal {T}_0)$ be defined on a filtered probability space $(\Omega , \mathcal {F}, (\mathcal {F}_t)_{0 \le t \le T}, \mathbb {P})$ . Recall that $\mathbb {E}_t$ denotes the conditional expectation with respect to the sigma algebra $\mathcal {F}_t$ . When $\mathbb {E}(|\mathrm {Sig}(\mathbf {X})^w_{0,t}|)<\infty $ for all $0 \le t \le T$ and all words $w\in \mathcal {W}_d$ , then the (conditional) expected signature
$$ \begin{align*} \boldsymbol{\mu}_t(T) := \mathbb{E}_t\big( \mathrm{Sig}(\mathbf{X})_{t,T} \big), \qquad 0 \le t \le T, \end{align*} $$
is well defined. In this case, we can also define the (conditional) signature cumulant of $\mathbf {X}$ by
$$ \begin{align*} \boldsymbol{\kappa}_t(T) := \log \boldsymbol{\mu}_t(T). \end{align*} $$
An important observation is the following:
Lemma 3.1. Given $\mathbb {E}(|\mathrm {Sig}(\mathbf {X})^w_{0,t}|)<\infty $ for all $0 \le t \le T$ and words $w\in \mathcal {W}_d$ , then $\boldsymbol {\mu }(T) \in \mathscr {S}(\mathcal {T}_1)$ and $\boldsymbol {\kappa }(T)\in \mathscr {S}(\mathcal {T}_0)$ .
Proof. It follows from the relation in equation (2.12) that
$$ \begin{align*} \boldsymbol{\mu}_t(T) = \mathbb{E}_t\big( \mathrm{Sig}(\mathbf{X})_{t,T} \big) = \big( \mathrm{Sig}(\mathbf{X})_{0,t} \big)^{-1}\, \mathbb{E}_t\big( \mathrm{Sig}(\mathbf{X})_{0,T} \big). \end{align*} $$
Therefore, projecting to the tensor components, we have
$$ \begin{align*} \boldsymbol{\mu}^{w}_t(T) = \sum_{u v = w} \big( ( \mathrm{Sig}(\mathbf{X})_{0,t} )^{-1} \big)^{u}\; \mathbb{E}_t\big( \mathrm{Sig}(\mathbf{X})^{v}_{0,T} \big), \qquad w \in \mathcal{W}_d. \end{align*} $$
Since $(\mathrm {Sig}(\mathbf {X})^w_{0, t})_{0 \le t \le T}$ and $(\mathbb {E}_t(\mathrm {Sig}(\mathbf {X})^w_{0,T})_{0 \le t \le T}$ are semimartingales (the latter in fact a martingale), it follows from Itô’s product rule that $\boldsymbol {\mu }^w(T)$ is also a semimartingale for all words $w\in \mathcal {W}_d$ , hence $\boldsymbol {\mu }(T)\in \mathscr {S}(\mathcal {T}_1)$ . Further recall that $\boldsymbol {\kappa }(T) = \log (\boldsymbol {\mu }(T))$ , and therefore it follows from the definition of the logarithm on $\mathcal {T}_1$ that each component $\boldsymbol {\kappa }(T)^w$ with $w\in \mathcal {W}_d$ is a polynomial of $(\boldsymbol {\mu }(T)^{v})_{v\in \mathcal {W}_d, |v|\le |w|}$ . Hence it follows again by Itô’s product rule that $\boldsymbol {\kappa }(T)\in \mathscr {S}(\mathcal {T}_0)$ .
It is of strong interest to have a more explicit necessary condition for the existence of the expected signature. The following theorem, the proof of which can be found in Section 7.1, yields such a criterion.
Theorem 3.2. Let $q\in [1, \infty )$ and $N\in {\mathbb {N}_{\ge 1}}$ ; then there exist two constants $c,C>0$ depending only on d, N and q, such that for all $\mathbf {X} \in \mathscr {H}^{q,N}$
In particular, if $\mathbf {X}\in \mathscr {H}^{\infty -}(\mathcal {T}_0)$ , then $\mathrm {Sig}(\mathbf {X})_{0,\cdot }\in \mathscr {H}^{\infty -}(\mathcal {T}_1)$ , and the expected signature exists.
Remark 3.3. Let $\mathbf {X} = (0, M, 0, \dotsc , 0)$ , where $M \in \mathscr {M}({\mathbb {R}^d})$ is a martingale; then
and we see that the above estimate implies that
This estimate is already known and follows from the Burkholder-Davis-Gundy inequality for enhanced martingales, which was first proved in the continuous case in [Reference Friz and Victoir26] and for the general case in [Reference Chevyrev and Friz12].
Remark 3.4. When $q>1$ , the above estimate also holds true when the signature $\mathrm {Sig}(\mathbf {X})_{0,\cdot }$ is replaced by the conditional expected signature $\boldsymbol {\mu }(T)$ or the conditional signature cumulant $\boldsymbol {\kappa }(T)$ . This will be seen in the proof of Theorem 4.1 below (more precisely in Claim 7.12).
3.2 Moments and cumulants
We quickly discuss the development of a symmetric algebra valued semimartingale, more precisely, $\hat {\mathbf {X}} \in \mathscr {S}(\mathcal {S}_0)$ , in the group $\mathcal {S}_1$ . That is, we consider
(3.1) $$ \begin{align} \mathrm{d} \hat{\mathbf{S}} = \hat{\mathbf{S}} \,{\circ \mathrm{d}} \hat{\mathbf{X}}. \end{align} $$
It is immediate (validity of chain rule) that the unique solution to this equation, at time $t \ge s$ , started at $\hat {\mathbf {S}}_s = \hat {\mathbf {s}} \in \mathcal {S}_1$ is given by
$$ \begin{align*} \hat{\mathbf{S}}_t = \hat{\mathbf{s}}\, \exp\big( \hat{\mathbf{X}}_t - \hat{\mathbf{X}}_s \big), \end{align*} $$
and we also write $\hat {\mathbf {S}}_{s,t} = \exp ( \hat {\mathbf {X}}_t-\hat {\mathbf {X}}_s )$ for this solution started at time s from $1\in \mathcal {S}_1$ . The relation to signatures is as follows. Recall that $\pi _{\mathrm {Sym}}$ denotes the canonical projection from $\mathcal {T}$ to $\mathcal {S}$ .
Proposition 3.5. Let $\mathbf {X},\mathbf Y\in \mathscr {S}(\mathcal {T}\,)$ , and define $\hat {\mathbf {X}} := \pi _{\mathrm {Sym}}(\mathbf {X})$ and $\hat {\mathbf {Y}} := \pi _{\mathrm {Sym}}(\mathbf {Y})$ . Then it holds that
-
(i) $\hat {\mathbf {X}}, \hat {\mathbf {Y}}\in \mathscr {S}(\mathcal {S})$ , and for the indefinite Itô integral, we have in the sense of indistinguishable processes
(3.2) $$ \begin{align} \pi_{\mathrm{Sym}} \int \mathbf{X} \mathrm d\mathbf Y= \int \hat{\mathbf{X}}\,\mathrm d\hat{\mathbf Y}, \end{align} $$ -
(ii) $\hat {\mathbf {S}} := \pi _{\mathrm {Sym}}{\mathrm {Sig}(\mathbf {X})_{s,\cdot }}$ solves equation (3.1) started at time s from $1\in \mathcal {S}_1$ and driven by $\hat {\mathbf {X}}$ . In particular,
$$ \begin{align*}\hat{\mathbf{S}}_{s,t}=\exp(\hat{\mathbf{X}}_t-\hat{\mathbf{X}}_s)\end{align*} $$for all $0 \le s \le t \le T$ .
Proof. (i) That the projections $\hat {\mathbf {X}},\hat {\mathbf Y}$ define $\mathcal {S}$ -valued semimartingales follows from the componentwise definition and the fact that the canonical projection is linear. In particular, the right-hand side of equation (3.2) is well defined. To show equation (3.2), we apply the canonical projection $\pi _{\mathrm {Sym}}$ to both sides of equation (2.9) after choosing $Z_t\equiv \mathbf 1$ ; and using the explicit action of $\pi _{\mathrm {Sym}}$ on basis tensors, we obtain the identity
by equation (2.4). Part (ii) is then immediate.
Assuming componentwise integrability, we then define symmetric moments and cumulants of the $\mathcal {S}$ -valued semimartingale $\hat {\mathbf {X}}$ by
$$ \begin{align*} \hat{\boldsymbol{\mu}}_t(T) := \mathbb{E}_t\big( \exp( \hat{\mathbf{X}}_T - \hat{\mathbf{X}}_t ) \big), \qquad \hat{\boldsymbol{\kappa}}_t(T) := \log \hat{\boldsymbol{\mu}}_t(T), \end{align*} $$
for $0\le t \le T$ . If $\hat {\mathbf {X}} = \pi _{\mathrm {Sym}}(\mathbf {X})$ , for $ \mathbf {X} \in \mathscr {S}(\mathcal {T}\,)$ , with expected signature and signature cumulants $\boldsymbol {\mu }(T)$ and $\boldsymbol {\kappa }(T)$ , it is then clear that the symmetric moments and cumulants of $\hat {\mathbf {X}}$ are obtained by projection,
$$ \begin{align*} \hat{\boldsymbol{\mu}}(T) = \pi_{\mathrm{Sym}}\big( \boldsymbol{\mu}(T) \big), \qquad \hat{\boldsymbol{\kappa}}(T) = \pi_{\mathrm{Sym}}\big( \boldsymbol{\kappa}(T) \big). \end{align*} $$
Example 3.6. Let $ X$ be an $\mathbb {R}^d$ -valued martingale in $\mathscr H^{\infty -}$ , and $\hat {\mathbf {X}}_t:=\sum _{i=1}^dX^i_t\hat {e}_i$ . Then
$$ \begin{align*} \hat{\boldsymbol{\mu}}_t(T) = \mathbb{E}_t \exp\big( \hat{\mathbf{X}}_T - \hat{\mathbf{X}}_t \big) = \sum_{k \ge 0} \frac{1}{k!}\, \mathbb{E}_t\big( ( \hat{\mathbf{X}}_T - \hat{\mathbf{X}}_t )^{k} \big) \end{align*} $$
consists of the (time-t conditional) multivariate moments of $X_T-X_t \in \mathbb {R}^d$ . Here, the series on the right-hand side is understood in the formal sense. It readily follows, also noted in [Reference Bonnier and Oberhauser7, Example 3.3], that $\hat {\boldsymbol {\kappa }}_t (T) = \log (\hat {\boldsymbol {\mu }}_t(T))$ consists precisely of the multivariate cumulants of $X_T-X_t$ . Note that the symmetric moments and cumulants of the scaled process $a X$ , $a \in \mathbb {R}$ , are precisely given by $\delta _a \hat {\boldsymbol {\mu }}$ and $\delta _a \hat {\boldsymbol {\kappa }}$ , where the linear dilation map is defined by $\delta _a\colon \hat {e}_w \mapsto a^{|w|} \hat {e}_w$ . The situation is similar for $a \cdot X=(a_1X^1,\dotsc ,a_dX^d)$ , $a \in \mathbb {R}^d$ , but now with $\delta _a\colon \hat {e}_w \mapsto a^w \hat {e}_1^{|w|}$ with $a^w = a_1^{n_1} \cdots a_d^{n_d}$ , where $n_i$ denotes the multiplicity of the letter $i \in \{1,\dots , d\}$ in the word w.
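For $d=1$ , the statement of this example is easy to test: symmetric signature cumulants are classical cumulants, and the passage from moments to cumulants is a truncated logarithm of the moment series. A sketch (our illustration) recovering the cumulants of a Poisson(1) random variable, whose moments are the Bell numbers and whose cumulants all equal 1:

```python
from math import factorial

N = 5
bell = [1, 1, 2, 5, 15, 52]  # E[X^k] for X ~ Poisson(1): the Bell numbers

# Moment series mu = sum_k E[X^k]/k! * e1^k, stored by level k.
mu = {k: bell[k] / factorial(k) for k in range(N + 1)}

def mul(a, b):
    """Truncated product of two power series in one indeterminate."""
    c = {k: 0.0 for k in range(N + 1)}
    for i, ai in a.items():
        for j, bj in b.items():
            if i + j <= N:
                c[i + j] += ai * bj
    return c

# kappa = log(mu) via the truncated logarithm series.
u = dict(mu); u[0] -= 1.0
kappa, p = {k: 0.0 for k in range(N + 1)}, {0: 1.0}
for k in range(1, N + 1):
    p = mul(p, u)
    for j, pj in p.items():
        kappa[j] += (-1.0) ** (k + 1) / k * pj

print([round(factorial(k) * kappa[k], 10) for k in range(1, N + 1)])
# -> [1.0, 1.0, 1.0, 1.0, 1.0]: all cumulants of Poisson(1) equal 1
```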
We next consider linear combinations, $\hat {\mathbf {X}} = a X + b \langle X \rangle $ , for general pairs $a,b \in \mathbb {R}$ , having already dealt with $b=0$ . The special case $b = - a^2/2$ (by scaling, there is no loss of generality in taking $(a,b) = (1,-1/2)$ ) yields an (at least formally) familiar exponential martingale identity.
Example 3.7. Let $ X$ be an $\mathbb {R}^d$ -valued martingale in $\mathscr H^{\infty -}$ , and define
$$ \begin{align*} \hat{\mathbf{X}}_t := \sum_{i=1}^{d} X^{i}_t\, \hat{e}_i - \frac{1}{2} \sum_{i,j=1}^{d} \left\langle X^{i}, X^{j} \right\rangle_t\, \hat{e}_i \hat{e}_j. \end{align*} $$
In this case, we have trivial symmetric cumulants $\hat {\boldsymbol {\kappa }}_t(T)=0$ for all $0\le t\le T$ . Indeed, Itô’s formula shows that $t\mapsto \exp (\hat {\mathbf {X}}_t)$ is an $\mathcal {S}_1$ -valued martingale, so that
$$ \begin{align*} \hat{\boldsymbol{\mu}}_t(T) = \mathbb{E}_t\big( \exp( \hat{\mathbf{X}}_T ) \big) \exp\big( -\hat{\mathbf{X}}_t \big) = 1. \end{align*} $$
While the symmetric cumulants of the last example carry no information, it suffices to work with
$$ \begin{align*} \hat{\mathbf{X}} = a X + b \left\langle X \right\rangle, \qquad (a,b) \in \mathbb{R}^2, \quad b \neq -a^2/2, \end{align*} $$
in which case $\hat {\boldsymbol {\mu }} = \hat {\boldsymbol {\mu }} (a,b), \hat {\boldsymbol {\kappa }} = \hat {\boldsymbol {\kappa }}(a,b)$ contain full information of the joint moments of X and its quadratic variation process. A recursion of these was constructed as a diamond expansion in [Reference Friz, Gatheral and Radoičić23].
4 Main results
4.1 Functional equation for signature cumulants
Let $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0)$ be defined on a filtered probability space $(\Omega , \mathcal {F}, (\mathcal {F}_t)_{0\le t\le T<\infty },\mathbb {P})$ satisfying the usual conditions. For all $\mathbf {x}\in \mathcal {T}_0$ (or $\mathcal {T}_0^N$ ), define the following operators, with Bernoulli numbers $(B_k)_{k\ge 0} = (1, -\frac {1}{2}, \frac {1}{6},\dotsc )$ ,
noting $G(z) = (\exp (z)-1)/z$ , $H(z) = G^{-1}(z) = z/(\exp (z)-1)$ .
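The Bernoulli coefficients $B_k/k!$ entering H are cheap to generate; a small sketch (our own helper, using the standard recurrence $\sum _{j=0}^{k}\binom {k+1}{j}B_j = 0$ for $k\ge 1$ , with $B_0=1$ ):

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(n):
    """B_0, ..., B_n from sum_{j<=k} C(k+1, j) B_j = 0 (k >= 1), B_0 = 1."""
    B = [Fraction(0)] * (n + 1)
    B[0] = Fraction(1)
    for k in range(1, n + 1):
        B[k] = -sum(comb(k + 1, j) * B[j] for j in range(k)) / (k + 1)
    return B

B = bernoulli(6)
print(B)  # [1, -1/2, 1/6, 0, -1/30, 0, 1/42]
print([B[k] / factorial(k) for k in range(7)])  # series coefficients of H
```

Our main result is the following.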
Theorem 4.1. Let $\mathbf {X} \in \mathscr {H}^{\infty -}(\mathcal {T}_0)$ ; then the signature cumulant $\boldsymbol {\kappa } =\boldsymbol {\kappa } (T) = (\log \mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))_{0 \le t \le T}$ is the unique solution (up to indistinguishability) of the following functional equation: for all $0 \le t \le T$
Equivalently, $\boldsymbol {\kappa } =\boldsymbol {\kappa } (T) $ is the unique solution to
Furthermore, if $\mathbf {X}\in \mathscr {H}^{1,N}$ for some $N\in {\mathbb {N}_{\ge 1}}$ , then the identities in equations (4.2) and (4.3) still hold true for the truncated signature cumulant $\boldsymbol {\kappa } := (\log \mathbb {E}_t(\mathrm {Sig}(\mathbf {X}^{(0,N)})_{t,T}))_{0\le t \le T}$ .
Proof. We postpone the proof of the fact that $\boldsymbol {\kappa }$ satisfies equations (4.2) and (4.3) to section Section 7.2. The uniqueness part of the statement can be easily seen as follows. Regarding equation (4.2), we first note that it holds
where we have used that $\boldsymbol {\kappa }_T \equiv 0$ (and the fact that the conditional expectation is well defined, which is shown in the first part of the proof). Hence, after subtracting the identity from G, we can bring $\boldsymbol {\kappa }_t$ to the left-hand side in equation (4.2). This identity is an equality of tensor series in $\mathcal {T}_0$ and can be projected to yield an equality for each tensor level of the series. As presented in more detail in the following subsection, we see that projecting the latter equation to tensor level, say $n\in {\mathbb {N}_{\ge 1}}$ , the right-hand side only depends on $\boldsymbol {\kappa }^{(k)}$ for $k < n$ , hence giving an explicit representation of $\boldsymbol {\kappa }^{(n)}$ in terms of $\mathbf {X}$ and strictly lower tensor levels of $\boldsymbol {\kappa }$ . Therefore equation (4.2) characterizes $\boldsymbol {\kappa }$ up to a modification and then, due to right-continuity, up to indistinguishability. The same argument applies to equation (4.3), referring to the following subsections for details on the recursion.
Diamond formulation: The functional equations given in Theorem 4.1 above can be phrased in terms of the diamond product between $\mathcal {T}_0$ -valued semimartingales. Writing $\mathbf {J}_t(T) = \sum _{t < u \le T} (\dots )$ for the last (jump) sum in equation (4.2), and invoking Lemma 2.2 (which applies just the same with outer diamonds), this equation can be written as
and a similar form may be given for equation (4.3). While one may or may not prefer this equation to equation (4.2), diamonds become very natural in $d=1$ (or upon projection to the symmetric algebra; see also Section 5.2). In this case, $G = \mathrm {Id}$ , $Q = \mathrm {Id} \odot \mathrm {Id}$ ; and with identities of the form
some simple rearrangement, using bilinearity of the diamond product, gives
If we further impose martingality and continuity, we arrive at
4.2 Recursive formulas for signature cumulants
Theorem 4.1 allows for an iterative computation of signature cumulants, trivially started from
$$ \begin{align*} \boldsymbol{\kappa}^{(1)}_t(T) = \mathbb{E}_t\big( \mathbf{X}^{(1)}_{t,T} \big). \end{align*} $$
The second signature cumulant, obtained from Theorem 4.1 or first principles, reads
For instance, consider the special case with vanishing higher-order components, $\mathbf {X}^{(i)} \equiv 0$ , for $i \ne 1$ , and $\mathbf {X} = \mathbf {X}^{(1)} \equiv M$ , a d-dimensional continuous square-integrable martingale. In this case, $\boldsymbol {\kappa }^{(1)} = \boldsymbol {\mu }^{(1)} \equiv 0$ , and from the very definition of the logarithm relating $\boldsymbol {\kappa }$ and $\boldsymbol {\mu }$ , we have $\boldsymbol {\kappa }^{(2)} = \boldsymbol {\mu }^{(2)} - \frac 12 \boldsymbol {\mu }^{(1)} \boldsymbol {\mu }^{(1)} = \boldsymbol {\mu }^{(2)}$ . It then follows from Stratonovich–Itô correction that
$$ \begin{align*} \boldsymbol{\kappa}^{(2)}_t(T) = \boldsymbol{\mu}^{(2)}_t(T) = \tfrac{1}{2}\, \mathbb{E}_t\big( \left\langle M \right\rangle_{t,T} \big), \end{align*} $$
which is indeed a (very) special case of the general expression for $\boldsymbol {\kappa }^{(2)}$ .
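In the Brownian case, this reduces to Fawcett’s formula $\boldsymbol {\kappa }(T) = (T/2)\mathbf {I}_d$ from the introduction, which is easy to test by simulation. A short numpy sketch (our illustration; discretization and sample sizes are arbitrary choices) approximating the level-two Stratonovich signature of a 2-dimensional Brownian motion by midpoint Riemann sums:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_steps, n_paths = 1.0, 300, 5000
dt = T / n_steps

dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps, 2))
B = np.cumsum(dB, axis=1)
B_prev = np.concatenate([np.zeros((n_paths, 1, 2)), B[:, :-1, :]], axis=1)

mid = 0.5 * (B_prev + B)                    # midpoint rule = Stratonovich
sig2 = np.einsum('pki,pkj->pij', mid, dB)   # level-2 signature, per path
print(sig2.mean(axis=0))                    # ~ (T/2) I_2 = [[0.5, 0], [0, 0.5]]
```

We now treat general higher-order signature cumulants.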
Corollary 4.2. Let $\mathbf {X}\in \mathscr {H}^{1,N}$ for some $N\in \mathbb {N}_{\ge 1}$ ; then we have
$$ \begin{align*} \boldsymbol{\kappa}^{(1)}_t(T) = \mathbb{E}_t\big( \mathbf{X}^{(1)}_{t,T} \big) \end{align*} $$
for all $0 \le t \le T$ and for $n \in \{2, \dotsc , N\}$ , we have recursively (the right-hand side only depends on $\boldsymbol {\kappa }^{(j)},j<n$ )
with $\ell = (l_1, \dotsc , l_k)$ , $l_i \in {\mathbb {N}_{\ge 1}}$ , $|\ell |:= k\in {\mathbb {N}_{\ge 1}}$ , $\|\ell \|:= l_1 + \dotsb + l_k$ and
Proof. As in the proof of Theorem 4.1 above, in equation (4.2), we can separate the identity from G and bring the resulting $\boldsymbol {\kappa }_t$ to the left-hand side. Projecting to the tensor level n, and using that the projection can be interchanged with taking the expectation, we obtain the following equation
for all $0 \le t \le T$ , where $\mathbf {Y}\in \mathscr {S}(\mathcal {T}_0^{N})$ and $\mathbf {V}, \mathbf {C}, \mathbf {J} \in \mathscr {V}(\mathcal {T}_0^{N})$ are defined in equation (7.9). Note that we can take the conditional expectation of each term separately, as $\mathbf {X}^{(n)}$ has sufficient integrability by the assumption, and the integrability of the remaining terms is shown in the Proof of Theorem 4.1 below, more precisely in equation (7.27). The recursion then follows by spelling out the explicit composition for each of the terms appearing in the above equation. For the quadratic variation term, we may easily verify from the bilinearity of the covariation bracket that it holds
We can also take the conditional expectation of each term in the above right-hand side separately, as the integrability of these terms follows from the assumptions on $\mathbf {X}$ and Lemma 7.1. The computation of the terms $\mathbf {Y}^{(n)}, \mathbf {V}^{(n)}$ and $\mathbf {C}^{(n)}$ follows from the explicit form of the stochastic Itô integral of a power series of adjoint operations with respect to a tensor valued semimartingale in Section 2.4, more specifically in equation (2.10). We will demonstrate this for the term $\mathbf {Y}^{(n)}$ in more detail:
for all $0 \le t \le T$ . The computation of the terms $\mathbf {V}^{(n)}$ and $\mathbf {C}^{(n)}$ , in terms of $\mathrm {Qua}(\mathbf {X}, \boldsymbol {\kappa })$ and $\mathrm {Cov}(\mathbf {X}, \boldsymbol {\kappa })$ respectively, follows analogously. It then remains to show that the term $\mathbf {J}^{(n)}$ can be expressed in terms of $\mathrm {Jmp}(\mathbf {X}, \boldsymbol {\kappa })$ , which is, however, a simple combinatorial exercise.
We obtain another recursion for the signature cumulants from projecting the functional equation (4.3). Note that, apart from the first two levels, it is far from trivial to see that the following recursion is equivalent to the recursion in Corollary 4.2.
Corollary 4.3. Let $\mathbf {X}\in \mathscr {H}^{1,N}$ for some $N\in \mathbb {N}_{\ge 1}$ ; then we have
with $\ell = (l_1, \dotsc , l_k)$ , $l_i \ge 1$ , $|\ell |=k$ , $\|\ell \|=l_1 + \dotsb + l_k$ and
Proof. The recursion follows from projecting equation (4.3) to each tensor level, analogously to the way that the recursion of Corollary 4.2 follows from equation (4.2) (see the proof of Corollary 4.2).
Diamonds. All recursions here can be rewritten in terms of diamonds. In a first step, by definition, the second term in Corollary 4.2 can be rewritten as
Thanks to Lemma 2.2, we may also write
Similarly,
Inserting these expressions into Equation (4.6), we may obtain a ‘diamond’ form of the recursions in H form.
When $d=1$ (or in the projection onto the symmetric algebra; see also Section 5.2), the recursions take a particularly simple form: for $d=1$ the algebra is commutative, so that $\operatorname {\mathrm {ad}}\mathbf {x}\equiv 0$ for all $\mathbf {x}\in \mathcal {T}_0$ . Equation (4.5) then becomes
where $\mathbf {J}^{(n)}_t(T) = \sum _{|\ell |\ge 2, \; \|\ell \|=n} \mathrm {Jmp}(X,\boldsymbol {\kappa }; \ell )_{t,T}$ contains the nth tensor component of the jump contribution. The above diamond recursion can also be obtained by projecting the functional relation (4.4) to the nth tensor level. We shall revisit this in a multivariate setting and comment on related works in Section 5.2.
5 Two special cases
5.1 Variations on Hausdorff, Magnus and Baker–Campbell–Hausdorff
We now consider a deterministic driver $\mathbf {X}$ of finite variation. This includes the case when $\mathbf {X}$ is absolutely continuous, in which case we recover, up to a harmless time reversal, $t \leftrightarrow T-t$ , Hausdorff’s ODE and the classical Magnus expansion for the solution to a linear ODE in a Lie group [Reference Hausdorff32, Reference Magnus48, Reference Chen11, Reference Iserles and Nørsett33]. Our extension with regard to discontinuities seems to be new and somewhat unifies Hausdorff’s equation with multivariate Baker–Campbell–Hausdorff integral formulas.
Theorem 5.1. Let $\mathbf {X} \in \mathscr {V} (\mathcal {T}_0)$ , and more specifically $\mathbf {X}\colon [0,T] \to \mathcal {T}_0$ deterministic, càdlàg of bounded variation. The log-signature $\Omega _t=\Omega _{t}(T):= \log (\mathrm {Sig}(\mathbf {X})_{t,T})$ satisfies the integral equation
with $\Psi (z):= H(\log z)={\log z}/{(z-1)}$ as in the introduction. The sum in equation (5.1) is absolutely convergent, over (at most countably many) jump times of $\mathbf {X}$ , and vanishes when $\mathbf {X} \equiv \mathbf {X}^c$ , in which case equation (5.1) reduces to Hausdorff’s ODE (1.3).
(i) The accompanying Jump Magnus expansion becomes $\Omega ^{(1)}_{t}(T) = \mathbf {X}^{(1)}_{t,T}$ followed by
where the right-hand side only depends on $\Omega ^{(k)}, k<n$ .
(ii) If $\mathbf {X} \in \mathscr {V} (V)$ for some linear subspace $V \subset \mathcal {T}_0 = T_0\mathopen {(\mkern -3mu(}\mathbb {R}^d\mathclose {)\mkern -3mu)}$ , it follows that, for all $t\in [0,T]$ ,
and we say that $\Omega _{t}(T)$ is Lie in V. In case $V=\mathbb {R}^d$ , one speaks of (free) Lie series; see also [Reference Lyons45, Def. 6.2].
Proof. Since we are in a purely deterministic setting, the signature cumulant coincides with the log-signature $\boldsymbol {\kappa }_t(T) = \Omega _{t}(T)$ , and Theorem 4.1 applies without any expectation and angle brackets.
Using $\Delta \Omega _u = \Omega _{u} - \Omega _{u-} = \Omega _u - \log (\mathrm e^{\Delta \mathbf {X}_u}\mathrm e^{\Omega _u})$ , we see that
where we used the identity
Remark 5.2 (Baker–Campbell–Hausdorff)
The identity equation (5.2) is well-known but also easy to obtain en passant, thereby rendering the above proof self-contained. We treat directly the n-fold case. Given $\mathbf {x}_1,\dotsc ,\mathbf {x}_n\in \mathcal {T}_0$ , one defines a continuous piecewise affine linear path $(\mathbf {X}_t: 0 \le t \le n)$ with $\mathbf {X}_i - \mathbf {X}_{i-1} = \mathbf {x}_i$ . Then $\mathrm {Sig} ( \mathbf {X} |_{[i-1,i]})=\mathrm {Sig} ( \mathbf {X})_{i-1,i}=\exp (\mathbf {x}_i)$ and by Lemma 2.5 we have
$$ \begin{align*} \mathrm{Sig}(\mathbf{X})_{0,n} = \exp(\mathbf{x}_1) \exp(\mathbf{x}_2) \dotsm \exp(\mathbf{x}_n). \end{align*} $$
A computation based on Theorem 5.1, but now applied without jumps, reveals the general form
The flexibility of our Theorem 5.1 is then nicely illustrated by the fact that this n-fold BCH formula is an immediate consequence of equation (5.1), applied to a piecewise constant càdlàg path $(\mathbf {X}_t: 0 \le t \le n)$ with $\mathbf {X}_\cdot - \mathbf {X}_{i-1} \equiv \mathbf {x}_i$ on $[i-1,i)$ .
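For illustration, the identity (5.2) and its n-fold analogue are easy to check numerically in a matrix Lie algebra, where $\exp$ and $\log$ are the matrix exponential and logarithm. The following Python sketch (our illustration, not part of the above proof; all parameter choices are ours) verifies the BCH expansion up to third order:

```python
# Numerical sanity check (illustration only) of the BCH expansion
#   log(e^x e^y) = x + y + [x,y]/2 + [x,[x,y]]/12 + [y,[y,x]]/12 + O(4)
# in the matrix Lie algebra gl(3), with small random matrices.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
eps = 1e-2
x = eps * rng.standard_normal((3, 3))
y = eps * rng.standard_normal((3, 3))

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

lhs = logm(expm(x) @ expm(y))
rhs = x + y + br(x, y) / 2 + br(x, br(x, y)) / 12 + br(y, br(y, x)) / 12
# The defect is fourth order in eps, well below the retained terms.
print(np.linalg.norm(lhs - rhs))  # ~1e-8
```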
5.2 Diamond relations for multivariate cumulants
As in Section 2.3, we write $\mathcal {S}$ for the symmetric algebra over $\mathbb {R}^d$, and $\mathcal {S}_0,\mathcal {S}_1$ for those elements with scalar component $0,1$, respectively. Recall the exponential map $\exp : \mathcal {S}_0\to \mathcal {S}_1$ with globally defined inverse $\log $. Following Definition 2.1, the diamond product for $\mathcal {S}_0$-valued semimartingales $\hat {\mathbf {X}}, \hat {\mathbf {Y}}$ is another $\mathcal {S}_0$-valued semimartingale given by
with summation over all $w_1,w_2 \in \widehat {\mathcal {W}}_d$, provided all brackets are integrable. This trivially adapts to $\mathcal {S}^N$-valued semimartingales, $N\in \mathbb {N}_{\ge 1}$, in which case all words have length at most N; the summation is restricted accordingly to $|w_1|+|w_2| \le N$.
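To fix ideas in the scalar case, where the definition reduces to $(\hat {\mathbf {X}} \diamond \hat {\mathbf {Y}})_t(T) = \mathbb {E}_t\langle \hat {\mathbf {X}}, \hat {\mathbf {Y}}\rangle _{t,T}$, consider the martingale $\hat {X}_t = \mathbb {E}_t[W_T^2] = W_t^2 + (T - t)$ for a standard Brownian motion W, so that $\mathrm {d}\langle \hat {X}\rangle _s = 4 W_s^2\, \mathrm {d}s$ and $(\hat {X} \diamond \hat {X})_0(T) = 2T^2$. A minimal Monte Carlo sketch (our illustration; the example and all parameters are ours) confirms this:

```python
# Monte Carlo illustration of the scalar diamond product at t = 0:
# for Xhat_t = E_t[W_T^2] one has (Xhat <> Xhat)_0(T) = E int_0^T 4 W_s^2 ds = 2 T^2.
import numpy as np

rng = np.random.default_rng(1)
T, n_steps, n_paths = 1.0, 500, 20_000
dt = T / n_steps
dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
W = np.cumsum(dW, axis=1)
bracket = 4.0 * (W ** 2).sum(axis=1) * dt  # int_0^T 4 W_s^2 ds, path by path
print(bracket.mean(), 2 * T ** 2)          # ~ 2.0 versus exact value 2.0
```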
Theorem 5.3. (i) Let $\Xi = (0, \Xi ^{(1)},\Xi ^{(2)},\ldots )$ be an $\mathcal {F}_T$ -measurable random variable with values in $\mathcal {S}_0 (\mathbb {R}^d)$ , componentwise in $\mathcal {L}^{\infty -}$ . Then
satisfies the following functional equation, for all $0 \le t \le T$ ,
with jump component,
Furthermore, if $N\in \mathbb {N}_{\ge 1}$ , and $\Xi =(\Xi ^{(1)},\ldots ,\Xi ^{(N)})$ is $\mathcal {F}_T$ -measurable with graded integrability condition
then the identity equation (5.3) holds for the cumulants up to level N: that is, for $\mathbb {K}^{(0,N)} := \log (\mathbb {E}_t\exp (\Xi ^{(0,N)}))$ with values in $\mathcal {S}^{(N)}_0 (\mathbb {R}^d)$ .
Remark 5.4. Identity equation (5.3) is reminiscent of the quadratic form of the generalized Riccati equations for affine jump diffusions. The relation will be presented more explicitly in Remark 6.9 of Section 6.2.2, when the involved processes are assumed to have a Markov structure and the functional signature cumulant equation reduces to a PIDE system. The framework described here, however, requires neither Markov nor affine structure. We will show in Section 6.3 that such computations are also possible in the fully non-commutative setting: that is, to obtain signature cumulants of affine Volterra processes.
Proof. We first observe that since $\Xi \in \mathcal L^{\infty -}$ , by Doob’s maximal inequality and the BDG inequality, we have that $\hat {\mathbf {X}}_t:=\mathbb {E}_t\Xi $ is a martingale in $\mathscr {H}^{\infty -}(\mathcal {S}_0)$ . In particular, thanks to Theorem 3.2, the signature moments are well defined. According to Section 3.2, the signature is then given by
hence $\hat {\boldsymbol {\kappa }}_t(T)=\mathbb {K}_t(T)-\hat {\mathbf {X}}_t$ .
Projecting equation (4.3) onto the symmetric algebra yields
and equation (5.3) follows upon recalling that $(\mathbb {K}\diamond \mathbb {K})_t(T)=\mathbb {E}_t\langle \mathbb {K}(T)^c\rangle _{t,T}$ . The proof of the truncated version is left to the reader.
As a corollary, we provide a general view on recent results of [Reference Alos, Gatheral and Radoičić3, Reference Lacoin, Rhodes and Vargas42, Reference Friz, Gatheral and Radoičić23]. Note that we also include jump terms in our recursion.
Corollary 5.5. The conditional multivariate cumulants $(\mathbb {K}_t)_{0\le t\le T}$ of a random variable $\Xi $ with values in $\mathcal {S}_0(\mathbb {R}^d)$ , componentwise in $\mathcal L^{\infty -}$ satisfy the recursion
with
The analogous statement holds true in the N-truncated setting: that is, as a recursion for $n=1,\dots ,N$ under the condition in equation (5.4).
Example 5.6 (Continuous setting)
In the absence of jumps and higher-order information (i.e., $\mathbb {J} \equiv 0$, $\Xi ^{(2)} = \Xi ^{(3)} = \dots \equiv 0$), this type of cumulant recursion appears in [Reference Lacoin, Rhodes and Vargas42] and, under optimal integrability conditions ($\Xi ^{(1)}$ with finite $N$th moments), in [Reference Friz, Gatheral and Radoičić23]. (This requires a localization argument that is avoided here by directly working in the correct algebraic structure.)
Example 5.7 (Discrete filtration)
In contrast to the previous continuous example, we consider a purely discrete situation, starting from a discretely filtered probability space with filtration $(\mathcal {F}_t\colon t = 0,1,\dotsc ,T \in {\mathbb {N}})$. For $\Xi $ as in Corollary 5.5, a discrete martingale is defined by $\mathbb {E}_t \exp (\Xi )$, which we may regard as a càdlàg semimartingale with respect to $\mathcal {F}_t := \mathcal {F}_{[t]}$, and similarly for $\mathbb {K}_t(T) = {\log \mathbb {E}_t \exp (\Xi ) \in \mathcal {S}_0}$: that is, the conditional cumulants of $\Xi $. Clearly, the continuous martingale part of $\mathbb {K}(T)$ vanishes, as does any diamond product with $\mathbb {K}(T)$. What remains is the functional equation
As before, the resulting expansions are of interest. On the first level, trivially, $\mathbb {K}^{(1)}_t = \mathbb {E}_t(\Xi ^{(1)})$ , whereas on the second level we see
which one can recognize, in case $\Xi ^{(2)} = 0$, as an energy identity for the discrete square-integrable martingale $\ell _u := \mathbb {E}_u \Xi ^{(1)}$. Going further in the recursion yields increasingly non-obvious relations. Taking $\Xi ^{(2)} = \Xi ^{(3)} = \ldots \equiv 0$ for notational simplicity gives
It is interesting to note that related identities have appeared in the statistics literature under the name Bartlett identities; see also Mykland [Reference Mykland53] and the references therein.
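The second-level identity of this example can be checked exactly on a finite probability space. The following sketch (our illustration; the filtration of $T=3$ fair coin flips and the choice of $\Xi ^{(1)}$ are ours) verifies the energy identity $\operatorname {Var}(\Xi ^{(1)}) = \sum _{u=1}^{T} \mathbb {E}[(\Delta \ell _u)^2]$ for $\ell _u = \mathbb {E}_u \Xi ^{(1)}$, which underlies the second-level recursion:

```python
# Exact check of the discrete energy identity on the space of T = 3 coin flips:
# Var(Xi) = sum_u E[(l_u - l_{u-1})^2] for the martingale l_u = E[Xi | F_u].
import itertools
import numpy as np

T = 3
omegas = np.array(list(itertools.product([-1, 1], repeat=T)), float)  # uniform
xi = omegas[:, 0] * omegas[:, 1] + 0.5 * omegas[:, 2]  # any F_T-measurable r.v.

def cond_exp(values, u):
    """E[values | F_u]: average over scenarios sharing the first u flips."""
    out = np.empty_like(values)
    for key in set(map(tuple, omegas[:, :u])):
        mask = (omegas[:, :u] == key).all(axis=1)
        out[mask] = values[mask].mean()
    return out

l = [cond_exp(xi, u) for u in range(T + 1)]  # l[0] = E[Xi], ..., l[T] = Xi
var_xi = ((xi - xi.mean()) ** 2).mean()
energy = sum(((l[u] - l[u - 1]) ** 2).mean() for u in range(1, T + 1))
print(var_xi, energy)  # equal up to floating point
```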
5.3 Remark on tree representation
As illustrated in the previous section, in the case where $d=1$, or when projecting onto the symmetric algebra, our functional equation takes a particularly simple form (see Theorem 5.3). If one further specializes the situation, in particular discards all jumps, we are from an algebraic perspective in the setting of Friz, Gatheral and Radoičić [Reference Friz, Gatheral and Radoičić23], who give a tree series expansion of cumulants using binary trees. This representation follows from the fact that the diamond product of semimartingales is commutative but not associative. As an example (with notations taken from Section 5.2), in case of a one-dimensional continuous martingale, the first terms are
This expansion is organized (graded) in terms of the number of leaves in each tree, and each leaf represents the underlying martingale.
In the deterministic case, tree expansions are also known for the Magnus expansion [Reference Iserles and Nørsett33] and the BCH formula [Reference Casas and Murua9]. These expansions are also in terms of binary trees, but they differ from the ones above: the trees are required to be non-planar to account for the non-commutativity of the Lie algebra. As an example (with the notations of Section 5.1), we have
In this expansion, the nodes represent the underlying vector field and edges represent integration and application of the Lie bracket, coming from the $\operatorname {\mathrm {ad}}$ operator.
Our functional equation and the associated recursion put both contexts into a single common framework. We suspect that our general recursion (Corollary 4.2 and thereafter) allows for a sophisticated tree representation, at least in the absence of jumps, and propose to return to this question in future work.
6 Applications
6.1 Brownian and stopped Brownian signature cumulants
6.1.1 Time dependent Brownian motion
Let B be an m-dimensional standard Brownian motion defined on a probability space $(\Omega , \mathcal {F}, \mathbb {P})$ with the canonical filtration $(\mathcal {F}_t)_{t\ge 0}$, and define the continuous (Gaussian) martingale $X = (X_t)_{0\le t\le T}$ by
with $\sigma \in L^2 ([0, T],\mathbb {R}^{m\times d})$. The quadratic variation of X is finite and deterministic, and therefore we immediately see that the integrability condition $\mathbf {X} = (0, X, 0, \dots )\in \mathscr {H}^{\infty -}$ is trivially satisfied, and thus Theorem 4.1 applies. The Brownian signature cumulant $\boldsymbol {\kappa }_t(T) = \log (\mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))$ satisfies the functional equation, with $\mathbf {a}(t) := \sigma (t)\sigma (t)^T \in \mathrm {Sym}({\mathbb {R}^d} \otimes {\mathbb {R}^d}),$
Therefore the tensor levels are precisely given by the Magnus expansion, starting with
and the general term
Note that $\boldsymbol {\kappa }_t(T)$ is Lie in $\mathrm {Sym}({\mathbb {R}^d} \otimes {\mathbb {R}^d}) \subset \mathcal {T}_0$ , but, in general, not a Lie series. In the special case $X=B$ , that is, $m=d$ and identity matrix $\sigma = \mathbf {I}_d= \sum _{i=1}^{d}e_{ii}\in \mathrm {Sym}({\mathbb {R}^d}\otimes {\mathbb {R}^d})$ , all commutators vanish, and we obtain what is known as Fawcett’s formula [Reference Fawcett21, Reference Friz and Hairer24]:
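Fawcett’s formula is readily tested by simulation. The following Monte Carlo sketch (our illustration; all parameter choices are ours) accumulates the level-two Stratonovich iterated integral of a two-dimensional Brownian motion via the midpoint rule and recovers $\mathbb {E}[\mathrm {Sig}(B)^{(2)}_{0,T}] = (T/2)\,\mathbf {I}_d$:

```python
# Monte Carlo illustration of Fawcett's formula at tensor level two:
# E[ level-2 Stratonovich signature of B over [0,T] ] = (T/2) I_d.
import numpy as np

rng = np.random.default_rng(2)
d, T, n_steps, n_paths = 2, 1.0, 400, 20_000
dt = T / n_steps
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps, d))
B_prev = np.cumsum(dB, axis=1) - dB  # path value at the left endpoint of each step
# Stratonovich level-2 increment: B_{t-} (x) dB + (1/2) dB (x) dB
S2 = (np.einsum('pti,ptj->pij', B_prev, dB)
      + 0.5 * np.einsum('pti,ptj->pij', dB, dB))
print(S2.mean(axis=0))  # ~ (T/2) * I_2 = [[0.5, 0], [0, 0.5]]
```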
Example 6.1. Consider two Brownian motions $B^1,B^2$ on the filtered probability space $(\Omega ,\mathcal F,\mathbb P)$, with correlation $\mathrm {d}\langle B^1,B^2\rangle _t=\rho \,\mathrm {d} t$ for some fixed constant $\rho \in [-1,1]$. Suppose that $K^1,K^2\colon [0,\infty )^2\to \mathbb {R}$ are two kernels such that $K^i(t,\cdot )\in L^2([0,t])$ for all $t\in [0,T]$, and set
for some fixed initial values $X^1_0,X^2_0$ . Note that neither process is a semimartingale in general. However, for each $T>0$ , the process $\xi ^i_t(T):=\mathbb {E}_t[X^i_T]$ is a martingale, and we have
that is, $(\xi ^1,\xi ^2)$ is a time-dependent Brownian motion as defined above. In particular, one sees that
Equation (6.1) and the paragraph below it then give an explicit recursive formula for the signature cumulants, the first of which are given by
We notice that in the particular case when $K^1=K^2\equiv K$ , the matrix $\mathbf {a}$ has the form
Therefore, we have $ \mathbf {a}(t)\otimes \mathbf {a}(t')- \mathbf {a}(t')\otimes \mathbf {a}(t)=0$ for any $t,t'\in [0,T]$ . Hence, in this case, our recursion shows that for any $\rho \in [-1,1]$ ,
and $\boldsymbol {\kappa }_t^{(n)}(T)=0$ for all $0\le t\le T$ and $n\ge 3$ .
6.1.2 Brownian motion up to the first exit time from a domain
Let $B=(B_t)_{t\ge 0}$ be a d-dimensional Brownian motion defined on a probability space $(\Omega , \mathcal {F}, \mathbb {P})$ with a possibly random starting value $B_0$ . Assume also that there is a family of probability measures $\{\mathbb {P}^x\}_{x\in {\mathbb {R}^d}}$ on $(\Omega , \mathcal {F})$ such that $\mathbb {P}^x(B_0 = x) = 1$ , and denote by $\mathbb {E}^x$ the expectation with respect to $\mathbb {P}^x$ . We define the canonical Brownian filtrationFootnote 8 by $(\mathcal {F}_t)_{t\ge 0} = (\mathcal {F}^{B}_t)_{t\ge 0}$ . Further let $\Gamma \subset {\mathbb {R}^d}$ be a bounded domain, and define the stopping time $\tau _\Gamma $ of the first exit of B from the domain $\Gamma $ : that is,
In [Reference Lyons and Ni46], Lyons–Ni exhibit an infinite system of partial differential equations for the expected signature of the Brownian motion until the exit time as a functional of the starting point. The following result can be seen as the corresponding result for the signature cumulant, which follows directly from the expansion in Theorem 1.1. Recall that a boundary point $x \in \partial \Gamma $ is called regular if and only if
The domain $\Gamma $ is called regular if all points on the boundary are regular. For example, domains with smooth boundary are regular; see [Reference Karatzas and Shreve38, Section 4.2.C] for a further characterization of regularity.
Corollary 6.2. Let $\Gamma \subset {\mathbb {R}^d}$ be a regular domain, such that
The signature cumulant $\boldsymbol {\kappa }_t = \log (\mathbb {E}(\mathrm {Sig}(B)_{t\wedge {\tau _\Gamma }, {\tau _\Gamma }}))$ of the Brownian motion B up to the first exit from the domain $\Gamma $ has the following form
where $\mathbf {F} = \sum _{|w|\ge 2} e_w F^{w}$ with $F^{w}\in C^{0}(\overline {\Gamma },\mathbb {R})\cap C^{2}(\Gamma ,\mathbb {R})$ is the unique bounded classical solution to the elliptic PDE
for all $x\in \Gamma $ with the boundary condition $\mathbf {F}\vert _{\partial \Gamma } \equiv 0$ .
Proof. Define the martingale $\mathbf {X} = ((0, B_{t\wedge {\tau _\Gamma }}, 0, \dotsc ) )_{t\ge 0}\in \mathscr {S}(\mathcal {T}_0)$ , and note that . It then follows from the integrability of ${\tau _\Gamma }$ that $\mathbf {X} \in \mathscr {H}^{\infty -}(\mathcal {T}_0)$ and thus by Theorem 3.2 that $(\mathrm {Sig}(\mathbf {X})_{0,t})_{t\ge 0} \in \mathscr {H}(\mathcal {T}_1)^{\infty -}$ . This implies that the signature cumulant $\boldsymbol {\kappa }_t(T):= \log (\mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))$ is well defined for all $0 \le t \le T < \infty $ , and furthermore under (component-wise) application of the dominated convergence theorem that it holds
Again by $\mathbf {X} \in \mathscr {H}^{\infty -}(\mathcal {T}_0)$, it follows that Theorem 1.1 applies to the martingale $(\mathbf {X}_t)_{0\le t \le T}$ for any $T>0$, and therefore $\boldsymbol {\kappa }(T)$ satisfies the functional equation (4.3). It follows from Itô’s representation theorem [Reference Revuz and Yor59, Theorem 3.4] that all local martingales with respect to the Brownian filtration $(\mathcal {F}_t)_{0 \le t \le T}$ are continuous, and therefore it is easy to see that also $\boldsymbol {\kappa }(T)\in \mathscr {S}^c(\mathcal {T}_0)$. Hence equation (4.3) simplifies to the following equation
where we have already used the martingality of $\mathbf {X}$ and the explicit form of the quadratic variation $\left \langle \mathbf {X} \right \rangle _t = \mathbf {I}_d(t \wedge {\tau _\Gamma })$ with $\mathbf {I}_d = \sum _{i=1}^{d}e_{ii} \in ({\mathbb {R}^d})^{\otimes 2}$ . It follows that $\boldsymbol {\kappa }^{(1)} \equiv \boldsymbol {\kappa }(T)^{(1)} \equiv 0$ , and for the second level, we have from the integrability of ${\tau _\Gamma }$ and the strong Markov property of Brownian motion that
Now note that the function $u(x) := \mathbb {E}^x({\tau _\Gamma })$ for $x \in \Gamma $ is in $C^{0}(\overline {\Gamma }, \mathbb {R})\cap C^{2}(\Gamma , \mathbb {R})$ and solves the Poisson equation $ -(1/2)\Delta u = g$ with boundary condition $u\vert _{\partial \Gamma } = 0$ and data $g \equiv 1$. Indeed, since $\Gamma $ is regular and g is bounded and differentiable, this follows from Theorem 9.3.3 (and the remark thereafter) in [Reference Øksendal55]. Moreover, from the assumption in equation (6.3), we immediately see that u is bounded on $\overline {\Gamma }$, and it follows from Theorem 9.3.2 in [Reference Øksendal55] that u is the unique bounded classical solution. Thus we have shown that the statement holds true up to the second tensor level with $\mathbf {F}^{(1)}\equiv 0$ and $\mathbf {F}^{(2)}(u) =\frac {1}{2} \mathbf {I}_d u$ under the usual notation $\mathbf {F}^{(n)} = \sum _{|w|=n}e_wF^{w}$.
Now assume that the statement of the corollary holds true up to the tensor level $(N-1)$ for some $N \ge 3$ . Then, for any $n, k < N$ , we have by applying Itô’s formula
and
Further define the function $\mathbf {G}^{(N)}$ by the projection under $\pi _N$ of the right-hand side of equation (6.4) multiplied by the factor $1/2$ . Then applying Theorem 4.1 to $\mathbf {X}^{(0,N)}$ on the probability space $(\Omega , \mathcal {F}, (\mathcal {F}_t),\mathbb {P}^{x})$ , we see that it follows from the estimate in equation (7.30) that there exists a constant $c>0$ such that
Therefore it follows, from projecting equation (6.5) to level N and using the dominated convergence theorem to pass to the $T\to \infty $ limit, that $\boldsymbol {\kappa }^{(N)}$ is of the form
Furthermore, by the assumption, it also holds that $G^{w} \in C^1(\Gamma )$ for all $w\in \mathcal {W}_d$, $|w|=N$. Therefore we can conclude again with Theorem 9.3.3 in [Reference Øksendal55] that $F^{w} \in C^{0}(\overline {\Gamma }, \mathbb {R})\cap C^{2}(\Gamma , \mathbb {R})$ solves the Poisson equation with data $g=G^{w}$ for all words w with $|w|=N$. The statement then follows by induction.
Example 6.3. For $n \in \{1,\dotsc ,d\}$ , let $\mathbb {D}^n $ be the open unit ball in $\mathbb {R}^n$ , and define the (regular) domain $\Gamma = \mathbb {D}^n \times \mathbb {R}^{d-n}\subset \mathbb {R}^d$ . Further note that it holds
Hence we readily see that ${\tau _\Gamma }$ satisfies the condition in equation (6.3). Applying Corollary 6.2, it follows that the signature cumulant of the Brownian motion B up to the exit of the domain $\Gamma $ is of the form $\boldsymbol {\kappa }_t = \mathbf {1}_{\{t<\tau _\Gamma \}} \mathbf {F}(B_t)$, where $\mathbf {F}$ satisfies the PDE (6.4). Recall that $\mathbf {F}^{(1)}\equiv 0$; projecting to the second level, we see that
The unique bounded solution of the above Poisson equation is given by
More generally, we see that the Poisson equation $\Delta u = -g$ on $\Gamma $ with zero boundary condition, where $g\colon \Gamma \to \mathbb {R}$ is a polynomial in the first n variables, has a unique bounded solution u, which is also a polynomial in the first n variables, of degree $\mathrm {deg}(u) = \mathrm {deg}(g)+2$ and with the factor $(1-\sum _{i=1}^n x_i^2)$ (see Lemma 3.10 in [Reference Lyons and Ni46]). Hence it follows inductively that each component of $\mathbf {F}^{(n)}$ is a polynomial of degree n with the factor $(1-\sum _{i=1}^n x_i^2)$. The precise coefficients of the polynomial can be obtained as the solution to a system of linear equations recursively derived from the forcing term in (6.4). This is similar to [Reference Lyons and Ni46, Theorem 3.5]; however, we note that a direct conversion of the latter result for the expected signature to signature cumulants is not trivially seen to yield the same recursion and requires combinatorial relations as studied in [Reference Bonnier and Oberhauser7].
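As a sanity check on the second tensor level, the bounded solution $u(x) = (1 - \sum _{i=1}^{n} x_i^2)/n$ of $-(1/2)\Delta u = 1$ with zero boundary data can be compared with a Monte Carlo estimate of the expected exit time. The following sketch (our illustration; $n = d = 2$, the unit disk, Euler scheme with its usual discretization bias) does exactly this:

```python
# Monte Carlo check: for Brownian motion in the unit disk (n = d = 2),
# the expected exit time from x is u(x) = (1 - |x|^2)/2.
import numpy as np

rng = np.random.default_rng(3)
dt, n_paths = 1e-4, 5_000
x0 = np.array([0.3, 0.4])                     # |x0|^2 = 0.25
pos = np.tile(x0, (n_paths, 1))
tau = np.zeros(n_paths)
alive = np.ones(n_paths, dtype=bool)
while alive.any():
    k = alive.sum()
    pos[alive] += np.sqrt(dt) * rng.standard_normal((k, 2))
    tau[alive] += dt
    alive[alive] = (pos[alive] ** 2).sum(axis=1) < 1.0
print(tau.mean(), (1 - (x0 ** 2).sum()) / 2)  # ~ 0.375
```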
6.2 Lévy and diffusion processes
Let $X\in \mathscr {S}(\mathbb {R}^d)$ , and throughout this section assume that the filtration $(\mathcal {F}_t)_{0 \le t \le T}$ is generated by X. Denote by $\varepsilon _a$ the Dirac measure at point $a\in {\mathbb {R}^d}$ ; the random measure $\mu ^{X}$ associated to the jumps of X is an integer-valued random measure of the form
There is a version of the predictable compensator of $\mu ^X$ , denoted by $\nu $ , such that the $\mathbb {R}^d$ -valued semimartingale X is quasi-left continuous if and only if $\nu (\omega , \{t\} \times \mathbb {R}^d) = 0$ for all $\omega \in \Omega $ ; see [Reference Jacod and Shiryaev36, Corollary II.1.19]. In general, $\nu $ satisfies $(|x|^{2} \wedge 1) \ast \nu \in \mathscr {A}_{\mathrm {loc}}$ : that is, locally of integrable variation. The semimartingale X admits a canonical representation (using the usual notation for stochastic integrals with respect to random measures as introduced, for example, in [Reference Jacod and Shiryaev36, II.1])
where $h(x) = x 1_{|x| \le 1}$ is a truncation function (other choices are possible). Here $B(h)$ is a predictable $\mathbb {R}^d$-valued process with components in $\mathscr {V}$, and $X^{c}$ is the continuous martingale part of X.
Denote by C the predictable $\mathbb {R}^{d} \otimes \mathbb {R}^d$ -valued covariation process defined as $C^{ij}:=\langle X^{i,c}, X^{j,c} \rangle $ . Then the triplet $(B(h), C, \nu )$ is called the triplet of predictable characteristics of X (or simply the characteristics of X). In many cases of interest, including the case of Lévy and diffusion processes discussed in the subsection below, we have differential characteristics $(b,c,K)$ such that
where b is a d-dimensional predictable process, c is a predictable process taking values in the set of symmetric non-negative definite $d\times d$-matrices and K is a transition kernel from $(\Omega \times \mathbb {R}_{+}, \mathcal {B}^d)$ into $(\mathbb {R}^d, \mathcal {B}^d)$. We call such a process an Itô semimartingale and the triplet $(b,c,K)$ its differential (or local) characteristics. This extends mutatis mutandis to a $\mathcal {T}_0^N$ (and then $\mathcal {T}_0$) valued semimartingale $\mathbf {X}$, with local characteristics $(\mathbf {b},\mathbf {c},\mathbf {K})$.
While every Itô semimartingale is quasi-left continuous, it is in general not true that $\boldsymbol {\kappa }$ is continuous (with the notable exception of time-inhomogeneous Lévy processes discussed below), and therefore there is no significant simplification of the functional equation (4.3) in these general terms. The following example illustrates this point in more detail.
Example 6.4. Take $X \in \mathscr {S}(\mathbb {R}^d)$ with $d=1$, so that we are effectively in the symmetric setting. In this case, $\exp (\boldsymbol {\kappa }_t (T)) = \mathbb {E}_t(\exp ({X_T -X_t}))$, in the power-series sense of listing all moments with factorial factors. These can also be obtained by taking higher-order derivatives at $u=0$ of $\mathbb {E}_t (\exp ({u (X_T -X_t)}))$, now with the classical calculus interpretation of the exponential. The important class of affine models satisfies
In the Lévy case, we have the trivial situation $\Psi (\cdot ,u) \equiv u$, but otherwise $(\phi ,\Psi )$ solve (generalized) Riccati equations and are in particular continuous in $T-t$. We see that, in non-trivial situations, the log of $ \mathbb {E}_t(\exp ({u X_T-u X_t}))$ and any of its derivatives will jump when X jumps. In particular, $\boldsymbol {\kappa }_t(T)$ will not be continuous in t, even if X is quasi-left continuous: that is, when $\Delta X_\tau = 0$ a.s. for all predictable times $\tau $ (see [Reference Jacod and Shiryaev36, p. 22]). Let us note in this context that, in the general non-commutative setting and directly from the definition of $\boldsymbol {\kappa }$,
where the second equality holds true under the assumption of quasi-left continuity of X. If we assume for a moment $\mathcal {F}_{t-} = \mathcal {F}_t$, then we could conclude that $\boldsymbol {\kappa }_{t-} = \boldsymbol {\kappa }_t$ and hence (right-continuity is clear) that $\boldsymbol {\kappa }_t$ is continuous in t. Since we know that this fails beyond Lévy processes, such left continuity of the filtration is not a good assumption in this generality.
6.2.1 The case of time-inhomogeneous Lévy processes
We consider now a d-dimensional time-inhomogeneous Lévy process of the form
for all $0\le t\le T$, with $b \in L^{1}([0,T],\mathbb {R}^d)$, $\sigma \in L^2 ([0, T],\mathbb {R}^{m\times d})$, B an m-dimensional Brownian motion and $\mu ^X$ an independent inhomogeneous Poisson random measure with intensity measure $\nu $ on $[0,T] \times \mathbb {R}^d$, such that $\nu (\mathrm {d} t, \mathrm {d} x) = K_t(\mathrm {d} x)\mathrm {d} t$ with Lévy measures $K_t$: that is, $K_t(\{0\})=0$, and
and measurability of $t \mapsto K_t (A) \in [0,\infty ]$ for any measurable $A \subset \mathbb {R}^d$. Consider further the condition
for some integer $N\in {\mathbb {N}_{\ge 1}}$ . The Brownian case in equation (6.1) then generalizes as follows.
Corollary 6.5. Let X be an inhomogeneous Lévy process of the form in equation (6.7), such that the family of Lévy measures $\{K_t\}_{t>0}$ satisfies the moment condition in equation (6.8) for all $N\in {\mathbb {N}_{\ge 1}}$. Then $X\in \mathscr {H}^{\infty -}({\mathbb {R}^d})$, and the signature cumulant $\boldsymbol {\kappa }_t := \log ( \mathbb {E}_t(\mathrm {Sig}(X)_{t,T}))$ satisfies the following integral equation
with $\mathbf {a}(t) = \sigma (t) \sigma (t)^{T}\in {\mathbb {R}^d}\otimes {\mathbb {R}^d} \subset \mathcal {T}_0$ and
where $\mathbf {x} = (0, x, 0, \dots )\in \mathcal {T}_0$ for $x\in {\mathbb {R}^d}$ . In case the Lévy measures $\{K_t\}_{t>0}$ satisfy the condition in equation (6.8) only up to some finite level $N\in {\mathbb {N}_{\ge 1}}$ , we have $X\in \mathscr {H}^{N}$ , and the identity equation (6.9) holds for the truncated signature cumulant in $\mathcal {T}_0^N$ .
Remark 6.6. Corollary 6.5 extends a main result of [Reference Friz and Shekhar25], where a Lévy–Khintchine type formula was obtained for the expected signature of time-homogeneous Lévy processes with constant triplet $(\mathbf {b},\mathbf {a},K)$. Now this is an immediate consequence of equation (6.9), with all commutators vanishing in the time-homogeneous case and explicit solution
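In dimension one, the signature reduces to $\exp (X_{t,T})$, and the explicit solution collapses to $(T-t)$ times the classical Lévy exponent evaluated at $u = 1$. A quick Monte Carlo sketch (our illustration; a compound Poisson process with Gaussian jumps, all parameters ours) confirms this one-dimensional shadow of the formula:

```python
# One-dimensional check of the time-homogeneous formula: for a compound
# Poisson process with rate lam and N(mu, s^2) jumps,
#   kappa_0(T) = log E[exp(X_T)] = T * lam * (exp(mu + s^2/2) - 1).
import numpy as np

rng = np.random.default_rng(4)
T, lam, mu, s, n_paths = 1.0, 2.0, 0.1, 0.2, 200_000
n_jumps = rng.poisson(lam * T, n_paths)
# the sum of n i.i.d. N(mu, s^2) jumps is N(n*mu, n*s^2), sampled exactly
X_T = rng.normal(mu * n_jumps, s * np.sqrt(n_jumps))
print(np.log(np.mean(np.exp(X_T))), T * lam * np.expm1(mu + s ** 2 / 2))
```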
Proof. Assume that the Lévy measures $\{ K_t \}_{t>0}$ satisfy the condition in equation (6.8) for some $N\in {\mathbb {N}_{\ge 1}}$ . We will first show that $X\in \mathscr {H}^{N}({\mathbb {R}^d})$ . Note that the decomposition in equation (6.7) naturally yields a semimartingale decomposition $X = M + A$ , where the local martingale M and the adapted bounded variation process A are defined by
Regarding the integrability of the $1$ -variation of A, we note that it holds
Define the increasing, piecewise constant process
. Since b is deterministic and integrable over the interval $[0,T]$ , it suffices to show that $V_T$ has finite Nth moment. To this end, note that it holds
Further, it holds for any $n \in \{1, \dotsc ,N\}$ that
and by definition
. Now let $n \ge 2$ and $k \in \{0, \dotsc , n-1\}$; then we have
It then follows inductively that $\mathbb {E}(V_T^n)$ is finite for all $n = 1, \dotsc , N$ and hence that the 1-variation of A has finite Nth moment.
Concerning the integrability of the quadratic variation of M, let $w\in \{1, \dots , d\}$ ; then it is well known that (see, e.g., [Reference Jacod and Shiryaev36, Ch. II Theorem 1.33])
where $\left \langle M \right \rangle $ denotes the dual predictable projection (or compensator) of $\left [ M \right ]$ . Further, using that the compensated martingale
is orthogonal to continuous martingales, we have
Now let $q \in [1, \infty )$; then from Theorem 8.2.20 in [Reference Cohen and Elliott14], we have the following estimate
where $c>0$ is a constant depending on q.
We have shown that $\mathbf {X} = (0, X, 0, \dotsc , 0) \in \mathscr {H}^{1,N}$ , and it follows from Theorem 4.1 that the signature cumulant $\boldsymbol {\kappa }_t = \log (\mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))$ satisfies the functional equation (4.3). On the other hand, it follows from the condition in equation (6.8) that $\mathfrak {y}$ in equation (6.10) is well defined. Now define $\widetilde {\boldsymbol {\kappa }} = (\widetilde {\boldsymbol {\kappa }}_t)_{0 \le t \le T}$ by the identity equation (6.9). Noting that $\widetilde {\boldsymbol {\kappa }}$ is deterministic and has absolutely continuous components, it is easy to see that $\widetilde {\boldsymbol {\kappa }}$ also satisfies the functional equation (4.3) for the semimartingale X. It thus follows that $\boldsymbol {\kappa }$ and $\widetilde {\boldsymbol {\kappa }}$ are identical.
6.2.2 Markov jump diffusions
The generator of a general Markov jump diffusion is of the form
where the summations are over $i,j\in \{1, \dots , d\}$ , $\mathbf {b}\colon {\mathbb {R}^d} \to {\mathbb {R}^d}$ , $\mathbf {a}\colon {\mathbb {R}^d} \to {\mathbb {R}^d} \otimes {\mathbb {R}^d}$ (symmetric, positive definite) and K is a Borel transition kernel from ${\mathbb {R}^d}$ into ${\mathbb {R}^d}$ with $K(\cdot , \{0\}) \equiv 0$ . Denote by $\mathrm {Lip}(E)$ the space of real valued, bounded and globally Lipschitz continuous functions over a Banach space E. We pose the following further assumptions on the generator:
- (A1) $\mathbf {b}^{i}, \mathbf {a}^{ij} \in \mathrm {Lip}({\mathbb {R}^d})$ for all $i,j = 1,\dots , d$,
- (A2) There exists a constant $c>0$ such that it holds
- (A3) There exists a bounded measure m on ${\mathbb {R}^d}$ and a measurable function $\delta :{\mathbb {R}^d}\times {\mathbb {R}^d}\to {\mathbb {R}^d}$, such that the kernel K is of the form
$$ \begin{align*} K(x, A) = \int_{{\mathbb{R}^d}}\mathbf 1_{A\setminus\{0\}}(\delta(x,z))m(\mathrm{d} z), \qquad\text{for all measurable }A\subset{\mathbb{R}^d} \text{ and } x\in{\mathbb{R}^d}. \end{align*} $$
Further, we assume that there exists a function $\rho :{\mathbb {R}^d}\to \mathbb {R}_+$ such that
$$ \begin{align*} \int_{{\mathbb{R}^d}} (\rho(z)^{2}\vee\rho(z)^{n}) m(\mathrm{d} z) < \infty,\qquad \text{for all}\quad n\in{\mathbb{N}}, \end{align*} $$
and for all $x, x^{\prime }, z \in {\mathbb {R}^d}$, it holds that
Note that under the above assumptions,Footnote 9 the martingale problem associated to the generator $\mathcal {L}$ admits a unique solution (see [Reference Jacod35, Theorem 13.58]). More precisely, for any probability measure $\eta $ on ${\mathbb {R}^d}$ , there exists a unique measure $\mathbb {P}$ on the canonical Skorohod space $(\Omega , \mathcal {F}, (\mathcal {F}_t))$ , such that the canonical process X satisfies $\mathbb {P}(X_0 \in A ) = \eta (A)$ for all measurable $A\subset {\mathbb {R}^d}$ , and the process
is a local martingale under $\mathbb {P}$ for all $f \in C^{2}_b({\mathbb {R}^d})$, the space of real-valued, bounded and twice continuously differentiable functions with bounded first and second-order partial derivatives. In general, we may call a semimartingale a Markov jump diffusion associated to the generator $\mathcal {L}$ if its law coincides with $\mathbb {P}$: that is, if it solves the martingale problem associated to $\mathcal {L}$. The martingale problem can be equivalently formulated in terms of semimartingale characteristics (see [Reference Jacod35, XIII.3] and [Reference Jacod and Shiryaev36, III.2.c]). In particular, under the given assumptions, it holds that $\mathbb {P}$ is a solution to the martingale problem if and only if the canonical process has the following semimartingale characteristics (see [Reference Jacod35, Theorem 13.58])
The extension to Markov processes with characteristics of the form $(\mathbf {b}(t,x),\mathbf {a}(t,x),K(t,x,\mathrm {d} y))$ and associated local Lévy generators [Reference Stroock60] is mostly notational. Apart from the already presented references, the reader may also consult the abundant literature in the continuous case (e.g., [Reference Stroock and Varadhan61], [Reference Revuz and Yor59, VII.2] or [Reference Karatzas and Shreve38, 5.4]).
The expected signature of a Markov jump diffusion X was seen in [Reference Friz and Shekhar25] to satisfy a system of linear partial integro-differential equations (PIDEs); the continuous case was already presented in [Reference Ni54] with a corresponding system of PDEs. The passage to signature cumulants amounts to taking the logarithm, which represents a non-commutative Cole–Hopf transform with a resulting quadratic non-linearity if viewed as a $\mathcal {T}_1$-valued PIDE, resolved, thanks to the graded structure, into a system of linear PIDEs. In Corollary 6.7 below, we show how this PIDE can be derived directly from our Theorem 4.1.
Since we are mainly interested in exposing the algebraic structure of this PIDE system and we want to avoid dwelling much further on the solution theory of PIDEs associated to the operator $\mathcal {L}$ , we pose one further assumption:
- (A4) For any $f\in \mathrm {Lip}([0,T]\times {\mathbb {R}^d})$, the Cauchy problem with zero terminal condition
(6.12) $$ \begin{align} \begin{aligned} [\partial_t + \mathcal{L}]u(t,x) &= f(t,x), \qquad (t,x) \in [0, T)\times{\mathbb{R}^d}, \\ u(T, x) &= 0, \qquad x\in{\mathbb{R}^d}, \end{aligned} \end{align} $$
has a unique solution u in the spaceFootnote 10
$$ \begin{align*}C_{b}^{1,2}([0,T]\times {\mathbb{R}^d}) := \left\{ u\in C^{1,2}([0,T]\times{\mathbb{R}^d}):\; u,\,\partial_{x_1}u, \,\dots, \,\partial_{x_d}u \in \mathrm{Lip}([0,T]\times{\mathbb{R}^d})\right\},\end{align*} $$
where $C^{1,2}([0,T]\times {\mathbb {R}^d})$ is the space of real-valued functions on $[0,T]\times {\mathbb {R}^d}$ that are once continuously differentiable in t and twice continuously differentiable in x.
A result similar to the above assumption (A4), based on conditions similar to (A1)–(A3), can be found in [Reference Pham56, Proposition 5.3]. The main difference is that the latter result only assumes a linear growth condition on the forcing f and hence only obtains a growth property of the solution. Note that the proof in [Reference Pham56] is based on a viscosity solution approach, which allows one to recast the PIDE in terms of a PDE with modified drift and forcing terms. Hence the bounds on the solution of the Cauchy problem and its derivatives should follow as a consequence of the stronger assumptions on the forcing and classical estimates as they can be found in [Reference Friedman22, Sec. 9.4].
We now present the main result of this section.
Corollary 6.7. Assume that $\mathcal {L}$ in equation (6.11) satisfies (A1)–(A4), and let X be a d-dimensional Markov jump diffusion with this generator. Then $\mathbf {X} = (0, X, 0, \dots )\in \mathscr {H}^{\infty -}$ , and the signature cumulant is of the form
where $\mathbf {v}=\sum _{w} \mathbf {v}^{w} e_w$ is the unique solution with $\mathbf {v}^{w} \in C^{1,2}_b([0,T] \times {\mathbb {R}^d})$ for all $w\in \mathcal {W}_d$ of the following partial integro-differential equation
on $[0,T]\times {\mathbb {R}^d}$ with terminal condition $\mathbf {v}(T, \cdot ) \equiv 0$ , where $\mathbf {y} := (0, y, 0 \dots )\in \mathcal {T}_0$ and $\tau _y(t, x) = (t, x + y)$ .
Remark 6.8. It is instructive to look at the one-dimensional case. As seen several times before, all adjoint operators vanish due to commutativity in this case. With the more classical notation $\partial _x = \partial _1$, $\partial _{xx} = \partial _{11}$, $\varepsilon = e_1$, $\mathbf {b} = b\varepsilon $, $\mathbf {a} = a \varepsilon ^2$ and with $\mathbf {v}_\varepsilon (t,x) := \varepsilon x + \mathbf {v}(t,x)$, we see that the PIDE (6.13) simplifies to
Introducing the Cole–Hopf transformation $\mathbf {u}_\varepsilon (t,x) = \exp (\mathbf {v}_\varepsilon (t,x))$, multiplying equation (6.14) by $\mathbf {u}_\varepsilon $ and using the differential relations
we obtain the following PIDE for $\mathbf {u}_\varepsilon $ :
where the equivalence follows from the invertibility of $\mathbf {u}_\varepsilon $ in $\mathbb {R}[[\varepsilon ]]\cong T((\mathbb {R}))$. Conversely, the signature simplifies in the one-dimensional case to the exponential of the path increment, and therefore we have
Hence, the above PIDE for $\mathbf {u}_\varepsilon $ is already expected from the Feynman–Kac representation (see, e.g., [Reference Cont and Tankov15, Proposition 12.5]). The expansion $\mathbf {v}_\varepsilon = \varepsilon v^{1} + \varepsilon ^{2} v^{2} + \dots $, with $\mathbf {v}_\varepsilon ^{(i)} = \varepsilon ^{i}v^{i}$, is related to the Wild expansion used in Hairer’s approach to solving the KPZ equation [Reference Hairer30]. Indeed, as explained in [Reference Friz, Gatheral and Radoičić23, Section 4.5] for the continuous case, each term $v^{i}$ can be written as a sum of solutions to linear PDEs, indexed by binary trees with i leaves. The jump case follows similarly; however, the root joining operation corresponds to the creation of a different forcing term, which, apart from the carré du champ (compare with [Reference Friz, Gatheral and Radoičić23, Remark 4.1]), includes terms derived from the integral in equation (6.14) (the combinatorial relations from breaking apart the exponential are analogous to those in Corollary 5.5).
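The continuous part of this Cole–Hopf computation is also easy to verify symbolically. The following sketch (our illustration; jump terms omitted, constant coefficients $a, b$) checks that $u = e^{v}$ turns the quadratic equation for $v$ into the linear equation for $u$:

```python
# Symbolic check (continuous part only): with u = exp(v),
#   u_t + b u_x + (a/2) u_xx = u * ( v_t + b v_x + (a/2)(v_xx + v_x^2) ),
# so the quadratic PDE for v is equivalent to the linear PDE for u.
import sympy as sp

t, x, a, b = sp.symbols('t x a b')
v = sp.Function('v')(t, x)
u = sp.exp(v)
quadratic = v.diff(t) + b * v.diff(x) + a / 2 * (v.diff(x, 2) + v.diff(x) ** 2)
linear = u.diff(t) + b * u.diff(x) + a / 2 * u.diff(x, 2)
print(sp.simplify(linear - u * quadratic))  # 0
```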
Remark 6.9. We are still in the one-dimensional case and use notation from the previous remark. Assume now that the underlying Markov jump diffusion X takes values in the domain $D=\mathbb {R}$ or $D=\mathbb {R}_{\ge 0}$ and has affine differential characteristics of the form
for all $x\in D$ with $\beta _0\in D$, $\beta _1\in \mathbb {R}$, $\alpha _0, \alpha _1, \lambda _0, \lambda _1 \ge 0$, and m is a suitably integrable measure with $\mathrm {supp}(m)\subset D$, where $\alpha _0=0$ if $D=\mathbb {R}_{\ge 0}$ and $\lambda _1 = \alpha _1=0$ if $D=\mathbb {R}$. In this case, $\mathbf {b}$ and $\mathbf {a}$ clearly do not satisfy the conditions (A1) and (A2) in general. Nevertheless, we can start by considering the PIDE (6.14) and use the following affine ansatz
for differentiable functions $\phi ^i, \psi ^i:[0,T]\to \mathbb {R}$ with $\psi ^1(0) = 1$ and $\phi ^i(0) =\psi ^{i+1}(0) = 0$ for all $i \in {\mathbb {N}_{\ge 1}}$, which, after equating coefficients in x, yields the following formal Riccati ODE
Equating coefficients in $\varepsilon $, this formal quadratic ODE turns into a system of linear ODEs
where $m_k$ is the kth moment of m; this system can easily be solved explicitly. The resulting formulas relate to the cumulant formulas obtained in the more general setting of polynomial processes in [Reference Cuchiero, Keller-Ressel and Teichmann17]. We do not claim to provide any new results on the topic of affine processes, and therefore we also do not relate the above formal steps back to the cumulants of the process, which would require a more careful analysis. An interesting open question from this perspective is how to fully employ the affine structure in the non-commutative setting. In the next section, we present a result in this direction applied to affine Volterra processes.
Proof of Corollary 6.7
We can easily verify the boundedness of $\mathbf {b}$ and $\mathbf {a}$, and the moment condition in assumption (A3) implies that $\mathbf {X} = (0, X, 0, \dots ) \in \mathscr {H}^{\infty -}$ (compare also with the proof of Corollary 6.5). It then follows from Theorem 4.1 that $\boldsymbol {\kappa }(T) = (\log \mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))_{0 \le t \le T}$ is the unique solution to the functional equation (4.3).
Next we discuss the existence and uniqueness of a solution to the PIDE. To this end, note that projecting equation (6.13) to any tensor component $w \in \mathcal {W}_d$, we obtain a Cauchy problem of the form in equation (6.12), where the forcing term f is given by the projection of the right-hand side of equation (6.13) to the tensor component w. Therefore the existence of a unique solution $\mathbf {v}^{w} \in C^{1,2}_b([0,T]\times {\mathbb {R}^d})$ follows by assumption (A4), provided we can show that the corresponding forcing term is in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$. We prove the latter assertion inductively: projecting the right-hand side of equation (6.13) to the tensor component $i \in \{1, \dots , d\}$, we obtain the following forcing term
Recall that $\mathbf {b}^{i}\in \mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ by assumption (A1). Regarding the integral term, we see that boundedness and global Lipschitz continuity are a consequence of assumption (A3). Hence it follows that the above forcing term is indeed in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$, which establishes the base case of the induction.
As the induction hypothesis, assume that the projection of the right-hand side of equation (6.13) to all tensor components ${w^{\prime }}\in \mathcal {W}_d$ with $ \left \vert {{w^{\prime }}} \right \vert \le n$ for some $n\in {\mathbb {N}_{\ge 1}}$ yields a function in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ and hence, by assumption (A4), the corresponding Cauchy problems with generator $\mathcal {L}$ have unique solutions $\mathbf {v}^{{w^{\prime }}}\in C^{1,2}([0,T]\times {\mathbb {R}^d})$.
For the induction step, let $w\in \mathcal {W}_d$ with
be arbitrary. Note that the right-hand side of equation (6.13) consists of two main terms, the second one of which is given by an integral. Projecting the first term to the tensor components yields a linear combination of (finite) products of the functions $\mathbf {a}^{ij},\mathbf {v}^{{w^{\prime }}}$ and $\partial _i \mathbf {v}^{{w^{\prime }}}$ with $ \left \vert {{w^{\prime }}} \right \vert \le n$ , $i\in \{1, \dots , d\}$ . By assumption (A1) and the induction hypothesis, each of these functions is in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ and therefore their products and linear combinations also are (clearly $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ forms an algebra). Hence we are left with proving that the projection of the integral term in the right-hand side of equation (6.13) is also in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ . Using a Taylor expansion and the derivatives of the exponential map from Lemma 7.5 (compare also with Lemma 7.9), we obtain the following identity:
Hence, we see from the above right-hand side and the induction hypothesis that the projection of the integral term in the forcing of equation (6.13) to the tensor component w is a linear combination of functions of the form
where $g, h \in \mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ , and where $p(y)$ and $q(y)$ are homogeneous polynomials in $(y^{1}, \dots , y^{d})$ with $\deg (p) = 2$ , $0 \le \deg (q)\le n-1$ and $q\neq 0$ . It follows from assumption (A3) that the above function and hence any linear combination of functions of the same form are in $\mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ . Indeed, let $(t,x),(t^{\prime }, x^{\prime }) \in [0,T]\times {\mathbb {R}^d}$ ; then we have
where $c,c^{\prime }> 0$ are constants (which will change from line to line in what follows). Regarding the first integral term in the last line above, note that for all $y\in {\mathbb {R}^d}$ , it holds that
It then follows by assumption (A3) that for all $x\in {\mathbb {R}^d}$ , we have
Regarding the second integral term, we further note that for all $y,y^{\prime }\in {\mathbb {R}^d}$ , it holds
Therefore, again by assumption (A3), we have
where $c^{\prime \prime }>0$ is another constant. This finishes the proof of the assertion that $f\in \mathrm {Lip}([0,T]\times {\mathbb {R}^d})$ and hence also the proof of the induction step. This concludes the proof of existence and uniqueness of a solution to the PIDE.
Now define $\tilde {\boldsymbol {\kappa }}\in \mathscr {S}(\mathcal {T}_0)$ by $\tilde {\boldsymbol {\kappa }}_t := \mathbf {v}(t,X_t)$ for all $0 \le t \le T$, and note that $\tilde {\boldsymbol {\kappa }}_{t-} = \mathbf {v}(t, X_{t-})$. We are going to show that $\tilde {\boldsymbol {\kappa }}$ also satisfies the functional equation (4.3). Since X solves the martingale problem with generator $\mathcal {L}$ and $\mathbf {v}$ is sufficiently regular, an application of Itô’s formula yields that the process
is a local martingale: that is, in $\mathscr {M}_{\mathrm {loc}}(\mathcal {T}_0)$ . However, as $\mathbf {v}^{w} \in C^{1,2}_b([0,T]\times {\mathbb {R}^d})$ , it follows that the above process is bounded and therefore also a true martingale: that is, in $\mathscr {M}(\mathcal {T}_0)$ . Hence, using further the terminal condition $\mathbf {v}(T,\cdot )\equiv 0$ , we obtain the identity
On the other hand, we can plug $\tilde {\boldsymbol {\kappa }}$ into the right-hand side of equation (4.3). We then obtain for the first integral inside the conditional expectation
where $\mu ^{X}$ denotes the random measure associated with the jumps of X and
for all $0 \le t \le T$ , $y\in {\mathbb {R}^d}$ and where
is the usual truncation function. Similarly, we have
where $0 \le t \le T$ and for $y\in {\mathbb {R}^d}$ with $\mathbf {y} = (0, y, 0, \dots )\in \mathcal {T}_0$
Finally, for the quadratic variation terms with respect to continuous parts, we have
Provided we can show that the following integrability property holds for all words $w\in \mathcal {W}_d$,
it follows that
where in the last line we have used that $\mathbf {v}$ satisfies the PIDE. Since the above left-hand side is precisely the right-hand side of the functional equation (4.3), it follows together with equation (6.15) that $\tilde {\boldsymbol {\kappa }}$ satisfies the functional equation (4.3).
Note that in case the integrability condition in equation (6.16) is satisfied for all words $w\in \mathcal {W}_d$ with $|w| \le n$ for some length $n\in {\mathbb {N}_{\ge 1}}$, it follows that the above equality holds up to the projection with $\pi _{(0,n)}$. For words with $|w| = 1$, the condition in equation (6.16) is an immediate consequence of $\mathbf {X}\in \mathscr {H}^{\infty -}$. It then follows inductively by the same arguments as in the proof of Claim 7.12 that equation (6.16) is indeed satisfied for all words $w\in \mathcal {W}_d$.
Since $\boldsymbol {\kappa }(T)$ is the unique solution to equation (4.3), it then follows that $\tilde {\boldsymbol {\kappa }} \equiv \boldsymbol {\kappa }(T)$ .
6.3 Affine Volterra processes
For $i =1, 2$ , let $K^{i}$ be an integration kernel such that $K^{i}(t, \cdot ) \in L^{2}([0, t])$ for all $0 \le t \le T$ , and let $V^{i}$ be the solution to the Volterra integral equation
with $V^{i}_0> 0$, where $W^{1}$ and $W^{2}$ are uncorrelated standard Brownian motions which generate the filtration $(\mathcal {F}_t)_{0 \le t \le T}$. Note that, in general, $V^{i}$ is not a semimartingale; in particular, it fails to be one when $K^{i}$ is a power-law kernel of the form $K(t,s) \sim (t-s)^{H-1/2}$ for some $H\in (0, 1/2)$, which is the prototype of a rough affine volatility model (see, e.g., [Reference Keller-Ressel, Larsson and Pulido39]). However, a martingale $\xi ^{i}(T)$ is naturally associated to $V^{i}$ by
In the financial context, $\xi ^{i}(T)$ is the central object of a forward variance model (see, e.g., [Reference Gatheral and Keller-Ressel29]). It was seen in [Reference Friz, Gatheral and Radoičić23] that the iterated diamond products of $\xi ^{1}(T)$ are of a particularly simple form and easily translated to a system of convolutional Riccati equations of the type studied in [Reference Abi Jaber, Larsson and Pulido1] for the cumulant generating function. We are interested in the signature cumulant of the two-dimensional martingale $X = (\xi ^{1}(T), \xi ^{2}(T))$ .
Corollary 6.10. It holds that $\mathbf {X} = (0,\, \xi ^{1}(T)e_1 + \xi ^{2}(T)e_2,\, 0,\, \dots )\in \mathscr {H}^{\infty -}$ and the signature cumulant $\boldsymbol {\kappa }_t(T) = \log \mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T})$ is the unique solution to the functional equation: for all $0 \le t \le T$
Proof. Regarding the integrability statement, it suffices to check that $V^{i}_T$ has moments of all orders for $i=1, 2$. This is indeed the case, and we refer to [Reference Abi Jaber, Larsson and Pulido1, Lemma 3.1] for a proof. Hence we can apply Theorem 4.1, and we see that $\boldsymbol {\kappa }$ satisfies the functional equation (4.3). As described in Section 4.1, this equation can be reformulated with brackets replaced by diamonds. Further note that, due to the continuity, the jump terms vanish and, due to the martingality, the Itô integrals with respect to $\mathbf {X}$ have zero expectation. The final step to arrive at the above form of the functional equation is to calculate the brackets $\langle \xi ^{i}(T), \xi ^{j}(T) \rangle $. From the definitions of $\xi ^{i}(T)$ and $V^{i}$, we have for all $0 \le t \le T$
Therefore, due to the independence, we have $\langle \xi ^{1}(T), \xi ^{2}(T) \rangle = 0$ ; and for the square bracket, we have $\mathrm {d} \langle \xi ^{i}(T), \xi ^{i}(T)\rangle _t = K^{i}(T,t)^{2}V^{i}_t \mathrm {d} t$ .
The recursion for the signature cumulants from Corollary 4.3 is easily simplified in analogy to the above corollary. In the rest of this section, we are going to demonstrate explicit calculations for the first four levels. Clearly, due to the martingality, the first-level signature cumulants are identically zero: $\boldsymbol {\kappa }^{(1)}(T) \equiv 0$. On the second level, we start to observe the type of simplifications that appear due to the affine structure
where $\xi ^{i}_t(u) = \mathbb {E}_t(V^{i}_u)$ for all $0 \le t \le u \le T$ . The third level is of the same form
where we have used that for any suitable $h:[0, T]\to \mathbb {R}$ , it holds that for all $0 \le t \le T$
The fourth level starts to reveal some of the structure that is not visible in the commutative setting
where $\{i, \bar {i}\} = \{1,2\}$ and $h^{i}$ is defined by
7 Proofs
For ease of notation, we introduce a norm on the space of tensor-valued finite variation processes, which could have been introduced in Section 2.4 but was not needed until now. Let $q\in [1,\infty )$ and $\mathbf {A}\in \mathscr {V}(({\mathbb {R}^d})^{\otimes n})$ for some $n\in {\mathbb {N}_{\ge 1}}$; then we define
It is easy to see that it holds $ \left \Vert {\mathbf {A}} \right \Vert _{\mathscr {H}^{q}} \le \left \Vert {\mathbf {A}} \right \Vert _{\mathscr {V}^{q}}$ , and this inequality can be strict.
Further, for an element $\mathbb {A} \in T({\mathbb {R}^d}) \overline {\otimes } T({\mathbb {R}^d})$ , we introduce the following notation
and for $l_1, l_2 \in {\mathbb {N}_{\ge 1}}$
Next we will prove two well-known lemmas translated to the setting of tensor valued semimartingales.
Lemma 7.1 (Kunita-Watanabe inequality)
Let $\mathbf {X} \in \mathscr {S}(({\mathbb {R}^d})^{\otimes n})$ and $\mathbf {Y} \in \mathscr {S}(({\mathbb {R}^d})^{\otimes m})$ ; then the following estimate holds a.s.
where $c>0$ is a constant that only depends on d, m and n.
Proof. From the definition of the quadratic variation of tensor valued semimartingales in Section 2.4, we have
where the first estimate follows from the triangle inequality, the second estimate from the (scalar) Kunita-Watanabe inequality [Reference Protter57, Ch. II, Theorem 25] and the last two estimates follow from the standard estimate between the $1$ -norm and the $2$ -norm on $({\mathbb {R}^d})^{\otimes m}\cong \mathbb {R}^{d^m}$ .
In order to prove the next well-known lemma (Emery’s inequality), we need the following technical lemma.
Lemma 7.2. Let $\mathbf {A}\in \mathscr {V}(({\mathbb {R}^d})^{\otimes n})$ , $\mathbf {Y}\in \mathscr {D}(({\mathbb {R}^d})^{\otimes l})$ , $\mathbf {Z}\in \mathscr {D}(({\mathbb {R}^d})^{\otimes m})$ ; then it holds that
where the integration with respect to
denotes the integration with respect to the increasing one-dimensional path
. Further, let $\mathbf {Y}^{\prime }\in \mathscr {D}(({\mathbb {R}^d})^{\otimes l^{\prime }})$ and $\mathbf {Z}^{\prime }\in \mathscr {D}(({\mathbb {R}^d})^{\otimes m^{\prime }})$ , and let $(\mathbb {A}_t)_{0 \le t \le T}$ be a process taking values in $({\mathbb {R}^d})^{\otimes n}\otimes ({\mathbb {R}^d})^{\otimes n^{\prime }}$ such that $A^{w_1, w_2} \in \mathscr {V}$ for all $w_1, w_2\in \mathcal {W}_d$ with $|w_1|= n$ and $|w_2| = n^{\prime }$ . Then it holds that
where $(\mathbf {Y} \mathrm {Id} \mathbf {Y}^{\prime })(\mathbf {A}) = \mathbf {Y}\mathbf {A} \mathbf {Y}^{\prime }$ is the left- respectively right-multiplication by $\mathbf {Y}$ , respectively $\mathbf {Y}^{\prime }$ .
Proof. Let $0 \le s \le t \le T$ ; then it holds that
Indeed, as follows, for example, from [Reference Young62, Theorem on Stieltjes integrability], we can approximate the integral in the left-hand side by Riemann sums. Then for a partition $(t_i)_{i=1, \dotsc , k}$ of the interval $[s,t]$ , we have
where the last inequality follows from the fact that for homogeneous tensors $\mathbf {x}\in ({\mathbb {R}^d})^{\otimes m}$ and $\mathbf {y}\in ({\mathbb {R}^d})^{\otimes n}$ , it holds that
. Regarding the $1$ -variation, we then have
Regarding the second statement, we see that for any $0 \le s \le t \le T$ , we have
Indeed, we approximate the integral in the right-hand side again by a Riemann sum. Then for a partition $(t_i)_{i=1, \dotsc , k}$ of the interval $[s,t]$ , we have
where the last inequality follows from the definition of the norm on (homogeneous) tensors and the definition of the multiplication map m. We conclude analogously to the proof of the first statement.
Lemma 7.3 (Emery’s inequality)
Let $\mathbf {X}\in \mathscr {S}(({\mathbb {R}^d})^{\otimes n})$ , $\mathbf {Y}\in \mathscr {D}(({\mathbb {R}^d})^{\otimes l})$ and $\mathbf {Z}\in \mathscr {D}(({\mathbb {R}^d})^{\otimes m})$ ; then for $p, q\in [1,\infty )$ and $1/r = 1/p + 1/q$ , it holds that
where $c>0$ is a constant that only depends on d and m.
Proof. Let $\mathbf {X} = \mathbf {X}_0 + \mathbf {M} + \mathbf {A}$ be a semimartingale decomposition with $\mathbf {M}_0 = \mathbf {A}_0 = 0$ . Then it follows by definition of the $\mathscr {H}^{r}$ -norm and the above Lemma 7.2
where we have used the generalized Hölder inequality and the Kunita-Watanabe inequality (Lemma 7.1) to get to the last line. Taking the infimum over all semimartingale decompositions $\mathbf {M} + \mathbf {A}$ yields the statement.
The following technical lemma will be used in the proof of both Theorem 3.2 and Theorem 4.1.
Lemma 7.4. Let $\mathbf {X}, \mathbf {Y}\in \mathscr {S}(\mathcal {T\,}^{N})$ , $N\in {\mathbb {N}_{\ge 1}}$ , $q\in [1,\infty )$ , and assume that there exists a constant $c>0$ such that
where the summation is over $\ell = (l_1, \dots , l_j)\in ({\mathbb {N}_{\ge 1}})^{j}$ , $j\in {\mathbb {N}_{\ge 1}}$ , $\Vert \ell \Vert = l_1 + \dots + l_j$ . Then there exists a constant $C>0$ , depending only on c and N, such that
Proof. Note that for any $n\in \{1, \dotsc , N\}$ , it holds that
where $c_n>0$ is a constant depending only on n, and the second inequality follows from Young’s inequality for products. Hence by the above estimate and the assumption we have
where $C>0$ is a constant depending only on c and N.
7.1 Proof of Theorem 3.2
Proof. Denote by $\mathbf {S} = (\mathrm {Sig}(\mathbf {X})_{0,t})_{0\le t \le T}$ the signature process. We will first prove the upper inequality: that is, that there exists a constant $C>0$ depending only on d, N and q such that
According to Lemma 7.4, it is sufficient to show that for all $n \in \{1, \dots , N\}$ , it holds that
where $c_n>0$ is a constant (depending only on q, d and n). In the inequality above (and the rest of the proof), the summation is over $\ell = (l_1, \dots , l_j) \in ({\mathbb {N}_{\ge 1}})^{j}$, $j\in {\mathbb {N}_{\ge 1}}$ with $\Vert \ell \Vert = n$; in particular, the variable j is reserved for the length $|\ell |$ of the multiindex $\ell $ inside such summations. Note that $\Vert \ell \Vert = n$ implies that $l_i \le n$ for all $i = 1, \ldots , j$. It can easily be seen from the definition of the quantities $(\rho _{\mathbf {X}}^{1}, \dots , \rho _{\mathbf {X}}^{N})$ in equation (7.2) that they satisfy the following (‘cascading’) property for all $n = 1, \dots , N$
where $c^{\prime }$ and $c^{\prime \prime }$ are constants depending only on n. We are going to prove equation (7.2) inductively.
For $n=1$ , we have $\mathbf {S}^{(1)} = \mathbf {X}^{(1)} - \mathbf {X}^{(1)}_0 = \mathbf {X}^{(1)} \in \mathscr {H}^{qN}$ , and therefore the estimate follows immediately. Now, assume that equation (7.2) holds for all tensor levels up to some level $n-1$ with $n \in \{2, \dotsc , N\}$ . We will denote by $c^{\prime }, c^{\prime \prime }> 0$ constants that only depend on n, d and q. Then we have from equation (2.11)
For the first term in the above right-hand side, we have by Emery’s inequality (Lemma 7.3) the following estimate
where the last inequality follows from the induction hypothesis and equation (7.3). Further, from the Kunita-Watanabe inequality (Lemma 7.1) and the generalized Hölder inequality, it follows that for all $l_1, l_2\in {\mathbb {N}_{\ge 1}}$ with $l_1 + l_2 \le n$, we have
Then we have again by Emery’s inequality, the induction hypothesis and equation (7.3) that it holds
Finally, we have for the summation term
where the last inequality follows again by the Kunita-Watanabe inequality, the induction hypothesis and equation (7.3). Thus we have shown that equation (7.2) holds for all $n\in \{1, \dots , N\}$.
Now we will prove the lower inequality: that is, that there exists a constant $c>0$ depending only on d, N and q such that
To this end, define ${\bar {\mathbf {X}}}^{n} := (0, \mathbf {X}^{(1)}, \dots , \mathbf {X}^{(n)}, 0, \dots , 0)\in \mathscr {H}^{q,N}$, and note that it holds
Now assume that it holds
for some $n \in \{1, \dotsc ,N\}$ . It follows from the definition of the signature that
and further, we have from the upper bound in equation (7.1), which was already proven above, that
Then we have
where we have used equation (7.5) in the second line and equation (7.6) in the last line. Therefore, noting that ${\bar {\mathbf {X}}}^{N}=\mathbf {X}$ , the inequality in equation (7.4) follows by induction.
7.2 Proof of Theorem 4.1
We prepare the proof of Theorem 4.1 with a few more lemmas. Recall from Section 2.3.2 that $\exp _N\colon \mathcal {T}_0^{N} \to \mathcal {T}_1^{N}$ denotes the exponential map in the truncated tensor algebra $\mathcal {T\,}^{N}$ for some $N\in {\mathbb {N}_{\ge 1}}$ , which is given by the power series
where all concatenation products are understood in the truncated tensor algebra $\mathcal {T\,}^{N}_1$ , and the summation is over all $\ell = (l_1, \dots , l_k) \in ({\mathbb {N}_{\ge 1}})^{k}$ , $k\in {\mathbb {N}_{\ge 1}}$ with
and
. Also recall that $\log _N\colon \mathcal {T}_1^{N} \to \mathcal {T}_0^{N}$ denotes the logarithm in the truncated tensor algebra $\mathcal {T\,}^{N}$, which is analogously given by the corresponding $\log $-power series.
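For concreteness, the truncated exponential and logarithm are straightforward to realize on a computer. The following minimal sketch (our illustration; words are encoded as integer tuples, and all names are ours) implements $\exp _N$ and $\log _N$ as finite power series and checks that $\log _N \circ \exp _N = \mathrm {id}$ on $\mathcal {T}_0^{N}$:

```python
# Truncated tensor algebra T^N(R^d) with words as tuple keys:
# concatenation product, exp_N and log_N as finite power series.
from collections import defaultdict

d, N = 2, 4

def mul(A, B):
    """Concatenation product, truncated beyond level N."""
    C = defaultdict(float)
    for w1, c1 in A.items():
        for w2, c2 in B.items():
            if len(w1) + len(w2) <= N:
                C[w1 + w2] += c1 * c2
    return dict(C)

def add(A, B, s=1.0):
    """A + s * B, coefficientwise."""
    C = defaultdict(float, A)
    for w, c in B.items():
        C[w] += s * c
    return dict(C)

def exp_N(X):
    """exp of X in T_0^N (zero scalar component): sum_k X^k / k!."""
    out, term = {(): 1.0}, {(): 1.0}
    for k in range(1, N + 1):
        term = {w: c / k for w, c in mul(term, X).items()}
        out = add(out, term)
    return out

def log_N(Y):
    """log of Y in T_1^N (unit scalar component): sum_k (-1)^(k+1) (Y-1)^k / k."""
    Z = add(Y, {(): 1.0}, s=-1.0)
    out, term = {}, {(): 1.0}
    for k in range(1, N + 1):
        term = mul(term, Z)
        out = add(out, term, s=(-1.0) ** (k + 1) / k)
    return out

x = {(1,): 0.5, (2,): -1.0, (1, 2): 0.3}  # an element of T_0^N
back = log_N(exp_N(x))
print(all(abs(back.get(w, 0.0) - x.get(w, 0.0)) < 1e-12
          for w in set(back) | set(x)))  # True
```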
Lemma 7.5. Let $N \in {\mathbb {N}_{\ge 1}}$ ; then we have the following directional derivatives of the truncated exponential map $\exp _N\colon \mathcal {T}_0^{N} \to \mathcal {T}_1^{N}$
for all words $w, w' \in \mathcal {W}_d$ with $1\le |w|, |w'| \le N$ , where G is defined in equation (4.1) and the operator $\tilde {Q}$ is given by
Proof. For all $w\in \mathcal {W}_d$ with $0 \le |w| \le N$ and $\mathbf {x}\in \mathcal {T}_0^{N}$ , the expression $\exp _N(\mathbf {x})^{w}$ is a polynomial in the tensor components $(\mathbf {x}^{v})_{1 \le |v| \le |w|}$ . Therefore the map $\exp _N\colon \mathcal {T}_0^{N} \to \mathcal {T\,}^{N}_1$ is smooth, and in particular the first- and second-order partial derivatives exist in all directions. For a proof of the explicit form of the first-order partial derivatives, we refer to [Reference Friz and Victoir27, Lemma 7.23]. For the second-order derivatives, we follow the proof of [Reference Kamm, Pagliarani and Pascucci37, Lemma A.1]. To this end, let $\mathbf {x}\in \mathcal {T}_0^{N}$ and let $w, {w^{\prime }}$ be arbitrary words with $1\le |w|, |{w^{\prime }}| \le N$ . Then we have by the definition of the partial derivatives in $\mathcal {T}_0^{N}$ and the product rule
From [Reference Friz and Victoir27, Lemma 7.22], it holds that $\exp _N(\operatorname {\mathrm {ad}}{\mathbf {x}})({y}) = \exp _N(\mathbf {x}) y \exp _N(-\mathbf {x})$ for all $\mathbf {x},y\in \mathcal {T}_0^{N}$ , and it follows further by representing G in integral form that
Then the proof is finished after noting that
Note that the operator Q defined in equation (4.1) differs from the operator $\tilde {Q}$ defined above. However, we have the following:
Lemma 7.6. Let $N\in {\mathbb {N}_{\ge 1}}$ and $\mathbf {x}\in \mathcal {T}_0^{N}$ ; then it holds that
for all $\mathbb {A}\in \mathcal {T}_0^{N}\otimes \mathcal {T}_0^{N}$ with symmetric coefficients $\mathbb {A}^{w_1, w_2} = \mathbb {A}^{w_2, w_1}$ for all $w_1, w_2 \in \mathcal {W}_d$ .
Proof. Let $N\in {\mathbb {N}_{\ge 1}}$ and $\mathbf {x}\in \mathcal {T}_0^N$ be arbitrary. Then from the bilinearity of $\tilde {Q}(\operatorname {\mathrm {ad}}{\mathbf {x}})$ and the symmetry of $\mathbb {A}$ , we have, with summation over all words $w_1, w_2$ with $1 \le |w_1|,|w_2|\le N$ ,
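In coordinates, the mechanism is the elementary symmetrization identity: for any bilinear expression B and symmetric coefficients $\mathbb {A}^{w_1, w_2} = \mathbb {A}^{w_2, w_1}$ ,
\[
\sum _{w_1, w_2} \mathbb {A}^{w_1, w_2}\, B(w_1, w_2) \;=\; \sum _{w_1, w_2} \mathbb {A}^{w_1, w_2}\, \tfrac {1}{2} \big ( B(w_1, w_2) + B(w_2, w_1) \big ),
\]
applied to the bilinear map underlying $\tilde {Q}(\operatorname {\mathrm {ad}}{\mathbf {x}})$ .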
The following two applications of Itô’s formula in the non-commutative setting will be key ingredients in the proof of Theorem 4.1.
Lemma 7.7 (Itô’s product rule)
Let $\mathbf {X}, \mathbf {Y} \in \mathscr {S}(\mathcal {T}_1^N)$ for some $N\in {\mathbb {N}_{\ge 1}}$ ; then it holds that
Proof. The statement is an immediate consequence of the one-dimensional Itô product rule for càdlàg semimartingales (e.g., [Reference Protter57, Ch. II, Corollary 2]) and the definition of the outer bracket and the multiplication map in Section 2.4.
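In integral notation, and with the order of the non-commutative products kept as written, the identity takes the familiar form (a schematic statement; the paper’s outer bracket plays the role of the quadratic covariation):
\[
\mathbf {X}_t \mathbf {Y}_t \;=\; \mathbf {X}_0 \mathbf {Y}_0 + \int _0^t \mathbf {X}_{s-}\, d\mathbf {Y}_s + \int _0^t (d\mathbf {X}_s)\, \mathbf {Y}_{s-} + [\mathbf {X}, \mathbf {Y}]_t, \qquad 0 \le t \le T.
\]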
Lemma 7.8. Let $\mathbf {X} \in \mathscr {S}(\mathcal {T}_0^{N})$ for some $N\in {\mathbb {N}_{\ge 1}}$ ; then it holds that
for all $0 \le t \le T$ .
Proof. As discussed in the proof of Lemma 7.5, it is clear that the map $\exp _N\colon \mathcal {T}_0^{N} \to \mathcal {T}_1^{N}$ is smooth. Further, $\mathcal {T}_0^{N}$ is isomorphic to $\mathbb {R}^{D}$ with $D = d + \dotsb + d^{N}$ , and we can apply the multidimensional Itô formula for càdlàg semimartingales (e.g., [Reference Protter57, Ch. II, Theorem 33]) to obtain
for all $0 \le t \le T$ . From Lemma 7.5 we then have for the first integral term
and analogously
Moreover, from Lemma 7.5 and the definition of the outer bracket in Section 2.4,
Finally, the outer bracket
is symmetric in the sense of Lemma 7.6, and therefore we can replace $\tilde {Q}$ with Q in the above identity.
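For reference, the classical formula invoked at the start of this proof reads, for $f \in C^2(\mathbb {R}^{D})$ and an $\mathbb {R}^{D}$ -valued càdlàg semimartingale X (see [Reference Protter57, Ch. II, Theorem 33]),
\[
f(X_t) = f(X_0) + \sum _{i} \int _0^t \partial _i f(X_{s-})\, dX^{i}_s + \frac {1}{2} \sum _{i,j} \int _0^t \partial ^2_{ij} f(X_{s-})\, d\langle X^{i,c}, X^{j,c} \rangle _s + \sum _{0 < s \le t} \Big ( f(X_s) - f(X_{s-}) - \sum _{i} \partial _i f(X_{s-})\, \Delta X^{i}_s \Big ).
\]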
Lemma 7.9. Let $\mathbf {X} \in \mathscr {S}(\mathcal {T}_0)$ , and let $\mathbf {A}\in \mathscr {V}(\mathcal {T}_0)$ . For all $k \in {\mathbb {N}_{\ge 1}}$ and $\ell = (l_1, \dotsc , l_k)\in ({\mathbb {N}_{\ge 1}})^k$ , it holds that
for all $0 \le t \le T$ . Furthermore, let $(\mathbb {A}_t)_{0 \le t \le T}$ be a process taking values in $\mathcal {T}_0\otimes \mathcal {T}_0$ such that $\mathbb {A}^{w_1,w_2}\in \mathscr {V}$ for all $w_1, w_2 \in \mathcal {W}_d$ . Then it holds that for all $0 \le t \le T$
Proof. Recall from equation (2.3) that iterated adjoint operations can be expanded into a sum of left and right tensor multiplications, to which we apply Lemma 7.2. Note again that for homogeneous tensors $\mathbf {x}$ and $\mathbf {y}$ , it holds that $|\mathbf {x}\mathbf {y}| = |\mathbf {x}|\,|\mathbf {y}|$ . Therefore the statement follows by counting the terms in the expansion.
Lemma 7.10. Let $N\in {\mathbb {N}_{\ge 1}}$ , $\Delta \mathbf {x},\mathbf {y},\Delta \mathbf {y} \in \mathcal {T}_0^{N}$ , and define the function
Then $f(0,0) = 1$ , and the first-order partial derivatives of f at $(s,t)=(0,0)$ are given by
Further, the following explicit bound for the second-order partial derivatives holds
for all $n \in \{2, \dotsc , N\}$ , where $c_n> 0$ is a constant depending only on n, $\ell =(l_1,\dotsc ,l_k)\in ({\mathbb {N}_{\ge 1}})^{k}$ with $|\ell | = k$ and $z_l := \max \{|\Delta \mathbf {x}^{(l)}|, |\mathbf {y}^{(l)}|, |(\mathbf {y}+\Delta \mathbf {y})^{(l)}|\}$ for all $l\in \{1, \dotsc , N-2\}$ .
Proof. The tensor components of $f(s,t)$ are polynomial in s and t, and it follows that f is smooth. From Lemma 7.5, we have that the first-order partial derivatives of f are given by
Evaluating at $s=t=0$ , we obtain the first result. Further, by Lemma 7.5 the second-order derivatives are given by
Now let $n \in \{2, \dotsc , N\}$ . Then it follows from above and Lemma 7.9 that we can bound the second-order derivatives as follows
and
where $c_n^{\prime }, c_n^{\prime \prime }, c_n^{\prime \prime \prime }>0$ are constants depending only on n, and the second statement of the lemma follows.
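For context, these derivative bounds enter the proof of Theorem 4.1 through a second-order Taylor expansion of f from $(0,0)$ to $(1,1)$ ; schematically,
\[
f(1,1) = f(0,0) + \partial _s f(0,0) + \partial _t f(0,0) + R, \qquad |R^{(n)}| \le \frac {1}{2} \sup _{(s,t) \in [0,1]^2} \Big ( |\partial ^2_{s} f(s,t)^{(n)}| + 2\, |\partial _s \partial _t f(s,t)^{(n)}| + |\partial ^2_{t} f(s,t)^{(n)}| \Big ),
\]
so that the remainder is controlled by the bound of the lemma.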
Lemma 7.11. For all $N \in {\mathbb {N}_{\ge 1}}$ and all $\mathbf {x} \in \mathcal {T}_0^{N}$ , it holds that
where G and H are defined in equation (4.1). Hence, the identity also holds for all $\mathbf {x}\in \mathcal {T}_0$ .
Proof. Recall the exponential generating function of the Bernoulli numbers, for z near $0$ ,
Therefore $H(z)G(z) = 1$ for all z in a neighbourhood of zero. Repeated differentiation in z then yields the following property of the Bernoulli numbers
Hence the statement of the lemma follows by projecting $H(\operatorname {\mathrm {ad}} \mathbf {x}) \circ G(\operatorname {\mathrm {ad}} \mathbf {x})$ to each tensor level.
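Explicitly, with the standard normalizations (assuming, consistently with Lemma 7.11, that $G(z) = (e^{z}-1)/z$ and $H(z) = z/(e^{z}-1) = \sum _{k \ge 0} B_k z^{k}/k!$ ), the identity $H(z)G(z) = 1$ is equivalent to the classical Bernoulli recurrence
\[
B_0 = 1, \qquad \sum _{j=0}^{n} \binom {n+1}{j} B_j = 0 \quad \text {for all } n \ge 1.
\]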
We are now ready to give the
Proof of Theorem 4.1
Note that $\pi _{(0,N)}\mathrm {Sig}(\mathbf {X}) = \mathrm {Sig}(\mathbf {X}^{(0,N)})$ for any $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0)$ and all truncation levels $N\in {\mathbb {N}_{\ge 1}}$ . Therefore it suffices to show that the identities in equations (4.2) and (4.3) hold for the signature cumulant of an arbitrary $\mathcal {T\,}^{N}_0$ -valued semimartingale $\mathbf {X}\in \mathscr {S}(\mathcal {T}_0^{N})$ that satisfies the integrability condition $\vert \mkern -2.5mu\vert \mkern -2.5mu\vert {\mathbf {X}}\vert \mkern -2.5mu\vert \mkern -2.5mu\vert _{\mathscr {H}^{1,N}} < \infty $ . Recall from Theorem 3.2 that this implies that $\vert \mkern -2.5mu\vert \mkern -2.5mu\vert {\mathrm {Sig}(\mathbf {X})}\vert \mkern -2.5mu\vert \mkern -2.5mu\vert _{\mathscr {H}^{1,N}} < \infty $ , and thus the truncated signature cumulant ${\boldsymbol {\kappa }} = (\log _N\mathbb {E}_t(\mathrm {Sig}(\mathbf {X})_{t,T}))_{0 \le t \le T} \in \mathscr {S}(\mathcal {T}_0^N)$ is well defined. Throughout the proof, we will use the symbol $\lesssim $ to denote an inequality that holds up to a multiplication of the right-hand side by a constant that may depend only on d and N.
Recall the definition of the signature in the Marcus sense from Section 2.6. Projecting equation (2.11) to the truncated tensor algebra, we see that the signature process ${S} = (\mathrm {Sig}(\mathbf {X})_{0,t})_{0 \le t \le T} \in \mathscr {S}(\mathcal {T}^N_1)$ satisfies the integral equation
for $0 \le t \le T$ . Then by Chen’s relation in equation (2.12), we have
It then follows from the above identity and the integrability of ${S}_{T}$ that the process ${S}\exp _N({{\boldsymbol {\kappa }}})$ is a $\mathcal {T\,}^{N}_1$ -valued martingale in the sense of Section 2.4. On the other hand, we have by applying Itô’s product rule in Lemma 7.7
Further, by applying Itô’s formula for the exponential map from Lemma 7.8 to the $\mathcal {T}_0^{N}$ -valued semimartingale ${\boldsymbol {\kappa }}$ and using equation (7.7), we have the following form of the continuous covariation term
and for the jump covariation term
From the above identities and again with Lemma 7.8 and equation (7.7), we have
where $\mathbf {L} \in \mathscr {S}(\mathcal {T}_0^N)$ is defined by
with $\mathbf {Y}\in \mathscr {S}(\mathcal {T}_0^{N})$ , $\mathbf {V}, \mathbf {C}, \mathbf {J} \in \mathscr {V}(\mathcal {T}_0^{N})$ given by
Note that we have explicitly separated the identity operator $\mathrm {Id}$ from G in the above definition of $\mathbf {L}$ .
Since the left-hand side in equation (7.8) is a martingale, and since ${S}_t$ and $\exp ({\boldsymbol {\kappa }}_t)$ admit the multiplicative left and right inverses ${S}_t^{-1}$ and $\exp (-{\boldsymbol {\kappa }}_t)$ , respectively, for all $0\le t \le T$ , it follows that $\mathbf {L} + {\boldsymbol {\kappa }}$ is a $\mathcal {T}_0^N$ -valued local martingale. Let $(\tau _k)_{k\ge 1}$ be an increasing sequence of stopping times with $\tau _k \to T$ a.s. for $k \to \infty $ , such that the stopped process $(\mathbf {L}_{t \wedge \tau _k} + {\boldsymbol {\kappa }}_{t\wedge \tau _k})_{0\le t\le T}$ is a true martingale. Then we have
The estimate in equation (7.11) below shows that $\mathbf {L}$ has sufficient integrability to apply the dominated convergence theorem and pass to the $k \to \infty $ limit in the identity in equation (7.10) above, which yields precisely the identity in equation (4.2) (recall that ${\boldsymbol {\kappa }}_T = 0$ ) and hence concludes the first part of the proof of Theorem 4.1.
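To spell out this final step (a sketch, under the paper’s notation): by Chen’s relation, $\mathbb {E}_t[{S}_T] = {S}_t\, \mathbb {E}_t[\mathrm {Sig}(\mathbf {X})_{t,T}] = {S}_t \exp _N({\boldsymbol {\kappa }}_t)$ , which is the martingale property used above; combining the limit of equation (7.10) with ${\boldsymbol {\kappa }}_T = 0$ then gives
\[
{\boldsymbol {\kappa }}_t = \mathbb {E}_t \big [ \mathbf {L}_T - \mathbf {L}_t \big ], \qquad 0 \le t \le T,
\]
which is the shape of the identity in equation (4.2).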
Claim 7.12. It holds that
Proof of Claim 7.12
According to Lemma 7.4, it suffices to show that for all $n\in \{1, \dotsc , N\}$ , it holds that
where the summation above (and in the rest of the proof) is over multi-indices $\ell = (l_1, \dotsc , l_j)\in ({\mathbb {N}_{\ge 1}})^j$ , with $|\ell |=j$ and $\Vert \ell \Vert = l_1 + \dotsb + l_j$ . For $\mathbf {M}\in \mathscr {M}_{\mathrm {loc}}(\mathcal {T}_0^{N})$ and $\mathbf {A}\in \mathscr {V}(\mathcal {T}_0^{N})$ define
where for any $q\in [1,\infty )$
Note that it holds
Furthermore, it follows from the definition of the $\mathscr {H}^{q}$ -norm that
where the infimum is taken over all semimartingale decompositions of $\mathbf {X}$ .
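For orientation, in the scalar case the classical $\mathscr {H}^{q}$ -norm of a semimartingale X reads (see, e.g., [Reference Protter57, Ch. V])
\[
\Vert X \Vert _{\mathscr {H}^{q}} = \inf _{X = X_0 + M + A} \Big \Vert \, [M]_T^{1/2} + \int _0^T |dA_s| \, \Big \Vert _{L^q},
\]
with the infimum over all decompositions into a local martingale M and an adapted finite-variation process A.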
Now fix $\mathbf {M}\in \mathscr {M}_{\mathrm {loc}}(\mathcal {T}_0^{N})$ and $\mathbf {A}\in \mathscr {V}(\mathcal {T}_0^{N})$ arbitrarily, such that $\mathbf {X} = \mathbf {M} + \mathbf {A}$ and ${{\rho }_{\mathbf {M}, \mathbf {A}}}^{n}<\infty $ for all $n\in \{1, \dotsc , N\}$ (such a decomposition always exists since $\mathbf {X} \in \mathscr {H}^{1,N}$ ). In particular, it holds that $\mathbf {M}$ is a true martingale. Next we will prove the following.
Claim 7.13. For all $n \in \{1, \dots , N\}$ , it holds that
and further, there exists a semimartingale decomposition $\boldsymbol {\kappa }^{(n)} = \boldsymbol {\kappa }_0^{(n)} + \mathbf {m}^{(n)} + \mathbf {a}^{(n)}$ , with $\mathbf {m}^{(n)}\in \mathscr {M}(({\mathbb {R}^d})^{\otimes n})$ and $\mathbf {a}^{(n)}\in \mathscr {V}(({\mathbb {R}^d})^{\otimes n})$ such that $ \left \Vert {\mathbf {a}^{(n)}} \right \Vert _{\mathscr {V}^{{N/n}}} \lesssim {{\rho }_{\mathbf {M}, \mathbf {A}}}^{n}$ ; and in case $n \le N-1$ , it holds that
Proof of Claim 7.13
We are going to proceed by induction over $n \in \{1, \dots , N\}$ . Let $n=1$ , and note that we have $\mathbf {L}^{(1)} = \mathbf {X}^{(1)}$ , and therefore
Using that $\mathbf {M}^{(1)}$ is a martingale, we can identify a semimartingale decomposition of $\boldsymbol {\kappa }^{(1)}$ by
In case $N\ge 2$ , we further have from the BDG-inequality and Doob’s maximal inequality that
and this shows the second part of the induction claim.
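The two classical estimates just used read, in scalar form: for a càdlàg local martingale M with $M_0 = 0$ and $p \ge 1$ (Burkholder-Davis-Gundy), respectively a càdlàg martingale and $p> 1$ (Doob),
\[
c_p\, \mathbb {E}\big [ [M]_T^{p/2} \big ] \le \mathbb {E}\Big [ \sup _{0 \le t \le T} |M_t|^{p} \Big ] \le C_p\, \mathbb {E}\big [ [M]_T^{p/2} \big ], \qquad \Big \Vert \sup _{0 \le t \le T} |M_t| \Big \Vert _{L^p} \le \frac {p}{p-1}\, \Vert M_T \Vert _{L^p}.
\]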
Now assume that $N \ge 2$ and that the induction claim in equations (7.14) and (7.15) holds true up to level $n-1$ for some $n \in \{2, \dotsc , N\}$ . Note that $\mathbf {L}^{(n)}$ has the following decomposition:
where $\mathbf {N}^{(n)} \in \mathscr {M}_{\mathrm {loc}}((\mathbb {R}^{d})^{\otimes n})$ and $\mathbf {B}^{(n)} \in \mathscr {V}((\mathbb {R}^{d})^{\otimes n})$ form a decomposition of $\mathbf {Y}^{(n)} \in \mathscr {S}((\mathbb {R}^{d})^{\otimes n})$ defined by
with $\overline {\mathbf {a}} = \pi _{(0,N)}(\mathbf {a}^{(1)} + \dots + \mathbf {a}^{(n-1)})\in \mathscr {V}(\mathcal {T}_0^{N})$ and $\overline {\mathbf {m}} = \pi _{(0,N)}(\mathbf {m}^{(1)} + \dotsb + \mathbf {m}^{(n-1)})\in \mathscr {M}(\mathcal {T}_0^{N})$ .
From Lemma 7.1 and the generalized Hölder inequality, we have
It follows from equation (7.15) and the induction hypothesis that for all $l \in \{1, \dotsc , n-1\}$ , it holds that
and further that
From the definition and linearity of $Q(\operatorname {\mathrm {ad}}\mathbf {x})$ ( $\mathbf {x}\in \mathcal {T}_0^N$ ) and Lemmas 7.1 and 7.9, we have the following estimate:
It then follows from the generalized Hölder inequality
where the second inequality follows from the induction hypothesis and the estimates in equations (7.19) and (7.15), noting that $\Vert \ell \Vert =n$ together with $|\ell | = j \ge 2$ implies that $l_1, \dotsc , l_j \le n-1$ , and the third inequality follows from equation (7.12). From similar arguments, we see that the following two estimates also hold:
and
For the local martingale $\mathbf {N}^{(n)}$ , we use Lemmas 7.1 and 7.9 to estimate its quadratic variation as follows:
Then it follows once again by the generalized Hölder inequality and the induction hypothesis that
Finally, let us treat the term $\mathbf {J}^{(n)}$ . First define
From equation (7.19), it follows that for all $l \in \{1, \dotsc , n-1\}$ , it holds that
Then by Taylor’s theorem and Lemma 7.10, we have
Hence it follows by the generalized Hölder inequality that
where the last estimate follows from equations (7.24), (7.18) and (7.12).
Summarizing the estimates in equations (7.20), (7.21), (7.22), (7.23) and (7.25), we have
which proves the first part of the induction claim in equation (7.14). It then follows from the dominated convergence theorem that projecting equation (7.10) onto tensor level n and passing to the $k\to \infty $ limit yields
Since $\mathbf {M}^{(n)}$ and $\mathbf {N}^{(n)}$ are true martingales (for the latter, this follows from equation (7.23)), we are able to identify a decomposition $\boldsymbol {\kappa }^{(n)} = \boldsymbol {\kappa }^{(n)}_0 + \mathbf {m}^{(n)} + \mathbf {a}^{(n)}$ by
Again from the estimates in equations (7.20), (7.21), (7.23) and (7.25), it follows that
and in case $n \le N - 1$ , it follows from the BDG-inequality and Doob’s maximal inequality that
which proves the second part of the induction claim in equation (7.15).
The estimate in equation (7.11) immediately follows from equations (7.13) and (7.14), which finishes the proof of Claim 7.12.
Note that since $\mathbf {Y} = \mathbf {N} + \mathbf {B}$ and $\left \langle \mathbf {X}^{c} \right \rangle $ , $\mathbf {V}$ , $\mathbf {C}, \mathbf {J}$ are independent of the decomposition $\mathbf {X} = \mathbf {M} + \mathbf {A}$ , it follows from taking the infimum over all such decompositions in the inequality in equation (7.26) that
for all $n \in \{1, \dotsc , N\}$ . The same argument applied to ${\boldsymbol {\kappa }}$ and the estimate in equation (7.18) yields
for all $n\in \{1,\dots , N-1\}$ .
Next we are going to show that ${\boldsymbol {\kappa }}$ satisfies the functional equation (4.3). Recall that $\mathbf {L} + {\boldsymbol {\kappa }} \in \mathscr {M}_{\mathrm {loc}}(\mathcal {T}_0^{N})$ . From Lemma 7.11, we have the following equality
for all $0\le t \le T$ , where
From Lemma 7.3 (Emery’s inequality) and the estimates in equations (7.27) and (7.28), it follows
Hence, by Lemma 7.4, it holds that
Now note that we have already shown in Claim 7.13 that ${\boldsymbol {\kappa }} = {\boldsymbol {\kappa }}_0 + \mathbf {m} + \mathbf {a}$ , where $\mathbf {m}\in \mathscr {M}(\mathcal {T}_0^{N})$ and $\mathbf {a}\in \mathscr {V}(\mathcal {T}_0^{N})$ satisfy $ \left \Vert {\mathbf {a}^{(n)}} \right \Vert _{\mathscr {V}^{N/n}} <\infty $ for all $n \in \{1, \dotsc , N\}$ . Together with the above estimate, it then follows that $\boldsymbol {\kappa } + \widetilde {\mathbf {L}}$ is indeed a true martingale and therefore
which is precisely the identity in equation (4.3).
Acknowledgments
The authors thank the anonymous referees for the careful reading.
Funding statement
PKF has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement No. 683164) and the DFG Research Unit FOR 2402. PH and NT have been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – The Berlin Mathematics Research Center MATH+ (EXC-2046/1, project ID: 390685689).
Conflicts of Interest
None.