1. Introduction
1.1. Background and motivation
Over the last four decades there has been considerable interest in spatial branching models, including branching random walks, branching Brownian motion, and superprocesses. During the last three decades, branching models with interactions have been studied very extensively, both for continuous-state models and for particle models. We provide here a partial list of branching models with interactions that have been studied in the literature.
Models with catalytic branching, where one population catalyzes another, were studied in [Reference Dawson and Fleischmann13, Reference Dawson, Fleischmann and Mueller14, Reference Li and Ma30]. For measure-valued diffusions with mutual catalytic branching, see [Reference Dawson, Etheridge, Fleischmann, Mytnik, Perkins and Xiong11, Reference Dawson, Etheridge, Fleischmann, Mytnik, Perkins and Xiong12, Reference Dawson, Fleischmann, Mytnik, Perkins and Xiong15, Reference Dawson and Perkins16]. Models with symbiotic branching (models with a correlation between the branching laws of two populations) were investigated in [Reference Avneri and Mytnik1, Reference Blath, Döring and Etheridge3–Reference Blath and Ortgiese5, Reference Etheridge and Fleischmann20, Reference Glöde and Mytnik22, Reference Hammer, Ortgiese and Völlering23], among others. Infinite rate branching models were introduced in [Reference Klenke and Mytnik26, Reference Klenke and Oeler29] and studied later in [Reference Döring, Klenke and Mytnik17–Reference Döring and Mytnik19, Reference Klenke and Mytnik27, Reference Klenke and Mytnik28]. In addition, various particle models were introduced in [Reference Birkner2, Reference Kesten and Sidoravicius25].
Let us say a few words about the mutually catalytic branching model in the continuous-state setting introduced in [Reference Dawson and Perkins16].
Dawson and Perkins [Reference Dawson and Perkins16] constructed the model with
$\mathbb{Z}^{d}$
being a space of sites and
$(u,v)\in\mathbb{R}_{+}^{\mathbb{Z}^{d}}\times\mathbb{R}_{+}^{\mathbb{Z}^{d}}$
a pair of populations which undergo random migration and continuous-state mutually catalytic branching. The random migration is described by a
$\mathbb{Z}^{d}$
-valued Markov chain with the associated Q-matrix,
${{Q=(q_{ij})}}$
, subject to certain technical conditions on Q and associated transition probabilities (see [Reference Dawson and Perkins16, p. 1090] and, in particular, (
$H_0$
), (
$H_1$
), and (
$H_2$
) there). The branching rate of one population at a site is proportional to the mass of the other population at the site. The system is modeled by the following infinite system of stochastic differential equations (SDEs):
\begin{equation}\begin{cases}\begin{array}{c}u_{t}(x)=u_{0}(x)+\int_{0}^{t}u_{s}Q(x)\,{\mathrm{d}} s+\int_{0}^{t}\sqrt{\vphantom{\sum}\widetilde{\gamma}u_{s}(x)v_{s}(x)}\,{\mathrm{d}} B_{s}^{x},\quad t\geq0,\ x\in\mathbb{Z}^{d},\\[11pt]v_{t}(x)=v_{0}(x)+\int_{0}^{t}v_{s}Q(x)\,{\mathrm{d}} s+\int_{0}^{t}\sqrt{\vphantom{\sum}\widetilde{\gamma}u_{s}(x)v_{s}(x)}\,{\mathrm{d}} W_{s}^{x},\quad t\geq0,\ x\in\mathbb{Z}^{d},\end{array}\end{cases}\end{equation}
where
$\{B_{s}^{x}\}_{x\in\mathbb{Z}^{d}},\,\{W_{s}^{x}\}_{x\in\mathbb{Z}^{d}}$
are collections of one-dimensional independent Brownian motions, and
$\tilde{\gamma}>0$
.
One of the main questions raised in [Reference Dawson and Perkins16] is that of coexistence and noncoexistence of types in the long run. In particular, it was proved that there is a clear dichotomy: coexistence is possible if the migration is transient, that is, in dimensions
$d\geq 3$
, and is impossible if the migration is recurrent, that is if
$d\leq 2$
.
The above model is a particular case of the so-called ‘interacting mutually catalytic diffusions’ studied by many authors in different settings; see, for example, [Reference Theodore Cox, Klenke and Perkins8, Reference Dawson, Etheridge, Fleischmann, Mytnik, Perkins and Xiong11, Reference Mytnik33].
Cox et al. [Reference Cox, Dawson and Greven6] analyzed the behavior of the Dawson–Perkins system with a very large but finite space of sites in comparison with the corresponding model with an infinite space of sites. This type of question arises if, for example, one is interested in determining what simulations of finite systems can say about the corresponding infinite spatial system. In [Reference Cox, Dawson and Greven6], the authors considered a sequence of finite subsets of
$\mathbb{Z}^{d}$
increasing to the whole
$\mathbb{Z}^{d}$
and checked the limiting behavior of the mutually catalytic models restricted to these sets, while time is also suitably rescaled. This is called a ‘finite system scheme’. The concept appeared in [Reference Cox, Greven and Shiga7, Reference Cox, Greven and Shiga9, Reference Cox and Greven10].
To formulate a result from [Reference Cox, Dawson and Greven6] we need to introduce the following construction. Fix
$n\in\mathbb{N}$
. Define
$\Lambda_{n}=\left[-n,n\right]^{d}\cap\mathbb{Z}^{d}$
. Let
$Q=\left(q(i,j)\right)_{i,j\in\mathbb{Z}^{d}}$
be the Q-matrix of a
$\mathbb{Z}^{d}$
-valued Markov chain. Define
$Q^{n}=\left(q^{n}(i,j)\right)_{i,j\in\Lambda_{n}}$
as follows
where
Consider a process that solves the Dawson–Perkins equations (1.1) with the state space being the torus
$\Lambda_{n}$
, i.e.
\begin{equation}\begin{cases}\begin{array}{c}u_{t}^{n}(x)=u_{0}^{n}(x)+\int_{0}^{t}u_{s}^{n}Q^{n}(x)\,{\mathrm{d}} s+\int_{0}^{t}\sqrt{\vphantom{\sum}\widetilde{\gamma}u_{s}^{n}(x)v_{s}^{n}(x)}\,{\mathrm{d}} B_{s}^{x},\quad t\geq0,\ x\in\Lambda_{n},\\[11pt]v_{t}^{n}(x)=v_{0}^{n}(x)+\int_{0}^{t}v_{s}^{n}Q^{n}(x)\,{\mathrm{d}} s+\int_{0}^{t}\sqrt{\vphantom{\sum}\widetilde{\gamma}u_{s}^{n}(x)v_{s}^{n}(x)}\,{\mathrm{d}} W_{s}^{x},\quad t\geq0,\ x\in\Lambda_{n},\end{array}\end{cases}\end{equation}
where
$\{B_{s}^{x}\}_{x\in\Lambda_{n}},\,\{W_{s}^{x}\}_{x\in\Lambda_{n}}$
are collections of one-dimensional independent Brownian motions, and
$\tilde{\gamma}>0$
.
We denote such a process by
$(U_{t}^{n},V_{t}^{n})_{t\geq0}\,:\!=\,((u_{t}^{n}(x))_{x\in\Lambda_{n}},(v_{t}^{n}(x))_{x\in\Lambda_{n}})_{t\geq0}$
Consider a time scaling that depends on the system size, and define the renormalized process
\begin{equation}D_{n}((U_{\cdot}^{n},V_{\cdot}^{n}))=\big(D_{n}^{1},D_{n}^{2}\big)=\frac{1}{\left|\Lambda_{n}\right|}\left(\sum_{x\in\Lambda_{n}}u_{\cdot}^{n}(x),\sum_{x\in\Lambda_{n}}v_{\cdot}^{n}(x)\right).\end{equation}
In what follows
$\mathcal{L}({\cdot})$
denotes the law of a random variable or process.
It was shown in [Reference Cox, Dawson and Greven6] that in dimensions
$d\geq 3$
the sequence of
$D_{n}$
-processes with suitably rescaled time is tight and converges to a diffusion.
Theorem 1.1. (Theorem 1(a) in [Reference Cox, Dawson and Greven6]). Let
$d\geq3$
, and let Q be the generator of a simple random walk on
$\mathbb{Z}^{d}$
. Assume that
Then
where
$\left(X_{t},Y_{t}\right)_{t\geq0}$
is the unique weak solution for the following system of SDEs:
\begin{equation}\begin{cases}{\mathrm{d}} X_{t} =\sqrt{\tilde{\gamma}X_{t}Y_{t}}\,{\mathrm{d}} w^{1}(t),\quad t\geq0,\\[5pt]{\mathrm{d}} Y_{t} =\sqrt{\tilde{\gamma}X_{t}Y_{t}}\,{\mathrm{d}} w^{2}(t),\quad t\geq0,\end{cases}\end{equation}
with initial conditions
$(X_{0},Y_{0})=\bar{\theta}=(\theta_1,\theta_2)$
, where
$w^{1},\, w^{2}$
are two independent standard Brownian motions.
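To make the limit object concrete, the following is a minimal Euler–Maruyama sketch for the pair of SDEs in Theorem 1.1 above (a sketch of ours for illustration, not taken from [Reference Cox, Dawson and Greven6]; the function name, step size, and the truncation at zero are our own assumptions). The same scheme applies to the limit system (2.9) of Theorem 2.3 below upon replacing $\tilde{\gamma}$ by $\gamma\sigma^{2}$.

```python
import numpy as np

def simulate_mcb_diffusion(theta1, theta2, gamma_t=1.0, T=1.0, dt=1e-4, seed=0):
    """Euler-Maruyama sketch for dX = sqrt(gamma_t*X*Y) dw1, dY = sqrt(gamma_t*X*Y) dw2.

    Truncation at zero is an ad hoc device keeping both coordinates
    non-negative; once X*Y = 0 the diffusion coefficient vanishes and
    the pair is frozen, mirroring the absorption of the true process.
    """
    rng = np.random.default_rng(seed)
    x, y = float(theta1), float(theta2)
    for _ in range(int(T / dt)):
        vol = np.sqrt(gamma_t * x * y)          # common diffusion coefficient
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x = max(x + vol * dw1, 0.0)
        y = max(y + vol * dw2, 0.0)
        if x * y == 0.0:                        # one type has died out
            break
    return x, y
```

Averaging many independent runs of `simulate_mcb_diffusion` illustrates, for instance, that each coordinate behaves as a non-negative (local) martingale started from $\bar{\theta}$.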
In this paper we consider the Dawson–Perkins mutually catalytic model for particle systems and study its properties. Particle models with interactions have been considered earlier by many authors; a partial list of examples follows.
Birkner [Reference Birkner2] studied a system of particles performing random walks on
$\mathbb{Z}^{d}$
and undergoing branching; the rate of branching of any particle at a site depends on the number of other particles at the same site (this is the ‘catalytic’ effect). Birkner introduced a formal construction of such processes via solutions of certain stochastic equations, proved existence and uniqueness theorems for these equations, and studied the properties of the processes. Under suitable assumptions he proved the existence of an equilibrium distribution for shift-invariant initial conditions. He also studied survival and extinction of the process in the long run. Note that the construction of the process in [Reference Birkner2] was motivated by the construction of Liggett and Spitzer [Reference Liggett and Spitzer31].
Among many other works where branching particle systems with catalysts were studied, we can mention [Reference Kesten and Sidoravicius25] and [Reference Li and Ma30]. For example, Kesten and Sidoravicius [Reference Kesten and Sidoravicius25] investigated the survival/extinction of two particle populations A and B. Both populations perform independent random walks. The B particles perform a branching random walk, but with a birth rate of new particles proportional to the number of A particles located at the same sites as the corresponding B particles. It was shown that for a certain choice of parameters the system becomes (locally) extinct in all dimensions.
In [Reference Li and Ma30] catalytic discrete state branching processes with immigration were defined as strong solutions of stochastic integral equations. Li and Ma [Reference Li and Ma30] proved limit theorems for these processes.
In this paper we consider two interacting populations: to be more precise, we construct the so-called mutually catalytic branching model and study its long-time behavior and the finite system scheme.
2. Our Model and Main Results
2.1. Description of the model
Let us define the following interacting particle system. We consider two populations on a countable set of sites
$S\subset\mathbb{Z}^{d}$
, where particles in both populations move as independent Markov chains on S with rate of jumps
$\kappa>0$
, and symmetric transition jump probabilities $p_{x,y}=p_{y,x}$, $x,y\in S$.
They also undergo branching events. In order to define our model formally, we follow the ideas of [Reference Birkner2].
Let
$\left\{ \nu_{k}\right\} _{k\geq0}$
be the branching law. Suppose that Z is a random variable distributed according to
$\nu$
. We assume that the branching law is critical and has a finite variance:
\begin{equation}{\mathrm{E}}[Z]=\sum_{k\geq0}k\nu_{k}=1,\qquad\sigma^{2}\,:\!=\,\mathrm{Var}(Z)=\sum_{k\geq0}(k-1)^{2}\nu_{k}<\infty.\end{equation}
For example, the critical binary law $\nu_{0}=\nu_{2}=\frac12$ satisfies these assumptions with $\sigma^{2}=1$.
The pair of processes
$(\xi,\eta)$
describes the time evolution of the following ‘particle’ model. Between branching events, particles in the
$\xi$
and
$\eta$
populations move as independent Markov chains on S with rate of jumps
$\kappa$
and transition probabilities
$p_{xy}$
,
$x,y\in S$
. Fix some
$\gamma>0$
. The ‘infinitesimal’ rate of a branching event for a particle from population
$\xi$
at site x at time t is equal to
$\gamma\eta_{t}(x)$
; similarly the ‘infinitesimal’ rate of a branching event for a particle from population
$\eta$
at site x at time t is equal to
$\gamma\xi_{t}(x)$
. When a ‘branching event’ occurs, a particle dies and is replaced by a random number of offspring, distributed according to the law
$\left\{ \nu_{k}\right\} _{k\geq0}$
, independently of the history of the process. To define the process formally, as a solution to a system of equations, we need further notation. Note that the construction of the process follows the steps in [Reference Birkner2].
The Markov chain is defined in the following way. Let
$(W_{t},P)$
be a continuous-time S-valued Markov chain with rate of jumps
$\kappa>0$
, and symmetric transition jump probabilities
$p_{x,y}=p_{y,x},\,x,y\in S.$
Set
$p_{t}(x,y)=P(W_{t}=y|W_{0}=x)$
as the transition probabilities. Let
$Q=(q_{x,y})$
denote the associated Q-matrix; that is
$q_{x,y}=\kappa p_{x,y}$
is the jump rate from x to y (for
$x\neq y$
) and
$q_{x,x}=-\sum_{y\neq x}q_{x,y}=-\kappa>-\infty$
. Clearly, by our assumptions on transition jump probabilities, the Q-matrix is symmetric (
$q_{x,y}=q_{y,x}$
). Define the Green function for every
$x,y\in S$
:
\begin{equation}g_{t}(x,y)=\mathop\int\limits_{0}^{t}p_{s}(x,y)\,{\mathrm{d}} s.\end{equation}
Note that if our motion process is a symmetric random walk on S, then, with a certain abuse of notation,
$g_{t}(x,y)=g_{t}(y,x)=g_{t}(x-y)$
, in particular
$g_{t}(x,x)=g_{t}(0)$
.
Let
$P_{t}f(x)=\sum_{y}p_{t}(x,y)f(y)$
be the semigroup associated with the Markov chain W, and let
$Q\,f(x)=\sum_{y}q_{x,y}\,f(y)$
be its generator.
Remark 2.1. If W is a symmetric random walk, then clearly
$g_{\infty}(0)<\infty$
means that W is transient, and
$g_{\infty}(0)=\infty$
implies that W is recurrent.
Let
$\mathcal{F}=\left(\mathcal{F}_{t}\right)_{t\geq0}$
be a (right-continuous, complete) natural filtration. In what follows, when we call a process a martingale, we mean that it is an
$\mathcal{F}_{t}$
-martingale.
Let $N_{x,y}^{\textrm{RW}_{\xi}}, N_{x,y}^{\textrm{RW}_{\eta}}$, $x,y\in S$, $x\neq y$, and $N_{x,k}^{{\mathrm{br}}_{\xi}}, N_{x,k}^{{\mathrm{br}}_{\eta}}$, $x\in S$, $k\in\mathbb{Z}_{+}$,
denote independent Poisson point processes on
$\mathbb{R}_{+}\times\mathbb{R}_{+}$
. We assume that, for any
$x,y\in S$, $x\neq y$,
both Poisson point processes
$N_{x,y}^{\textrm{RW}_{\xi}}$
and
$N_{x,y}^{\textrm{RW}_{\eta}}$
have intensity measure
$\kappa p_{x,y}\,{\mathrm{d}} s\otimes {\mathrm{d}} u$
. Similarly, we assume that, for any
$x\in S$, $k\in\mathbb{Z}_{+}$,
both Poisson point processes
$N_{x,k}^{{\mathrm{br}}_{\xi}}$
and
$N_{x,k}^{{\mathrm{br}}_{\eta}}$
have intensity measure
$\nu_{k}\,{\mathrm{d}} s\otimes {\mathrm{d}} u$
. We assume that the above Poisson processes are
$\mathcal{F}$
-adapted in the ‘time’ s-coordinate.
Now we are going to define the pair of processes
$(\xi_{t},\eta_{t})_{t\geq0}$
where
$(\xi_{t},\eta_{t})\in\mathbb{N}_{0}^{S}\times\mathbb{N}_{0}^{S}$
, and
$\mathbb{N}_{0}$
denotes the set of non-negative integers.
For any
$x\in S$
,
$\xi_{t}(x)$
counts the number of particles from the first population at site x at time t. Similarly, for any
$x\in S$
,
$\eta_{t}(x)$
counts the number of particles from the second population at site x at time t.
Now we are ready to describe
$(\xi_{t},\eta_{t})_{t\geq0}$
formally as a solution of the following system of equations:
\begin{align}\xi_{t}(x) & = \xi_{0}(x)+\sum_{y\neq x}\left\{ \mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\xi_{s-}(y)\geq u\}}N_{y,x}^{\textrm{RW}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u)-\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\xi_{s-}(x)\geq u\}}N_{x,y}^{\textrm{RW}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u)\right\} \nonumber \\ & \quad +\sum_{k\geq0}\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\eta_{s-}(x)\xi_{s-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u),\quad t\geq0,\ x\in S,\\ \eta_{t}(x) & = \eta_{0}(x)+\sum_{y\neq x}\left\{ \mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\eta_{s-}(y)\geq u\}}N_{y,x}^{\textrm{RW}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u)-\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\eta_{s-}(x)\geq u\}}N_{x,y}^{\textrm{RW}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u)\right\} \nonumber \\ & \quad +\sum_{k\geq0}\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\xi_{s-}(x)\eta_{s-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u),\quad t\geq0,\ x\in S.\nonumber\end{align}
Why do these equations actually describe our processes? The first sum on the right-hand side of the equations for
$\xi$
and
$\eta$
describes the random walks of the particles, and the second sum describes their branching. The first integrals in the first sums describe particles jumping to site x from other sites
$y\neq x$
. The second integrals in the first sum describe particles that leave site x. The last integral describes the death of a particle at site x and the birth of its k offspring, so after such an event the number of particles at the site changes by
$k-1$
. The branching events at site x happen at an infinitesimal rate proportional to the product of the numbers of particles of the two populations at site x.
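For readers who prefer an algorithmic view, the next snippet is a Gillespie-type simulation sketch of exactly these jump dynamics, written by us for illustration only: it assumes a one-dimensional torus as the site space and the critical binary offspring law $\nu_{0}=\nu_{2}=\frac12$; all function names and parameter values are ours, not part of the model.

```python
import numpy as np

RNG = np.random.default_rng(1)

def pick_site(weights):
    """Sample a site index with probability proportional to `weights`."""
    return RNG.choice(len(weights), p=weights / weights.sum())

def gillespie_mcb(xi, eta, kappa=1.0, gamma=1.0, t_max=10.0):
    """Simulate the jump dynamics encoded in (2.4) on the torus {0,...,N-1}:
    every particle jumps at rate kappa to a uniform nearest neighbour, and
    every particle of one type branches at rate gamma times the number of
    particles of the other type at its site (offspring law nu_0=nu_2=1/2)."""
    xi, eta, t, N = xi.copy(), eta.copy(), 0.0, len(xi)
    while True:
        xy = float((xi * eta).sum())
        rates = np.array([kappa * xi.sum(),   # some xi-particle jumps
                          kappa * eta.sum(),  # some eta-particle jumps
                          gamma * xy,         # some xi-particle branches
                          gamma * xy])        # some eta-particle branches
        total = rates.sum()
        if total == 0.0:
            return t, xi, eta                 # no further events are possible
        t += RNG.exponential(1.0 / total)
        if t >= t_max:
            return t_max, xi, eta
        event = RNG.choice(4, p=rates / total)
        pop = xi if event in (0, 2) else eta
        if event < 2:                         # migration of a single particle
            x = pick_site(pop.astype(float))
            pop[x] -= 1
            pop[(x + RNG.choice((-1, 1))) % N] += 1
        else:                                 # branching event
            x = pick_site((xi * eta).astype(float))
            pop[x] += RNG.choice((-1, 1))     # adds k - 1 with k in {0, 2}
```

For example, `gillespie_mcb(np.full(20, 3), np.full(20, 3))` evolves both populations from three particles per site up to time 10.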
Definition 2.1. The process
$(\xi_{t},\eta_{t})$
solving (2.4) is called a mutually catalytic branching process with initial conditions
$(\xi_{0},\eta_{0})$
.
2.2. Main results
We start with stating the result on the existence and uniqueness of the solution for the system of equations (2.4). This implies that the process we described in the introduction does exist and is defined uniquely via the solution to (2.4). In the next theorem, we formulate the result for finite initial conditions, i.e. each population has a finite number of particles at initial time (
$t=0$
). First, we introduce another piece of notation. For
$m\in\mathbb{N}$
, define the
$L^{m}$
-norm of
$\varphi\in\mathbb{Z}^{S}$
:
\begin{equation}\left\Vert \varphi\right\Vert _{m}\,:\!=\,\left(\sum_{i\in S}|\varphi(i)|^{m}\right)^{1/m}.\end{equation}
Similarly, for any
$(\varphi,\psi)\in\mathbb{Z}^{S}\times\mathbb{Z}^{S}$
and
$(\varphi,\psi,\tilde\varphi,\tilde\psi)\in\left(\mathbb{Z}^{S}\right)^4$
, with some abuse of notation, we define
\begin{equation}\left\Vert \left(\varphi,\psi\right)\right\Vert _{m}\,:\!=\,\left(\sum_{i\in S}\left(|\varphi(i)|^{m}+|\psi(i)|^{m}\right)\right)^{1/m},\end{equation}
\begin{equation*}\Vert (\varphi,\psi,\tilde\varphi,\tilde\psi)\Vert _{m}\,:\!=\,\left(\sum_{i\in S}\left(|\varphi(i)|^{m}+|\psi(i)|^{m}+|\tilde\varphi(i)|^{m}+|\tilde\psi(i)|^{m}\right)\right)^{1/m}.\end{equation*}
In addition let us define the space of functions
$E_{\mathrm{fin}}$
:
\begin{equation*}E_{\mathrm{fin}}\,:\!=\,\left\{ f\,:\,S\rightarrow\mathbb{N}_{0}\,:\,\left\Vert f\right\Vert _{1}<\infty\right\}.\end{equation*}
We equip
$E_{\mathrm{fin}}$
with the metric:
$d_{E_{\mathrm{fin}}}(f,g)= \left\Vert f-g \right\Vert _{1}$
for any
$f,g\in E_{\mathrm{fin}}$
.
Theorem 2.1. Let
$S\subset\mathbb{Z}^{d}$
.
-
(a) For any initial conditions
${{(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}}}$
there is a unique strong solution
$(\xi_{t},\eta_{t})_{t\geq0}$
to (2.4), taking values in
$E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
. -
(b) The solution
$\left\{ (\xi_{t},\eta_{t}),\, t\geq0\right\} $
to (2.4) is a Markov process.
It is possible to generalize the result to the case of some infinite-mass initial conditions, but since this is not the goal of this paper, it will be done elsewhere.
Let
$(\xi,\eta)$
be the process constructed in Theorem 2.1 with finite initial conditions. Denote
That is,
$\boldsymbol{\xi}$
is the total mass process of
$\xi$
, and
$\boldsymbol{\eta}$
is the total mass process of
$\eta$
. Clearly, by construction,
$\boldsymbol{\xi}$
and
$\boldsymbol{\eta}$
are non-negative local martingales and, hence, by the martingale convergence theorem the limits
\begin{equation*}\boldsymbol{\xi}_{\infty}\,:\!=\,\lim_{t\rightarrow\infty}\boldsymbol{\xi}_{t}\quad\mathrm{and}\quad\boldsymbol{\eta}_{\infty}\,:\!=\,\lim_{t\rightarrow\infty}\boldsymbol{\eta}_{t}\end{equation*}
exist almost surely (a.s.).
Now we are ready to give a definition of coexistence or noncoexistence.
Definition 2.2. Let
$(\xi,\eta)$
be a unique strong solution to (2.4) with
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
. We say that coexistence is possible for
$(\xi,\eta)$
if
${{{\mathrm{P}}(\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}>0)>0}}$
. We say that coexistence is impossible for
$(\xi,\eta)$
if
${\mathrm{P}}(\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}>0)=0$
.
Convention We say that the motion process for the mutually catalytic branching process on
$S=\mathbb{Z}^{d}$
, is the nearest-neighbor random walk if
\begin{equation*}p_{x,y}=p_{0,y-x}=\begin{cases}\dfrac{1}{2d}, & \mathrm{if}\ y-x=\pm e_{i},\\[5pt]0, & \mathrm{otherwise,}\end{cases}\end{equation*}
for
$e_{i}$
a unit vector in an axis direction,
$i=1,\ldots,d$
.
We prove that, in the case of finite initial conditions with the motion process being the nearest-neighbor random walk, coexistence is possible if and only if the random walk is transient. Recall that the nearest-neighbor random walk is recurrent in dimensions
$d=1,2$
, and it is transient in dimensions
$d\geq3$
. Then we have the following theorem.
Theorem 2.2. Let
$S=\mathbb{Z}^{d}$
and assume that the motion process is the nearest-neighbor random walk. Let
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
with
$\boldsymbol{\xi}_{0}\boldsymbol{\eta}_{0}>0$
.
-
(a) If
$d\geq3$
, then coexistence of types is possible. -
(b) If
$d\leq2$
, then coexistence of types is impossible.
The proof is simple and based on the following observation: if there is a finite number of particles and the motion is recurrent, the particles will meet an infinite number of times, and eventually one of the populations dies out, due to the criticality of the branching mechanism. On the other hand, if the motion is transient, there exists a finite time such that after this time the particles of different populations never meet, and hence there is a positive probability of survival of both populations.
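Assuming the illustrative simulator `gillespie_mcb` sketched in Section 2.1 (our construction, not part of the paper's proofs), this observation can be probed numerically; the following snippet is a heuristic only, since a finite one-dimensional torus and a finite time horizon merely mimic the recurrent regime.

```python
import numpy as np

def coexistence_fraction(trials=200, N=60, t_max=300.0):
    """Fraction of runs with both populations alive at time t_max.

    Heuristic illustration of Theorem 2.2(b): with d = 1 the motion is
    recurrent-like, so this fraction should shrink as t_max grows."""
    alive = 0
    for _ in range(trials):
        xi0 = np.zeros(N, dtype=int)
        eta0 = np.zeros(N, dtype=int)
        xi0[N // 2] = eta0[N // 2] = 5       # finite initial conditions
        _, xi, eta = gillespie_mcb(xi0, eta0, t_max=t_max)
        alive += int(xi.sum() > 0 and eta.sum() > 0)
    return alive / trials
```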
Finally, we are interested in the finite system scheme. We construct a system of renormalized processes indexed by an exhausting sequence of finite subsets of
$\mathbb{Z}^{d}$
,
$\Lambda_{n}\subset\mathbb{Z}^{d}$
. Duality techniques will be used to investigate its limiting behavior.
Define
\begin{align*}\Lambda_{n} & = \big\{ x\in\mathbb{Z}^{d}\,|\,\forall i=1,\ldots ,d,\left|x_{i}\right|\leq n\big\} \subseteq\mathbb{Z}^{d},\nonumber\\[4pt]\left|\Lambda_{n}\right| & = (2n+1)^{d}.\end{align*}
Convention Let
$S=\Lambda_{n}$
. We say that the motion process is the nearest-neighbor random walk on
$\Lambda_{n}$
if its transition jump probabilities are given by
\begin{equation*}p_{x,y}^{n}=p_{0,y-x}^{n}=\begin{cases}\frac{1}{2d}, & \mathrm{if}\,\left|x-y\right|=1,\\[5pt]0, & \mathrm{otherwise,}\end{cases}\end{equation*}
where ‘
$y-x$
’ is the difference on the torus
$\Lambda_{n}$
.
Fix
$\bar{\theta}=\left(\theta_{1},\theta_{2}\right)$
with
$\theta_{1},\theta_{2} \in\mathbb{N}_{0}$
. Set
$\bar{\boldsymbol{\theta}}=\left(\boldsymbol{\theta}_{1},\boldsymbol{\theta}_{2}\right)$
, where
${{\boldsymbol{\theta}_{i}=(\theta_{i},\theta_{i},\ldots )\in\mathbb{N}_{0}^{\Lambda_{n}}}}$
,
$i=1,2$
. Let
$(\xi_{t},\eta_{t})_{t\geq0}$
be the mutually catalytic branching process with initial conditions
$(\xi_{0},\eta_{0})=\bar{\boldsymbol{\theta}}$
, and site space
$S=\Lambda_{n}$
, and motion process being the nearest-neighbor walk on
$\Lambda_{n}$
.
Set
\begin{equation*}\boldsymbol{\xi}_{t}^{n}\,:\!=\,\sum_{x\in\Lambda_{n}}\xi_{t}(x)\quad\mathrm{and}\quad\boldsymbol{\eta}_{t}^{n}\,:\!=\,\sum_{x\in\Lambda_{n}}\eta_{t}(x),\quad t\geq0.\end{equation*}
We define the following time change:
\begin{equation*}\beta_{n}(t)\,:\!=\,\left|\Lambda_{n}\right|t,\quad t\geq0.\end{equation*}
Our goal is to identify the limiting distribution of
\begin{equation*}\frac{1}{\left|\Lambda_{n}\right|}\big(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{\eta}_{\beta_{n}(t)}^{n}\big)\end{equation*}
as
$n\rightarrow\infty$
, for all
$t\geq0$
.
Theorem 2.3. Let
$d\geq3$
, and assume that
\begin{equation*}\gamma\sigma^{2}<\frac{1}{\sqrt{3^{5}}\left(\frac{1}{2}g_{\infty}(0)+1\right)}\end{equation*}
and
$\sum_{k}k^{3}\nu_{k}<\infty$
. Then for any
$T\in (0,1]$
, we have
\begin{equation*}\mathcal{L}\left(\frac{1}{\left|\Lambda_{n}\right|}\big(\boldsymbol{\xi}_{\beta_{n}(T)}^{n},\boldsymbol{\eta}_{\beta_{n}(T)}^{n}\big)\right)\longrightarrow\mathcal{L}\left(\left(X_{T},Y_{T}\right)\right)\quad\mathrm{as}\ n\rightarrow\infty,\end{equation*}
where
$\left(X_{t},Y_{t}\right)_{t\geq0}$
is a solution of the following system of SDEs:
\begin{equation}\begin{cases}{\mathrm{d}} X_{t}=\sqrt{\gamma\sigma^{2}X_{t}Y_{t}}\,{\mathrm{d}} w^{1}(t), & t\geq0,\\[6pt]{\mathrm{d}} Y_{t}=\sqrt{\gamma\sigma^{2}X_{t}Y_{t}}\,{\mathrm{d}} w^{2}(t), & t\geq0,\end{cases}\end{equation}
with initial conditions
$(X_{0},Y_{0})=\bar{\theta}$
, where
$w^{1},\, w^{2}$
are two independent standard Brownian motions.
Remark 2.2. The above theorem gives convergence of one-dimensional distributions of the rescaled processes
$(\boldsymbol{\xi}^{n}, \boldsymbol{\eta}^{n})$
to the one-dimensional distributions of the solution of (2.9) starting at initial conditions
$\bar{\theta}\in \mathbb{N}_{0}^2$
. It seems possible to treat a more general class of initial conditions, for example, independent and identically distributed (i.i.d.) configurations on
$\Lambda_{n}$
with mean vector
$\bar{\theta}=(\theta_1, \theta_2)\in \mathbb{R}_+^2$
. However, this will make the argument more technically involved, thus we decided to treat this case elsewhere.
Remark 2.3. The condition
$\gamma\sigma^{2}<\frac{1}{\sqrt{3^{5}}(\frac12 g_{\infty}(0)+1)}$
arises from the method of proof, which requires boundedness of the fourth moment of the dual processes (see Lemma 6.8, where this condition is applied). We conjecture, however, that the result holds without the additional constraint on
$\gamma\sigma^2$
, as is the case in the finite scheme for Dawson–Perkins processes.
The above result is similar to, although somewhat weaker than, Theorem 1 in [Reference Cox, Dawson and Greven6], where a finite system scheme for the system of continuous SDEs is studied. The proof of Theorem 2.3 is based on a duality principle for our particle system and on the result for SDEs in [Reference Cox, Dawson and Greven6]. In fact, let us mention that the self-duality property, which is well known for processes solving equations of type (1.3), does not hold for our mutually catalytic branching particle model. Thus, we use the so-called approximating duality technique to prove Theorem 2.3. The approximating duality technique was used in the past to resolve a number of weak uniqueness problems (see, e.g., [Reference Mytnik32, Reference Mytnik34]). We believe that using approximate duality to prove limit theorems is novel and that this technique is of independent interest.
Let us note that it would be very interesting to extend the above results. First, it would be nice to address the question of coexistence/noncoexistence for more general motions and infinite-mass initial conditions. As for extending the results of Theorem 2.3, we would be interested to check what happens in the case of large
$\gamma$
, to prove a ‘functional convergence’ result as in [Reference Cox, Dawson and Greven6], and to investigate the system’s behavior in the case of recurrent motion, that is, in dimensions
$d=1,2$
. We plan to address these problems in the future.
3. Existence and Uniqueness: Proof of Theorem 2.1
This section is devoted to the proof of Theorem 2.1. Note that our proofs follow closely the argument of Birkner [Reference Birkner2] with suitable adaptations to the two-type case.
For any metric space D with metric d, let
$\mathrm{Lip}(D)$
denote the set of Lipschitz functions on D. We say that
$f\,:\,D\rightarrow\mathbb{R}$
is in
$\mathrm{Lip}(D)$
if and only if there exists a positive constant
$C\in\mathbb{R}_{+}$
such that for any
$\varphi,\psi\in D$
,
$\left|f(\varphi)-f(\psi)\right|\leq Cd(\varphi,\psi)$
. Let
$L^{(2)}$
denote the following operator: for a measurable function
$f: E_{\mathrm{fin}}\times E_{\mathrm{fin}} \to \mathbb{R}$
, let
\begin{align*}L^{(2)}f(\varphi,\psi) & = \kappa\sum_{x,y\in S}\varphi(x)p_{xy}(\,f\big(\varphi^{x\rightarrow y},\psi\big)-f(\varphi,\psi))\\ & \quad +\kappa\sum_{x,y\in S}\psi(x)p_{xy}(\,f\big(\varphi,\psi^{x\rightarrow y}\big)-f(\varphi,\psi))\\ & \quad +\sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq0}\nu_{k}\left(\,f(\varphi+(k-1)\delta_{x},\psi)-f(\varphi,\psi)\right)\\ & \quad +\sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq0}\nu_{k}\left(\,f(\varphi,\psi+(k-1)\delta_{x})-f(\varphi,\psi)\right),\end{align*}
where
$\varphi^{x\rightarrow y}=\varphi+\delta_{y}-\delta_{x}$
, i.e.
$\varphi^{x\rightarrow y}(x)=\varphi(x)-1$
,
$\varphi^{x\rightarrow y}(y)=\varphi(y)+1$
and
$\varphi^{x\rightarrow y}(z)=\varphi(z)$
for all
$z\in S$
and
$z\neq x,y$
.
Theorem 2.1 follows immediately from the next lemma.
Lemma 3.1. We have the following.
-
(a) For any initial conditions
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
there is a unique strong solution
$(\xi_{t},\eta_{t})_{t\geq0}$
to (2.4), taking values in
$E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
. -
(b) The solution
$\left\{ (\xi_{t},\eta_{t}),\, t\geq0\right\} $
to (2.4) is a Markov process. -
(c) Let
$m\in\mathbb{N}$
. If
$\sum_{k}k^{m}\nu_{k}<\infty$
, there exists a constant
$C_{m}$
such that (3.1)
\begin{equation}{\mathrm{E}}\left[\left\Vert \left(\xi_{t},\eta_{t}\right)\right\Vert _{m}^{m}\right]\leq\exp(C_{m}t)\left\Vert \left(\xi_{0},\eta_{0}\right)\right\Vert _{m}^{m}.\end{equation}
-
(d) For $f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$ and $(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$, (3.2)
\begin{equation}M^{f}(t)\,:\!=\,f(\xi_{t},\eta_{t})-f(\xi_{0},\eta_{0})-\mathop\int\limits_{0}^{t}L^{(2)}f(\xi_{s},\eta_{s})\,{\mathrm{d}} s\end{equation}
is a martingale. Moreover, if
$f\in\mathrm{Lip}(\mathbb{R}_{+}\times E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
and there is a constant
$C^{*}$
such that (3.3)
\begin{equation}\left|\frac{\partial}{\partial s}f(s,\varphi,\psi)\right|\leq C^{*}\left\Vert (\varphi,\psi)\right\Vert _{2}^{2}\end{equation}
for any
$\varphi,\psi\in E_{\mathrm{fin}}$
, then (3.4)
\begin{equation}N^{\,f}(t)\,:\!=\,f(t,\xi_{t},\eta_{t})-f(0,\xi_{0},\eta_{0})-\mathop\int\limits_{0}^{t}\left[L^{(2)}f(s,\xi_{s},\eta_{s})+\frac{\partial}{\partial s}\,f(s,\xi_{s},\eta_{s})\right]\,{\mathrm{d}} s\end{equation}
is also a martingale.
Proof. Note that in our proof we follow ideas of the proof of Lemma 1 in Birkner [Reference Birkner2].
(a) We have the collection of independent Poisson point processes
$\{N_{x,y}^{\textrm{RW}_{\xi}}\}_{x,y\in S},$
$\{N_{x,y}^{\textrm{RW}_{\eta}}\}_{x,y\in S},\quad \{N_{x,k}^{{\mathrm{br}}_{\xi}}\}_{x\in S,k\in\mathbb{Z}_{+}},\quad \{N_{x,k}^{{\mathrm{br}}_{\eta}}\}_{x\in S,k\in\mathbb{Z}_{+}}$
. Therefore, with probability 1, no two jumps occur at the same time. Then we can define a stopping time
$T_{1}$
: the first time a jump happens. The stopping time
$T_1$
is strictly positive a.s., since
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
, and therefore the indicators
$1_{\{\xi_{s-}(y)\geq u\}}, 1_{\{\eta_{s-}(y)\geq u\}},1_{\{\gamma\xi_{s-}(x)\eta_{s-}(x)\geq u\}}$
make the rate of the first jump finite. Then
$(\xi_{t},\eta_{t})=(\xi_{0},\eta_{0})$
for
$t\in[0,T_{1})$
. In the same way, we define a sequence of stopping times:
$0=T_{0} < T_{1} < T_{2} < \cdots < \infty$
, and again the rate of jumps is finite since for any
$i\geq 1$
,
$(\xi_{T_{i}},\eta_{T_{i}})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
a.s., and this makes the rate of the
$(i+1)$
th jump a.s. finite by the same reasoning as for the first jump. Clearly, by construction, the process
$(\xi_{t},\eta_{t})$
is constant on the intervals
$[T_{i},T_{i+1})$
. In order to show that this construction defines the process properly (that is, it does not explode in finite time), it is enough to show that
$\lim_{n\rightarrow\infty}T_{n}=\infty$
, a.s. To this end define
$M_{n}=\sum_{x\in S}\left(\xi_{T_{n}}(x)+\eta_{T_{n}}(x)\right)$
. Here
$M_{n}$
denotes the total number of particles in both populations at time
$T_{n}$
. Since the branching mechanism is critical (see (2.2)), it is easy to see that
$\left\{ M_{n}\right\} _{n\geq0}$
is a non-negative martingale.
Indeed, suppose that
$T_{n}$
is a stopping time originating from a ‘random walk’ jump, that is, from a jump of one of the processes
\begin{equation*}R_{t}^{\xi,x}= \sum_{y\neq x}\left\{ \mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\xi_{s-}(y)\geq u\}}N_{y,x}^{\textrm{RW}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u)-\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\xi_{s-}(x)\geq u\}}N_{x,y}^{\textrm{RW}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u)\right\},\!\!\quad t\geq 0, \ x\in S,\end{equation*}
or
\begin{equation*}R_{t}^{\eta,x}=\sum_{y\neq x}\left\{ \mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\eta_{s-}(y)\geq u\}}N_{y,x}^{\textrm{RW}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u)-\mathop\int\limits_{0}^{t}\int_{\mathbb{R}_{+}}1_{\{\eta_{s-}(x)\geq u\}}N_{x,y}^{\textrm{RW}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u)\right\},\!\!\quad t\geq 0, \ x\in S.\end{equation*}
In that case the total number of particles does not change (this can also be readily seen from our (2.4) since
$\sum_{x\in S} R_{t}^{\xi,x}=\sum_{x\in S} R_{t}^{\eta,x}=0$
) and thus we have
$M_{n}=M_{n-1}$
. Alternatively
$T_{n}$
can originate from a ‘branching’ event, that is, from a jump of one of the processes
$B_{t}^{\xi}= \sum_{x\in S}\sum_{k\geq0}\int_{0}^{t}\int_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\eta_{s-}(x)\xi_{s-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\xi}}({\mathrm{d}} s\,{\mathrm{d}} u)$
or
$B_{t}^{\eta}= \sum_{x\in S}\sum_{k\geq0}\int_{0}^{t}\int_{\mathbb{R}_{+}}(k-1)1_{\{\gamma\eta_{s-}(x)\xi_{s-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\eta}}({\mathrm{d}} s\,{\mathrm{d}} u)$
. In this case one can easily get that
\begin{equation*}M_{n}=M_{n-1}+(Z-1),\end{equation*}
where Z is distributed according to the branching law
$\nu$
.
Therefore, by the well-known martingale convergence theorem
$\sup_{n\geq1}M_{n}<\infty$
a.s. [Reference Ikeda and Watanabe24, Theorem 1.6.4]. This implies that
$\sup_{n}T_{n}=\infty$
a.s.; indeed, on the event $\{\sup_{n\geq1}M_{n}\leq K\}$ the total jump rate of the system is bounded by $\kappa K+\gamma K^{2}$, so the jump times cannot accumulate in finite time.
Now let us turn to the proof of uniqueness. Let
$(\tilde{\xi},\tilde{\eta})_{t}$
be another solution to (2.4) starting from the same initial conditions
$\tilde{\xi}_{0}=\xi^{0},\ \tilde{\eta}_{0}=\eta^{0}$
. We see from (2.4) that
$\tilde{\xi}_{t}(x)=\xi_{t}(x),\ \tilde{\eta}_{t}(x)=\eta_{t}(x)$
for all
$x\in S$
and
$t\in[0,T_{1})$
, and also that
$\tilde{\xi}_{T_{1}}=\xi_{T_{1}},\ \tilde{\eta}_{T_{1}}=\eta_{T_{1}}$
. Then, by induction,
$(\xi,\eta)$
and
$(\tilde{\xi},\tilde{\eta})$
agree on
$[T_{n},T_{n+1})$
for all
$n\in\mathbb{N}$
.
(b) Poisson processes have independent and stationary increments. Therefore, by the construction described in (a), we can immediately see that the distribution of
$(\xi_{t+h},\eta_{t+h})$
, given
$\mathcal{F}_{t}$
, depends only on
$(\xi_{t},\eta_{t})$
, and hence the process
$(\xi_{t},\eta_{t})_{t\geq0}$
is Markov.
(c) Now we show that
$(\xi_{t},\eta_{t})_{t\geq0}$
satisfies (3.1). Define a sequence of stopping times (3.5)
\begin{equation}T_{n}\,:\!=\,\inf\left\{ t\geq0\,:\,\left\Vert (\xi_{t},\eta_{t})\right\Vert _{1}\geq n\right\},\quad n\in\mathbb{N}.\end{equation}
Choose an arbitrary
$m\in\mathbb{N}$
such that $\sum_{k\geq0}k^{m}\nu_{k}<\infty$.
For
$\varphi, \psi\in E_{\mathrm{fin}}$
, define
\begin{equation*}h_{m}(\varphi,\psi)\,:\!=\,\left\Vert (\varphi,\psi)\right\Vert _{m}^{m}=\sum_{x\in S}\left(\varphi(x)^{m}+\psi(x)^{m}\right).\end{equation*}
Then, by an appropriate version of the Itô formula, we have that
$\{ M_{t\wedge T_{n}}^{h_{m}}\}_{t\geq 0} $
is a martingale. Let
$\varphi,\psi\in E_{\mathrm{fin}}$
. Apply the
$L^{(2)}$
operator on
$h_{m}(\varphi,\psi)$
to get
\begin{align*}L^{(2)}h_{m}(\varphi,\psi) & = \kappa\sum_{x,y}\varphi(x)p_{xy}\left\{ \left((\varphi(y)+1)^{m}-\varphi(y)^{m}\right)\right.\\ & \quad \left.+\left((\varphi(x)-1)^{m}-\varphi(x)^{m}\right)\right\} \\ & \quad +\kappa\sum_{x,y}\psi(x)p_{xy}\left\{\left((\psi(y)+1)^{m}-\psi(y)^{m}\right)\right.\\ & \quad \left.+\left((\psi(x)-1)^{m}-\psi(x)^{m}\right)\right\} \\ & \quad +\sum_{x}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\left\{ (\varphi(x)+k-1)^m-\varphi(x)^{m}\right\}\\ & \quad +\sum_{x}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\left\{ (\psi(x)+k-1)^m-\psi(x)^{m}\right\} \\ & = \kappa\sum_{x}\varphi(x)\sum_{j=1}^{m}\binom{m}{j}({-}1)^{j}\varphi(x)^{m-j}\\ & \quad +\kappa\sum_{x,y}\varphi(x)p_{xy}\sum_{j=1}^{m}\binom{m}{j}\varphi(y)^{m-j}\\ & \quad +\kappa\sum_{x}\psi(x)\sum_{j=1}^{m}\binom{m}{j}({-}1)^{j}\psi(x)^{m-j} \\ & \quad +\kappa\sum_{x,y}\psi(x)p_{xy}\sum_{j=1}^{m}\binom{m}{j}\psi(y)^{m-j}\\ & \quad +\sum_{x}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\sum_{j=2}^{m}\binom{m}{j}\varphi(x)^{m-j}(k-1)^{\,j}\\ & \quad +\sum_{x}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\sum_{j=2}^{m}\binom{m}{j}\psi(x)^{m-j}(k-1)^{\,j}.\end{align*}
where in the last equality we used the binomial expansion, the fact that
$\sum_{y}p_{xy}=1$
and our assumption
$\sum_{k\geq0}(k-1)\nu_{k}=0.$
Now we can estimate
\begin{align}\big|L^{(2)}h_{m}(\varphi,\psi)\big| & \leq \kappa\Bigg(\sum_{j=1}^{m}\binom{m}{j}\Bigg)\big(h_{m}(\varphi,\psi)+\sum_{x,y}p_{xy}\big[\varphi(x)^m + \varphi(y)^{m}+\psi(x)^m+ \psi(y)^{m}\big]\big)\nonumber\\ & \quad +\Bigg(3\gamma\sum_{j=2}^{m}\binom{m}{j}\sum_{k}\nu_{k}(k-1)^{j}\Bigg)h_{m}(\varphi,\psi)\end{align}
where we used the following simple inequalities: for
$m\geq j\geq1$
,
(recall that
$\varphi(x)$
is a non-negative integer), and
Since, by our assumptions,
$p_{xy}=p_{yx}$
for all
$x,y\in S$
, we have
Denote
\begin{equation*}c_{m}\,:\!=\,\sum_{j=1}^{m}\binom{m}{j}=2^{m}-1,\quad c'_{m}\,:\!=\,3\gamma\sum_{j=2}^{m}\binom{m}{j}\sum_{k\geq 0}\nu_{k}(k-1)^{j}<\infty.\end{equation*}
Then by (3.6) and (3.7) we get
\begin{equation*}\big|L^{(2)}h_{m}(\varphi,\psi)\big|\leq C_{m}h_{m}(\varphi,\psi)\end{equation*}
for some constant $C_{m}$ depending only on $\kappa$, $c_{m}$, and $c'_{m}$.
Now recall that for any n,
$\left\{M^{h_{m}}(t\wedge T_{n})\right\}_{t\geq0}$
is a martingale. Therefore,
\begin{align*}{\mathrm{E}}\big[h_{m}\big(\xi_{t\wedge T_{n}},\eta_{t\wedge T_{n}}\big)\big] & = h_{m}(\xi^{0},\eta^{0})+{\mathrm{E}}\left[\mathop\int\limits_{0}^{t\wedge T_{n}}L^{(2)}h_{m}(\xi_{s},\eta_{s})\,{\mathrm{d}} s\right]\\ & \leq h_{m}(\xi^{0},\eta^{0})+C_{m}\mathop\int\limits_{0}^{t}{\mathrm{E}}\left[\mathbf{1}_{\left\{ s\leq T_{n}\right\} }h_{m}(\xi_{s},\eta_{s})\right]\,{\mathrm{d}} s\\ & \leq h_{m}(\xi^{0},\eta^{0})+C_{m}\mathop\int\limits_{0}^{t}{\mathrm{E}}\left[h_{m}(\xi_{s\wedge T_{n}},\eta_{s\wedge T_{n}})\right]\,{\mathrm{d}} s.\end{align*}
Thus, from Gronwall’s lemma we get that
\begin{equation*}{\mathrm{E}}\left[h_{m}\big(\xi_{t\wedge T_{n}},\eta_{t\wedge T_{n}}\big)\right]\leq h_{m}(\xi^{0},\eta^{0})\exp(C_{m}t)\end{equation*}
uniformly in n. It is easy to see from (a) that
$T_n\to \infty$
as
$n\to\infty$
, a.s. Thus, inequality (3.1) follows from Fatou’s lemma by letting
$n\rightarrow\infty$
.
(d) Let
$f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
. We wish to show that
$M^{\,f}$
is indeed a martingale. In order to do that, first we show that for any such f there is a constant
$C=C(\kappa,p,\sigma,\nu,f)$
such that (3.9)
\begin{equation}\big|L^{(2)}f(\varphi,\psi)\big|\leq C\left\Vert (\varphi,\psi)\right\Vert _{2}^{2}\quad\mathrm{for\ all}\ (\varphi,\psi)\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}.\end{equation}
We decompose
$L^{(2)}f(\varphi,\psi)$
into two parts corresponding to the motion and branching mechanisms:
\begin{equation*}L^{(2)}f(\varphi,\psi)=L_{\,\mathrm{RW}}\,f(\varphi,\psi)+L_{\,\mathrm{br}}\,f(\varphi,\psi),\end{equation*}
where
\begin{align}L_{\,\mathrm{RW}}\,f(\varphi,\psi) & = \kappa\sum_{x,y\in S}\varphi(x)p_{xy}\left(\,f\big(\varphi^{x\rightarrow y},\psi\big)-f(\varphi,\psi)\right)\nonumber\\[4pt] & \quad +\kappa\sum_{x,y\in S}\psi(x)p_{xy}\left(\,f\big(\varphi,\psi^{x\rightarrow y}\big)-f(\varphi,\psi)\right), \end{align}
\begin{align} L_{\,\mathrm{br}}\,f(\varphi,\psi) & = \sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq0}\nu_{k}\left(\,f(\varphi+(k-1)\delta_{x},\psi)-f(\varphi,\psi)\right)\nonumber\\[4pt] & \quad +\sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq0}\nu_{k}\left(\,f(\varphi,\psi+(k-1)\delta_{x})-f(\varphi,\psi)\right).\end{align}
Using the Lipschitz property of f we obtain
\begin{align*}\left|L_{\,\mathrm{RW}}\,f(\varphi,\psi)\right| & = \Bigg|\sum_{x,y\in S}\varphi(x)p_{xy}\left(\,f\big(\varphi^{(x,y)},\psi\big)-f(\varphi,\psi)\right)\nonumber\\ & \quad +\sum_{x,y\in S}\psi(x)p_{xy}\big(\,f\big(\varphi,\psi^{(x,y)}\big)-f(\varphi,\psi)\big)\Bigg|\\ & \quad\leq C_{f}\sum_{x,y\in S}\varphi(x)p_{xy}\big\Vert \big(\varphi^{(x,y)},\psi\big)-(\varphi,\psi)\big\Vert _{1}\\ & \quad +C_{f}\sum_{x,y\in S}\psi(x)p_{xy}\big\Vert \big(\varphi,\psi^{(x,y)}\big)-(\varphi,\psi)\big\Vert _{1}\\ & \quad\leq 2C_{f}\sum_{x,y\in S}\varphi(x)p_{xy}+2C_{f}\sum_{x,y\in S}\psi(x)p_{xy}\\ & = 2C_{f}\left\Vert (\varphi,\psi)\right\Vert _{1}\leq 2C_{f}\left\Vert (\varphi,\psi)\right\Vert _{2}^{2},\end{align*}
where in the last inequality we used
$\left\Vert (\varphi,\psi)\right\Vert _{1}\leq\left\Vert (\varphi,\psi)\right\Vert _{2}^{2}$
, which holds since the functions
$\left\{ \varphi(x)\right\} _{x\in S}$
and
$\left\{ \psi(x)\right\} _{x\in S}$
are integer valued. Turning to
$L_{\,\mathrm{br}}\,f(\varphi,\psi)$
we get
\begin{align*}\left|L_{\,\mathrm{br}}\,f(\varphi,\psi)\right| & = \bigg|\sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\left(\left[f(\varphi+(k-1)\delta_{x},\psi)-f(\varphi,\psi)\right]\right.\\ & \quad \left.+\left[f(\varphi,\psi+(k-1)\delta_{x})-f(\varphi,\psi)\right]\right)\bigg|\\ & \leq C_{f}\sum_{x\in S}\gamma\varphi(x)\psi(x)\sum_{k\geq 0}\nu_{k}\left(2\left\Vert (k-1)\delta_{x}\right\Vert _{1}\right)\\ & = \bigg(2C_{f}\sum_{k\geq 0}\nu_{k}|k-1|\bigg)\sum_{x\in S}\gamma\varphi(x)\psi(x)\\ & \leq 2\gamma C_{f}\bigg(\sum_{k\geq0}\nu_{k}|k-1|\bigg)\left\Vert (\varphi,\psi)\right\Vert _{2}^{2},\end{align*}
where in the last inequality we used the fact that
$\varphi(x)\psi(x)\leq\varphi(x)^{2}+\psi(x)^{2}$
. Thus, (3.9) holds with $C=2C_{f}\big(1+\gamma\sum_{k\geq0}\nu_{k}|k-1|\big)$.
Consider a bounded
$f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
, then
$M^{f}$
is a local martingale, so we have for all
$t,h\geq0$,
\begin{equation*}{\mathrm{E}}\left[\left.M^{f}((t+h)\wedge T_{n})\right|\mathcal{F}_{t}\right]=M^{f}(t\wedge T_{n}),\end{equation*}
where
$T_n$
is defined in (3.5). The right-hand side converges to
$M^{f}(t)$
a.s., as
$n\rightarrow\infty$
. Then
\begin{equation*}{\mathrm{E}}\left[\left.\mathop\int\limits_{0}^{(t+h)\wedge T_{n}}L^{(2)}f(\xi_{s},\eta_{s})\,{\mathrm{d}} s\right|\mathcal{F}_{t}\right]\rightarrow{\mathrm{E}}\left[\left.\mathop\int\limits_{0}^{t+h}L^{(2)}f(\xi_{s},\eta_{s})\,{\mathrm{d}} s\right|\mathcal{F}_{t}\right],\ \mathrm{a.s},\end{equation*}
as
$n\rightarrow\infty$
. Here, we again used the dominated convergence theorem, since by (3.9)
\begin{equation*}\mathop\int\limits_{0}^{(t+h)\wedge T_{n}}L^{(2)}f(\xi_{s},\eta_{s})\,{\mathrm{d}} s\leq C\mathop\int\limits_{0}^{t+h}\left\Vert (\xi_{s},\eta_{s})\right\Vert _{2}^{2}\,{\mathrm{d}} s\end{equation*}
and the expectation on the right-hand side of the above inequality is bounded due to (3.1) and finite initial conditions, for which clearly
$\left\Vert (\xi_0,\eta_0)\right\Vert _{2}^{2}<\infty$
. Thus,
$M^{f}$
is indeed a martingale in the case of bounded
$f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
.
Next, consider
$f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
which is non-negative, but not necessarily bounded. Define
$f_{n}(\varphi,\psi)\,:\!=\,f(\varphi,\psi)\wedge n$
. Note that
$f_{n}$
is bounded and
$f_{n}\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
with Lipschitz constant
$C_{f_{n}}\leq C_{f}$
. As
$n\rightarrow\infty$
, we have
\begin{equation*}f_{n}(\xi_{t},\eta_{t})\uparrow f(\xi_{t},\eta_{t})\quad\mathrm{and}\quad{\mathrm{E}}\left[\,f_{n}(\xi_{t},\eta_{t})\right]\rightarrow{\mathrm{E}}\left[\,f(\xi_{t},\eta_{t})\right]\end{equation*}
by monotone convergence. Observe that
$\left|L^{(2)}f_{n}(\varphi,\psi)\right|\leq C\left\Vert (\varphi,\psi)\right\Vert _{2}^{2}$
uniformly in n. We thus obtain
\begin{equation*}\mathop\int\limits_{0}^{t}L^{(2)}f_{n}(\xi_{s},\eta_{s})\,{\mathrm{d}} s\longrightarrow\mathop\int\limits_{0}^{t}L^{(2)}f(\xi_{s},\eta_{s})\,{\mathrm{d}} s\end{equation*}
a.s. as
$n\rightarrow\infty$
by the dominated convergence theorem. Therefore,
$M^{f}$
is a martingale for non-negative Lipschitz f. For the general case, we use the decomposition of
$f\in\mathrm{Lip}(E_{\mathrm{fin}}\times E_{\mathrm{fin}})$
as
$f=f^{+}-f^{-}$
, where
$f^{+}\,:\!=\,\max(f,0)$
and
$f^{-}\,:\!=\,\max({-}f,0)$
.
The same proof holds for
$N^{f}$
too, since
$\frac{\partial}{\partial s}f$
is bounded by (3.3).
4. Proof of Theorem 2.2
The aim of this section is to prove Theorem 2.2.
Let
$(\xi_{t},\eta_{t})$
be the mutually catalytic branching process described in Theorem 2.2, starting at
$(\xi_{0},\eta_{0})$
with
$\boldsymbol{\xi}_{0}+ \boldsymbol{\eta}_{0}<\infty$
. Recall that
$\boldsymbol{\xi}_t,\, \boldsymbol{\eta}_t,\ t\geq 0,$
denote the total size of each population at time t:
\begin{equation*}\boldsymbol{\xi}_{t}=\sum_{x\in S}\xi_{t}(x),\qquad\boldsymbol{\eta}_{t}=\sum_{x\in S}\eta_{t}(x).\end{equation*}
4.1. Proof of Theorem 2.2 (a): transient case
The proof is simple and we decided to avoid technical details. The observation is as follows: since the motion of particles is transient and the number of particles in the original populations is finite, there exists a.s. a finite time
$\hat T$
such that, if one suppresses the branching, the initial particles of different populations never meet after time
$\hat T$
. On the other hand, due to the finiteness of the number of particles, the total branching rate in the system is finite and, thus, there is a positive probability for the event that in the original particle system there is no branching event until time
$\hat T$
. On this event, particles of different populations never meet after time
$\hat T$
and therefore there is a positive probability of survival of both populations.
4.2. Proof of Theorem 2.2(b): recurrent case
We would like to show that
\begin{equation*}{\mathrm{P}}(\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}>0)=0.\end{equation*}
First, recall why
$\lim_{t\rightarrow\infty}\boldsymbol{\xi}_{t}\boldsymbol{\eta}_{t}=\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}$
exists. By Itô’s formula it is easy to see that
$\left\{ \boldsymbol{\xi}_{t}\boldsymbol{\eta}_{t}\right\} _{t\geq0}$
is a non-negative local martingale, that is, a non-negative supermartingale. By the martingale convergence theorem, non-negative supermartingales converge a.s. as time goes to infinity. Hence, the limit $\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}=\lim_{t\rightarrow\infty}\boldsymbol{\xi}_{t}\boldsymbol{\eta}_{t}$ exists a.s.
Also note that
$\left\{\boldsymbol{\xi}_{t}\boldsymbol{\eta}_{t}\right\} _{t\geq0}$
is an integer-valued supermartingale. Therefore, there exists a random time
$T_{0}$
such that (4.1)
\begin{equation}\boldsymbol{\xi}_{t}\boldsymbol{\eta}_{t}=\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}\quad\mathrm{for\ all}\ t\geq T_{0}.\end{equation}
Now assume that
$\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}>0$
, that is
$\boldsymbol{\xi}_{t}>0$
and
$\boldsymbol{\eta}_{t}>0$
for
$t\geq T_{0}$
. Since the motion is recurrent, with probability one the two populations ‘meet’ after time
$T_{0}$
at some site. Moreover, on the event
$\left\{\boldsymbol{\xi}_{\infty}\boldsymbol{\eta}_{\infty}>0\right\} $
, by recurrence, after time
$T_{0}$
, the two populations spend an infinite amount of time ‘together’. Since the branching rate is at least
$\gamma>0$
, when particles of the two populations spend time ‘together’ at the same site, a branching event eventually happens with probability one. However, this contradicts (4.1). Therefore,
$\boldsymbol{\xi}_{t}=0$
or
$\boldsymbol{\eta}_{t}=0$
for all
$t\geq T_{0}$
, that is, one of the populations becomes extinct, and coexistence is not possible.
5. Moment Computations for
$S=\Lambda_{n}$
In this section, we derive some useful moment estimates for
$(\xi_{t},\eta_{t})$
solving (2.4) in the case of
$S=\Lambda_{n}$
for arbitrary
$n\geq1$
(recall that
$\Lambda_{n}$
is the torus defined in Section 1.1). These estimates will be essential for proving Theorem 2.3 in Section 6.
To simplify the notation we suppress dependence on ‘n’. Throughout the section the motion process for a mutually catalytic process
$(\xi_{t},\eta_{t})$
is the nearest-neighbor random walk on
$S=\Lambda_{n}$
. The transition semigroup of the motion process will be denoted by $\left\{ P_{t}\right\} _{t\geq0}$, its transition density by $\left\{ p_{t}(\cdot,\cdot)\right\} _{t\geq0}$, and its Q-matrix by Q. For the definition of conditional quadratic variation, see [Reference Protter35, Chapter III]. For
$\psi\in \mathbb{Z}^S$
and
$\varphi\in \mathbb{R}^S$
define the inner product
\begin{equation*}\left\langle \psi,\varphi\right\rangle \,:\!=\,\sum_{x\in S}\psi(x)\varphi(x)\end{equation*}
whenever the sum is absolutely convergent.
Lemma 5.1. Assume that
$S=\Lambda_{n}$
. Let
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
. If
$\varphi\,:\,S\rightarrow\mathbb{R}_{+}$
, then
\begin{equation*}\left\langle \xi_{s},P_{t-s}\varphi\right\rangle =\left\langle \xi_{0},P_{t}\varphi\right\rangle +N_{s}^{\xi}(t,\varphi)\quad\mathrm{and}\quad\left\langle \eta_{s},P_{t-s}\varphi\right\rangle =\left\langle \eta_{0},P_{t}\varphi\right\rangle +N_{s}^{\eta}(t,\varphi),\quad s\leq t,\end{equation*}
where
\begin{align}N_{s}^{\xi}(t,\varphi) = & \sum_{x\in S}\Bigg(\sum_{y\neq x}\Bigg\{ \mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\xi_{r-}(y)\geq u}N_{y,x}^{\textrm{RW}_{\xi}}({\mathrm{d}} r\,{\mathrm{d}} u)\\ & -\mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\xi_{r-}(x)\geq u}N_{x,y}^{\textrm{RW}_{\xi}}({\mathrm{d}} r\,{\mathrm{d}} u)\Bigg\} -\mathop\int\limits_{0}^{s}\xi_{r}Q(x)P_{t-r}\varphi(x)\,{\mathrm{d}} r\Bigg)\nonumber \\ & +\sum_{x\in S}\sum_{k\geq0}(k-1)\mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\{\gamma\eta_{r-}(x)\xi_{r-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\xi}}({\mathrm{d}} r\,{\mathrm{d}} u),\quad s\leq t,\nonumber\end{align}
and
\begin{align*}N_{s}^{\eta}(t,\varphi) & = \sum_{x\in S}\Bigg(\sum_{y\neq x}\Bigg\{ \mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\eta_{r-}(y)\geq u}N_{y,x}^{\textrm{RW}_{\eta}}({\mathrm{d}} r\,{\mathrm{d}} u)\\ & \quad -\mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\eta_{r-}(x)\geq u}N_{x,y}^{\textrm{RW}_{\eta}}({\mathrm{d}} r\,{\mathrm{d}} u)\Bigg\} -\mathop\int\limits_{0}^{s}\eta_{r}Q(x)P_{t-r}\varphi(x)\,{\mathrm{d}} r\Bigg)\\ & \quad +\sum_{x\in S}\sum_{k\geq0}(k-1)\mathop\int\limits_{0}^{s}\mathop\int\limits_{\mathbb{R}_{+}}P_{t-r}\varphi(x)1_{\{\gamma\eta_{r-}(x)\xi_{r-}(x)\geq u\}}N_{x,k}^{{\mathrm{br}}_{\eta}}({\mathrm{d}} r\,{\mathrm{d}} u),\quad s\leq t\end{align*}
are orthogonal square-integrable
$\mathcal{F}_{s}$
-martingales on
$s\in[0,t]$
(the series converge in
$L^{2}$
uniformly in
$s\leq t$
) with conditional quadratic variations
\begin{align}\left\langle N_{\cdot}^{\xi}(t,\varphi)\right\rangle _{s} & = \kappa\sum_{y\in S}\mathop\int\limits_{0}^{s}\xi_{r-}(y){\mathrm{E}}\big[(P_{t-r}\varphi(Z+y)-P_{t-r}\varphi(y))^{2}\big]\,{\mathrm{d}} r\nonumber \\[4pt] & \quad + \sigma^{2}\gamma\Bigg(\sum_{x\in S}\mathop\int\limits_{0}^{s}\left(P_{t-r}\varphi(x)\right)^{2}\xi_{r-}(x)\eta_{r-}(x)\,{\mathrm{d}} r\Bigg)\!,\end{align}
and
\begin{align*}\left\langle N_{\cdot}^{\eta}(t,\varphi)\right\rangle _{s} & = \kappa\sum_{y\in S}\mathop\int\limits_{0}^{s}\eta_{r-}(y){\mathrm{E}}\big[(P_{t-r}\varphi(Z+y)-P_{t-r}\varphi(y))^{2}\big]\,{\mathrm{d}} r \\[4pt] & \quad + \sigma^{2}\gamma\Bigg(\sum_{x\in S}\mathop\int\limits_{0}^{s}\left(P_{t-r}\varphi(x)\right)^{2}\xi_{r-}(x)\eta_{r-}(x)\,{\mathrm{d}} r\Bigg).\end{align*}
Here Z is a random variable distributed as a single jump of the nearest-neighbor random walk.
Proof. The proof goes through an application of Lemma 3.1 and Itô’s formula to functions of the form
$f(s,\xi_s,\eta_s)=\left\langle \xi_{s},P_{t-s}\varphi\right\rangle$
and
$f(s,\xi_s,\eta_s)=\left\langle \eta_{s},P_{t-s}\varphi\right\rangle$
. The proof is standard and we leave the details to the enthusiastic reader.
Finally, the orthogonality of the martingales
$N_{\cdot}^{\xi}(t,\varphi)$
and
$N_{\cdot}^{\eta}(t,\psi)$
follows from the independence of the driving Poisson point processes.
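As a quick illustration of the lemma (this special case is spelled out by us for orientation), take $\varphi\equiv1$, so that $P_{t-r}\varphi\equiv1$ and the migration part of the quadratic variation vanishes. Then Lemma 5.1 reduces to
\begin{equation*}\left\langle \xi_{s},1\right\rangle =\left\langle \xi_{0},1\right\rangle +N_{s}^{\xi}(t,1),\qquad\left\langle N_{\cdot}^{\xi}(t,1)\right\rangle _{s}=\sigma^{2}\gamma\sum_{x\in S}\mathop\int\limits_{0}^{s}\xi_{r-}(x)\eta_{r-}(x)\,{\mathrm{d}} r;\end{equation*}
that is, the total mass of each population is a square-integrable martingale whose quadratic variation comes purely from the branching.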
Corollary 5.1. Assume
$S=\Lambda_{n}$
. Let
$(\xi_{0},\eta_{0})\in E_{\mathrm{fin}}\times E_{\mathrm{fin}}$
. If
$\varphi,\psi\,:\,S\rightarrow\mathbb{R}_{+}$
, then
\begin{equation*}{\mathrm{E}}\left\langle \xi_{t},\varphi\right\rangle =\left\langle \xi_{0},P_{t}\varphi\right\rangle \quad\mathrm{and}\quad{\mathrm{E}}\left\langle \eta_{t},\psi\right\rangle =\left\langle \eta_{0},P_{t}\psi\right\rangle ,\end{equation*}
and
\begin{equation*}{\mathrm{E}}\left(\left\langle \xi_{t},\varphi\right\rangle \left\langle \eta_{t},\psi\right\rangle \right)=\left\langle \xi_{0},P_{t}\varphi\right\rangle \left\langle \eta_{0},P_{t}\psi\right\rangle .\end{equation*}
Proof. Since
$\Lambda_n$
is finite, (5.4) follows immediately from Lemma 5.1.
As for (5.5), recalling again that
$\Lambda_n$
is finite, from Lemma 5.1 we get
\begin{eqnarray*}{\mathrm{E}}(\langle \xi_{t},\varphi\rangle \langle \eta_{t},\psi\rangle ) & \,=\, & {\mathrm{E}}\big(\big[\langle \xi_{0},P_{t}\varphi\rangle +N_{t}^{\xi}(t,\varphi)\big]\big[\langle \eta_{0},P_{t}\psi\rangle +N_{t}^{\eta}(t,\psi)\big]\big)\\ & =\, & \left\langle \xi_{0},P_{t}\varphi\right\rangle \left\langle \eta_{0},P_{t}\psi\right\rangle +\left\langle \eta_{0},P_{t}\psi\right\rangle {\mathrm{E}}\big(N_{t}^{\xi}(t,\varphi)\big)\\ & & +\left\langle \xi_{0},P_{t}\varphi\right\rangle {\mathrm{E}}(N_{t}^{\eta}(t,\psi))+{\mathrm{E}}\big(N_{t}^{\xi}(t,\varphi)N_{t}^{\eta}(t,\psi)\big)\\ & =\, & \left\langle \xi_{0},P_{t}\varphi\right\rangle \left\langle \eta_{0},P_{t}\psi\right\rangle ,\end{eqnarray*}
where the second and the third terms on the right-hand side are equal to zero since
$N_{t}^{\xi}(t,\varphi)$
and
$N_{t}^{\eta}(t,\psi)$
are martingales, and the last term vanishes because of the orthogonality of
$N_{t}^{\xi}(t,\varphi)$
and
$N_{t}^{\eta}(t,\psi)$
.
Let
$B\subseteq S$
be an arbitrary finite (bounded) subset, and let
$\left|B\right|$
denote the number of sites in B.
Now we are ready to compute the expected number of particles (for each population separately) at a site
$x\in S$
and in a set B.
Corollary 5.2. Assume
$S=\Lambda_{n}$
. Let
$\xi_{0}(x)\equiv v,\ \eta_{0}(x)\equiv u\ \forall\ x\in S$
, where
$(v,u)\in \mathbb{N}_{0}^2$
. Then, for all $t\geq0$ and $x\in S$,
\begin{equation*}{\mathrm{E}}\left(\xi_{t}(x)\right)=v\quad\mathrm{and}\quad{\mathrm{E}}\left(\eta_{t}(x)\right)=u.\end{equation*}
Moreover, for any finite
$B\subset S$
,
\begin{equation*}{\mathrm{E}}\left(\sum_{x\in B}\xi_{t}(x)\right)=v\left|B\right|\quad\mathrm{and}\quad{\mathrm{E}}\left(\sum_{x\in B}\eta_{t}(x)\right)=u\left|B\right|.\end{equation*}
Proof. The result follows easily from Corollary 5.1.
Before we treat the second moments of
$\xi_{t}(x)$
and
$\eta_{t}(x)$
, let us prove a simple technical lemma. Recall that
$g_{t}(\cdot,\cdot)$
is the Green function defined in (2.3) (for the nearest-neighbor random walk on
$S=\Lambda_{n}$
). Note that whenever the motion process is the nearest-neighbor random walk, we have
$g_{t}(x,y)=g_{t}(x-y)$
(with certain abuse of notation).
Lemma 5.2. Assume
$S=\Lambda_{n}$
, and the motion process is a nearest-neighbor random walk on S. For every
$x\in S$
:
\begin{equation*}g_{t}(x)-\frac{1}{2d}\sum_{i=1}^{d}\left[g_{t}(x+e_{i})+g_{t}(x-e_{i})\right]=\frac{1}{\kappa}(\delta_{x,0}-p_{t}(x)),\quad\forall\ t\geq0,\end{equation*}
where
$\delta_{x,y}=1$
if and only if
$x=y$
and
$\delta_{x,y}=0$
otherwise.
Proof. The proof follows a standard procedure, using the evolution equation for the transition densities of a continuous-time nearest-neighbor random walk. As we could not find an exact reference, we have included the derivation here for completeness.
The evolution of the transition probabilities of a continuous-time Markov chain is governed by the first-order differential equation
\begin{equation*}\frac{\partial}{\partial t}p_{t}(x)=\sum_{y\in S}q_{xy}p_{t}(y).\end{equation*}
In our case
$Q=(q_{xy})_{x,y\in S}$
where
\begin{equation*}q_{xy}=\kappa p_{x,y}\ \mathrm{for}\ x\neq y,\qquad q_{xx}=-\kappa.\end{equation*}
Therefore, we have
\begin{eqnarray*}\frac{\partial}{\partial t}p_{t}(x) & \,=\, & \sum_{y\in S}q_{xy}p_{t}(y)=q_{xx}p_{t}(x)+\frac{\kappa}{2d}\sum_{i=1}^{d}\left[p_{t}(x+e_{i})+p_{t}(x-e_{i})\right]\\ & =\, & -\kappa\left(p_{t}(x)-\frac{1}{2d}\sum_{i=1}^{d}\left[p_{t}(x+e_{i})+p_{t}(x-e_{i})\right]\right).\end{eqnarray*}
Now, integrate both sides over the interval [0, t], which gives
\begin{eqnarray*}p_{t}(x)-p_{0}(x) & \,=\, & -\kappa\mathop\int\limits_{0}^{t}\left(p_{s}(x)-\frac{1}{2d}\sum_{i=1}^{d}\big[p_{s}(x+e_{i})+p_{s}(x-e_{i})\big]\right)\,{\mathrm{d}} s\\ & =\, & -\kappa\left(g_{t}(x)-\frac{1}{2d}\sum_{i=1}^{d}\big[g_{t}(x+e_{i})+g_{t}(x-e_{i})\big]\right).\end{eqnarray*}
The result follows immediately once we recall that
$p_0(x)=\delta_{x,0}$
.
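The identity is also easy to confirm numerically. The following sketch (ours; the discretization, torus size, and all names are illustrative assumptions) checks it on a one-dimensional torus, computing $p_{t}$ with a matrix exponential and $g_{t}$ with a midpoint Riemann sum.

```python
import numpy as np
from scipy.linalg import expm

def check_lemma_5_2(N=11, kappa=1.5, t=2.0, steps=4000):
    """Max deviation in: g_t(x) - (1/2d) sum_i [g_t(x+e_i) + g_t(x-e_i)]
    = (1/kappa) * (delta_{x,0} - p_t(x)), on the 1d torus (d = 1)."""
    Q = np.zeros((N, N))
    for x in range(N):
        Q[x, (x + 1) % N] = kappa / 2          # nearest-neighbour jump rates
        Q[x, (x - 1) % N] = kappa / 2
        Q[x, x] = -kappa
    ds = t / steps
    g = np.zeros(N)                            # g_t(x) = int_0^t p_s(0, x) ds
    for k in range(steps):
        g += expm((k + 0.5) * ds * Q)[0] * ds  # midpoint rule for the integral
    p_t = expm(t * Q)[0]
    lhs = g - 0.5 * (np.roll(g, 1) + np.roll(g, -1))
    rhs = (np.eye(N)[0] - p_t) / kappa
    return np.max(np.abs(lhs - rhs))

print(check_lemma_5_2())                       # should print a value close to zero
```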
Now we are ready to handle the second moments of
$\xi_{t}(x)$
and
$\eta_{t}(x)$
.
Lemma 5.3. Assume
$S=\Lambda_{n}$
. Let
$\xi_{0}(x)\equiv v,\ \eta_{0}(x)\equiv u$
for all
$x\in S$
, where
$(v,u)\in \mathbb{N}_{0}^2$
. Then, for all
$t\geq0$
, (5.6)
\begin{equation}{\mathrm{E}}\big(\xi_{t}(x)^{2}\big)=v^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(0)+v\left(1-p_{2t}(0)\right),\quad x\in S,\end{equation}
and (5.7)
\begin{equation}{\mathrm{E}}\big(\eta_{t}(x)^{2}\big)=u^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(0)+u\left(1-p_{2t}(0)\right),\quad x\in S.\end{equation}
Proof. We prove only (5.6), since the proof of (5.7) is the same. Again we use the representation of the process from Lemma 5.1, with
$\varphi\left(\cdot\right)=\delta_{x}\left(\cdot\right)$
and use the notation
$\phi_{r}({\cdot})=P_{t-r}\delta_{x}({\cdot})=p_{t-r}(\cdot-x).$
For
$0\leq s\leq t$
denote
\begin{equation*}N_{s}^{\xi}(y)=N_{s}^{\xi}(t,y)\,:\!=\,N_{s}^{\xi}(t,\delta_{y}).\end{equation*}
Then
\begin{eqnarray*}{\mathrm{E}}\big(\xi_{t}(x)^{2}\big) & = & \left(P_{t}\xi_{0}(x)\right)^{2}+{\mathrm{E}}\big(\big(N_{t}^{\xi}(x)\big)^{2}\big)\\ & = & v^{2}+{\mathrm{E}}\left(\left\langle N_{\cdot}^{\xi}(x)\right\rangle _{t}\right)\\ & = & v^{2}+v\kappa\sum_{y\in S}\mathop\int\limits_{0}^{t}\sum_{z\in S}(\phi_{r}(z)-\phi_{r}(y))^{2}p_{y,z}\,{\mathrm{d}} r+\sigma^{2}\gamma uv\sum_{y\in S}\mathop\int\limits_{0}^{t}\phi_{r}(y)^{2}\,{\mathrm{d}} r\\ & \,=\!:\, & v^{2}+v\kappa J_{1}(t)+\sigma^{2}\gamma uv J_{2}(t),\quad t\geq0,\end{eqnarray*}
where in the third equality we used again Lemma 5.1, the Fubini theorem, and Corollary 5.1 which implies
${\mathrm{E}}(\xi_{r-}(y)\eta_{r-}(y))=P_{r}\xi_{0}(y)P_{r}\eta_{0}(y)=vu$
,
${\mathrm{E}}(\xi_{r-}(y))=P_{r}\xi_{0}(y)=v.$
Now we will compute each term separately.
First, let us evaluate
$J_{2}(t)$
. Recall that
$\phi_{r}(y)=p_{t-r}(y-x)$
. Then we have
\begin{align}J_{2}(t) = & \mathop\int\limits_{0}^{t}\sum_{y\in S}p_{t-r}(y-x)^{2}\,{\mathrm{d}} r =\mathop\int\limits_{0}^{t}p_{2(t-r)}(0)\,{\mathrm{d}} r =\frac{1}{2}\mathop\int\limits_{0}^{2t}p_{\tau}(0)\,{\mathrm{d}} \tau=0.5g_{2t}(0),\end{align}
where
$p_{s}(x,x)=p_{s}(0,0)=p_{s}(0),$
for all
$x\in S$
.
Now we will handle
$J_{1}(t)$
:
\begin{align}J_{1}(t) & = \sum_{y\in S}\mathop\int\limits_{0}^{t}\sum_{z\in S}(\phi_{r}(z)-\phi_{r}(y))^{2}p_{y,z}\,{\mathrm{d}} r\\ & = \sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(z-x)^{2}p_{y,z}\,{\mathrm{d}} r+\sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(y-x)^{2}p_{y,z}\,{\mathrm{d}} r\nonumber \\ & \quad -2\sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(z-x)p_{t-r}(y-x)p_{y,z}\,{\mathrm{d}} r.\nonumber\end{align}
We will treat each of the three terms above separately. For the first term we have
\begin{align}\sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(z,x)^{2}p_{y,z}\,{\mathrm{d}} r & = \sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(z,x)^{2}\,{\mathrm{d}} r = 0.5g_{2t}(0)\end{align}
where the last equality follows as in (5.8).
Similarly we get
\begin{align}\sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(y,x)^{2}p_{y,z}\,{\mathrm{d}} r &=0.5g_{2t}(0).\end{align}
Finally, it is easy to obtain
\begin{align}& \sum_{y\in S}\sum_{z\in S}\mathop\int\limits_{0}^{t}p_{t-r}(z,x)p_{t-r}(y,x)p_{y,z}\,{\mathrm{d}} r\nonumber\\& \qquad\qquad\qquad\qquad\qquad =0.5\frac{1}{2d}\sum_{i=1}^{d}\left[g_{2t}(e_{i})+g_{2t}({-}e_{i})\right]=\frac{1}{2d}\sum_{i=1}^{d}g_{2t}(e_{i}).\end{align}
By putting (5.10), (5.11), (5.12), and (5.8) together we have
\begin{equation}{\mathrm{E}}\big(\xi_{t}(x)^{2}\big)=v^{2}+\sigma^{2}\gamma uv\frac{1}{2}g_{2t}(0)+v\kappa\left(g_{2t}(0)-\frac{1}{2d}\sum_{i=1}^{d}\left[g_{2t}(e_{i})+g_{2t}({-}e_{i})\right]\right),\quad t\geq0,\ x\in S.\end{equation}
Now use (5.13) and Lemma 5.2 (applied at time 2t) with
$x=0$
to get
\begin{equation*}{\mathrm{E}}\big(\xi_{t}(x)^{2}\big)=v^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(0)+v\left(1-p_{2t}(0)\right),\end{equation*}
which is exactly (5.6).
We also need to evaluate
${\mathrm{E}}(\xi_{t}(x)\xi_{t}(y))$
for
$x\neq y$
. To this end we will prove the following lemma.
Lemma 5.4. Assume
$S=\Lambda_{n}$
. Let
$\xi_{0}(x)\equiv v,\ \eta_{0}(x)\equiv u\ \forall\ x\in S$
, where
$(v,u)\in \mathbb{N}_{0}^2$
. Let
$x\neq y$
, then
\begin{equation*}{\mathrm{E}}\left(\xi_{t}(x)\xi_{t}(y)\right)=v^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(x-y)-v\,p_{2t}(x-y),\end{equation*}
and
\begin{equation*}{\mathrm{E}}\left(\eta_{t}(x)\eta_{t}(y)\right)=u^{2}+\frac{1}{2}\sigma^{2}\gamma uv\,g_{2t}(x-y)-u\,p_{2t}(x-y).\end{equation*}
Proof. The proof goes along similar lines as the proof of Lemma 5.3 and thus is omitted.
6. Proof of Theorem 2.3
Let
$\left(\xi_{t}^{n},\eta_{t}^{n}\right)$
be a pair of processes solving (2.4) with site space
$S=\Lambda_{n}$
, and
$N_{x,y}^{\textrm{RW}_{\xi}}$
,
$N_{x,y}^{\textrm{RW}_{\eta}}$
being Poisson point processes with intensity measure
${{q^{n}(x,y)\,{\mathrm{d}} s\otimes {\mathrm{d}} u}}$
, where
$q^{n}$
is defined by (1.2). Here
$\{ p_{x,y}^{n}\} _{x,y\in\Lambda_{n}}$
are the transition jump probabilities of the underlying random walk, and
$\left\{ P_{t}^{n}\right\} _{t\geq0}$
is the associated semigroup. In what follows, we assume
$d\geq 3$
.
Fix
$(\theta_{1},\theta_{2}) \in \mathbb{N}_{0}^2$
. Assume the following initial conditions for
$(\xi_{t}^{n},\eta_{t}^{n})$
:
\begin{equation*}\xi_{0}^{n}(x)=\theta_{1}\quad\mathrm{and}\quad\eta_{0}^{n}(x)=\theta_{2}\quad\mathrm{for\ every}\ x\in\Lambda_{n}.\end{equation*}
Set
\begin{equation*}\boldsymbol{\xi}_{t}^{n}\,:\!=\,\sum_{x\in\Lambda_{n}}\xi_{t}^{n}(x)\quad\mathrm{and}\quad\boldsymbol{\eta}_{t}^{n}\,:\!=\,\sum_{x\in\Lambda_{n}}\eta_{t}^{n}(x),\quad t\geq0.\end{equation*}
We define the following time change:
\begin{equation*}\beta_{n}(t)\,:\!=\,\left|\Lambda_{n}\right|t,\quad t\geq0.\end{equation*}
Theorem 2.3 identifies the limiting distribution of
\begin{equation*}\frac{1}{\left|\Lambda_{n}\right|}\big(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{\eta}_{\beta_{n}(t)}^{n}\big)\end{equation*}
as
$n\rightarrow\infty$
, for
$t\in [0,1]$
.
In Section 1 we defined a system of Dawson–Perkins processes
$(U_{t}^{n},V_{t}^{n})_{t\geq0}$
on
$\Lambda_{n}$
, which solves (1.3). Recall that
\begin{equation*}(\mathbf{U}_{t}^{n},\mathbf{V}_{t}^{n})=\Bigg(\sum_{x\in\Lambda_{n}}u_{t}^{n}(x),\sum_{x\in\Lambda_{n}}v_{t}^{n}(x)\Bigg),\quad t\geq0.\end{equation*}
The limiting behavior of
$(\mathbf{U}_{t}^{n},\mathbf{V}_{t}^{n})_{t\geq0}$
was studied in [Reference Cox, Dawson and Greven6]; we stated the result in Theorem 1.1.
Theorem 2.3 claims that the limiting behavior of
$\frac{1}{\left|\Lambda_{n}\right|}\big(\boldsymbol{\xi}_{\beta_{n}(t)}^{n},\boldsymbol{\eta}_{\beta_{n}(t)}^{n}\big)$
is similar to
$\frac{1}{\left|\Lambda_{n}\right|}\big(\mathbf{U}_{\beta_{n}(t)}^{n},\mathbf{V}_{\beta_{n}(t)}^{n}\big)$
for
$t\in [0,1]$
. As we have mentioned above, in contrast to the Dawson–Perkins processes solving equation (1.3), the useful self-duality property does not hold for our branching particle model. However, we use the so-called approximating duality technique, which allows us to prove Theorem 2.3.
In what follows, we will use a periodic sum on
$\Lambda_{n}$
: for
$x,y\in\Lambda_{n}$
we have
$x+y=(x+y)\ (\mathrm{mod}\ \Lambda_{n})\in\Lambda_{n}$
.
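For instance, in dimension $d=1$ with $n=2$, identifying $\Lambda_{2}$ with $\{-2,-1,0,1,2\}$, the periodic sum gives $2+1=-2$, while $1+1=2$ as usual.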
The next proposition is crucial for the proof of Theorem 2.3.
Proposition 6.1. Let
$(X_{t},Y_{t})_{t\geq0}$
be the solution to (2.9). Then for all
${{a,b\geq0}}$
,
\begin{align*}\lim_{n\rightarrow\infty}{\mathrm{E}}\big({\mathrm{e}}^{-\frac{1}{\left|\Lambda_{n}\right|}(\boldsymbol{\xi}_{\beta_{n}(t)}^{n}+\boldsymbol{\eta}_{\beta_{n}(t)}^{n})(a+b)-{\mathrm{i}}\frac{1}{\left|\Lambda_{n}\right|}(\boldsymbol{\xi}_{\beta_{n}(t)}^{n}-\boldsymbol{\eta}_{\beta_{n}(t)}^{n})(a-b)}\big)\\={\mathrm{E}}\big({\mathrm{e}}^{-(X_{t}+Y_{t})(a+b)-{\mathrm{i}}(X_{t}-Y_{t})(a-b)}\big),\end{align*}
for
$t\in [0,1]$
.
Proof of Theorem 2.3. By an easy adaptation of Lemma 2.5 of [Reference Mytnik33] one gets that the mixed Laplace–Fourier transform
$(a,b)\mapsto{\mathrm{E}}\big({\mathrm{e}}^{-(X+Y)(a+b)-{\mathrm{i}}(X-Y)(a-b)}\big)$, $a,b\geq0$, determines the distribution of a non-negative two-dimensional random vector (X, Y). Therefore, Theorem 2.3 follows easily from Proposition 6.1 and properties of weak convergence.
The rest of the section is organized as follows. Section 6.1 is devoted to the proof of Proposition 6.1, and the proof of one of the technical propositions is deferred to Section 6.2.
6.1. Proof of Proposition 6.1
In what follows, fix
$T\in (0,1]$
. Let
$(\xi_{t}^{n},\eta_{t}^{n})_{t\geq0}$
be a mutually catalytic branching random walk from Theorem 2.3 (we will refer to it as the ‘discrete process’). In the proof of the proposition we will use the duality technique introduced in [Reference Mytnik33]. To this end, we need the following Dawson–Perkins processes.
• Let
$(u_{t}^{n},v_{t}^{n})_{t\geq0}$
be a solution to (1.3), with
$Q^n$
being the Q-matrix of the nearest-neighbor random walk on
$\Lambda_n$
, and with some initial conditions
$\left(u_{0},v_{0}\right)$
.
• For arbitrary
$a,b\geq 0$
, the sequence
$(\tilde{u}_{t}^{n},\tilde{v}_{t}^{n})_{t\geq0}$
solving (1.3) with initial conditions (6.2)
\begin{equation}\tilde{u}_{0}^{n}(x)=\frac{a}{\left|\Lambda_{n}\right|},\quad \tilde{v}_{0}^{n}(x)=\frac{b}{\left|\Lambda_{n}\right|}\quad \mathrm{for\ every}\ x\in\Lambda_{n}.\end{equation}
In what follows, we assume that
$(u_{t}^{n},v_{t}^{n})_{t\geq0},\,(\tilde{u}_{t}^{n},\tilde{v}_{t}^{n})_{t\geq0}$
and
$(\xi_{t}^{n},\eta_{t}^{n})_{t\geq0}$
are independent. Now let us describe the state spaces for the processes involved in this section.
Similarly to
$E_{\mathrm{fin}}$
define
$E_{\mathrm{fin}}^{n}=\left\{ f:\Lambda_{n}\longrightarrow\mathbb{N}_{0}\right\}$
, and
$E_{\mathrm{fin,con}}^{n}= E_{\mathrm{fin}}^{n}\times E_{\mathrm{fin}}^{n}$
. Clearly, since
$\Lambda_{n}$
is finite, the
$L^1$
norm of functions in
$E_{\mathrm{fin}}^{n}$
is finite. In addition, define
$\widetilde{E}_{\mathrm{fin}}^{n}=\left\{ f\,:\,\Lambda_{n}\longrightarrow\mathbb{R}_{+}\right\}$
, and
$\widetilde{E}_{\mathrm{fin,con}}^{n}= \widetilde{E}_{\mathrm{fin}}^{n}\times \widetilde{E}_{\mathrm{fin}}^{n}$
.
First, by Theorem 2.1, the process
$(\xi_{t}^{n},\eta_{t}^{n})$
that solves (2.4) with initial conditions
$(\xi_{0}^{n},\eta_{0}^{n})=\bar{\boldsymbol{\theta}}$
is an
$E_{\mathrm{fin}}^{n}\times E_{\mathrm{fin}}^{n}$
-valued process. By our definition (6.2),
$(\tilde{u}_{0}^{n},\tilde{v}_{0}^{n})\in \widetilde{E}_{\mathrm{fin,con}}^{n}$
. Moreover, by simple adaptation of the proof of Theorem 2.2(d) in [Reference Dawson and Perkins16] to our state space
$\Lambda_{n}$
, we get
For
$(\varphi,\psi,\tilde\varphi,\tilde\psi)\in\mathbb{R}_{+}^{\Lambda_{n}}\times\mathbb{R}_{+}^{\Lambda_{n}}\times\mathbb{R}_{+}^{\Lambda_{n}}\times\mathbb{R}_{+}^{\Lambda_{n}}$
define
and
\begin{eqnarray*}F_{t,s}^{n} & \,=\, & {\mathrm{E}}\left[H\left(\xi_{t}^{n},\eta_{t}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\right]\\[3pt] & \,=\, & {\mathrm{E}}\big[{\mathrm{e}}^{-\left\langle \xi_{t}^{n}+\eta_{t}^{n},\tilde{u}_{s}^{n}+\tilde{v}_{s}^{n}\right\rangle -{\mathrm{i}}\left\langle \xi_{t}^{n}-\eta_{t}^{n},\tilde{u}_{s}^{n}-\tilde{v}_{s}^{n}\right\rangle }\big],\end{eqnarray*}
for
$0\leq s,t\leq\beta_{n}(T)$
.
Let us recall the self-duality lemma from [Reference Cox, Dawson and Greven6, Lemma 4.1].
Lemma 6.1. Let
$\left(u_{0},v_{0}\right),\left(\tilde{u}_{0},\tilde{v}_{0}\right)\in \widetilde{E}_{\mathrm{fin,con}}^{n}$
, where
$\left(u_{t},v_{t}\right)_{t\geq0},\left(\tilde{u}_{t},\tilde{v}_{t}\right)_{t\geq0}$
are independent solutions of (1.3). Then
Remark 6.1. In [Reference Cox, Dawson and Greven6] the above lemma is proved for more general state spaces and initial conditions. The conditions in Lemma 4.1 in [Reference Cox, Dawson and Greven6] hold trivially in our case.
Then we have the following proposition.
Proposition 6.2. For any
$(\theta_{1},\theta_{2})\in \mathbb{N}_{0}^2$
,
$a,b\geq0$
,
Proof. The proof, which proceeds through a series of auxiliary results, is postponed until the end of this section.
Given Proposition 6.2, it is easy to complete the proof of Proposition 6.1.
Proof of Proposition
6.1. Fix arbitrary
$\theta_{1},\theta_{2}\geq 0$
and
$a,b\geq0$
. For any
$n\geq1$
, let
$(u_{t}^{n},v_{t}^{n})_{t\geq0}$
be the solution to (1.3) with
$Q^{n}$
being a Q-matrix of the nearest-neighbor random walk on
$\Lambda_{n}$
, and initial conditions
$(u_{0}^{n},v_{0}^{n})=(\xi_{0}^{n},\eta_{0}^{n})=\boldsymbol{\bar{\theta}}$
. Recall that
Note that
\begin{align}\lim_{n\rightarrow\infty}{\mathrm{E}}\big[F_{0,\beta_{n}(T)}^{n}\big] & = \lim_{n\rightarrow\infty}{\mathrm{E}}\Big({\mathrm{e}}^{-\big\langle \xi_{0}^{n}+\eta_{0}^{n},\tilde{u}_{\beta_{n}(T)}^{n}+\tilde{v}_{\beta_{n}(T)}^{n}\big\rangle -{\mathrm{i}}\langle \xi_{0}^{n}-\eta_{0}^{n},\tilde{u}_{\beta_{n}(T)}^{n}-\tilde{v}_{\beta_{n}(T)}^{n}\rangle }\Big)\nonumber \\& = \lim_{n\rightarrow\infty}{\mathrm{E}}\Big({\mathrm{e}}^{-\big(\mathbf{U}_{\beta_{n}(T)}^{n}+\mathbf{V}_{\beta_{n}(T)}^{n}\big) \frac{1}{\left|\Lambda_{n}\right|}(a+b)-{\mathrm{i}}\big(\mathbf{U}_{\beta_{n}(T)}^{n}-\mathbf{V}_{\beta_{n}(T)}^{n}\big) \frac{1}{\left|\Lambda_{n}\right|}(a-b)}\Big)\nonumber \\& = {\mathrm{E}}\big({\mathrm{e}}^{-(X_{T}+Y_{T})(a+b)-{\mathrm{i}}(X_{T}-Y_{T})(a-b)}\big),\end{align}
where the second equality follows by a self-duality relation in Lemma 6.1, and the third equality follows by Theorem 1.1. This means that
\begin{align}\lim_{n\rightarrow\infty}{\mathrm{E}}\big({\mathrm{e}}^{-\frac{1}{\left|\Lambda_{n}\right|}(\boldsymbol{\xi}_{\beta_{n}(T)}^{n}+\boldsymbol{\eta}_{\beta_{n}(T)}^{n})(a+b)-{\mathrm{i}}\frac{1}{\left|\Lambda_{n}\right|}(\boldsymbol{\xi}_{\beta_{n}(T)}^{n}-\boldsymbol{\eta}_{\beta_{n}(T)}^{n})(a-b)}\big) & = \lim_{n\rightarrow\infty}{\mathrm{E}}\big[F_{\beta_{n}(T),0}^{n}\big]=\lim_{n\rightarrow\infty}{\mathrm{E}}\big[F_{0,\beta_{n}(T)}^{n}\big]\nonumber\\[3pt]& = {\mathrm{E}}\big({\mathrm{e}}^{-(X_{T}+Y_{T})(a+b)-{\mathrm{i}}(X_{T}-Y_{T})(a-b)}\big),\end{align}
where the second equality follows by Proposition 6.2, and the last equality follows by (6.4). This finishes the proof of Proposition 6.1.
To prove Proposition 6.2 we will need several auxiliary results. First we need [Reference Ethier and Kurtz21, Lemma 4.10].
Lemma 6.2. (Lemma 4.10 of [Reference Ethier and Kurtz21]). Suppose a function f(s,t) on
$\left[0,\infty\right)\times\left[0,\infty\right)$
is absolutely continuous in s for each fixed t and absolutely continuous in t for each fixed s. Set
$\left(\,f_{1},f_{2}\right)\equiv\nabla f$
, and assume that
\begin{equation}\mathop\int\limits_{0}^{T}\mathop\int\limits_{0}^{T}\left|f_{i}(s,t)\right|\,{\mathrm{d}} s\,{\mathrm{d}} t<\infty,\quad i=1,2,\ \forall\ T>0.\end{equation}
Then for almost every
$t\geq0$
,
\begin{equation}f\left(t,0\right)-f\left(0,t\right)=\mathop\int\limits_{0}^{t}\left(\,f_{1}\left(s,t-s\right)-f_{2}\left(s,t-s\right)\right)\,{\mathrm{d}} s.\end{equation}
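As a quick sanity check of (6.8), take $f(s,t)=g(s)h(t)$ with $g$ and $h$ continuously differentiable, so that $f_{1}(s,t)=g'(s)h(t)$ and $f_{2}(s,t)=g(s)h'(t)$; then
\begin{equation*}\mathop\int\limits_{0}^{t}\left(\,f_{1}(s,t-s)-f_{2}(s,t-s)\right)\,{\mathrm{d}} s=\mathop\int\limits_{0}^{t}\frac{{\mathrm{d}}}{{\mathrm{d}} s}\big(g(s)h(t-s)\big)\,{\mathrm{d}} s=g(t)h(0)-g(0)h(t)=f(t,0)-f(0,t).\end{equation*}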
We apply this lemma to the function
$F_{r,s}^{n}={\mathrm{E}}\left[H\left(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\right]$
. Then we show that for
$f(r,s)=F_{r,s}^{n}$
and
$t=\beta_{n}(T)$
, the right-hand side of (6.8) tends to 0, as
$n\rightarrow\infty$
.
In order to check the conditions in Lemma 6.2 we will need several lemmas. In the next two lemmas we will derive martingale problems for processes
$(\xi_{\cdot}^{n},\eta_{\cdot}^{n})$
and
$(\tilde{u}_{\cdot}^{n},\tilde{v}_{\cdot}^{n})$
. Recall that
$\{ p_{x,y}^{n}\} _{x,y\in\Lambda_{n}}$
are the transition jump probabilities of the underlying nearest-neighbor random walk on
$\Lambda_{n}$
.
Lemma 6.3. For any
$(\varphi,\psi)\in \widetilde{E}_{\mathrm{fin,con}}^{n}$
define
\begin{align} & g\left(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi\right) = H\left(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi\right)\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\xi_{s}^{n}(x)\nonumber\\[3pt] & \quad \times p_{xy}^{n}\big[{\mathrm{e}}^{-\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)}-1\big]\nonumber\\[3pt] & \quad +\kappa\sum_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^{n}\big[{\mathrm{e}}^{-\varphi(y)-\psi(y)+\varphi(x)+\psi(x) +{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)}-1\big]\nonumber\\[3pt] & \quad+\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\sum_{k\geq0}\nu_{k} \big[{\mathrm{e}}^{-(k-1)\left(\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)}-1\big]\nonumber\\ & \quad +\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\sum_{k\geq0}\nu_{k} \big[{\mathrm{e}}^{-(k-1)\left(\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)}-1\big]\Bigg\} ,\quad \forall\ s\geq0.\end{align}
Then
\begin{equation*}H\left(\xi_{t}^{n},\eta_{t}^{n},\varphi,\psi\right)-\mathop\int\limits_{0}^{t}g\left(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi\right)\,{\mathrm{d}} s,\quad \forall\ t\geq0,\end{equation*}
is an
$\{ \mathcal{F}_{t}^{\xi,\eta}\} _{t\geq0}$
-martingale.
Proof. The result is immediate by Lemma 3.1(d).
A similar result holds for the Dawson–Perkins process.
Lemma 6.4. For any
$(\varphi,\psi)\in E_{\mathrm{fin,con}}^{n}$
, define
\begin{align}h\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right) & = H\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\bigg\{- \sum_{x\in\Lambda_{n}}\tilde{u}_{s}^{n}Q^n(x)\left(\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)\nonumber \\ & \quad -\sum_{x\in\Lambda_{n}}\tilde{v}_{s}^{n}Q^n(x)\left(\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)\nonumber \\ & \quad +4\tilde{\gamma}\sum_{x\in\Lambda_{n}}\tilde{u}_{s}^{n}(x)\tilde{v}_{s}^{n}(x)\varphi(x)\psi(x)\bigg\} ,\quad \forall\ s\geq0.\end{align}
Then
\begin{equation*}H\left(\varphi,\psi,\tilde{u}_{t}^{n},\tilde{v}_{t}^{n}\right)-\mathop\int\limits_{0}^{t}h\left(\varphi,\psi,\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\right)\,{\mathrm{d}} s,\quad t\geq0,\end{equation*}
is an
$\{ \mathcal{F}_{t}^{\tilde{u}^{n},\tilde{v}^{n}}\} _{t\geq0}$
-martingale.
Proof. The result is immediate by [Reference Dawson and Perkins16, Theorem 2.2(c)(iv)], Itô’s lemma ([Reference Ikeda and Watanabe24, Theorem II.5.1]), and simple algebra.
Lemma 6.5. For any
$t>0$
,
\begin{equation}\sup_{\begin{array}{c}0\leq s\leq t\\0\leq r\leq t\end{array}}{\mathrm{E}}\left|h(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n})\right|<\infty\end{equation}
and
\begin{equation}\sup_{\begin{array}{c}0\leq s\leq t\\0\leq r\leq t\end{array}}{\mathrm{E}}\left|g(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n})\right|<\infty.\end{equation}
Proof. Equation (6.11) is verified in the proof of Theorem 2.4(b) in [Reference Dawson and Perkins16].
Now let us check (6.12). First, by simple algebra it is trivial to see that for any
$z\in\mathbb{R}_{+}$
and
$y\in\mathbb{R}$
,
Hence,
\begin{align*}& \sup_{\begin{array}{c}0\leq s\leq t\\0\leq r\leq t\end{array}}{\mathrm{E}}\left|g\big(\xi_{r}^{n},\eta_{r}^{n},\tilde{u}_{s}^{n},\tilde{v}_{s}^{n}\big)\right|\\& \qquad\qquad \leq \sup_{0\leq s,r\leq t}C{\mathrm{E}}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\xi_{r}^{n}(x)p_{xy}^{n}\left[\tilde{u}_{s}^{n}(y)+\tilde{v}_{s}^{n}(y)+\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\\ & \qquad\qquad +\kappa\sum_{x,y\in\Lambda_{n}}\eta_{r}^{n}(x)p_{xy}^{n}\left[\tilde{u}_{s}^{n}(y)+\tilde{v}_{s}^{n}(y)+\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\\ & \qquad\qquad +\gamma\sum_{x\in\Lambda_{n}}\xi_{r}^{n}(x)\eta_{r}^{n}(x)\sum_{k\geq0}\nu_{k}\left|k-1\right|\left[\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\\ & \qquad\qquad +\gamma\sum_{x\in\Lambda_{n}}\xi_{r}^{n}(x)\eta_{r}^{n}(x)\sum_{k\geq0}\nu_{k}\left|k-1\right|\left[\tilde{u}_{s}^{n}(x)+\tilde{v}_{s}^{n}(x)\right]\Bigg\} ,\end{align*}
where
$C>0$
is a constant and the last inequality follows from (6.13). Recall that, by Corollary 5.2,
${\mathrm{E}}[\xi_{s}^{n}(x)]=\theta_{1},\ {\mathrm{E}}[\eta_{s}^{n}(x)]=\theta_{2}$
and
${\mathrm{E}}[\xi_{s}^{n}(x)\eta_{s}^{n}(x)]=\theta_{1}\theta_{2}$
. By [Reference Dawson and Perkins16, Theorem 2.2b(iii)],
since the initial conditions have finite mass. Also note that
${{\sum_{k\geq0}\left|k-1\right|\nu_{k}<\infty}}$
. Then (6.12) holds.
Now we are ready to prove the following lemma.
Lemma 6.6. For any
$n\geq1$
, and every
$t>0,$
\begin{align} & {\mathrm{E}}\left[H\left(\xi_{t}^{n},\eta_{t}^{n},\tilde{u}_{0}^{n},\tilde{v}_{0}^{n}\right)\right] -{\mathrm{E}}\left[H\left(\xi_{0}^{n},\eta_{0}^{n},\tilde{u}_{t}^{n},\tilde{v}_{t}^{n}\right)\right]\nonumber \\ & \quad={\mathrm{E}}\Bigg[\mathop\int\limits_{0}^{t}\left\{ g\left(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{t-s}^{n},\tilde{v}_{t-s}^{n}\right)-h\left(\xi_{s}^{n},\eta_{s}^{n}, \tilde{u}_{t-s}^{n},\tilde{v}_{t-s}^{n}\right)\right\} \,{\mathrm{d}} s\Bigg].\end{align}
Proof. By Lemmas 6.3, 6.4, and 6.5 we can apply Lemma 6.2 to the function
and immediately see that (6.14) holds for almost every
$t>0$
. However, again by Lemmas 6.3, 6.4, and 6.5 one can see that both the left- and right-hand sides of (6.14) are continuous in t. Hence, (6.14) holds for all
$t>0$
.
Define
\begin{align}e(T,n) & = {\mathrm{E}}\bigg[\int_{0}^{\beta_{n}(T)}\Big\{ g\Big(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n}\Big)\\ & -h\Big(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n}\Big)\Big\}\,{\mathrm{d}} s\bigg].\nonumber\end{align}
To finish the proof of Proposition 6.2 we need the following proposition.
Proposition 6.3. We have
$e(T,n)\rightarrow0$
as
$n\rightarrow\infty$
.
The next subsection is devoted to the proof of the above proposition. With it at hand, the proof of Proposition 6.2 is complete: by Lemma 6.6, ${\mathrm{E}}\big[F_{\beta_{n}(T),0}^{n}\big]-{\mathrm{E}}\big[F_{0,\beta_{n}(T)}^{n}\big]=e(T,n)$, which tends to 0 as $n\rightarrow\infty$ by Proposition 6.3.
6.2. Proof of Proposition 6.3
Fix
$t>0$
. For simplicity, denote
$f_{s}=H(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)$
. Apply the Taylor expansion to the exponentials inside the sums on the right-hand side of (6.9) to get
\begin{align*} & g(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)\\& \quad =\, f_{s}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\xi_{s}^{n}(x)p_{xy}^{n}\\ & \qquad \times\big[{-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\\ & \qquad +\frac{1}{2}\left({-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right)^{2}\\ & \qquad +G^{1,1}(\varphi,\psi,x,y)\big]\Bigg\} \\ & \qquad +f_{s}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^{n}\\ & \qquad \times\big[{-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\\ & \qquad +\frac{1}{2}\left({-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right)^{2}\\ & \qquad + G^{1,2}(\varphi,\psi,x,y)\big]\\ & \qquad +\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\left[\frac{1}{2}\sigma^{2}\left(\varphi(x)+\psi(x)+ {\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)^{2}+G^{2,1}(\varphi,\psi,x)\right]\\ & \qquad +\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\left[\frac{1}{2}\sigma^{2}\left(\varphi(x)+\psi(x)- {\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right)^{2}+G^{2,2}(\varphi,\psi,x)\right]\Bigg\} ,\\ & \qquad \forall\ s\geq0,\end{align*}
where we used our assumption on the branching mechanism:
For the error terms in the Taylor expansion we have the following bounds:
\begin{align} |G^{1,j}(\varphi,\psi,x,y)| &\leq {\mathrm{e}}^{\varphi(x)+\psi(x)}\left| -\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\right|^{3} \nonumber\\[5pt]&\leq C_{(6.16)}{\mathrm{e}}^{\varphi(x)+\psi(x)}\big( \varphi(y)^3+\psi(y)^3+\varphi(x)^3+\psi(x)^3\big), \quad j=1,2, \end{align}
\begin{align}|G^{2,1}(\varphi,\psi,x)| + |G^{2,2}(\varphi,\psi,x)| &\leq {\mathrm{e}}^{\varphi(x)+\psi(x)}\sum_{k\geq0}\nu_{k}|k-1|^3 \big( \left|\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right|^3\nonumber\\[5pt] & +\left|\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(x)-\psi(x)\right)\right|^3\big)\nonumber\\[5pt] &\leq C_{(6.17)}{\mathrm{e}}^{\varphi(x)+\psi(x)}\big(\varphi(x)^3+\psi(x)^3\big),\end{align}
where the positive constants
$C_{(6.16)}, C_{(6.17)}$
are independent of
$\varphi, \psi, x,y$
and in (6.17) we used the assumption
$\sum_{k\geq0}\nu_{k}k^{3}<\infty$
on the branching mechanism.
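Both (6.16) and (6.17) are instances of the integral form of the Taylor remainder: for any $w\in\mathbb{C}$,
\begin{equation*}\Big|{\mathrm{e}}^{w}-1-w-\frac{w^{2}}{2}\Big|=\Big|\mathop\int\limits_{0}^{1}\frac{(1-u)^{2}}{2}w^{3}{\mathrm{e}}^{uw}\,{\mathrm{d}} u\Big|\leq\frac{\left|w\right|^{3}}{6}\,{\mathrm{e}}^{\max\left(\operatorname{Re} w,0\right)},\end{equation*}
and in each of the terms above $\operatorname{Re} w\leq\varphi(x)+\psi(x)$, which is the source of the factor ${\mathrm{e}}^{\varphi(x)+\psi(x)}$.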
We use simple algebra to obtain
\begin{align*}g(\xi_{s}^{n},\eta_{s}^{n},\varphi,\psi)=\, & f_{s}\bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\xi_{s}^{n}(x)p_{xy}^{n}\\ & \times\big[{-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)-{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\\ & +2\left(\varphi(x)-\varphi(y)\right)\left(\psi(x)-\psi(y)\right)+{\mathrm{i}}\big((\varphi(x)-\varphi(y))^{2}-(\psi(x)-\psi(y))^{2}\big)\\ & +G^{1,1}(\varphi,\psi,x,y)\big]\bigg\} \\ & +f_{s}\bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}\eta_{s}^{n}(x)p_{xy}^{n}\\ & \times\big[{-}\varphi(y)-\psi(y)+\varphi(x)+\psi(x)+{\mathrm{i}}\left(\varphi(y)-\psi(y)-\varphi(x)+\psi(x)\right)\\ & +2\left(\varphi(x)-\varphi(y)\right)\left(\psi(x)-\psi(y)\right)-{\mathrm{i}}\big((\varphi(x)-\varphi(y))^{2}-(\psi(x)-\psi(y))^{2}\big)\\ & +G^{1,2}(\varphi,\psi,x,y)\big]\\ & +\gamma\sum_{x\in\Lambda_{n}}\xi_{s}^{n}(x)\eta_{s}^{n}(x)\big[4\sigma^{2}\varphi(x)\psi(x)+G^{2,1}(\varphi,\psi,x)\\ & + G^{2,2}(\varphi,\psi,x)\big]\bigg\} ,\quad \forall\ s\geq0.\end{align*}
Let us define
\begin{align}\tilde f_{T,s}^{n} & = H\big(\xi_{s}^{n},\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n}\big) \\[3pt]& = {\mathrm{e}}^{-\big\langle \xi_{s}^{n}+\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n}+\tilde{v}_{\beta_{n}(T)-s}^{n}\big\rangle -{\mathrm{i}}\big\langle \xi_{s}^{n}-\eta_{s}^{n},\tilde{u}_{\beta_{n}(T)-s}^{n}-\tilde{v}_{\beta_{n}(T)-s}^{n}\big\rangle },\quad 0\leq s\leq \beta_{n}(T).\nonumber\end{align}
Now by using the above and Lemmas 6.4 and 6.3 we get (recall that
$\tilde{\gamma}=\gamma\sigma^{2}$
and e(T, n) is defined in (6.15)):
\begin{align}e(T,n)&=e_{\xi,\textrm{RW}}(T,n)+e_{\eta,\textrm{RW}}(T,n)+e_{\mathrm{br}}(T,n)\\\nonumber&\,=\!:\, \sum_{j=1}^2 e_{\xi,\textrm{RW},\,j}(T,n)+\sum_{j=1}^2 e_{\eta,\textrm{RW},j}(T,n)+e_{\mathrm{br}}(T,n),\end{align}
where
\begin{eqnarray*}e_{\xi,\textrm{RW},1}(T,n) & \,=\, & {\mathrm{E}}\int_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{s}^{n}(x)\left(2\Big(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\Big)\right.\\ & & \times\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)\\ & & \left.+{\mathrm{i}}\left[\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}-\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}\right]\right)\Bigg\}\, {\mathrm{d}} s,\\ e_{\xi,\textrm{RW},2}(T,n) & \,=\, & {\mathrm{E}}\int_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{s}^{n}(x)G^{1,1}\Big(\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n},x,y\Big)\Bigg\}\, {\mathrm{d}} s,\\ e_{\eta,\textrm{RW},1}(T,n) & \,=\, & {\mathrm{E}}\int_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\eta_{s}^{n}(x)\left(2\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)\right.\\ & & \times\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)\\ & & \left.-{\mathrm{i}}\left[\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)-\tilde{u}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}-\left(\tilde{v}_{\beta_{n}(T)-s}^{n}(x)-\tilde{v}_{\beta_{n}(T)-s}^{n}(y)\right)^{2}\right]\right)\Bigg\} \, {\mathrm{d}} s,\\ e_{\eta,\textrm{RW},2}(T,n) & \,=\, & {\mathrm{E}}\int_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\Bigg\{ \kappa\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\eta_{s}^{n}(x)G^{1,2}\Big(\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n},x,y\Big)\Bigg\} \, {\mathrm{d}} s,\\ e_{\mathrm{br}}(T,n) & \,=\, & {\mathrm{E}}\int_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\sum_{x\in\Lambda_{n}}\gamma \xi_{s}^{n}(x)\eta_{s}^{n}(x)\Bigg(\sum_{j=1}^2 G^{2,j}(\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n},x) \Bigg)\, {\mathrm{d}} s.\end{eqnarray*}
Now we are going to show that indeed e(T, n) vanishes, as
$n\rightarrow\infty$
. We start with the following technical lemma that was proved in Lemma 2.1 in [Reference Cox, Greven and Shiga7].
Lemma 6.7. Denote by
$\left\{ p_{t}^{n}(x,y)\,:\, t\geq 0, x,y \in \Lambda_n \right\}$
the transition probabilities of the symmetric nearest-neighbor random walk on the domain
$\Lambda_{n}$
, and let
$\left\{ p_{t}(x,y)\,:\, t\geq 0, x,y \in\mathbb{Z}^{d} \right\}$
denote the corresponding transition probabilities on
$\mathbb{Z}^{d}$
. Let
$\left\{ g_{t}({\cdot})\right\} _{t\geq0}$
be the Green function of the symmetric nearest-neighbor random walk on
$\mathbb{Z}^{d}$
. Then the following hold.
(a) If
$t_{n}/n^{2}\rightarrow\infty$
as
$n\rightarrow\infty$
, then
\begin{equation*}\sup_{t\geq t_{n}}\sup_{x,y\in\Lambda_{n}}(2n)^{d}\big|p_{t}^{n}(x,y)-(2n)^{-d}\big|\rightarrow 0.\end{equation*}
(b) If
$d\geq3$
, and
$T(n)/\left|\Lambda_{n}\right|\rightarrow s\in(0,\infty)$
as
$n\rightarrow\infty$
, then
\begin{equation*}\lim_{n\rightarrow\infty}\mathop\int\limits_{0}^{T(n)}p_{2t}^{n}(x,y)\,{\mathrm{d}} t=\mathop\int\limits_{0}^{\infty}p_{2t}(x,y)\,{\mathrm{d}} t+s=\frac{1}{2}g_{\infty}(x-y)+s.\end{equation*}
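Heuristically, part (b) is obtained by splitting the integral at a time $t_{n}$ with $n^{2}\ll t_{n}\ll\left|\Lambda_{n}\right|$: before time $t_{n}$ the walk on $\Lambda_{n}$ does not yet feel the periodicity and behaves like the transient walk on $\mathbb{Z}^{d}$, while after time $t_{n}$ part (a) yields $p_{2t}^{n}(x,y)\approx\left|\Lambda_{n}\right|^{-1}$, so that
\begin{equation*}\mathop\int\limits_{0}^{T(n)}p_{2t}^{n}(x,y)\,{\mathrm{d}} t\approx\mathop\int\limits_{0}^{\infty}p_{2t}(x,y)\,{\mathrm{d}} t+\frac{T(n)}{\left|\Lambda_{n}\right|}\rightarrow\frac{1}{2}g_{\infty}(x-y)+s.\end{equation*}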
Next we state a lemma that gives an important bound on the moments of the processes
$u^n, v^n$
. This is where the condition (2.8) on
$\tilde{\gamma} = \gamma\sigma^{2}$
is used.
Lemma 6.8. Let
$d\geq 3$
and
$\tilde{\gamma}=\gamma\sigma^{2}<\frac{1}{\sqrt{3^{5}}(\frac12 g_{\infty}(0)+1)}$
. Let
$\left(u_{t}^{n},v_{t}^{n}\right)_{t\geq0}$
be a solution of (1.3), with
$Q^{n}$
being a Q-matrix of the nearest-neighbor random walk on
$\Lambda_{n}$
. Let
$\vartheta_1,\vartheta_2\geq 0.$
Assume that
$u_{0}^{n}(x)=\vartheta_1, v_{0}^{n}(x)=\vartheta_2$
for all
$x\in \Lambda_n$
. Then, for any
$T\leq 1$
,
Proof. The proof is technical; however, it follows easily from the proof of Lemma 2.2 in [Reference Cox, Greven and Shiga7]. Since
$u^n, v^n$
have constant initial conditions it is easy to see that
${\mathrm{E}}(u_{t}^{n}(x)^p)$
,
$ {\mathrm{E}}(v_{t}^{n}(x)^p)$
,
${\mathrm{E}}(u_{t}^{n}(x)^p v_t^n(x)^p)$
are constant functions in x for
$p>0$
. Thus, denote
$f^n(t)={\mathrm{E}}((u_{t}^{n}(0))^{4}),\ r^n(t)={\mathrm{E}}((v_{t}^{n}(0))^{4}),\ t\geq 0$
. Then, following the argument in the proof of Lemma 2.2 in [Reference Cox, Greven and Shiga7, pp. 175--176], one gets that there exists a constant
$C>0$
such that
\begin{align}\nonumber f^n(t) &\leq C \vartheta_1^4 + 3^5 \int_0^t p^n_{2s}(0)\,{\mathrm{d}} s \int_0^t p^n_{2s}(0)\tilde\gamma^2{\mathrm{E}}\big(\big(u_{t-s}^{n}(0)v_{t-s}^{n}(0)\big)^2\big)\,{\mathrm{d}} s\\[5pt] &\leq C \vartheta_1^4 + 3^5\int_0^t p^n_{2s}(0)\,{\mathrm{d}} s\int_0^t p^n_{2s}(0)\tilde\gamma^2\frac{1}{2}(f^n(t-s) + r^n(t-s))\,{\mathrm{d}} s,\end{align}
and, similarly,
Letting
$J^n(t)=\int_0^t p^n_{2s}(0)\,{\mathrm{d}} s,$
and
we have
From Lemma 6.7 (b) we get that
Recalling that
$\tilde\gamma<\frac{1}{\sqrt{3^{5}}(\frac12 g_{\infty}(0)+1)}$
we get that
Since
$\bar h^n(\beta_n(T))<\infty$
for each finite n, we are done.
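In outline, the mechanism is as follows: adding the bound for $f^{n}$ to its analogue for $r^{n}$ and taking suprema yields, with $\bar h^{n}(t)$ denoting $\sup_{s\leq t}\big(\,f^{n}(s)+r^{n}(s)\big)$, an inequality of the form
\begin{equation*}\bar h^{n}(t)\leq C\big(\vartheta_{1}^{4}+\vartheta_{2}^{4}\big)+3^{5}\tilde\gamma^{2}\big(J^{n}(t)\big)^{2}\,\bar h^{n}(t),\quad t\leq\beta_{n}(T).\end{equation*}
Since $J^{n}(\beta_{n}(T))\rightarrow\frac{1}{2}g_{\infty}(0)+T\leq\frac{1}{2}g_{\infty}(0)+1$ by Lemma 6.7(b), the assumption $3^{5}\tilde\gamma^{2}\big(\frac{1}{2}g_{\infty}(0)+1\big)^{2}<1$ lets the last term be absorbed into the left-hand side, giving a bound on $\bar h^{n}(\beta_{n}(T))$ that is uniform in $n$.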
From this, we derive the following corollary.
Corollary 6.1. For any
$x,y\in\Lambda_{n}$
,
Proof. By Lemmas 5.3 and 5.4 it is enough to show that
where
$\left\{ g_{t}^{n}(\cdot,\cdot)\right\} _{t\geq0}$
is the Green function of the symmetric nearest-neighbor random walk on
$\Lambda_{n}$
. For any
$t\geq0$
,
$x,y\in\Lambda_{n}$
, we have
By Lemma 6.7 (b)
$\sup_{n}g_{\beta_{n}(T)}^{n}(0,0)$
is finite, and we are done.
Since, for
$x,y\in \Lambda_n$
,
$p_{t}^{n}(x,y)$
,
$g_{t}^{n}(x,y)$
are functions of
$x-y$
, with some abuse of notation we will sometimes use the notation
$p_{t}^{n}(x-y)$
,
$g_{t}^{n}(x-y)$
for
$p_{t}^{n}(x,y)$
,
$g_{t}^{n}(x,y)$
, respectively.
In what follows, we always assume that
$\tilde{\gamma}=\gamma\sigma^{2}<\frac{1}{\sqrt{3^{5}}(\frac12 g_{\infty}(0)+1)}$
. With Lemma 6.8 at hand we are ready to treat the terms
$e_{\mathrm{br}}(T,n)$
,
$e_{\xi,\textrm{RW},2}(T,n)$
, and
$ e_{\eta,\textrm{RW},2}(T,n)$
.
Lemma 6.9. We have
Proof. We will show only (6.21), since the proof of (6.22) follows along similar lines:
\begin{align}\nonumber|e_{\mathrm{br}}(T,n)| & = \Bigg|{\mathrm{E}}\mathop\int\limits_{0}^{\beta_{n}(T)}\tilde f_{T,s}^{n}\sum_{x\in\Lambda_{n}}\gamma \xi_{s}^{n}(x)\eta_{s}^{n}(x)\Bigg(\sum_{j=1}^2 G^{2,j}(\tilde{u}_{\beta_{n}(T)-s}^{n},\tilde{v}_{\beta_{n}(T)-s}^{n},x) \Bigg)\,{\mathrm{d}} s\Bigg|\\\nonumber&\leq C_{(6.17)} {\mathrm{E}}\mathop\int\limits_{0}^{\beta_{n}(T)}|\tilde f_{T,s}^{n}|\sum_{x\in\Lambda_{n}}\gamma \xi_{s}^{n}(x)\eta_{s}^{n}(x){\mathrm{e}}^{\tilde{u}_{\beta_{n}(T)-s}^{n}(x)+\tilde{v}_{\beta_{n}(T)-s}^{n}(x)}\\\nonumber&\times\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)^3+\tilde{v}_{\beta_{n}(T)-s}^{n}(x)^3\right)\,{\mathrm{d}} s\\&\leq C_{(6.17)}\gamma {\mathrm{E}}\mathop\int\limits_{0}^{\beta_{n}(T)}\sum_{x\in\Lambda_{n}} \xi_{s}^{n}(x)\eta_{s}^{n}(x)\left(\tilde{u}_{\beta_{n}(T)-s}^{n}(x)^3+\tilde{v}_{\beta_{n}(T)-s}^{n}(x)^3\right)\,{\mathrm{d}} s,\end{align}
where the first inequality follows by (6.17) and the second inequality follows by the trivial inequality
for all
$x\in \Lambda_n$
(recall the definition of
$\tilde f_{T,s}^{n}$
in (6.18)).
Consider the process
$(\hat u^{n},\hat v^{n})$
that solves equations (1.3) with initial conditions
Then for any
$s>0$
,
Therefore, by the above, (6.23), and Fubini’s theorem, we get
\begin{eqnarray*}|e_{\mathrm{br}}(T,n)|&\,\leq\,&|\Lambda_{n}|^{-3}C_{(6.17)}\gamma \mathop\int\limits_{0}^{\beta_{n}(T)} {\mathrm{E}}\Bigg[\sum_{x\in\Lambda_{n}} \xi_{s}^{n}(x)\eta_{s}^{n}(x)\left(\hat{u}_{\beta_{n}(T)-s}^{n}(x)^3+\hat{v}_{\beta_{n}(T)-s}^{n}(x)^3\right)\Bigg]\,{\mathrm{d}} s\\ & \leq\, & C_{(6.17)}\gamma T |\Lambda_{n}|^{-1}\theta_{1}\theta_{2}\sup_{s\leq\beta_{n}(T)}\frac{1}{\left|\Lambda_{n}\right|}{\mathrm{E}}\Bigg[\sum_{x\in\Lambda_{n}}\Big(\Big(\hat u_{\beta_{n}(T)-s}^{n}(x)\Big)^{3}+\Big(\hat v_{\beta_{n}(T)-s}^{n}(x)\Big)^{3}\Big)\Bigg]\\ & \leq\, & C_{(6.17)}\gamma T |\Lambda_{n}|^{-1}\theta_{1}\theta_{2}\sup_{x\in\Lambda_{n}}\sup_{s\leq\beta_{n}(T)}{\mathrm{E}}\Big[\Big(\left(\hat u_{\beta_{n}(T)-s}^{n}(x)\right)^{3}+\Big(\hat v_{\beta_{n}(T)-s}^{n}(x)\Big)^{3}\Big)\Big],\end{eqnarray*}
where the second inequality follows by Corollary 5.2, and the third inequality is trivial. With this, to obtain (6.21), it is enough to show that
However, this follows from Lemma 6.8 and Jensen’s inequality.
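Indeed, for every $x\in\Lambda_{n}$ and $s\leq\beta_{n}(T)$, Jensen's inequality gives
\begin{equation*}{\mathrm{E}}\Big[\big(\hat u_{s}^{n}(x)\big)^{3}\Big]\leq\Big({\mathrm{E}}\Big[\big(\hat u_{s}^{n}(x)\big)^{4}\Big]\Big)^{3/4},\end{equation*}
and the right-hand side (together with the analogous bound for $\hat v^{n}$) is bounded uniformly in $x$, $s$, and $n$ by Lemma 6.8.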
Before we begin analyzing the limiting behavior of
$e_{\xi,\textrm{RW},1}(T,n)$
and
$e_{\eta,\textrm{RW},1}(T,n)$
, we require a technical lemma whose proof is simple and thus is omitted.
Lemma 6.10. For any
$n\in\mathbb{N}$
,
$r>0$
,
Now we are ready to derive the limiting behavior of
$e_{\xi,\textrm{RW},1}(T,n)$
and
$e_{\eta,\textrm{RW},1}(T,n)$
.
Lemma 6.11. We have
Proof. We will take care of
$e_{\xi,\textrm{RW},1}(T,n)$
; the proof for
$e_{\eta,\textrm{RW},1}(T,n)$
is the same:
\begin{multline}\left|e_{\xi,\textrm{RW},1}(T,n)\right|\\\begin{aligned}\quad \quad \quad \leq\, & C\kappa{\mathrm{E}}\Bigg(\mathop\int\limits_{0}^{\beta_{n}(T)}\left| \tilde f_{T,s}^{n}\right|\Bigg\{ \Bigg|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\Bigg|\\ & +\Bigg|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)^{2}-\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)^{2}\right)\Bigg|\Bigg\}\,{\mathrm{d}} s \Bigg)\\ \quad \quad \quad \leq & C{\mathrm{E}}\Bigg(\mathop\int\limits_{0}^{\beta_{n}(T)}J_{1}^{n}(s)\,{\mathrm{d}} s+\mathop\int\limits_{0}^{\beta_{n}(T)}J_{2}^{n}(s)\,{\mathrm{d}} s\Bigg),\end{aligned}\end{multline}
where
\begin{eqnarray*}J_{1}^{n}(s) & \,=\, & \Bigg|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\Bigg|,\\J_{2}^{n}(s) & \,=\, & \Bigg|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)^{2}-\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)^{2}\right)\Bigg|.\end{eqnarray*}
Let us bound the expected value of
$J_{1}^{n}$
:
\begin{eqnarray*}{\mathrm{E}}(J_{1}^{n}(s)) & \,=\, & {\mathrm{E}}\Bigg|\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\Bigg|\\ & \leq\, & \sqrt{{\mathrm{E}}\Bigg[\Bigg(\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\Bigg)^{2}\Bigg]}.\end{eqnarray*}
Now we will recall the following representation from [Reference Dawson and Perkins16, Theorem 2.2] with
$\varphi=\delta_{x}$
:
\begin{equation*}\begin{cases}\tilde{u}_{t}^{n}(x)=P_{t}^{n}\tilde{u}_{0}^{n}(x)+\sum\limits_{z\in\Lambda_{n}}\int_{0}^{t}p_{t-s}^{n}(x-z)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^{n}(z)}\,{\mathrm{d}} B_{s}(z), & x\in\Lambda_{n},\\[12pt]\tilde{v}_{t}^{n}(x)=P_{t}^{n}\tilde{v}_{0}^{n}(x)+\sum\limits_{z\in\Lambda_{n}}\int_{0}^{t}p_{t-s}^{n}(x-z)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^{n}(z)}\,{\mathrm{d}} W_{s}(z), & x\in\Lambda_{n},\end{cases}\end{equation*}
to get
\begin{align}N_{r}^{t}(x,y) & \,:\!=\, P_{t-r}^{n}\tilde{u}_{r}^{n}(x)-P_{t-r}^{n}\tilde{u}_{r}^{n}(y)\\[5pt] & = P_{t}^{n}\tilde{u}_{0}^{n}(x)-P_{t}^{n}\tilde{u}_{0}^{n}(y)\nonumber \\[5pt] & \quad +\sum_{z\in\Lambda_{n}}\mathop\int\limits_{0}^{r}\left(p_{t-s}^{n}(x-z)-p_{t-s}^{n}(y-z)\right)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^{n}(z)}\,{\mathrm{d}} B_{s}(z),\quad r\leq t,\nonumber\end{align}
where the last equality follows from the Chapman–Kolmogorov formula. Similarly, for
$r\leq t$
we get
\begin{align}M_{r}^{t}(x,y) \,:\!=\, & P_{t-r}^{n}\tilde{v}_{r}^{n}(x)-P_{t-r}^{n}\tilde{v}_{r}^{n}(y)\\[5pt] = & P_{t}^{n}\tilde{v}_{0}^{n}(x)-P_{t}^{n}\tilde{v}_{0}^{n}(y)\nonumber \\[5pt] & +\sum_{z\in\Lambda_{n}}\mathop\int\limits_{0}^{r}\left(p_{t-s}^{n}(x-z)-p_{t-s}^{n}(y-z)\right)\sqrt{\tilde{\gamma}\tilde{u}_{s}^{n}(z)\tilde{v}_{s}^{n}(z)}\,{\mathrm{d}} W_{s}(z),\nonumber\end{align}
where
$\left\{ B_{\cdot}(z)\right\} _{z\in\Lambda_{n}}$
,
$\left\{ W_{\cdot}(z)\right\} _{z\in\Lambda_{n}}$
are orthogonal Brownian motions.
Note that
$\left\{ N_{r}^{t}(x,y)\right\} _{0\leq r\leq t}$
and
$\left\{ M_{r}^{t}(x,y)\right\} _{0\leq r\leq t}$
are martingales; in addition
\begin{equation}\begin{array}{c}\tilde{u}_{t}^{n}(x)-\tilde{u}_{t}^{n}(y)=\left.P_{t-r}^{n}\tilde{u}_{r}^{n}(x)-P_{t-r}^{n}\tilde{u}_{r}^{n}(y)\right|_{r=t}=\left.N_{r}^{t}(x,y)\right|_{r=t},\\[5pt]\tilde{v}_{t}^{n}(x)-\tilde{v}_{t}^{n}(y)=\left.P_{t-r}^{n}\tilde{v}_{r}^{n}(x)-P_{t-r}^{n}\tilde{v}_{r}^{n}(y)\right|_{r=t}=\left.M_{r}^{t}(x,y)\right|_{r=t}.\end{array}\end{equation}
Then by orthogonality of the Brownian motions
$B_{\cdot}(z)$
and
$W_{\cdot}(z)$
for all
${{z\in\Lambda_{n}}}$
, and the Itô formula we get
\begin{multline*}\sum_{x,y\in\Lambda_{n}}p_{x,y}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)M_{s}^{s}(x,y)N_{s}^{s}(x,y)\\\begin{aligned}= & \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left(\tilde{u}_{s}^{n}(x)-\tilde{u}_{s}^{n}(y)\right)\left(\tilde{v}_{s}^{n}(x)-\tilde{v}_{s}^{n}(y)\right)\\= & \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\sum_{z\in\Lambda_{n}}\mathop\int\limits_{0}^{s}\left(P_{s-r}^{n}\tilde{u}_{r}^{n}(x)-P_{s-r}^{n}\tilde{u}_{r}^{n}(y)\right)\\ & \times\left(p_{s-r}^{n}(x-z)-p_{s-r}^{n}(y-z)\right)\sqrt{\tilde\gamma\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)}\,{\mathrm{d}} W_{r}(z)\\ & +\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\sum_{z\in\Lambda_{n}}\mathop\int\limits_{0}^{s}\left(P_{s-r}^{n}\tilde{v}_{r}^{n}(x)-P_{s-r}^{n}\tilde{v}_{r}^{n}(y)\right)\\ & \times\left(p_{s-r}^{n}(x-z)-p_{s-r}^{n}(y-z)\right)\sqrt{\tilde\gamma\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)}\,{\mathrm{d}} B_{r}(z)\\\,=\!:\, & \sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\sum_{z\in\Lambda_{n}}\tilde{I}_{1,1}^{n}(s,x,y,z)\\ & +\sum_{x,y\in\Lambda_{n}}p_{xy}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\sum_{z\in\Lambda_{n}}\tilde{I}_{1,2}^{n}(s,x,y,z)\\\,=\!:\,&I_{1,1}^{n}(s)+I_{1,2}^{n}(s).\end{aligned}\end{multline*}
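The expansion into stochastic integrals above is the product rule for continuous square-integrable martingales: if $M_{0}=N_{0}=0$ and $\left\langle M,N\right\rangle \equiv0$, then
\begin{equation*}M_{s}N_{s}=\mathop\int\limits_{0}^{s}M_{r}\,{\mathrm{d}} N_{r}+\mathop\int\limits_{0}^{s}N_{r}\,{\mathrm{d}} M_{r}.\end{equation*}
It applies here because $M_{0}^{s}(x,y)=N_{0}^{s}(x,y)=0$ (the initial conditions $\tilde{u}_{0}^{n},\tilde{v}_{0}^{n}$ are constant) and $\left\langle M_{\cdot}^{s}(x,y),N_{\cdot}^{s}(x,y)\right\rangle \equiv0$ by the orthogonality of $\left\{ B_{\cdot}(z)\right\} $ and $\left\{ W_{\cdot}(z)\right\} $.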
Note that
Thus, let us bound
${\mathrm{E}}[(I_{1,1}^{n}(s))^{2}]$
: for all
$ s\leq\beta_{n}(T)$
, we have
\begin{align}{\mathrm{E}}\big[\big(I_{1,1}^{n}(s)\big)^{2}\big] = & \sum_{x_{1},y_{1}\in \Lambda_n}\sum_{x_{2},y_{2}\in \Lambda_n}{\mathrm{E}}\big(\xi_{\beta_{n}(T)-s}^{n}(x_{1})\xi_{\beta_{n}(T)-s}^{n}(x_{2})\big)\\\nonumber & \times p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}{\mathrm{E}}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right].\nonumber\end{align}
Note that for
$z_{1}\neq z_{2}$
,
$\tilde{I}_{1,1}^{n}(r,x_{1},y_{1},z_{1})$
and
$\tilde{I}_{1,1}^{n}(r,x_{2},y_{2},z_{2})$
are orthogonal square integrable martingales for
$r\leq s$
and, hence,
\begin{align*} & \sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}{\mathrm{E}}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right]\\ & \quad = \sum_{z\in\Lambda_{n}}{\mathrm{E}}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z)\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z)\right]\\ & \quad = \tilde\gamma\sum_{z\in\Lambda_{n}}{\mathrm{E}}\Bigg[\mathop\int\limits_{0}^{s}\left(P_{s-r}^{n}\tilde{u}_{r}^{n}(x_{1})-P_{s-r}^{n}\tilde{u}_{r}^{n}(y_{1})\right)\left(p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1}-z)\right)\\ & \qquad \times\left(P_{s-r}^{n}\tilde{u}_{r}^{n}(x_{2})-P_{s-r}^{n}\tilde{u}_{r}^{n}(y_{2})\right)\left(p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z)\right)\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)\,{\mathrm{d}} r\Bigg]\\ & \quad = \tilde\gamma\sum_{z\in\Lambda_{n}}{\mathrm{E}}\Bigg[\mathop\int\limits_{0}^{s}\sum_{z_{1}\in\Lambda_{n}}\left(p_{s-r}^{n}(x_{1}-z_{1})-p_{s-r}^{n}(y_{1}-z_{1})\right)\tilde{u}_{r}^{n}(z_{1})\\[3pt] & \qquad \times\sum_{z_{2}\in\Lambda_{n}}\left(p_{s-r}^{n}(x_{2}-z_{2})-p_{s-r}^{n}(y_{2}-z_{2})\right)\tilde{u}_{r}^{n}(z_{2})\\[3pt] & \qquad \times\left(p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1}-z)\right)\left(p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z)\right)\\[3pt] & \qquad \times \tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)\,{\mathrm{d}} r\Bigg]\\[3pt] & \quad\leq \tilde\gamma\sum_{z\in \Lambda_n}\mathop\int\limits_{0}^{s}\sum_{z_{1}\in \Lambda_n}\sum_{z_{2}\in \Lambda_n} \hat J_{1,1}(\vec{x},\vec{y},\vec{z},s-r){\mathrm{E}}\left[\tilde{u}_{r}^{n}(z_{1})\tilde{u}_{r}^{n}(z_{2})\tilde{u}_{r}^{n}(z)\tilde{v}_{r}^{n}(z)\right]\,{\mathrm{d}} r,\end{align*}
where
$\vec{x}=(x_{1},x_{2}),\vec{y}=(y_{1},y_{2}),\vec{z}=(z_{1},z_{2},z)$
and
By Lemma 6.8 and the assumption on the initial conditions of
$(\tilde{u},\tilde{v})$
,
is bounded by
$C\left|\Lambda_{n}\right|^{-4}$
uniformly in
$z,z_{1},z_{2}\in\Lambda_{n}$
,
$r\leq\beta_{n}(T)$
and
$n\geq 1$
. Therefore,
\begin{align}\sum_{z_{1}\in\Lambda_{n}}&\sum_{z_{2}\in\Lambda_{n}}{\mathrm{E}}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right]\nonumber\\[3pt]&\leq C\left|\Lambda_{n}\right|^{-4}\sum_{z\in \Lambda_n}\mathop\int\limits_{0}^{s}\sum_{z_{1}\in \Lambda_n}\sum_{z_{2}\in \Lambda_n} \hat J_{1,1}(\vec{x},\vec{y},\vec{z},s-r)\,{\mathrm{d}} r.\end{align}
Denote
Now we decompose the term on the right-hand side of (6.30) into two terms
\begin{equation}C\left|\Lambda_{n}\right|^{-4}\mathop\int\limits_{0}^{\left(s-n^{\delta}\right)_{+}}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)\,{\mathrm{d}} r+C\left|\Lambda_{n}\right|^{-4}\mathop\int\limits_{\left(s-n^{\delta}\right)_{+}}^{s}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)\,{\mathrm{d}} r\end{equation}
for some
$\delta\in(2,d)$
.
By Lemma 6.7 (a) we get
for any
$\delta>2$
. This implies that, for any
$\delta>2$
, there exists a sequence
$a_{n}=a_{n}(\delta)$
, such that
where
$a_{n}\rightarrow0$
, as
$n\rightarrow\infty$
.
By (6.32) we immediately get
for
$s>n^{\delta}$
and
$r\leq s-n^{\delta}$
. Hence for
$s\leq \beta_n(T)$
, we get
\begin{align}\left|\Lambda_{n}\right|^{-4}\mathop\int\limits_{0}^{(s-n^{\delta})_{+}}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)\,{\mathrm{d}} r & \leq\left|\Lambda_{n}\right|^{-4} a_{n}^{4}\left|\Lambda_{n}\right|^{-1}\mathop\int\limits_{0}^{(s-n^{\delta})_{+}}1\,{\mathrm{d}} r \leq C\left|\Lambda_{n}\right|^{-4}a_{n}^{4},\end{align}
where the last inequality follows since
$s\leq \beta_n(T)=\left|\Lambda_{n}\right|T$
.
Let us treat the second term in (6.31). Note that
In addition,
\begin{multline*}\sum_{z\in\Lambda_{n}}\left|p_{s-r}^{n}(x_{1}-z)-p_{s-r}^{n}(y_{1}-z)\right|\left|p_{s-r}^{n}(x_{2}-z)-p_{s-r}^{n}(y_{2}-z)\right|\\\begin{aligned}\quad & \leq\sum_{z\in\Lambda_{n}} \left(p_{s-r}^{n}(x_{1}-z)+p_{s-r}^{n}(y_{1}-z)\right)\left(p_{s-r}^{n}(x_{2}-z)+p_{s-r}^{n}(y_{2}-z)\right)\\ & =p_{2(s-r)}^{n}(x_{1}-x_{2})+p_{2(s-r)}^{n}(y_{1}-y_{2})+p_{2(s-r)}^{n}(x_{1}-y_{2})+p_{2(s-r)}^{n}(y_{1}-x_{2}),\end{aligned}\end{multline*}
where the last equality follows from the Chapman–Kolmogorov formula. Then
\begin{multline}C\left|\Lambda_{n}\right|^{-4}\mathop\int\limits_{(s-n^{\delta})_+}^{s}\tilde{J}_{1,1}(\vec{x},\vec{y},s-r)\,{\mathrm{d}} r\\\quad\leq C\left|\Lambda_{n}\right|^{-4}\mathop\int\limits_{0}^{n^{\delta}}\left(p_{2r}^{n}(x_{1}-x_{2})+p_{2r}^{n}(y_{1}-y_{2}) +p_{2r}^{n}(x_{1}-y_{2})+p_{2r}^{n}(y_{1}-x_{2})\right)\,{\mathrm{d}} r.\end{multline}
By (6.30), (6.31), (6.34), and (6.35) we get
\begin{multline*}\sum_{z_{1}\in\Lambda_{n}}\sum_{z_{2}\in\Lambda_{n}}{\mathrm{E}}\left[\tilde{I}_{1,1}^{n}(s,x_{1},y_{1},z_{1})\tilde{I}_{1,1}^{n}(s,x_{2},y_{2},z_{2})\right]\\\begin{aligned}\quad\,\leq & C\left|\Lambda_{n}\right|^{-4}\left(a_{n}^{4}+\int_{0}^{n^{\delta}}\left(p_{2r}^{n}(x_{1}-x_{2})+p_{2r}^{n}(y_{1}-y_{2}) +p_{2r}^{n}(x_{1}-y_{2})+p_{2r}^{n}(y_{1}-x_{2})\right)\,{\mathrm{d}} r\right).\end{aligned}\end{multline*}
Use the above inequality, (6.29), and also Corollary 6.1 and Lemma 6.10 to get
\begin{align}{\mathrm{E}}\big[\big(I_{1,1}^{n}(s)\big)^{2}\big] & \leq C\bigg(\sum_{x_{1},y_{1}\in \Lambda_n}\sum_{x_{2},y_{2}\in \Lambda_n}p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\left|\Lambda_{n}\right|^{-4}a_{n}^{4}+\left|\Lambda_{n}\right|^{-4}n^{\delta}\left|\Lambda_{n}\right|\bigg)\nonumber \\ & \leq C\big(\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{n}\right|^{-3}n^{\delta}\big).\end{align}
In the same way, we handle
$I_{1,2}^{n}(s)$
and get
\begin{align}{\mathrm{E}}\big[\big(I_{1,2}^{n}(s)\big)^{2}\big] & \leq C\bigg(\sum_{x_{1},y_{1}\in \Lambda_n}\sum_{x_{2},y_{2}\in \Lambda_n}p_{x_{1},y_{1}}^{n}p_{x_{2},y_{2}}^{n}\left|\Lambda_{n}\right|^{-4}a_{n}^{4}+\left|\Lambda_{n}\right|^{-4}n^{\delta}\left|\Lambda_{n}\right|\bigg)\nonumber \\ & \leq C\big(\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{n}\right|^{-3}n^{\delta}\big).\end{align}
By (6.36), (6.37), and (6.28), we have
\begin{eqnarray*}{\mathrm{E}}\left[J_{1}^{n}(s)\right] & \leq & \sqrt{{\mathrm{E}}\Big[\Big(I_{1,1}^{n}(s)\Big)^{2}+\Big(I_{1,2}^{n}(s)\Big)^{2}\Big]}\\ & \leq & C\sqrt{\left|\Lambda_{n}\right|^{-2}a_{n}^{4}+\left|\Lambda_{n}\right|^{-3}n^{\delta}}\\ & \leq & C\left|\Lambda_{n}\right|^{-1}a_{n}^{2}+C\left|\Lambda_{n}\right|^{-3/2}n^{\delta/2}.\end{eqnarray*}
Thus,
\begin{align}\int_{0}^{\beta_{n}(T)}{\mathrm{E}}\left(J_{1}^{n}(s)\right)\,{\mathrm{d}} s & \leq C\int_{0}^{\beta_{n}(T)}\left(\left|\Lambda_{n}\right|^{-1}a_{n}^{2}+\left|\Lambda_{n}\right|^{-3/2}n^{\delta/2}\right)\,{\mathrm{d}} s\\ & \leq C\left|\Lambda_{n}\right|\left(\left|\Lambda_{n}\right|^{-1}a_{n}^{2}+\left|\Lambda_{n}\right|^{-3/2}n^{\delta/2}\right)\nonumber \\ & \leq Ca_{n}^{2}+C\left(\frac{n^{\delta}}{\left|\Lambda_{n}\right|}\right)^{1/2}\rightarrow0,\quad\mathrm{as}\ n\rightarrow\infty,\nonumber\end{align}
where the last convergence holds since
$\delta < d$
and
$\left|\Lambda_{n}\right|=(2n+1)^{d}$
.
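Indeed, since $\delta<d$ (recall that $d\geq3$, so the interval $(2,d)$ is non-empty),
\begin{equation*}\left(\frac{n^{\delta}}{\left|\Lambda_{n}\right|}\right)^{1/2}=\left(\frac{n^{\delta}}{(2n+1)^{d}}\right)^{1/2}\leq n^{(\delta-d)/2}\rightarrow0,\quad\mathrm{as}\ n\rightarrow\infty.\end{equation*}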
Now we are ready to treat
$J_{2}^{n}(s)$
in a similar way.
By the Itô formula, noting that $M_{0}^{s}(x,y)=N_{0}^{s}(x,y)=0$ (because $\tilde{u}_{0}^{n}$ and $\tilde{v}_{0}^{n}$ are constant),
\begin{equation*}\left(M_{s}^{s}(x,y)\right)^{2}=2\mathop\int\limits_{0}^{s}M_{r}^{s}(x,y)\,{\mathrm{d}} M_{r}^{s}(x,y)+\left\langle M_{\cdot}^{s}(x,y)\right\rangle _{s}\end{equation*}
and
\begin{equation*}\left(N_{s}^{s}(x,y)\right)^{2}=2\mathop\int\limits_{0}^{s}N_{r}^{s}(x,y)\,{\mathrm{d}} N_{r}^{s}(x,y)+\left\langle N_{\cdot}^{s}(x,y)\right\rangle _{s}.\end{equation*}
Note that
$\left\langle M_{\cdot}^{t}(x,y)\right\rangle _{t}=\left\langle N_{\cdot}^{t}(x,y)\right\rangle _{t}$
, and recall (6.27); therefore
\begin{equation*}J_{2}^{n}(s)=\Bigg|\sum_{x,y\in\Lambda_{n}}2p_{x,y}^{n}\xi_{\beta_{n}(T)-s}^{n}(x)\left[\mathop\int\limits_{0}^{s}N_{r}^{s}(x,y)\,{\mathrm{d}} N_{r}^{s}(x,y)-\mathop\int\limits_{0}^{s}M_{r}^{s}(x,y)\,{\mathrm{d}} M_{r}^{s}(x,y)\right]\Bigg|.\end{equation*}
If we follow the steps of computations for
$J_{1}^{n}(s)$
, we get that
\begin{equation*}\lim_{n\rightarrow\infty} {\mathrm{E}}\left( \mathop\int\limits_{0}^{\beta_{n}(T)} J_{2}^{n}(s)\,{\mathrm{d}} s\right)=0.\end{equation*}
Proof of Proposition 6.3. Proposition 6.3 follows immediately from (6.19) and Lemmas 6.9 and 6.11.
Acknowledgments
LM is supported in part by ISF grant Nos. 1704/18 and 1985/22.
Funding information
There are no funding bodies to thank relating to the creation of this article.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.

