
Time-convergent random matrices from mean-field pinned interacting eigenvalues

Published online by Cambridge University Press:  18 November 2022

Levent Ali Mengütürk*
Affiliation:
University College London
*
*Postal address: Department of Mathematics, University College London, London WC1E 6BT, United Kingdom. Email address: ucaheng@ucl.ac.uk

Abstract

We study a multivariate system over a finite lifespan represented by a Hermitian-valued random matrix process whose eigenvalues (i) interact in a mean-field way and (ii) converge to their weighted ensemble average at their terminal time. We prove that such a system is guaranteed to converge in time to the identity matrix that is scaled by a Gaussian random variable whose variance is inversely proportional to the dimension of the matrix. As the size of the system grows asymptotically, the eigenvalues tend to mutually independent diffusions that converge to zero at their terminal time, a Brownian bridge being the archetypal example. Unlike commonly studied random matrices that have non-colliding eigenvalues, the proposed eigenvalues of the given system here may collide. We provide the dynamics of the eigenvalue gap matrix, which is a random skew-symmetric matrix that converges in time to the $\textbf{0}$ matrix. Our framework can be applied in producing mean-field interacting counterparts of stochastic quantum reduction models for which the convergence points are determined with respect to the average state of the entire composite system.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Starting with the seminal work of Wigner [53] for analysing level spacing distributions of nuclei in nuclear physics, random matrix theory has proven to be a fruitful research avenue, with deep mathematical results that have found many applications in numerous fields, including superconductors ([8, 11, 27]), quantum chromodynamics ([7, 51, 54]), quantum chaos ([14, 28, 49]), RNA folding ([12, 46, 52]), neural networks ([24, 45, 48]), number theory ([30, 35, 44]), portfolio optimization ([36, 37, 40]), and many others. In random matrix theory, one often starts with an $n\times n$ random matrix for some $n\in\mathbb{N}_+$ and moves on to studying the statistical behaviour of its eigenvalues. Such a program has given us an important family of non-colliding interacting stochastic processes (that model the dynamics of the eigenvalues) through Dyson's Brownian motion and its generalizations ([4, 20, 22, 23, 39]), which are linked to harmonic Doob transforms in Weyl chambers ([26, 33]). As part of the literature, we can also see non-intersecting paths over finite-time horizons, whereby all the eigenvalues are conditioned to converge to a fixed value at some fixed future point in time; see for example Airy processes ([1–3, 32, 47]), Pearcey processes ([13, 15, 16, 50]), and the temporally inhomogeneous non-colliding diffusions with their path-configuration topologies ([34]).

In this paper, we are also interested in finite-time systems with time-convergent behaviour, but whose eigenvalues interact in a mean-field way rather than in a non-colliding manner as above. We additionally ask the corresponding matrices to converge to a random variable governed by the distribution of all the interacting eigenvalues at a fixed future time, instead of a set of deterministic points as above. Accordingly, we aim to construct random matrices with interacting eigenvalues to represent multivariate systems for which the terminal probability distribution is determined by the weighted ensemble average of the eigenvalues of the system at that future point in time. We do this by starting from the eigenvalue dynamics and constructing a family of mean-field eigenvalues in the spirit of the pinned interacting particles of [42], while accounting for situations where the dominance of an eigenvalue in determining the average state of the system can be non-homogeneous. The framework also enables us to study the space asymptotics of the system as $n\rightarrow\infty$, when Kolmogorov's strong law property holds. In fact, we shall prove that the iterated limits with respect to the size of the system n and the time evolution t are commutative—i.e. the limiting behaviours of our random matrices are consistent and exchangeable across space and time.

Our motivation is to produce an alternative framework within quantum measurement theory for addressing the problem of consistent collapse dynamics of wave functions. Consequently, our work may lend itself as a mean-field counterpart to finite-time stochastic quantum reduction models—see [5, 6, 17, 18, 25, 31, 41, 43]—whereby the collapse of the energy-based eigenstates is now governed by the average state of the full composite system. In this paper, we shall establish the mathematical groundwork, leaving the detailed study of this application for separate work.

For the rest of this paper, we fix a finite time horizon $\mathbb{T}=[0,T]$ for some $T<\infty$ and define $\mathbb{T}_{-}=[0,T)$ . We represent the space of $n\times n$ Hermitian matrices by $\mathbb{H}^n$ and the group of $n\times n$ unitary matrices by $\mathbb{U}^n$ for $n\geq 2$ . We reserve bold capital letters to stand for matrix-valued processes, where $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}\in(\mathbb{H}^n\times \mathbb{T})$ and $\{\boldsymbol{{U}}_t(n)\}_{t\in\mathbb{T}}\in(\mathbb{U}^n\times \mathbb{T})$ . Using the spectral decomposition theorem, we can unitarily diagonalize every element of $\mathbb{H}^n$ by

\begin{align*}\boldsymbol{{H}}_t(n) = \boldsymbol{{U}}_t(n) \boldsymbol{\Lambda}_t(n) \boldsymbol{{U}}^*_t(n)\quad \text{$\forall t\in\mathbb{T}$}, \end{align*}

where $\boldsymbol{\Lambda}_t(n)=\text{diag}\{\lambda_t^{(1,n)}, \ldots, \lambda_t^{(n,n)}\}$ is a diagonal matrix of eigenvalues for which the initial state is $\boldsymbol{\Lambda}_0(n)$ . We denote by $\{A_t^{(n)}\}_{t\in\mathbb{T}}$ the process that encodes the weighted ensemble average of the eigenvalues via the following:

\begin{align*}A_t^{(n)} = \frac{1}{n}\sum_{i=1}^n \beta^{(i,n)} \lambda_t^{(i,n)} \quad \text{$\forall t\in\mathbb{T}$},\end{align*}

with $|\beta^{(i,n)}| <\infty$ . We choose the coefficient vector $\boldsymbol{\beta}^{(n)} =[\beta^{(1,n)}, \ldots,\beta^{(n,n)}]^{\top}$ normalized so that

(1) \begin{align}\frac{1}{n}\sum_{i=1}^n \beta^{(i,n)} = 1.\end{align}

If each $\beta^{(i,n)}\geq 0$, then $A_t^{(n)}$ is a convex combination of the entries of the eigenvalue vector

\begin{align*}\boldsymbol{\lambda}^{(n)}_t = [\lambda^{(1,n)}_t,\ldots,\lambda^{(n,n)}_t]^\top, \end{align*}

for every $t\in\mathbb{T}$ . If each $\beta^{(i,n)}=1$ , then $\{A_t^{(n)}\}_{t\in\mathbb{T}}$ is the standard ensemble average process of the vector-valued process $\{\boldsymbol{\lambda}^{(n)}_t\}_{t\in\mathbb{T}}$ .

For our stochastic framework, $(\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t \leq \infty},\mathbb{P})$ is our probability space, where we work with mutually independent standard $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motions $\{W^{(i)}_t\}_{t\in\mathbb{T}}$ for all $i\in\mathcal{I}$ , where $\mathcal{I}=\{1,\ldots,n\}$ . In addition, we let $\{Z^{(i)}\;:\;i\in\mathcal{I}\}$ be independent identically distributed Gaussian variables to represent the initial values $\lambda^{(i,n)}_0$ , where $Z^{(i)}\sim\mathcal{N}(z,\kappa)$ for some $z\in(\!-\!\infty,\infty)$ and $\kappa\in[0,\infty)$ —we take the degenerate Gaussian case $\kappa=0$ as $Z^{(i)}=z$ . The dynamics of each eigenvalue $\{\lambda_t^{(i,n)}\}_{t\in\mathbb{T}}$ is governed by the interacting system of stochastic differential equations (SDEs) given by

(2) \begin{align}\,\textrm{d} \lambda^{(i,n)}_t &= f(t)\left(A^{(n)}_t - \lambda^{(i,n)}_t \right)\,\textrm{d} t + \sigma\left(\rho \,\textrm{d} B_t + \sqrt{1 -\rho^2}\,\textrm{d} W^{(i)}_t\right), \nonumber \\[5pt] \lambda^{(i,n)}_0 &= Z^{(i)}\end{align}

for all $t\in\mathbb{T}_{-}$ and $i\in\mathcal{I}$ , where $f\;:\;\mathbb{T}_{-} \rightarrow\mathbb{R}$ is a continuous measurable function that satisfies

\begin{align*}\int_0^t\exp\left( - \int_s^t f(u)\,\textrm{d} u\right)\,\textrm{d} s < \infty, \end{align*}

and $\sigma\neq 0$ , $\rho\in[\!-\!1,1]$ , and $\{B_t\}_{t\in\mathbb{T}}$ is an independent $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion that models common noise in the system. The conditional expectation problem given by

\begin{align*}v(t, \boldsymbol{\lambda}^{(n)}) = \mathbb{E}\left[\left. \exp\left(-\int_t^Th(s)\,\textrm{d} s\right) \psi(A^{(n)}_T) \, \right| \, \boldsymbol{\lambda}_t^{(n)} = \boldsymbol{\lambda}^{(n)} \right], \end{align*}

for some $\psi\;:\;\mathbb{R}\rightarrow\mathbb{R}$ and integrable function $h\;:\;\mathbb{T}\rightarrow\mathbb{R}$ , can be computed by solving the partial differential equation

\begin{align}\frac{\partial v(t,\boldsymbol{\lambda}^{(n)})}{\partial t} &- h(t)v(t,\boldsymbol{\lambda}^{(n)}) + \sum_{i\in\mathcal{I}}\frac{\partial v(t,\boldsymbol{\lambda}^{(n)})}{\partial \lambda^{(i,n)}}f(t)\left(\frac{1}{n}\sum_{j\in\mathcal{I}}\beta^{(j,n)}\lambda^{(j,n)} - \lambda^{(i,n)} \right)\notag \\[5pt] &+ \frac{1}{2}\sigma^2\sum_{i\in\mathcal{I}}\frac{\partial^2 v(t,\boldsymbol{\lambda}^{(n)})}{\partial \lambda^{(i,n)}\partial \lambda^{(i,n)}} + \frac{1}{2}\sigma^2\rho^2\sum_{i\in\mathcal{I}}\sum_{j\in\mathcal{I}, j\neq i}\frac{\partial^2 v(t,\boldsymbol{\lambda}^{(n)})}{\partial \lambda^{(i,n)}\partial \lambda^{(j,n)}} = 0, \notag\end{align}

with the boundary condition given by

\begin{align}v(T,\boldsymbol{\lambda}^{(n)}) = \psi\left(\frac{1}{n}\sum_{i\in\mathcal{I}}\beta^{(i,n)}\lambda^{(i,n)}\right). \notag\end{align}
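To make the Feynman–Kac correspondence above concrete, the following minimal Monte Carlo sketch (our addition, not part of the original analysis) estimates $v(t,\boldsymbol{\lambda}^{(n)})$ by stepping the system (2) forward with an Euler–Maruyama scheme and averaging the discounted payoff; the choices of f, h, $\psi$, and all parameter values are illustrative assumptions.

```python
import numpy as np

def estimate_v(lam0, beta, f, h, psi, t, T, sigma=1.0, rho=0.0,
               n_steps=500, n_paths=20_000, seed=0):
    """Monte Carlo estimate of v(t, lam0) by simulating the mean-field
    system (2) forward from time t to T with an Euler-Maruyama step."""
    rng = np.random.default_rng(seed)
    n = len(lam0)
    dt = (T - t) / n_steps
    lam = np.tile(np.asarray(lam0, dtype=float), (n_paths, 1))
    discount = 0.0
    for k in range(n_steps):
        s = t + k * dt
        A = lam @ beta / n                                    # weighted ensemble average
        dB = np.sqrt(dt) * rng.standard_normal((n_paths, 1))  # common noise
        dW = np.sqrt(dt) * rng.standard_normal((n_paths, n))  # idiosyncratic noise
        lam += f(s) * (A[:, None] - lam) * dt \
               + sigma * (rho * dB + np.sqrt(1 - rho**2) * dW)
        discount += h(s) * dt
    payoff = np.exp(-discount) * psi(lam @ beta / n)
    return payoff.mean(), payoff.std() / np.sqrt(n_paths)

# illustrative (assumed) choices: alpha = 1 Wiener-bridge drift, no discounting
T, n = 1.0, 5
beta = np.ones(n)
f = lambda s: 1.0 / (T - s + 1e-9)
h = lambda s: 0.0
psi = lambda a: a**2
print(estimate_v(np.zeros(n), beta, f, h, psi, t=0.0, T=T))
# here A_T ~ N(0, T/n) by (4), so E[psi(A_T)] = T/n = 0.2
```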

The SDE in (2) is a mean-field model. In the classical setting where f is constant and $\rho=0$, the mean-field limits as $n\rightarrow\infty$ are mutually independent Ornstein–Uhlenbeck processes. We also refer to [21], where f is constant but $\rho\neq0$. In order to achieve our objective of producing random matrices that converge to the random weighted ensemble average $A^{(n)}_T\boldsymbol{{I}}(n)$ as $t\rightarrow T$ (where $\boldsymbol{{I}}(n)$ is the $n \times n$ identity matrix), we will require f to be non-constant; indeed, one of the conditions we shall impose on f rules out constant choices. Accordingly, if we choose $\rho=0$, the decoupling property of mean-field models is still maintained, but the individual eigenvalues now tend to mutually independent pinned diffusions (see [29, 38]) as $n\rightarrow\infty$, essentially re-coupling at time T, unlike Ornstein–Uhlenbeck processes. Therefore, we will encounter examples where we get mutually independent $\alpha$-Wiener bridges (see [9, 10]) in the mean-field limit, of which the Brownian bridge is the archetypal subclass. In fact, we shall see that the mean-field limits ($n\rightarrow\infty$) and the eigenvalue gaps (for any n) are both driven by pinned diffusions.

This paper is organized as follows. In Section 2, we study several mathematical properties of the system in (2) and their implications on $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ as $t\rightarrow T$ and $n\rightarrow\infty$ . We also provide some numerical simulations for demonstration purposes. Section 3 is the conclusion.

2. Main results

For the remainder of this paper, we let the components of the diagonal matrix process $\{\boldsymbol{\Lambda}_t(n)\}_{t\in\mathbb{T}}$ be governed by the system of SDEs given in (2), and choose any continuous unitary matrix process $\{\boldsymbol{{U}}_t(n)\}_{t\in\mathbb{T}}$ when working with $\boldsymbol{{H}}_t(n) = \boldsymbol{{U}}_t(n) \boldsymbol{\Lambda}_t(n) \boldsymbol{{U}}^*_t(n)$ for $t\in\mathbb{T}$.

Lemma 2.1. Let $Z^{(i)}=\hat{Z}\sim\mathcal{N}(z,\kappa)$ for each $i\in\mathcal{I}$ . Also let $G^{(n)} \sim \mathcal{N}\left(z, \Gamma^{(n)} \right)$ be a Gaussian random variable with

\begin{align}\Gamma^{(n)} = \kappa + \sigma^2T\left(\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right), \notag\end{align}

where $||\boldsymbol{\beta}^{(n)}||^{2}_{L^2} = \sum_{j=1}^n (\beta^{(j,n)})^2$ . If the map $f\;:\;\mathbb{T}_{-} \rightarrow\mathbb{R}$ in (2) satisfies

  (i) $\lim_{t\rightarrow T}\int_0^t f(s) \,\textrm{d} s = \infty$, and

  (ii) $\int_0^\tau f(s) \,\textrm{d} s < \infty$ for any $\tau\in\mathbb{T}_{-}$,

then the following holds:

(3) \begin{align}\lim_{t\rightarrow T} \lambda^{(i,n)}_t \overset{\text{law}}{=} G^{(n)}.\end{align}

Proof. Using (2), the weighted ensemble average process has the representation

(4) \begin{align}A^{(n)}_t = \frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}\lambda^{(j,n)}_t = \hat{Z} + \sigma\rho B_t + \frac{\sigma\sqrt{1 - \rho^2}}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j)}_t,\end{align}

since we have the normalization condition $\sum_{i=1}^n \beta^{(i,n)} = n$ . The solution of (2) is thus given by

(5) \begin{align}\lambda^{(i,n)}_t &= \hat{Z} + \sigma\rho B_t + \sigma\sqrt{1 - \rho^2}\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j)}_t + \int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(i)}_s - \frac{1}{n}\sum_{j=1}^n \int_0^t \frac{\gamma(t)}{\gamma(s)}\beta^{(j,n)}\,\textrm{d} W^{(j)}_s \right) \notag \\[5pt] &= \hat{Z} + \sigma\rho B_t + \sigma\sqrt{1 - \rho^2}\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j)}_t\right) + \sigma\sqrt{1 - \rho^2}\left(\gamma(t)Y^{(i)}_t - \frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}\gamma(t)Y^{(j)}_t \right),\end{align}

where we define the function $\gamma\;:\;\mathbb{R}_+\rightarrow\mathbb{R}_+$ and the process $\{Y^{(i)}_t\}_{t\in\mathbb{T}}$ as follows:

\begin{align}\gamma(t) = \exp\left( - \int_0^t f(u)\,\textrm{d} u \right) \text{and}\; Y^{(i)}_t = \int_0^t \gamma(s)^{-1}\,\textrm{d} W^{(i)}_s, \text{for $t\in\mathbb{T}_{-}$.} \notag\end{align}

Since affine transformations of Gaussian processes are Gaussian, $\lambda^{(i,n)}_t$ is Gaussian for any $t\in\mathbb{T}_{-}$ and $i\in\mathcal{I}$. Following the arguments of [29, 38], we apply Itô's integration-by-parts formula to get

\begin{align}\,\textrm{d}\!\left(\gamma(t)^{-1} W^{(i)}_t\right) = \,\textrm{d} Y^{(i)}_t - W^{(i)}_t\frac{\gamma'(t)}{\gamma^{2}(t)}\,\textrm{d} t, \notag\end{align}

where $\gamma'(t) = \,\textrm{d} \gamma(t) / \,\textrm{d} t$ . Hence, computing the derivative and integrating over time, we have

\begin{align}\gamma(t)^{-1} W^{(i)}_t = Y^{(i)}_t - \int_0^t W^{(i)}_s\frac{\gamma'(s)}{\gamma^{2}(s)}\,\textrm{d} s = Y^{(i)}_t + \int_0^t W^{(i)}_s\frac{f(s)}{\gamma(s)}\,\textrm{d} s. \notag\end{align}

Rearranging the terms and multiplying both sides by $\gamma(t)$ , we reach

\begin{align}Y^{(i)}_t &= \gamma(t)^{-1}W^{(i)}_t - \int_0^t W^{(i)}_s\frac{f(s)}{\gamma(s)}\,\textrm{d} s \notag \\[5pt] &\Rightarrow \gamma(t)Y^{(i)}_t = W^{(i)}_t - \int_0^t W^{(i)}_sU(s,t)\,\textrm{d} s, \notag\end{align}

where we have

\begin{align}U(s,t)=f(s)\gamma(s)^{-1}\gamma(t). \notag\end{align}

Since $\lim_{t\rightarrow T}\int_0^t f(s) \,\textrm{d} s = \infty$ and $\int_0^\tau f(s) \,\textrm{d} s < \infty$ for any $\tau\in\mathbb{T}_{-}$ , we have

\begin{align}&\int_0^\tau U(s,t) \,\textrm{d} s = \frac{\gamma(t)}{\gamma(\tau)} - \gamma(t) \Rightarrow \lim_{t\rightarrow T} \int_0^\tau U(s,t) \,\textrm{d} s = 0, \notag \\[5pt] &\int_0^t U(s,t) \,\textrm{d} s = 1 - \gamma(t) \Rightarrow \lim_{t\rightarrow T} \int_0^t U(s,t) \,\textrm{d} s = 1, \notag\end{align}

for any $\tau\in\mathbb{T}_{-}$, which implies that $U\;:\;\mathbb{T}^2\rightarrow\mathbb{R}_+$ is an approximation to the identity as in [38]. This means we have the following convergence:

(6) \begin{align}\mathbb{P}\left( \lim_{t\rightarrow T} \left( W^{(i)}_t - \int_0^t W^{(i)}_sU(s,t) \,\textrm{d} s \right) = 0 \right) = 1,\end{align}

given that the Brownian motion $\{W^{(i)}_t\}_{t\in\mathbb{T}}$ has continuous sample paths $\mathbb{P}$ -almost surely (a.s.). Therefore, we have

\begin{align}&\mathbb{P}\left( \lim_{t\rightarrow T} \gamma(t)Y^{(i)}_t = 0 \right) = 1, \notag \\[5pt] &\mathbb{P}\left( \lim_{t\rightarrow T} \frac{1}{n}\sum_{j=1}^n \gamma(t)Y^{(j)}_t = 0 \right) = 1, \notag\end{align}

which in turn gives us the following:

\begin{align}\mathbb{P}\left( \lim_{t\rightarrow T} \left( \gamma(t)Y^{(i)}_t - \frac{1}{n}\sum_{j=1}^n \gamma(t)Y^{(j)}_t\right) = 0 \right) = 1. \notag\end{align}

Therefore, taking the limit as $t\rightarrow T$ of $\lambda^{(i,n)}_t$ as solved in (5), we get

(7) \begin{align}\lim_{t\rightarrow T} \lambda^{(i,n)}_t &= \hat{Z} + \sigma\rho B_T + \sigma\sqrt{1 - \rho^2}\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j)}_T\right) \text{$\mathbb{P}$-a.s.,}\end{align}

which provides $L^1$ -convergence as $\lambda^{(i,n)}_t$ is Gaussian for $t\in\mathbb{T}_{-}$ . Since $\hat{Z}$ is mutually independent from $\{B_t\}_{t\in\mathbb{T}}$ and $\{W^{(i)}_t\}_{t\in\mathbb{T}}$ , it follows that

(8) \begin{align}\hat{Z} + \sigma\rho B_T + \sigma\sqrt{1 - \rho^2}&\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j)}_T\right) \overset{\text{law}}{=} \hat{Z} + \sigma\sqrt{\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}X_T \notag \\[5pt] & \sim \mathcal{N}\left(z, \kappa + \sigma^2T\left(\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right) \right), \end{align}

given that $\{X_t\}_{t\in\mathbb{T}}$ is a standard $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion, where $X_T\sim \mathcal{N}\left(0, T\right)$ .

When each eigenvalue process in the system starts from the same random number $\hat{Z}$, the terminal random variable to which each eigenvalue $\{\lambda^{(i,n)}_t\}_{t\in\mathbb{T}}$ converges no longer depends on the index i. In addition, if we set $\beta^{(i,n)}=1$ for all $i\in\mathcal{I}$, then the variance simplifies to

\begin{align}\Gamma^{(n)} = \kappa + \sigma^2T\left(\rho^2 + \frac{1 - \rho^2}{n}\right). \notag\end{align}

The result below demonstrates the convergence behaviour of the matrix system when the initial condition is a constant.

Proposition 2.1. Keep the conditions of Lemma 2.1, where the initial state $\hat{Z}=z$ is fixed for all $i\in\mathcal{I}$ . Then, with $G^{(n)} \sim \mathcal{N}\left(z, \Gamma^{(n)} \right)$ where

(9) \begin{align}\Gamma^{(n)} = \sigma^2T\left(\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right),\end{align}

the following holds:

(10) \begin{align}\lim_{t\rightarrow T} \boldsymbol{{H}}_t(n) \overset{\text{law}}{=} G^{(n)}\boldsymbol{{I}}(n),\end{align}

where $\boldsymbol{{I}}(n)$ is the $n \times n$ identity matrix.

Proof. If $\hat{Z}=\lambda_0^{(i,n)}=z$ is a fixed number for all $i\in\mathcal{I}$, this equivalently sets $\kappa=0$ in (8), where we get the terminal random variable that satisfies

(11) \begin{align}z + \sigma\rho B_T + \sigma\sqrt{1 - \rho^2}&\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j)}_T\right) \sim \mathcal{N}\left(z, \sigma^2T\left(\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right) \right). \end{align}

As seen from (3), the terminal law is independent of the index $i\in\mathcal{I}$ , which provides us with the observation

(12) \begin{align}\lim_{t\rightarrow T} \lambda^{(i,n)}_t \overset{\text{law}}{=} G^{(n)} \Rightarrow \lim_{t\rightarrow T} \boldsymbol{\Lambda}_t(n) \overset{\text{law}}{=} G^{(n)}\boldsymbol{{I}}(n).\end{align}

Hence, using (12), we get

\begin{align}\lim_{t\rightarrow T} \boldsymbol{{H}}_t(n) &\overset{\text{law}}{=} \boldsymbol{{U}}_T(n)G^{(n)}\boldsymbol{{I}}(n)\boldsymbol{{U}}^*_T(n) \notag \\[5pt] &= G^{(n)}\boldsymbol{{U}}_T(n)\boldsymbol{{U}}^*_T(n) \notag \\[5pt] &= G^{(n)}\boldsymbol{{I}}(n), \notag\end{align}

since $\{\boldsymbol{{U}}_t(n)\}_{t\in\mathbb{T}}$ is a continuous unitary matrix process, which completes the proof.

If $\hat{Z}=z$ and $\rho=0$ , then $G^{(n)}\sim\mathcal{N}(z, n^{-2}\sigma^2T|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2})$ —if the system starts from a fixed value and there is no common noise factor in the system, Proposition 2.1 shows us that the system converges to the identity matrix scaled by a Gaussian random variable whose variance is inversely proportional to the dimension of the matrix, as $t\rightarrow T$ . For us, the case where $\hat{Z}=0$ is of fundamental importance; for this case, if we also have $\beta^{(j,n)} = 1$ , then the law further simplifies to $G^{(n)}\sim\mathcal{N}\left(0, n^{-1}\sigma^2T\right)$ .
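As a quick sanity check of Proposition 2.1, one can sample the right-hand side of (11) directly rather than simulating paths; the sketch below (our addition, with the illustrative choices $\beta^{(i,n)}=1$, $\rho=0$, $\sigma=T=1$, and $z=0$) compares the sample variance against $\Gamma^{(n)}$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, rho, T, n_paths = 10, 1.0, 0.0, 1.0, 200_000
beta = np.ones(n)
# sample the limiting random variable in (11) directly, with z = 0:
B_T = np.sqrt(T) * rng.standard_normal(n_paths)
W_T = np.sqrt(T) * rng.standard_normal((n_paths, n))
G = sigma * rho * B_T + sigma * np.sqrt(1 - rho**2) * (W_T @ beta) / n
Gamma_n = sigma**2 * T * (rho**2 + (1 - rho**2) * (beta @ beta) / n**2)
print(G.var(), "vs", Gamma_n)   # both approximately 0.1 for these choices
```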

Corollary 2.1. Keep the conditions of Lemma 2.1. Then $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ converges to the weighted ensemble average of its eigenvalues $A^{(n)}_T\boldsymbol{{I}}(n)$ as $t\rightarrow T$ , $\mathbb{P}$ -a.s.

Proof. From Equations (4) and (7), we see that each eigenvalue process $\{\lambda^{(i,n)}_t\}_{t\in\mathbb{T}}$ converges to the weighted ensemble average $A^{(n)}_T$ as $t\rightarrow T$ , $\mathbb{P}$ -a.s. Thus, we have the following:

\begin{align}\lim_{t\rightarrow T} \boldsymbol{\Lambda}_t(n) = A^{(n)}_T\boldsymbol{{I}}(n). \notag\end{align}

The result follows since $\{\boldsymbol{{U}}_t(n)\}_{t\in\mathbb{T}}$ is a continuous unitary process.

Every unitary matrix has an exponential representation in terms of some Hermitian matrix. If $\{\boldsymbol{{U}}_t(n)\}_{t\in\mathbb{T}}$ is deterministic with

\begin{align}\boldsymbol{{U}}_t(n) = e^{i \boldsymbol{{V}}(n)\mu(t)}, \notag\end{align}

where $\boldsymbol{{V}}(n)$ is a Hermitian matrix and $\mu\;:\;\mathbb{T}\rightarrow\mathbb{R}$ is a differentiable function, then $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ is governed by

(13) \begin{align}\boldsymbol{{H}}_t(n) = \int_0^t\mathcal{L}\left[i \frac{\partial \mu}{ \partial s}\boldsymbol{{V}}(n), \boldsymbol{{H}}_s(n) \right]\,\textrm{d} s + \boldsymbol{{U}}_t(n)\left(\int_0^t\textrm{d} \boldsymbol{\Lambda}_s(n)\right)\boldsymbol{{U}}_t^*(n),\quad \text{for all $t\in\mathbb{T}_{-}$},\end{align}

where $\mathcal{L}$ is the commutator with

\begin{align*}\mathcal{L}\left[i \frac{\partial \mu}{ \partial t}\boldsymbol{{V}}(n), \boldsymbol{{H}}_t(n) \right]=i \frac{\partial \mu}{ \partial t}\left(\boldsymbol{{V}}(n) \boldsymbol{{H}}_t(n) - \boldsymbol{{H}}_t(n)\boldsymbol{{V}}(n)\right)\!. \end{align*}

This follows since $\,\textrm{d}\boldsymbol{{U}}_t(n)\,\textrm{d}\boldsymbol{\Lambda}_t(n) =\,\textrm{d}\boldsymbol{\Lambda}_t(n)\,\textrm{d}\boldsymbol{{U}}^*_t(n)=\,\textrm{d}\boldsymbol{{U}}_t(n)\,\textrm{d}\boldsymbol{{U}}^*_t(n)=0$ , and using Itô’s integration-by-parts formula,

\begin{align}\,\textrm{d}\boldsymbol{{H}}_t(n) = \,\textrm{d}\boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\boldsymbol{{U}}^*_t(n) + \boldsymbol{{U}}_t(n)\,\textrm{d}\boldsymbol{\Lambda}_t(n)\boldsymbol{{U}}^*_t(n) + \boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\,\textrm{d}\boldsymbol{{U}}^*_t(n), \notag\end{align}

where we have $\,\textrm{d}\boldsymbol{\Lambda}_t(n)$ from (2). Also, since $\boldsymbol{{U}}_t(n) = e^{i \boldsymbol{{V}}(n)\mu(t)}$ , we further get

\begin{align}&\,\textrm{d}\boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\boldsymbol{{U}}^*_t(n) = i \frac{\partial \mu}{ \partial t}\boldsymbol{{V}}(n) \boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\boldsymbol{{U}}^*_t(n) \,\textrm{d} t= i \frac{\partial \mu}{ \partial t}\boldsymbol{{V}}(n)\boldsymbol{{H}}_t(n) \,\textrm{d} t, \notag\\[5pt] &\boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\,\textrm{d}\boldsymbol{{U}}^*_t(n) = -i \frac{\partial \mu}{ \partial t} \boldsymbol{{U}}_t\boldsymbol{\Lambda}_t(n)\boldsymbol{{V}}(n) \boldsymbol{{U}}^*_t(n)\,\textrm{d} t = -i \frac{\partial \mu}{ \partial t}\boldsymbol{{H}}_t(n)\boldsymbol{{V}}(n)\,\textrm{d} t, \notag\end{align}

which provides us with

\begin{align}\,\textrm{d}\boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\boldsymbol{{U}}^*_t(n) + \boldsymbol{{U}}_t(n)\boldsymbol{\Lambda}_t(n)\,\textrm{d}\boldsymbol{{U}}^*_t(n) = \mathcal{L}\left[i \frac{\partial \mu}{ \partial t}\boldsymbol{{V}}(n), \boldsymbol{{H}}_t(n) \right]\,\textrm{d} t. \notag\end{align}

Thus, if a random matrix process satisfies the SDE (13), where its eigenvalues $\{\boldsymbol{\Lambda}_t(n)\}_{t\in\mathbb{T}}$ are driven by (2) with the conditions in Lemma 2.1 met, then we are working with a system where $\lim_{t\rightarrow T} \boldsymbol{{H}}_t(n) = A^{(n)}_T\boldsymbol{{I}}(n)$.

The eigenvalues of $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ are Gaussian, and the following result provides their covariance structure, which in turn generalizes the covariance trajectories of [42]. Without loss of much generality, we shall set $\hat{Z}=0$ as a fundamental scenario for the analyses below.

Proposition 2.2. Keep the conditions of Proposition 2.1 with $\hat{Z}=0$ . Then

(14) \begin{align}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] &= \sigma^2t\left(\rho^2 + \frac{(1-\rho^2)|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2}\right) + \sigma^2(1-\rho^2)\mathbf{1}(i=j)\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s \notag \\[5pt] &+ \frac{\sigma^2(1-\rho^2)}{n}\left((\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s - 2\frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s\right) \notag \\[5pt] &-\frac{\sigma^2(1-\rho^2)}{n}\left((\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s - \frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n}\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s\right)\end{align}

is the covariance process of the system for $i,j\in\mathcal{I}$ , where $\mathbf{1}(.)$ is the indicator function.

Proof. Note that $\mathbb{E}[\lambda^{(i,n)}_t]=0$ when $\hat{Z}=0$. Hence, $\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t]$ is the covariance at every $t\in\mathbb{T}$. Using (5) with $\hat{Z}=0$, for any $i,j\in\mathcal{I}$ we collect the terms of the product $\lambda^{(i,n)}_t \lambda^{(j,n)}_t$ that involve the common noise $\{B_t\}_{t\in\mathbb{T}}$ into the process

\begin{align}\psi^{(i,j)}_t &= \sigma^2\rho^2B_t^2 \notag \\[5pt] &\quad + \sigma\rho B_t\sigma\sqrt{1 - \rho^2}\left( \frac{1}{n}\sum_{k=1}^n\beta^{(k,n)} W^{(k)}_t + \int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(i)}_s - \frac{1}{n}\sum_{k=1}^n \int_0^t \beta^{(k,n)}\frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(k)}_s \right) \notag \\[5pt] &\quad + \sigma\rho B_t\sigma\sqrt{1 - \rho^2}\left( \frac{1}{n}\sum_{l=1}^n \beta^{(l,n)}W^{(l)}_t + \int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(j)}_s - \frac{1}{n}\sum_{l=1}^n \int_0^t \beta^{(l,n)}\frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(l)}_s \right). \notag\end{align}

For the full product, we thus have

(15) \begin{align}\lambda^{(i,n)}_t \lambda^{(j,n)}_t &= \psi^{(i,j)}_t + \frac{\sigma^2(1-\rho^2)}{n^2}\sum_{k=1}^n\sum_{l=1}^n \beta^{(k,n)}\beta^{(l,n)}W^{(k)}_t W^{(l)}_t \notag\\[5pt] & \quad + \frac{\sigma^2(1-\rho^2)}{n}\sum_{k=1}^n\beta^{(k,n)}\int_0^t \frac{\gamma(t)}{\gamma(s)}W^{(k)}_t\,\textrm{d} W^{(j)}_s \notag \\[5pt] & \quad + \frac{\sigma^2(1-\rho^2)}{n}\sum_{l=1}^n\beta^{(l,n)}\int_0^t \frac{\gamma(t)}{\gamma(s)}W^{(l)}_t\,\textrm{d} W^{(i)}_s \notag\\[5pt] & \quad +\sigma^2(1-\rho^2)\int_0^t \int_0^t\frac{\gamma(t)^{2}}{\gamma(s)^{2}}\,\textrm{d} W^{(i)}_s\,\textrm{d} W^{(j)}_s \notag\\[5pt] & \quad - \frac{\sigma^2(1-\rho^2)}{n}\sum_{l=1}^n \beta^{(l,n)}\int_0^t\int_0^t\frac{\gamma(t)^{2}}{\gamma(s)^{2}}\,\textrm{d} W^{(i)}_s\,\textrm{d} W^{(l)}_s \notag \\[5pt] & \quad -\frac{\sigma^2(1-\rho^2)}{n^2}\sum_{k=1}^n\sum_{l=1}^n\beta^{(k,n)}\beta^{(l,n)}\int_0^t \frac{\gamma(t)}{\gamma(s)}W^{(l)}_t\,\textrm{d} W^{(k)}_s \notag\\[5pt] &\quad - \frac{\sigma^2(1-\rho^2)}{n}\sum_{k=1}^n \beta^{(k,n)}\int_0^t\int_0^t\frac{\gamma(t)^{2}}{\gamma(s)^{2}}\,\textrm{d} W^{(j)}_s\,\textrm{d} W^{(k)}_s \notag \\[5pt] & \quad +\frac{\sigma^2(1-\rho^2)}{n^2}\sum_{k=1}^n\sum_{l=1}^n \beta^{(k,n)}\beta^{(l,n)}\int_0^t\int_0^t\frac{\gamma(t)^{2}}{\gamma(s)^{2}}\,\textrm{d} W^{(l)}_s\,\textrm{d} W^{(k)}_s.\end{align}

Next, we take the expectation of the product above. First, since $\{B_t\}_{t\in\mathbb{T}}$ is independent of the idiosyncratic Brownian motions $\{W^{(i)}_t\}_{t\in\mathbb{T}}$, the cross terms vanish in expectation and we get the following:

\begin{align}\mathbb{E}\left[\psi^{(i,j)}_t\right] &= \mathbb{E}\left[\sigma^2\rho^2B_t^2 \right] \notag \\[5pt] &=\sigma^2\rho^2 t. \notag\end{align}

All Brownian motions $\{W^{(i)}_t\}_{t\in\mathbb{T}}$ are mutually independent. Using Itô isometry, we have

\begin{align}&\sum_{k=1}^n\beta^{(k,n)}\mathbb{E}\left[\int_0^t \int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} W^{(k)}_s\,\textrm{d} W^{(j)}_s\right] = \beta^{(j,n)}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s \notag \\[5pt] &\sum_{l=1}^n\beta^{(l,n)}\mathbb{E}\left[\int_0^t\int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(l)}_s \,\textrm{d} W^{(i)}_s\right] = \beta^{(i,n)}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s \notag \\[5pt] &\sum_{k=1}^n\sum_{l=1}^n\beta^{(k,n)}\beta^{(l,n)}\mathbb{E}\left[ \int_0^t\int_0^t\frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(k)}_s\,\textrm{d} W^{(l)}_s\right] = || \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s. \notag\end{align}

In addition, we have the following:

\begin{align}&\mathbb{E}\left[\sum_{l=1}^n \beta^{(l,n)}\int_0^t\int_0^t\frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} W^{(i)}_s\,\textrm{d} W^{(l)}_s\right] = \beta^{(i,n)}\int_0^t\frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s, \notag \\[5pt] &\mathbb{E}\left[\sum_{k=1}^n \beta^{(k,n)}\int_0^t\int_0^t\frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} W^{(j)}_s\,\textrm{d} W^{(k)}_s\right] = \beta^{(j,n)}\int_0^t\frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s. \notag\end{align}

Summing all the components, we have the following:

(16) \begin{align} & \mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] = \nonumber \\ & \quad \sigma^2\rho^2 t + \frac{\sigma^2(1-\rho^2)}{n^2}\sum_{k=1}^n (\beta^{(k,n)})^2\mathbb{E}\left[W^{(k)}_t W^{(k)}_t\right] + \sigma^2(1-\rho^2)\mathbf{1}(i=j)\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s \notag \\[5pt] &\qquad + \frac{\sigma^2(1-\rho^2)}{n}(\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s - 2\frac{\sigma^2(1-\rho^2)|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s \notag \\[5pt] &\qquad -\frac{\sigma^2(1-\rho^2)}{n}(\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s + \frac{\sigma^2(1-\rho^2)|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2}\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s,\end{align}

which completes the proof.

The covariance structure in [42] becomes the following corollary of Proposition 2.2.

Corollary 2.2. Keep the conditions of Proposition 2.1 with $\hat{Z}=0$ , and set $\beta^{(i,n)}=1$ for all $i\in\mathcal{I}$ . Then the following holds:

(17) \begin{align}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] &= \sigma^2t\left(\rho^2 + \frac{(1-\rho^2)}{n}\right) \notag \\[5pt] &\quad + \sigma^2(1-\rho^2)\left(\mathbf{1}(i=j)\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s - \frac{1}{n}\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s\right),\end{align}

as the covariance process of the system for $i,j\in\mathcal{I}$ , where $\mathbf{1}(.)$ is the indicator function.

The next result consolidates Proposition 2.1 with the limiting behaviour of the covariance structure given in Proposition 2.2.

Corollary 2.3. Keep the conditions of Proposition 2.1 with $\hat{Z}=0$ . Then

\begin{align}&\lim_{t\rightarrow T}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] = \sigma^2T\left(\rho^2 + \frac{(1-\rho^2)}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right). \notag\end{align}

Proof. Since we have the conditions (i) $\lim_{t\rightarrow T}\int_0^t f(s) \,\textrm{d} s = \infty$ and (ii) $\int_0^\tau f(s) \,\textrm{d} s < \infty$ for any $\tau\in\mathbb{T}_{-}$ , we must have, for $k\in[1,\infty)$ , the following:

\begin{align}\lim_{t\rightarrow T}&\int_0^t kf(s) \,\textrm{d} s = \infty, \notag \\[5pt] &\int_0^\tau kf(s) \,\textrm{d} s < \infty \notag\end{align}

for any $\tau\in\mathbb{T}_{-}$ . Hence, we define the scaled function

\begin{align}g_k(t) = kf(t) \notag\end{align}

for all $t\in\mathbb{T}$ and let

\begin{align}\Psi_k(t) = \gamma(t)^k = \exp\left( - \int_0^t g_k(u)\,\textrm{d} u \right) \quad \text{and} \quad Z_t = \int_0^t \Psi_k(s)^{-1}\,\textrm{d} s, \text{for $t\in\mathbb{T}_{-}$.} \notag\end{align}

Using integration by parts, we get

\begin{align}\,\textrm{d}\left(\Psi_k(t)^{-1}t\right) = \,\textrm{d} Z_t - t\frac{\Psi'_{\!\!k}(t)}{\Psi_k(t)^{2}}\,\textrm{d} t \quad \Rightarrow \quad \Psi_k(t)^{-1} t = Z_t - \int_0^t s\frac{\Psi'_{\!\!k}(s)}{\Psi_k(s)^{2}}\,\textrm{d} s, \notag\end{align}

and therefore,

\begin{align}\Psi_k(t)^{-1} t = Z_t + \int_0^t s\frac{g_k(s)}{\Psi_k(s)}\,\textrm{d} s \quad \Rightarrow \quad \Psi_k(t)Z_t = t - \int_0^t sV_k(s,t)\,\textrm{d} s, \notag\end{align}

where $V_k(s,t)=g_k(s)\Psi_k(s)^{-1}\Psi_k(t)$. Since the map $s\mapsto s$ is continuous, taking steps similar to those of Lemma 2.1, we thus have

(18) \begin{align}\lim_{t\rightarrow T} \left( t - \int_0^t sV_k(s,t) \,\textrm{d} s \right) = 0,\end{align}

which implies that

\begin{align}&\lim_{t\rightarrow T} \left(\sigma^2(1-\rho^2)\mathbf{1}(i=j)\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2}\,\textrm{d} s\right) = 0, \notag \\[5pt] &\lim_{t\rightarrow T} \left(\frac{\sigma^2(1-\rho^2)}{n}\left((\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s - 2\frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n}\int_0^t \frac{\gamma(t)}{\gamma(s)} \,\textrm{d} s\right)\right)= 0, \notag \\[5pt] &\lim_{t\rightarrow T} \left(\frac{\sigma^2(1-\rho^2)}{n}\left((\beta^{(i,n)}+\beta^{(j,n)})\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s - \frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n}\int_0^t \frac{\gamma(t)^2}{\gamma(s)^2} \,\textrm{d} s\right)\right) = 0, \notag\end{align}

if we choose $k=1$ and $k=2$ . The result then follows from Proposition 2.2.

In random matrix theory, it is typical to see non-colliding eigenvalues; this behaviour arises endogenously from many random Hermitian matrices studied in the literature where the matrix entries are continuous semimartingales (see Bru's theorem [19, 34]). On the other hand, we start with the eigenvalue matrix $\{\boldsymbol{\Lambda}_t(n)\}_{t\in\mathbb{T}}$, which may collide. For eigenvalue gap dynamics, we have the following result.

Proposition 2.3. Keep the conditions of Lemma 2.1. Let $\{\boldsymbol{{S}}_t(n)\}_{t\in\mathbb{T}}$ be a matrix-valued process where each element is given by

\begin{align}S^{(i,j)}_t=\lambda^{(i,n)}_t - \lambda^{(j,n)}_t, \notag\end{align}

so that $\boldsymbol{{S}}_t(n)$ is a skew-symmetric matrix with zero diagonals for all $t\in\mathbb{T}$ . Then the following holds:

\begin{align}S^{(i,j)}_t \overset{\text{law}}{=} \sigma\sqrt{2(1 - \rho^2)}\int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} \hat{W}^{(i,j)}_s \quad \text{for $i\neq j$} \quad\text{and} \quad \text{$\forall t\in\mathbb{T}_{-}$,} \notag\end{align}

where $\{\hat{W}^{(i,j)}_t\}_{t\in\mathbb{T}}$ is a standard $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motion. Hence,

\begin{align}\lim_{t\rightarrow T}\boldsymbol{{S}}_t(n)=\textbf{0}(n). \notag\end{align}

Proof. It is clear that $\boldsymbol{{S}}_t(n)$ is a skew-symmetric matrix with zero diagonals. Using (5), we have

\begin{align}S^{(i,j)}_t &= \sigma\sqrt{1 - \rho^2}\left(\int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(i)}_s - \int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(j)}_s\right) \notag \\[5pt] &\overset{\text{law}}{=} \sigma\sqrt{1 - \rho^2}\sqrt{2}\int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} \hat{W}^{(i,j)}_s. \notag\end{align}

Finally, $\lim_{t\rightarrow T}\boldsymbol{{S}}_t(n)=\textbf{0}(n)$ follows from (6) since $\{\hat{W}^{(i,j)}_t\}_{t\in\mathbb{T}}$ has continuous paths $\mathbb{P}$ -a.s.

The result shows that the gap between any pair of eigenvalues is itself a Gaussian process with zero mean, so that the eigenvalues collide on average. Note also that each eigenvalue gap process $\{S^{(i,j)}_t\}_{t\in\mathbb{T}}$ above is a diffusion that is pinned to zero at time T.
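To illustrate, for the drift $f(t)=1/(T-t)$ (the $\alpha=1$ case of Example 2.3 below) we have $\gamma(t)=(T-t)/T$, and the gap variance $2\sigma^2(1-\rho^2)\int_0^t \gamma(t)^2\gamma(s)^{-2}\,\textrm{d} s$ evaluates to the Brownian-bridge form $2\sigma^2(1-\rho^2)\,t(T-t)/T$. The following sketch (our addition; parameter values are illustrative) discretizes the stochastic integral in Proposition 2.3 with a midpoint rule and compares the sample variance with this closed form.

```python
import numpy as np

rng = np.random.default_rng(2)
T, sigma, rho, t = 1.0, 1.0, 0.0, 0.7
m, n_paths = 1000, 10_000
s = (np.arange(m) + 0.5) * t / m          # midpoints of [0, t]
weights = (T - t) / (T - s)               # gamma(t)/gamma(s) for alpha = 1
dW = np.sqrt(t / m) * rng.standard_normal((n_paths, m))
S = sigma * np.sqrt(2 * (1 - rho**2)) * (dW @ weights)
closed_form = 2 * sigma**2 * (1 - rho**2) * t * (T - t) / T
print(S.var(), "vs", closed_form)         # both approximately 0.42
```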

We shall now take limits with respect to the size of the system, that is, as $n\rightarrow\infty$ , to get the mean-field limits of the interacting eigenvalues. First, we label the following assumption that we shall use when we study convergence properties with respect to the size.

Assumption 2.1. Kolmogorov’s strong law property holds:

(19) \begin{align}\mathcal{K} = \sum_{k=1}^{\infty}\frac{(\beta^{(k)})^2}{k^2} < \infty,\end{align}

where $\beta^{(k)}=\beta^{(k,n)}$ for all $k\leq n$ .

As an example, if we set $\beta^{(k)}=1$, then $\mathcal{K}=\pi^2/6$; a quick numerical check is given below.
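A minimal check of this value (our addition), assuming only the constant weights $\beta^{(k)}=1$:

```python
import math

# partial sums of (19) with beta_k = 1 converge to pi^2/6 from below:
K_partial = sum(1.0 / k**2 for k in range(1, 100_001))
print(K_partial, "vs", math.pi**2 / 6)   # 1.6449240... vs 1.6449340...
```

We are now in a position to state the following result.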

Proposition 2.4. Keep the conditions of Lemma 2.1 and define

\begin{align}\lim_{n\rightarrow\infty} \lambda^{(i,n)}_t \triangleq \xi^{(i)}_t, \notag\end{align}

given that (19) holds. Then

(20) \begin{align}\xi^{(i)}_t = \hat{Z} + \sigma\rho B_t + \sigma\sqrt{1 - \rho^2}\left(\int_0^t \frac{\gamma(t)}{\gamma(s)}\,\textrm{d} W^{(i)}_s \right),\end{align}

for $i\in\mathcal{I}$ . If $\rho=0$ and $\hat{Z}=0$ , then the $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$ are mutually independent.

Proof. Since Kolmogorov’s strong law property (19) holds, the strong law of large numbers gives the following limits:

(21) \begin{align}&\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j)}_t= 0, \\[5pt] &\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{j=1}^n \int_0^t \frac{\gamma(t)}{\gamma(s)}\beta^{(j,n)}\,\textrm{d} W^{(j)}_s = 0, \notag\end{align}

which hold $\mathbb{P}$-a.s., since we have $\int_0^t \gamma(t)^2\gamma(s)^{-2}\,\textrm{d} s < \infty$. The representation given in (20) then follows from (5). The mutual independence when $\rho=0$ and $\hat{Z}=0$ is due to the mutually independent Brownian motions $\{W^{(i)}_t\}_{t\in\mathbb{T}}$ across $i\in\mathcal{I}$.

Example 2.1. If $f(t)=\theta^t\log(\theta)/(\theta^T-\theta^t)$ for some $\theta\in(1,\infty)$ for $t\in\mathbb{T}_{-}$ , then

(22) \begin{align}\xi^{(i)}_t &= \hat{Z} + \sigma\rho B_t + \sigma\sqrt{1 - \rho^2}\left(\int_0^t \frac{(\theta^T-\theta^t)}{(\theta^T-\theta^s)}\,\textrm{d} W^{(i)}_s\right).\end{align}

Note that if $\theta=\exp(1)$ , then $f(t)=\exp(t)/(\exp(T)-\exp(t))$ for $t\in\mathbb{T}_{-}$ .

Example 2.2. If $\rho=0$ and $f(t)=\cot(T-t)$ for $t\in\mathbb{T}_{-}$ , then

(23) \begin{align}\xi^{(i)}_t &= \hat{Z} + \sigma\int_0^t \frac{\sin\!(T-t)}{\sin\!(T-s)}\,\textrm{d} W^{(i)}_s,\end{align}

given that $T <\pi$ holds.

Example 2.3. If $\hat{Z}=0$ , $\rho=0$ , and $f(t)=\alpha/(T-t)$ for $t\in\mathbb{T}_{-}$ for some $\alpha\in(0,\infty)$ , then the mean-field limit consists of mutually independent $\alpha$ -Wiener bridges for $i\in\mathcal{I}$ , where

(24) \begin{align}\xi^{(i)}_t &= \sigma\int_0^t \frac{(T-t)^\alpha}{(T-s)^\alpha}\,\textrm{d} W^{(i)}_s.\end{align}

More specifically, if we have $\alpha=1$ and $\sigma=1$, then the processes $\{\xi^{(i)}_t\}_{t\in\mathbb{T}}$, $i\in\mathcal{I}$, are mutually independent standard Brownian bridges.

The mean-field limits given in (22), (23), and (24) are examples amongst many others that can be found by choosing different f that satisfy the conditions in Lemma 2.1.
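For $\rho=0$ and $\hat{Z}=0$, the limit (20) solves the linear SDE $\textrm{d}\xi^{(i)}_t = -f(t)\xi^{(i)}_t\,\textrm{d} t + \sigma\,\textrm{d} W^{(i)}_t$, which is straightforward to simulate. The sketch below (our addition; the parameter values are illustrative) runs an Euler–Maruyama discretization of the $\alpha$-Wiener bridge of Example 2.3 and confirms that the paths are pinned near zero as $t\rightarrow T$.

```python
import numpy as np

rng = np.random.default_rng(3)
T, sigma, alpha = 1.0, 1.0, 0.5
m, n_paths = 5000, 10_000
dt = T / m
xi = np.zeros(n_paths)
for k in range(m - 1):          # stop one step short of T, where f blows up
    t = k * dt
    xi += -(alpha / (T - t)) * xi * dt \
          + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
print(np.abs(xi).mean())        # small: the paths are pinned near zero at T
```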

Remark 2.1. If there is no common noise in the system with $\rho=0$ and we have $\hat{Z}=0$ , then the following holds:

\begin{align}S^{(i,j)}_t \overset{\text{law}}{=} \sqrt{2}\xi^{(i)}_t \quad \text{for $t\in\mathbb{T}$ and $i\neq j$}. \notag\end{align}

Hence, each gap process $\{S^{(i,j)}_t\}_{t\in\mathbb{T}}$ for any matrix dimension n behaves as the (scaled) mean-field limit of the system as $n\rightarrow\infty$ . However, $\{S^{(i,j)}_t\}_{t\in\mathbb{T}}$ are not mutually independent across $i,j\in\mathcal{I}$ even when $\rho=0$ .

Proposition 2.5 below provides us with a consistency result for the system, where the double limits of every $\{\lambda^{(i,n)}_t\}_{t\in\mathbb{T}}$ as $n\rightarrow\infty$ and $t\rightarrow T$ are exchangeable; that is, the order of taking these limits does not matter.

Proposition 2.5. Keep the conditions of Lemma 2.1 and let (19) hold. Then

\begin{align*}\lim_{t\rightarrow T} \lim_{n\rightarrow \infty} \lambda^{(i,n)}_t &= \lim_{n\rightarrow \infty} \lim_{t\rightarrow T} \lambda^{(i,n)}_t \notag \\[5pt] &= \hat{Z} + \sigma\rho B_T \quad \text{$\forall i\in\mathcal{I}$, $\mathbb{P}$-a.s.}\end{align*}

Proof. Using (20) and the convergence in (6), we get the following double limit:

\begin{align*}\lim_{t\rightarrow T} \lim_{n\rightarrow \infty} \lambda^{(i,n)}_t &= \lim_{t\rightarrow T}\xi^{(i)}_t \notag \\[5pt] &= \hat{Z} + \sigma\rho B_T \quad \text{$\mathbb{P}$-a.s.}\end{align*}

When we start with $\lim_{t\rightarrow T} \lambda^{(i,n)}_t$ , using (7) and (21), we have

\begin{align}\lim_{n\rightarrow \infty} \lim_{t\rightarrow T} \lambda^{(i,n)}_t &= \hat{Z} + \sigma\rho B_T + \lim_{n\rightarrow \infty}\sigma\sqrt{1 - \rho^2}\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)} W^{(j)}_T\right) \notag \\[5pt] &=\hat{Z} + \sigma\rho B_T, \notag\end{align}

$\mathbb{P}$ -a.s. Hence, the iterated limits commute for any $i\in\mathcal{I}$ .

If $\hat{Z}=0$ and if there is no common noise with $\rho=0$ , it can be seen from Proposition 2.5 that the entire system converges to zero in the above double limits, irrespective of their order. If $\hat{Z}=0$ but $\rho\neq 0$ , then the system converges to the same random variable dictated by the common noise. Note also that if $\rho\in\{-1,1\}$ , then the whole system is essentially driven by the common noise process $\{B_t\}_{t\in\mathbb{T}}$ , which can be seen from (5).

2.1. A numerical study of eigenvalue convergence

To demonstrate how the system of eigenvalues behaves as $t\rightarrow T$ and $n\rightarrow\infty$ , we shall provide numerical simulations. We discretize the SDE in (2) using the Euler–Maruyama scheme over the lattice $0 = t_0 \leq t_1 \leq \ldots \leq t_m \leq T$ for some $m\in\mathbb{N}_+$ . We denote our numerical approximation of $\{\lambda^{(i,n)}_t\}_{t\in\mathbb{T}}$ by $\{\hat{\lambda}^{(i,n)}_{t_k}\}_{t_k\in\mathbb{T}}$ for $i\in\mathcal{I}$ , and work with the following:

(25) \begin{align}\hat{\lambda}^{(i,n)}_{t_{k+1}} = \hat{\lambda}^{(i,n)}_{t_k} &+ f(t_k)\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}\hat{\lambda}^{(j,n)}_{t_k} - \hat{\lambda}^{(i,n)}_{t_k}\right)\delta \notag \\[5pt] &+ \sigma\left(\rho\left(B_{t_{k+1}} - B_{t_k}\right) + \sqrt{1-\rho^2}\left(W^{(i)}_{t_{k+1}} - W^{(i)}_{t_k}\right)\right), \quad\text{for every $i\in\mathcal{I}$},\end{align}

where we set $\hat{\lambda}^{(i,n)}_{t_0} = 0$ , $\delta=T/m$ , and $t_k = k\delta$ . As an example, we choose $f(t)=\theta^t\log(\theta)/(\theta^T-\theta^t)$ for some $\theta\in(1,\infty)$ , as in Example 2.1, so that

\begin{align}\lambda^{(i,n)}_t = \sigma\rho B_t + \sigma\sqrt{1 - \rho^2}&\left(\frac{1}{n}\sum_{j=1}^n \beta^{(j,n)}W^{(j)}_t + \int_0^t \frac{(\theta-\theta^t)}{(\theta-\theta^s)}\,\textrm{d} W^{(i)}_s \right. \notag\\[5pt] &\quad \left. - \frac{1}{n}\sum_{j=1}^n \int_0^t \frac{(\theta-\theta^t)}{(\theta-\theta^s)}\beta^{(j,n)}\,\textrm{d} W^{(j)}_s \right) \notag\end{align}

for every $i\in\mathcal{I}$ , where we set $T=1$ for parsimony. We also choose $m=1000$ for the time lattice, so that every time-step we move on is $\delta=0.001$ . For the averaging coefficients $\beta^{(j,n)}$ for $j=1,\ldots,n$ , we choose the following scheme as an example:

\begin{align}\beta^{(j,n)} &= \frac{2 j n}{n(n+1)} \notag \\[5pt] & \quad \Rightarrow \sum_{j=1}^n \beta^{(j,n)} = n. \notag\end{align}

Hence, our choice satisfies the normalization condition in (1). In addition, we have the limit

(26) \begin{align}\lim_{n\rightarrow \infty} \sigma^2T(1-\rho^2)\frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2} &= \lim_{n\rightarrow \infty} \frac{\sigma^2T(1-\rho^2)}{n^2}\sum_{j=1}^n \frac{4j^2n^2}{(n^2+n)^2} \notag \\[5pt] &= \lim_{n\rightarrow \infty} \frac{4\sigma^2T(1-\rho^2)}{(n^2+n)^2}\sum_{j=1}^n j^2 \notag \\[5pt] &= 0,\end{align}

which implies that $\Gamma^{(n)}$ in (9) as provided in Proposition 2.1 converges to the following:

\begin{align}\lim_{n\rightarrow \infty} \Gamma^{(n)} = \sigma^2T\rho^2. \notag\end{align}

The convergence in (26) further gives us

\begin{align}\lim_{n\rightarrow \infty} \mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] &= \sigma^2t\rho^2 + \sigma^2(1-\rho^2)\mathbf{1}(i=j)\int_0^t \frac{(\theta-\theta^t)^2}{(\theta-\theta^s)^2}\,\textrm{d} s, \notag\end{align}

by Proposition 2.2. Finally, we thus have the double limit of the covariance process given by

\begin{align}\lim_{n\rightarrow \infty} \lim_{t\rightarrow T} \mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] &= \lim_{t\rightarrow T} \lim_{n\rightarrow \infty} \mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] \notag \\[5pt] &= \sigma^2T\rho^2.\notag\end{align}

If there is no common noise in the system with $\rho=0$ , the variance of the system converges to zero as $n\rightarrow \infty$ and $t\rightarrow T$ . If there is common noise with $\rho\neq0$ , there is irreducible variance in the system at these limits. For the simulations below, we shall set $\sigma=1$ without loss of much generality.
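As a concrete illustration of how the simulations of this section can be reproduced, the following minimal sketch (our addition; plotting is omitted, and freezing the final step, where f blows up, is our own implementation choice) implements the scheme (25) with the above f and $\boldsymbol{\beta}^{(n)}$.

```python
import numpy as np

def simulate_eigenvalues(n, theta=2.0, T=1.0, m=1000, sigma=1.0, rho=0.0, seed=0):
    """Scheme (25) with f(t) = theta^t log(theta) / (theta^T - theta^t) and
    beta_j = 2jn / (n(n+1)); returns the paths as an array of shape (m+1, n)."""
    rng = np.random.default_rng(seed)
    delta = T / m
    beta = 2.0 * np.arange(1, n + 1) / (n + 1)    # equals 2jn / (n(n+1))
    f = lambda t: theta**t * np.log(theta) / (theta**T - theta**t)
    lam = np.zeros((m + 1, n))
    for k in range(m - 1):                        # f(t) blows up at t = T
        t_k = k * delta
        A = beta @ lam[k] / n                     # weighted ensemble average
        dB = np.sqrt(delta) * rng.standard_normal()
        dW = np.sqrt(delta) * rng.standard_normal(n)
        lam[k + 1] = lam[k] + f(t_k) * (A - lam[k]) * delta \
                     + sigma * (rho * dB + np.sqrt(1 - rho**2) * dW)
    lam[m] = lam[m - 1]                           # freeze the final step
    return lam

paths = simulate_eigenvalues(n=10, theta=2.0)
print(paths[-1].round(3))   # the ten eigenvalues nearly coincide as t -> T
```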

2.1.1. No common noise $\rho=0$

First, we consider the case where there is no common noise in the system, and gradually increase the dimension of the matrix process $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ . From Proposition 2.1, for any fixed n, we have the terminal law

\begin{align}\lim_{t\rightarrow T} \boldsymbol{{H}}_t(n) \overset{\text{law}}{=} G^{(n)}\boldsymbol{{I}}(n), \quad G^{(n)}\sim\mathcal{N}\left(0, \Gamma^{(n)} \right), \notag\end{align}

where the variance in our example is given by

(27) \begin{align}\Gamma^{(n)} &= \sigma^2T\frac{|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2} = \frac{4}{(n^2+n)^2}\sum_{j=1}^n j^2,\end{align}

with $\sigma=1$ and $T=1$ . From Corollary 2.1, we also know that this is the law of the weighted ensemble average of the eigenvalues $A_T^{(n)}$ .

For our simulations below, we begin with the case where we set $\theta=2$ . Note that $\theta$ has no impact on the terminal law of the system at time T, which can also be seen from (27); however, this parameter affects the covariance structure of the system until time T. Since every eigenvalue has to converge in time to the same random variable $A_T^{(n)}$ , different choices of $\theta$ can be interpreted as controls on the speed of convergence to $A_T^{(n)}$ . In other words, even if the covariance of the system may increase with different choices of $\theta$ , all the eigenvalues of the system must converge to the same random value nonetheless, which creates different pressure points on the system. Therefore, we shall later change this parameter to demonstrate its impact on the trajectories of the eigenvalues.
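The effect of $\theta$ on the shape of the evolution can be quantified by numerically evaluating the idiosyncratic variance contribution $\int_0^t \gamma(t)^2\gamma(s)^{-2}\,\textrm{d} s$; the sketch below (our addition) uses a simple midpoint rule and shows the peak of this term moving towards T as $\theta$ grows.

```python
import numpy as np

def pinned_variance(t, theta, T=1.0, m=20_000):
    """Midpoint-rule evaluation of int_0^t (gamma(t)/gamma(s))^2 ds for
    f(t) = theta^t log(theta) / (theta^T - theta^t)."""
    s = (np.arange(m) + 0.5) * t / m
    ratio = (theta**T - theta**t) / (theta**T - theta**s)
    return np.mean(ratio**2) * t

ts = np.linspace(0.01, 0.999, 200)
for theta in (2.0, 2.0e6):
    v = [pinned_variance(t, theta) for t in ts]
    print(theta, "variance peaks near t =", round(ts[int(np.argmax(v))], 3))
```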

In Figure 1, we have $n=10$, $n=100$, $n=500$, and $n=5000$. Note that the system ends up closer to zero at time T as we increase the dimension of $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$. This is no surprise, since the variance of the terminal distribution is inversely proportional to n. For the given choices of n, we thus have $\Gamma^{(10)} \approx 0.1273$, $\Gamma^{(100)} \approx 0.0133$, $\Gamma^{(500)} \approx 0.0027$, and $\Gamma^{(5000)} \approx 0.0003$, respectively. In Figure 2, we plot the terminal law of each case in accordance with the eigenvalue trajectory plots above. By (26), the terminal law converges (in terms of distributions) to the Dirac measure at zero as $n\rightarrow\infty$. In Figure 3, we provide an example where we keep $n=5000$ but choose $\theta=2\times10^6$. As discussed above, this does not change the terminal law, and hence we still have $\Gamma^{(5000)} \approx 0.0003$. However, the shape of the evolution changes: note how the maximum covariance of the system is shifted closer to T, relative to the case of $\theta=2$ at the bottom right of Figure 1. As a result, the pressure to converge to $A_T^{(n)}$ increases with increasing $\theta$ as $t\rightarrow T$.

Figure 1. Eigenvalue trajectories for $\theta=2$ and $\rho=0$. Top left and right: $n=10$ and $n=100$. Bottom left and right: $n=500$ and $n=5000$.

Figure 2. Terminal distributions for $\theta=2$ and $\rho=0$. Top left and right: $n=10$ and $n=100$. Bottom left and right: $n=500$ and $n=5000$.

Figure 3. Left: $n=5000$, $\theta=2\times10^6$, $\rho=0$. Right: Distribution of $A_T^{(n)}$.

2.1.2. Common noise $\rho\neq 0$

We shall now admit common noise in the system with $\rho=0.5$ as an example. Now the variance in our example is given by

\begin{align}\Gamma^{(n)} &= \sigma^2T\left(0.25 + \frac{0.75|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}}{n^2}\right) = 0.25 +\frac{3}{(n^2+n)^2}\sum_{j=1}^n j^2, \notag\end{align}

with $\sigma=1$ and $T=1$ . Instead of gradually increasing the dimension of $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ as before, we shall start with $n=5000$ to avoid repetition. Moreover, we shall provide simulations for both $\theta=2$ and $\theta=2\times10^6$ to compare these two cases. First, we provide a sample for $\theta=2$ in Figure 4.

Figure 4. Left: $n=5000$ , $\theta=2$ , $\rho=0.5$ . Right: Distribution of $A_T^{(n)}$ .

Figure 5. Left: $n=5000$ , $\theta=2\times10^6$ , $\rho=0.5$ . Right: Distribution of $A_T^{(n)}$ .

Finally, we set $\theta=2\times10^6$ in Figure 5. Note that the maximum variance is again shifted towards T as we increase $\theta$ , which in turn increases the pressure to converge to $A_T^{(n)}$ .

Because of the common noise factor, we have $\Gamma^{(5000)} \approx 0.2502$ instead of $\Gamma^{(5000)} \approx 0.0003$ as when we had $\rho=0$ . This is a considerable increase in the variance of the random terminal value, the ensemble average $A_T^{(n)}$ , which is why we have a higher probability of getting eigenvalue trajectories that end up away from zero as $t\rightarrow T$ .

We have only considered a specific model of a system that can be studied as part of our proposed framework. Many other examples can be constructed by choosing different f, as long as it satisfies the conditions in Lemma 2.1. We shall briefly discuss another example, which yields mutually independent $\alpha$-Wiener bridges as $n\rightarrow\infty$, where (taking $\sigma=1$, $\rho=0$, $\hat{Z}=0$, and $\beta^{(j,n)}=1$) we have

(28) \begin{align}\lambda^{(i,n)}_t &= \frac{1}{n}\sum_{j=1}^n W^{(j)}_t + \int_0^t \frac{(T-t)^\alpha}{(T-s)^\alpha}\,\textrm{d} W^{(i)}_s - \frac{1}{n}\sum_{j=1}^n \int_0^t\frac{(T-t)^\alpha}{(T-s)^\alpha}\,\textrm{d} W^{(j)}_s,\end{align}

for $i\in\mathcal{I}$ . Using Corollary 2.2, we get the covariance process

(29) \begin{equation}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] =\begin{cases}\frac{t}{n} + \mathbf{1}(i=j)\frac{T\left(1 - \left(\frac{T-t}{T}\right)^{2\alpha}\right) - t}{2\alpha -1} - \frac{T\left(1 - \left(\frac{T-t}{T}\right)^{2\alpha}\right) - t}{n(2\alpha -1)}, & \alpha\neq \frac{1}{2},\\[5pt] \frac{t}{n} + \mathbf{1}(i=j)(t-T)\log\left(\frac{T-t}{T}\right) - \frac{(t-T)}{n}\log\left(\frac{T-t}{T}\right), & \alpha=\frac{1}{2}. \\[5pt] \end{cases}\end{equation}

Note that for $\alpha\neq \frac{1}{2}$ ,

\begin{align}\lim_{t\rightarrow T} \frac{T\left(1 - \left(\frac{T-t}{T}\right)^{2\alpha}\right) - t}{2\alpha -1} &= \lim_{t\rightarrow T} \frac{T-t}{2\alpha -1} - \lim_{t\rightarrow T}\frac{T\left(\frac{T-t}{T}\right)^{2\alpha}}{2\alpha -1} \notag \\[5pt] &= 0. \notag\end{align}

Also, for $\alpha = \frac{1}{2}$ , using l’Hôpital’s rule, we have

\begin{align}\lim_{t\rightarrow T} (t-T)\log\left(\frac{T-t}{T}\right) &= \lim_{t\rightarrow T}\frac{\log\left(\frac{T-t}{T}\right)}{(t-T)^{-1}} \notag \\[5pt] &=\lim_{t\rightarrow T} \frac{\frac{\partial}{\partial t}\log\left(\frac{T-t}{T}\right)}{\frac{\partial}{\partial t}(t-T)^{-1}} \notag \\[5pt] &= \lim_{t\rightarrow T} \frac{(t-T)^{-1}}{-(t-T)^{-2}} \notag \\[5pt] &=\lim_{t\rightarrow T} (T-t) \notag \\[5pt] &= 0. \notag\end{align}
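Both limits can also be verified symbolically; a small sketch (our addition, with an illustrative choice of $\alpha$):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
T, alpha = sp.Integer(1), sp.Rational(3, 4)
core = (T * (1 - ((T - t) / T)**(2 * alpha)) - t) / (2 * alpha - 1)
print(sp.limit(core, t, T, dir='-'))                            # 0 (alpha != 1/2)
print(sp.limit((t - T) * sp.log((T - t) / T), t, T, dir='-'))   # 0 (alpha = 1/2)
```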

This means that for any fixed n, we get the following covariance time-limit:

(30) \begin{align}\lim_{t\rightarrow T}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] = \frac{T}{n}.\end{align}

On the other hand, we get the covariance space-limit

(31) \begin{equation}\lim_{n\rightarrow\infty}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] =\begin{cases}\mathbf{1}(i=j)\frac{T\left(1 - \left(\frac{T-t}{T}\right)^{2\alpha}\right) - t}{2\alpha -1}, & \alpha\neq \frac{1}{2},\\[5pt] \mathbf{1}(i=j)(t-T)\log(\frac{T-t}{T}), & \alpha=\frac{1}{2}. \\[5pt] \end{cases}\end{equation}

We thus have the commutative double limit of the covariance:

\begin{align}\lim_{t\rightarrow T}\lim_{n\rightarrow\infty}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] &= \lim_{n\rightarrow\infty}\lim_{t\rightarrow T}\mathbb{E}[\lambda^{(i,n)}_t \lambda^{(j,n)}_t] \notag \\[5pt] &= 0. \notag\end{align}
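The limits (30) and (31) are easy to inspect numerically from the closed form (29); a short sketch for the $\alpha\neq\frac{1}{2}$ case (our addition, with illustrative values of n, $\alpha$, and t):

```python
def cov_alpha(t, n, alpha, i_eq_j, T=1.0):
    """Covariance (29) of the alpha-Wiener-bridge system for alpha != 1/2."""
    core = (T * (1 - ((T - t) / T)**(2 * alpha)) - t) / (2 * alpha - 1)
    return t / n + (core if i_eq_j else 0.0) - core / n

T, n, alpha = 1.0, 50, 1.0
for t in (0.9, 0.99, 0.9999):          # time limit (30): tends to T/n = 0.02
    print(t, round(cov_alpha(t, n, alpha, i_eq_j=True), 6))
for n_big in (10**3, 10**6):           # space limit (31): tends to 0 for i != j
    print(n_big, round(cov_alpha(0.5, n_big, alpha, i_eq_j=False), 8))
```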

We shall omit further numerical simulations for this family of eigenvalues in order to avoid repetition.

2.2. Different systems with the same terminal laws

One of the key observations from Proposition 2.1 is that the terminal law of $\{\boldsymbol{{H}}_t(n)\}_{t\in\mathbb{T}}$ does not depend on the choice of f (as long as it satisfies the aforementioned conditions). This means that we can work with multiple matrix systems with different eigenvalue dynamics driven by different f functions, and still converge to random variables equal in law to each other. More precisely, let $\{\boldsymbol{{H}}^{(k)}_t(n)\}_{t\in\mathbb{T}}$ be a Hermitian-valued process, where $\{\boldsymbol{\Lambda}^{(k)}_t(n)\}_{t\in\mathbb{T}}$ is its corresponding eigenvalue matrix process, with $\boldsymbol{{H}}^{(k)}_t(n) = \boldsymbol{{U}}^{(k)}_t(n) \boldsymbol{\Lambda}^{(k)}_t(n) \boldsymbol{{U}}^{*(k)}_t(n)$ for all $t\in\mathbb{T}$ and $k=1,\ldots,m$ for some $m\in\mathbb{N}_+$ . We denote the individual eigenvalues by

\begin{align}\boldsymbol{\Lambda}^{(k)}_t(n)=\text{diag}\{\lambda_t^{(1,n,k)}, \ldots, \lambda_t^{(n,n,k)}\}, \notag\end{align}

where the initial states are $\boldsymbol{\Lambda}^{(k)}_0(n)=\textbf{0}$ . We again let $\{A_t^{(n,k)}\}_{t\in\mathbb{T}}$ be the weighted ensemble average process given by

\begin{align}A_t^{(n,k)} = \frac{1}{n}\sum_{i=1}^n \beta^{(i,n,k)} \lambda_t^{(i,n,k)} \quad \text{$\forall t\in\mathbb{T}$}, \notag\end{align}

with $|\beta^{(i,n,k)}| <\infty$ and $\sum_{i=1}^n \beta^{(i,n,k)} = n$ . For the following arguments, we also set

\begin{align}\boldsymbol{\beta}^{(n,k)}=\boldsymbol{\beta}^{(n,l)}=\boldsymbol{\beta}^{(n)} \notag\end{align}

for $k,l=1,\ldots,m$ . As before, eigenvalue dynamics form an interacting system of SDEs

(32) \begin{align}\,\textrm{d} \lambda^{(i,n,k)}_t &= f^{(k)}(t)\left(A^{(n,k)}_t - \lambda^{(i,n,k)}_t \right)\,\textrm{d} t + \sigma\left(\rho \,\textrm{d} B^{(k)}_t + \sqrt{1 -\rho^2}\,\textrm{d} W^{(i,k)}_t\right),\end{align}

for all $t\in\mathbb{T}_{-}$ and $i\in\mathcal{I}$ , and $k=1,\ldots,m$ , where each $f^{(k)}\;:\;\mathbb{T}_{-} \rightarrow\mathbb{R}$ is a continuous measurable function satisfying

\begin{align}\int_0^t\exp\left( -\int_s^t f^{(k)}(u)\,\textrm{d} u\right)\,\textrm{d} s < \infty, \notag\end{align}

and $\sigma\neq 0$ and $\rho\in[\!-\!1,1]$ . Here, $\{W^{(i,k)}_t\}_{t\in\mathbb{T}}$ and $\{B^{(k)}_t\}_{t\in\mathbb{T}}$ are mutually independent standard $(\mathbb{P},\{\mathcal{F}_{t}\})$ -Brownian motions.

Proposition 2.6. Let each map $f^{(p)}$ satisfy

(i) $\lim_{t\rightarrow T}\int_0^t f^{(p)}(s) \,\textrm{d} s = \infty$, and

(ii) $\int_0^\tau f^{(p)}(s) \,\textrm{d} s < \infty$ for any $\tau\in\mathbb{T}_{-}$,

for $p=1,\ldots,m$ . Then the following holds:

\begin{align}\lim_{t\rightarrow T} \boldsymbol{{H}}^{(k)}_t(n) \overset{\text{law}}{=} \lim_{t\rightarrow T}\boldsymbol{{H}}^{(l)}_t(n), \notag\end{align}

for every $k,l=1,\ldots,m$ .

Proof. The statement follows from $\boldsymbol{\Lambda}^{(k)}_0(n)=\textbf{0}$ and $\boldsymbol{\beta}^{(n,k)}=\boldsymbol{\beta}^{(n,l)}=\boldsymbol{\beta}^{(n)}$ for every $k,l=1,\ldots,m$, together with an application of Proposition 2.1.

Finally, since the $(\mathbb{P},\{\mathcal{F}_{t}\})$-Brownian motions above are mutually independent, the terminal averages $A_T^{(n,k)}$ are independent and identically distributed Gaussian random variables across $k=1,\ldots,m$, and averaging them divides their common variance by $m$. Hence the time limit of the ensemble average of the matrices is given by

\begin{align}\lim_{t\rightarrow T} \frac{1}{m}\sum_{k=1}^m\boldsymbol{{H}}^{(k)}_t(n) \overset{\text{law}}{=} R^{(n)} \boldsymbol{{I}}(n), \notag\end{align}

where $R^{(n)} \sim \mathcal{N}\left(0, \Theta^{(n)} \right)$ with

\begin{align}\Theta^{(n)} = \frac{\sigma^2T}{m}\left(\rho^2 + \frac{1 - \rho^2}{n^2}|| \boldsymbol{\beta}^{(n)}||^{2}_{L^2}\right). \notag\end{align}

Therefore, without knowing the details about each $f^{(k)}$ that dictates the stochastic dynamics of the underlying eigenvalues for each $\{\boldsymbol{{H}}^{(k)}_t(n)\}_{t\in\mathbb{T}}$ , we can still derive conclusions about the limiting behaviour of these matrices as discussed above.
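To illustrate, the following minimal simulation sketch assumes the hypothetical admissible drifts $f^{(k)}(t)=c_k/(T-t)$ and uniform weights $\beta^{(i,n)}=1$ (so that $||\boldsymbol{\beta}^{(n)}||^{2}_{L^2}=n$); all parameter values are assumptions. It checks that the per-system terminal averages share one variance, while their mean over $k$ has variance close to $\Theta^{(n)}$.

```python
import numpy as np

# Sketch of Proposition 2.6 and Theta^(n): m systems driven by different
# admissible drifts, here the hypothetical choices f^(k)(t) = c_k/(T - t),
# with uniform weights beta_i = 1 (so that ||beta^(n)||^2 = n).
rng = np.random.default_rng(1)
n, T, sigma, rho = 20, 1.0, 0.8, 0.5
cs = [0.6, 1.0, 2.5]                    # one c_k per system, so m = 3
m = len(cs)
steps, paths = 1000, 2000
dt = T / steps
A_T = np.zeros((paths, m))              # terminal ensemble averages A_T^{(n,k)}
for k, c in enumerate(cs):
    lam = np.zeros((paths, n))
    t = 0.0
    for _ in range(steps - 1):          # stop one step short of the singular time T
        A = lam.mean(axis=1, keepdims=True)
        dB = np.sqrt(dt) * rng.standard_normal((paths, 1))   # common noise B^(k)
        dW = np.sqrt(dt) * rng.standard_normal((paths, n))   # idiosyncratic W^(i,k)
        lam += (c / (T - t)) * (A - lam) * dt \
               + sigma * (rho * dB + np.sqrt(1 - rho**2) * dW)
        t += dt
    A_T[:, k] = lam.mean(axis=1)
theta = sigma**2 * T / m * (rho**2 + (1 - rho**2) / n)   # ||beta||^2 = n here
print("per-system Var(A_T^(k))  :", A_T.var(axis=0))     # roughly equal across k
print("Var of the average over k:", A_T.mean(axis=1).var())
print("Theta^(n)                :", theta)
```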

3. Conclusion

We studied a multivariate system modelled by a random matrix whose eigenvalues interact in a mean-field way and converge in time to their weighted ensemble average. We produced a class of Hermitian-valued processes that converge to the identity matrix scaled by a Gaussian random variable with variance inversely proportional to the size of the system. As future research, the framework can be extended so that there are multiple distinct time points over the evolution of the system at which the eigenvalues converge to their respective weighted ensemble averages in a successive manner. For this direction, a different but related framework was studied in [Reference Mengütürk and Mengütürk43] to analyse energy-based quantum state reduction phenomena when quantum observables are measured sequentially at different times. We believe that the mathematical machinery we have provided in this paper can be used to produce mean-field interacting counterparts of such quantum reduction models.

Acknowledgements

The author is grateful to the anonymous referee for their review of the paper.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

Data

The data related to the simulations are available from the author upon request.

References

[1] Adler, M., Delépine, J. and van Moerbeke, P. (2009). Dyson's nonintersecting Brownian motions with a few outliers. Commun. Pure Appl. Math. 62, 334–395.
[2] Adler, M., Ferrari, P. L. and van Moerbeke, P. (2010). Airy processes with wanderers and new universality classes. Ann. Prob. 38, 714–769.
[3] Adler, M. and van Moerbeke, P. (2005). PDEs for the joint distributions of the Dyson, Airy and Sine processes. Ann. Prob. 33, 1326–1361.
[4] Adler, M., van Moerbeke, P. and Wang, D. (2013). Random matrix minor processes related to percolation theory. Random Matrices Theory Appl. 2, article no. 1350008.
[5] Adler, S. L., Brody, D. C., Brun, T. A. and Hughston, L. P. (2001). Martingale models for quantum state reduction. J. Phys. A 34, article no. 8795.
[6] Adler, S. L. and Horwitz, L. P. (2000). Structure and properties of Hughston's stochastic extension of the Schrödinger equation. J. Math. Phys. 41, article no. 2485.
[7] Akemann, G. (2007). Matrix models and QCD with chemical potential. Internat. J. Mod. Phys. A 22, 1077–1122.
[8] Bahcall, S. R. (1996). Random matrix model for superconductors in a magnetic field. Phys. Rev. Lett. 77, article no. 5276.
[9] Barczy, M. and Kern, P. (2011). General $\alpha$-Wiener bridges. Commun. Stoch. Anal. 5, 585–608.
[10] Barczy, M. and Pap, G. (2010). $\alpha$-Wiener bridges: singularity of induced measures and sample path properties. Stoch. Anal. Appl. 28, 447–466.
[11] Beenakker, C. W. J. (1997). Random-matrix theory of quantum transport. Rev. Mod. Phys. 69, article no. 731.
[12] Bhadola, P. and Deo, N. (2015). Study of RNA structures with a connection to random matrix theory. Chaos Solitons Fractals 81, 542–550.
[13] Bleher, P. M. and Kuijlaars, A. B. J. (2004). Large n limit of Gaussian random matrices with external source, Part I. Commun. Math. Phys. 252, 43–76.
[14] Bohigas, O., Giannoni, M. J. and Schmit, C. (1984). Characterization of chaotic quantum spectra and universality of level fluctuation laws. Phys. Rev. Lett. 52, article no. 1.
[15] Brézin, E. and Hikami, S. (1998). Level spacing of random matrices in an external source. Phys. Rev. E 58, 7176–7185.
[16] Brézin, E. and Hikami, S. (1998). Universal singularity at the closure of a gap in a random matrix theory. Phys. Rev. E 57, 4140–4149.
[17] Brody, D. C. and Hughston, L. P. (2005). Finite-time stochastic reduction models. J. Math. Phys. 46, article no. 082101.
[18] Brody, D. C. and Hughston, L. P. (2006). Quantum noise and stochastic reduction. J. Phys. A 39, article no. 833.
[19] Bru, M.-F. (1989). Diffusions of perturbed principal component analysis. J. Multivariate Anal. 29, 127–136.
[20] Bru, M.-F. (1991). Wishart processes. J. Theoret. Prob. 4, 725–751.
[21] Carmona, R. A., Fouque, J. P. and Sun, L. H. (2015). Mean field games and systemic risk. Commun. Math. Sci. 13, 911–933.
[22] Corwin, I. and Hammond, A. (2014). Brownian Gibbs property for Airy line ensembles. Invent. Math. 195, 441–508.
[23] Dyson, F. J. (1962). A Brownian-motion model for the eigenvalues of a random matrix. J. Math. Phys. 3, article no. 1191.
[24] García del Molino, L. C., Pakdaman, K., Touboul, J. and Wainrib, G. (2013). Synchronization in random balanced networks. Phys. Rev. E 88, article no. 042824.
[25] Gisin, N. and Percival, I. C. (1992). The quantum-state diffusion model applied to open systems. J. Phys. A 25, article no. 5677.
[26] Grabiner, D. J. (1999). Brownian motion in a Weyl chamber, non-colliding particles, and random matrices. Ann. Inst. H. Poincaré Prob. Statist. 35, 177–204.
[27] Guhr, T., Müller-Groeling, A. and Weidenmüller, H. A. (1998). Random-matrix theories in quantum physics: common concepts. Phys. Reports 299, 189–425.
[28] Haake, F. (2001). Quantum Signatures of Chaos. Springer, Berlin, Heidelberg.
[29] Hildebrandt, F. and Rœlly, S. (2020). Pinned diffusions and Markov bridges. J. Theoret. Prob. 33, 906–917.
[30] Hughes, C. P., Keating, J. P. and O'Connell, N. (2000). Random matrix theory and the derivative of the Riemann zeta function. Proc. R. Soc. London A 456, 2611–2627.
[31] Hughston, L. P. (1996). Geometry of stochastic state vector reduction. Proc. R. Soc. London A 452, 953–979.
[32] Johansson, K. (2005). The arctic circle boundary and the Airy process. Ann. Prob. 33, 1–30.
[33] Katori, M. (2014). Determinantal martingales and noncolliding diffusion processes. Stoch. Process. Appl. 124, 3724–3768.
[34] Katori, M. and Tanemura, H. (2004). Symmetry of matrix-valued stochastic processes and noncolliding diffusion particle systems. J. Math. Phys. 45, article no. 3058.
[35] Keating, J. P. and Snaith, N. C. (2003). Random matrices and L-functions. J. Phys. A 36, article no. 2859.
[36] Laloux, L., Cizeau, P., Bouchaud, J.-P. and Potters, M. (1999). Noise dressing of financial correlation matrices. Phys. Rev. Lett. 83, article no. 1467.
[37] Laloux, L., Cizeau, P., Potters, M. and Bouchaud, J.-P. (2000). Random matrix theory and financial correlations. Internat. J. Theoret. Appl. Finance 3, 391–397.
[38] Li, X.-M. (2018). Generalised Brownian bridges: examples. Markov Process. Relat. Fields 24, 151–163.
[39] Liechty, K. and Wang, D. (2016). Nonintersecting Brownian motions on the unit circle. Ann. Prob. 44, 1134–1211.
[40] Lillo, F. and Mantegna, R. N. (2005). Spectral density of the correlation matrix of factor models: a random matrix theory approach. Phys. Rev. E 72, article no. 016219.
[41] Mengütürk, L. A. (2016). Stochastic Schrödinger evolution over piecewise enlarged filtrations. J. Math. Phys. 57, article no. 032106.
[42] Mengütürk, L. A. (2021). A family of interacting particle systems pinned to their ensemble average. J. Phys. A 54, article no. 435001.
[43] Mengütürk, L. A. and Mengütürk, M. C. (2020). Stochastic sequential reduction of commutative Hamiltonians. J. Math. Phys. 61, article no. 102104.
[44] Mezzadri, F. and Snaith, N. C. (2005). Recent Perspectives in Random Matrix Theory and Number Theory. Cambridge University Press.
[45] Muir, D. R. and Mrsic-Flogel, T. (2015). Eigenspectrum bounds for semirandom matrices with modular and spatial structure for neural networks. Phys. Rev. E 91, article no. 042808.
[46] Orland, H. and Zee, A. (2002). RNA folding and large N matrix theory. Nuclear Phys. B 620, 456–476.
[47] Prähofer, M. and Spohn, H. (2002). Scale invariance of the PNG droplet and the Airy process. J. Statist. Phys. 108, 1071–1106.
[48] Rajan, K. and Abbott, L. F. (2006). Eigenvalue spectra of random matrices for neural networks. Phys. Rev. Lett. 97, article no. 188104.
[49] Seligman, T. H., Verbaarschot, J. J. M. and Zirnbauer, M. R. (1984). Quantum spectra and transition from regular to chaotic classical motion. Phys. Rev. Lett. 53, article no. 215.
[50] Tracy, C. A. and Widom, H. (2006). The Pearcey process. Commun. Math. Phys. 263, 381–400.
[51] Verbaarschot, J. J. M. and Wettig, T. (2000). Random matrix theory and chiral symmetry in QCD. Annual Rev. Nuclear Particle Sci. 50, 343–410.
[52] Vernizzi, G. and Orland, H. (2015). Random matrix theory and ribonucleic acid (RNA) folding. In The Oxford Handbook of Random Matrix Theory, Oxford University Press, pp. 873–897.
[53] Wigner, E. (1955). Characteristic vectors of bordered matrices with infinite dimensions. Ann. Math. 62, 548–564.
[54] Yamamoto, N. and Kanazawa, T. (2009). Dense QCD in a finite volume. Phys. Rev. Lett. 103, article no. 032001.
Figure 1. Top left and right: $n=10$ and $n=100$. Bottom left and right: $n=500$ and $n=5000$.

Figure 2. Top left and right: $n=10$ and $n=100$. Bottom left and right: $n=500$ and $n=5000$.

Figure 3. Left: $n=5000$, $\theta=2\times10^6$, $\rho=0$. Right: distribution of $A_T^{(n)}$.

Figure 4. Left: $n=5000$, $\theta=2$, $\rho=0.5$. Right: distribution of $A_T^{(n)}$.

Figure 5. Left: $n=5000$, $\theta=2\times10^6$, $\rho=0.5$. Right: distribution of $A_T^{(n)}$.