
Gaussian fluctuations in the equipartition principle for Wigner matrices

Published online by Cambridge University Press:  23 August 2023

Giorgio Cipolloni
Affiliation:
Princeton Center for Theoretical Science, Princeton University, Princeton, NJ 08544, USA; E-mail: gc4233@princeton.edu
László Erdős
Affiliation:
Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria; E-mail: lerdos@ist.ac.at
Joscha Henheik
Affiliation:
Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria; E-mail: joscha.henheik@ist.ac.at
Oleksii Kolupaiev
Affiliation:
Institute of Science and Technology Austria, Am Campus 1, 3400 Klosterneuburg, Austria; E-mail: oleksii.kolupaiev@ist.ac.at

Abstract

The total energy of an eigenstate in a composite quantum system tends to be distributed equally among its constituents. We identify the quantum fluctuation around this equipartition principle in the simplest disordered quantum system consisting of linear combinations of Wigner matrices. As our main ingredient, we prove the Eigenstate Thermalisation Hypothesis and Gaussian fluctuation for general quadratic forms of the bulk eigenvectors of Wigner matrices with an arbitrary deformation.

Type
Mathematical Physics
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press

1 Introduction

The general principle of the equipartition of energy for a classical ergodic system asserts that in equilibrium, the total energy is equally distributed among all elementary degrees of freedom. A similar principle for the kinetic energy has recently been verified for a general quantum system coupled to a heat bath [Reference Łuczka33] (see also the previous works on the free Brownian particle and a dissipative harmonic oscillator in [Reference Bialas, Spiechowicz and Łuczka6, Reference Spiechowicz, Bialas and Łuczka37, Reference Bialas, Spiechowicz and Łuczka7] and extensive literature therein). Motivated by E. Wigner’s original vision to model any sufficiently complex quantum system by random matrices, a particularly strong microcanonical version of the equipartition principle for Wigner matrices was first formulated and proven in [Reference Bao, Erdős and Schnelli5]. In its simplest form, consider a fully mean-field random Hamilton operator H acting on the high dimensional quantum state space $\mathbf {C}^N$ that consists of the sum of two independent $N\times N$ Wigner matrices,

$$ \begin{align*}H = W_1+W_2\,, \end{align*} $$

as two constituents of the system. Recall that Wigner matrices $W= (w_{ij})$ are real or complex Hermitian random matrices with independent (up to the symmetry constraint $w_{ij}=\bar w_{ji}$ ), identically distributed entries. Let $\textbf {u}$ be a normalised eigenvector of H with eigenvalue (energy) $\lambda =\langle \textbf {u}, H\textbf {u} \rangle $ ; then equipartition asserts that $\langle \textbf {u}, W_l\textbf {u} \rangle \approx \frac {1}{2}\lambda $ for $l=1,2$ . In [Reference Bao, Erdős and Schnelli5, Theorem 3.4], even a precise error bound was proven (i.e., that

(1.1) $$ \begin{align} \Big| \langle \textbf{u}, W_l\textbf{u} \rangle - \frac{1}{2}\lambda \Big| \le \frac{N^\epsilon}{\sqrt{N}}\,, \qquad l=1,2\, \end{align} $$

holds with very high probability for any fixed $\epsilon>0$ ). This estimate is optimal up to the $N^\epsilon $ factor. The main result of the current paper is to identify the fluctuation in (1.1). More precisely, we will show that

$$ \begin{align*}\sqrt{N} \Big[ \langle \mathbf{u}, W_l\mathbf{u} \rangle - \frac{1}{2}\lambda \Big] \end{align*} $$

converges to a centred normal distribution as $N\to \infty $ . We also compute its variance that turns out to be independent of the energy $\lambda $ but depends on the symmetry class (real or complex). The result can easily be extended to the case when H is a more general linear combination of several independent Wigner matrices.
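As a purely illustrative numerical sketch (not part of the paper's argument), the equipartition statement $\langle \mathbf{u}, W_1\mathbf{u}\rangle \approx \frac{1}{2}\lambda$ with fluctuations of order $N^{-1/2}$ can be observed directly by diagonalising a moderate-size sample of $H = W_1 + W_2$; all sizes and the random seed below are our own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def wigner(N, rng):
    """Real symmetric Wigner matrix with off-diagonal entry variance 1/N."""
    A = rng.standard_normal((N, N))
    return (A + A.T) / np.sqrt(2 * N)

N = 300
W1, W2 = wigner(N, rng), wigner(N, rng)
H = W1 + W2
lam, U = np.linalg.eigh(H)

i = N // 2                       # a bulk index
u = U[:, i]
overlap = u @ W1 @ u             # <u, W1 u>
err = abs(overlap - lam[i] / 2)  # expected to be of order N^{-1/2}
print(err, 1 / np.sqrt(N))
```

For $\mathbf{p}=(1,1)$ and $\beta=1$, the normalisation in Theorem 2.2 below predicts that $\sqrt{N}\,\mathrm{err}$ is an order-one Gaussian quantity, which is consistent with what such samples show.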

The estimate (1.1) is reminiscent of the recently proven Eigenstate Thermalisation Hypothesis (ETH), also known as the Quantum Unique Ergodicity (QUE),Footnote 1 for Wigner matrices in [Reference Cipolloni, Erdős and Schröder16, Theorem 2.2] (see also [Reference Cipolloni, Erdős and Schröder20, Theorem 2.6] for an improvement), which asserts that

(1.2) $$ \begin{align} \Big| \langle \textbf{u}, A\textbf{u}\rangle - \langle A\rangle \Big| \le \frac{N^\epsilon}{\sqrt{N}}\,, \qquad \langle A\rangle:= \frac{1}{N} \mathrm{Tr} A\, \end{align} $$

holds for any bounded deterministic matrix A. In fact, even the Gaussian fluctuation of

(1.3) $$ \begin{align} \sqrt{N} \Big[ \langle \textbf{u}, A\textbf{u}\rangle - \langle A\rangle \Big] \end{align} $$

was proven in [Reference Cipolloni, Erdős and Schröder18, Theorem 2.2] and [Reference Cipolloni, Erdős and Schröder20, Theorem 2.8] (see also [Reference Benigni and Lopatto10] and the recent generalisation [Reference Benigni and Cipolloni9] to off-diagonal elements as well as joint Gaussianity of several quadratic forms). Earlier results on ETH [Reference Erdős, Schlein and Yau24, Reference Knowles and Yin30, Reference Bloemendal, Erdős, Knowles, Yau and Yin13, Reference Benigni and Lopatto11] and its fluctuations [Reference Bourgade and Yau14, Reference Knowles and Yin29, Reference Tao and Vu38, Reference Marcinek and Yau35] for Wigner matrices typically concerned rank one or finite rank observables A.

Despite their apparent similarities, the quadratic form in (1.1) is essentially different from that in (1.2) since $W_l$ is strongly correlated with $\mathbf {u}$ , whereas A in (1.2) is deterministic. This explains the difference in the two leading terms; note that $\frac {1}{2}\lambda $ in (1.1) is energy-dependent and it is far from the value $\langle W_l\rangle \approx 0$ , which might be erroneously guessed from (1.2). Still, the basic approach leading to ETH (1.2) is very useful to study $\langle \textbf {u}, W_l\textbf {u} \rangle $ as well. The basic idea is to condition on one of the Wigner matrices – for example, $W_2$ – and consider $H=W_1+W_2$ in the probability space of $W_1$ as a Wigner matrix with an additive deterministic deformation $W_2$ . Assume that we can prove the generalisation of ETH (1.2) for such deformed Wigner matrices. In the space of $W_1$ , this would result in a concentration of $\langle \mathbf {u}, W_2\mathbf {u}\rangle $ around some quantity $f(W_2)$ depending only on $W_2$ ; however, the answer will nontrivially depend on the deformation (i.e., it will not be simply $\langle W_2\rangle $ ). Once the correct form of $f(W_2)$ is established, we can find its concentration in the probability space of $W_2$ , yielding the final answer.

To achieve these results, we prove more general ETH and fluctuation results for eigenvector overlaps of deformed Wigner matrices of the general form $H=W+D$ , where W is an $N\times N$ Wigner matrix and D is an arbitrary, bounded deterministic matrix. The goal is to establish the concentration and the fluctuation of the quadratic form $\langle \mathbf {u}, A\mathbf {u}\rangle $ for a normalised eigenvector $\mathbf {u}$ of H with a bounded deterministic matrix A. We remark that for the special case of a rank one matrix $A=|\mathbf {q}\rangle \langle \mathbf {q}|$ , ETH is equivalent to the complete isotropic delocalisation of the eigenvector $\mathbf {u}$ (i.e., that $|\langle \mathbf {q}, \mathbf {u}\rangle |\le N^\epsilon /\sqrt {N}$ for any deterministic vector $\mathbf {q}$ with $\|\mathbf {q}\|=1$ ). For a diagonal deformation D, this has been achieved in [Reference Lee and Schnelli32, Reference Landon and Yau31], and for the general deformation D, this has been achieved in [Reference Erdős, Krüger and Schröder27]. The normal fluctuation of $\langle \mathbf {u}, A\mathbf {u}\rangle $ for a finite rank A and diagonal D was obtained in [Reference Benigni8].

It is well-known that for very general mean-field type random matrices H, their resolvent $G(z)=(H-z)^{-1}$ concentrates around a deterministic matrix $M=M(z)$ ; such results are called local laws, and we will recall them precisely in (4.1). Here, M is a solution of the matrix Dyson equation (MDE), which, in the case of $H=W+D$ , reads as

(1.4) $$ \begin{align} -\frac{1}{M(z)}= z-D +\langle M (z)\rangle\,. \end{align} $$

Given M, it turns out that

(1.5) $$ \begin{align} \langle \mathbf{u}_i, A\mathbf{u}_j \rangle \approx \delta_{i, j} \frac{ \langle\Im M(\lambda_i) A\rangle}{\langle \Im M(\lambda_i)\rangle}\,, \end{align} $$

where $\lambda _i$ is the eigenvalue corresponding to the normalised eigenvector $\mathbf {u}_i$ (i.e., $H\mathbf {u}_i=\lambda _i \mathbf {u}_i$ ). Since the eigenvalues are rigid (i.e., they fluctuate only very little), the right-hand side of (1.5) is essentially deterministic and, in general, it depends on the energy. Similarly to (1.3), we will also establish the Gaussian fluctuation around the approximation (1.5). For zero deformation, $D=0$ , the matrix M is constant and (1.5) recovers (1.2)–(1.3) as a special case. For simplicity, in the current paper we establish all these results only in the bulk of the spectrum, but similar results may be obtained at the edge and at the possible cusp regime of the spectrum as well; the details are deferred to later works.
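As a hypothetical numerical aside (not part of the paper), for a diagonal deformation $D=\mathrm{diag}(d_1,\dots,d_N)$ the MDE (1.4) reduces to the scalar system $m_a = 1/(d_a - z - \langle m\rangle)$, which can be solved by damped fixed-point iteration; the choice of $D$, $z$ and the damping below are ours.

```python
import numpy as np

N = 1000
d = np.linspace(-1.0, 1.0, N)      # toy bounded deformation spectrum
z = 0.3 + 0.01j                    # spectral parameter, Im z > 0

m = 1j * np.ones(N)                # initial guess with Im m > 0
for _ in range(20000):
    # MDE (1.4) for diagonal D: m_a = 1/(d_a - z - <m>); damped update
    m_next = 0.5 * m + 0.5 / (d - z - m.mean())
    if np.max(np.abs(m_next - m)) < 1e-12:
        m = m_next
        break
    m = m_next

rho = m.mean().imag / np.pi        # approximates the scDos at e = Re z
print(rho)
```

The fixed point $m$ then determines the deterministic approximation in (1.5) via $\Im M(\lambda_i)$, evaluated at the real axis (small $\Im z$).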

We now comment on the new aspects of our methods. The proof of (1.5) relies on a basic observation about the local law for $H=W+D$ . Its average form asserts that

(1.6) $$ \begin{align} \Big| \big\langle (G(z)-M(z)) A\big\rangle\Big| \le \frac{N^\epsilon}{N \eta}\,, \qquad \eta : =|\Im z| \gg \frac{1}{N}\, \end{align} $$

holds with very high probability and the error is essentially optimal for any bounded deterministic matrix A. However, there is a codimension one subspace of the matrices A for which the estimate improves to $N^\epsilon /(N\sqrt {\eta })$ , gaining a $\sqrt {\eta }$ factor in the relevant small $\eta \ll 1$ regime. For Wigner matrices $H=W$ without deformation, the traceless matrices A played this special role. The key idea behind the proof of ETH for Wigner matrices in [Reference Cipolloni, Erdős and Schröder16] was to decompose any deterministic matrix as $A =: \langle A\rangle + \mathring {A}$ into its tracial and traceless parts and prove multi-resolvent generalisations of the local law (1.6) with an error term distinguishing whether the deterministic matrices are traceless or not. For example, for a typical A, we have

$$ \begin{align*} \langle G(z) A G(z)^* A \rangle\sim \frac{1}{\eta}, \end{align*} $$

but for the traceless part of A, we have

$$ \begin{align*} \langle G(z) \mathring{A} G(z)^* \mathring{A} \rangle\sim 1\,, \qquad \eta\gg \frac{1}{N}\,, \end{align*} $$

with appropriately matching error terms. In general, each traceless A improves the typical estimate by a factor $\sqrt {\eta }$ . ETH then follows from the spectral theorem

(1.7) $$ \begin{align} \frac{1}{N}\sum_{i,j=1}^N \frac{\eta}{(\lambda_i-e)^2+\eta^2} \frac{\eta}{(\lambda_j-e)^2+\eta^2} |\langle \mathbf{u}_i, \mathring{A} \mathbf{u}_j\rangle|^2 = \langle \Im G(z) \mathring{A} \Im G(z) \mathring{A} \rangle \le N^\epsilon\,. \end{align} $$

Choosing $z=e+\mathrm {i} \eta $ appropriately with $\eta \sim N^{-1+\epsilon }$ , we obtain that $ |\langle \mathbf {u}_i, \mathring {A} \mathbf {u}_j\rangle |^2\le N^{-1+3\epsilon }$ which even includes an off-diagonal version of (1.2).
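The spectral decomposition used in (1.7) is an exact algebraic identity for any Hermitian $H$ and Hermitian $A$; the following small check (our own illustration, not from the paper) verifies it on a random real symmetric sample with $\mathring{A}$ the traceless part.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60
H = rng.standard_normal((N, N)); H = (H + H.T) / np.sqrt(2 * N)
A = rng.standard_normal((N, N)); A = (A + A.T) / 2
A -= np.trace(A) / N * np.eye(N)          # traceless part of A

e, eta = 0.1, 0.05
z = e + 1j * eta
G = np.linalg.inv(H - z * np.eye(N))
ImG = (G - G.conj().T) / 2j               # Im G = sum_i K(l_i) |u_i><u_i|
lhs = np.trace(ImG @ A @ ImG @ A).real / N

lam, U = np.linalg.eigh(H)
K = eta / ((lam - e) ** 2 + eta ** 2)     # Poisson kernel at (e, eta)
O = U.T @ A @ U                           # overlap matrix <u_i, A u_j>
rhs = (K[:, None] * K[None, :] * O ** 2).sum() / N
print(lhs, rhs)                           # the two sides agree
```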

To extend this argument to deformed Wigner matrices requires identifying the appropriate singular (‘tracial’) and regular (‘traceless’) parts of an arbitrary matrix. It turns out that the improved local laws around an energy $e=\Re z$ hold if A is orthogonalFootnote 2 to $\Im M(e)$ ; see (2.11) for the new definition of $\mathring {A}$ , which denotes the regular part of A. In this theory, the matrix $\Im M$ emerges as the critical eigenvector of a linear stability operator $\mathcal {B}=I-M \langle \cdot \rangle M^*$ related to the MDE (1.4). The major complication compared with the pure Wigner case in [Reference Cipolloni, Erdős and Schröder16] is that now the regular part of a matrix becomes energy-dependent. In particular, in a multi-resolvent chain $\langle G(z_1)A_1 G(z_2) A_2 \ldots \rangle $ , it is a priori unclear at which spectral parameters the matrices $A_i$ should be regularised; it turns out that the correct regularisation depends on both $z_i$ and $z_{i+1}$ (see (4.10) later). A similar procedure was performed for the Hermitisation of non-Hermitian i.i.d. matrices with a general deformation in [Reference Cipolloni, Erdős, Henheik and Schröder21]; see [Reference Cipolloni, Erdős, Henheik and Schröder21, Appendix A] for a more conceptual presentation. Having identified the correct regularisation, we derive a system of master inequalities for the error terms in multi-resolvent local laws for regular observables; a similar strategy (with minor modifications) has been used in [Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős and Schröder20] for Wigner matrices and in [Reference Cipolloni, Erdős, Henheik and Schröder21] for i.i.d. matrices. To keep the presentation short, here we will not aim at the most general local laws with optimal errors, unlike in [Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős and Schröder20]. 
Although these would be achievable with our methods, here we prove only what is needed for our main results on the equipartition.

The proof of the fluctuation around the ETH uses Dyson Brownian motion (DBM) techniques – namely, the Stochastic Eigenstate Equation for quadratic forms of eigenvectors. This theory has been gradually developed for Wigner matrices in [Reference Bourgade and Yau14, Reference Bourgade, Yau and Yin15, Reference Marcinek and Yau35]. We closely follow the presentation in [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20]. The extension of this technique to deformed Wigner matrices is fairly straightforward, so our presentation will be brief. The necessary inputs for this DBM analysis follow from the multi-resolvent local laws that we prove for deformed Wigner matrices.

In a closing remark, we mention that the original proof of (1.1) in [Reference Bao, Erdős and Schnelli5] was considerably simpler than that of (1.2). This may appear quite surprising due to the complicated correlation between $\mathbf {u}$ and $\mathcal {W}$ , but a special algebraic cancellation greatly helped in [Reference Bao, Erdős and Schnelli5]. Namely, with the notation $\mathcal {W}:=W_1-W_2$ and $G(z):=(H-z)^{-1}$ , $z=e+\mathrm {i} \eta \in \mathbf {C}_+$ , a relatively straightforward cumulant expansion showed that $ \langle \Im G(z) \mathcal {W} \Im G(z) \mathcal {W} \rangle $ is essentially boundedFootnote 3 even for spectral parameters z very close to the real axis, $\eta \ge N^{-1+\epsilon /2}$ . Within this cumulant expansion, an algebraic cancellation emerged due to the special form of $\mathcal {W}$ . Then, exactly as in (1.7), we obtain $|\langle \mathbf {u}, \mathcal {W}\mathbf {u}\rangle |^2\lesssim N^{1+\epsilon }\eta ^2 = N^{-1+2\epsilon }$ . In particular, it shows that $\langle \mathbf {u}, \mathcal {W} \mathbf {u}\rangle = \langle \mathbf {u}, W_1 \mathbf {u}\rangle - \langle \mathbf {u}, W_2 \mathbf {u}\rangle $ is essentially of order $N^\epsilon /\sqrt {N}$ for every eigenvector of H. Since $\langle \mathbf {u}, W_1 \mathbf {u}\rangle + \langle \mathbf {u}, W_2 \mathbf {u}\rangle = \langle \mathbf {u}, H \mathbf {u}\rangle =\lambda $ , we immediately obtain the equipartition (1.1). A similar idea proved the more general case – see (2.4) later. Note, however, that this trick does not help in establishing the fluctuations of $\langle \mathbf {u}, W_l \mathbf {u}\rangle $ . In fact, the full ETH analysis for deformed Wigner matrices needs to be performed to establish the necessary a priori bounds for the Dyson Brownian motion arguments.

Notations and conventions

For positive quantities $f,g$ , we write $f\lesssim g$ and $f\sim g$ if $f \le C g$ or $c g\le f\le Cg$ , respectively, for some constants $c,C>0$ which depend only on the constants appearing in the moment condition – see (2.1) later. For any natural number n, we set $[n]: =\{ 1, 2,\ldots ,n\}$ .

We denote vectors by bold-faced lower case Roman letters $\mathbf {x}, \mathbf {y}\in \mathbf {C} ^N$ for some $N\in \mathbf {N}$ . Vector and matrix norms, $\lVert \mathbf {x}\rVert $ and $\lVert A\rVert $ , indicate the usual Euclidean norm and the corresponding induced matrix norm. For any $N\times N$ matrix A, we use the notation $\langle A\rangle := N^{-1}\mathrm {Tr} A$ to denote the normalised trace of A. Moreover, for vectors $\mathbf {x}, \mathbf {y}\in \mathbf {C}^N$ and matrices $A\in \mathbf {C}^{N\times N}$ , we define the scalar product

$$\begin{align*}\langle \mathbf{x},\mathbf{y}\rangle:= \sum_{i=1}^N \overline{x}_i y_i\,. \end{align*}$$

Finally, we will use the concept of ‘with very high probability’ (w.v.h.p.), meaning that for any fixed $D>0$ , the probability of an N-dependent event is bigger than $1-N^{-D}$ for $N\ge N_0(D)$ . We introduce the notion of stochastic domination (see, for example, [Reference Erdős, Knowles, Yau and Yin25]): given two families of non-negative random variables

$$\begin{align*}X=\left(X^{(N)}(u) : N\in\mathbf{N}, u\in U^{(N)} \right) \quad \mathrm{and}\quad Y=\left(Y^{(N)}(u) : N\in\mathbf{N}, u\in U^{(N)} \right) \end{align*}$$

indexed by N (and possibly some parameter u in some parameter space $U^{(N)}$ ), we say that X is stochastically dominated by Y, if for all $\xi , D>0$ , we have

(1.8) $$ \begin{align} \sup_{u\in U^{(N)}} \mathbf{P}\left[X^{(N)}(u)>N^\xi Y^{(N)}(u)\right]\le N^{-D} \end{align} $$

for large enough $N\ge N_0(\xi ,D)$ . In this case, we use the notation $X\prec Y$ or $X= \mathcal {O}_\prec (|Y|)$ . We also use the convention that $\xi>0$ denotes an arbitrary small exponent which is independent of N.

2 Main results

We consider $N\times N$ real symmetric or complex Hermitian Wigner matrices $W = W^*$ having single-entry distributions $w_{ab}=N^{-1/2}\chi _{\mathrm {od}}$ , for $a>b$ , and $w_{aa}=N^{-1/2}\chi _{\mathrm {d}}$ , where $\chi _{\mathrm {od}}$ and $\chi _{\mathrm {d}}$ are two independent random variables satisfying the following assumptions:

Assumption 2.1. We assume that $\chi _{\mathrm {d}}$ is a real centred random variable, and that $\chi _{\mathrm {od}}$ is a real or complex random variable such that $\mathbf {E} \chi _{\mathrm {od}}=0$ and $\mathbf {E} |\chi _{\mathrm {od}}|^2=1$ ; additionally, in the complex case, we also assume that $\mathbf {E}\chi _{\mathrm {od}}^2=0$ . Customarily, we use the parameter $\beta =1,2$ to indicate the real or complex case, respectively. Furthermore, we assume that all the moments of $\chi _{\mathrm {od}}$ and $\chi _{\mathrm {d}}$ exist; that is, for any $p\in \mathbf {N}$ , there exists a constant $C_p>0$ such that

(2.1) $$ \begin{align} \mathbf{E}|\chi_{\mathrm{od}}|^p+\mathbf{E} |\chi_{\mathrm{d}}|^p\le C_p. \end{align} $$

For definiteness, in the sequel, we perform the entire analysis for the complex case – the real case being completely analogous and hence omitted.

The equipartition principle concerns linear combinations of Wigner matrices. Fix $k \in \mathbf {N}$ and consider

(2.2) $$ \begin{align} H:=p_1W_1+\dots+p_kW_k \end{align} $$

for some fixed N-independent vector $\textbf {p} = (p_1, ... , p_k) \in \mathbf {R}^k$ of weights and for k independent $N\times N$ Wigner matrices $W_l$ , belonging to the same symmetry class (i.e., the off-diagonal random variables $\chi _{\mathrm {od}}$ are either real or complex for each of the $W_l$ , $l \in [k]$ ). Then, denoting by $\{\lambda _i\}_{i \in [N]}$ the eigenvalues of H, arranged in increasing order, with associated normalised eigenvectors $\{\mathbf {u}_i\}_{i \in [N]}$ , the total energy $\langle \textbf {u}_i, H \textbf {u}_i \rangle = \lambda _i$ of the composite system (2.2) is proportionally distributed among the k constituents; that is,

(2.3) $$ \begin{align} \langle \mathbf{u}_i, p_lW_l \, \mathbf{u}_i\rangle\approx \frac{p_l^2 }{\Vert \textbf{p}\Vert^2} \lambda_i \end{align} $$

for every $l \in [k]$ , where $\Vert \textbf {p}\Vert := \big (\sum _{l=1}^k |p_l|^2\big )^{1/2}$ denotes the usual $\ell ^2$ -norm. This phenomenon, known as equipartition, was first proven in [Reference Bao, Erdős and Schnelli5, Theorem 3.4] with an optimal error estimate:

(2.4) $$ \begin{align} \left|\langle \mathbf{u}_i, \, p_lW_l \mathbf{u}_j\rangle-\delta_{i,j}\frac{p_l^2 }{\Vert \textbf{p}\Vert^2} \lambda_i\right|\prec \frac{1}{\sqrt{N}}\,. \end{align} $$

Our main result is the Central Limit Theorem corresponding to (2.4) for $i = j$ (i.e., the proof of Gaussian fluctuations in the equipartition principle for Wigner matrices, for energies in the bulk of the spectrum of H).

Theorem 2.2 (Gaussian Fluctuations in Equipartition).

Fix $k\in \mathbf {N}$ . Let $W_1,\dots , W_k$ be independent Wigner matrices satisfying Assumption 2.1, all of which are in the same real ( $\beta =1$ ) or complex ( $\beta =2$ ) symmetry class, and $\textbf {p} = (p_1,\dots ,p_k)\in \mathbf {R}^k$ be N-independent. Define H as in (2.2) and denote by $\{\lambda _i\}_{i \in [N]}$ the eigenvalues of H, arranged in increasing order, with associated normalised eigenvectors $\{\mathbf {u}_i\}_{i \in [N]}$ . Then, for fixed $\kappa> 0$ , every $l \in [k]$ and for every bulk index $i \in [\kappa N, (1-\kappa )N]$ , it holds that

(2.5) $$ \begin{align} \sqrt{\frac{\beta N }{2} \, \frac{\Vert \textbf{p}\Vert^2}{p_l^2 \, \big(\Vert \textbf{p}\Vert^2 - p_l^2\big) }} \; \left[\langle \mathbf{u}_i, \, p_l W_l \mathbf{u}_i\rangle-\frac{p_l^2 }{\Vert \textbf{p}\Vert^2} \lambda_i\right]\Longrightarrow \mathcal{N}(0,1) \end{align} $$

in the sense of moments,Footnote 4 where $\mathcal {N}(0,1)$ denotes a real standard Gaussian.

By polarisation, we will also obtain the following:

Corollary 2.3. Under the assumptions from Theorem 2.2, the random vector $\textbf {X} = (X_1, ... , X_k) \in \mathbf {R}^k$ with

(2.6) $$ \begin{align} X_l:= \sqrt{\frac{\beta N}{2}} \; \left[\langle \mathbf{u}_i, \, p_l W_l \mathbf{u}_i\rangle-\frac{p_l^2 }{\Vert \textbf{p}\Vert^2} \lambda_i\right]\,, \quad l \in [k]\, \end{align} $$

is approximately (in the sense of moments) jointly Gaussian with covariance structure

$$ \begin{align*} \mathrm{Cov}(X_l, X_m) = \frac{p_l^2 \big(\delta_{l,m} \Vert \textbf{p}\Vert^2 - p_m^2\big)}{\Vert \textbf{p}\Vert^2}\,. \end{align*} $$
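A quick consistency check (ours, not from the paper): since $\sum_l \langle \mathbf{u}_i, p_l W_l \mathbf{u}_i\rangle = \langle \mathbf{u}_i, H\mathbf{u}_i\rangle = \lambda_i$ and $\sum_l p_l^2/\Vert\mathbf{p}\Vert^2 = 1$, the fluctuations $X_l$ sum to zero exactly, so the stated covariance matrix must be symmetric, positive semi-definite and have vanishing row sums.

```python
import numpy as np

p = np.array([1.0, -0.5, 2.0, 0.3])   # arbitrary weight vector (our choice)
P2 = (p ** 2).sum()                   # ||p||^2

# Cov(X_l, X_m) = p_l^2 (delta_{lm} ||p||^2 - p_m^2) / ||p||^2
Cov = (p ** 2)[:, None] * (np.eye(len(p)) * P2 - (p ** 2)[None, :]) / P2
print(Cov.sum(axis=1))                # each row sums to 0, as forced by sum_l X_l = 0
```

Note also that the diagonal entries $\mathrm{Var}(X_l) = p_l^2(\Vert\mathbf{p}\Vert^2 - p_l^2)/\Vert\mathbf{p}\Vert^2$ reproduce the normalisation in (2.5).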

Remark 2.4. We stated Theorem 2.2 only for diagonal overlaps for simplicity. However, one can see that, following the proof in [Reference Benigni and Cipolloni9, Section 3], it is possible to obtain an analogous Central Limit Theorem (CLT) for off–diagonal overlaps as well:

(2.7) $$ \begin{align} \sqrt{\frac{\Vert \textbf{p}\Vert^2\beta N}{p_l^2 \, \big(\Vert \textbf{p}\Vert^2 - p_l^2\big) }} \; \big|\langle \mathbf{u}_i, \, p_l W_l \mathbf{u}_j\rangle \big|\Longrightarrow \big|\mathcal{N}(0,1)\big|. \end{align} $$

This also gives an analogous version of (2.3) for off-diagonal overlaps. Furthermore, again following [Reference Benigni and Cipolloni9, Theorem 2.2], it is also possible to derive a multivariate CLT jointly for diagonal and off–diagonal overlaps. See also Remark 2.10 below for further explanation.

Theorem 2.2 and Corollary 2.3 will follow as a corollary to the Eigenstate Thermalisation Hypothesis (ETH) and its Gaussian fluctuations for deformed Wigner matrices, which we present as Theorem 2.7 and Theorem 2.9 in the following subsection.

Remark 2.5. By a quick inspection of our proof of Theorem 2.2, given in Section 3, it is possible to generalise the Equipartition principle (2.4) as well as its Gaussian fluctuations (2.7) to linear combinations of deformed Wigner matrices (i.e., each $W_l$ in (2.2) being replaced by $W_l + D_l$ , where $D_l = D_l^*$ is an essentially arbitrary bounded deterministic matrix – see Assumption 2.8 later). However, for brevity of the current paper, we refrain from presenting this extension explicitly.

2.1 ETH and its fluctuations for deformed Wigner matrices

In this section, we consider deformed Wigner matrices, $H = W + D$ , with increasingly ordered eigenvalues $\lambda _1\le \lambda _2\le \dots \le \lambda _N$ and corresponding orthonormal eigenvectors $\mathbf {u}_1,\dots , \mathbf {u}_N$ . Here, $D =D^* \in \mathbf {C}^{N \times N}$ is a self-adjoint matrix with uniformly bounded norm (i.e., $\lVert D\rVert \le C_D$ for some N-independent constant $C_D>0$ ). Although the Eigenstate Thermalisation Hypothesis (ETH) will be shown to hold for general deformations D, we shall require slightly stronger assumptions for proving the Gaussian fluctuations (see Assumption 2.8 below).

In order to state our results on the ETH and its fluctuations (Theorems 2.7 and 2.9, respectively), we need to introduce the concept of regular observables – first in a simple form in Definition 2.6 (later, along the proofs, we will need a more general version in Definition 4.2). For this purpose, we introduce $M(z)$ as the unique solution of the Matrix Dyson Equation (MDE):Footnote 5

(2.8) $$ \begin{align} -\frac{1}{M(z)}=z-D+\langle M(z)\rangle, \qquad\quad \Im M(z)\Im z>0. \end{align} $$

The self consistent density of states (scDos) is then defined as

(2.9) $$ \begin{align} \rho(e):=\frac{1}{\pi}\lim_{\eta\downarrow 0}\langle \Im M(e+\mathrm{i}\eta)\rangle\,. \end{align} $$

We point out that not only does $\langle \Im M(e+\mathrm {i}\eta )\rangle $ have an extension to the real axis, but the whole matrix $M(e) := \lim _{\eta \downarrow 0} M(e + \mathrm {i} \eta )$ is well-defined (see Lemma B.1 (b) of the arXiv:2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21]). The scDos $\rho $ is a compactly supported Hölder- $1/3$ continuous function on $\mathbf {R}$ which is real-analytic on the set $\{ \rho> 0\}$ .Footnote 6 Moreover, for any small $\kappa> 0$ (independent of N), we define the $\kappa $ -bulk of the scDos as

(2.10) $$ \begin{align} \mathbf{B}_\kappa = \left\{ x \in \mathbf{R} \; : \; \rho(x) \ge \kappa^{1/3} \right\}\,, \end{align} $$

which is a finite union of disjoint compact intervals (see Lemma B.2 in the arXiv:2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21]). For $\Re z \in \mathbf {B}_\kappa $ , it holds that $\Vert M(z) \Vert \lesssim 1$ , as easily follows by taking the imaginary part of (2.8).

Definition 2.6 (Regular observables – One-point regularisation).

Fix $\kappa> 0$ and an energy $e\in \mathbf {B}_\kappa $ in the bulk. Given a matrix $A\in \mathbf {C}^{N\times N}$ , we define its one-point regularisation w.r.t. the energy e, denoted by $\mathring {A}^e$ , as

(2.11) $$ \begin{align} \mathring{A}=\mathring{A}^e:=A-\frac{\langle \Im M(e) A\rangle}{\langle \Im M(e)\rangle}\,. \end{align} $$

Moreover, we call A regular w.r.t. the energy e if and only if $A = \mathring {A}^e$ .

Notice that in the analysis of Wigner matrices without deformation, $D=0$ , in [Reference Cipolloni, Erdős and Schröder16, Reference Cipolloni, Erdős and Schröder17, Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős and Schröder20], M was a constant matrix and the regular observables were simply given by traceless matrices (i.e., $\mathring {A}= A-\langle A\rangle $ ). For deformed Wigner matrices, the concept of regular observables depends on the energy.
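The defining property of the one-point regularisation (2.11) is that $\langle \Im M(e)\, \mathring{A}^e\rangle = 0$, and that for $D=0$ (where $\Im M$ is a multiple of the identity) it reduces to the traceless part $A - \langle A\rangle$. A toy verification (ours; the stand-in matrix for $\Im M(e)$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 50
ImM = np.diag(rng.uniform(0.5, 1.5, N))   # stand-in for Im M(e) > 0
A = rng.standard_normal((N, N))

def ring(A, ImM):
    """One-point regularisation A_ring = A - (<Im M A>/<Im M>) I, cf. (2.11)."""
    return A - np.trace(ImM @ A) / np.trace(ImM) * np.eye(len(A))

Ar = ring(A, ImM)
print(np.trace(ImM @ Ar) / N)             # ~ 0: A_ring is regular at e

# D = 0 case: Im M proportional to the identity gives A_ring = A - <A>.
Ar0 = ring(A, np.eye(N))
print(np.allclose(Ar0, A - np.trace(A) / N * np.eye(N)))
```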

Next, we define the quantiles $\gamma _i$ of the density $\rho $ implicitly by

(2.12) $$ \begin{align} \int_{-\infty}^{\gamma_i} \rho(x)\,\mathrm{d} x=\frac{i}{N}, \qquad i\in[N]. \end{align} $$
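For orientation (an illustration of ours, not from the paper): when $D=0$ the scDos is the semicircle law $\rho(x) = \sqrt{(4-x^2)_+}/(2\pi)$, and the quantiles $\gamma_i$ of (2.12) can be computed by bisection on the cumulative distribution function.

```python
import numpy as np

def rho(x):
    """Semicircle density, the scDos for D = 0."""
    return np.sqrt(np.maximum(4 - x ** 2, 0.0)) / (2 * np.pi)

def cdf(t):
    """Riemann-sum approximation of int_{-2}^{t} rho(x) dx."""
    grid = np.linspace(-2.0, t, 4001)
    return float(np.sum(rho(grid)) * (grid[1] - grid[0]))

def gamma(i, N):
    """Quantile gamma_i solving cdf(gamma_i) = i/N, by bisection."""
    lo, hi = -2.0, 2.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if cdf(mid) < i / N:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

N = 100
print(gamma(N // 2, N))   # median quantile, close to 0 by symmetry
```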

We can now formulate the ETH in the bulk for deformed Wigner matrices which generalises the same result for Wigner matrices, $D=0$ , from [Reference Cipolloni, Erdős and Schröder16].

Theorem 2.7 (Eigenstate Thermalisation Hypothesis).

Let $\kappa> 0$ be an N-independent constant and fix a bounded deterministic $D =D^* \in \mathbf {C}^{N \times N}$ . Let $H = W + D$ be a deformed Wigner matrix, where W satisfies Assumption 2.1, and denote the orthonormal eigenvectors of H by $\{ \textbf {u}_i\}_{i \in [N]}$ . Then, for any deterministic $A\in \mathbf {C}^{N\times N}$ with $\Vert A \Vert \lesssim 1$ , it holds that

(2.13) $$ \begin{align} \max_{i, j}\left|\langle \textbf{u}_i,\mathring{A}^{\gamma_i}\textbf{u}_j\rangle \right| = \max_{i, j}\left|\langle \textbf{u}_i,A\textbf{u}_j\rangle-\delta_{ij}\frac{\langle A\Im M(\gamma_i)\rangle}{\langle \Im M(\gamma_i)\rangle} \right| \prec \frac{1}{\sqrt{N}}\,, \end{align} $$

where the maximum is taken over all $i,j \in [N]$ such that the quantiles $\gamma _i, \gamma _j \in \mathbf {B}_\kappa $ defined in (2.12) are in the $\kappa $ -bulk of the scDos $\rho $ .

This ‘Law of Large Numbers’-type result (2.13) is complemented by the corresponding Central Limit Theorem (2.14), which requires slightly strengthened assumptions on the deformation D.

Assumption 2.8. We assume that $D \in \mathbf {C}^{N \times N}$ is a bounded self-adjoint deterministic matrix such that

  (i) the unique solution $M(z)$ to (2.8) is uniformly bounded in norm (i.e., $\sup _{z \in \mathbf {C}} \Vert M(z) \Vert \le C_M$ for some N-independent constant $C_M> 0$ );

  (ii) the scDos $\rho $ is Hölder- $1/2$ regular (i.e., it does not have any cusps – see Footnote 6).

The requirements on D in Assumption 2.8 are natural and they hold for typical applications; see Remark 2.12 later for more details. We can now formulate our result on the Gaussian fluctuations in the ETH which generalises the analogous result for Wigner matrices, $D=0$ from [Reference Cipolloni, Erdős and Schröder18].

Theorem 2.9 (Fluctuations in ETH).

Fix N-independent constants $\kappa , \sigma> 0$ and let $H = W + D$ be a deformed Wigner matrix, where W satisfies Assumption 2.1 and D satisfies Assumption 2.8. Denote the orthonormal eigenvectors of H by $\{ \textbf {u}_i\}_{i \in [N]}$ and fix an index $i \in [N]$ , such that the quantile $\gamma _i \in \mathbf {B}_\kappa $ defined in (2.12) is in the bulk. Then, for any deterministic Hermitian matrix $A\in \mathbf {C}^{N\times N}$ with $\Vert A \Vert \lesssim 1$ (which we assume to be real in the case of a real Wigner matrix) satisfying $\langle (A-\langle {A}\rangle )^2\rangle \ge \sigma $ , it holds that

(2.14) $$ \begin{align} \sqrt{\frac{\beta N }{2 \, \mathrm{Var}_{\gamma_i}(A)}}\left[\langle \textbf{u}_i,A\textbf{u}_i\rangle-\frac{\langle A\Im M(\gamma_i)\rangle}{\langle \Im M(\gamma_i)\rangle} \right]\Longrightarrow \mathcal{N}(0,1) \end{align} $$

in the sense of moments (see Footnote 4), whereFootnote 7

(2.15) $$ \begin{align} \mathrm{Var}_{\gamma_i}(A) := \frac{1}{\langle \Im M (\gamma_i)\rangle^2} \left( \big\langle \big(\mathring{A}^{\gamma_i} \Im M(\gamma_i)\big)^2\big\rangle - \frac{1}{2} \Re\left[ \frac{\big\langle \big(M(\gamma_i)\big)^2 \mathring{A}^{\gamma_i} \big\rangle^2}{1 - \big\langle \big(M(\gamma_i)\big)^2 \big\rangle} \right] \right) \,. \end{align} $$

This variance is strictly positive with an effective lower bound

(2.16) $$ \begin{align} \mathrm{Var}_{\gamma_i}(A)\ge c \, \langle (A-\langle{A}\rangle)^2\rangle \end{align} $$

for some constant $c = c(\kappa , \Vert D \Vert )> 0.$

Remark 2.10. We stated Theorem 2.9 only for diagonal overlaps to keep the statement simple, but a corresponding CLT for off–diagonal overlaps as well as a multivariate CLT for any finite family of diagonal and off–diagonal overlaps can also be proven.

We decided not to give a detailed proof of these facts in the current paper in order to present the main new ideas in the analysis of deformed Wigner matrices in the simplest possible setting, consisting of only diagonal overlaps. We remark, however, that an analysis similar to [Reference Benigni and Cipolloni9, Section 3], combined with the details presented in Section 5, would give an analogue of [Reference Benigni and Cipolloni9, Theorem 2.2] in the deformed Wigner setup. This would require introducing several new notations that would obscure the main novelties in the analysis of deformed Wigner matrices compared to the Wigner case; these novelties are clearer in the simpler setup of Section 5.

In the following two remarks we comment on the condition $\langle (A-\langle {A}\rangle )^2\rangle \ge \sigma $ and on Assumption 2.8.

Remark 2.11. The restriction to matrices satisfying $\langle (A-\langle {A}\rangle )^2\rangle \ge \sigma $ (i.e., $A - \langle A \rangle $ being of high rank) is technical. It is due to the fact that our multi-resolvent local laws for resolvent chains $\langle G(z_1) A_1 G(z_2) A_2 \ldots \rangle $ in Proposition 4.4 are non-optimal in terms of the norm for the matrices $A_i$ ; they involve the Euclidean norm $\Vert A_i\Vert $ and not the smaller Hilbert-Schmidt norm $\sqrt {\langle |A_i|^2\rangle }$ which would be optimal. For the Wigner ensemble, this subtlety is the main difference between the main result in [Reference Cipolloni, Erdős and Schröder18] for high rank observable matrices A and its extension to any low rank A in [Reference Cipolloni, Erdős and Schröder20]. Following the technique in [Reference Cipolloni, Erdős and Schröder20], it would be possible to achieve the estimate with the optimal norm of A also for deformed Wigner matrices. However, we refrain from doing so, since in our main application, Theorem 2.2, A itself will be a Wigner matrix which has high rank.

Remark 2.12. We have several comments on Assumption 2.8.

  1. (i) The boundedness of $\Vert M (z)\Vert $ is automatically fulfilled in the bulk $\mathbf {B}_\kappa $ (see remark below (2.10)) or when $\Re z$ is away from the support of the scDos $\rho $ (see [Reference Alt, Erdős and Krüger3, Proposition 3.5]) without any further condition. However, the uniform (in z) estimate formulated in Assumption 2.8 does not hold for arbitrary D. A sufficient condition for the boundedness of $\Vert M\Vert $ in terms of the spectrum of D is given in [Reference Alt, Erdős and Krüger3, Lemma 9.1 (i)]. This especially applies if the eigenvalues $\{d_i\}_{i \in [N]}$ of D (in increasing order) form a piecewise Hölder- $1/2$ regular sequenceFootnote 8 (see [Reference Alt, Erdős and Krüger3, Lemma 9.3]). In particular, by eigenvalue rigidity [Reference Ajanki, Erdős and Krüger2, Reference Erdős, Krüger and Schröder27], it is easy to see that any ‘Wigner-like’ matrix D has Hölder- $1/2$ regular sequence of eigenvalues with very high probability. This is important for the applicability of Theorem 2.9 below in the proof of our main result, Theorem 2.2, given in Section 3.

  2. (ii) The assumption that $\rho $ does not have any cusps is a typical condition and of technical nature (needed in the local law (4.1) and in Lemma A.2). In case that the sequence of matrices $D=D_N$ has a limiting density of states with single interval support, then also $\rho $ , the scDos of $W+D$ , has single interval support [Reference Biane12]; in particular, $\rho $ has no cusps [Reference Alt, Erdős and Krüger3]. Again, this is important for the applicability of Theorem 2.9 in the proof of our main result, in which case D is a Wigner matrix with a semicircle as the limiting density of states.

In the following Section 3, we will prove our main result, Theorem 2.2, assuming Theorems 2.7 and 2.9 on deformed Wigner matrices as inputs. These will be proven in Sections 4 and 5, respectively. Both proofs crucially rely on an averaged local law for two resolvents and two regular observables, Proposition 4.4, which we prove in Section 6. Several additional technical and auxiliary results are deferred to the Appendix.

3 Fluctuations in equipartition: Proof of Theorem 2.2

It suffices to prove Theorem 2.2 for $k=2$ with $p_1, p_2 \neq 0$ , since we can view the sum (2.2) as the sum of $p_1W_1$ and

$$ \begin{align*} \sum\limits_{l = 2}^k p_lW_l\stackrel{\mathrm{d}}{=}\left(\sum\limits_{ l=2}^kp_l^2\right)^{1/2}\widetilde{W}\,, \end{align*} $$

where $\widetilde {W}$ is a Wigner matrix independent of $W_1$ and the equality is understood in distribution.

As a main step, we shall prove the following lemma, where we condition on $W_2$ .

Lemma 3.1. Under the assumptions of Theorem 2.2 with $k=2$ , it holds that

(3.1) $$ \begin{align} \quad\qquad\mathbf{E}_{W_1} \langle \mathbf{u}_i,p_2W_2\mathbf{u}_i\rangle &= \frac{p_2^2}{\|\textbf{p}\|^2}\gamma_i+\mathcal{O}_\prec\left(N^{-1/2-\epsilon}\right)\,, \end{align} $$
(3.2) $$ \begin{align} \frac{\beta N}{2} \mathrm{Var}_{W_1}[\langle \mathbf{u}_i,p_2W_2\mathbf{u}_i\rangle ] &= \frac{ p_1^2p_2^2}{\Vert \textbf{p}\Vert^2}+\mathcal{O}_\prec\left(N^{-\epsilon}\right)\,,\ \qquad\ \end{align} $$

for any $\epsilon> 0$ , where $\gamma _i$ is the $i^{\mathrm {th}}$ quantile of the semicircular density with radius $2 \Vert \textbf {p}\Vert $ ; that is,

$$ \begin{align*}\frac{1}{2\pi \Vert \textbf{p}\Vert^2} \int_{-\infty}^{\gamma_i} \sqrt{ [4 \Vert \textbf{p}\Vert^2 -x^2]_+}\,\mathrm{d} x = \frac{i}{N}\,. \end{align*} $$

Expectation and variance are taken in the probability space of $W_1$ , conditioned on $W_2$ being in an event of very high probability, while the stochastic domination in the error terms is understood in the probability space of $W_2$ .
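For concreteness, the quantiles $\gamma _i$ of the semicircle law of radius $R = 2\Vert \textbf {p}\Vert $ can be computed from its explicit distribution function $F(\gamma ) = \tfrac 12 + \big (\gamma \sqrt {R^2-\gamma ^2} + R^2\arcsin (\gamma /R)\big )/(\pi R^2)$ , for instance by bisection. A minimal numerical sketch (not part of the proof):

```python
import math

def semicircle_quantile(i, N, p_norm):
    """i-th N-quantile gamma_i of the semicircle law of radius R = 2*p_norm."""
    R = 2.0 * p_norm

    def cdf(x):
        x = max(-R, min(R, x))          # clamp against floating-point drift
        return 0.5 + (x * math.sqrt(R * R - x * x)
                      + R * R * math.asin(x / R)) / (math.pi * R * R)

    lo, hi, target = -R, R, i / N
    for _ in range(80):                 # bisection to near machine precision
        mid = 0.5 * (lo + hi)
        if cdf(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```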

Proof of Theorem 2.2.

First, we note that all requirements for applying Theorem 2.9 to $H = p_1 W_1 + D$ , with $D = p_2 W_2$ for some fixed realisation of $W_2$ in a very high probability event, are satisfied. This follows from Remark 2.12 and $\langle (W_2 - \langle W_2 \rangle )^2\rangle \gtrsim 1$ with very high probability. Next, observe that replacing $\gamma _i$ in (3.1) by the eigenvalue $\lambda _i$ appearing in (2.7) is trivial by the usual eigenvalue rigidity $|\gamma _i - \lambda _i| \prec 1/N$ for Wigner matrices in the bulk [Reference Erdős and Yau26]. Thus, Theorem 2.9 shows that, conditioned on a fixed realisation of $W_2$ ,

(3.3) $$ \begin{align} \sqrt{ \frac{\beta N}{2} }\left[\langle \mathbf{u}_i,p_2W_2\mathbf{u}_i\rangle - \frac{p_2^2}{\|\textbf{p}\|^2}\lambda_i\right] \end{align} $$

is approximately Gaussian with an approximately constant variance (independent of $W_2$ ) given in (3.2). Since this holds with very high probability w.r.t. $W_2$ , this proves (2.7) for $l=2$ ; the proof for $l=1$ is the same.

Proof of Corollary 2.3.

We formulated Theorem 2.9 as a CLT for overlaps $\langle \textbf {u}_i,A\textbf {u}_i\rangle $ for a single deterministic matrix A, but by standard polarisation it also shows the joint approximate Gaussianity of any p-vector

(3.4) $$ \begin{align} \big( \langle \textbf{u}_i,A_1\textbf{u}_i\rangle, \langle \textbf{u}_i,A_2\textbf{u}_i\rangle, \ldots, \langle \textbf{u}_i,A_p\textbf{u}_i\rangle\big) \end{align} $$

for any fixed p and deterministic observables $A_1, A_2, \ldots , A_p$ satisfying $\langle (A_j- \langle A_j\rangle )^2\rangle \ge c$ , $j\in [p]$ . Namely, using Theorem 2.9 to compute the moments of $\langle \textbf {u}_i,A(\textbf {t}) \textbf {u}_i\rangle $ for the linear combination $A(\textbf {t})= \sum _j t_j A_j$ with any real vector $\textbf {t} = (t_1, t_2,\ldots , t_p)$ , we can identify any joint moments of the coordinates of the vector in (3.4), and we find that they satisfy the (approximate) Wick theorem.

Now we can follow the above proof of Theorem 2.2, but without the simplification $k=2$ . Conditioning on $W_2,\ldots , W_k$ and working in the probability space of $W_1$ , by the polarisation argument above, we find that not only each $X_l$ from (2.6) is asymptotically Gaussian with a variance independent of $W_2, \ldots , W_k$ , but they are jointly Gaussian for $l=2,3,\ldots , k$ . This is sufficient for the joint Gaussianity of the entire vector $\textbf {X}$ since $\sum _l X_l=0$ . This completes the proof of Corollary 2.3.

The proof of Lemma 3.1 is divided into the computation of the expectation (3.1) and the variance (3.2).

3.1 Computation of the expectation (3.1)

As in the proof of Theorem 2.2 above, we condition on $W_2$ and work in the probability space of $W_1$ (i.e., we consider $p_2 W_2$ as a deterministic deformation of $p_1W_1$ ). This allows us to use Theorem 2.9 asFootnote 9

(3.5) $$ \begin{align} \mathbf{E}_{W_1}\langle \mathbf{u}_i,p_2W_2 \mathbf{u}_i\rangle = \frac{\langle p_2W_2\Im M_2(\gamma_{i,2})\rangle}{\langle \Im M_2(\gamma_{i,2})\rangle}+\mathcal{O}_\prec\left(N^{-1/2-\epsilon}\right) \end{align} $$

for some constant $\epsilon> 0$ . Here, $M_2(z)$ , depending on $W_2$ , is the unique solution of the MDE

(3.6) $$ \begin{align} -\frac{1}{M_2(z)}=z-p_2W_2+p_1^2\langle M_2(z)\rangle\,, \end{align} $$

corresponding to the matrix $p_1W_1+p_2W_2$ , where $p_2W_2$ is considered a deformation, and $\gamma _{i,2}$ is the $i^{\mathrm {th}}$ quantile of the scDos $\rho _2$ corresponding to (3.6). The subscript ‘ $2$ ’ for $M_2$ , $\rho _2$ and $\gamma _{i,2}$ in (3.5) and (3.6) indicates that these objects are dependent on $W_2$ and hence random.

The Stieltjes transform $m_2(z)$ of $\rho _2$ is given by the implicit equation

$$ \begin{align*}m_2(z):=\langle M_2(z)\rangle = \frac{1}{p_2}\cdot \Big\langle \frac{1}{W_2-\frac{1}{p_2}(z+p_1^2m_2(z))} \Big\rangle \end{align*} $$

with the usual side condition $\Im z\cdot \Im m_2(z)>0$ . Applying the standard local law for the resolvent of $W_2$ on the right-hand side shows that

(3.7) $$ \begin{align} \Big| m_2(z) - \frac{1}{p_2} m_{\mathrm{sc}}(w_2)\Big|\prec \frac{1}{N |\Im w_2|}, \qquad w_2:=\frac{1}{p_2}(z+p_1^2m_2(z)), \end{align} $$

where $m_{\mathrm {sc}}$ is the Stieltjes transform of the standard semicircle law (i.e., it satisfies the quadratic equation

(3.8) $$ \begin{align} m_{\mathrm{sc}}(w)^2+w m_{\mathrm{sc}}(w)+1=0\, \end{align} $$

with the side condition $\Im w\cdot \Im m_{\mathrm {sc}}(w)>0$ ). Note that in (3.7), $w_2$ is random (it depends on $W_2$ ), but the local law for $\langle (W_2-w)^{-1}\rangle $ holds uniformly in the spectral parameter $|\Im w|\ge N^{-1}$ . Hence, a standard grid argument and the Lipschitz continuity of the resolvents show that it holds for any (random) w with $|\Im w|\ge N^{-1+\xi }$ with any fixed $\xi>0$ .

Applying (3.8) at $w=w_2$ together with (3.7) implies that

(3.9) $$ \begin{align} - \frac{1}{\Vert \textbf{p} \Vert \, m_2(z)} =\frac{z}{\Vert \textbf{p}\Vert} + \Vert \textbf{p}\Vert\, m_2(z) + \mathcal{O}_\prec \Big( \frac{1}{N |\Im w_2|}\Big). \end{align} $$

We view this relation as a small additive perturbation of the exact equation

(3.10) $$ \begin{align} - \frac{1}{m_{\mathrm{sc}}\big(\frac{z}{\Vert \textbf{p}\Vert}\big)} =\frac{z}{\Vert \textbf{p}\Vert} + m_{\mathrm{sc}}\big(\tfrac{z}{\Vert \textbf{p}\Vert}\big) \end{align} $$

to conclude

(3.11) $$ \begin{align} \left| \Vert \textbf{p} \Vert \, m_2(z) - m_{\mathrm{sc}}\big(\tfrac{z}{\Vert \textbf{p}\Vert}\big)\right|\prec \frac{1}{N|\Im z|}\,, \qquad |\Im z|\ge N^{-1+\xi}\,, \end{align} $$

using that $|\Im w_2|\gtrsim |\Im z|$ from the definition of $w_2$ in (3.7) and that $\Im z\cdot \Im m_2(z)>0$ . The conclusion (3.11) requires a standard continuity argument, starting from a z with a large imaginary part and continuously reducing the imaginary part while keeping the real part fixed (the same argument is routinely used in the proof of the local law for Wigner matrices; see, for example, [Reference Erdős and Yau26]).

The estimate (3.11) implies that the quantiles of $\rho _2$ satisfy the usual rigidity estimate (i.e.,

(3.12) $$ \begin{align} |\gamma_{i,2} -\gamma_i|\prec \frac{1}{N} \end{align} $$

for bulk indices $i\in [\kappa N, (1-\kappa )N]$ with any N-independent $\kappa>0$ ). Moreover, (3.11) also implies that for any z in the bulk of the semicircle (i.e., $|\Im m_{\mathrm{sc}}(z/\Vert \textbf{p}\Vert)|\ge c>0$ for some $c>0$ independent of N), we have $|\Im m_2(z)|\ge c/2$ as long as $|\Im z|\ge N^{-1+\xi }$ . Using the definition of $w_2$ in (3.7) again, this shows $|\Im w_2|\sim |\Im w|$ for the deterministic $w:=\frac {1}{p_2}\big(z+\frac{p_1^2}{\Vert \textbf{p}\Vert}\, m_{\mathrm {sc}}\big(\frac{z}{\Vert \textbf{p}\Vert}\big)\big)$ for any z with $|\Im z|\ge N^{-1+\xi }$ . Feeding this information into (3.9) and viewing it again as a perturbation of (3.10) but with the improved deterministic bound $\mathcal {O}_\prec (1/(N|\Im w|))$ , we obtain

(3.13) $$ \begin{align} \left| \Vert \textbf{p} \Vert \, m_2(z) - m_{\mathrm{sc}}\big(\tfrac{z}{\Vert \textbf{p}\Vert}\big)\right|\prec \frac{1}{N|\Im w|}\,, \qquad \mbox{with}\quad w=\frac{1}{p_2}\Big(z+\frac{p_1^2}{\Vert \textbf{p}\Vert}\, m_{\mathrm{sc}}\big(\tfrac{z}{\Vert \textbf{p}\Vert}\big)\Big)\,, \end{align} $$

uniformly in $|\Im z|\ge N^{-1+\xi }$ . In particular, when z is in the bulk of the semicircle, then we have that

$$ \begin{align*}\left| \Vert \textbf{p} \Vert \, m_2(z) - m_{\mathrm{sc}}\big(\tfrac{z}{\Vert \textbf{p}\Vert}\big)\right|\prec \frac{1}{N},\end{align*} $$

and this relation holds even down to the real axis by the Lipschitz continuity (in fact, real analyticity) of the Stieltjes transform $m_2(z)$ in the bulk.
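This relation is easy to observe numerically: the empirical Stieltjes transform of a single sample of $p_1W_1+p_2W_2$ is already close to $\frac {1}{\Vert \textbf {p}\Vert } m_{\mathrm {sc}}(z/\Vert \textbf {p}\Vert )$ at moderate N. A rough sketch (the values of N, $p_1$ , $p_2$ and z are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
N, p1, p2 = 1000, 0.6, 0.8
pn = np.hypot(p1, p2)                    # ||p|| (= 1 for this choice)

def wigner(n):
    X = rng.standard_normal((n, n))
    return (X + X.T) / np.sqrt(2 * n)

H = p1 * wigner(N) + p2 * wigner(N)
z = 0.3 + 0.1j                           # bulk spectral parameter
m_emp = np.mean(1.0 / (np.linalg.eigvalsh(H) - z))

# m_sc from the quadratic (3.8), taking the root in the upper half-plane.
w = z / pn
r = np.sqrt(w * w - 4)
m_sc = (-w + r) / 2
if m_sc.imag < 0:
    m_sc = (-w - r) / 2
# pn * m_emp and m_sc agree up to an error of order 1/(N Im z).
```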

In the following, we will use the shorthand notation $A\approx B$ for two (families of) random variables A and B if and only if $\vert A - B \vert \prec N^{-1}$ . Evaluating (3.6) at $z=\gamma _{i,2}$ , we have

(3.14) $$ \begin{align} M_2(\gamma_{i,2})=\frac{1}{p_2}\cdot \frac{1}{W_2-w_{i,2}}\,, \qquad w_{i,2}:=\frac{1}{p_2}(\gamma_{i,2}+p_1^2\,m_2(\gamma_{i,2}))\,, \end{align} $$

and note that $w_{i,2} \approx w_i:=\frac {1}{p_2}\left (\gamma _{i}+\frac {p_1^2}{\lVert \boldsymbol{p}\rVert }\,m_{\mathrm {sc}}\left (\frac {\gamma _{i}}{\lVert \boldsymbol{p}\rVert }\right )\right )$ by (3.13) and since $\gamma _{i,2}\approx \gamma _i$ in the bulk by rigidity (3.12).

Now we are ready to evaluate the rhs. of (3.5). By elementary manipulations using (3.14), we can write it as

(3.15) $$ \begin{align} \frac{\langle p_2W_2\Im M_2(\gamma_{i,2})\rangle}{\langle \Im M_2(\gamma_{i,2})\rangle} = \gamma_{i,2} + \frac{p_1^2}{p_2} \frac{\Im \big[ \langle (W_2 - w_{i,2})^{-1} \rangle^2\big] }{\Im \, \langle (W_2 - w_{i,2})^{-1} \rangle}\,. \end{align} $$

Using (3.13), we obtain

(3.16) $$ \begin{align} \langle (W_2-w_{i,2})^{-1}\rangle \approx \langle (W_2-w_i)^{-1}\rangle \approx m_{\mathrm{sc}}(w_i) \end{align} $$

with very high $W_2$ -probability. Continuing with (3.15) and using $\gamma _{i,2} \approx \gamma _i$ , we thus find

(3.17) $$ \begin{align} \frac{\langle p_2W_2\Im M_2(\gamma_{i,2})\rangle}{\langle \Im M_2(\gamma_{i,2})\rangle} \approx \gamma_{i} + \frac{p_1^2}{p_2} \frac{\Im \big[ m_{\mathrm{sc}}(w_i)^2\big]}{\Im m_{\mathrm{sc}}(w_i)}\,. \end{align} $$

Next, we combine (3.8) with $p_2 m_2(\gamma _{i}) \approx m_{\mathrm {sc}}(w_i)$ from (3.7), (3.14) and (3.16) and find that

(3.18) $$ \begin{align} m_{\mathrm{sc}}(w_i)^2 \approx -\frac{p_2^2}{p_1^2+p_2^2}\left(1+\frac{1}{p_2}\gamma_{i} \, m_{\mathrm{sc}}(w_i)\right)\,. \end{align} $$

Hence, plugging (3.18) into (3.17), we deduce

$$ \begin{align*} \frac{\langle p_2W_2\Im M_2(\gamma_{i,2})\rangle}{\langle \Im M_2(\gamma_{i,2})\rangle} \approx \left(1 - \frac{p_1^2}{p_1^2+p_2^2}\right)\gamma_{i} = \frac{p_2^2}{\|\boldsymbol{p}\|^2}\gamma_i\,. \end{align*} $$

This completes the proof of (3.1).

3.2 Computation of the variance (3.2)

As in the calculation of the expectation in Section 3.1, we first condition on $W_2$ and work in the probability space of $W_1$ . So, we apply Theorem 2.9 to the matrix $p_1W_1+p_2W_2$ , where the second term is considered a fixed deterministic deformation. Indeed, using the same notations as in Section 3.1, this gives that the lhs. of (3.2) equals

(3.19) $$ \begin{align} p_2^2\, \mathrm{Var}_{\gamma_{i,2}}\left(W_2\right)= p_2^2\frac{1}{\langle \Im M_2 (\gamma_{i,2})\rangle^2} \left( \big\langle \big(\mathring{W}_2^{\gamma_{i,2}} \Im M_2(\gamma_{i,2})\big)^2\big\rangle - \frac{p_1^2}{2} \Re\left[ \frac{\big\langle \big(M_2(\gamma_{i,2})\big)^2 \mathring{W}_2^{\gamma_{i,2}} \big\rangle^2}{1 - p_1^2\big\langle \big(M_2(\gamma_{i,2})\big)^2 \big\rangle} \right] \right) \end{align} $$

up to an additive error of order $\mathcal {O}_\prec \big (N^{-\epsilon }\big )$ , which will appear on the rhs. of (3.2). The factor $p_1^2$ in the second term of (3.19) is a natural rescaling caused by applying Theorem 2.9 to a deformation of $p_1W_1$ instead of a Wigner matrix $W_1$ . Further, we express $M_2$ in terms of a Wigner resolvent $G:=(W_2-w)^{-1}$ and use local laws not only for a single resolvent $\langle G\rangle $ but also their multi-resolvent versions for $\langle G^2\rangle $ and $\langle GG^*\rangle $ (see [Reference Cipolloni, Erdős and Schröder19]). With a slight abuse of notation we shall henceforth drop the subscript ‘ $2$ ’ in $\gamma _{i,2}$ and $w_{i,2}$ and replace them by their deterministic values $\gamma _i$ and $w_i$ , respectively, at a negligible error of order $N^{-1}$ exactly as in Section 3.1. Note that $\Im w_i\gtrsim 1$ for bulk indices i, so all resolvents below are stable and all denominators are well separated away from zero; this is needed to justify the $\approx $ relations below.

The first term in (3.19) can be rewritten as (here $G=G(w_i)$ and $m_{\mathrm {sc}}:=m_{\mathrm {sc}}(w_i)$ for brevity)

(3.20) $$ \begin{align} \left\langle\big(\mathring{W}_2^{\gamma_i} \Im M_2(\gamma_i)\big)^2\right\rangle &\approx \frac{1}{2} \Re\left({\left\vert \frac{w_i}{p_2}-\frac{\gamma_i}{p_1^2+p_2^2}\right\vert}^2\langle GG^*\rangle-\left(\frac{w_i}{p_2}-\frac{\gamma_i}{p_1^2+p_2^2}\right)^2\langle G^2\rangle\right)\nonumber\\ &\approx \frac{1}{2}\Re\left({\left\vert \frac{w_i}{p_2}-\frac{\gamma_i}{p_1^2+p_2^2}\right\vert}^2 \frac{\vert m_{\mathrm{sc}}\vert^2}{1-\vert m_{\mathrm{sc}}\vert^2}-\left(\frac{w_i}{p_2} -\frac{\gamma_i}{p_1^2+p_2^2}\right)^2\frac{m_{\mathrm{sc}}^2}{1-m_{\mathrm{sc}}^2}\right) \nonumber\\ &\approx\frac{p_1^4}{2p_2^2(p_1^2+p_2^2)^2}\Re\left(\frac{1}{1-\vert m_{\mathrm{sc}}\vert^2} - \frac{1}{1-m_{\mathrm{sc}}^2} \right), \end{align} $$

where in the last step we used (3.18). Similarly, for the second term in (3.19), we have

(3.21) $$ \begin{align} \frac{\left\langle\big(M_2(\gamma_i)\big)^2 \mathring{W}_2^{\gamma_i}\right\rangle^2}{1 - p_1^2\left\langle \big(M_2(\gamma_i)\big)^2 \right\rangle} &\approx \frac{ \Big[ \frac{1}{p_2}\left\langle \frac{1}{p_2}G +\left(\frac{w_i}{p_2}-\frac{\gamma_i}{p_1^2+p_2^2}\right)G^2\right\rangle\Big]^2}{1-\frac{p_1^2}{p_2^2}\langle G^2\rangle} \approx \frac{m_{\mathrm{sc}}^2(p_2^2-(p_1^2+p_2^2)m_{\mathrm{sc}}^2)}{p_2^2(p_1^2+p_2^2)^2(1-m_{\mathrm{sc}}^2)}\,. \end{align} $$

Plugging (3.20) and (3.21) into (3.19) we obtain

(3.22) $$ \begin{align} p_2^2 \, \mathrm{Var}_{\gamma_i}(W_2) \approx \frac{2p_2^2+p_2\gamma_i\Re m_{\mathrm{sc}}}{\left( \Im m_{\mathrm{sc}}\right)^2}\cdot \frac{p_1^2p_2^2}{2(p_1^2+p_2^2)^2}\,. \end{align} $$

Taking the imaginary part of (3.18), we find that $|m_{\mathrm {sc}}|^2 \approx \frac {p_2^2}{p_1^2 + p_2^2}$ , and hence, using (3.8) again, we infer

(3.23) $$ \begin{align} \frac{1}{p_1^2 + p_2^2} \frac{2p_2^2+p_2\gamma_i\Re m_{\mathrm{sc}}}{\left( \Im m_{\mathrm{sc}}\right)^2} \approx \frac{\Re [|m_{\mathrm{sc}}|^2 - m_{\mathrm{sc}}^2]}{(\Im m_{\mathrm{sc}})^2} = 2\,. \end{align} $$

Combining (3.22) and (3.23) with (3.19), this completes the proof of (3.2). This proves Lemma 3.1.
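The last step in (3.23) rests on the elementary identity $\Re [|m|^2 - m^2] = 2(\Im m)^2$ , valid for every complex m. A trivial numerical confirmation (illustrative only):

```python
def ratio(m: complex) -> float:
    """Re[|m|^2 - m^2] / (Im m)^2; identically equal to 2 whenever Im m != 0,
    since |m|^2 - Re(m^2) = (Re m)^2 + (Im m)^2 - ((Re m)^2 - (Im m)^2)."""
    return (abs(m) ** 2 - (m * m).real) / (m.imag ** 2)
```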

4 Multi-resolvent local laws: Proof of Theorem 2.7

To study the eigenvectors of H, we analyse its resolvent $G(z):=(H-z)^{-1}$ , with $z\in \mathbf {C}\setminus \mathbf {R}$ . It is well known [Reference Erdős, Krüger and Schröder27, Reference Alt, Erdős, Krüger and Schröder4] that $G(z)$ becomes approximately deterministic in the large N limit. Its deterministic approximation (as a matrix) is given by $M(z)$ , the unique solution of (2.8), in the following averaged and isotropic sense:

(4.1) $$ \begin{align} |\langle (G(z)-M(z))B\rangle|\prec \frac{1}{N|\Im z|}, \qquad |\langle\mathbf{x}\,, (G(z)-M(z)) \mathbf{y}\rangle|\prec \frac{1}{\sqrt{N|\Im z|}} \,, \end{align} $$

uniformly in deterministic vectors $\lVert \mathbf {x}\rVert +\lVert \mathbf {y}\rVert \lesssim 1$ and deterministic matrices $\lVert B\rVert \lesssim 1$ . To be precise, while the local laws (4.1) hold for $\Re z \in \mathbf {B}_\kappa $ or for $\mathrm {dist}(\Re z, \mathrm {supp}(\rho )) \gtrsim 1$ for arbitrary bounded self-adjoint deformations $D =D^*$ (see [Reference Erdős, Krüger and Schröder27, Theorem 2.2]), the complementary regime requires the strengthened Assumption 2.8 on D (see [Reference Alt, Erdős, Krüger and Schröder4]). Note that cusps for $\rho $ have been excluded in Assumption 2.8. Hence, the complementary regime only consists of edges, which are covered in [Reference Alt, Erdős, Krüger and Schröder4, Theorem 2.6] under the requirement that $\Vert M \Vert $ is bounded – as also assumed in Assumption 2.8.

The isotropic bound $\langle \mathbf {x}, \Im G(z) \mathbf {x}\rangle \prec 1$ from (4.1) immediately gives an (almost) optimal bound on the delocalisation of eigenvectors: $|\langle \mathbf {u}_i,\mathbf {x}\rangle |\prec N^{-1/2}$ [Reference Lee and Schnelli32, Reference Landon and Yau31, Reference Erdős, Krüger and Schröder27, Reference Alt, Erdős, Krüger and Schröder4, Reference Benigni and Lopatto10]. However, these estimates are not precise enough to conclude optimal bounds for eigenvector overlaps and generic matrices A as in Theorem 2.7; in fact, by (4.1) we can only obtain the trivial bound $|\langle \mathbf {u}_i,A\mathbf {u}_j\rangle |\prec 1$ . Instead of the single resolvent local law (4.1), we rely on the fact that (see (4.17) below)

(4.2) $$ \begin{align} N \big|\langle \mathbf{u}_i,A\mathbf{u}_j\rangle \big|^2\lesssim \langle \Im G(\gamma_i+\mathrm{i}\eta)A \Im G(\gamma_j+\mathrm{i}\eta)A^*\rangle\, \end{align} $$

for $\eta \sim N^{-1+\epsilon }$ , where $\epsilon>0$ is small but fixed, $\gamma _i, \gamma _j \in \mathbf {B}_\kappa $ are in the bulk and we estimate the rhs. of (4.2). In particular, to prove Theorem 2.7, we will use the multi-resolvent local laws from Proposition 4.4 below.

Multi-resolvent local laws are natural generalisations of (4.1), and they assert that longer products

(4.3) $$ \begin{align} G_1 B_1 G_2 \, \cdots \, G_{k-1} B_{k-1} G_{k} \end{align} $$

of resolvents $G_i := G(z_i)$ and deterministic matricesFootnote 10 $B_1, ... , B_{k-1}$ also become approximately deterministic both in average and isotropic sense in the large N limit as long as $N|\Im z_i|\gg 1$ . The deterministic approximation to the chain (4.3) is denoted by

(4.4) $$ \begin{align} M(z_1, B_1, z_2, ... , z_{k-1}, B_{k-1}, z_{k}). \end{align} $$

It is not simply $M(z_1) B_1 M(z_2) B_2\ldots $ (i.e., it cannot be obtained by mechanically replacing each G with M as (4.1) might incorrectly suggest). Instead, it is defined recursively in the length k of the chain as follows (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Definition 4.1]):

Definition 4.1. Fix $k \in \mathbf {N}$ and let $z_1, ... , z_k \in \mathbf {C} \setminus \mathbf {R}$ be spectral parameters. As usual, the corresponding solutions to (2.8) are denoted by $M(z_j)$ , $j \in [k]$ . Then, for deterministic matrices $B_1, ... , B_{k-1}$ , we recursively define

(4.5) $$ \begin{align} M(z_1,B_1, ... B_{k-1} , z_{k}) = \big(\mathcal{B}_{1k}\big)^{-1}&\bigg[M(z_1) B_1 M(z_{2}, ... , z_{k}) \\ & + \sum_{l = 2}^{k-1} M(z_1) \langle M(z_1, ... , z_l) \rangle M(z_l, ... , z_{k}) \bigg]\,, \nonumber \end{align} $$

where we introduced the shorthand notation

(4.6) $$ \begin{align} \mathcal{B}_{mn} \equiv \mathcal{B}(z_m,z_n)= 1 - M(z_m) \langle \cdot \rangle M(z_n) \end{align} $$

for the stability operator acting on the space of $N\times N$ matrices.
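Since $\mathcal {B}_{mn}$ perturbs the identity only through the scalar $\langle X\rangle $ , it can be inverted explicitly: $\mathcal {B}_{mn}^{-1}[Y] = Y + M(z_m)\langle Y\rangle M(z_n)/\big (1-\langle M(z_m)M(z_n)\rangle \big )$ whenever $\langle M(z_m)M(z_n)\rangle \neq 1$ , and the norm of the inverse blows up as this denominator approaches zero. A small numerical sketch of this inverse (with random placeholder matrices standing in for the actual solutions $M(z_m)$ , $M(z_n)$ of (2.8); they are chosen so that the denominator stays away from zero):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 60
tr = lambda X: np.trace(X) / N           # normalised trace <X>

# Placeholder matrices in place of M(z_m), M(z_n) (not actual MDE solutions),
# scaled so that <M_m M_n> is far from 1 and the inverse is well-posed.
Mm = rng.standard_normal((N, N)) / np.sqrt(N)
Mn = rng.standard_normal((N, N)) / np.sqrt(N)

def stability(X):
    """B_{mn}[X] = X - M(z_m) <X> M(z_n)."""
    return X - tr(X) * (Mm @ Mn)

def stability_inv(Y):
    """Explicit inverse: <B[X]> = <X>(1 - <M_m M_n>), so <X> = <Y>/(1 - c)."""
    c = tr(Mm @ Mn)
    return Y + tr(Y) * (Mm @ Mn) / (1.0 - c)

X = rng.standard_normal((N, N))          # test matrix
```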

It turns out that the size of $M(z_1, B_1, z_2,\ldots , z_k)$ in the relevant regime of small $\eta :=\min _j |\Im z_j|$ is roughly $\eta ^{-k+1}$ in the worst case, with a matching error term in the corresponding local law. This blow-up in the small $\eta $ regime comes recursively from the large norm of the inverse of the stability operator $\mathcal {B}_{1k}$ in (4.5). However, for a special subspace of observable matrices $B_i$ , called regular matrices, the size of $M(z_1, B_1, z_2,\ldots , z_k)$ is much smaller. For Wigner matrices (i.e., for $D=0$ ), the regular observables are simply the traceless matrices (i.e., observables B such that $\langle B\rangle =0$ ). In [Reference Cipolloni, Erdős and Schröder16, Reference Cipolloni, Erdős and Schröder17, Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős and Schröder20], it was shown that when the matrices $B_i$ are all traceless, then $M(z_1, B_1, z_2,\ldots , z_k)$ , and hence (4.3), is smaller by a factor of order $\eta ^{k/2}$ than for general $B_i$ ’s.

The situation for deformed Wigner matrices is more complicated since the concept of regular observables will be dependent on the precise location in the spectrum of H (i.e., dependent on the energy). More precisely, we will require that the trace of A tested against a deterministic energy dependent matrix has to vanish; this reflects the inhomogeneity introduced by D. Analogously to the Wigner case, in Proposition 4.4 below, we will show that resolvent chains (4.3) are much smaller when the deterministic matrices $B_i$ are regular.

Next, we give the definition of regular matrices in the chain (4.3). Using the notation A for regular matrices, we will consider chains of resolvents and deterministic matrices of the form

(4.7) $$ \begin{align} \langle G_1 A_1 \, \cdots \, G_k A_k \rangle \end{align} $$

in the averaged case, or

(4.8) $$ \begin{align} \big(G_1 A_1 \, \cdots \, A_k G_{k+1} \big)_{\boldsymbol{x}\boldsymbol{y}} \end{align} $$

in the isotropic case, with $G_i:=G(z_i)$ and $A_i$ being regular matrices according to the following Definition 4.2 (cf. [Reference Cipolloni, Erdős, Henheik and Schröder21, Definition 4.2]), which generalises the earlier Definition 2.6.

Definition 4.2 (Regular observables – two-point regularisation in chains).

Fix a parameter $\kappa> 0$ and let $\delta = \delta (\kappa , \Vert D\Vert )> 0$ be small enough (see the discussion below). Consider one of the two expressions (4.7) or (4.8) for some fixed length $k \in \mathbf {N}$ and bounded matrices $\Vert A_i \Vert \lesssim 1$ and let $z_1, ... , z_{k+1} \in \mathbf {C} \setminus \mathbf {R}$ be spectral parameters with $\Re z_j \in \mathbf {B}_\kappa $ . For any $j \in [k]$ , we denote

(4.9) $$ \begin{align} \mathbf{1}_\delta(z_j, z_{j+1}) := \phi_\delta(\Re z_j - \Re z_{j+1} ) \ \phi_\delta(\Im z_j) \ \phi_\delta(\Im z_{j+1}), \end{align} $$

where $0 \le \phi _\delta \le 1$ is a smooth symmetric bump function on $\mathbf {R}$ satisfying $\phi _\delta (x) = 1$ for $|x|\le \delta /2$ and $\phi _\delta (x) = 0$ for $|x| \ge \delta $ . Here and in the following, in case of (4.7), the indices in (4.9) are understood cyclically modulo k.

  1. (a) For $j \in [k]$ , denoting $\mathfrak {s}_j := - \mathrm {sgn}(\Im z_j \Im z_{j+1})$ , we define the (two-point) regularisation of $A_j$ from (4.7) or (4.8) w.r.t. the spectral parameters $(z_j, z_{j+1})$ as

    (4.10) $$ \begin{align} \mathring{A}_j^{{z_j,z_{j+1}}} := A_j - \mathbf{1}_\delta(z_j, z_{j+1})\frac{\langle M(\Re z_j+ \mathrm{i} \Im z_j) A_j M(\Re z_{j+1} + \mathfrak{s}_j \mathrm{i} \Im z_{j+1}) \rangle}{\langle M(\Re z_j + \mathrm{i} \Im z_j) M(\Re z_{j+1} + \mathfrak{s}_j \mathrm{i} \Im z_{j+1}) \rangle} \,. \end{align} $$
  2. (b) Moreover, we call $A_j$ regular w.r.t. $(z_j, z_{j+1})$ if and only if $\mathring {A}^{z_j,z_{j+1}}_j = A_j$ .

As already indicated above, the two-point regularisation generalises Definition 2.6 in the sense that

(4.11) $$ \begin{align} \mathring{A}^{e \pm \mathrm{i} \eta, e \pm \mathrm{i} \eta} \longrightarrow \mathring{A}^e\,, \quad \text{and} \quad \mathring{A}^{e \pm \mathrm{i} \eta, e \mp \mathrm{i} \eta} \longrightarrow \mathring{A}^e \,, \quad \text{as} \quad \eta \downarrow 0\,, \end{align} $$

with a linear speed of convergence, for $e \in \mathbf {B}_\kappa $ and any bounded deterministic $A \in \mathbf {C}^{N \times N}$ , where we used that, by taking the imaginary part of (2.8), $M(z)M(z)^* = \Im M(z)/(\langle \Im M(z)\rangle + \Im z)$ .
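The identity $M(z)M(z)^* = \Im M(z)/(\langle \Im M(z)\rangle + \Im z)$ used above can also be checked numerically by solving the MDE (2.8), in the form $M(z) = (D - z - \langle M(z)\rangle )^{-1}$ (cf. (3.6)), via a damped fixed-point iteration. For a diagonal deformation D, the matrix equation reduces to a scalar equation for $m = \langle M(z)\rangle $ . A sketch (the choices of D and z are purely illustrative):

```python
import numpy as np

N = 400
d = np.linspace(-1.0, 1.0, N)            # eigenvalues of a diagonal deformation D
z = 0.2 + 0.1j                           # spectral parameter with Im z > 0

m = 1j                                   # initial guess in the upper half-plane
for _ in range(2000):                    # damped fixed-point iteration for m = <M>
    m = 0.5 * m + 0.5 * np.mean(1.0 / (d - z - m))
M = 1.0 / (d - z - m)                    # diagonal entries of M(z)
# At the fixed point, |M_kk|^2 = Im M_kk / (<Im M> + Im z) entrywise.
```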

Moreover, we point out that the above Definition 4.2 of the regularisation is identical to [Reference Cipolloni, Erdős, Henheik and Schröder21, Defs. 3.1 and 4.2] when dropping the summand with $\mathfrak {s} \tau = -1$ in Equation (3.7) of [Reference Cipolloni, Erdős, Henheik and Schröder21]. In particular, for spectral parameters $z_j, z_{j+1}$ satisfying $\mathbf {1}_\delta (z_j,z_{j+1})> 0$ (for some $\delta>0 $ small enough), it holds that the denominator in (4.10) is bounded away from zero, which shows that the linear map $A \mapsto \mathring {A}$ is bounded. Additionally, we have the following Lipschitz property (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 3.3]):

(4.12) $$ \begin{align} \mathring{A}^{z_1,z_2}=\mathring{A}^{w_1,w_2}+\mathcal{O}\big(|z_1-w_1|+|z_2-w_2|\big) I \end{align} $$

for any $z_1,z_2,w_1,w_2\in \mathbf {C}\setminus \mathbf {R}$ such that $\Im z_i\Im w_i>0$ . It is important that the error in (4.12) is a constant times the identity matrix, indicated by $\mathcal {O}(\cdot ) I$ .

Next, we give bounds on the size of $M(z_1, A_1, ... A_{k-1}, z_{k})$ , the deterministic approximation to the chain $G_1 A_1 \, \cdots \, A_{k-1}G_{k}$ introduced in Definition 4.1; the proof of this lemma is presented in Appendix A.

Lemma 4.3. Fix $\kappa> 0$ . Let $k \in [4]$ and $z_1, ... , z_{k+1} \in \mathbf {C} \setminus \mathbf {R}$ be spectral parameters with $\Re z_j \in \mathbf {B}_\kappa $ . Set $\eta :=\min _j |\Im z_j|$ . Then, for bounded regular deterministic matrices $A_1, ... , A_{k}$ (according to Definition 4.2), we have the bounds

(4.13) $$ \begin{align}\Vert M(z_1, A_1, ... , A_{k}, z_{k+1}) \Vert &\lesssim\begin{cases} \frac{1}{\eta^{\lfloor k/2 \rfloor}} & \text{if} \ \eta \le 1 \\\frac{1}{\eta^{k+1}} & \text{if} \ \eta> 1 \end{cases} \,, \end{align} $$
(4.14) $$ \begin{align}\vert \langle M(z_1, A_1, ... , A_{k-1}, z_{k})A_k \rangle \vert & \lesssim \begin{cases} \frac{1}{\eta^{\lfloor k/2\rfloor-1 }}\vee 1 &\text{if} \ \eta \le 1 \\\frac{1}{\eta^{k}} &\text{if} \ \eta> 1 \end{cases}\,. \end{align} $$

For the presentation of Proposition 4.4, the main technical result underlying the proof of Theorem 2.7, we would only need (4.13) and (4.14) for $k \in [2]$ from the previous lemma. However, the remaining bounds covered by Lemma 4.3 will be instrumental in several parts of our proofs (see Section 6 and Appendix A).

Proposition 4.4. Fix $\epsilon>0$ , $\kappa> 0$ , $k\in [2]$ and consider $z_1,\dots ,z_{k+1} \in \mathbf {C} \setminus \mathbf {R}$ with $\Re z_j \in \mathbf {B}_\kappa $ . Consider regular matrices $A_1,\dots ,A_k$ with $\lVert A_i\rVert \le 1$ , deterministic vectors $\mathbf {x}, \mathbf {y}$ with $\lVert \mathbf {x}\rVert +\lVert \mathbf {y} \rVert \lesssim 1$ , and set $G_i:=G(z_i)$ . Then, uniformly in $\eta :=\min _j |\Im z_j|\ge N^{-1+\epsilon }$ , we have the averaged local law

(4.15a) $$ \begin{align} \big|\langle \big(G_1A_1\dots G_k-M(z_1, A_1, ... , z_{k})\big)A_k \rangle\big|\prec \begin{cases} \frac{N^{k/2-1}}{\sqrt{N \eta}} \quad &\text{if} \ \eta \le 1 \\ \frac{1}{N \eta^{k+1}} \quad &\text{if} \ \eta> 1 \end{cases} \end{align} $$

and the isotropic local law

(4.15b) $$ \begin{align} \big|\langle \mathbf{x},\big(G_1A_1\dots G_{k+1}-M(z_1, A_1, ... , z_{k+1})\big) \mathbf{y} \rangle\big|\prec \begin{cases} \frac{N^{(k-1)/2}}{\sqrt{N \eta^2}} \quad &\text{if} \ \eta \le 1 \\ \frac{1}{\sqrt{N} \eta^{k+2}} \quad &\text{if} \ \eta> 1 \end{cases}\,. \end{align} $$

In Section 6, we will carry out the proof of Proposition 4.4 in the much more involved $\eta \le 1$ regime. For $\eta> 1$ , the bound simply follows by induction on the number of resolvents in a chain by invoking the trivial estimate $\Vert M(z) \Vert \lesssim 1/|\Im z|$ . The detailed argument has been carried out in [Reference Cipolloni, Erdős and Schröder19, Appendix B] for the case of Wigner matrices. Having Proposition 4.4 at hand, we can now prove Theorem 2.7.

Proof of Theorem 2.7.

By (4.15a) and (4.14) for $k=2$ , it follows that

(4.16) $$ \begin{align} |\langle G_1A_1G_2A_2 \rangle|\prec 1\, \end{align} $$

for arbitrary regular matrices $A_1 = \mathring {A}_1^{z_1, z_2}$ and $A_2 = \mathring {A}_2^{z_2, z_1}$ . Now, using that (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 3.6] for an analogous statement; see also (4.11) and (4.12))

$$ \begin{align*} \mathring{A}^{\gamma_i} = \mathring{A}^{{\gamma_i \pm \mathrm{i} \eta, \gamma_j \pm 2\mathrm{i} \eta}} + \mathcal{O}\big(|\gamma_i - \gamma_j|+ \eta\big) I = \mathring{A}^{{\gamma_i \pm \mathrm{i} \eta, \gamma_j \mp2\mathrm{i} \eta}} + \mathcal{O}\big(|\gamma_i - \gamma_j|+ \eta\big)I\,, \end{align*} $$

and analogously for $(\mathring {A}^*)^{\gamma _i}$ , we obtain (cf. [Reference Cipolloni, Erdős, Henheik and Schröder21, Sec. 3.3.1])

$$ \begin{align*} \langle \Im G(\gamma_i + \mathrm{i} \eta) \mathring{A}^{\gamma_i} \Im G(\gamma_j + 2\mathrm{i} \eta) (\mathring{A}^*)^{\gamma_i} \rangle \prec 1\,. \end{align*} $$

Moreover, by spectral decomposition, together with the rigidity of eigenvalues (see, for example, [Reference Ajanki, Erdős and Krüger2, Reference Erdős, Krüger and Schröder27]) it follows that (cf. [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 3.5])

(4.17) $$ \begin{align} N | \langle \mathbf{u}_i, \mathring{A}^{\gamma_i} \mathbf{u}_j \rangle|^2 \prec (N \eta)^2\langle \Im G(\gamma_i + \mathrm{i} \eta) \mathring{A}^{\gamma_i} \Im G(\gamma_j + 2\mathrm{i} \eta) (\mathring{A}^*)^{\gamma_i} \rangle \prec (N \eta)^2\,. \end{align} $$

Choosing $\eta =N^{-1+\xi /2}$ for some arbitrarily small $\xi>0$ , we conclude the desired bound.

5 Dyson Brownian motion: Proof of Theorem 2.9

The main observation we used to prove Theorem 2.7 in Section 4 is the relation (4.2) (i.e., we related the eigenvector overlaps with a trace of the product of two resolvents and two deterministic matrices). For Theorem 2.7, we only needed an upper bound on the size of the eigenvector overlaps; however, to prove Theorem 2.9, we need to identify their size. For this purpose, the main input is the relation

(5.1) $$ \begin{align} \frac{1}{N^{2\epsilon}} \sum_{|i-i_0|\le N^\epsilon \atop |j-j_0|\le N^\epsilon}N| \langle \mathbf{u}_i, \mathring{A}^{\gamma_i} \mathbf{u}_j \rangle|^2 \sim \langle \Im G(\gamma_{i_0} + \mathrm{i} \eta) \mathring{A}^{\gamma_{i_0}} \Im G(\gamma_{j_0} + 2\mathrm{i} \eta) (\mathring{A}^*)^{\gamma_{i_0}}\rangle, \end{align} $$

with $\eta =N^{-1+\epsilon }$ , for some small fixed $\epsilon>0$ , and $i_0,j_0$ being some fixed bulk indices. The relation (5.1) is clearly not enough to identify the fluctuations of the individual eigenvector overlaps, but it hints at the form of their variance. More precisely, to identify the fluctuations of $N | \langle \mathbf {u}_i, \mathring {A}^{\gamma _i} \mathbf {u}_i\rangle |^2$ , we will rely on a Dyson Brownian motion analysis which will reveal that

(5.2) $$ \begin{align} N \mathbf{E} [| \langle \mathbf{u}_i, \mathring{A}^{\gamma_i} \mathbf{u}_i \rangle|^2] \approx \frac{1}{\langle \Im M(\gamma_i) \rangle^2} \mathbf{E}\langle \Im G(\gamma_i + \mathrm{i} \eta) \mathring{A}^{\gamma_i} \Im G(\gamma_i + 2\mathrm{i} \eta) (\mathring{A}^*)^{\gamma_i}\rangle, \end{align} $$

and a similar relation holds for higher moments as well. Finally, the rhs. of (5.2) is computed using a multi–resolvent local law (see, for example, (4.15a) for $k=2$ ), and after some algebraic manipulation (see (5.52)-(5.55) below), this results in $\mathrm {Var}_{\gamma _i}(A)$ as defined in (2.15).

Given the optimal a priori bound (2.13), the proof of Theorem 2.9 is very similar to the analysis of the Stochastic Eigenstate Equation (SEE) in [Reference Cipolloni, Erdős and Schröder18, Sections 3-4] and [Reference Cipolloni, Erdős and Schröder20, Section 4]. Even though the argument is very similar to those papers, to make the presentation clearer, we write out the main steps of the proof here and explain the differences, without repeating the details; we refer the interested reader to [Reference Cipolloni, Erdős and Schröder18]. We also remark that the proofs in [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20] heavily rely on the analysis of the SEE developed in [Reference Bourgade and Yau14] and extended in [Reference Bourgade, Yau and Yin15, Reference Marcinek and Yau35].

Similar to [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20], we only consider the real case; the complex case is completely analogous, and so it is omitted. We prove Theorem 2.9 dynamically; that is, we consider the flow

(5.3) $$ \begin{align} \mathrm{d} W_t=\frac{\mathrm{d} \widetilde{B}_t}{\sqrt{N}}\,, \qquad W_0=W\, \end{align} $$

with $\widetilde {B}_t$ a real symmetric matrix-valued Brownian motion (see, for example, [Reference Bourgade and Yau14, Definition 2.1]). Note that $W_t$ has a Gaussian component of size $\sqrt {t}$ ; that is,

$$\begin{align*}W_t\stackrel{\mathrm{d}}{=}W_0+\sqrt{t}U\, \end{align*}$$

with U being a GOE matrix independent of $W_0$ . Denoting by $\lambda _i(t)$ the eigenvalues of $W_t$ (labeled in increasing order) and by $\mathbf {u}_i(t)$ the corresponding orthonormal eigenvectors, we will prove Theorem 2.9 for the eigenvectors $\mathbf {u}_i(T)$ , with $T=N^{-1+\omega }$ , for some small fixed $\omega>0$ . Since T is very small, the Gaussian component added in the flow (5.3) can easily be removed by a standard Green function comparison (GFT) argument as in [Reference Cipolloni, Erdős and Schröder20, Appendix B].
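The distributional identity $W_t\stackrel {\mathrm {d}}{=}W_0+\sqrt {t}\,U$ simply expresses that independent Gaussian matrix increments of total entrywise variance $t/N$ sum to a single Gaussian of the same variance. The following Python sketch is purely illustrative (the matrix size, time and step count are arbitrary choices of ours, not from the paper): it accumulates symmetric Gaussian increments as in (5.3) and checks that the off-diagonal entries of $W_t-W_0$ have empirical variance close to $t/N$, as for $\sqrt{t}\,U$ with $U$ a GOE matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_increment(n, dt, rng):
    """Increment dB/sqrt(n) of the flow (5.3): a symmetric Gaussian matrix with
    entry variance dt*(1 + delta_ij)/n (cf. [14, Definition 2.1])."""
    g = rng.normal(size=(n, n))
    b = (g + g.T) / np.sqrt(2.0)   # off-diagonal variance 1, diagonal variance 2
    return np.sqrt(dt / n) * b

n, t, steps = 120, 0.5, 50
w = np.zeros((n, n))               # accumulates W_t - W_0, the Gaussian part
for _ in range(steps):
    w += symmetric_increment(n, t / steps, rng)

# Off-diagonal entries of W_t - W_0 should have variance ~ t/n.
emp_var = w[np.triu_indices(n, k=1)].var()
```

The accumulated matrix stays symmetric along the flow, and its entry statistics match those of $\sqrt{t}\,U$ up to sampling error.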

By [Reference Bourgade and Yau14], it is known that the eigenvalues $\lambda _i(t)$ and the eigenvectors $\mathbf {u}_i(t)$ are the unique strong solution of the following system of stochastic differential equations (SDEs):

(5.4) $$ \begin{align} \mathrm{d} \lambda_i(t)&=\frac{\mathrm{d} B_{ii}(t)}{\sqrt{N}}+\frac{1}{N}\sum_{j\ne i} \frac{1}{\lambda_i(t)-\lambda_j(t)} \mathrm{d} t\qquad\qquad\qquad\qquad \end{align} $$
(5.5) $$ \begin{align} \qquad\mathrm{d} \mathbf{u}_i(t)&=\frac{1}{\sqrt{N}}\sum_{j\ne i} \frac{\mathrm{d} B_{ij}(t)}{\lambda_i(t)-\lambda_j(t)}\mathbf{u}_j(t)-\frac{1}{2N}\sum_{j\ne i} \frac{\mathbf{u}_i(t)}{(\lambda_i(t)-\lambda_j(t))^2}\mathrm{d} t\,, \end{align} $$

where the matrix $B(t)=(B_{ij}(t))_{i,j=1}^N$ is a standard real symmetric Brownian motion (see, for example, [Reference Bourgade and Yau14, Definition 2.1]).
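To see the level repulsion encoded in the drift of (5.4) at work, here is a purely illustrative Euler–Maruyama discretisation of the eigenvalue flow for a small matrix; the step size, horizon and initial data are arbitrary choices of ours, and no quantitative accuracy is claimed.

```python
import numpy as np

rng = np.random.default_rng(1)

def dbm_step(lam, dt, n, rng):
    """One Euler-Maruyama step of (5.4):
    d lambda_i = dB_ii/sqrt(n) + (1/n) * sum_{j != i} dt/(lambda_i - lambda_j)."""
    diff = lam[:, None] - lam[None, :]
    np.fill_diagonal(diff, np.inf)      # exclude the j = i term from the sum
    drift = (1.0 / diff).sum(axis=1) / n
    # Diagonal entries of a real symmetric Brownian motion have variance 2t.
    noise = rng.normal(scale=np.sqrt(2.0 * dt / n), size=n)
    return lam + drift * dt + noise

n = 8
lam = np.arange(n, dtype=float)         # well-separated starting configuration
for _ in range(400):
    lam = dbm_step(lam, 1e-4, n, rng)
```

With a separated initial configuration and a tiny step size, the simulated path keeps the ordering $\lambda _1<\dots <\lambda _N$, reflecting the repulsive drift between neighbouring eigenvalues.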

Even though in Theorem 2.9 we prove a CLT only for the diagonal overlaps $\langle \mathbf {u}_i, A\mathbf {u}_i\rangle $ , it follows from (5.5) that these quantities do not satisfy a closed equation. For this reason, following [Reference Bourgade, Yau and Yin15, Section 2.3], we study the evolution of the perfect matching observable (see (5.7) below) along the flow (5.5).

5.1 Perfect matching observable and proof of Theorem 2.9

We introduce the notation

(5.6) $$ \begin{align} p_{ij}=p_{ij}(t)=\langle \mathbf{u}_i,A \mathbf{u}_j\rangle-\delta_{ij}C_0\, \end{align} $$

with A a fixed real symmetric deterministic matrix and $C_0$ a fixed constant independent of i. Note that, compared to [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20], in (5.6) we define the diagonal $p_{ii}$ by subtracting not their expectation (see (2.13) above) but rather a generic constant $C_0$ which we will choose later (see (5.48) below). The reason behind this choice is that in the current setting, unlike in the Wigner case [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20], the expectation of $p_{ii}$ is i–dependent. Hence, the flow (5.10) below would not be satisfied if we had defined (5.7) with the centred $p_{ii}$ ’s.

To study moments of the $p_{ij}$ ’s, we use the particle representation introduced in [Reference Bourgade and Yau14] and further developed in [Reference Bourgade, Yau and Yin15, Reference Marcinek and Yau35]. A particle configuration, corresponding to a certain monomial in the $p_{ij}$ ’s, can be encoded by a function $\boldsymbol {\eta }:[N]\to \mathbf {N}_0$ . The image $\eta _j=\boldsymbol {\eta }(j)$ denotes the number of particles at the site j, and $\sum _j\eta _j=n$ denotes the total number of particles. Additionally, given a particle configuration $\boldsymbol {\eta }$ , by $\boldsymbol {\eta }^{ij}$ , with $i\ne j$ , we denote the new particle configuration in which a particle at the site i has been moved to the site j; if there is no particle at i, then $\boldsymbol {\eta }^{ij}=\boldsymbol {\eta }$ . We denote the set of such configurations with n particles by $\Omega ^n$ .

Fix a configuration $\boldsymbol {\eta }$ ; we then define the perfect matching observable (see [Reference Bourgade, Yau and Yin15, Section 2.3]):

(5.7) $$ \begin{align} f_{\boldsymbol{\lambda},t,C_0,C_1}(\boldsymbol{\eta}):= \frac{N^{n/2}}{ [2 C_1]^{n/2}} \frac{1}{(n-1)!!}\frac{1}{\mathcal{M}(\boldsymbol{\eta}) }\mathbf{E}\left[\sum_{G\in\mathcal{G}_{\boldsymbol{\eta}}} P(G)\Bigg| \boldsymbol{\lambda}\right] \,, \quad \mathcal{M}(\boldsymbol{\eta}):=\prod_{i=1}^N (2\eta_i-1)!!\, \end{align} $$

with n being the total number of particles in the configuration $\boldsymbol {\eta }$ . The sum in (5.7) is taken over $\mathcal {G}_{\boldsymbol {\eta }}$ , which denotes the set of perfect matchings on the complete graph with vertex set

$$\begin{align*}\mathcal{V}_{\boldsymbol{\eta}}:=\{(i,a): 1\le i\le n, 1\le a\le 2\eta_i\}\,. \end{align*}$$
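For orientation on the double-factorial normalisations entering (5.7): the complete graph on $2m$ vertices has exactly $(2m-1)!!$ perfect matchings, which is the counting fact behind the factors $(n-1)!!$ and $\mathcal {M}(\boldsymbol {\eta })=\prod _i(2\eta _i-1)!!$. A small illustrative enumeration (the helper names are ours, not from the paper):

```python
def perfect_matchings(vertices):
    """Recursively enumerate all perfect matchings of the complete graph on the
    given vertex list: pair the first vertex with each other one and recurse."""
    if not vertices:
        yield []
        return
    v0, rest = vertices[0], vertices[1:]
    for i, v in enumerate(rest):
        for m in perfect_matchings(rest[:i] + rest[i + 1:]):
            yield [(v0, v)] + m

def double_factorial(k):
    """k!! with the convention (-1)!! = 0!! = 1."""
    return 1 if k <= 0 else k * double_factorial(k - 2)

# The number of perfect matchings of K_{2m} equals (2m-1)!! = 1, 3, 15, 105, ...
counts = [sum(1 for _ in perfect_matchings(list(range(2 * m)))) for m in range(1, 5)]
```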

We also introduced the short–hand notation

(5.8) $$ \begin{align} P(G):=\prod_{e\in\mathcal{E}(G)}p(e), \qquad p(e):=p_{i_1i_2}\,, \end{align} $$

where $e=\{(i_1,a_1),(i_2,a_2)\}\in \mathcal {V}_{\boldsymbol {\eta }}^2$ , and $\mathcal {E}(G)$ denotes the set of edges of G. Note that in (5.7) we took the conditional expectation with respect to the entire trajectories of the eigenvalues, $\boldsymbol {\lambda }=\{\boldsymbol {\lambda }(t)\}_{t\in [0,T]}$ , for some fixed $0<T\ll 1$ . We also remark that the definition (5.7) differs slightly from [Reference Cipolloni, Erdős and Schröder18, Eq. (3.9)] and [Reference Cipolloni, Erdős and Schröder20, Eq. (4.6)], since we now do not normalise by $\langle (A-\langle A\rangle )^2\rangle $ but by a different constant $C_1$ which we will choose later in the proof (see (5.48) below); this is a consequence of the fact that the diagonal overlaps $p_{ii}$ are not correctly centred and normalised. Note that we did not incorporate the factor $2$ in (5.7) into the constant $C_1$ , since $C_1$ will be chosen as a normalisation constant to compensate the size of the matrix A, while the factor $2$ reflects the fact that the diagonal overlaps, after the proper centering and normalisation depending on A and i, would be centred Gaussian random variables of variance two. Furthermore, we consider eigenvalue paths $\{\boldsymbol {\lambda }({t})\}_{{t}\in [{0},{T}]}$ which lie in the event

(5.9) $$ \begin{align} \widetilde{\Omega}=\widetilde{\Omega}_\xi:= \Big\{ \sup_{0\le t \le T} \max_{i\in [N]} \eta_{\mathrm{f}}(\gamma_i(t))^{-1}| \lambda_i(t)-\gamma_i(t)| \le N^\xi\Big\} \end{align} $$

for any $\xi>0$ , where $\eta _{\mathrm {f}}(\gamma _i(t))$ is the local fluctuation scale defined as in [Reference Erdős, Krüger and Schröder28, Definition 2.4]. In most instances, we will use this rigidity estimate in the bulk regime, where $\eta _{\mathrm {f}}(\gamma _i(t))\sim N^{-1}$ ; at the edges, $\eta _{\mathrm {f}}(\gamma _i(t))\sim N^{-2/3}$ . We recall that here $\gamma _i(t)$ denote the quantiles of $\rho _t$ defined as in (2.12). The fact that the event $ \widetilde {\Omega }$ holds with very high probability follows by [Reference Alt, Erdős, Krüger and Schröder4, Corollary 2.9].

By [Reference Bourgade, Yau and Yin15, Theorem 2.6], it follows that $f_{\boldsymbol{\lambda},t}$ is a solution of the parabolic discrete partial differential equation (PDE):

(5.10) $$ \begin{align} \partial_t f_{\boldsymbol{\lambda},t}&=\mathcal{B}(t)f_{\boldsymbol{\lambda},t}\,,\qquad\qquad\qquad\qquad\quad\qquad\qquad \end{align} $$
(5.11) $$ \begin{align}\!\mathcal{B}(t)f_{\boldsymbol{\lambda},t}&=\sum_{i\ne j} c_{ij}(t) 2\eta_i(1+2\eta_j)\big(f_{\boldsymbol{\lambda},t}(\boldsymbol{\eta}^{ij}) -f_{\boldsymbol{\lambda},t}(\boldsymbol{\eta})\big)\,, \end{align} $$

where

(5.12) $$ \begin{align} c_{ij}(t):= \frac{1}{N(\lambda_i(t) - \lambda_j(t))^2}\,. \end{align} $$

In the remainder of this section, we may often omit $\boldsymbol {\lambda }$ from the notation since the paths of the eigenvalues are fixed within this proof.

The main result of this section is the following Proposition 5.1, from which Theorem 2.9 readily follows. For this purpose, we define a version of $f_t(\boldsymbol {\eta })$ with centred and rescaled $p_{ii}$ :

(5.13) $$ \begin{align} q_{\boldsymbol{\lambda},t}(\boldsymbol{\eta}):= \left(\prod_{i=1}^N\frac{1}{ \mathrm{Var}_{\gamma_i}(A)^{\eta_i/2}}\right) \frac{N^{n/2}}{2^{n/2} (n-1)!!}\frac{1}{\mathcal{M}(\boldsymbol{\eta}) }\mathbf{E}\left[\sum_{G\in\mathcal{G}_{\boldsymbol{\eta}}} Q(G)\Bigg| \boldsymbol{\lambda}\right] \end{align} $$

with $\mathring {A}^{\gamma _i}$ denoting the regular component of A defined as in (2.11):

$$\begin{align*}\mathring{A}^{\gamma_i}:=A-\frac{\langle A\Im M(\gamma_i)\rangle }{\langle \Im M(\gamma_i)\rangle}, \end{align*}$$

and

(5.14) $$ \begin{align} Q(G):=\prod_{e\in\mathcal{E}(G)}q(e), \qquad q(e):=\langle \mathbf{u}_{i_1},\mathring{A}^{\gamma_{i_1}} \mathbf{u}_{i_2}\rangle. \end{align} $$

Note that, despite its appearance, the definition in (5.14) is not asymmetric for $i_1\ne i_2$ : since $\mathring {A}^{\gamma _{i_1}}$ and $\mathring {A}^{\gamma _{i_2}}$ differ only by a multiple of the identity and $\langle \mathbf {u}_{i_1}, \mathbf {u}_{i_2}\rangle =0$ , we have $\langle \mathbf {u}_{i_1},\mathring {A}^{\gamma _{i_1}} \mathbf {u}_{i_2}\rangle =\langle \mathbf {u}_{i_1},\mathring {A}^{\gamma _{i_2}} \mathbf {u}_{i_2}\rangle $ .

We now comment on the main differences between $q_t$ and $f_t$ from (5.13) and (5.7), respectively. First of all, we notice that the $q(e)$ ’s in (5.14) are slightly different from the $p(e)$ ’s in (5.6). In particular, we choose the $q(e)$ ’s in such a way that the diagonal overlaps have very small expectation (i.e., much smaller than their fluctuation size). The price to pay for this choice is that the centering is i–dependent; hence, $q_t$ is not a solution of an equation of the form (5.10)-(5.11). We also remark that later within the proof, $C_0$ from (5.6) will be chosen as

$$\begin{align*}C_0 = \frac{\langle A\Im M(\gamma_{i_0})\rangle}{\langle \Im M(\gamma_{i_0})\rangle} \end{align*}$$

for some fixed $i_0$ such that $\gamma _{i_0}\in \mathbf {B}_\kappa $ is in the bulk (recall (2.10)). The idea behind this choice is that the analysis of the flow (5.10)–(5.11) will be completely local. We can thus fix a base point $i_0$ and ensure that the corresponding overlap is exactly centred; then the nearby overlaps for indices $|i-i_0|\le K$ , for some N–dependent $K>0$ , will not be exactly centred, but their expectation will be very small compared to the size of their fluctuations:

$$\begin{align*}\frac{\langle A\Im M(\gamma_{i_0})\rangle}{\langle \Im M(\gamma_{i_0})\rangle}-\frac{\langle A\Im M(\gamma_i)\rangle}{\langle \Im M(\gamma_i)\rangle}=\mathcal{O}\left(\frac{K}{N}\right)\,. \end{align*}$$

A consequence of this choice is also that the normalisation for $q_t$ and $f_t$ is different: for $q_t$ , we chose a normalisation that is i–dependent, whereas for $f_t$ , the normalisation $C_1$ is i–independent and later, consistently with the choice of $C_0$ , it will be chosen as

$$\begin{align*}C_1=\mathrm{Var}_{\gamma_{i_0}}(A)^{n/2}, \end{align*}$$

which is exactly the normalisation that makes $f_t(\boldsymbol {\eta })=1$ when $\boldsymbol {\eta }$ is such that $\eta _{i_0}=n$ and zero otherwise.

Proposition 5.1. For any $n\in \mathbf {N}$ , there exists $c(n)>0$ such that for any $\epsilon>0$ , and for any $T\ge N^{-1+\epsilon }$ , it holds

(5.15) $$ \begin{align} \sup_{\boldsymbol{\eta}}\ \left|q_T(\boldsymbol{\eta})- \mathbf{1}(n\,\, \mathrm{even})\right|\lesssim N^{-c(n)}\,, \end{align} $$

with very high probability. The supremum is taken over configurations $\boldsymbol {\eta }$ supported on bulk indices, and the implicit constant in (5.15) depends on n and $\epsilon $ .

Proof of Theorem 2.9.

Fix $n\in \mathbf {N}$ , an index i such that $\gamma _i \in \mathbf {B}_\kappa $ is in the bulk, and choose a configuration $\boldsymbol {\eta }$ such that $\eta _i=n$ and $\eta _j=0$ for any $j\ne i$ . Then by Proposition 5.1, we conclude that

$$\begin{align*}\mathbf{E}\left[\sqrt{\frac{N}{2\mathrm{Var}_{\gamma_i}(A)}}\langle \mathbf{u}_i(T),\mathring{A}^{\gamma_i}\mathbf{u}_{i}(T)\rangle\right]^n=\mathbf{1}(n\,\, \mathrm{even})(n-1)!!+\mathcal{O}\left( N^{-c(n)}\right), \end{align*}$$

with $T=N^{-1+\epsilon }$ , for some very small fixed $\epsilon>0$ , and $c(n)>0$ . Here $\mathring {A}^{\gamma _i}$ is defined in (2.11) and $\mathrm {Var}_{\gamma _i}(A)$ is defined in (2.15). Then, by a standard GFT argument (see, for example, [Reference Cipolloni, Erdős and Schröder20, Appendix B]), we see that

$$\begin{align*}\mathbf{E}\left[\sqrt{\frac{N}{2\mathrm{Var}_{\gamma_i}(A)}}\langle \mathbf{u}_i(T),\mathring{A}^{\gamma_i}\mathbf{u}_{i}(T)\rangle\right]^n=\mathbf{E}\left[\sqrt{\frac{N}{2\mathrm{Var}_{\gamma_i}(A)}}\langle \mathbf{u}_i(0),\mathring{A}^{\gamma_i}\mathbf{u}_{i}(0)\rangle\right]^n+\mathcal{O}\left( N^{-c(n)}\right). \end{align*}$$

This shows that the Gaussian component added by the dynamics (5.3) can be removed at the price of a negligible error implying (2.14).

The lower bound on the variance (2.16) is an explicit calculation relying on the definition of M from (2.8). In particular, we use that

  1. (i) A, and hence $\mathring {A}^{\gamma _i}$ , are self-adjoint;

  2. (ii) $\Im M(\gamma _i) \ge g$ for some $g = g(\kappa , \Vert D \Vert )> 0$ since we are in the bulk;

  3. (iii) $\langle \mathring {A}^{\gamma _i} \Im M(\gamma _i)\rangle = 0$ by definition of the regularisation;

  4. (iv) $[\Re M(\gamma _i), \Im M(\gamma _i)] = 0$ from (2.8).

Then, after writing $\mathrm {Var}_{\gamma _i}(A)$ as a sum of squares and abbreviating $\Im M = \Im M(\gamma _i)$ , we find

$$ \begin{align*} \mathrm{Var}_{\gamma_i}(A) \ge \frac{\big\langle \big(\sqrt{\Im M} \big[ A - \frac{\langle A (\Im M)^2\rangle}{ \langle (\Im M)^2\rangle}\big]\sqrt{\Im M} \big)^2 \big\rangle}{\langle (\Im M)^2\rangle} \ge g^2 \frac{\big\langle \big[ A - \frac{\langle A (\Im M)^2\rangle}{ \langle (\Im M)^2\rangle}\big]^2 \big\rangle}{\langle (\Im M)^2\rangle} \ge \frac{g^2}{\langle (\Im M)^2\rangle} \langle (A - \langle A \rangle)^2 \rangle\,, \end{align*} $$

where in the last step we used the trivial variational principle $\langle (A - \langle A \rangle )^2 \rangle = \inf _{t \in \mathbf {R}} \langle ( A - t )^2 \rangle $ . This completes the proof of Theorem 2.9.
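The final inequality rests only on the elementary variational identity $\langle (A-\langle A\rangle )^2\rangle =\inf _{t\in \mathbf {R}}\langle (A-t)^2\rangle $, the quadratic in t being minimised at $t=\langle A\rangle $. A purely illustrative numerical check on a randomly generated self-adjoint test matrix (not tied to the model of the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

n = 6
g = rng.normal(size=(n, n))
a = (g + g.T) / 2.0                      # a self-adjoint test matrix

def tr(x):
    """Normalised trace <X> = Tr(X)/N, as used throughout the paper."""
    return np.trace(x) / n

mean = tr(a)
eye = np.eye(n)
centred = tr((a - mean * eye) @ (a - mean * eye))   # <(A - <A>)^2>

# <(A - t)^2> = <(A - <A>)^2> + (t - <A>)^2 is a quadratic in t;
# the grid below contains t = <A> at its midpoint.
ts = np.linspace(mean - 1.0, mean + 1.0, 201)
vals = [tr((a - t * eye) @ (a - t * eye)) for t in ts]
```

The minimum over the grid is attained at $t=\langle A\rangle $ and agrees with $\langle (A-\langle A\rangle )^2\rangle $, while every other value of t gives something larger.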

5.2 DBM analysis

Similar to [Reference Cipolloni, Erdős and Schröder18, Section 4.1] and [Reference Cipolloni, Erdős and Schröder20, Section 4.2], we introduce an equivalent particle representation to encode moments of the $p_{ij}$ ’s. In particular, here, as previously in [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20], we rely on the particle representation (5.16)–(5.18) below, since our arguments heavily build on [Reference Marcinek and Yau35], which uses this latter representation.

Consider a particle configuration $\boldsymbol {\eta }\in \Omega ^n$ for some fixed $n\in \mathbf {N}$ (i.e., $\boldsymbol {\eta }$ is such that $\sum _j\eta _j=n$ ). We now define the new configuration space

(5.16) $$ \begin{align} \Lambda^n:= \big\{ \mathbf{x}\in [N]^{2n} \, : \, n_i(\mathbf{x}) \mbox{ is even for every } i\in [N]\big\}, \end{align} $$

where

(5.17) $$ \begin{align} n_i(\mathbf{x}):=|\{a\in [2n]:x_a=i\}| \end{align} $$

for all $i\in [N]$ .

By the correspondence

(5.18) $$ \begin{align} \boldsymbol{\eta} \leftrightarrow \mathbf{x}\qquad \eta_i=\frac{n_i( \mathbf{x})}{2}\,, \end{align} $$

it is easy to see that these two representations are essentially equivalent. The only difference is that $\mathbf {x}$ uniquely determines $\boldsymbol {\eta }$ , but $\boldsymbol {\eta }$ determines only the coordinates of $\mathbf {x}$ as a multi-set and not their ordering.
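To make the dictionary (5.16)–(5.18) concrete, here is a minimal illustration (the helper names are ours): from $\mathbf {x}\in [N]^{2n}$ one reads off the occupation numbers $n_i(\mathbf {x})$, which must all be even, and recovers $\eta _i=n_i(\mathbf {x})/2$; any reordering of the coordinates of $\mathbf {x}$ yields the same $\boldsymbol {\eta }$.

```python
from collections import Counter

def occupation(x, N):
    """n_i(x) = |{a : x_a = i}|, cf. (5.17)."""
    c = Counter(x)
    return [c.get(i, 0) for i in range(1, N + 1)]

def to_eta(x, N):
    """Project x in Lambda^n to the particle configuration eta via (5.18)."""
    occ = occupation(x, N)
    assert all(k % 2 == 0 for k in occ), "x must lie in Lambda^n"
    return [k // 2 for k in occ]

# x = (3, 3, 3, 3, 7, 7): two particles at site 3, one particle at site 7 (n = 3)
x = (3, 3, 3, 3, 7, 7)
eta = to_eta(x, N=10)
perm = (7, 3, 3, 7, 3, 3)                # a reordering of the same multiset
```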

From now on, given a function f defined on $\Omega ^n$ , we will always consider functions g on $\Lambda ^n\subset [N]^{2n}$ defined by

$$\begin{align*}f(\boldsymbol{\eta})= f(\phi(\mathbf{x}))= g(\mathbf{x}), \end{align*}$$

with $\phi :\Lambda ^n\to \Omega ^n$ , $\phi (\mathbf {x})=\boldsymbol {\eta }$ , being the projection from the $\mathbf {x}$ -configuration space to the $\boldsymbol {\eta }$ -configuration space using (5.18). We thus defined the observable

(5.19) $$ \begin{align} g_t(\mathbf{x})=g_{\boldsymbol{\lambda},t}(\mathbf{x}):= f_{\boldsymbol{\lambda},t}( \phi(\mathbf{x}))\, \end{align} $$

with $f_{\boldsymbol {\lambda },t}$ from (5.7). Note that $g_t(\mathbf {x})$ is equivariant under permutation of its arguments (i.e., it depends on $\mathbf {x}$ only as a multi–set). Similarly, we define

(5.20) $$ \begin{align} r_t(\mathbf{x})=r_{\boldsymbol{\lambda},t}(\mathbf{x}):= q_{\boldsymbol{\lambda},t}( \phi(\mathbf{x}))\,. \end{align} $$

We remark that $g_t$ and $r_t$ are the counterpart of $f_t$ and $q_t$ , respectively, in the $\textbf {x}$ -configuration space.

We can thus now write the flow (5.10)–(5.11) in the $\textbf {x}$ –configuration space:

(5.21) $$ \begin{align} \!\!\!\!\partial_t g_t(\mathbf{x})&=\mathcal{L}(t)g_t(\mathbf{x})\qquad\qquad \end{align} $$
(5.22) $$ \begin{align} \mathcal{L}(t):=\sum_{j\ne i}\mathcal{L}_{ij}(t), \quad \mathcal{L}_{ij}(t)g(\mathbf{x}):&= c_{ij}(t) \frac{n_j(\mathbf{x})+1}{n_i(\mathbf{x})-1}\sum_{a\ne b\in[2 n]}\big(g(\mathbf{x}_{ab}^{ij})-g(\mathbf{x})\big), \end{align} $$

where

(5.23) $$ \begin{align} \mathbf{x}_{ab}^{ij}:=\mathbf{x}+\delta_{x_a i}\delta_{x_b i} (j-i) (\mathbf{e}_a+\mathbf{e}_b) \end{align} $$

with $\mathbf {e}_a\in \mathbf {R}^{2n}$ denoting the standard unit vector (i.e., $\mathbf {e}_a(b)=\delta _{ab}$ ). We remark that this flow is a map on functions defined on $\Lambda ^n\subset [N]^{2n}$ which preserves equivariance.

For the following analysis, it is convenient to define the scalar product and the natural measure on $\Lambda ^n$ :

(5.24) $$ \begin{align} \langle f, g\rangle_{\Lambda^n}=\langle f, g\rangle _{\Lambda^n, \pi}:=\sum_{\mathbf{x}\in \Lambda^n}\pi(\mathbf{x})\bar f(\mathbf{x})g(\mathbf{x}), \qquad \pi(\mathbf{x}):=\prod_{i=1}^N ((n_i(\mathbf{x})-1)!!)^2, \end{align} $$

as well as the norm on $L^p(\Lambda ^n)$ :

(5.25) $$ \begin{align} \lVert f\rVert_p=\lVert f\rVert_{L^p(\Lambda^n,\pi)}:=\left(\sum_{\mathbf{x}\in \Lambda^n}\pi(\mathbf{x})|f(\mathbf{x})|^p\right)^{1/p}. \end{align} $$

The operator $\mathcal {L}=\mathcal {L}(t)$ is symmetric with respect to the measure $\pi $ , and it is negative in $L^2(\Lambda ^n)$ , with associated Dirichlet form (see [Reference Marcinek34, Appendix A.2]):

$$\begin{align*}D(g)=\langle g, (-\mathcal{L}) g\rangle_{\Lambda^n} = \frac{1}{2} \sum_{\mathbf{x}\in \Lambda^n}\pi(\mathbf{x}) \sum_{i\ne j} c_{ij}(t) \frac{n_j(\mathbf{x})+1}{n_i(\mathbf{x})-1} \sum_{a\ne b\in[2 n]}\big|g(\mathbf{x}_{ab}^{ij})-g(\mathbf{x})\big|^2. \end{align*}$$

Finally, by $\mathcal {U}(s,t)$ , we denote the semigroup associated to $\mathcal {L}$ ; that is, for any $0\le s\le t$ , it holds

(5.26) $$ \begin{align} \partial_t\mathcal{U}(s,t)=\mathcal{L}(t)\mathcal{U}(s,t), \quad \mathcal{U}(s,s)=I. \end{align} $$

5.3 Short-range approximation

As a consequence of the singularity of the coefficients $c_{ij}(t)$ in (5.22), the main contribution to the flow (5.21) comes from nearby eigenvalues; hence, its analysis will be completely local. For this purpose, we define the sets

(5.27) $$ \begin{align} \mathcal{J}=\mathcal{J}_\kappa:=\{ i\in [N]:\, \gamma_i(0)\in \mathbf{B}_{\kappa}\}, \end{align} $$

which correspond to indices with quantiles $\gamma _i(0)$ (recall (2.12)) in the bulk.

Fix a point $\mathbf {y}\in \mathcal {J}^{2n}$ and an N-dependent parameter K such that $1\ll K\ll \sqrt {N}$ . We remark that $\mathbf {y}\in \mathcal {J}^{2n}$ will be fixed for the rest of the analysis. Next, we define the averaging operator as a simple multiplication operator by a ‘smooth’ cut-off function:

(5.28) $$ \begin{align} \mathrm{Av}(K,\mathbf{y})h(\mathbf{x}):=\mathrm{Av}(\mathbf{x};K,\mathbf{y})h(\mathbf{x}), \qquad \mathrm{Av}(\mathbf{x}; K, \mathbf{y}):=\frac{1}{K}\sum_{j=K}^{2K-1} \mathbf{1}(\lVert\mathbf{x}-\mathbf{y}\rVert_1<j), \end{align} $$

with $\lVert\mathbf{x}-\mathbf{y}\rVert_1 {:=} \sum^{2n}_{a=1} |x_a-y_a|$ . For notational simplicity, we may often omit $K,\textbf {y}$ from the notation since they are fixed throughout the proof:

(5.29) $$ \begin{align} \mathrm{Av}(\mathbf{x})=\mathrm{Av}(\mathbf{x};K,\mathbf{y}), \qquad \mathrm{Av}\, h(\mathbf{x})=\mathrm{Av}(\mathbf{x})h(\mathbf{x}). \end{align} $$
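Concretely, writing $d=\lVert \mathbf {x}-\mathbf {y}\rVert _1$, the average over the K sharp indicators in (5.28) produces a ramp: $\mathrm {Av}=1$ for $d<K$, $\mathrm {Av}=0$ for $d\ge 2K-1$, and $\mathrm {Av}$ decreases by $1/K$ per unit of d in between; this is the 'smoothness' of the cut-off. An illustrative evaluation (the helper name is ours):

```python
def av(x, y, K):
    """Av(x; K, y) = (1/K) * sum_{j=K}^{2K-1} 1(||x - y||_1 < j), cf. (5.28)."""
    d = sum(abs(a - b) for a, b in zip(x, y))
    return sum(1 for j in range(K, 2 * K) if d < j) / K

K = 10
y = (0, 0, 0, 0)
# profile[d] = Av evaluated at l^1-distance d from the base point y
profile = [av((d, 0, 0, 0), y, K) for d in range(2 * K + 1)]
```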

Additionally, fix an integer $\ell $ with $1\ll \ell \ll K$ and define the short-range coefficients

(5.30) $$ \begin{align} c_{ij}^{\mathcal{S}}(t):=\begin{cases} c_{ij}(t) &\mathrm{if}\,\, i,j\in \mathcal{J} \,\, \mathrm{and}\,\, |i-j|\le \ell \\ 0 & \mathrm{otherwise}, \end{cases} \end{align} $$

where $c_{ij}(t)$ is defined in (5.12). The parameter $\ell $ is the length of the short-range interaction.

We now define a short–range approximation of $r_t$ , with $r_t$ defined in (5.20). Note that in the definition of the short–range flow (5.31) below, there is a slight notational difference compared to [Reference Cipolloni, Erdős and Schröder18, Section 4.2] and [Reference Cipolloni, Erdős and Schröder20, Section 4.2.1]: we now choose an initial condition $h_0$ which depends on $r_0$ rather than $g_0$ . This minor difference is caused by the fact that in [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20], the observable $g_t$ was already centred and rescaled, whereas in the current case, the centred and rescaled version of $g_t$ is given by $r_t$ . Hence, the definition in (5.31) is still conceptually the same as the one in [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20] (see also the paragraph above (5.47) for a more detailed explanation). We point out that we make this choice to ensure that the $L^\infty $ –norm of the short-range approximation is always bounded by $N^\xi $ (see below (5.32)). The short-range approximation $h_t=h_t(\mathbf {x})$ is defined as the unique solution of the parabolic equation

(5.31) $$ \begin{align} \begin{aligned} \partial_t h_t(\mathbf{x}; \ell, K,\mathbf{y})&=\mathcal{S}(t) h_t(\mathbf{x}; \ell, K,\mathbf{y})\\ h_0(\mathbf{x};\ell, K,\mathbf{y})=h_0(\mathbf{x};K,\mathbf{y}):&=\mathrm{Av}(\mathbf{x}; K,\mathbf{y})(r_0(\mathbf{x})-\mathbf{1}(\mathrm{n} \,\, \mathrm{even})), \end{aligned} \end{align} $$

where

(5.32) $$ \begin{align} \mathcal{S}(t):=\sum_{j\ne i}\mathcal{S}_{ij}(t), \quad \mathcal{S}_{ij}(t)h(\mathbf{x}):=c_{ij}^{\mathcal{S}}(t)\frac{n_j(\mathbf{x})+1}{n_i(\mathbf{x})-1}\sum_{a\ne b\in [2n]}\big(h(\mathbf{x}_{ab}^{ij})-h(\mathbf{ x})\big). \end{align} $$

In the remainder of this section, we may often omit K, $\mathbf {y}$ and $\ell $ from the notation, since they are fixed for the rest of the proof. We conclude this section by defining the transition semigroup $\mathcal {U}_{\mathcal {S}}(s,t)=\mathcal {U}_{\mathcal {S}}(s,t;\ell )$ associated to the short-range generator $\mathcal {S}(t)$ . Note that $\lVert h_t\rVert _\infty \le N^\xi $ , for any $t\ge 0$ and any small $\xi>0$ , since $\mathcal {U}_{\mathcal {S}}(s,t)$ is a contraction and $\lVert h_0\rVert _\infty \le N^\xi $ by (2.13), as a consequence of $h_0(\textbf {x})$ being supported on $\textbf {x}\in \mathcal {J}^{2n}$ .

5.4 $L^2$ –estimates

To prove the $L^\infty $ –bound in Proposition 5.1, we first prove an $L^2$ –bound in Proposition 5.3 below and then use an ultracontractivity argument for the parabolic PDE (5.21) (see [Reference Cipolloni, Erdős and Schröder18, Section 4.4]) to get an $L^\infty $ –bound. To get an $L^2$ –bound, we will analyse $h_t$ , the short–range version of the observable $g_t$ from (5.19), and then we will show that $h_t$ and $g_t$ are actually close to each other using the following finite speed of propagation (see [Reference Cipolloni, Erdős and Schröder18, Proposition 4.2, Lemmas 4.3–4.4]):

Lemma 5.2. Let $0\le s_1\le s_2\le s_1+\ell N^{-1}$ and let f be a function on $\Lambda ^n$ . Then for any $\mathbf {x}\in \Lambda ^n$ supported on $\mathcal {J}$ , it holds

(5.33) $$ \begin{align} \Big| (\mathcal{U}(s_1,s_2)-\mathcal{U}_{\mathcal{S}}(s_1,s_2;\ell) ) f(\mathbf{x}) \Big|\lesssim N^{1+n\xi}\frac{s_2-s_1}{\ell} \| f\|_\infty\,, \end{align} $$

for any small $\xi> 0$ . The implicit constant in (5.33) depends on n, $\epsilon $ and $\delta $ .

To estimate several terms in the analysis of (5.31), we will rely on the multi–resolvent local laws from Proposition 4.4 (in combination with the extensions in Lemmas A.1 and A.2). For this purpose, for a small $\omega> 2\xi> 0$ , we define the very high probability event (see Lemmas A.1–A.2)

(5.34) $$ \begin{align} & \widehat{\Omega}=\widehat{\Omega}_{\omega, \xi}:= \nonumber\\ &\bigcap_{\substack{e_i\in\mathbf{B}_\kappa, \\ |\Im z_i|\ge N^{-1+\omega}}}\Bigg[\bigcap_{k=2}^n \left\{\sup_{0\le t \le T}\Big|\langle G_t(z_1)\mathring{A}_1\dots G_t(z_k)\mathring{A}_k\rangle-\mathbf{1}(k=2)\langle M(z_1, \mathring{A}_1, z_2) \mathring{A}_2\rangle\Big|\le \frac{N^{\xi+k/2-1}}{\sqrt{N\eta}}\right\} \nonumber\\ &\qquad\quad\cap \left\{\sup_{0\le t \le T}\big|\langle G_t(z_1)\mathring{A}_1\rangle\big|\le \frac{N^\xi}{N\sqrt{|\Im z_1|}}\right\}\Bigg] \nonumber\\ &\bigcap_{\substack{z_1,z_2:\, e_1\in\mathbf{B}_\kappa, \\ |e_1-e_2|\ge c_1,\, |\Im z_i|\ge N^{-1+\omega}}} \left\{\sup_{0\le t \le T}\big|\langle G_t(z_1)B_1G_t(z_2)B_2\rangle\big|\le N^\xi\right\}\,, \end{align} $$

where $\mathring {A}_1,\dots ,\mathring {A}_k$ are regular matrices defined as in Definition 4.2 (here we used the short–hand notation $\mathring {A}_i=\mathring {A}_i^{z_i,z_{i+1}}$ ),

(5.35) $$ \begin{align} \langle M(z_1, \mathring{A}_1, z_2) \mathring{A}_2\rangle=\langle M(z_1)\mathring{A}_1 M(z_2)\mathring{A}_2\rangle +\frac{\langle M(z_1)\mathring{A}_1M(z_2)\rangle\langle M(z_2)\mathring{A}_2M(z_1)\rangle}{1-\langle M(z_1)M(z_2)\rangle}\,, \end{align} $$

where $\eta :=\min \{|\Im z_i| \,:\, i\in [k]\}$ , $c_1>0$ is a fixed small constant, and $B_1,B_2$ are norm-bounded deterministic matrices. We remark that for $|e_1-e_2|\ge c_1$ , we have the norm bound $\lVert M(z_1,B_1, z_2)\rVert \lesssim 1$ , with $M(z_1,B_1, z_2)$ being defined in (4.4). Then, by standard arguments (see, for example, [Reference Cipolloni, Erdős and Schröder20, Eq. (4.30)]), we conclude the bound (recall that $\widetilde {\Omega }_\xi $ from (5.9) denotes the rigidity event)

(5.36) $$ \begin{align} \max_{i,j\in \mathcal{J}}|\langle\mathbf{u}_i(t), A \mathbf{u}_j(t)\rangle |\le \frac{N^{\omega}}{\sqrt{N}} \qquad \,\, \mbox{on } \;\widehat{\Omega}_{\omega,\xi} \cap\widetilde{\Omega}_\xi\,, \end{align} $$

simultaneously for all $i,j\in \mathcal {J}$ and $0\le t\le T$ . Additionally, using the notation $\rho _{i,t}:=|\Im \langle M_t(z_i)\rangle |$ on $\widehat {\Omega }_{\omega ,\xi }\cap \widetilde {\Omega }_\xi $ , it also holds that

(5.37) $$ \begin{align} |\langle\mathbf{u}_i(t), A \mathbf{u}_j(t)\rangle|\le N^\omega \sqrt{\frac{\langle \Im G(\gamma_i(t)+\mathrm{i}\eta)A\Im G(\gamma_j(t)+\mathrm{i}\eta)A \rangle}{N\rho_{i,t}\rho_{j,t}}}\lesssim \frac{N^\omega}{N^{1/4}}\,, \end{align} $$

when at least one of the indices i and j is in the bulk and $|i-j|\ge c N$ , for some small constant c depending on $c_1$ from (5.34). Here we used that for the index in the bulk, say i, we have $\rho _{i,t}\sim 1$ , and for the other index $\rho _{j,t}\gtrsim N^{-1/2}$ as a consequence of $\eta \gg N^{-1}$ . We point out that this nonoptimal bound $N^{-1/4}$ , instead of the optimal $N^{-1/2}$ , follows from the fact that the bound from Lemma A.2 is not optimal when one of the two spectral parameters is close to an edge; this is exactly the same situation as in [Reference Cipolloni, Erdős and Schröder20, Eq. (4.31)], where we get an analogous nonoptimal bound for overlaps of eigenvectors that are not in the bulk.

We are now ready to prove the main technical proposition of this section. Note the additional term $KN^{-1/2}$ in the error $\mathcal {E}$ in (5.39) compared to [Reference Cipolloni, Erdős and Schröder18, Proposition 4.2] and [Reference Cipolloni, Erdős and Schröder20, Proposition 4.4]; this is a consequence of the fact that the $p_{ii}$ ’s in (5.6) are not correctly centred. We stress that the base point $\textbf {y}$ in Proposition 5.3 is fixed throughout the remainder of this section.

Proposition 5.3. For any parameters satisfying $N^{-1}\ll \eta \ll T_1 \ll \ell N^{-1} \ll K N^{-1} \ll N^{-1/2}$, and any small $\epsilon, \xi> 0$, it holds that

(5.38) $$ \begin{align} \lVert h_{T_1}(\cdot; \ell, K, \mathbf{y})\rVert_2\lesssim K^{n/2}\mathcal{E}, \end{align} $$

with

(5.39) $$ \begin{align} \mathcal{E}:= N^{n\xi}\left(\frac{N^\epsilon\ell}{K}+\frac{NT_1}{\ell}+\frac{N\eta}{\ell}+\frac{N^\epsilon}{\sqrt{N\eta}}+\frac{1}{\sqrt{K}}+\frac{K}{\sqrt{N}}\right), \end{align} $$

uniformly in particle configurations $\textbf {y}$ such that $y_a=i_0$ for all $a\in [2n]$, with $i_0\in \mathcal {J}$, and in eigenvalue trajectories $\lambda $ in the high probability event $\widetilde{\Omega}_\xi \cap \widehat{\Omega}_{\omega,\xi} $.

Proof. The proof of this proposition is very similar to the one of [Reference Cipolloni, Erdős and Schröder18, Proposition 4.2] and [Reference Cipolloni, Erdős and Schröder20, Proposition 4.4]. We thus only explain the main differences here. In the following, the star over $\sum $ denotes that the summation runs over two n-tuples of fully distinct indices. The key idea in this proof is that in order to rely on the multi–resolvent local laws (5.34), we replace the operator $\mathcal {S}(t)$ in (5.31) with the new operator

(5.40) $$ \begin{align} \mathcal{A}(t):=\sum_{\mathbf{i}, \mathbf{j}\in [N]^n}^*\mathcal{A}_{ \mathbf{i}\mathbf{j}}(t), \quad \mathcal{A}_{\mathbf{i}\mathbf{j}}(t)h(\mathbf{x}):=\frac{1}{\eta}\left(\prod_{r=1}^n a_{i_r,j_r}^{\mathcal{S}}(t)\right)\sum_{\mathbf{a}, \mathbf{b}\in [2n]^n}^*(h(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})-h(\mathbf{x})), \end{align} $$

where

(5.41) $$ \begin{align} a_{ij}=a_{ij}(t):=\frac{\eta}{N((\lambda_i(t)-\lambda_j(t))^2+\eta^2)}, \end{align} $$

and $a_{ij}^{\mathcal{S}}$ is its short-range version, defined as in (5.30) with $c_{ij}(t)$ replaced by $a_{ij}(t)$, and

(5.42) $$ \begin{align} \mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}}:=\mathbf{x}+\left(\prod_{r=1}^n \delta_{x_{a_r}i_r}\delta_{x_{b_r}i_r}\right)\sum_{r=1}^n (j_r-i_r) (\mathbf{e}_{a_r}+\mathbf{e}_{b_r}). \end{align} $$

The main idea behind this replacement is that infinitesimally $\mathcal {S}(t)$ averages only in one direction at a time, whereas $\mathcal {A}(t)$ averages in all directions simultaneously. This is expressed by the fact that $\mathbf {x}^{ij}_{ab}$ from (5.23) changes two entries of $\mathbf {x}$ at a time; instead, $\mathbf {x}_{\mathbf {a}\mathbf {b}}^{\mathbf {i}\mathbf {j}}$ changes all the coordinates of $\mathbf {x}$ at the same time (i.e., let $\mathbf {i}:=(i_1,\dots , i_n), \mathbf {j}:=(j_1,\dots , j_n)\in [N]^n$, with $\{i_1,\dots ,i_n\}\cap \{j_1,\dots , j_n\}=\emptyset $; then $\mathbf {x}_{\mathbf {a}\mathbf {b}}^{\mathbf {i}\mathbf {j}}\ne \mathbf {x}$ if and only if for all $r\in [n]$, it holds that $x_{a_r}=x_{b_r}=i_r$). Technically, the replacement of $\mathcal {S}(t)$ by $\mathcal {A}(t)$ can be performed at the level of Dirichlet forms.

Lemma 5.4 (Lemma 4.6 of [Reference Cipolloni, Erdős and Schröder18]).

Let $\mathcal {S}(t)$ and $\mathcal {A}(t)$ be defined in (5.32) and (5.40), respectively, and let $\mu $ denote the uniform measure on $\Lambda ^n$, for which $\mathcal {A}(t)$ is reversible. Then there exists a constant $C(n)> 0$ such that

(5.43) $$ \begin{align} \langle h, \mathcal{S}(t) h\rangle_{\Lambda^n, \pi}\le C(n) \langle h, \mathcal{A}(t) h\rangle_{\Lambda^n,\mu}\le 0 \end{align} $$

for any $h\in L^2(\Lambda ^n)$, on the very high probability event $\widetilde{\Omega}_\xi \cap \widehat{\Omega}_{\omega,\xi}$.

We first note that, by (5.31), it follows that

(5.44) $$ \begin{align} \partial_t \lVert h_t\rVert_2^2=2\langle h_t, \mathcal{S}(t) h_t\rangle_{\Lambda^n}. \end{align} $$

Then, combining this with (5.43), and using that $\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}}=\mathbf{x}$ unless $x_{a_r}=x_{b_r}=i_r$ for all $r \in [n]$, we conclude that

(5.45) $$ \begin{align} \begin{aligned} \partial_t \lVert h_t\rVert_2^2&\le C(n) \langle h_t,\mathcal{A}(t) h_t\rangle_{\Lambda^n,\mu} \\[4pt] &=\frac{C(n) }{2\eta}\sum_{\mathbf{x}\in \Lambda^n}\sum_{\mathbf{i},\mathbf{j}\in [N]^n}^* \left(\prod_{r=1}^n a_{i_r j_r}^{\mathcal{S}}(t)\right)\sum_{\mathbf{a}, \mathbf{b}\in [2n]^n}^*\overline{h_t}(\mathbf{ x})\big(h_t(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i} \mathbf{j}})-h_t(\mathbf{x})\big)\left(\prod_{r=1}^n \delta_{x_{a_r}i_r}\delta_{x_{b_r}i_r}\right). \end{aligned} \end{align} $$

Then, proceeding as in the proof of [Reference Cipolloni, Erdős and Schröder18, Proposition 4.5] (see also [Reference Cipolloni, Erdős and Schröder20, Eq. (4.40)]), we conclude that

(5.46) $$ \begin{align} \partial_t\lVert h_t\rVert_2^2\le -\frac{C_1(n)}{2\eta}\lVert h_t\rVert_2^2+\frac{C_3(n)}{\eta}\mathcal{E}^2K^n, \end{align} $$

which implies $\lVert h_{T_1}\rVert _2^2\le C(n)\mathcal {E}^2 K^n$ by a simple Gronwall inequality, using that $T_1\gg \eta $.
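For completeness, we spell out this Gronwall step (with the constants from (5.46)): integrating the differential inequality gives

$$ \begin{align*} \lVert h_{T_1}\rVert_2^2\le \mathrm{e}^{-C_1(n)T_1/(2\eta)}\lVert h_0\rVert_2^2+\frac{2C_3(n)}{C_1(n)}\,\mathcal{E}^2K^n\,, \end{align*} $$

and since $T_1\gg \eta$, the first term is exponentially small (note that $\lVert h_0\rVert_2^2$ grows at most polynomially in N).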

We point out that to go from (5.45) to (5.46), the proof is completely analogous to [Reference Cipolloni, Erdős and Schröder18, Proof of Proposition 4.5], with the only exception being the proof of [Reference Cipolloni, Erdős and Schröder18, Eqs. (4.41), (4.43)]. We thus now explain how to obtain the analog of [Reference Cipolloni, Erdős and Schröder18, Eqs. (4.41), (4.43)] in the current case as well. The fact that we now have the bound (5.37) rather than the stronger bound $N^{-1/3}$ as in [Reference Cipolloni, Erdős and Schröder20, Eq. (4.31)] does not cause any difference in the final estimate. We thus focus on the main new difficulty in the current analysis (i.e., that in (5.31), we choose the initial condition depending on $r_0$ rather than $g_0$ ). We recall that the difference between $r_0$ and $g_0$ is that $r_0$ is defined in such a way that all the eigenvector overlaps are precisely centred and normalised in an i–dependent way, whereas for $g_0$ , we can choose the i–independent constant $C_0,C_1$ so that only the overlap corresponding to a certain base point $i_0$ is exactly centred and normalised, while the nearby overlaps are centred and normalised only modulo a negligible error $K/N$ (see also the paragraph below (5.14) for a detailed explanation). This additional difficulty requires that to prove the analog of [Reference Cipolloni, Erdős and Schröder18, Eqs. (4.41), (4.43)], we need to estimate the error produced by this mismatch.

Using that the function $f(\mathbf {x})\equiv \mathbf {1}({n} \,\, \mathrm {even})$ is in the kernel of $\mathcal {L}(t)$ for any fixed $\mathbf {x}\in \Gamma $ and for any fixed $\mathbf {i}, \mathbf {a}, \mathbf {b}$, we conclude (recall the notation from (5.29))

(5.47) $$ \begin{align} \begin{aligned} &h_t(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i} \mathbf{j}})\\[4pt] &=\mathcal{U}_{\mathcal{S}}(0,t)\big((\mathrm{Av} r_0)(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})-(\mathrm{Av}\mathbf{1}({n} \,\, \mathrm{even}))(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})\big) \\[4pt] &=\mathrm{Av}(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})\big(\mathcal{U}_{\mathcal{S}}(0,t)r_0(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})-\mathbf{1}({n}\,\, \mathrm{even})\big)+\mathcal{O}\left(\frac{{N}^{\epsilon+{n}\xi} \ell}{{K}}\right) \\[4pt] &=\left(\mathrm{Av}(\mathbf{x})+\mathcal{O}\left(\frac{\ell}{K}\right)\right)\left(\mathcal{U}(0,t)r_0(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})-\mathbf{1}({n}\,\, \mathrm{even})+\mathcal{O}\left(\frac{{N}^{1+{n}\xi}{t}}{\ell}\right)\right)+\mathcal{O}\left(\frac{N^{\epsilon+n\xi} \ell}{K}\right) \\[4pt] &=\mathrm{Av}(\mathbf{x})\big(g_t(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i}\mathbf{j}})-\mathbf{1}({n}\,\, \mathrm{even})\big)+\mathcal{O}\left(\frac{{N}^{\epsilon+{n}\xi} \ell}{{K}}+\frac{{N}^{1+{n}\xi}{t}}{\ell}+\frac{{N}^\xi {K}}{\sqrt{{N}}}\right), \end{aligned} \end{align} $$

where in the definition of $g_t$ from (5.7) and (5.19), we chose

(5.48) $$ \begin{align} C_0:=\frac{\langle A\Im M(\gamma_{i_0})\rangle }{\langle \Im M(\gamma_{i_0})\rangle}, \qquad C_1:=\mathrm{Var}_{\gamma_{i_0}}(A), \end{align} $$

and the error terms are uniform in $\mathbf {x}\in \Gamma $. Here, $i_0$ is the index defined below (5.39). The first three equalities are completely analogous to [Reference Cipolloni, Erdős and Schröder18, Eq. (4.41)]. We now explain how to obtain the last equality at the price of the additional negligible error $KN^{-1/2}$. Recall the definition of $r_t$ from (5.13) and (5.20); we now show that for any $\textbf {x}$ supported in the bulk, it holds that

(5.49) $$ \begin{align} \lVert r_0(\textbf{x})-g_0(\textbf{x})\rVert_\infty\lesssim \frac{N^\xi K}{\sqrt{N}} \end{align} $$

for $C_0,C_1$ chosen as in (5.48). Using (5.49), together with $\mathcal {U}(0,t)g_0=g_t$ , this proves the last equality in (5.47). The main input in the proof of (5.49) is the following approximation result:

(5.50) $$ \begin{align} \mathring{A}^{\gamma_{i_0}}-\mathring{A}^{\gamma_{i_r}}=\mathcal{O}(|\gamma_{i_0}-\gamma_{i_r}|)I=\mathcal{O}(KN^{-1})I. \end{align} $$

We now explain the proof of (5.49); for simplicity, we present the proof only in the case $n=2$ . To prove (5.49), we see that

$$\begin{align*}2|\langle \mathbf{u}_i,\mathring{A}^{\gamma_{i_0}} \mathbf{u}_j\rangle|^2+\langle \mathbf{u}_i,\mathring{A}^{\gamma_{i_0}} \mathbf{u}_i\rangle\langle \mathbf{u}_j,\mathring{A}^{\gamma_{i_0}} \mathbf{u}_j\rangle=2|\langle \mathbf{u}_i,\mathring{A}^{\gamma_i} \mathbf{u}_j\rangle|^2+\langle \mathbf{u}_i,\mathring{A}^{\gamma_i} \mathbf{u}_i\rangle\langle \mathbf{u}_j,\mathring{A}^{\gamma_j} \mathbf{ u}_j\rangle+\mathcal{O}\left(\frac{1}{N}\cdot\frac{N^\xi K}{\sqrt{N}}\right), \end{align*}$$

where we used (5.50) to replace the ‘wrong’ $\mathring {A}^{\gamma _{i_0}}$ with the ‘correct’ $\mathring {A}^{\gamma _i}$, together with the a priori bound $\langle \mathbf {u}_i,\mathring {A}^{\gamma _i} \mathbf {u}_j \rangle \le N^\xi N^{-1/2}$. Then, multiplying this relation by N, we obtain (5.49). Additionally, since $r_0$ and $g_0$ contain a different rescaling in terms of $\mathrm {Var}_{\gamma _{i_0}}(A)$ and $\mathrm {Var}_{\gamma _i}(A)$, we also used that, by similar computations,

$$\begin{align*}\mathrm{Var}_{\gamma_{i_0}}(A)=\mathrm{Var}_{\gamma_i}(A)+\mathcal{O}\left(\frac{N^\xi K}{\sqrt{N}}\right). \end{align*}$$

In particular, we used this approximation to compensate for the mismatch that only the diagonal overlaps corresponding to the index $i_0$ are properly centred and normalised in the definition of $g_0$; for nearby indices, we use this approximation to replace the approximate centering $C_0$ and normalisation $C_1$ from (5.48) with the correct ones, which are those in the definition of $r_0$.

Then, proceeding as in the proof of [Reference Cipolloni, Erdős and Schröder20, Eq. (4.41)], we conclude the analog of [Reference Cipolloni, Erdős and Schröder18, Eq. (4.43)]:

(5.51) $$ \begin{align} \begin{aligned} &\sum_{\mathbf{j}}^*\left(\prod_{r=1}^n a_{i_r j_r}^{\mathcal{S}}(t)\right)\big(g_t(\mathbf{x}_{\mathbf{a}\mathbf{b}}^{\mathbf{i} \mathbf{j}})-\mathbf{1}({n}\,\, \mathrm{even})\big) \\ & \quad= \sum_{\mathbf{j}}\left(\prod_{r=1}^n a_{i_r j_r}(t)\right)\left(\frac{N^{n/2}}{\mathrm{Var}_{\gamma_{i_0}}(A)^{n/2} 2^{n/2}(n-1)!!}\sum_{G\in \mathcal{G}_{\boldsymbol{\eta}^{\mathbf{j}}}}P(G)-\mathbf{1}({n}\,\, \mathrm{even})\right) \\ &\qquad+\mathcal{O}\left(\frac{N^{n\xi}}{N\eta}+\frac{N^{1+n\xi}\eta}{\ell}+\frac{N^\xi K}{\sqrt{N}}\right). \end{aligned} \end{align} $$

Given (5.51), the remaining part of the proof is completely analogous to [Reference Cipolloni, Erdős and Schröder18, Eqs. (4.44)–(4.51)], except for the slightly different computation

(5.52) $$ \begin{align} &\frac{\langle \Im G(\lambda_{i_{r_1}}+\mathrm{i} \eta)\mathring{A}^{\gamma_{i_0}}\Im G (\lambda_{i_{r_2}}+\mathrm{i} \eta)\mathring{A}^{\gamma_{i_0}}\rangle}{\mathrm{Var}_{\gamma_{i_0}}(A)} \nonumber\\ &\qquad\qquad=-\frac{1}{4\mathrm{Var}_{\gamma_{i_0}}(A)}\sum_{\sigma,\tau\in \{+,-\}}\langle G(\lambda_{i_{r_1}}+\sigma\mathrm{i} \eta)\mathring{A}^{\gamma_{i_0}} G (\lambda_{i_{r_2}}+\tau\mathrm{i} \eta)\mathring{A}^{\gamma_{i_0}}\rangle\nonumber\\ &\qquad \qquad= -\frac{1}{4\mathrm{Var}_{\gamma_{i_0}}(A)}\sum_{\sigma,\tau\in \{+,-\}}\left\langle G(\lambda_{i_{r_1}}+\mathrm{i} \sigma \eta)\mathring{A}^{\gamma_{i_{r_1}}+\mathrm{i} \sigma\eta,\gamma_{i_{r_2}}+\mathrm{i}\tau\eta} G (\lambda_{i_{r_2}}+\mathrm{i} \tau \eta)\mathring{A}^{\gamma_{i_{r_2}}+\mathrm{i} \tau\eta, \gamma_{i_{r_1}}+\mathrm{i} \sigma\eta}\right\rangle \nonumber\\ &\qquad\qquad\quad+\mathcal{O}\left(\frac{K}{N^2\eta^{3/2}}+\frac{K^2}{N^2\eta} +\frac{1}{N\sqrt{\eta}}\right)\nonumber\\ &\qquad\qquad=\langle \Im M(\gamma_{i_0})\rangle^2+\mathcal{O}\left(\frac{K}{N^2\eta^{3/2}}+\frac{K^2}{N^2\eta}+\frac{1}{\sqrt{N\eta}}\right), \end{align} $$

which replaces [Reference Cipolloni, Erdős and Schröder18, Eqs. (4.47)]. Here, $\mathring {A}^{\gamma _{i_{r_1}}\pm \mathrm {i} \eta ,\gamma _{i_{r_2}}\pm \mathrm {i}\eta }$ is defined as in Definition 4.2. We also point out that in the second equality, we used the approximation (see (4.12))

(5.53) $$ \begin{align} \mathring{A}^{\gamma_{i_0}}=\mathring{A}^{\gamma_{i_{r_1}}\pm\mathrm{i}\eta,\gamma_{i_{r_2}}\pm \mathrm{i} \eta} +\mathcal{O}\big(|\gamma_{i_{r_1}}-\gamma_{i_0}| +|\gamma_{i_{r_2}}-\gamma_{i_0}|+\eta\big)I =\mathring{A}^{\gamma_{i_{r_1}},\gamma_{i_{r_2}}}+\mathcal{O}\left(\frac{K}{N}+\eta\right)I, \end{align} $$

together with (here we present the estimate only for one representative term – the other being analogous)

(5.54) $$ \begin{align} \langle G(\lambda_{i_{r_1}}+\mathrm{i} \eta) G (\lambda_{i_{r_2}}+\mathrm{i} \eta)\mathring{A}^{\gamma_{i_0}}\rangle&=\langle G(\lambda_{i_{r_1}}+\mathrm{i} \eta)G (\lambda_{i_{r_2}}+\mathrm{i} \eta)\mathring{A}^{\gamma_{i_{r_2}}+\mathrm{i} \eta,\gamma_{i_{r_2}}+\mathrm{i}\eta}\rangle+\mathcal{O}\left(1+\frac{K}{N\eta}\right) \nonumber\\ &=\langle M(\gamma_{i_{r_2}}+\mathrm{i} \eta, \mathring{A}^{\gamma_{i_{r_2}}+\mathrm{i} \eta, \gamma_{i_{r_1}}+\mathrm{i}\eta}, \gamma_{i_{r_1}}+\mathrm{i} \eta) \rangle+\mathcal{O}\left( 1+\frac{1}{N\eta^{3/2}}+\frac{K}{N\eta}\right)\nonumber\\ & =\mathcal{O}\left(1+\frac{1}{N\eta^{3/2}}+\frac{K}{N\eta}\right), \end{align} $$

which follows by (5.34) and Lemma 4.3 to estimate the deterministic term, together with the integral representation from [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.1] (see also (6.17) later), to bound the error terms arising from the replacement (5.53). Additionally, in the third equality, we used the local law for two resolvents from (5.34):

(5.55) $$ \begin{align} &-\frac{1}{4}\sum_{\sigma,\tau\in \{+,-\}}\left\langle G(\lambda_{i_{r_1}}+\mathrm{i} \sigma \eta)\mathring{A}^{\gamma_{i_{r_1}}+\mathrm{i} \sigma\eta,\gamma_{i_{r_2}}+\mathrm{i}\tau\eta} G (\lambda_{i_{r_2}}+\mathrm{i} \tau \eta)\mathring{A}^{\gamma_{i_{r_2}}+\mathrm{i} \tau\eta, \gamma_{i_{r_1}}+\mathrm{i} \sigma\eta}\right\rangle \nonumber\\ &\qquad\quad =-\frac{1}{4}\sum_{\sigma,\tau\in \{+,-\}}\left\langle M(\gamma_{i_{r_1}}+\mathrm{i} \sigma \eta, \mathring{A}^{\gamma_{i_{r_1}}+\mathrm{i} \sigma\eta,\gamma_{i_{r_2}}+\mathrm{i}\tau\eta}, \gamma_{i_{r_2}}+\mathrm{i} \tau\eta)\mathring{A}^{\gamma_{i_{r_2}}+\mathrm{i} \tau\eta, \gamma_{i_{r_1}}+\mathrm{i} \sigma\eta}\right\rangle+\mathcal{O}\left(\frac{1}{\sqrt{N\eta}}\right) \nonumber\\ &\qquad\quad = \langle \Im M(\gamma_{i_0})\rangle^2\mathrm{Var}_{\gamma_{i_0}}(A)+\mathcal{O}\left(\frac{1}{\sqrt{N\eta}}+\frac{K}{N}\right), \end{align} $$

with the deterministic term defined in (5.35). In the second equality, we used the approximation (5.53) again. We remark that here we presented this slightly different (compared to [Reference Cipolloni, Erdős and Schröder18, Reference Cipolloni, Erdős and Schröder20]) computation only for a chain of length $k=2$; the computation for longer chains is completely analogous and so is omitted. This concludes the proof of (5.38).

We conclude this section with the proof of Proposition 5.1.

Proof of Proposition 5.1.

Combining the $L^2$ –bound on $h_t$ from Proposition 5.3 and the finite speed of propagation estimates in Lemma 5.2, we can enhance this $L^2$ –bound to an $L^\infty $ –bound, completely analogously to the proof of [Reference Cipolloni, Erdős and Schröder18, Proposition 3.2] presented in [Reference Cipolloni, Erdős and Schröder18, Section 4.4].

6 Proof of Proposition 4.4

Our strategy for proving Proposition 4.4 (in the much more involved $\eta \le 1$ regime) is to derive a system of master inequalities (Proposition 6.3) for the errors in the local laws by cumulant expansion and then use an iterative scheme to gradually improve their estimates. The cumulant expansion naturally introduces longer resolvent chains, potentially leading to an uncontrollable hierarchy, so our master inequalities are complemented by a set of reduction inequalities (Lemma 6.4) to estimate longer chains in terms of shorter ones. We used a similar strategy in [Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős and Schröder20] for Wigner matrices, but now, analogously to [Reference Cipolloni, Erdős, Henheik and Schröder21], which dealt with non-Hermitian i.i.d. matrices, many new error terms need to be handled, owing to several adjustments of the z-dependent two-point regularisations. Given the strong analogy with [Reference Cipolloni, Erdős, Henheik and Schröder21], our proof of the master inequalities formulated in Proposition 6.3 and given in Section 6.2 will be rather short and focus on the main differences between [Reference Cipolloni, Erdős, Henheik and Schröder21] and the current setup.

As the basic control quantities, analogously to [Reference Cipolloni, Erdős and Schröder19, Reference Cipolloni, Erdős, Henheik and Schröder21], in the remainder of the proof we introduce the normalised differences

(6.1) $$ \begin{align} \!\!\Psi_k^{\mathrm{av}}(\boldsymbol{z}_k, \boldsymbol{A}_k) &:= N \eta^{k/2} |\langle G_1 A_1 \cdots G_{k} A_k - M(z_1, A_1, ... , z_{k}) A_k \rangle |\,, \end{align} $$
(6.2) $$ \begin{align} \Psi_k^{\mathrm{iso}}(\boldsymbol{z}_{k+1}, \boldsymbol{A}_{k}, \boldsymbol{x}, \boldsymbol{y}) &:= \sqrt{N \eta^{k+1}} \left\vert \big( G_1 A_1 \cdots A_{k} G_{k+1} - M(z_1, A_1, ... , A_{k}, z_{k+1}) \big)_{\boldsymbol{x} \boldsymbol{y}} \right\vert \end{align} $$

for $k \in \mathbf {N}$ , where we used the short-hand notations

$$ \begin{align*} G_i:= G(z_i)\,, \quad \eta := \min_i |\Im z_i|\,, \quad \boldsymbol{z}_k :=(z_1, ... , z_{k})\,, \quad \boldsymbol{A}_k:=(A_1, ... , A_{k})\,. \end{align*} $$

The deterministic matrices $\Vert A_i \Vert \le 1$ , $i \in [k]$ , are assumed to be regular (i.e., $A_i = \mathring {A}^{z_i, z_{i+1}}$ ; see Definition 4.2) and the deterministic counterparts used in (6.1) and (6.2) are given recursively in Definition 4.1. For convenience, we extend the above definitions to $k=0$ by

$$ \begin{align*} \Psi_0^{\mathrm{av}}(z):= N \eta |\langle G(z) - M(z)\rangle |\,, \quad \Psi_0^{\mathrm{iso}}(z, \boldsymbol{x}, \boldsymbol{y}) := \sqrt{N \eta} \big| \big(G(z) - M(z)\big)_{\boldsymbol{x} \boldsymbol{y}} \big| \end{align*} $$

and observe that

(6.3) $$ \begin{align} \Psi_0^{\mathrm{av}} + \Psi_0^{\mathrm{iso}} \prec 1 \end{align} $$

is the usual single-resolvent local law from (4.1), where here and in the following, the arguments of $\Psi _k^{\mathrm {av/iso}}$ shall occasionally be omitted. We remark that the index k counts the number of regular matrices in the sense of Definition 4.2.

Throughout the entire argument, let $\epsilon> 0$ and $\kappa> 0$ be arbitrary but fixed and let

(6.4) $$ \begin{align} \mathbf{D}^{(\epsilon, \kappa)}:= \big\{ z \in \mathbf{C} : \Re z \in \mathbf{B}_{\kappa}\,, \ N^{100} \ge |\Im z| \ge N^{-1+\epsilon}\big\} \end{align} $$

be the spectral domain, where the $\kappa $ -bulk $\mathbf {B}_\kappa $ has been introduced in (2.10). Strictly speaking, we would need to define an entire (finite) family of slightly enlarged spectral domains along which the above-mentioned iterative scheme for proving Proposition 4.4 is conducted. Since this has been carried out in detail in [Reference Cipolloni, Erdős, Henheik and Schröder21] (see, in particular, [Reference Cipolloni, Erdős, Henheik and Schröder21, Figure 2]), we will neglect this technicality and henceforth assume all bounds on $\Psi _k^{\mathrm {av/iso}}$ to be uniform on $\mathbf {D}^{(\epsilon , \kappa )}$ in the following sense.

Definition 6.1 (Uniform bounds in the spectral domain).

Let $\epsilon> 0$ and $\kappa> 0$ as above and let $k \in \mathbf {N}$ . We say that the bounds

(6.5) $$ \begin{align} \begin{aligned} \big\vert \langle G(z_1) B_1 \ \cdots \ G(z_k) B_k - M(z_1, B_1, ... , z_k) B_k \rangle \big\vert &\prec \mathcal{E}^{\mathrm{av}}\,, \\[2mm] \left\vert \big( G(z_1) B_1 \ \cdots \ B_k G(z_{k+1}) - M(z_1, B_1, ... , B_k, z_{k+1}) \big)_{\boldsymbol{x} \boldsymbol{y}} \right\vert &\prec \mathcal{E}^{\mathrm{iso}} \end{aligned} \end{align} $$

hold $(\epsilon , \kappa )$ -uniformly (or simply uniformly) for some deterministic control parameters $\mathcal {E}^{\mathrm {av/iso}} = \mathcal {E}^{\mathrm {av/iso}}(N, \eta )$ , depending only on N and $ \eta := \min _i |\Im z_i|$ , if the implicit constants in (6.5) are uniform in bounded deterministic matrices $\Vert B_j \Vert \le 1$ , deterministic vectors $\Vert \boldsymbol {x} \Vert , \Vert \boldsymbol {y} \Vert \le 1$ , and admissible spectral parameters $z_j\in \mathbf {D}^{(\epsilon , \kappa )}$ satisfying $ 1 \ge \eta := \min _j |\Im z_j|$ .

Moreover, we may allow for additional restrictions on the deterministic matrices. For example, we may talk about uniformity under the additional assumption that some (or all) of the matrices are regular (in the sense of Definition 4.2).

Note that (6.5) is stated for a fixed choice of spectral parameters $z_j$ in the left-hand side, but it is in fact equivalent to an apparently stronger statement when the same bound holds with a supremum over the spectral parameters (with the same constraints). While one implication is trivial, the other direction follows from (6.5) by a standard grid argument (see, for example, the discussion after [Reference Cipolloni, Erdős and Schröder19, Definition 3.1]).

We can now formulate Proposition 4.4 in the language of our basic control quantities $\Psi _k^{\mathrm {av/iso}}$ .

Lemma 6.2 (Estimates on $\Psi ^{\mathrm {av/iso}}_1$ and $\Psi ^{\mathrm {av/iso}}_2$ ).

For any $\epsilon> 0$ and $\kappa> 0$ , we have

(6.6) $$ \begin{align} \Psi_1^{\mathrm{av}} + \Psi_1^{\mathrm{iso}} \prec 1 \qquad \text{and} \qquad \Psi_2^{\mathrm{av}} + \Psi_2^{\mathrm{iso}}\prec \sqrt{N \eta} \end{align} $$

$(\epsilon , \kappa )$ -uniformly in regular matrices.

Proof of Proposition 4.4.

The $\eta \ge 1$ case was already explained right after Proposition 4.4. The more critical $\eta \le 1$ case immediately follows from Lemma 6.2.

6.1 Master inequalities and reduction lemma: Proof of Lemma 6.2

We now state the relevant part of a nonlinear infinite hierarchy of coupled master inequalities for $\Psi ^{\mathrm {av}}_k$ and $\Psi ^{\mathrm {iso}}_k$ . In fact, for our purposes, it is sufficient to have only the inequalities for $k \in [2]$ . Slightly simplified versions of these master inequalities will be used in Appendix A for general $k \in \mathbf {N}$ . The proof of Proposition 6.3 is given in Section 6.2.

Proposition 6.3 (Master inequalities; see Proposition 4.9 in [Reference Cipolloni, Erdős, Henheik and Schröder21]).

Assume that for some deterministic control parameters $ \psi _j^{\mathrm {av/iso}}$ , we have that

(6.7) $$ \begin{align} \Psi_j^{\mathrm{av/iso}} \prec \psi_j^{\mathrm{av/iso}}\,, \quad j \in [4]\, \end{align} $$

holds uniformly in regular matrices. Then we have

(6.8a) $$ \begin{align} \Psi_1^{\mathrm{av}} &\prec 1 + \frac{\psi_1^{\mathrm{av}}}{N \eta} + \frac{\psi_1^{\mathrm{iso}} + (\psi_2^{\mathrm{av}})^{1/2} }{(N \eta)^{1/2}} + \frac{(\psi_2^{\mathrm{iso}})^{1/2}}{(N \eta)^{1/4}} \,,\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{align} $$
(6.8b) $$ \begin{align} \Psi_1^{\mathrm{iso}} &\prec 1 + \frac{\psi_1^{\mathrm{iso}} + \psi_1^{\mathrm{av}} }{(N \eta)^{1/2}} + \frac{(\psi_2^{\mathrm{iso}})^{1/2}}{(N \eta)^{1/4}} \,,\qquad\qquad\qquad \qquad\qquad\qquad \qquad\ \ \,\qquad\qquad\end{align} $$
(6.8c) $$ \begin{align} \Psi_2^{\mathrm{av}} &\prec 1 + \frac{(\psi_1^{\mathrm{av}})^{2} + {(\psi_1^{\mathrm{iso}})^2} + \psi_2^{\mathrm{av}}}{N \eta}+ \frac{\psi_2^{\mathrm{iso}} + (\psi_4^{\mathrm{av}})^{1/2} }{(N \eta)^{1/2}} + \frac{(\psi_3^{\mathrm{iso}})^{1/2} + (\psi_4^{\mathrm{iso}})^{1/2}}{(N \eta)^{1/4}} \,,\qquad \end{align} $$
(6.8d) $$ \begin{align} \Psi_2^{\mathrm{iso}} &\prec 1 + \psi_1^{\mathrm{iso}} + \frac{ \psi_1^{\mathrm{av}} \psi_1^{\mathrm{iso}} + (\psi_1^{\mathrm{iso}})^2 }{N \eta}+ \frac{\psi_2^{\mathrm{iso}} + (\psi_1^{\mathrm{iso}} \psi_3^{\mathrm{iso}})^{1/2}}{(N \eta)^{1/2}} + \frac{(\psi_3^{\mathrm{iso}})^{1/2} + \hspace{-2pt}(\psi_4^{\mathrm{iso}})^{1/2}}{(N \eta)^{1/4}}\,, \end{align} $$

again uniformly in regular matrices.

As shown in the above proposition, resolvent chains of length $k=1,2$ are estimated by resolvent chains up to length $2k$ . To avoid the indicated infinite hierarchy of master inequalities with higher and higher k indices, we will need the following reduction lemma.

Lemma 6.4 (Reduction inequalities; see Lemma 4.10 in [Reference Cipolloni, Erdős, Henheik and Schröder21]).

As in (6.7), assume that $\Psi _j^{\mathrm {av/iso}} \prec \psi _j^{\mathrm {av/iso}}$ holds for $1 \le j \le 4$ uniformly in regular matrices. Then we have

(6.9) $$ \begin{align} \Psi_4^{\mathrm{av}} \prec (N\eta)^2 + (\psi_2^{\mathrm{av}})^2\,, \end{align} $$

uniformly in regular matrices, and

(6.10) $$ \begin{align} \begin{aligned} \Psi_3^{\mathrm{iso}} &\prec N \eta \left( 1+ \frac{\psi_2^{\mathrm{iso}}}{\sqrt{N\eta}} \right) \left( 1 + \frac{\psi_2^{\mathrm{av}}}{N\eta} \right)^{1/2}\,,\\ \Psi_4^{\mathrm{iso}} &\prec (N \eta)^{3/2}\left( 1+ \frac{\psi_2^{\mathrm{iso}}}{\sqrt{N\eta}} \right) \left( 1 + \frac{\psi_2^{\mathrm{av}}}{N\eta} \right)\,, \end{aligned} \end{align} $$

again uniformly in regular matrices.

Proof. This is completely analogous to [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 4.10] and hence omitted. The principal idea is to write out the left-hand sides of (6.9) and (6.10) by spectral decomposition and tacitly employ a Schwarz inequality. This leaves us with shortened chains, where certain resolvents G are replaced with absolute values $|G|$, which can be handled by means of a suitable integral representation [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 6.1].
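To illustrate the Schwarz step in its simplest instance (a sketch): spectral decomposition $G(z)=\sum_i \mathbf{u}_i\mathbf{u}_i^*/(\lambda_i-z)$ of both resolvents in a two-chain, followed by a Cauchy–Schwarz inequality in the double sum, gives

$$ \begin{align*} |\langle G_1A_1G_2A_2\rangle|=\Bigg|\frac{1}{N}\sum_{i,j}\frac{\langle \mathbf{u}_i, A_1\mathbf{u}_j\rangle\langle \mathbf{u}_j, A_2\mathbf{u}_i\rangle}{(\lambda_i-z_1)(\lambda_j-z_2)}\Bigg|\le \langle |G_1|A_1|G_2|A_1^*\rangle^{1/2}\langle |G_2|A_2|G_1|A_2^*\rangle^{1/2}\,, \end{align*} $$

with $|G|:=(GG^*)^{1/2}$. The same mechanism, applied to the longer chains in (6.9) and (6.10), splits a chain carrying four regular matrices into chains carrying only two of them, at the price of replacing some resolvents by their absolute values.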

Now the estimates (6.6) follow by combining Proposition 6.3 and Lemma 6.4 in an iterative scheme, which has been carried out in detail in [Reference Cipolloni, Erdős, Henheik and Schröder21, Section 4.3]. This completes the proof of Lemma 6.2.

6.2 Proof of the master inequalities in Proposition 6.3

The proof of Proposition 6.3 is very similar to the proof of the master inequalities in [Reference Cipolloni, Erdős, Henheik and Schröder21, Proposition 4.9]. Therefore, we shall only elaborate on (6.8a) as a showcase in some detail and briefly discuss (6.8b)-(6.8d) afterwards.

First, we notice that [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.2] also holds for deformed Wigner matrices (see Lemma 6.5 below). In order to formulate it, recall the definition of the second order renormalisation, denoted by underline, from [Reference Cipolloni, Erdős, Henheik and Schröder21, Equation (5.3)]. For a function $f(W)$ of the Wigner matrix W, we define

(6.11) $$ \begin{align} \underline{W f(W)} := W f(W) - \widetilde{\mathbb{E}} \big[ \widetilde{W} (\partial_{\widetilde{W}}f)(W) \big]\,, \end{align} $$

where $\partial _{\widetilde {W}}$ denotes the directional derivative in the direction of $\widetilde {W}$, which is a GUE matrix that is independent of W. The expectation is taken w.r.t. the matrix $\widetilde {W}$. Note that if W itself is a GUE matrix, then $\mathbf {E} \underline {Wf(W)} = 0$, whereas for W with a general single entry distribution, this expectation is independent of the first two moments of W. In other words, the underline renormalises the product $W f(W)$ to second order.

We note that $\widetilde {\mathbf {E}} \widetilde {W} R \widetilde {W} = \langle R \rangle $ and furthermore, that the directional derivative of the resolvent is given by $\partial _{\widetilde {W}} G = -G \widetilde {W} G$ . For example, in the special case $f(W) = (W + D -z)^{-1} = G$ , we thus have

$$ \begin{align*} \underline{WG} = WG + \langle G \rangle G \end{align*} $$

by definition of the underline in (6.11).
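Indeed, combining $\partial_{\widetilde{W}}G=-G\widetilde{W}G$ with $\widetilde{\mathbf{E}}\,\widetilde{W}R\widetilde{W}=\langle R\rangle$ (applied to $R=G$, which is independent of $\widetilde{W}$), the subtracted term in (6.11) evaluates to

$$ \begin{align*} \widetilde{\mathbb{E}}\big[\widetilde{W}(\partial_{\widetilde{W}}G)(W)\big]=-\widetilde{\mathbb{E}}\big[\widetilde{W}G\widetilde{W}\big]G=-\langle G\rangle G\,, \end{align*} $$

which gives the stated formula $\underline{WG} = WG + \langle G \rangle G$.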

Lemma 6.5. Under the assumption (6.7), for any regular matrix $A = \mathring {A}$ , we have that

(6.12) $$ \begin{align} \langle (G-M)\mathring{A}\rangle = -\langle\underline{WG\mathring{A}'}\rangle+\mathcal{O}_{\prec}\left(\mathcal{E}_1^{\mathrm{av}}\right)\, \end{align} $$

for some other regular matrix $A'=\mathring {A}'$ , which linearly depends on A (see (6.21) for an explicit formula). Here, $G=G(z)$ and $\eta :=|\Im z|$ . For the error term, we used the shorthand notation

(6.13) $$ \begin{align} \mathcal{E}_1^{\mathrm{av}} :=\frac{1}{N\eta^{1/2}}\left(1+\frac{\psi_1^{\mathrm{av}}}{N\eta}\right)\,. \end{align} $$

By simple complex conjugation of (6.12), we may henceforth assume that $z = e + \mathrm {i} \eta $ with $\eta> 0$ . The representation (6.12) will be verified later. Now, using (6.12), we compute the even moments of $\langle (G-M)\mathring {A}\rangle $ as

(6.14) $$ \begin{align} \mathbf{E}\left\vert\langle(G-M)A\rangle\right\vert{}^{2p}=\left\vert -\mathbf{E} \langle \underline{WG}A'\rangle\langle (G-M)A\rangle^{p-1}\langle (G-M)^*A^*\rangle^p\right\vert+\mathcal{O}_\prec \left(\left(\mathcal{E}_1^{\mathrm{av}}\right)^{2p}\right) \end{align} $$

and then apply a so-called cumulant expansion to the first summand. More precisely, we write out the averaged traces and employ an integration by parts (see, for example, [Reference Cipolloni, Erdős and Schröder16, Eq. (4.14)])

(6.15) $$ \begin{align} \mathbf{E} w_{ab} f(W) = \mathbf{E} |w_{ab}|^2 \mathbf{E} \partial_{w_{ba}}f(W) + ... \quad \text{with} \quad \mathbf{E} |w_{ab}|^2 = \frac{1}{N}\,, \end{align} $$

where the dots indicate terms with higher derivatives and an explicit error term, which can be made arbitrarily small depending on the number of involved derivatives (see, for example, [Reference Erdős, Krüger and Schröder27, Proposition 3.2]). We note that if W were a GUE matrix, the relation (6.15) would be exact, without higher derivatives, as shall be discussed below.
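In the Gaussian case, (6.15) reduces to the classical integration by parts (Stein) identity: for a real centred Gaussian g with variance $\sigma^2$ and any sufficiently regular f,

$$ \begin{align*} \mathbf{E}\, g f(g)=\frac{1}{\sqrt{2\pi}\,\sigma}\int_{\mathbf{R}} g f(g)\,\mathrm{e}^{-g^2/(2\sigma^2)}\,\mathrm{d} g=\frac{\sigma^2}{\sqrt{2\pi}\,\sigma}\int_{\mathbf{R}} f'(g)\,\mathrm{e}^{-g^2/(2\sigma^2)}\,\mathrm{d} g=\sigma^2\, \mathbf{E}\, f'(g)\,, \end{align*} $$

using $g\,\mathrm{e}^{-g^2/(2\sigma^2)}=-\sigma^2\partial_g\, \mathrm{e}^{-g^2/(2\sigma^2)}$; for general entry distributions, the higher cumulants generate the additional derivative terms indicated by the dots in (6.15).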

Considering the explicitly written Gaussian term in (6.15) for the main term in (6.14), we find that it is bounded from above by (a p-dependent constant times)

(6.16) $$ \begin{align} \mathbf{E}\left[\frac{\vert \langle GGA'GA\rangle\vert + \vert\langle G^*GA'G^*A^*\rangle\vert}{N^2}\vert \langle (G-M)A\rangle\vert^{2p-2} \right]\,. \end{align} $$

The main technical tool to estimate (6.16) is the following contour integral representation for the square of the resolvent (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.1]). This is given by

(6.17) $$ \begin{align} G(z)^2=\frac{1}{2\pi \mathrm{i}}\int\limits_{\Gamma}\frac{G(\zeta)}{(\zeta-z)^2}\mathrm{d} \zeta\,, \end{align} $$

where the contour $\Gamma =\Gamma (z)$ is the boundary, parametrised counterclockwise, of a finite disjoint union of half-bands with common base J. Here, J is a finite disjoint union of closed intervals, which we take as $\mathbf {B}_{\kappa '}$ for a suitable $\kappa ' \in (0, \kappa )$, to be chosen below, and which hence contains $e=\Re z$; the height parameter of the half-bands is chosen to be smaller than, say, $\eta /2$. Applying the integral identity (6.17) to the product $GG$ in the term $\langle GGA'GA\rangle $ yields that
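We recall why (6.17) holds (a sketch): since $G(\zeta)=(W+D-\zeta)^{-1}$ is analytic in $\zeta$ away from the spectrum of $W+D$ and satisfies $\partial_z G(z)=G(z)^2$, Cauchy's integral formula applied on Γ, which encircles z once counterclockwise while staying away from the spectrum, gives

$$ \begin{align*} G(z)^2=\partial_z G(z)=\partial_z\, \frac{1}{2\pi \mathrm{i}}\int\limits_{\Gamma}\frac{G(\zeta)}{\zeta-z}\,\mathrm{d}\zeta=\frac{1}{2\pi \mathrm{i}}\int\limits_{\Gamma}\frac{G(\zeta)}{(\zeta-z)^2}\,\mathrm{d}\zeta\,, \end{align*} $$

where the contribution of the unbounded vertical parts of the contour is controlled by the trivial norm bound $\lVert G(\zeta)\rVert\le |\Im\zeta|^{-1}$.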

(6.18) $$ \begin{align} \left\vert\langle GGA'GA\rangle\right\vert \lesssim \left\vert\int\limits_{\Gamma}\frac{\langle G(\zeta)A'GA\rangle}{(\zeta-z)^2}\mathrm{d}\zeta\right\vert\,. \end{align} $$

Now, we split the contour $\Gamma $ into three parts; i.e.,

(6.19) $$ \begin{align} \Gamma = \Gamma_1 + \Gamma_2 + \Gamma_3\,. \end{align} $$

As depicted in Figure 1, the first part, $\Gamma _1$ , of the contour consists of the entire horizontal part of $\Gamma $ . The second part, $\Gamma _2$ , covers the vertical components up to $|\Im \zeta | \le N^{100}$ . Finally, $\Gamma _3$ consists of the remaining part with $|\Im \zeta |> N^{100}$ . The contribution coming from $\Gamma _3$ can be estimated with a trivial norm bound on G. To estimate the integral over $\Gamma _2$ , we choose the parameter $\kappa '$ in the definition of $J = \mathbf {B}_{\kappa '}$ in such a way that the distance between z and $\Gamma _2$ is greater than $\delta> 0$ from Definition 4.2. Hence, for $\zeta \in \Gamma _2$ , every matrix is considered regular w.r.t. $(z,\zeta )$ .

Figure 1 The contour $\Gamma $ is split into three parts (see (6.19)). Depicted is the situation, where the bulk $\mathbf {B}_{\kappa }$ consists of two components. The boundary of the associated domain $\mathbf {D}^{(\epsilon , \kappa )}$ is indicated by the two U-shaped dashed lines. Modified version of [Reference Cipolloni, Erdős, Henheik and Schröder21, Figure 4].

Therefore, after splitting the contour and estimating each contribution as just described with the aid of Lemma 4.3, it remains to control the contribution of the horizontal part of the contour.

In this integral over $J= \mathbf {B}_{\kappa '}$ (i.e., over the horizontal part $\Gamma _1$ ), we decompose A and $A'$ according to the spectral parameters of the adjacent resolvents and use the regularity properties of these decompositions.

Now the integral over J is represented as a sum of four integrals: one of them contains two regular matrices, and the rest contain at least one identity matrix accompanied by a small error factor. For the first one, we use the same estimates as for the integral over the vertical part $\Gamma _2$ . For the other terms, thanks to the identity matrix, we can use a resolvent identity and note that the $\big (\vert x-e\vert +\eta \big )$ -error improves the original $1/\eta $ -blow-up of $\int _J \frac {1}{(x-e)^2+\eta ^2}\mathrm {d} x$ to an only $|\log \eta |$ -divergent singularity, which is incorporated into ‘ $\prec $ ’.

For the term $\langle G^*GA'G^*A^*\rangle $ from (6.16), we use a similar strategy: after an application of the Ward identity $G^* G = \Im G/\eta $ , we decompose the deterministic matrices $A, A'$ according to the spectral parameters of their neighbouring resolvents in the product. This argument gives us that each of the terms $\langle GGA'GA\rangle $ and $\langle G^*GA'G^*A^*\rangle $ is stochastically dominated by

$$ \begin{align*} \frac{1}{\eta}\left(1+\frac{\psi_2^{\mathrm{av}}}{N\eta}\right). \end{align*} $$
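The Ward identity used above can be checked directly on a small example (a minimal numerical sketch with the convention $G(z) = (H-z)^{-1}$):

```python
import numpy as np

# Direct check of the Ward identity G*G = Im G / η for G(z) = (H - z)^{-1},
# η = Im z: from G - G* = G((H - z̄) - (H - z))G* = 2iη GG*, and G, G*
# commute since both are functions of the Hermitian matrix H.
rng = np.random.default_rng(1)
N = 40
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / 2
z = 0.1 + 0.7j
G = np.linalg.inv(H - z * np.eye(N))
ImG = (G - G.conj().T) / 2j
err = np.linalg.norm(G.conj().T @ G - ImG / z.imag)
print(err)   # agreement up to machine precision
```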

Contributions stemming from higher-order cumulants in (6.16) are estimated in exactly the same way as in [Reference Cipolloni, Erdős, Henheik and Schröder21, Section 5.5]. The proof of (6.8a) is concluded by applying Young's inequality to (6.16) (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Section 5.1]).

Finally, we show that (6.12) holds:

Proof of Lemma 6.5.

The proof of this representation is simpler than the proof of its analogue [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.2] because in the current setting, all terms in [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.2] containing the particular chiral symmetry matrix $E_{-}$ are absent. In the same way as in [Reference Cipolloni, Erdős, Henheik and Schröder21], we arrive at the identity

(6.20) $$ \begin{align} \langle (G-M)A\rangle = -\langle \underline{WG} \mathcal{X}\left[A\right]M \rangle+\langle G-M\rangle \langle (G-M)\mathcal{X}[A]M \rangle\,, \end{align} $$

where we introduced the bounded linear operator $\mathcal {X}[B] := \big (1 - \langle M \cdot M \rangle \big )^{-1}[B]$ . Indeed, boundedness follows from the explicit formula

$$ \begin{align*} \mathcal{X}[B] = B + \frac{\langle MBM\rangle}{1 - \langle M^2 \rangle} \end{align*} $$

by means of the lower bound

$$ \begin{align*} \vert 1 - \langle M^2 \rangle \vert = \vert (1 - \langle M M^* \rangle) - 2\mathrm{i} \langle M \Im M \rangle \vert \ge 2 \langle \Im M \rangle^2 \gtrsim 1\,, \end{align*} $$

obtained by taking the imaginary part of the MDE (2.8), in combination with $\Vert M \Vert \lesssim 1$ .
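Both steps can be verified by short computations. The following sketch assumes that the operator $\langle M \cdot M\rangle$ acts as $B \mapsto \langle MBM\rangle\, I$ (the action implicit in the explicit formula) and that the MDE (2.8) has the common deformed-Wigner form $-M(z)^{-1} = z - D + \langle M(z)\rangle$; both are standard, but stated here as our assumptions. First, using $\langle M I M\rangle = \langle M^2\rangle$,

$$ \begin{align*} \big(1 - \langle M\cdot M\rangle\big)\mathcal{X}[B] = B + \frac{\langle MBM\rangle}{1-\langle M^2\rangle}\,I - \langle MBM\rangle\, I - \frac{\langle MBM\rangle \langle M^2\rangle}{1-\langle M^2\rangle}\, I = B\,, \end{align*} $$

so the explicit formula indeed inverts $1 - \langle M\cdot M\rangle$. Second, writing $M = M^* + 2\mathrm{i}\Im M$ gives $\Re\big(1-\langle M^2\rangle\big) = \big(1-\langle MM^*\rangle\big) + 2\langle (\Im M)^2\rangle$; the imaginary part of the MDE yields $\Im M = \big(\Im z + \langle \Im M\rangle\big)MM^*$, hence $\langle MM^*\rangle \le 1$, and Cauchy–Schwarz for the normalised trace gives $\langle (\Im M)^2\rangle \ge \langle \Im M\rangle^2$, which is $\gtrsim 1$ in the bulk.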

Next, completely analogous to [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 5.4], we find the decomposition

(6.21) $$ \begin{align} \mathcal{X}[A]M = \big(\mathcal{X}[A]M\big)^\circ + \mathcal{O}(\eta) I = \mathring{A}' + \mathcal{O}(\eta)I \,, \qquad A':=\mathcal{X}[A]M\,. \end{align} $$

Plugging this into (6.20), we thus infer

(6.22) $$ \begin{align} \langle (G-M)A\rangle = -\langle \underline{WG}A'\rangle + \langle G-M\rangle \langle (G-M)\mathring{A}'\rangle + \left(-\langle \underline{WG}\rangle + \langle G-M\rangle^2\right) \mathcal{O}(\eta). \end{align} $$

The second term in the rhs. of (6.22) is obviously bounded by $\psi _1^{\mathrm {av}}/(N^2\eta ^{3/2})$ , and in the third term, we use the usual local law (4.1) to estimate it by $\mathcal {O}_\prec \big (N^{-1}\big )$ . Combining this information gives (6.12).

Notice that the above arguments leading to (6.8a) are completely identical to the ones required in the proof of the analogous master inequality in [Reference Cipolloni, Erdős, Henheik and Schröder21, Proposition 4.9] with one minor but key modification: every term involving the chiral symmetry matrix $E_-$ in [Reference Cipolloni, Erdős, Henheik and Schröder21] is simply absent, and hence, with the notation of [Reference Cipolloni, Erdős, Henheik and Schröder21], all sums over signs $\sum _{\sigma = \pm }\cdots $ collapse to a single summand with $E_+ \equiv I$ . With this recipe, the proofs of (6.8b)–(6.8d) are completely analogous to the ones given in [Reference Cipolloni, Erdős, Henheik and Schröder21, Sections 5.2–5.5] and hence omitted.

A Additional technical lemmas

In this appendix, we prove several technical lemmas underlying the proofs of our main results.

A.1 Bounds on averaged multi-resolvent chains

For the proof of Theorem 2.9, we need to extend the key estimate $\big \vert \big \langle G(z_1) \mathring {A}_1 G(z_2) \mathring {A}_2 \big \rangle \big \vert \prec 1$ for $\Re z_1, \Re z_2 \in \mathbf {B}_\kappa $ from (4.16), which underlies the proof of the ETH in Theorem 2.7, in two directions. First, we need to consider chains with an arbitrary number of resolvents in Lemma A.1, but we will only need a weak suboptimal bound which makes its proof quite direct and short. Second, in Lemma A.2, we no longer restrict $\Re z_1, \Re z_2 \in \mathbf {B}_\kappa $ to the bulk but assume that $|\Re z_1 - \Re z_2| + |\Im z_1| + |\Im z_2| \ge \nu $ for some N-independent constant $\nu> 0$ and additionally allow for arbitrary (non-regular) matrices $A_1, A_2$ . This second extension requires Assumption 2.8 on the deformation D (i.e., the boundedness of $M(z)$ also for $\Re z \notin \mathbf {B}_\kappa $ ); this is a slightly stronger requirement than just the boundedness of D assumed in Theorem 2.7. Both extensions are relevant for constructing the high probability event $\widehat {\Omega }$ in (5.34) for the DBM analysis. The proofs of Lemma A.1 and Lemma A.2 are simple extensions and slight adjustments of the arguments used in Proposition 4.4, and they will only be sketched.

Lemma A.1. Fix $\epsilon>0$ , $\kappa> 0$ , $k\in \mathbf {N}$ , and consider $z_1,\dots ,z_{k} \in \mathbf {C} \setminus \mathbf {R}$ with $\Re z_j \in \mathbf {B}_\kappa $ . Consider regular matrices $A_1,\dots ,A_k$ with $\lVert A_i\rVert \le 1$ , deterministic vectors $\mathbf {x}, \mathbf {y}$ with $\lVert \mathbf {x}\rVert +\lVert \mathbf {y} \rVert \lesssim 1$ , and set $G_i:=G(z_i)$ . Define

(A.1) $$ \begin{align} \mathcal{G}_k:=\widehat{G}_1A_1\dots A_{k-1}\widehat{G}_kA_k, \qquad \widehat{G}_j\in \{G_j,|G_j|\}\,. \end{align} $$

Then, uniformly in $\eta :=\min _j |\Im z_j|\ge N^{-1+\epsilon }$ , we have

(A.2) $$ \begin{align} \big|\langle \mathcal{G}_k \rangle\big|\prec \frac{N^{k/2-1}}{\sqrt{N\eta}}\,. \end{align} $$

Proof. We only consider the case when $\mathcal {G}_k=G_1A_1\dots A_{k-1}G_kA_k$ ; the general case when some $G_j$ is replaced with $|G_j|$ is completely analogous and so omitted. To keep the notation short, with a slight abuse of notation we will often denote $(GA)^k=G_1A_1\dots A_{k-1}G_kA_k$ .

We split the proof into three steps. In Step (i), we first prove the slightly weaker bound $\big | \langle \mathcal {G}_k \rangle \big |\prec N^{k/2-1}$ for any $k\ge 3$ , together with a similar bound in the isotropic sense; in Step (ii), using Step (i) as an input, we prove the better estimate (A.2) for $k=3,4$ . Finally, in Step (iii), we prove (A.2) for general $k\ge 3$ .

Step (i): Similar to the proof of the reduction inequalities in Lemma 6.4 (see [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma 4.10]), we readily see that for $k=2j$ (we omit the indices),

(A.3) $$ \begin{align} \small \langle (GA)^{2j}\rangle \lesssim N \begin{cases} \langle |G|A(GA)^{j/2-1}|G|A(G^*A)^{j/2-1}\rangle^2 &\quad j\,\,\mathrm{even}, \\ \langle |G|A(GA)^{(j-1)/2}|G|A(G^*A)^{(j -1)/2}\rangle \langle |G|A(GA)^{(j-3)/2}|G|A(G^*A)^{(j-3)/2}\rangle &\quad j\,\,\mathrm{odd}, \\ \end{cases} \end{align} $$

and for $k=2j-1$ ,

(A.4) $$ \begin{align} \langle (GA)^{2j-1}\rangle \lesssim \langle|G|A(GA)^{j-2}|G|A(G^*A)^{j-2} \rangle^{1/2}\langle|G|A(GA)^{j-1}|G|A(G^*A)^{j-1} \rangle^{1/2}. \end{align} $$

We proceed by induction on the length of the chain. First, we use (A.3) for $j=2$ , together with $\langle GAGA\rangle \prec 1$ from Proposition 4.4, to get the bound $\langle \mathcal {G}_4 \rangle \prec N$ and then use this bound as an input to obtain $\langle \mathcal {G}_3 \rangle \prec N^{1/2}$ using (A.4). Then proceeding exactly in the same way, we prove that if $\langle \mathcal {G}_l \rangle \prec N^{l/2-1}$ holds for any $l\le k$ , then the same bound holds for chains of length $l=k+1$ and $k+2$ as well. Similarly, in the isotropic chains, we prove $\langle \mathbf {x}, \mathcal {G}_k\mathbf {y} \rangle \prec N^{(k-1)/2}$ ; this concludes Step (i).

Step (ii): Given the bounds $\langle \mathcal {G}_k \rangle \prec N^{k/2-1}$ , $ \langle \mathbf {x}, \mathcal {G}_k\mathbf {y} \rangle \prec N^{(k-1)/2}$ , the estimate in (A.2) for $k=3,4$ immediately follows by writing the equation for $\mathcal {G}_k$ , performing a cumulant expansion and using the corresponding bounds on $M(z_1, ... , z_k)$ from Lemma 4.3. This was done in [Reference Cipolloni, Erdős and Schröder19, Proof of Proposition 3.5]; hence, we omit the details.

Step (iii): The proof of (A.2) for $k\ge 5$ proceeds by induction. We first show that it holds for $k=5,6$ , and then we prove that if it holds for $k-2$ and $k-1$ , then it holds for k and $k+1$ as well.

By Step (ii), it follows that (A.2) holds for $k=3,4$ , and for $k=2$ , we have $\langle GAGA\rangle \prec 1$ . Then by (A.3), we immediately conclude that the same bound is true for $k=6$ , which together with (A.4) also implies the desired bound for $k=5$ . The key point is that (A.3) splits a longer k-chain (k even) into a product of shorter chains of length $k_1$ , $k_2$ with $k_1+k_2=k$ . As long as $k\ge 5$ , at least one of the shorter chains has already length at least three, so we gain the factor $(N\eta )^{-1/2}$ . In fact, chains of length $k=2$ , from which we do not gain any extra factor (recall $| \langle GAGA\rangle | \prec 1$ ), appear only once, namely when we apply (A.3) for $k=6$ . But in this case, the other factor is a chain of length four with a gain of a $(N\eta )^{-1/2}$ factor. Similarly, (A.4) splits the long k chain (k odd) into the square root of two chains of length $k-1$ and $k+1$ , and for $k\ge 5$ , we have the $(N\eta )^{-1/2}$ factor from both. The induction step then readily follows by using again (A.3)–(A.4) as explained above; this concludes the proof. In fact, in most steps of the induction, we gain more than one factor $(N\eta )^{-1/2}$ ; this would allow us to improve the bound (A.2), but for the purpose of the present paper, the suboptimal estimate (A.2) is sufficient.

We now turn to the second extension of (4.16) allowing for arbitrary spectral parameters (i.e., not necessarily in the bulk, but separated by a safe distance $\nu> 0$ ).

Lemma A.2. Fix $\epsilon , \nu> 0$ and let the deformation $D \in \mathbf {C}^{N \times N}$ satisfy Assumption 2.8. Let $z_1, z_2 \in \mathbf {C} \setminus \mathbf {R}$ be spectral parameters with $\Delta :=|\Re z_1 - \Re z_2| + |\Im z_1| + |\Im z_2| \ge \nu> 0$ and $B_1, B_2 \in \mathbf {C}^{N \times N}$ bounded deterministic matrices. Then, uniformly in $\eta := \min \big ( |\Im z_1|, |\Im z_2|\big ) \ge N^{-1+\epsilon }$ , it holds that

(A.5) $$ \begin{align} \big| \big\langle G(z_1) B_1 G(z_2) B_2\big\rangle \big| \prec 1\,. \end{align} $$

Proof. The proof is very similar to that of Proposition 4.4, relying on a system of master inequalities (Proposition 6.3) complemented by the reduction inequalities (Lemma 6.4); we just comment on the minor differences.

Recall that the naive size, in an averaged sense, of a chain

(A.6) $$ \begin{align} G_1B_1G_2B_2\ldots G_{k-1} B_{k-1} G_k \end{align} $$

with k resolvents and arbitrary deterministic matrices in between is of order $\eta ^{-k+1}$ ; generically, this is the size of the corresponding deterministic term in the usual multi-resolvent local law (see [Reference Cipolloni, Erdős and Schröder19, Theorem 2.5] with $a = 0$ for the case of Wigner matrices)

(A.7) $$ \begin{align} \begin{aligned} |\langle G_1 B_1 \cdots G_{k} B_k - M(z_1, B_1, ... , z_{k}) B_k \rangle | \prec \frac{1}{N \eta^k} \\ \left\vert \big( G_1 B_1 \cdots B_{k} G_{k+1} - M(z_1, B_1, ... , B_{k}, z_{k+1}) \big)_{\boldsymbol{x} \boldsymbol{y}} \right\vert \prec \frac{1}{\sqrt{N \eta} \, \eta^k} \end{aligned} \end{align} $$

with the customary short-hand notations

$$ \begin{align*} G_i:= G(z_i)\,, \quad \eta := \min_i |\Im z_i|\,, \quad \boldsymbol{z}_k :=(z_1, ... , z_{k})\,, \quad \boldsymbol{B}_k:=(B_1, ... , B_{k})\,. \end{align*} $$

In the following, we will consider every deterministic matrix $B_j$ together with its neighbouring resolvents, $G_jB_j G_{j+1}$ , which we will call the unit of $B_j$ . Two units are called distinct if they do not share a common resolvent, and we will count the number of such distinct units.Footnote 12 The main mechanism for the improvement over (A.7) in Proposition 4.4 for regular matrices was that for every distinct unit $G_j B_j G_{j+1}$ in the initial resolvent chain with a regular $B_j$ , the naive size of M gets reduced by an $\eta $ -factor, yielding the bound $\eta ^{-\lfloor k/2\rfloor +1 }$ in (4.14) when all matrices are regular.Footnote 13 In the most relevant regime of small $\eta \sim N^{-1+\epsilon }$ , this improvement in M is (almost) matched by the corresponding improvement in the error term; see $N^{k/2-1}$ in (4.15a) (except that for odd k, the error is bigger by an extra $\eta ^{-1/2}\sim N^{1/2}$ ).

The key point is that if the spectral parameters $z_j$ and $z_{j+1}$ are ‘far away’ in the sense that

(A.8) $$ \begin{align} \Delta_{j} := |\Re z_j- \Re z_{j+1}| + |\Im z_j| + |\Im z_{j+1}| \ge \nu> 0\,, \end{align} $$

then any matrix $B_j$ in the chain $\ldots G_j B_j G_{j+1}\ldots $ behaves as if it were regular. The reason is that the corresponding stability operator $\mathcal {B}_{j,j+1}$ from (4.6) (explicitly given in (A.14) and (A.15) below) has no singular direction and its inverse is bounded; that is,

$$ \begin{align*} \Vert \mathcal{B}_{j,j+1}^{-1}[R] \Vert \lesssim \Vert R \Vert \quad \text{for all} \quad j \in [k]\,, \quad R \in \mathbf{C}^{N \times N}\,. \end{align*} $$

For example, using the definition (4.5), we have

(A.9) $$ \begin{align} \| M(z_1, B, z_2)\|\lesssim 1 \end{align} $$

whenever $\Delta _{12}\ge \nu $ ; hence, $\mathcal {B}_{12}^{-1}$ is bounded. Therefore, when mimicking the proof of Proposition 4.4, instead of counting regular matrices with distinct units, we need to count the distinct units within the chain (A.6) for which the corresponding spectral parameters are far away; their overall effects are the same, modulo a minor difference: now the errors for odd k do not get increased by $\eta ^{-1/2}$ when compared to the M-bound (see later).

To be more precise, in our new setup we introduce the modifiedFootnote 14 normalised differences

(A.10) $$ \begin{align} \!\!\kern-1pt\!\widetilde{\Psi}_k^{\mathrm{av}}(\boldsymbol{z}_k, \boldsymbol{B}_k) &:= N \eta^{\lfloor k/2 \rfloor} |\langle G_1 B_1 \cdots G_{k} B_k - M(z_1, B_1, ... , z_{k}) B_k \rangle |\,,\quad\qquad \end{align} $$
(A.11) $$ \begin{align} \widetilde{\Psi}_k^{\mathrm{iso}}(\boldsymbol{z}_{k+1}, \boldsymbol{B}_{k}, \boldsymbol{x}, \boldsymbol{y}) &:= \sqrt{N \eta} \, \eta^{\lfloor k/2 \rfloor} \left\vert \big( G_1 B_1 \cdots B_{k} G_{k+1} - M(z_1, B_1, ... , B_{k}, z_{k+1}) \big)_{\boldsymbol{x} \boldsymbol{y}} \right\vert \end{align} $$

for $k \in \mathbf {N}$ as a new set of basic control quantities (cf. (6.1) and (6.2)). The deterministic counterparts M used in (A.10) and (A.11) are again given recursively in Definition 4.1. Contrary to (6.1) and (6.2), the deterministic matrices $\Vert B_j \Vert \le 1$ , $j \in [k]$ are not assumed to be regular. This ‘lack of regularity’ is compensated by the requirement that consecutive spectral parameters $z_j, z_{j+1}$ of the unit of $B_j$ satisfy (A.8). Just as in Definition 4.2, in case of (A.10), the indices in (A.8) are understood cyclically modulo k. Chains satisfying (A.8) for all $j \in [k]$ are called good. Hence, in a good chain, one can potentially gain a factor $\eta $ from every unit $G_j B_j G_{j+1}$ . Therefore, analogous to the regularity requirement for all deterministic matrices in Definition 6.1, the normalised differences $\widetilde {\Psi }_k^{\mathrm {av/iso}}$ in (A.10) and (A.11) will only be used for good chains.

As already indicated above, the analogy between our new setup and the setup of Proposition 4.4 is not perfect due to the following reason: for $k=1$ , the error bounds in (A.7) improve by $\sqrt {\eta }$ when $B_1$ is a regular matrix, but for $\Delta _1 \ge \nu> 0$ , the improvement is by a full power of $\eta $ .Footnote 15 This discrepancy causes slightly different $\eta $ -powers for odd k in all estimates (cf. (A.10) and (A.11)).

We now claim that for good chains, the requirement (A.8) for all $j\in [k]$ reduces the naive sizes of the errors in the usual multi-resolvent local laws (A.7) at least by a factor $\eta ^{\lceil k/2 \rceil }$ for $k=1,2$ . Previously, in the proof of Proposition 4.4, these sizes got reduced by a factor $\sqrt {\eta }$ for every matrix $B_j$ which was regular in the sense of Definition 4.2. Now, compared to this regularity gain, the main effect for our new, say, $\nu $ -gain for good chains is that for every $j \in [k]$ , the inverse of the stability operator (4.6) (explicitly given in (A.14) and (A.15) below) is bounded; that is,

(A.12) $$ \begin{align} \Vert \mathcal{B}_{j,j+1}^{-1}[R] \Vert \lesssim \Vert R \Vert \quad \text{for all} \quad j \in [k]\,, \quad R \in \mathbf{C}^{N \times N}. \end{align} $$

Armed with (A.12), completely analogous to Proposition 4.4, one then starts a proof of the master inequalities (similar to those in Proposition 6.3). First, one establishes suitable underlined lemmas (cf. Lemma 6.5 and [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemmas 5.2, 5.6, 5.8, and 5.9]), where now no splitting of observables into singular and regular parts (see, for example, (6.21) and [Reference Cipolloni, Erdős, Henheik and Schröder21, Equation (5.35)]) is necessary, since the bounded matrices $B_j$ are arbitrary. Afterwards, the proof proceeds by cumulant expansion (see (6.16)), where resolvent chains of length k are estimated by resolvent chains of length up to $2k$ . This potentially infinite hierarchy is truncated by suitable reduction inequalities, as in Lemma 6.4. Along this procedure, we also create non-good chains, but a direct inspectionFootnote 16 shows that there are always sufficiently many good chains left that provide the necessary improvements, exactly as in the proof of Proposition 4.4.

Just to indicate this mechanism, consider, for example, the Gaussian term appearing in the cumulant expansion of (A.10) for $k=2$ , analogous to (6.16). In this case, we encounter the following term with five resolvents that we immediately estimate in terms of chains with four resolvents:

(A.13) $$ \begin{align} \frac{\vert \langle G_2 B_2 G_1 B_1 G_2 G_1 B_1 G_2 B_2 \rangle\vert }{N^2} \prec \frac{\vert \langle G_2 B_2 G_1 B_1 G_2 B_1 G_2 B_2 \rangle\vert + \vert \langle G_2 B_2 G_1 B_1 G_1 B_1 G_2 B_2 \rangle\vert }{N^2}\,. \end{align} $$

Here we used that $\Delta =|\Re z_1 - \Re z_2| + |\Im z_1| + |\Im z_2| \ge \nu> 0$ to reduce $G_2 G_1$ to a single G term. Strictly speaking, the estimate (A.13) directly follows from the resolvent identity $G_2G_1= (G_1-G_2)/(z_1-z_2)$ only when $|z_1-z_2| \sim |\Re z_1 - \Re z_2| + |\Im z_1-\Im z_2|\gtrsim \nu $ ; this latter condition follows from $\Delta \ge \nu $ only if $\Im z_1\cdot \Im z_2 <0$ . In the remaining case, when $z_1\approx z_2$ , but both with a large imaginary part (since $\Delta \ge \nu $ ), we can use an appropriate contour integral representation

$$ \begin{align*}G(z_2)G(z_1) =\frac{1}{2\pi \mathrm{i}} \int_\Gamma \frac{G(\zeta)}{(\zeta-z_1)(\zeta-z_2)} \mathrm{d}\zeta \end{align*} $$

similar to (6.17) with a contour well separated from $z_1, z_2$ . Hence, we obtain a four-resolvent chain $\langle G_2 B_2 G_1 B_1 G(\zeta ) B_1 G_2 B_2 \rangle $ on the rhs. of (A.13), where the spectral parameter $\zeta $ of the new $G(\zeta )$ resolvent is ‘far away’ from the other spectral parameters, and it can be treated as $\langle G_2 B_2 G_1 B_1 G_j B_1 G_2 B_2 \rangle $ , $j=1,2$ . In fact, in our concrete application of Lemma A.2, we always know that not only $\Delta \ge \nu $ , but already $|\Re z_1 - \Re z_2|\ge \nu $ ; hence, the argument with the resolvent identity is always sufficient.
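The resolvent identity invoked above can be confirmed numerically on a small example (a sketch with the convention $G(w) = (H-w)^{-1}$; the sign depends on the convention chosen for G):

```python
import numpy as np

# Check of G(z2) G(z1) = (G(z1) - G(z2)) / (z1 - z2) for G(w) = (H - w)^{-1}:
# G1 - G2 = G1((H - z2) - (H - z1))G2 = (z1 - z2) G1 G2, and G1, G2 commute.
rng = np.random.default_rng(2)
N = 30
X = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
H = (X + X.conj().T) / 2
z1, z2 = -0.4 + 0.3j, 0.6 - 0.5j              # well-separated spectral parameters

def G(w):
    return np.linalg.inv(H - w * np.eye(N))

err = np.linalg.norm(G(z2) @ G(z1) - (G(z1) - G(z2)) / (z1 - z2))
print(err)
```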

Note that the two chains on the rhs. of (A.13) cannot be directly cast in the form (A.10) since not every unit has well separated spectral parameters (e.g., we have $G_2B_1G_2$ ). Hence, these chains are not good. However, after application of a reduction inequality (see (A.3)), we find that

$$ \begin{align*} \vert \langle G_2 B_2 G_1 B_1 G_2 B_1 G_2 B_2 \rangle\vert \prec N \big( \langle |G_2| B_2 |G_1| B_2^* \rangle \langle |G_1|B_1 |G_2|B_1^* \rangle \langle |G_2| B_1 |G_2| B_1^*\rangle \langle |G_2| B_2 |G_2| B_2^*\rangle\big)^{1/2} \end{align*} $$

and analogously for the second summand in (A.13). Estimating the two shorter non-good chains involving only $G_2$ by $1/\eta $ via a trivial Schwarz inequality, this yields that

$$ \begin{align*} \frac{\vert \langle G_2 B_2 G_1 B_1 G_2 G_1 B_1 G_2 B_2 \rangle\vert }{N^2} \prec \frac{ \big( \langle |G_2| B_2 |G_1| B_2^* \rangle \langle |G_1|B_1 |G_2|B_1^* \rangle \big)^{1/2}}{N\eta}\,, \end{align*} $$

where the remaining shorter chains are good and can be estimated in terms of $\widetilde {\Psi }_2^{\mathrm {av}}$ . A similar mechanism works for any other term. This completes the discussion of the discrepancies between the current setup and Proposition 4.4.

Notice that this argument always assumes that we have a single resolvent local law and that M’s are bounded. At potential cusps in the scDos $\rho $ we do not have a single resolvent local law (see the discussion below (4.1)) and the estimates on $\Vert M(z)\Vert $ for $\Re z$ close to edges (and cusps) of $\rho $ may deteriorate for general deformation D. However, these two phenomena are simply excluded by Assumption 2.8 on the deformation D (see also Remark 2.12). In particular, this assumption allows us to show exactly the same estimates on $M(z_1, ... , z_k)$ as given in Lemma 4.3, which serve as an input in the proof sketched above.

To conclude, similar to Proposition 4.4, our method again shows that

$$ \begin{align*} \big| \big\langle \big(G(z_1) B_1 G(z_2) - M(z_1, B_1, z_2)\big)B_2\big\rangle \big| \prec \frac{1}{(N \eta)^{1/2}}, \end{align*} $$

which, together with the corresponding bound (A.9), immediately yields the desired bound and completes the proof of Lemma A.2.

A.2 Proof of Lemma 4.3

The proof is completely analogous to the proof of Lemma 4.3 from [Reference Cipolloni, Erdős, Henheik and Schröder21]; hence, we only show how the three main technical aspects of the latter should be adjusted to our setup of deformed Wigner matrices. In general, the setup of [Reference Cipolloni, Erdős, Henheik and Schröder21] is more complicated due to the chiral symmetry, which involves summations over signs $\sigma =\pm $ . As a rule of thumb, we can obtain the necessary M-formulas for our current case just by mechanically using the corresponding formulas in [Reference Cipolloni, Erdős, Henheik and Schröder21] and dropping the $\sigma =-1$ terms.

Recursive Relations: The principal idea is to derive several different recursive relations for $M(z_1, ... , z_k)$ (which itself is defined by one of those in Definition 4.1) by a so-called meta argument [Reference Cook, Hachem, Najim and Renfrew22, Reference Cipolloni, Erdős, Henheik and Schröder21]. These alternative recursions can then be employed to prove Lemma 4.3 iteratively in the number of spectral parameters. These recursive relations are identical to those in Lemma D.1 of the arXiv: 2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21] when dropping the $\sigma = -1$ terms in Equations (D.1) and (D.2) therein and writing the $N \times N$ identity instead of $E_+$ .

Stability Operator: The inverse of the stability operator (4.6) can be expressed in the following explicit form:

(A.14) $$ \begin{align} \mathcal{B}_{12}^{-1}[R]=R+\frac{\langle R \rangle}{1-\langle M_1M_2\rangle}M_1 M_2 = R + \frac{1}{\beta_{12}} \langle R \rangle M_1 M_2 \,, \end{align} $$

where $\beta _{12} := 1 - \langle M_1 M_2 \rangle $ is the only nontrivial eigenvalue of $\mathcal {B}_{12}$ . Completely analogous to [Reference Cipolloni, Erdős, Henheik and Schröder21, Lemma B.2 (b)], it holds that

(A.15) $$ \begin{align} |\beta_{12}| \gtrsim\big( |\Re z_1 - \Re z_2| + |\Im z_1| + |\Im z_2|\big) \wedge 1\,, \end{align} $$

which, in combination with (A.14), in particular implies (4.13) for $k=1$ and (4.14) for $k=2$ .
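For completeness, here is the rank-one computation behind (A.14), assuming (consistently with (A.14), though not spelled out here) that the stability operator acts as $\mathcal{B}_{12}[R] = R - M_1\langle R\rangle M_2$: taking the normalised trace of the candidate inverse gives

$$ \begin{align*} \big\langle \mathcal{B}_{12}^{-1}[R]\big\rangle = \langle R\rangle + \frac{\langle R\rangle \langle M_1M_2\rangle}{\beta_{12}} = \frac{\langle R\rangle}{\beta_{12}}\,, \qquad \mathcal{B}_{12}\big[\mathcal{B}_{12}^{-1}[R]\big] = \mathcal{B}_{12}^{-1}[R] - \frac{\langle R\rangle}{\beta_{12}}\,M_1M_2 = R\,. \end{align*} $$

Moreover, $\mathcal{B}_{12}[M_1M_2] = \big(1-\langle M_1M_2\rangle\big)M_1M_2 = \beta_{12}\, M_1M_2$, confirming that $\beta_{12}$ is the only nontrivial eigenvalue.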

Longer Chains: To prove (4.13) for $k=3$ , similar to [Reference Cipolloni, Erdős, Henheik and Schröder21], we first verify (4.13) for $k=2$ in the case when exactly one observable is regular. For this purpose, we again use a recursive relation of the form of Equation (D.12) of the arXiv: 2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21]:

$$ \begin{align*} M(z_1,A_1,z_2,A_2,z_3) =M(z_1,\mathcal{X}_{12}[A_1]M_2A_2,z_3)+M(z_1,\mathcal{X}_{12}[A_1]M_2,z_3)\langle M(z_2,A_2,z_3)\rangle\,, \end{align*} $$

where we denoted the linear operator $\mathcal {X}_{mn}$ as

(A.16) $$ \begin{align} \mathcal{X}_{mn}[R] := \big( 1 - \langle M_m \cdot M_n\rangle \big)^{-1}[R] \qquad \text{for} \quad R \in \mathbf{C}^{N \times N}\,. \end{align} $$

Now, similar to the arguments around Equation (D.13) of the arXiv: 2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21], we observe a balancing cancellation in the last term, which comes from the continuity, with respect to one of the spectral parameters, of the regular part of a deterministic matrix when the other spectral parameter is fixed (see (4.12)).

Acknowledgements

G.C. and L.E. gratefully acknowledge many discussions with Dominik Schröder at the preliminary stage of this project, especially his essential contribution to identifying the correct generalisation of traceless observables to the deformed Wigner ensembles.

Competing interest

The authors have no competing interest to declare.

Funding statement

L.E. and J.H. acknowledge support by ERC Advanced Grant ‘RMTBeyond’ No. 101020331.

Ethical standards

The research meets all ethical guidelines, including adherence to the legal requirements of the study country.

Author contributions

All authors contributed equally.

Footnotes

1 ETH for Wigner matrices was first conjectured by Deutsch [Reference Deutsch23]. Quantum ergodicity has a long history in the context of the quantisations of chaotic classical dynamical systems starting from the fundamental theorem by Šnirel’man [Reference Šnirel’man36]. For more background and related literature, see the Introduction of [Reference Cipolloni, Erdős and Schröder16].

2 The space of matrices is equipped with the usual Hilbert-Schmidt scalar product.

3 This means up to an $N^\epsilon $ factor with arbitrary small $\epsilon $ .

4 Given a sequence of N-dependent random variables $X_N$ , we say that $X_N$ converges to $X_\infty $ in the sense of moments if for any $k\in \mathbf {N}$ , it holds that $\mathbf {E} |X_N|^k=\mathbf {E}|X_\infty |^k+\mathcal {O}(N^{-c(k)})$ for some small, possibly k-dependent constant $c(k)>0$ .

5 The MDE for very general mean-field random matrices has been introduced in [Reference Ajanki, Erdős and Krüger2] and further analysed in [Reference Alt, Erdős and Krüger3]. The properties we use here have been summarised in Appendix B of the arXiv: 2301.03549 version of [Reference Cipolloni, Erdős, Henheik and Schröder21].

6 The scDos has been thoroughly analysed in increasing generality in [Reference Ajanki, Erdős and Krüger1, Reference Ajanki, Erdős and Krüger2, Reference Alt, Erdős and Krüger3]. It is supported on finitely many finite intervals. Roughly speaking, there are three regimes: the bulk, where $\rho $ is well separated away from 0; the edge, where $\rho $ vanishes as a square root at the edges of each supporting interval that are well separated; and the cusp, where two supporting intervals (almost) meet and $\rho $ behaves (almost) as a cubic root. Correspondingly, $\rho $ is locally real analytic, Hölder- $1/2$ or Hölder- $1/3$ continuous, respectively. Near the singularities, it has an approximately universal shape. No other singularity type can occur, and for typical deformation D, there is no cusp regime.

7 See the first paragraph of Section 5 for an explanation of why the variance takes this specific form.

8 In this context, Hölder- $1/2$ regularity means that $|d_i -d_j|\le C_0 (|i-j|/N)^{1/2}$ for some universal constant $C_0> 0$ .

9 Note that Theorem 2.7 alone would prove (3.5) only with $\epsilon =0$ , but the convergence in the sense of moments from Theorem 2.9 gains a factor $N^{-\epsilon }$ with a positive $\epsilon $ .

10 We will use the notational convention that the letter B denotes arbitrary (generic) matrices, while A is reserved for regular matrices in the sense of Definition 4.2 below.

11 Note that, for our concrete setting (6.17), one closed interval (i.e., half of Figure 1) would be sufficient. However, we formulated it more generally here in order to ease the relevant modifications for the (omitted) proofs of (6.8b)–(6.8d).

12 However, in the averaged case, one of the k resolvents can be ‘reused’ in this counting.

13 This improvement was also termed the $\sqrt {\eta }$ -rule, asserting that every regular matrix improves the M-bound and the error in the local law by a factor $\sqrt {\eta }$ . This formulation is somewhat imprecise; the M-bound always involves integer $1/\eta $ -powers. The correct counting is that each distinct unit of regular matrices yields a factor $\eta $ . However, the $\sqrt {\eta }$ -rule does apply to the error term.

14 Notice that for odd k, the $\eta $ -power in the prefactor is slightly different from those in (6.1) and (6.2).

15 For the averaged case, this improvement is artificial, since the requirement $\Delta _1 \ge \nu $ forces $\eta \gtrsim 1$ .

16 We refrain from presenting the case-by-case checking for the new setup, but we point out that this is doable since Lemma A.2, as well as Proposition 4.4, concern chains of length at most $k\le 2$ . Extending these local laws to general k is possible, but it would require a more systematic power-counting of good chains.

References

Ajanki, O. H., Erdős, L. and Krüger, T., Quadratic Vector Equations on Complex Upper Half-plane, no. 1261 (American Mathematical Society, 2019).
Ajanki, O. H., Erdős, L. and Krüger, T., ‘Stability of the matrix Dyson equation and random matrices with correlations’, Probab. Theory Related Fields 173 (2019), 293–373.
Alt, J., Erdős, L. and Krüger, T., ‘The Dyson equation with linear self-energy: Spectral bands, edges and cusps’, Doc. Math. 25 (2020), 1421–1539.
Alt, J., Erdős, L., Krüger, T. and Schröder, D., ‘Correlated random matrices: Band rigidity and edge universality’, Ann. Probab. 48 (2020), 963–1001.
Bao, Z., Erdős, L. and Schnelli, K., ‘Equipartition principle for Wigner matrices’, Forum Math. Sigma 9 (2021), E44.
Bialas, P., Spiechowicz, J. and Łuczka, J., ‘Partition of energy for a dissipative quantum oscillator’, Scientific Reports 8 (2018), 16080.
Bialas, P., Spiechowicz, J. and Łuczka, J., ‘Quantum analogue of energy equipartition theorem’, J. Phys. A 52 (2019), 15LT01.
Benigni, L., ‘Eigenvectors distribution and quantum unique ergodicity for deformed Wigner matrices’, Ann. Inst. Henri Poincaré Probab. Stat. 56 (2020), 2822–2867.
Benigni, L. and Cipolloni, G., ‘Fluctuations of eigenvector overlaps and the Berry conjecture for Wigner matrices’, arXiv:2212.10694, 2022.
Benigni, L. and Lopatto, P., ‘Fluctuations in local quantum unique ergodicity for generalized Wigner matrices’, Comm. Math. Phys. 391 (2022), 401–454.
Benigni, L. and Lopatto, P., ‘Optimal delocalization for generalized Wigner matrices’, Adv. Math. 396 (2022), 108109.
Biane, P., ‘On the free convolution with a semi-circular distribution’, Indiana Univ. Math. J. 46 (1997), 705–718.
Bloemendal, A., Erdős, L., Knowles, A., Yau, H.-T. and Yin, J., ‘Isotropic local laws for sample covariance and generalized Wigner matrices’, Electron. J. Probab. 19 (2014), 1–53.
Bourgade, P. and Yau, H.-T., ‘The eigenvector moment flow and local quantum unique ergodicity’, Comm. Math. Phys. 350 (2017), 231–278.
Bourgade, P., Yau, H.-T. and Yin, J., ‘Random band matrices in the delocalized phase I: Quantum unique ergodicity and universality’, Comm. Pure Appl. Math. 73 (2020), 1526–1596.
Cipolloni, G., Erdős, L. and Schröder, D., ‘Eigenstate thermalization hypothesis for Wigner matrices’, Comm. Math. Phys. 388 (2021), 1005–1048.
Cipolloni, G., Erdős, L. and Schröder, D., ‘Thermalisation for Wigner matrices’, J. Funct. Anal. 282 (2022), 109394.
Cipolloni, G., Erdős, L. and Schröder, D., ‘Normal fluctuation in quantum ergodicity for Wigner matrices’, Ann. Probab. 50 (2022), 984–1012.
Cipolloni, G., Erdős, L. and Schröder, D., ‘Optimal multi-resolvent local laws for Wigner matrices’, Electron. J. Probab. 27 (2022), 1–38.
Cipolloni, G., Erdős, L. and Schröder, D., ‘Rank-uniform local law for Wigner matrices’, Forum Math. Sigma 10 (2022), E96.
Cipolloni, G., Erdős, L., Henheik, J. and Schröder, D., ‘Optimal lower bound on eigenvector overlaps for non-Hermitian random matrices’, arXiv:2301.03549v4, 2023.
Cook, N., Hachem, W., Najim, J. and Renfrew, D., ‘Non-Hermitian random matrices with a variance profile (I): Deterministic equivalents and limiting ESDs’, Electron. J. Probab. 23(110) (2018), 1–61.
Deutsch, J. M., ‘Quantum statistical mechanics in a closed system’, Phys. Rev. A 43 (1991), 2046–2049.
Erdős, L., Schlein, B. and Yau, H.-T., ‘Semicircle law on short scales and delocalization of eigenvectors for Wigner random matrices’, Ann. Probab. 37 (2009), 815–852.
Erdős, L., Knowles, A., Yau, H.-T. and Yin, J., ‘The local semicircle law for a general class of random matrices’, Electron. J. Probab. 18(59) (2013), 1–58.
Erdős, L. and Yau, H.-T., ‘A dynamical approach to random matrix theory’, Courant Lect. Notes Math. 28 (2017).
Erdős, L., Krüger, T. and Schröder, D., ‘Random matrices with slow correlation decay’, Forum Math. Sigma 7 (2019), E8.
Erdős, L., Krüger, T. and Schröder, D., ‘Cusp universality for random matrices I: Local law and the complex Hermitian case’, Comm. Math. Phys. 378 (2020), 1203–1278.
Knowles, A. and Yin, J., ‘Eigenvector distribution of Wigner matrices’, Probab. Theory Related Fields 155 (2013), 543–582.
Knowles, A. and Yin, J., ‘The isotropic semicircle law and deformation of Wigner matrices’, Comm. Pure Appl. Math. 66 (2013), 1663–1749.
Landon, B. and Yau, H.-T., ‘Convergence of local statistics of Dyson Brownian motion’, Comm. Math. Phys. 355 (2017), 949–1000.
Lee, J. O. and Schnelli, K., ‘Local deformed semicircle law and complete delocalization for Wigner matrices with random potential’, J. Math. Phys. 54 (2013), 103504.
Łuczka, J., ‘Quantum counterpart of classical equipartition of energy’, J. Stat. Phys. 179 (2020), 839–845.
Marcinek, J., ‘High dimensional normality of noisy eigenvectors’, Ph.D. Thesis, Harvard University, 2020.
Marcinek, J. and Yau, H.-T., ‘High dimensional normality of noisy eigenvectors’, Comm. Math. Phys. 395 (2022), 1007–1096.
Šnirel’man, A. I., ‘Ergodic properties of eigenfunctions’, Uspekhi Mat. Nauk 29 (1974), 181–182.
Spiechowicz, J., Bialas, P. and Łuczka, J., ‘Quantum partition of energy for a free Brownian particle: Impact of dissipation’, Phys. Rev. A 98 (2018), 052107.
Tao, T. and Vu, V., ‘Random matrices: Universal properties of eigenvectors’, Random Matrices Theory Appl. 1 (2012), 1150001.

Figure 1 The contour $\Gamma $ is split into three parts (see (6.19)). Depicted is the situation where the bulk $\mathbf {B}_{\kappa }$ consists of two components. The boundary of the associated domain $\mathbf {D}^{(\epsilon , \kappa )}$ is indicated by the two U-shaped dashed lines. Modified version of [21, Figure 4].