
Asymptotic expansion of the expected Minkowski functional for isotropic central limit random fields

Published online by Cambridge University Press:  14 July 2023

Satoshi Kuriki*
Affiliation:
The Institute of Statistical Mathematics
Takahiko Matsubara*
Affiliation:
High Energy Accelerator Research Organization (KEK)
*Postal address: The Institute of Statistical Mathematics, 10-3 Midoricho, Tachikawa, Tokyo 190-8562, Japan. Email address: kuriki@ism.ac.jp
**Postal address: Institute of Particle and Nuclear Studies, High Energy Accelerator Research Organization (KEK), Oho 1-1, Tsukuba, Ibaraki 305-0801, Japan. Email address: tmats@post.kek.jp

Abstract

The Minkowski functionals, including the Euler characteristic statistics, are standard tools for morphological analysis in cosmology. Motivated by cosmic research, we examine the Minkowski functional of the excursion set for an isotropic central limit random field, whose k-point correlation functions (kth-order cumulants) have the same structure as that assumed in cosmic research. Using 3- and 4-point correlation functions, we derive the asymptotic expansions of the Euler characteristic density, which is the building block of the Minkowski functional. The resulting formula reveals the types of non-Gaussianity that cannot be captured by the Minkowski functionals. As an example, we consider an isotropic chi-squared random field and confirm that the asymptotic expansion accurately approximates the true Euler characteristic density.

Original Article

© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

1.1. Minkowski functional in cosmology

The Minkowski functional is a fundamental concept in integral and stochastic geometry. It is a sequence of geometric quantities defined for bounded sets in Euclidean space. In the 2-dimensional case, the Minkowski functional of the set M is a triplet consisting of the area $\mathrm{Vol}_2(M)$ , the half-length of the boundary $\frac{1}{2}\mathrm{Vol}_1(\partial M)$ , and the Euler characteristic $\chi(M)$ times $\pi$ . The Minkowski functional measures the morphological features of M in a different way from conventional moment-type statistics and has been used in various scientific fields.
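For concreteness, the triplet can be written in closed form for simple sets. The following minimal Python sketch (an illustration added here, not from the paper) evaluates it for a disk of radius r:

```python
# Minkowski functional triplet of a disk of radius r in R^2:
# (area, half boundary length, Euler characteristic times pi).
import math

def minkowski_disk(r: float):
    area = math.pi * r**2         # Vol_2(M)
    half_boundary = math.pi * r   # (1/2) Vol_1(dM); the circumference is 2*pi*r
    euler_term = math.pi * 1.0    # pi * chi(M), with chi(disk) = 1
    return area, half_boundary, euler_term

print(minkowski_disk(2.0))        # (12.566..., 6.283..., 3.141...)
```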

In cosmology, the Minkowski functional was introduced around the 1990s. It was first used to analyze the large-scale structure of the universe, and later the cosmic microwave background (CMB). In particular, the Minkowski functional for the excursion set of the smoothed CMB map was first analyzed by [Reference Schmalzing and Buchert21] (cf. [Reference Schmalzing and Górski22]). The CMB radiation provides rich information on the early stages of the universe. Its signal is recognized as an isotropic and nearly Gaussian random field; however, hundreds of candidate inflationary models exist, predicting various types of non-Gaussianity. The Minkowski functional is used to select among such candidate models.

More precisely, let X(t), $t\in T\subset\mathbb{R}^n$ , be such a random field. The sup-level set with a threshold x,

\[ T_x = \{t\in T \,|\, X(t)\ge x\} = X^{-1}([x,\infty)),\]

is referred to as the excursion set. Subsequently, the Minkowski functional curves $\mathcal{M}_j(T_x)$ , $0\le j\le n$ , can be calculated as functions of x. The departure of the sample Minkowski functional from the expected Minkowski functional under the assumed model is used as a measure for the selection of models.
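As an illustration (ours; the grid size, smoothing scale, and threshold are arbitrary choices), the following Python sketch forms the excursion set of a simulated smoothed field on a pixel grid and estimates its area fraction, the simplest Minkowski functional per unit volume:

```python
# Hedged sketch: excursion set T_x of a smoothed, unit-variance field on a grid.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
field = gaussian_filter(rng.standard_normal((512, 512)), sigma=8)
field /= field.std()              # rescale to unit variance, as assumed for X(t)

x = 1.0                           # threshold
T_x = field >= x                  # excursion set as a binary pixel mask
print(f"area fraction of T_x at x={x}: {T_x.mean():.4f}")   # ~ P(X >= x)
```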

When X(t), $t\in T$ , is Gaussian, the Minkowski functional density (i.e., the expected Minkowski functional per unit volume) is explicitly known (see (2.11)). However, the expected Minkowski functional for a non-Gaussian random field is unknown (except for the Gaussian-related random fields [Reference Adler and Taylor3, Section 5.2]). In cosmology, a weak non-Gaussianity is expressed through the k-point correlation function, or the kth-order cumulant, for $k\ge 2$ as

(1.1) \begin{equation} \mathrm{cum}(X(t_1),\ldots,X(t_k)) = O\bigl(\nu^{k-2}\bigr), \quad \nu\ll 1,\end{equation}

where $\nu$ is a non-Gaussianity parameter ([Reference Matsubara13]). This asymptotic structure is the same as that of the central limit random field introduced by [Reference Chamandy, Worsley, Taylor and Gosselin5]. That is, for independent and identically distributed (i.i.d.) random fields $Z_{(i)}(t)$ , $t\in T$ ( $i\ge 1$ ), with zero mean and unit variance, the central limit random field defined by

(1.2) \begin{equation} X(t) = X_N(t) = \frac{1}{\sqrt{N}}\sum_{i=1}^N Z_{(i)}(t), \ \ t\in T\subset\mathbb{R}^n,\end{equation}

has a cumulant of the form (1.1) with $\nu=1/\sqrt{N}$ . This is a typical weakly non-Gaussian random field when N is large. In this study, we discuss the asymptotic expansion of the Minkowski functionals in the framework of the central limit random field.
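A minimal simulation sketch of (1.2) (ours; for the summand we borrow, purely for illustration, the standardized chi-squared-type field $Z=(Y^2-1)/\sqrt{2}$ of Section 4):

```python
# Hedged sketch: the central limit random field X_N = N^{-1/2} sum Z_(i).
import numpy as np
from scipy.ndimage import gaussian_filter

def sample_Z(rng, shape=(256, 256), sigma=6):
    y = gaussian_filter(rng.standard_normal(shape), sigma=sigma)
    y /= y.std()                       # approximately unit-variance Gaussian Y
    return (y**2 - 1) / np.sqrt(2)     # zero mean, unit variance, non-Gaussian

rng = np.random.default_rng(1)
N = 100
X_N = sum(sample_Z(rng) for _ in range(N)) / np.sqrt(N)
print(X_N.mean(), X_N.std())           # ~0 and ~1; skewness decays like 1/sqrt(N)
```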

Research in cosmology and astrophysics is often based on massive numerical simulations. Because the computational cost of such simulations is extremely high, analytic methods, including asymptotic expansions, can be immensely helpful when they provide the same information as the simulations.

Finally, we make a few further comments on the use of Minkowski functionals in astrophysics and on recent related work. The systematic application of the Minkowski functional to Planck CMB data was reported in [Reference Collaboration19]. The paper [Reference Fantaye, Marinucci, Hansen and Maino8] discusses the use of the Minkowski functional for the CMB data in the presence of sky masks. In addition to the CMB, the Minkowski functionals have been applied to various types of survey observations, including 2-dimensional maps of the weak-lensing field of galaxy surveys, and 3-dimensional maps of the large-scale structure of the universe probed by galaxy distributions. The exact treatment of the 2-dimensional cosmological data as a random field on the celestial sphere has been studied (e.g., [Reference Fantaye, Marinucci, Hansen and Maino8, Reference Marinucci and Peccati12]). For recent developments in the applications of the Euler characteristic, Minkowski functional, and related geometric methods, see [Reference Pranav20].

1.2. Scope of this paper

In reality, the index set (i.e., the survey area) T is a bounded domain, and boundary corrections should always be incorporated. Let $\mathcal{M}_{j}(T)$ , $0\le j\le n$ , be the Minkowski functional of the domain T. Then, if T is a $C^2$ -stratified manifold of positive reach (see Section 2.1 for the definition), it is known that

(1.3) \begin{align} \mathbb{E}[\chi(T_x)] = \sum_{j=0}^{n} \mathcal{L}_{j}(T)\,\Xi_{j,N}(x),\end{align}

if the expectation exists, where $\mathcal{L}_{j}(T)=\omega_{n-j}^{-1}\binom{n}{j}\mathcal{M}_{n-j}(T)$ is referred to as the Lipschitz–Killing curvature of T, $\omega_{j}$ is the volume of the unit ball in $\mathbb{R}^j$ , and $\Xi_{j,N}(x)$ is the Euler characteristic density of the random field $X_N(t)$ in (1.2) restricted to the j-dimensional linear subspace. See [Reference Worsley28], or [Reference Chamandy, Worsley, Taylor and Gosselin5, Theorem 1] for a proof based on Hadwiger’s theorem.

Moreover, the expected Minkowski functionals are expressed as

(1.4) \begin{align} \mathbb{E}[\mathcal{L}_{k}(T_x)] = \omega_{n-k}^{-1}\binom{n}{k}\mathbb{E}[\mathcal{M}_{n-k}(T_x)] = \sum_{j=0}^{n-k} \genfrac{[}{]}{0pt}{}{k+j}{k} \mathcal{L}_{k+j}(T)\,\Xi_{j,N}(x),\end{align}

where

$$\genfrac{[}{]}{0pt}{}{k+j}{k} = \frac{\Gamma\big(\frac{k+j+1}{2}\big)\Gamma\big(\frac{1}{2}\big)}{\Gamma\big(\frac{k+1}{2} \big)\Gamma \big(\frac{j+1}{2}\big)}.$$

A proof based on Crofton’s theorem is presented in the next section (Section 2.1).
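A small numerical helper (ours, for illustration) for the bracket coefficient in (1.4):

```python
# [k+j; k] = Gamma((k+j+1)/2) Gamma(1/2) / (Gamma((k+1)/2) Gamma((j+1)/2)).
from math import gamma, pi

def flag_coeff(k: int, j: int) -> float:
    return gamma((k + j + 1) / 2) * gamma(0.5) / (gamma((k + 1) / 2) * gamma((j + 1) / 2))

print(flag_coeff(1, 1), pi / 2)   # both 1.5707...: [2; 1] = pi/2
```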

In this study, we obtain the asymptotic expansion formula for the Euler characteristic density $\Xi_{n,N}(x)$ for a large N and examine the effect of the non-Gaussianity. The resulting formula also automatically provides the asymptotic expansion formula for the expected Euler characteristic and the expected Minkowski functionals via (1.3) and (1.4), respectively. Prior research [Reference Hikage, Komatsu and Matsubara9, Reference Matsubara13] has provided a perturbation formula for the expected Minkowski functional up to $O(\nu)$ $\big(\nu=N^{-\frac{1}{2}}\big)$ using the 3-point correlation function for dimension $n\le 3$ , while [Reference Matsubara14] provided the formula up to $O\big(\nu^2\big)$ $\left(\nu^2=N^{-1}\right)$ using the 3- and 4-point correlation functions for $n=2$ . This study completes the asymptotic expansion formulas up to $O\big(N^{-1}\big)$ using the 3- and 4-point correlation functions for an arbitrary dimension n. Moreover, beyond completing the existing research, we provide results for arbitrary n that reveal structural properties of the Minkowski functionals (e.g., Theorem 2).

In related work, [Reference Chamandy, Worsley, Taylor and Gosselin5] derived an approximation for the expected Euler characteristic of the excursion set of an isotropic central limit random field. The approach in [Reference Chamandy, Worsley, Taylor and Gosselin5] is based on a version of the saddle-point approximation, which is different from the Edgeworth-type expansion approach used in this study. Our results are described in terms of the derivatives of k-point correlation functions, and can be translated into the language of higher-order spectra. Another difference is that in [Reference Chamandy, Worsley, Taylor and Gosselin5] the threshold x increases as the sample size N increases, whereas x in this study is assumed fixed.

The authors are preparing a companion paper in cosmology [Reference Matsubara and Kuriki15], where the perturbation formula for the Euler characteristic density up to $O\left(\nu^2\right)$ is derived using an alternative approach. The final formulas are presented in terms of higher-order spectra. It has been confirmed that the formulas in the two studies are consistent.

This paper is organized as follows. In Section 2, the Minkowski functional and the Lipschitz–Killing curvature are defined, and (1.4) is proved. The formula for the Euler characteristic density is then presented as the starting point of this study. The main results are presented in Section 3. The isotropic k-point correlation functions are introduced, and then the asymptotic expansion formula for the Euler characteristic density up to $O\big(N^{-1}\big)$ is derived. In addition, a class of non-Gaussianity which cannot be captured by the Minkowski functional approach is identified. In Section 4, as an example, we consider an isotropic chi-squared random field, which is a typical weakly Gaussian random field when the number of degrees of freedom is large. The isotropic chi-squared random field belongs to a class of Gaussian-related random fields whose Euler characteristic density is explicitly known. We analytically and numerically confirm the precision of the asymptotic expansion approximations. Proofs of the main results are presented in Section 5. In the appendix, the Hermite polynomial identities are proved (Section A.1), and the regularity conditions for the asymptotic expansion are summarized (Section A.2).

2. Preliminaries

2.1. Tube and Minkowski functional

We begin with a quick review of the Minkowski functional. Let M be a bounded closed domain in $\mathbb{R}^n$ . For $u\in\mathbb{R}^n$ , let $u_M$ be a point such that $\Vert u_M-u\Vert=\min_{w\in M}\Vert w-u\Vert$ . Note that $u_M$ exists but may not be unique. A tube about M with radius r is defined by a set of points whose distance from M is less than or equal to r:

(2.1) \begin{equation}\mathrm{Tube}(M,r) = \bigl\{ u\in\mathbb{R}^n \mid \Vert u_M-u\Vert \le r \bigr\}.\end{equation}

Then, the critical radius or reach is defined as

\[ r_\mathrm{cri}(M) = \inf\bigl\{r\ge 0 \mid \mbox{$u_M$ is unique for all $u\in\mathrm{Tube}(M,r)$} \bigr\}.\]

The critical radius $r_\mathrm{cri}(M)$ is the maximum radius for which the tube $\mathrm{Tube}(M,r)$ does not have self-overlap ([Reference Kuriki, Takemura and Taylor11, Section 2.2]). M is said to be of positive reach if $r_\mathrm{cri}(M)$ is strictly positive. The classical Steiner formula states that when M is a $C^2$ -stratified manifold ( $C^2$ -piecewise smooth manifold [Reference Takemura and Kuriki24]) of positive reach, for all $r\in[0,r_\mathrm{cri}(M)]$ , the volume of the tube (2.1) can be expressed as a polynomial in r:

(2.2) \begin{equation} \mathrm{Vol}_n(\mathrm{Tube}(M,r)) = \sum_{j=0}^n \omega_{n-j}r^{n-j} \mathcal{L}_j(M) = \sum_{j=0}^n r^{j} \binom{n}{j} \mathcal{M}_j(M),\end{equation}

where $\mathrm{Vol}_n({\cdot})$ is the n-dimensional volume and $\omega_j=\pi^{j/2}/\Gamma(j/2+1)$ is the volume of the unit ball in $\mathbb{R}^j$ . The Minkowski functional $\mathcal{M}_j(M)$ of M and the Lipschitz–Killing curvature $\mathcal{L}_j(M)$ of M are defined as the coefficients of the polynomial. Note that $\mathcal{L}_j(M)$ is defined independently of the dimension n of the ambient space. There are variations of the definitions of the Minkowski functional. The definition in (2.2) is from [Reference Schneider and Weil23, Section 14.2].
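As a standard worked example of (2.2), take M to be the unit disk in $\mathbb{R}^2$ (so $n=2$, $\omega_0=1$, $\omega_1=2$, $\omega_2=\pi$). The tube of radius r is the disk of radius $1+r$, and

\[ \mathrm{Vol}_2(\mathrm{Tube}(M,r)) = \pi(1+r)^2 = \omega_2 r^2\,\mathcal{L}_0(M) + \omega_1 r\,\mathcal{L}_1(M) + \omega_0\,\mathcal{L}_2(M),\]

so that $\mathcal{L}_0(M)=\chi(M)=1$, $\mathcal{L}_1(M)=\pi$ (half the circumference $2\pi$), and $\mathcal{L}_2(M)=\pi$ (the area).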

By the Gauss–Bonnet theorem, the Minkowski functional of the largest degree is proportional to the Euler characteristic of M:

\[ \chi(M) = \mathcal{L}_0(M) = \mathcal{M}_n(M)/\omega_n.\]

Throughout this study, it is assumed that the domain T of the random field X(t) is a $C^2$ -stratified manifold of positive reach. In the following, we prove (1.4). Let A(n, k) be the set of k-dimensional affine subspaces in $\mathbb{R}^n$ . Let $L\in A(n,n-k)$ , and let $X|_L$ be the restriction of X to L, that is, a random field on $T\cap L$ . Because X is isotropic, when L is given, from (1.3) we have

\[ \mathbb{E}[\chi((T\cap L)_x)] = \sum_{j=0}^{n-k} \mathcal{L}_j(T\cap L)\,\Xi_{j,N}(x).\]

Note that ${(T\cap L)_x=T_x\cap L}$ . Let $\mu_{n,n-k}$ be the invariant measure over $A(n,n-k)$ , normalized so that $\mu_{n,k}(\{L\in A(n,k) \,|\, L\cap \mathbb{B}^n\ne\emptyset \}) = \omega_{n-k}$ , where $\mathbb{B}^n$ denotes the unit ball in $\mathbb{R}^n$ . Taking the integral $\int_{A(n,n-k)}\mathrm{d}\mu_{n,n-k}(L)$ , by the generalized Crofton theorem for a positive-reach set [Reference Hug and Schneider10], we obtain

\[ c_{n,0,k} \mathbb{E}[\mathcal{L}_{k}(T_x)] = \sum_{j=0}^{n-k} c_{n,j,k} \mathcal{L}_{k+j}(T)\,\Xi_{j,N}(x),\]

where

$$c_{n,j,k} = \frac{\Gamma\big(\frac{k+1}{2}\big)\Gamma\big(\frac{k+j+1}{2}\big)}{\Gamma\big(\frac{j+1}{2}\big)\Gamma\big(\frac{n+1}{2}\big)},$$

which proves (1.4).

2.2. Marginal distributions of X and its derivatives

In this study, we deal with the central limit random field $X(t)=X_N(t)$ in (1.2) on $T\subset\mathbb{R}^n$ . We assume that X(t) has zero mean, unit variance, and a smooth sample path $t\mapsto X(t)$ in the following sense: X(t), $\nabla X(t)=(X_i(t))_{1\le i\le n}\in\mathbb{R}^n$ , and $\nabla^2 X(t)=(X_{ij}(t))_{1\le i,j\le n}\in\mathrm{Sym}(n)$ (the set of $n\times n$ real symmetric matrices) exist and are continuous with respect to $t=(t^1,\ldots,t^n)$ almost surely, where

(2.3) \begin{equation} X_i(t) = \frac{\partial}{\partial t^i}X(t), \qquad X_{ij}(t) = \frac{\partial^2}{\partial t^i\partial t^j}X(t).\end{equation}

In addition, it is assumed that $X({\cdot})$ is isotropic; that is, every finite-dimensional marginal distribution $\{X(t)\}_{t\in T^{\prime}}$ , where $T^{\prime}\subset\mathbb{R}^n$ is a finite set, is invariant under the group of rigid motions of t.

This isotropy implies that the marginal moments are independent of t. Recall that we assumed $\mathbb{E}[X(t)]=0$ and $\mathbb{E}\left[X(t)^2\right]=1$ . The covariance function of an isotropic field is a function of the distance between two points:

(2.4) \begin{equation} \mathbb{E}[X(t_1)X(t_2)] = \mathbb{E}[Z_{(i)}(t_1) Z_{(i)}(t_2)] = \rho\bigl(\tfrac12\Vert t_1-t_2\Vert^2\bigr),\end{equation}

where $Z_{(i)}$ was used in (1.2). This covariance function is sufficiently smooth at $t_1=t_2$ if we assume the following condition on $\rho$ :

(2.5) \begin{equation} \rho(0)=1,\qquad \rho^{\prime}(0)<0, \quad \mbox{ and } \mathrm{d}^4 \rho(x)/\mathrm{d} x^4 \mbox{ exists}.\end{equation}

Under the condition (2.5), $\partial^{4}\rho\bigl(\frac{1}{2}\Vert t_1-t_2\Vert^2\bigr)/ \partial t_{1}^{i_1}\partial t_{1}^{j_1} \partial t_{2}^{i_2}\partial t_{2}^{j_2}$ exists, and the mean square derivatives $X^*_i$ and $X^*_{ij}$ of X exist. Their moments of order up to 2 are obtained by exchanging the derivatives and the expectation symbol $\mathbb{E}[{\cdot}]$ ([Reference Adler1, Theorem 2.2.2]). For instance,

(2.6) \begin{align} \mathbb{E}\big[X^*_{ij}(t) X^*_{kl}(t)\big] = \frac{\partial^4}{\partial t_{1}^{i}\partial t_{1}^{j}\partial t_{2}^{k}\partial t_{2}^{l}} \mathbb{E}[X(t_1)X(t_2)] \Big|_{t_1=t_2=t}.\end{align}

Moreover, (2.6) is equivalent to $\mathbb{E}[X_{ij}(t) X_{kl}(t)]$ when the almost-sure derivatives (2.3) exist. In this manner, we obtain the moments of $(X(t),\nabla X(t),\nabla^2 X(t))$ of order up to 2 as follows: $\mathbb{E}[X_i(t)] = \mathbb{E}[X_{ij}(t)] = \mathbb{E}[X_{i}(t)X(t)] = \mathbb{E}[X_{ij}(t)X_{k}(t)] = 0$ , $\mathbb{E}[X_i(t) X_j(t)]=-\rho^{\prime}(0)\delta_{ij}$ , $\mathbb{E}[X_{ij}(t)X(t)]=\rho^{\prime}(0)\delta_{ij}$ , and

\[ \mathbb{E}\big[X_{ij}(t)X_{kl}(t)\big]=\rho^{\prime\prime}(0) \big(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}+\delta_{ij}\delta_{kl}\big),\]

where $\delta_{ij}$ is the Kronecker delta. In particular, $\nabla X(t)$ is uncorrelated with X(t) for each fixed t. We change the variable from $\nabla^2 X(t)$ to

\begin{equation*} R(t) = (R_{ij}(t))_{1\le i,j\le n} = \nabla^2 X(t) + \gamma X(t) I_n, \quad \gamma=-\rho^{\prime}(0).\end{equation*}

Then R(t) is uncorrelated with X(t) and $\nabla X(t)$ for each fixed t, which simplifies the subsequent calculations. R(t) has zero mean and the covariance structure

(2.7) \begin{align}\mathbb{E}\big[R_{ij}(t) R_{kl}(t)\big]= \alpha \frac{1}{2} \big(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\big) + \beta \delta_{ij}\delta_{kl},\end{align}

where $\alpha = 2\rho^{\prime\prime}(0)$ and $\beta = \rho^{\prime\prime}(0)-\rho^{\prime}(0)^2$ . As an $\binom{n+1}{2}\times\binom{n+1}{2}$ covariance matrix, (2.7) is nonnegative definite if and only if $\alpha\ge 0$ and $\alpha+n\beta\ge 0$ , and is positive definite if and only if

(2.8) \begin{equation} \alpha>0,\,\alpha+n\beta>0, \quad \mbox{or equivalently},\quad \rho^{\prime\prime}(0) > \frac{n}{n+2}\rho^{\prime}(0)^2 > 0.\end{equation}

In this study, we assume (2.8). This is satisfied when ${(X(t),\nabla X(t),\nabla^2 X(t))\in\mathbb{R}^{1+n+\binom{n+1}{2}}}$ has a probability density function. This assumption is sufficient for our motivating applications in cosmology ([Reference Matsubara13]).
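The condition (2.8) is easy to check numerically. The following sketch (ours; the values $\rho^{\prime}(0)=-1$, $\rho^{\prime\prime}(0)=0.8$ are illustrative and satisfy (2.8) for $n=2$) assembles the covariance matrix (2.7) of the entries $R_{ij}$, $i\le j$, and inspects its eigenvalues:

```python
# Hedged sketch: covariance matrix of (R_11, R_12, R_22) from (2.7) for n=2.
import numpy as np

def R_covariance(alpha: float, beta: float, n: int = 2) -> np.ndarray:
    idx = [(i, j) for i in range(n) for j in range(i, n)]
    d = lambda a, b: 1.0 if a == b else 0.0          # Kronecker delta
    C = np.empty((len(idx), len(idx)))
    for r, (i, j) in enumerate(idx):
        for c, (k, l) in enumerate(idx):
            C[r, c] = alpha * 0.5 * (d(i,k)*d(j,l) + d(i,l)*d(j,k)) + beta * d(i,j)*d(k,l)
    return C

rho1, rho2 = -1.0, 0.8                 # rho'(0), rho''(0); rho2 > (n/(n+2)) rho1^2
alpha, beta = 2 * rho2, rho2 - rho1**2
print(np.linalg.eigvalsh(R_covariance(alpha, beta)))  # all positive under (2.8)
```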

In the limit as $N\to\infty$ , for each fixed t, X(t), $\nabla X(t)$ , and R(t) are independently distributed as Gaussian distributions $X(t)\sim\mathcal{N}(0,1)$ , $\nabla X(t)\sim \mathcal{N}_n(0,\gamma I_n)$ , and $R(t)\in\mathrm{Sym}(n)$ is a zero-mean Gaussian random matrix with a covariance structure (2.7). This limiting Gaussian random matrix R(t) is referred to as the Gaussian orthogonal invariant matrix ([Reference Cheng and Schwartzman6]) and has a probability density function

\begin{align*}p^0_{R}(R)\propto\exp\biggl\{-\frac{1}{2\alpha}\mathrm{tr}\bigl(\big(R-\tfrac{1}{n}\mathrm{tr}(R) I_n\big)^2\bigr) - \frac{1}{2 n(\alpha+n\beta)}\mathrm{tr}(R)^2 \biggr\}\end{align*}

with respect to $\mathrm{d} R=\prod_{1\le i\le j\le n} \mathrm{d} R_{ij}$ .

The paper [Reference Cheng and Schwartzman6] proved that in the boundary case $\rho^{\prime\prime}(0)=(n/(n+2))\rho^{\prime}(0)^2>0$ , there exists an isotropic Gaussian random field. For non-Gaussian random fields in the boundary case, nothing seems to be known.

2.3. Euler characteristic density

In the following, let $V(t)=(V_i(t))_{1\le i\le n}=\nabla X(t)$ . The probability density function of (X(t),V(t), R(t)) with respect to the Lebesgue measure $\mathrm{d} X\mathrm{d} V\mathrm{d} R$ , $\mathrm{d} V=\prod_{i=1}^n \mathrm{d} V_i$ , $\mathrm{d} R= \prod_{1\le i\le j\le n} \mathrm{d} R_{ij}$ , is denoted by $p_N(X,V,R)$ if it exists. By isotropy, this density does not depend on the point t.

Morse’s theorem is a fundamental tool for computing the Euler characteristic of a set. The following is a consequence of the Kac–Rice formula, which is the integral form of Morse’s theorem.

Proposition 1. ([Reference Adler and Taylor2].) Suppose that the regularity conditions in Theorem 11.2.1 of [Reference Adler and Taylor2] with f(t) and g(t) replaced by $\nabla X(t)$ and $(X(t),\nabla^2 X(t))$ , respectively, are satisfied. Then the Euler characteristic density in (1.3) and (1.4) is

(2.9) \begin{equation} \Xi_{n,N}(x) = \int_{x}^\infty \biggl[ \int_{\mathrm{Sym}(n)} \det\bigl({-}R +\gamma x^{\prime} I_n\bigr) p_N(x^{\prime},0,R) \mathrm{d} R \biggr] \mathrm{d} x^{\prime}.\end{equation}

When the random field $X({\cdot})$ is Gaussian, the conditions for Proposition 1 are simplified (see [Reference Adler and Taylor2, Corollary 11.2.2]). We use the formula (2.9) as the starting point of the analysis. In the limit as $N\to\infty$ , $p_{\infty}(x,0,R)=\phi(x)p^0_{V}(0)p^0_{R}(R)$ , where

(2.10) \begin{equation} \phi(x) = \frac{1}{\sqrt{2\pi}}e^{-x^2/2}\end{equation}

is the probability density function of the standard Gaussian distribution $\mathcal{N}(0,1)$ and $p^0_{V}(0)=(2\pi\gamma)^{-n/2}$ is the probability density function of V evaluated at $V=0$ . We then have the well-known result

(2.11) \begin{equation} \Xi_{n,\infty}(x)=\frac{\gamma^{n/2}}{(2\pi)^{n/2}}\phi(x) H_{n-1}(x),\end{equation}

where

(2.12) \begin{equation} H_k(x) = \phi(x)^{-1} \Bigl({-}\frac{\mathrm{d}}{\mathrm{d} x}\Bigr)^k\phi(x)\end{equation}

is the Hermite polynomial (e.g., [Reference Adler and Taylor2, Reference Tomita26]).
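A minimal numerical sketch of (2.11) (ours; scipy's `hermitenorm` implements the probabilists' Hermite polynomials of (2.12)):

```python
# Limiting Gaussian EC density: Xi_{n,inf}(x) = (gamma/(2 pi))^{n/2} phi(x) H_{n-1}(x).
import numpy as np
from scipy.special import hermitenorm
from scipy.stats import norm

def ec_density_gaussian(x, n, gamma):
    return (gamma / (2 * np.pi))**(n / 2) * norm.pdf(x) * hermitenorm(n - 1)(x)

print(ec_density_gaussian(2.0, n=2, gamma=1.0))
```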

The primary purpose of this study is to derive the asymptotic expansion formula for $\Xi_{n,N}(x)$ around $N=\infty$ .

3. Main results

This section presents the main theorems of this study. The statements are given in terms of the isotropic cumulants introduced below.

3.1. Isotropic cumulants and their derivatives

If a function $f(t_1,\ldots,t_k)$ , $t_i\in\mathbb{R}^n$ , is isotropic, then f is a function of the $t_i$ through the distances between them: $\Vert t_i-t_j\Vert$ , $1\le i<j\le k$ . The k-point correlation function of the isotropic central limit random field $X({\cdot})$ in (1.2) is then written as

(3.1) \begin{equation} \mathrm{cum}(X(t_1),\ldots,X(t_k))= N^{-\frac{1}{2}(k-2)} \kappa^{(k)}(x_{12},x_{13},\ldots,x_{k-1,k}),\quad x_{ab}=\tfrac12 \Vert t_a-t_b\Vert^2,\end{equation}

where $\kappa^{(k)}$ denotes the kth-order cumulant of $Z_{(i)}({\cdot})$ in (1.2). Note that the cumulant of a random vector $(Y_1,\ldots,Y_k)$ is defined by

\[ \mathrm{cum}(Y_1,\ldots,Y_k) = \sum ({-}1)^{\ell-1} (\ell-1)!\, \mathbb{E}\bigl[\textstyle{\prod_{i\in I_1}}Y_i\bigr]\cdots\mathbb{E}\bigl[\textstyle{\prod_{i\in I_\ell}}Y_i\bigr],\]

where the summation runs over all possible set partitions of $\{1,\ldots,k\}$ such that $I_1\sqcup\cdots\sqcup I_\ell = \{1,\ldots,k\}$ ([Reference McCullagh17]). The k-point correlation function exists if and only if the k-dimensional marginal $(X(t_1),\ldots,X(t_k))$ has finite kth-order moments. The 2-point correlation function $\kappa^{(2)}({\cdot})$ is the covariance function $\rho({\cdot})$ in (2.4). The 3- and 4-point correlation functions are as follows:

\begin{align*}& \mathrm{cum}(X(t_1),X(t_2),X(t_3)) = \mathbb{E}[X(t_1)X(t_2)X(t_3)] = N^{-\frac{1}{2}} \kappa^{(3)}(x_{12},x_{13},x_{23}), \\& \mathrm{cum}(X(t_1),X(t_2),X(t_3),X(t_4)) \\&\qquad = \mathbb{E}[X(t_1)X(t_2)X(t_3)X(t_4)] -\mathbb{E}[X(t_1)X(t_2)]\mathbb{E}[X(t_3)X(t_4)][3] \\&\qquad = N^{-1} \kappa^{(4)}(x_{12},x_{13},x_{14},x_{23},x_{24},x_{34}),\end{align*}

where the expression ‘[3]’ represents the three symmetric terms obtained by permuting the indices. Note that $\kappa^{(3)}(x_{12},x_{13},x_{23})$ is symmetric in its arguments, but $\kappa^{(k)}$ ( $k\ge 4$ ) are not symmetric.
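The set-partition formula can be checked numerically; the following hedged sketch (all names ours) estimates a joint cumulant by replacing each expectation with a sample mean:

```python
# Hedged sketch: cum(Y_1,...,Y_k) via the set-partition formula, with
# expectations estimated from samples (rows of Y).
import math
import numpy as np
from itertools import combinations

def partitions(elems):
    """Yield all set partitions of the list `elems`."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            block = [first, *others]
            remaining = [e for e in rest if e not in others]
            for p in partitions(remaining):
                yield [block] + p

def empirical_cumulant(Y):
    """Y: array of shape (num_samples, k)."""
    k = Y.shape[1]
    total = 0.0
    for p in partitions(list(range(k))):
        ell = len(p)
        prod_means = np.prod([Y[:, list(b)].prod(axis=1).mean() for b in p])
        total += (-1)**(ell - 1) * math.factorial(ell - 1) * prod_means
    return total

rng = np.random.default_rng(0)
print(empirical_cumulant(rng.standard_normal((200_000, 3))))  # ~0 for Gaussians
```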

We assume the smoothness of $\kappa^{(k)}$ in (3.1) as a generalization of (2.5):

(3.2) \begin{equation} \frac{\partial^{2k}\kappa^{(k)}(x_{12},x_{13},\ldots,x_{k-1,k})} {\prod_{1\le a<b\le k} (\partial x_{ab})^{n_{ab}}}\ \ \mbox{exists for $\sum n_{ab}=2k$, $n_{ab}\le 4$},\end{equation}

from which it is proved that

(3.3) \begin{equation} \frac{\partial^{2k}\kappa^{(k)}(x_{12},x_{13},\ldots,x_{k-1,k})} {\partial t_{1}^{i_1}\partial t_{1}^{j_1}\cdots \partial t_{k}^{i_k}\partial t_{k}^{j_k}},\ \ x_{ab}=\tfrac12 \Vert t_a-t_b\Vert^2, \ \ \mbox{exists}.\end{equation}

Under this condition, the kth-order cumulants of $(X(t),\nabla X(t),\nabla^2 X(t))$ are obtained by exchanging the derivatives and the cumulant symbol. For instance,

(3.4) \begin{equation} \mathrm{cum}(X_{i}(t),X_{j}(t),X_{kl}(t)) = \frac{\partial^4}{\partial t_1^i\partial t_2^j\partial t_3^k\partial t_3^l} \mathrm{cum}(X(t_1),X(t_2),X(t_3)) \Big|_{t_1=t_2=t_3=t}.\end{equation}

(The proof is identical to that in Section 2.2. The mean square derivatives $X^*_i$ and $X^*_{ij}$ that exist under (2.5) satisfy (3.4) under (3.3).)

To state the main theorems, the following notation is used:

(3.5) \begin{equation}\kappa^{(k)}_{(a_1 b_1),\ldots,(a_K b_K)}(0) = \kappa^{(k)}_E(0) = \Biggl(\prod_{(a,b)\in E}\Bigl(\frac{\partial}{\partial x_{ab}}\Bigr)\Biggr) \kappa^{(k)}\bigl((x_{ab})_{1\le a<b\le k}\bigr)\Big|_{x_{12}=\cdots=x_{k-1,k}=0}\end{equation}

with $E = \{(a_1,b_1),\ldots,(a_K,b_K)\}$ . Here, an undirected graph (V, E) with vertex set $V=\{1,\ldots,k\}$ and edge set E is considered. Note that the edge set E is a multiset that allows multiple edges connecting a pair of vertices. Next, a diagram is defined as the undirected graph (V, E) without the vertex labels. In addition, isolated vertices are omitted from the diagram. That is, two diagrams are identical if their underlying graphs coincide after a suitable relabeling of the vertices. As shown in Figure 3.1, some diagrams contain cycles, while others do not.
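Deciding whether a given edge multiset E contains a cycle is elementary; the following sketch (ours) uses union-find, treating a repeated edge as a cycle of length 2:

```python
# Hedged sketch: cycle detection in the multigraph (V, E) of a derivative (3.5).
def has_cycle(edges) -> bool:
    parent = {}
    def find(v):
        parent.setdefault(v, v)
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra == rb:                        # endpoints already connected
            return True
        parent[ra] = rb
    return False

print(has_cycle([(1, 2), (1, 3), (2, 3)]))                  # Figure 3.1(b): True
print(has_cycle([(1, 2), (1, 3), (1, 4), (4, 5), (4, 6)]))  # Figure 3.1(d): False
```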

Figure 3.1. Diagrams with cycle ((a)–(c)) and without cycle ((d)). (a) $\kappa^{(3)}_{(12),(12),(13)}(0)$ , (b)  $\kappa^{(3)}_{(12),(13),(23)}(0)$ , (c) $\kappa^{(6)}_{(12),(13),(14),(23),(45),(46)}(0)$ , (d) $\kappa^{(6)}_{(12),(13),(14),(45),(46)}(0)$ .

Table 3.1 lists the cycle-free derivatives (3.5) for $k\le 4$ and their abbreviations. This table is needed for the statement of the main theorem in the next section. The multiplicity is the number of undirected graphs that coincide after relabeling the vertices.

Table 3.1. Derivatives of cumulant functions $\kappa^{(k)}$ with cycle-free diagram ( $k=2,3,4$ ).

3.2. Asymptotic expansion of the Euler characteristic density

The two theorems are stated here as the main results. These are proved in the following sections.

Assumption 1.

  1. (i) The covariance function $\rho$ in (2.4) and the third- and fourth-order cumulant functions given by $\kappa^{(k)}$ in (3.1) for $k=3,4$ exist, and they satisfy (2.5), and (3.2) or (3.3), respectively.

  2. (ii) The probability density function $p_N(X,V,R)$ of (X(t),V(t),R(t)) for t fixed exists for $N\ge 1$ , and is bounded for some N. Furthermore, (X(t),V(t),R(t)) has a moment of order $\binom{n+2}{2}+1$ $(\ge 4)$ under $p_1$ .

Theorem 1. Under Assumption 1 , as $N\to\infty$ , the Euler characteristic density (2.9) can be expanded as

\begin{align*}\Xi_{n,N}(x)&= \frac{\gamma^{n/2}}{(2\pi)^{n/2}} \phi(x) \Bigl( H_{n-1}(x) + \frac{1}{\sqrt{N}} \Delta_{1,n}(x) + \frac{1}{N} \Delta_{2,n}(x) \Bigr) + o\big(N^{-1}\big)\end{align*}

uniformly in x, where

\begin{align*}\Delta_{1,n}(x) &= \tfrac{1}{2} \gamma^{-2} \kappa_{11} n (n-1) H_{n-2}(x) - \tfrac{1}{2} \gamma^{-1} \kappa_{1} n H_{n}(x) + \tfrac{1}{6} \kappa_{0} H_{n+2}(x),\end{align*}
\begin{align*}\Delta_{2,n}(x) &= \Bigl({-}\tfrac{1}{6}\gamma^{-3} (3\widetilde\kappa_{111}^{a} + \widetilde\kappa_{111}^{d}) +\tfrac{1}{8}\gamma^{-4} \kappa_{11}^2 (n-7) \Bigr) n(n-1)(n-2) H_{n-3}(x) \nonumber \\&\quad + \Bigl( \tfrac{1}{8}\gamma^{-2} \bigl(\widetilde\kappa_{11}^{aa} (n-2) +4\widetilde\kappa_{11}^{a} (n-1)\bigr) \nonumber \\&\qquad\quad -\tfrac{1}{4}\gamma^{-3} \kappa_{1}\kappa_{11} (n-1)(n-4) \Bigr) n H_{n-1}(x) \nonumber \\&\quad + \Bigl({-} \tfrac{1}{4}\gamma^{-1} \widetilde\kappa_{1} +\tfrac{1}{24}\gamma^{-2} \bigl(3\kappa_{1}^2 (n-2) + 2\kappa_{0}\kappa_{11} (n-1)\bigr) \Bigr) n H_{n+1}(x) \nonumber \\&\quad + \Bigl( \tfrac{1}{24} \kappa_{0} -\tfrac{1}{12}\gamma^{-1} \kappa_{0}\kappa_{1} n \Bigr) H_{n+3}(x) +\tfrac{1}{72} \kappa_{0}^2 H_{n+5}(x). \nonumber\end{align*}

Here, the symbols $\gamma$ , $\kappa_{0}$ , $\kappa_{1}$ , $\kappa_{11}$ , $\widetilde\kappa_{0}$ , $\widetilde\kappa_{1}$ , $\widetilde\kappa_{11}^{a}$ , $\widetilde\kappa_{11}^{aa}$ , $\widetilde\kappa_{111}^{d}$ , and $\widetilde\kappa_{111}^{a}$ are defined in Table 3.1.
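For reference, $\Delta_{1,n}(x)$ is straightforward to evaluate numerically from the Table 3.1 quantities. The following sketch (ours) does so and, as a spot check, plugging in the chi-squared cumulants of Section 4 reproduces $\Delta_{1,n}$ of (4.2):

```python
# Hedged sketch: Delta_{1,n}(x) of Theorem 1 from user-supplied cumulants.
import numpy as np
from scipy.special import hermitenorm

def delta_1(x, n, gamma, k0, k1, k11):
    H = lambda k: hermitenorm(k)(x) if k >= 0 else 0.0   # k<0 only arises with zero coefficient
    return (0.5 * gamma**-2 * k11 * n * (n - 1) * H(n - 2)
            - 0.5 * gamma**-1 * k1 * n * H(n)
            + k0 / 6.0 * H(n + 2))

gam, n, x = 2.0, 4, 1.0    # chi-squared example of Section 4, gamma = 2
lhs = delta_1(x, n, gam, 2*np.sqrt(2), -np.sqrt(2)*gam, gam**2/np.sqrt(2))
H = lambda k: hermitenorm(k)(x)
rhs = np.sqrt(2) * (n*(n-1)*H(n-2)/4 + n*H(n)/2 + H(n+2)/3)  # Delta_{1,n} in (4.2)
print(lhs, rhs)            # equal
```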

Note that, in principle, the asymptotic expansion of $\Xi_{n,N}(x)$ can be obtained up to an arbitrary order in N by modifying Assumption 1. In $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ , derivatives with cycles in their diagrams, such as those in Figures 3.1(a)–(b), do not appear in the expansions. Conversely, all types of derivatives of $\kappa^{(3)}$ and $\kappa^{(4)}$ with cycle-free diagrams listed in Table 3.1 appear. The first half of this observation holds true for an arbitrary order as follows.

Theorem 2. Suppose that the diagram of the derivative $\kappa^{(k)}_{(a_1 b_1),\ldots,(a_K b_K)}(0)$ in (3.5) has cycles of length greater than or equal to 2. Then this derivative does not appear in the asymptotic expansion of $\Xi_{n,N}(x)$ even when expanded to $O\big(N^{-\frac{1}{2}(k-2)}\big)$ .

We conjecture that the non-Gaussianity (that is, $\kappa^{(k)}$ and its derivatives) captured by the Minkowski functional approach is characterized by the presence or absence of cycles in the diagrams.

4. Chi-squared random field

Here we consider a chi-squared random field, defined as a standardized sum of squares of independent copies of a Gaussian random field.

Let Y(t) be a $C^2$ -Gaussian random field on $T\subset\mathbb{R}^n$ with zero mean and covariance function $\mathbb{E}[Y(s) Y(t)] = \rho_Y\big(\frac{1}{2}\Vert s-t\Vert^2\big)$ such that $\rho_Y(0)=1$ , $\rho_Y^{\prime}(0)=-g<0$ , $\rho_Y^{\prime\prime}(0)>(n/(n+2))\rho_Y^{\prime}(0)^2$ , and $\mathrm{d}^4\rho_Y(x)/\mathrm{d} x^4$ exists. (For example, we could have $\rho_Y(x)=e^{-g x}$ , $g>0$ .) Then, define

\begin{equation*} X(t) = X_N(t) = \frac{1}{\sqrt{2N}} \sum_{i=1}^N \bigl(Y_{(i)}(t)^2-1\bigr),\end{equation*}

where the $Y_{(i)}({\cdot})$ are i.i.d. copies of $Y({\cdot})$ .

The chi-squared random field has been well studied as one of the simplest non-Gaussian random fields (see e.g. [Reference Adler1, Section 7.1], [Reference Worsley27], [Reference Matsubara and Yokoyama16], and [Reference Taylor25]). Here, we verify that Assumption 1 is satisfied.

According to [Reference Worsley27, Lemma 3.2], $\nabla X=\nabla X(t)$ and $\nabla^2 X=\nabla^2 X(t)$ can be decomposed as

(4.1) \begin{equation}\nabla X = 2 g^{\frac{1}{2}} X^{\frac{1}{2}} U, \qquad \nabla^2 X = 2 g\Big(P + U U^\top - X I_n - X^{\frac{1}{2}}R\Big),\end{equation}

where $X=X(t)\sim\chi^2_N$ , $U\sim\mathcal{N}_n(0,I_n)$ , $P\sim\mathcal{W}_{n\times n}(N-1,I_n)$ , and $R\in\mathrm{Sym}(n)$ , which is a Gaussian orthogonal invariant matrix with parameters $\alpha=2\rho_Y^{\prime\prime}(0)/g^2$ and $\beta=\rho_Y^{\prime\prime}(0)/g^2-1$ in (2.7), are independently distributed. From (4.1), $(X,\nabla X,\nabla^2 X)$ has moments of arbitrary order. In addition, the conditional probability density of $(X,\nabla X,\nabla^2 X)$ given R is obtained as

\begin{align*} p(X,\nabla X,\nabla^2 X|R) \propto & \det(P)^{\frac{1}{2}(N-n-2)} e^{-\frac{1}{2}\mathrm{tr}(P)} e^{-\frac{1}{8 g X}\Vert\nabla X\Vert^2} X^{\frac{1}{2}(N-n)-1}e^{-X/2} \mathbb{1}(P\succ 0),\end{align*}

where

\[ P = (2 g)^{-1}\nabla^2 X - (4 g)^{-1}X^{-1}(\nabla X)(\nabla X)^\top + X I_n + X^{\frac{1}{2}}R,\]

and $P\succ 0$ indicates that P is positive definite. This conditional density is continuous and bounded above when N is large. Hence, so is the unconditional density $p(X,\nabla X,\nabla^2 X)=\mathbb{E}^R[p(X,\nabla X,\nabla^2 X|R)]$ , and Assumption 1(ii) is satisfied.

The k-point correlation function $\kappa^{(k)}$ coincides with the cumulant of $Z(t)=(Y(t)^2-1)/\sqrt{2}$ , as follows. Let $x_{ab}=\frac{1}{2}\Vert t_a-t_b\Vert^2$ . We have

\begin{align*} & \rho(x_{12}) = \kappa^{(2)}(x_{12}) = \mathrm{cum}(Z(t_1),Z(t_2)) = \rho_Y(x_{12})^2, \\ & \kappa^{(3)}(x_{12},x_{13},x_{23}) = \mathrm{cum}(Z(t_1),Z(t_2),Z(t_3)) = 2^{3/2}\rho_Y(x_{12})\rho_Y(x_{13})\rho_Y(x_{23}),\end{align*}

and

\begin{align*} & \kappa^{(4)}(x_{12},x_{13},\ldots,x_{34}) = \mathrm{cum}(Z(t_1),Z(t_2),Z(t_3),Z(t_4)) \\ & = 4\bigl[\rho_Y(x_{13})\rho_Y(x_{14})\rho_Y(x_{23})\rho_Y(x_{24}) + \rho_Y(x_{12})\rho_Y(x_{14})\rho_Y(x_{23})\rho_Y(x_{34}) \\ &\quad + \rho_Y(x_{12})\rho_Y(x_{13})\rho_Y(x_{24})\rho_Y(x_{34})\bigr].\end{align*}

We see that $\kappa^{(k)}$ , $k=2,3,4$ , as defined above satisfy the requirements (2.5) and (3.2). Hence, Assumption 1(i) is satisfied.

The cumulants listed in Table 3.1 are as follows:

\begin{align*}& \rho(0) = 1, \quad -\gamma = \rho^{\prime}(0) = -2 g, \quad \kappa_{0} = 2\sqrt{2}, \quad \kappa_{1} = -\sqrt{2}\gamma, \quad \kappa_{11} = \gamma^2/\sqrt{2}, \\& \widetilde\kappa_{0} = 12, \quad \widetilde\kappa_{1} = -4\gamma, \quad \widetilde\kappa_{11}^{a} = \gamma^2, \quad \widetilde\kappa_{11}^{aa} = 2\gamma^2, \quad \widetilde\kappa_{111}^{d} = 0, \quad \widetilde\kappa_{111}^{a} = -\gamma^3/2.\end{align*}

The quantities $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ in Theorem 1 are

(4.2) \begin{equation}\begin{aligned}\Delta_{1,n}(x) &= \sqrt{2}\bigl(\tfrac{1}{4}n(n-1) H_{n-2}(x) +\tfrac{1}{2}n H_{n}(x) +\tfrac{1}{3}H_{n+2}(x)\bigr), \\\Delta_{2,n}(x) &= \tfrac{1}{16}n(n-1)(n-2)(n-3) H_{n-3}(x) +\tfrac{1}{4} n^2(n-2) H_{n-1}(x) \\&\quad +\tfrac{1}{12}n(5n+4) H_{n+1}(x) +\tfrac{1}{6}(2n+3) H_{n+3}(x) +\tfrac{1}{9}H_{n+5}(x).\end{aligned}\end{equation}

The exact formula for the isotropic chi-squared random field is obtained in [Reference Worsley27, Theorem 3.5] as follows:

(4.3) \begin{equation}\Xi_{n,N}(x) = \frac{g^{n/2}}{(2\pi)^{n/2}} \frac{1}{2^{N/2-1}\Gamma(N/2)} H_{n-1}^{(N-1)}(\sqrt{y}) e^{-y/2}, \quad y=N+x\sqrt{2N},\end{equation}

where

\begin{equation*} H^{(N)}_n(x) = e^{x^2/2} \Bigl({-}\frac{\mathrm{d}}{\mathrm{d} x}\Bigr)^n \bigl(x^N e^{-x^2/2}\bigr).\end{equation*}

This formula can also be obtained as a special case of the Euler characteristic density of the Gaussian-related random fields (see [Reference Adler and Taylor2, Reference Adler and Taylor3] and [Reference Panigrahi, Taylor and Vadlamani18]).
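The agreement quantified in Figures 4.2–4.3 below can be reproduced numerically. The following sketch (ours; the choices $n=4$, $N=100$, $g=1$ are illustrative) evaluates the exact density (4.3) symbolically at high precision and compares it with the two-term expansion of Proposition 2:

```python
# Hedged sketch: exact chi-squared EC density (4.3) vs. its expansion.
import numpy as np
import sympy as sp
from scipy.special import hermitenorm
from scipy.stats import norm

n, N = 4, 100
gam = 2.0                                     # gamma = 2g with g = 1
z = sp.symbols('z')

# H_{n-1}^{(N-1)}(z) = e^{z^2/2} (-d/dz)^{n-1} (z^{N-1} e^{-z^2/2})
expr = z**(N - 1) * sp.exp(-z**2 / 2)
for _ in range(n - 1):
    expr = -sp.diff(expr, z)
H_gen = sp.expand(expr * sp.exp(z**2 / 2))    # reduces to a polynomial in z

def xi_exact(x):
    y = sp.Integer(N) + sp.nsimplify(x) * sp.sqrt(2 * N)
    val = (H_gen.subs(z, sp.sqrt(y)) * sp.exp(-y / 2)
           / ((2*sp.pi)**sp.Rational(n, 2)
              * 2**sp.Rational(N - 2, 2) * sp.gamma(sp.Rational(N, 2))))
    return float(val.evalf(200))              # huge cancellations: keep precision high

def xi_expansion(x):
    H = lambda k: hermitenorm(k)(x)
    d1 = np.sqrt(2) * (n*(n-1)*H(n-2)/4 + n*H(n)/2 + H(n+2)/3)
    d2 = (n*(n-1)*(n-2)*(n-3)*H(n-3)/16 + n**2*(n-2)*H(n-1)/4
          + n*(5*n+4)*H(n+1)/12 + (2*n+3)*H(n+3)/6 + H(n+5)/9)
    return (gam/(2*np.pi))**(n/2) * norm.pdf(x) * (H(n-1) + d1/np.sqrt(N) + d2/N)

for x in (0.0, 1.0, 2.0):
    print(f"x={x}: exact={xi_exact(x):.6g}, expansion={xi_expansion(x):.6g}")
```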

Through direct calculations using a generating function (not using Theorem 1), the following proposition can be shown. The proof is provided at the end of this section.

Proposition 2. The Euler characteristic density $\Xi_{n,N}(x)$ in (4.3) can be expanded as

\begin{equation*}\Xi_{n,N}(x) = \frac{\gamma^{n/2}}{(2\pi)^{n/2}} \phi(x) \Bigl( H_{n-1}(x) + \frac{1}{\sqrt{N}} \Delta_{1,n}(x) + \frac{1}{N} \Delta_{2,n}(x) \Bigr) + o\big(N^{-1}\big)\end{equation*}

as $N\to\infty$ , where $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ are defined in (4.2).

Figure 4.1 shows the Euler characteristic densities of the chi-squared random field on $\mathbb{R}^4$ when the number of degrees of freedom N is 10, 100, and $\infty$ . The curve converges to the limiting Gaussian curve as N increases.

Figure 4.1. Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with N degrees of freedom (dotted line, $N=10$ ; dashed line, $N=100$ ; solid line, $N=\infty$ ).

Figure 4.2. Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with 100 degrees of freedom, and its approximations (dot-dashed line, true curve; dotted line, Gaussian approximation; dashed line, first approximation; solid line, second approximation).

Figure 4.3. Approximation error of Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with 100 degrees of freedom (dotted line, Gaussian approximation; dashed line, first approximation; solid line, second approximation).

Figure 4.2 shows the Euler characteristic density $\Xi_{n,N}(x)$ ( $n=4$ ) of the chi-squared random field and its Gaussian approximation: $(2\pi)^{-n/2}\phi(x) H_{n-1}(x)$ , the first approximation ${(2\pi)^{-n/2}\phi(x)\bigl( H_{n-1}(x) + \Delta_{1,n}(x)/\sqrt{N} \bigr)}$ , and the second approximation ${(2\pi)^{-n/2} \phi(x) \bigl( H_{n-1}(x) + \Delta_{1,n}(x)/\sqrt{N} + \Delta_{2,n}(x)/N \bigr)}$ .

In Figure 4.2, the four curves are too close to distinguish. Figure 4.3 shows the differences of the three approximations from the true curve $\Xi_{n,N}(x)$ . We see that the first approximation is more accurate than the Gaussian approximation, and the second approximation is more accurate than the first, as expected.

Proof of Proposition 2. We prove that

(4.4) \begin{equation} \frac{2^{-n/2}}{2^{N/2-1}\Gamma(N/2)} H_{n-1}^{(N-1)}(\sqrt{y}) e^{-y/2}, \quad y=N+x\sqrt{2N},\end{equation}

can be expanded as $\phi(x) \bigl( H_{n-1}(x) + \Delta_{1,n}(x)/\sqrt{N} + \Delta_{2,n}(x)/N \bigr) + o\big(N^{-1}\big)$ , where $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ are given in (4.2).

By multiplying (4.4) by $z^{n-1}/(n-1)!$ and taking the summation over $n\ge 1$ , we obtain the generating function of (4.4) as

(4.5) \begin{equation}\frac{1}{2^{(N-1)/2}\Gamma(N/2)}\big({-}z/\sqrt{2}+\sqrt{y}\big)^{N-1} e^{-\frac{1}{2}\left({-}z/\sqrt{2}+\sqrt{y}\right)^2}, \quad y=N+x\sqrt{2N}.\end{equation}

The generating function (4.5) can be expanded around $N=\infty$ as

\[ \varphi_0(z,x)\bigl(1+p_1(z,x)/\sqrt{N}+p_2(z,x)/N \bigr) + o\big(N^{-1}\big),\]

where

\[ \varphi_0(z,x)=e^{-\frac{1}{2} (x-z)^2}/\sqrt{2\pi},\]

and

\begin{align*} p_1(z,x) &= \bigl( \tfrac{1}{3}x(2x^2-3) -(x-1)(x+1)z +\tfrac{1}{2}x z^2 -\tfrac{1}{6}z^3 \bigr)/\sqrt{2}, \nonumber \\ p_2(z,x) &= \tfrac{1}{36} \big(4x^6-30x^4+27x^2-6\big) -\tfrac{1}{12} (x-2)x(x+2) \big(4x^2-3\big) z \\ &\quad +\tfrac{1}{12} \big(5x^4-15x^2+6\big) z^2-\tfrac{1}{36} x\big(11x^2-21\big) z^3 \\ &\quad +\tfrac{7}{48} (x-1)(x+1) z^4-\tfrac{1}{24} x z^5+\tfrac{1}{144} z^6.\end{align*}

We select the coefficient of the term $z^{n-1}/(n-1)!$ . Because $e^{z x-z^2/2}$ is the generating function of the Hermite polynomials, the coefficient of the term $z^{n-1}/(n-1)!$ in $\varphi_0(z,x) z^a$ is

\[ \phi(x)H_{n-a-1}(x) (n-1)_a, \quad (n-1)_a = (n-1)(n-2)\cdots (n-a),\]

where $\phi(x)=\varphi_0(0,x)$ denotes the probability density function of $\mathcal{N}(0,1)$ . Based on this term-rewriting rule, we obtain the two terms $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ as combinations of powers of x and Hermite polynomials in x. By applying the three-term relation

(4.6) \begin{equation} x H_{k}(x) = H_{k+1}(x) + k H_{k-1}(x),\end{equation}

we have the expressions for $\Delta_{1,n}(x)$ and $\Delta_{2,n}(x)$ in terms of the Hermite polynomials only.

5. Proofs of the main results

In this section, we prove the main theorems. The outline is as follows. We first describe the characteristic function of $(X(t),\nabla X(t),\nabla^2 X(t))$ using the 3- and 4-point correlation functions of $Z_{(i)}({\cdot})$ in (1.2) (Section 5.1). The resulting characteristic function is modified into the conditional characteristic function of $(X(t),\nabla^2 X(t))$ when $\nabla X(t)=0$ is given. Then the integral (2.9) is obtained by taking derivatives of the conditional characteristic function (Section 5.2). The validity of the asymptotic expansion is proved separately. Theorem 2 is proved in Section 5.3.

5.1. Isotropic cumulant generating function

The target quantity $\Xi_{n,N}(x)$ is an expectation with respect to the distribution of (X, V, R), $V=\nabla X$ , $R=\nabla^2 X+\gamma X I_n$ . The point t is fixed and is therefore omitted from the notation. To evaluate this, we first identify the characteristic function of $(X,\nabla X,\nabla^2 X)$ . We introduce parameters $T = (\tau_i)\in\mathbb{R}^n$ and $\Theta = (\theta_{ij})\in\mathrm{Sym}(n)$ such that

(5.1) \begin{equation} \theta_{ij} = \frac{1+\delta_{ij}}{2}\tau_{ij} \quad (i\le j).\end{equation}

The characteristic function of the $1+n+n(n+1)/2=\binom{n+2}{2}$ -dimensional random vector $(X,\nabla X,\nabla^2 X)$ is

(5.2) \begin{align}\psi_N(s,T,\Theta) &= \mathbb{E}\Bigl[ e^{{\sqrt{-1}} (s X+\sum_i \tau_i X_i+\sum_{i\le j} \tau_{ij}X_{ij})} \Bigr] = \mathbb{E}\Bigl[ e^{{\sqrt{-1}}(s X+\langle T,\nabla X\rangle+\mathrm{tr}(\Theta \nabla^2 X))} \Bigr].\end{align}


When $(X,\nabla X,\nabla^2 X)$ has finite moments of order $k_1$ , the cumulant generating function $\log\psi_N(s,T,\Theta)$ has the following Taylor series:

\[ \log\psi_N(s,T,\Theta) = \sum_{k=2}^{k_1} \sum_{u+v+w=k}\frac{N^{-\frac{1}{2}(k-2)} {\sqrt{-1}}^k}{u!\,v!\,w!} K_{u,v,w}(s,T,\Theta) + o\bigl(N^{-\frac{1}{2}(k_1-2)}\bigr),\]

where

(5.3) \begin{equation}\begin{aligned}& N^{-\frac{1}{2}(k-2)} K_{u,v,w} (s,T,\Theta) \\& = \mathrm{cum}\bigl(\underbrace{s X,\ldots,s X}_u,\underbrace{\langle T,\nabla X\rangle,\ldots,\langle T,\nabla X\rangle}_v,\underbrace{\mathrm{tr}(\Theta\nabla^2 X),\ldots,\mathrm{tr}(\Theta\nabla^2 X)}_w\bigr) \\& = N^{-\frac{1}{2}(k-2)} s^u \Biggl(\prod_{b=u+1}^{u+v} \langle T,\nabla_{t_b}\rangle \prod_{c=u+v+1}^{k} \mathrm{tr}\bigl(\Theta\nabla^2_{t_c}\bigr)\Biggr) \kappa^{(k)}\left(\left(\tfrac{1}{2}\Vert t_a-t_b\Vert^2\right)_{a<b}\right)\Big|_{t_1=\cdots=t_{k}}\end{aligned}\end{equation}

with $\nabla_{t_b}=\big(\partial/\partial t_{b}^{i}\big)_{1\le i\le n}$ , $\nabla^2_{t_c}=\left(\partial^2/\partial t_{c}^{i}\partial t_{c}^{j}\right)_{1\le i,j\le n}$ .

For instance,

\begin{align*} N^{-\frac{1}{2}} K_{0,2,1}(s,T,\Theta)&= \mathrm{cum}\left(\langle T,\nabla X\rangle,\langle T,\nabla X\rangle,\mathrm{tr}\big(\Theta \nabla^2 X\big)\right) \\&= \sum_{i,j,k,l=1}^n \tau_i \tau_j \theta_{kl} \mathrm{cum}(X_i,X_j,X_{kl}),\end{align*}

and, as shown in (3.4),

\begin{align*}\mathrm{cum}(X_i,X_j,X_{kl})&= \frac{\partial^4N^{-\frac{1}{2}} \kappa^{(3)}\bigl(\tfrac12\Vert t_1-t_2\Vert^2,\tfrac12\Vert t_1-t_3\Vert^2,\tfrac12\Vert t_2-t_3\Vert^2\bigr)}{\partial t_1^i\partial t_2^j\partial t_3^k\partial t_3^l} \Big|_{t_1=t_2=t_3} \\&= N^{-\frac{1}{2}} \bigl({-}2\kappa_{11} \delta_{ij}\delta_{kl} + \kappa_{11} \big(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}\big)\bigr),\end{align*}

where we let $\kappa_{11} = \partial^2\kappa^{(3)}(x_{12},x_{13},x_{23})/\partial x_{12}\partial x_{13}|_{x_{12}=x_{13}=x_{23}=0}$ . Thus, we obtain

\[ K_{0,2,1}(s,T,\Theta)= -2\kappa_{11}\Vert T\Vert^2\mathrm{tr}(\Theta) + 2 \kappa_{11} T^\top\Theta T.\]

In this example, the factors $\Vert T\Vert^2\mathrm{tr}(\Theta)$ and $T^\top\Theta T$ are invariant under the transformation ${(T,\Theta)\mapsto \left(P T,P\Theta P^\top\right)}$ , $P\in O(n)$ . This results from the isotropic property ${(X,\nabla X,\nabla^2 X)\mathop{=}^d \left(X,P\nabla X, P\nabla^2 X P^\top\right)}$ . The general form of $K_{u,v,w}(s,T,\Theta)$ is specified by the isotropic assumption.

Lemma 1. (i) $K_{u,2v,w}(s,T,\Theta)$ is a linear combination of terms of the form

(5.4) \begin{equation} s^u \prod_{j\ge 0}\big(T^\top \Theta^j T\big)^{v_j} \prod_{k\ge 1}\mathrm{tr}(\Theta^k)^{w_k},\end{equation}

where ${(v_j)_{j\ge 0}}$ and ${(w_k)_{k>0}}$ are non-negative integers satisfying

\begin{equation*} \sum_{j\ge 0}v_j = v, \quad \sum_{j\ge 1} j v_j + \sum_{k\ge 1} k w_k = w.\end{equation*}

(ii) $K_{u,2v+1,w}(s,T,\Theta)=0$ .

Proof of Lemma 1. Because of the isotropic property and the multilinearity of the cumulant, we have

\[ K_{u,v,w}(s,T,\Theta)=s^u K_{u,v,w}(1,T,\Theta)=s^u K_{u,v,w}\big(1,P^\top T,P^\top \Theta P\big), \ \ P\in O(n).\]

By letting $P=-I_n$ , we can observe that $K_{u,v,w}(1,T,\Theta)=0$ when v is odd. When v is even, $K_{u,v,w}(1,T,\Theta)$ is an even polynomial in $\tau_i$ , and is a function of $(\tau_i \tau_j)_{1\le i,j\le n}= T T^\top\in\mathrm{Sym}(n)$ as well.

As an extension of the zonal polynomial in multivariate analysis, [Reference Davis7] introduced an invariant polynomial of two symmetric matrices $A,B\in\mathrm{Sym}(n)$ which is invariant under the transformation ${(A,B)\mapsto \big(P^\top A P, P^\top B P\big)}$ , $P\in O(n)$ .

$K_{u,v,w}(1,T,\Theta)$ is an invariant polynomial in $\big(T T^\top,\Theta\big)$ , which is a linear combination of terms of the form (5.4), as shown in [Reference Davis7, (4.8)].

Next, we obtain explicit expressions for the $K_{u,v,w}$ in (5.3). The abbreviations $\big(\gamma,\kappa_{0}, \kappa_{1},\kappa_{11},\widetilde\kappa_{0},\widetilde\kappa_{1},\widetilde\kappa_{11}^{a},\widetilde\kappa_{11}^{aa},\widetilde\kappa_{111}^{d},\widetilde\kappa_{111}^{a}\big)$ are defined in Table 3.1. Because of Theorem 2 (to be proved in Section 5.3), the derivatives that have a cycle of length greater than or equal to 2 in their diagram do not contribute to the final results, and are denoted by the symbol ‘ $({*})$ ’ (the explicit form is not required).

Below, we give $K_{u,2v,w}$ for $k=u+2v+w=2,3,4$ :

  • The second-order cumulants:

    \begin{align*}& K_{2,0,0} = s^2, \qquad K_{0,2,0} = \gamma \Vert T\Vert^2, \quad K_{1,0,1} = -\gamma \mathrm{tr}(\Theta), \\& K_{0,0,2} = \rho^{\prime\prime}(0)\big[2\mathrm{tr}(\Theta^2) + \mathrm{tr}(\Theta)^2\big] = ({*}).\end{align*}
  • The third-order cumulants:

    \begin{align*} K_{3,0,0} &= \kappa_0 s^3, \quad K_{1,2,0} = -\kappa_1 s \Vert T\Vert^2, \quad K_{2,0,1}= 2\kappa_1 s^2 \mathrm{tr}(\Theta), \\K_{1,0,2} &= 3\kappa_{11} s \mathrm{tr}(\Theta)^2 + ({*}), \quad K_{0,2,1} = -2\kappa_{11} \Vert T\Vert^2 \mathrm{tr}(\Theta) + 2\kappa_{11} T^\top\Theta T, \\K_{0,0,3} &= ({*}).\end{align*}
  • The fourth-order cumulants:

    \begin{align*}K_{4,0,0} &= \widetilde\kappa_0 s^4, \quad K_{2,2,0} = -\widetilde\kappa_1 s^2 \Vert T\Vert^2, \quad K_{0,4,0} = 3 \widetilde\kappa_{11}^{aa} \Vert T\Vert^4, \\K_{3,0,1} &= 3\widetilde\kappa_1 s^3 \mathrm{tr}(\Theta), \quad K_{2,0,2} = \big(6\widetilde\kappa_{11}^a+2\widetilde\kappa_{11}^{aa}\big) s^2 \mathrm{tr}(\Theta)^2 + ({*}), \\K_{1,2,1} &= -\big(2\widetilde\kappa_{11}^a +\widetilde\kappa_{11}^{aa}\big) s \Vert T\Vert^2 \mathrm{tr}(\Theta) + 2\widetilde\kappa_{11}^a s T^\top\Theta T, \\K_{1,0,3} &= \big(12\widetilde\kappa_{111}^a+4\widetilde\kappa_{111}^d\big) s \mathrm{tr}(\Theta)^3 + ({*}), \\K_{0,2,2} &= -\big(6\widetilde\kappa_{111}^a +2\widetilde\kappa_{111}^d\big) \Vert T\Vert^2 \mathrm{tr}(\Theta)^2 + \big(8\widetilde\kappa_{111}^a +4\widetilde\kappa_{111}^d\big) T^\top\Theta T \mathrm{tr}(\Theta) \\ &\quad -8\widetilde\kappa_{111}^a T^\top\Theta^2 T + ({*}), \quad K_{0,0,4} = ({*}).\end{align*}

The term ‘ ${(*)}$ ’ can be set to zero in the calculations below.

5.2. Proof of Theorem 1

In the previous section, the Taylor series of the cumulant generating function $\log\psi_N(s,T,\Theta)$ in (5.2) was obtained up to $O\big(N^{-1}\big)$ . This is rewritten as the cumulant generating function of (X, V, R), $V=\nabla X$ , and $R=\nabla^2 X+\gamma X I_n$ .

The distribution of (X, V, R) when $N\to\infty$ is presented in Section 2.2. The characteristic functions of X, V, and R when $N\to\infty$ are

\begin{equation*} \psi^0_{X}(s) = e^{-\frac{1}{2}s^2}, \qquad \psi^0_{V}(T) = e^{-\frac{1}{2}\gamma \Vert T\Vert^2}, \qquad \psi^0_{R}(\Theta) = e^{-\frac{1}{2}\alpha\mathrm{tr}(\Theta^2) -\frac{1}{2}\beta\mathrm{tr}(\Theta)^2},\end{equation*}

where $\gamma=-\rho^{\prime}(0)$ , $\alpha = 2\rho^{\prime\prime}(0)$ , and $\beta = \rho^{\prime\prime}(0)-\rho^{\prime}(0)^2$ .

From this definition, the characteristic function of (X, V, R) is

(5.5) \begin{equation} \mathbb{E}\Bigl[ e^{{\sqrt{-1}}(s X+\langle T,V\rangle+\mathrm{tr}(\Theta R))}\Bigr] = \psi_N(\widetilde s,T,\Theta), \quad \widetilde s=\widetilde s(s,\Theta)=s+\gamma\mathrm{tr}(\Theta).\end{equation}

Because we assume that (X, V, R) has fourth moments (Assumption 1(ii)), the cumulant generating function is

\begin{align*}\log\psi_N(\widetilde s,T,\Theta)&= \log\psi^0_{X}(s) + \log\psi^0_{V}(T) +\log\psi^0_{R}(\Theta) \\ &\quad + \frac{1}{\sqrt{N}} Q_{3}(\widetilde s,T,\Theta) + \frac{1}{N} Q_{4}(\widetilde s,T,\Theta) + o\big(N^{-1}\big),\end{align*}

where

\begin{align*} Q_{k}(\widetilde s,T,\Theta) = \sum_{u+v+w=k}\frac{1}{u!\,v!\,w!} K_{u,v,w}(\widetilde s,T,\Theta),\ \ k=3,4.\end{align*}

Therefore, $\psi_N(\widetilde s,T,\Theta) = \widehat\psi_N(\widetilde s,T,\Theta) + o\big(N^{-1}\big)$ , where

(5.6) \begin{equation}\begin{aligned} \widehat\psi_N(\widetilde s,T,\Theta)&= \psi^0_X(s)\psi^0_V(T)\psi^0_R(\Theta) \\ &\quad \times \biggl(1 +\frac{1}{\sqrt{N}} Q_{3}(\widetilde s,T,\Theta) +\frac{1}{N} Q_{4}(\widetilde s,T,\Theta) +\frac{1}{2N} Q_{3}(\widetilde s,T,\Theta)^2 \biggr).\end{aligned}\end{equation}

Define

(5.7) \begin{equation}\begin{aligned} \psi_{N|V=0}(\widetilde s,\Theta)&= \int_{\mathbb{R}} \int_{\mathrm{Sym}(n)} e^{{\sqrt{-1}}(s x+\mathrm{tr}(\Theta R))} p_N(x,0,R) \mathrm{d} R\,\mathrm{d} x \\&= \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \psi_N(\widetilde s,T,\Theta) \mathrm{d} T.\end{aligned}\end{equation}

Its truncated version is

(5.8) \begin{equation}\begin{aligned} \widehat\psi_{N|V=0}(\widetilde s,\Theta)&= \int_{\mathbb{R}} \int_{\mathrm{Sym}(n)} e^{{\sqrt{-1}}(s x+\mathrm{tr}(\Theta R))} \widehat p_N(x,0,R) \mathrm{d} R\,\mathrm{d} x \\&= \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \widehat\psi_N(\widetilde s,T,\Theta) \mathrm{d} T,\end{aligned}\end{equation}

where $\widehat p_N(x,V,R)$ denotes the Fourier inversion of $\widehat\psi_N(\widetilde s,T,\Theta)$ . The explicit form of $\widehat p_N(x,V,R)$ is not required here. Recall that the terms in parentheses in (5.6) are a linear combination of terms of the form (5.4). In the following, we obtain the concrete form of (5.8).

In (5.8), the integration with respect to $\mathrm{d} T$ is conducted as an expectation with respect to the Gaussian random variable $T\sim \mathcal{N}_n\big(0,\gamma^{-1}I_n\big)$ , multiplied by the normalizing factor

(5.9) \begin{equation} \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n}e^{-\frac{1}{2}\gamma\Vert T\Vert^2} \mathrm{d} T = (2\pi\gamma)^{-n/2}.\end{equation}

Let

\[ (n)_m = n(n-1)\cdots(n-m+1) = \Gamma(n+1)/\Gamma(n-m+1)\]

be the falling factorial. Because $\Vert T\Vert$ is independent of $\prod_{j\ge 1} \big(T^\top\Theta^j T/\Vert T\Vert^2\big)^{v_j}$ under $T\sim \mathcal{N}_n\big(0,\gamma^{-1}I_n\big)$ , the expectation of the factor of (5.4) including T becomes

\begin{align*} \mathbb{E}^{T}\Bigl[{\prod}_{j\ge 0}\big(T^\top \Theta^j T\big)^{v_j}\Bigr]&= \mathbb{E}^{T}\bigl[\Vert T\Vert^{2 v}\bigr] \mathbb{E}^{T}\Biggl[\frac{\prod_{j\ge 1}\big(T^\top \Theta^j T\big)^{v_j}}{\Vert T\Vert^{2(v-v_0)}}\Biggr] \\&= \mathbb{E}^{T}\bigl[\Vert T\Vert^{2 v}\bigr] \frac{\mathbb{E}^{T}\bigl[\prod_{j\ge 1}\big(T^\top \Theta^j T\big)^{v_j}\bigr]}{\mathbb{E}^{T}\bigl[\Vert T\Vert^{2(v-v_0)}\bigr]} \nonumber \\&= ({-}2/\gamma)^{v_0} ({-}(n/2+v-v_0))_{v_0} \times \zeta_{v_1,v_2,\ldots}(\Theta), \nonumber\end{align*}

where $v=\sum_{j\ge 0} v_j$ and

\begin{equation*} \zeta_{v_1,v_2,\ldots}(\Theta) = \mathbb{E}\biggl[{\prod}_{j\ge 1}\big(\xi^\top \Theta^j \xi\big)^{v_j}\biggr], \quad \xi\sim \mathcal{N}_n(0,I_n)\end{equation*}

is a polynomial in $\mathrm{tr}(\Theta^j)$ , $j\ge 1$ . For example, $\zeta_{1}(\Theta)=\mathrm{tr}(\Theta)$ , $\zeta_{2}(\Theta)=\mathrm{tr}(\Theta^2)$ , $\zeta_{1,1}(\Theta)=\mathrm{tr}(\Theta)^2+2\mathrm{tr}(\Theta^2)$ . Note that $\zeta_{v_1,v_2,\ldots}(\Theta)$ does not include the dimension n explicitly.
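A quick Monte Carlo check (ours) of the identity quoted for $\zeta_{1,1}$:

```python
# E[(xi' Theta xi)^2] = tr(Theta)^2 + 2 tr(Theta^2) for xi ~ N_n(0, I_n).
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)); Theta = (A + A.T) / 2   # random symmetric Theta

xi = rng.standard_normal((500_000, n))
q = np.einsum('si,ij,sj->s', xi, Theta, xi)              # xi' Theta xi per sample
print(np.mean(q**2))                                     # Monte Carlo estimate
print(np.trace(Theta)**2 + 2*np.trace(Theta @ Theta))    # closed form
```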

The terms in $Q_3$ , $Q_4$ , and $\frac{1}{2}Q_3^2$ in (5.6) including T can be rewritten according to the rules

(5.10) \begin{align}\begin{aligned} \Vert T\Vert^{2} &\mapsto \gamma^{-1} n, \qquad \Vert T\Vert^{4} \mapsto \gamma^{-2} n(n+2), \\ T^\top\Theta^k T &\mapsto \gamma^{-1} \mathrm{tr}(\Theta^k), \quad k=1,2, \\ \Vert T\Vert^2 \cdot T^\top\Theta T &\mapsto \gamma^{-2}(n+2)\mathrm{tr}(\Theta), \\ (T^\top\Theta T)^2 &\mapsto \gamma^{-2}\left[\mathrm{tr}(\Theta)^2+2\mathrm{tr}(\Theta^2)\right],\end{aligned}\end{align}

and multiplied by the normalizing factor (5.9). Then the truncated version of $\psi_{N|V=0}(\widetilde s,\Theta)$ in (5.7) is obtained as

(5.11) \begin{equation}\begin{aligned} \widehat\psi_{N|V=0}(\widetilde s,\Theta) &= \frac{1}{(2\pi)^n}\int_{\mathbb{R}^n} \widehat\psi_N(\widetilde s,T,\Theta) \mathrm{d} T \\ &= \psi^0_{X}(s) \psi^0_{R}(\Theta) (2\pi\gamma)^{-n/2} \\ & \quad \times \biggl(1 + \frac{1}{\sqrt{N}}\widetilde Q_{3}(\widetilde s,\Theta) + \frac{1}{N}\widetilde Q_{4}(\widetilde s,\Theta) + \frac{1}{2N}\widetilde Q_{3}^{(2)}(\widetilde s,\Theta) \biggr),\end{aligned}\end{equation}

where $\widetilde Q_{3}(\widetilde s,\Theta)$ , $\widetilde Q_{4}(\widetilde s,\Theta)$ , and $\widetilde Q_{3}^{(2)}(\widetilde s,\Theta)$ are $Q_{3}(\widetilde s,T,\Theta)$ , $Q_{4}(\widetilde s,T,\Theta)$ , and $Q_{3}(\widetilde s,T,\Theta)^2$ with terms including T replaced according to the rules (5.10).

Recall that the integral we are going to obtain is $\Xi_{n,N}(x)$ in (2.9). Define

\begin{align*}\psi_{N|x,V=0}(\Theta)&= \int_{\mathrm{Sym}(n)}e^{{\sqrt{-1}}\mathrm{tr}(\Theta R)} p_N(x,0,R) \mathrm{d} R \\&= \frac{1}{2\pi}\int_{\mathbb{R}} e^{-{\sqrt{-1}} s x} \psi_{N|V=0}(\widetilde s,\Theta) \mathrm{d} s,\end{align*}

where $\psi_{N|V=0}(\widetilde s,\Theta)$ is defined in (5.7). Using this,

\[ \int_{\mathrm{Sym}(n)} \det({-}R+\gamma x I_n) p_N(x,0,R) \mathrm{d} R = \det\Bigl({-}{\textstyle\frac{1}{\sqrt{-1}}} D_\Theta + \gamma x I_n\Bigr) \Big|_{\Theta=0}\psi_{N|x,V=0}(\Theta),\]

where $D_\Theta$ is an $n\times n$ symmetric matrix differential operator defined by

(5.12) \begin{equation}(D_\Theta)_{ij} = \frac{1+\delta_{ij}}{2}\frac{\partial}{\partial (\Theta)_{ij}} = \frac{\partial}{\partial \tau_{ij}} \quad (i\le j).\end{equation}

Therefore, (2.9) is evaluated as

\begin{equation*} \Xi_{n,N}(x) = \int_{x}^\infty \biggl[ \det\Bigl({-}{\textstyle\frac{1}{\sqrt{-1}}} D_\Theta + \gamma x I_n\Bigr) \Big|_{\Theta=0} \frac{1}{2\pi}\int_{\mathbb{R}} e^{-{\sqrt{-1}} s x} \psi_{N|V=0}(\widetilde s,\Theta) \mathrm{d} s \biggr] \mathrm{d} x.\end{equation*}

The truncated version of $\Xi_{n,N}(x)$ is

(5.13) \begin{equation} \widehat\Xi_{n,N}(x) = \int_{x}^\infty \biggl[ \det\Bigl({-}{\textstyle\frac{1}{\sqrt{-1}}} D_\Theta + \gamma x I_n\Bigr) \Big|_{\Theta=0} \frac{1}{2\pi}\int_{\mathbb{R}} e^{-{\sqrt{-1}} s x} \widehat\psi_{N|V=0}(\widetilde s,\Theta) \mathrm{d} s \biggr] \mathrm{d} x,\end{equation}

which is the valid asymptotic expansion formula for $\Xi_{n,N}(x)$ , as follows.

Lemma 2. Under Assumption 1,

\[ \Xi_{n,N}(x)=\widehat\Xi_{n,N}(x)+o\big(N^{-1}\big) \quad \textit{ as } N\to\infty, \textit{ uniformly in x}.\]

The proof is provided in Section A.2.

Lemma 2 states that our target is $\widehat\Xi_{n,N}(x)$ . The integral in (5.13) with respect to $\mathrm{d} s$ can easily be evaluated using

(5.14) \begin{equation} \frac{1}{2\pi}\int_{\mathbb{R}} e^{-{\sqrt{-1}} s x} \psi^0_{X}(s) \big({\sqrt{-1}} s\big)^k \mathrm{d} s = H_k(x) \phi(x),\end{equation}

where $\phi(x)$ is the probability density function of the standard Gaussian distribution $\mathcal{N}(0,1)$ in (2.10), and $H_k(x)$ is the Hermite polynomial of degree k defined in (2.12). For the derivatives with respect to $\Theta$ , we use the lemma below. The proof is presented in Section A.1.

Lemma 3. For $\psi^0_{R}(\Theta) = e^{-\frac{1}{2}\alpha\mathrm{tr}(\Theta^2) -\frac{1}{2}\beta\mathrm{tr}(\Theta)^2}$ , $\gamma=\sqrt{\alpha/2-\beta}$ , and positive integers $c_i$ such that $m=\sum_{i=1}^k c_i\le n$ ,

(5.15) \begin{equation}\begin{aligned} \det\Bigl({-}{\textstyle\frac{1}{\sqrt{-1}}} D_\Theta + \gamma x I_n\Bigr) \bigl(\psi^0_R(\Theta)\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_k})\bigr)\Big|_{\Theta=0} \\ = {\sqrt{-1}}^m \gamma^{n-m} ({-}1/2)^{m-k} (n)_m H_{n-m}(x).\end{aligned}\end{equation}
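As an aside, identity (5.14) can be checked by direct quadrature; a hedged sketch (the grid choices are arbitrary):

```python
# (1/2pi) int e^{-i s x} e^{-s^2/2} (i s)^k ds = H_k(x) phi(x).
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import hermitenorm
from scipy.stats import norm

k, x = 3, 0.7
s = np.linspace(-40.0, 40.0, 400_001)
integrand = np.exp(-1j * s * x) * np.exp(-s**2 / 2) * (1j * s)**k
print(trapezoid(integrand, s).real / (2 * np.pi))   # numerical left-hand side
print(hermitenorm(k)(x) * norm.pdf(x))              # H_k(x) phi(x)
```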

We now summarize the entire procedure for obtaining $\widehat\Xi_{n,N}(x)$ in (5.13).

  Step 0. Express $K_{u,v,w}(s,T,\Theta)$ with the base functions in (5.4) and the derivatives of $\kappa^{(k)}$ , $k=2,3,4$ (executed in Section 3.1).

  Step 1. Expand the expression inside the parentheses in $\widehat\psi_N(\widetilde s,T,\Theta)$ in (5.6), and apply the term-rewriting rules in (5.10) to obtain $\widehat\psi_{N|V=0}(\widetilde s,\Theta)$ in (5.11). The resulting function is the product of $\psi^0_X(s)\psi^0_R(\Theta)$ and a polynomial in s and $\mathrm{tr}(\Theta^k)$ , $k\le 4$ .

  3. Step 2. Applying (5.14) and (5.15) to the result of Step 1 yields the quantity in the inner brackets in (5.13), which is $\phi(x)$ multiplied by a polynomial in $x^k$ and $H_{k}(x)$ .

  4. Step 3. Using the three-term relation $x H_{k}(x) = H_{k+1}(x) + k H_{k-1}(x)$ in (4.6), we rewrite the result of Step 2 to be $\phi(x)$ multiplied by a linear combination of $H_{k}(x)$ .

  5. Step 4. Applying the integration $\int_{x}^\infty \mathrm{d} x$ to the result of Step 3, using

    \[ \int_{x}^\infty H_k (x^{\prime}) \phi(x^{\prime}) \mathrm{d} x^{\prime} = H_{k-1}(x) \phi(x),\]
    yields the integral $\widehat\Xi_{n,N}(x)$ in (5.13). Let $H_{-1}(x) = \phi(x)^{-1}\int_x^\infty \phi(x^{\prime}) \mathrm{d} x^{\prime}$ .

Each of these steps can be carried out using computational algebra; a small symbolic check of the Hermite identities used in Steps 2–4 is given below. Performing all the steps completes the proof of Theorem 1.
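As a sanity check (ours; the actual computer algebra scripts are not reproduced in the paper), the following sympy sketch verifies (5.14) in its equivalent Rodrigues form $({-}\mathrm{d}/\mathrm{d}x)^k\phi(x)=H_k(x)\phi(x)$ , assuming $\psi^0_X(s)=e^{-s^2/2}$ , together with the integration rule of Step 4, for $k\le 6$ .

```python
import sympy as sp

x = sp.symbols('x', real=True)
phi = sp.exp(-x**2/2) / sp.sqrt(2*sp.pi)   # standard Gaussian density in (2.10)

def He(k):
    """Probabilists' Hermite polynomial via the three-term relation (4.6)."""
    if k == 0:
        return sp.Integer(1)
    if k == 1:
        return x
    return sp.expand(x*He(k - 1) - (k - 1)*He(k - 2))

for k in range(1, 7):
    # (5.14) is equivalent to the Rodrigues formula (-d/dx)^k phi = H_k phi
    assert sp.simplify((-1)**k * sp.diff(phi, x, k) - He(k)*phi) == 0
    # Step 4: d/dx (H_{k-1} phi) = -H_k phi, i.e. int_x^inf H_k phi = H_{k-1} phi
    assert sp.simplify(sp.diff(He(k - 1)*phi, x) + He(k)*phi) == 0
print("Hermite identities (5.14) and Step 4 verified for k <= 6")
```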

5.3. Undetectable non-Gaussianity and proof of Theorem 2

In this section, we prove Theorem 2.

For $\ell\ge 2$ , let

\[ \Pi_\ell(\Theta) = \mathrm{tr}(\Theta)^\ell - ({-}2)^{\ell-1}\mathrm{tr}(\Theta^\ell).\]

By Lemma 3, for positive integers $c_i$ such that $m=\sum_{i=1}^k c_i\le n-\ell$ , we have

\begin{equation*}\begin{aligned}\det & \Bigl({-}{\textstyle\frac{1}{\sqrt{-1}}} D_\Theta + \gamma x I_n\Bigr) \bigl(\psi^0_R(\Theta)\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_k}) \Pi_\ell(\Theta) \bigr)\Big|_{\Theta=0} \\&= {\sqrt{-1}}^{m+\ell} \gamma^{n-(m+\ell)} ({-}1/2)^{(m+\ell)-(k+\ell)} (n)_{m+\ell} H_{n-(m+\ell)}(x) \\&\quad -({-}2)^{\ell-1} {\sqrt{-1}}^{m+\ell} \gamma^{n-(m+\ell)} ({-}1/2)^{(m+\ell)-(k+1)} (n)_{m+\ell} H_{n-(m+\ell)}(x) \\&= 0.\end{aligned}\end{equation*}

Therefore, any term containing the factor $\Pi_\ell(\Theta)$ automatically vanishes in Step 2. Such terms do appear; for instance,

(5.16) \begin{equation}\begin{aligned}K_{0,0,3}(s,T,\Theta)&= \mathrm{tr}\big(\Theta\nabla^2_{t_1}\big)\mathrm{tr}\big(\Theta\nabla^2_{t_2}\big)\mathrm{tr}\big(\Theta\nabla^2_{t_3}\big) \kappa^{(3)}\left(\left(\tfrac{1}{2}\Vert t_a-t_b\Vert^2\right)_{1\le a<b\le 3}\right)\bigg|_{t_1=t_2=t_3} \\&= 6\kappa^{(3)}_{(12),(12),(23)}(0) \Pi_2(\Theta)\mathrm{tr}(\Theta) + 2\kappa^{(3)}_{(12),(13),(23)}(0) \Pi_3(\Theta).\end{aligned}\end{equation}

Each term on the right-hand side of (5.16) includes a factor $\Pi_\ell(\Theta)$ and a derivative of $\kappa^{(3)}$ whose diagram contains a cycle (Figure 3.1(a)–(b)).
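The annihilation can itself be checked by computational algebra. The sketch below (ours) takes $n=2$ , $\ell=2$ and the simplified Gaussian factor $e^{\mathrm{tr}(\Theta^2)}$ of Lemma 4 in Appendix A.1, to which the general $\psi^0_R$ case reduces; the determinant operator kills $e^{\mathrm{tr}(\Theta^2)}\Pi_2(\Theta)$ at $\Theta=0$ .

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
Theta = sp.Matrix([[a, b], [b, c]])                     # symmetric, n = 2
Pi2 = Theta.trace()**2 + 2*(Theta**2).trace()           # Pi_2 = tr(Theta)^2 - (-2) tr(Theta^2)

def det_op(g):
    """det(x I_2 + D_Theta) g, with d11 = d/da, d22 = d/dc, d12 = d21 = (1/2) d/db."""
    return (x**2*g + x*(sp.diff(g, a) + sp.diff(g, c))
            + sp.diff(g, a, c) - sp.Rational(1, 4)*sp.diff(g, b, 2))

f = sp.exp((Theta**2).trace())
at0 = {a: 0, b: 0, c: 0}
print(sp.simplify(det_op(f*Pi2).subs(at0)))   # -> 0: the Pi_2 term vanishes
print(sp.simplify(det_op(f).subs(at0)))       # -> x**2 - 1 = H_2(x), for contrast
```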

Proof of Theorem 2. Let $x_{ab}=\frac{1}{2}\Vert t_a-t_b\Vert^2$ . Suppose that the diagram of the derivative

\[ \kappa^{(k)}_E(0) = \Biggl(\prod_{(a,b)\in E}\Bigl(\frac{\partial}{\partial x_{ab}}\Bigr)\Biggr) \kappa^{(k)}\bigl((x_{ab})_{1\le a<b\le k}\bigr)\Big|_{t_1=\cdots=t_k}\]

contains a cycle $C = \{(1,2),\ldots,(\ell-1,\ell),(1,\ell)\} \subset E$ .

The derivative $\kappa^{(k)}_E(0)$ appears in the Taylor series

(5.17) \begin{align} \kappa^{(k)}\bigl((x_{ab})_{1\le a<b}\bigr) = \cdots + \kappa^{(k)}_E(0)\times \Delta\widetilde\Delta + \cdots,\end{align}

where $\Delta=x_{12}\cdots x_{\ell-1,\ell}x_{1,\ell}$ and $\widetilde\Delta=\prod_{(a,b)\in E\setminus C}x_{ab}$ .

We now apply operators of the form $\langle T,\nabla_{t_b}\rangle$ and/or $\mathrm{tr}(\Theta\nabla^2_{t_c})$ to (5.17) and evaluate the result at $t_1=\cdots=t_k$ . For the term involving $\kappa^{(k)}_E(0)$ to survive, these differentiations must reduce its coefficient $\Delta\widetilde\Delta$ to a nonzero constant (otherwise the term vanishes when evaluated at $t_1=\cdots=t_k$ ); in particular, $\Delta$ itself must be reduced to a nonzero constant.

The only possible operation that makes $\Delta$ a nonzero constant is $\prod_{a=1}^\ell \mathrm{tr}\big(\Theta\nabla^2_{t_a}\big)$ :

(5.18) \begin{align}& \mathrm{tr}\big(\Theta\nabla^2_{t_1}\big) \cdots \mathrm{tr}\big(\Theta\nabla^2_{t_\ell}\big) (x_{12}\cdots x_{\ell-1,\ell}x_{1,\ell}) \nonumber\\&\quad = \sum_{i_1,\ldots,i_\ell,\,j_1,\ldots,j_\ell=1}^n \theta_{i_1 j_1}\cdots\theta_{i_\ell j_\ell} \prod_{a=1}^\ell \frac{\partial^{2}}{\partial t_{a}^{i_a}\partial t_{a}^{j_a}} (x_{12}\cdots x_{\ell-1,\ell}x_{1,\ell}).\end{align}

By selecting the non-vanishing terms, we obtain

\begin{align*}\prod_{a=1}^\ell & \frac{\partial^{2}}{\partial t_{a}^{i_a}\partial t_{a}^{j_a}} (x_{12}\cdots x_{\ell-1,\ell}x_{1,\ell})= \prod_{a=1}^\ell \frac{\partial^{2}x_{a,a+1}}{\partial t_{a}^{i_a}\partial t_{a}^{j_a}} +\prod_{a=1}^\ell \frac{\partial^{2}x_{a,a+1}}{\partial t_{a+1}^{i_{a+1}}\partial t_{a+1}^{j_{a+1}}} \\&\qquad + \sum_{(\epsilon_1,\ldots,\epsilon_\ell)\in\{0,1\}^\ell} \prod_{a=1}^\ell\Biggl((1-\epsilon_a)\frac{\partial^{2}x_{a,a+1}}{\partial t_{a}^{i_a}\partial t_{a+1}^{j_{a+1}}} +\epsilon_a \frac{\partial^{2}x_{a,a+1}}{\partial t_{a+1}^{i_{a+1}}\partial t_{a}^{j_a}}\Biggr) \\&= 2\prod_{a=1}^\ell \delta_{i_a j_a} + ({-}1)^\ell \prod_{a=1}^\ell \bigl(\delta_{i_{a} j_{a+1}} + \delta_{i_{a+1} j_{a}} \bigr)\end{align*}

(letting $x_{\ell,\ell+1}=x_{1,\ell}$ and $i_{\ell+1}=i_1$ , $j_{\ell+1}=j_1$ ), and hence

\[ (5.18) = 2\mathrm{tr}(\Theta)^\ell + ({-}1)^{\ell}2^\ell \mathrm{tr}(\Theta^\ell) = 2\Pi_\ell(\Theta).\]

This implies that the coefficient of $\kappa^{(k)}_E(0)$ contains the factor $\Pi_\ell(\Theta)$ , and hence vanishes in Step 2. This completes the proof of Theorem 2.
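The contraction (5.18) can also be verified symbolically. The following sympy sketch (ours) checks $(5.18)=2\Pi_\ell(\Theta)$ for $\ell=n=2$ , in which case the cycle product degenerates to $x_{12}^2$ .

```python
import sympy as sp

n = 2
t1 = sp.symbols('t1_1 t1_2')
t2 = sp.symbols('t2_1 t2_2')
th11, th12, th22 = sp.symbols('th11 th12 th22')
Theta = sp.Matrix([[th11, th12], [th12, th22]])          # symmetric
x12 = sum((u - v)**2 for u, v in zip(t1, t2)) / 2        # x_{12} = ||t_1 - t_2||^2 / 2

def tr_theta_hess(f, t):
    """Apply tr(Theta nabla_t^2) to f."""
    return sum(Theta[i, j]*sp.diff(f, t[i], t[j])
               for i in range(n) for j in range(n))

lhs = tr_theta_hess(tr_theta_hess(x12**2, t1), t2)       # (5.18) with the cycle x_{12}^2
Pi2 = Theta.trace()**2 + 2*(Theta**2).trace()            # Pi_2(Theta)
print(sp.expand(lhs - 2*Pi2))                            # -> 0
```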

Appendix A.

A.1. Identities on the Hermite polynomial and proof of Lemma 3

Here we present identities involving the Hermite polynomials that are crucial in the derivation of the expansion.

For an $n\times n$ symmetric matrix $A=(a_{ij})$ , the principal submatrix corresponding to an index set $K\subset\{1,\ldots,n\}$ is denoted by $A[K] = (a_{ij})_{i,j\in K}$ . We note that $A=A[\{1,\ldots,n\}]$ .

We first prove the following lemma. Recall that $\Theta=(\theta_{ij})$ and $D_\Theta=(d_{ij})$ are defined in (5.1) and (5.12), respectively.

Lemma 4. For positive integers $c_i$ such that $m=\sum_{i=1}^\ell c_i\le n$ ,

(A.1) \begin{equation} \det(x I + D_\Theta) \Bigl(e^{\mathrm{tr}(\Theta^2)}\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_\ell})\Bigr)\Big|_{\Theta=0} = ({-}1/2)^{m-\ell} (n)_m H_{n-m}(x).\end{equation}

Proof. Based on the expansion formula

\[ \det\!(x I_n + D_\Theta) = \sum_{k=0}^n x^{n-k} \sum_{K:K\subset\{1,\ldots,n\},\,|K|=k} \det\bigl(D_{\Theta[K]}\bigr),\]

the left-hand side of (A.1) is a polynomial in x with coefficients of the form

(A.2) \begin{equation} \det\big(D_{\Theta[K]}\big) \Bigl(e^{\mathrm{tr}(\Theta^2)}\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_\ell})\Bigr)\Big|_{\Theta=0}.\end{equation}

By symmetry, it suffices to consider the case $K=\{1,\ldots,k\}$ . Let $\Theta_k=\Theta[\{1,\ldots,k\}]$ .

By the definition of the determinant,

\begin{equation*} \det(D_{\Theta_k}) = \sum_{\sigma\in S_k} \mathrm{sgn}(\sigma) d_{1\sigma(1)}\cdots d_{k\sigma(k)},\end{equation*}

where $S_k$ denotes the permutation group on $\{1,\ldots,k\}$ .

We have that $\mathrm{tr}(\Theta^c)$ is a linear combination of terms of the form $\theta_{j_1 j_2}\theta_{j_2 j_3}\cdots\theta_{j_{c-1} j_c}\theta_{j_c j_1}$ . The form

(A.3) \begin{equation} d_{i_1\sigma(i_1)}\cdots d_{i_{e}\sigma(i_{e})} \bigl(\theta_{j_1 j_2}\theta_{j_2 j_3}\cdots\theta_{j_{c-1} j_c}\theta_{j_c j_1}\bigr) \Big|_{\Theta=0}\quad (1\le i_1<\cdots<i_{e}\le k)\end{equation}

is non-vanishing if and only if $e=c$ , the map $\begin{pmatrix} i_1 &\quad \cdots &\quad i_{c} \\ \sigma(i_1) &\quad \cdots &\quad \sigma(i_{c}) \end{pmatrix}$ forms a cycle of length c, and

\[ (j_1,\ldots,j_c) = (i_h,\sigma(i_h),\sigma^2(i_h)\ldots,\sigma^{c-1}(i_h)) \ \ \mbox{or}\ \ (i_h,\sigma^{-1}(i_h),\ldots,\sigma^{-(c-1)}(i_h))\]

for some $h=1,\ldots,c$ (i.e., there are 2c ways for this to hold). The value of (A.3) is ${(1/2)^c}$ if it does not vanish.

The form

(A.4) \begin{equation} d_{i_1\sigma(i_1)}\cdots d_{i_{e}\sigma(i_{e})}e^{\mathrm{tr}(\Theta^2)} \Big|_{\Theta=0} \quad (1\le i_1<\cdots<i_{e}\le k)\end{equation}

is non-vanishing if and only if e is even, and the map $\begin{pmatrix} i_1 &\quad \cdots &\quad i_{e} \\ \sigma(i_1) &\quad \cdots &\quad \sigma(i_{e}) \end{pmatrix}$ is a product of $e/2$ cycles of length 2. The value of (A.4) is 1 if it does not vanish.

Therefore,

\[ \mathrm{sgn}(\sigma) d_{1\sigma(1)}\cdots d_{k\sigma(k)} \bigl(e^{\mathrm{tr}(\Theta^2)}\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_\ell})\bigr)\Big|_{\Theta=0}\]

is non-vanishing if and only if $\sigma$ (expressed as a product of cycles) factors into $\ell$ cycles of length $c_i$ , $i=1,\ldots,\ell$ , and $e/2$ cycles of length 2, where $e=k-\sum_{i=1}^\ell c_i=k-m$ is even. Note that the number of cycles made from c distinct atoms is $(c-1)!$ , and the number of such $\sigma$ is

\begin{align*}& \binom{k}{c_1} (c_1-1)! \times \binom{k-c_1}{c_2} (c_2-1)! \times\cdots \\ & \qquad \times \binom{k-c_1-\cdots -c_{\ell-1}}{c_\ell} (c_\ell-1)! \times (k-m-1)!! \\& = \frac{(k)_m}{\prod c_i} \times\frac{(k-m)!}{2^{\frac{1}{2}(k-m)}\big(\frac{k-m}{2}\big)!} = \frac{k!}{(\prod c_i) 2^{\frac{1}{2}(k-m)}\big(\frac{k-m}{2}\big)!}.\end{align*}

The sign of $\sigma$ is

\[ \mathrm{sgn}(\sigma)=\prod_{i=1}^\ell ({-}1)^{c_i-1} \times ({-}1)^{(k-m)/2} = ({-}1)^{m-\ell+(k-m)/2}.\]

Therefore,

\begin{equation*} \textrm{(A.2)} = ({-}1)^{m-\ell+(k-m)/2}\, \frac{k!}{(\prod c_i) 2^{\frac{1}{2}(k-m)}\big(\frac{k-m}{2}\big)!} \times \prod_{i=1}^\ell \bigl( 2 c_i\times (1/2)^{c_i}\bigr) \times 1 = \frac{({-}1)^{m-\ell+(k-m)/2}\, k!}{2^{m-\ell+\frac{1}{2}(k-m)}\big(\frac{k-m}{2}\big)!}.\end{equation*}

Consequently, the left-hand side of (A.1) equals

\begin{align*}& \sum_{k=m,\,k-m:\rm even}^n x^{n-k} \binom{n}{k} \frac{k!}{2^{m-\ell+(k-m)/2}\big(\frac{k-m}{2}\big)!} ({-}1)^{m-\ell+(k-m)/2} \\& = ({-}1/2)^{m-\ell}\sum_{k^{\prime}=0}^{[\frac{n-m}{2}]} x^{n-m-2k^{\prime}} \frac{n!}{2^{k^{\prime}}(n-m-2k^{\prime})!k^{\prime}!} ({-}1)^{k^{\prime}} \qquad \Bigl(k^{\prime}=\frac{k-m}{2}\Bigr) \\& = ({-}1/2)^{m-\ell} (n)_m H_{n-m}(x).\end{align*}
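For small n, (A.1) can also be checked mechanically. The sympy sketch below (ours, not part of the proof) verifies three instances for $n=2$ .

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
Theta = sp.Matrix([[a, b], [b, c]])                    # n = 2
E = sp.exp((Theta**2).trace())

def det_op(g):
    """det(x I_2 + D_Theta) g, with d11 = d/da, d22 = d/dc, d12 = d21 = (1/2) d/db."""
    return (x**2*g + x*(sp.diff(g, a) + sp.diff(g, c))
            + sp.diff(g, a, c) - sp.Rational(1, 4)*sp.diff(g, b, 2))

at0 = {a: 0, b: 0, c: 0}
# empty product (m = 0): (A.1) predicts (2)_0 H_2(x) = x^2 - 1
print(sp.simplify(det_op(E).subs(at0)))                        # -> x**2 - 1
# c_1 = 2 (m = 2, l = 1): (A.1) predicts (-1/2)(2)_2 H_0(x) = -1
print(sp.simplify(det_op(E*(Theta**2).trace()).subs(at0)))     # -> -1
# c_1 = c_2 = 1 (m = 2, l = 2): (A.1) predicts (2)_2 H_0(x) = 2
print(sp.simplify(det_op(E*Theta.trace()**2).subs(at0)))       # -> 2
```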

Lemma 5. For any $\beta$ , and positive integers $c_i$ such that $m=\sum_{i=1}^\ell c_i\le n$ ,

(A.5) \begin{equation}\begin{aligned} \det(x I + D_\Theta) \bigl(e^{(1+\beta)\mathrm{tr}(\Theta^2)+\frac{\beta}{2}\mathrm{tr}(\Theta)^2}\mathrm{tr}(\Theta^{c_1})\cdots\mathrm{tr}(\Theta^{c_\ell})\bigr)\Big|_{\Theta=0} \\ = ({-}1/2)^{m-\ell} (n)_m H_{n-m}(x).\end{aligned}\end{equation}

Proof. Equation (A.5) with $\beta=0$ holds from Lemma 4. We demonstrate that the left-hand side of (A.5) is independent of $\beta$ . Using the expansion around $\beta=0$ ,

\begin{align*} e^{\beta (\mathrm{tr}(\Theta^2)+\frac{1}{2}\mathrm{tr}(\Theta)^2)} = \sum_{k\ge 0}\frac{\beta^k}{k!} \sum_{0\le h\le k}\binom{k}{h}\mathrm{tr}(\Theta^2)^{k-h}\mathrm{tr}(\Theta)^{2h} (1/2)^h,\end{align*}

the left-hand side of (A.5) becomes a series in $\beta$ . The coefficient of $\beta^k/k!$ is

\begin{align*} \sum_{0\le h\le k} \binom{k}{h} ({-}1/2)^{2k-(k+h)} (n)_{2k} H_{n-2k}(x) (1/2)^h =0\end{align*}

except when $k=0$ .
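The $\beta$ -independence can likewise be confirmed symbolically; the following sketch (ours, reusing the $n=2$ operator from the previous check) returns $\beta$ -free answers.

```python
import sympy as sp

x, a, b, c, beta = sp.symbols('x a b c beta')
Theta = sp.Matrix([[a, b], [b, c]])                    # n = 2
E = sp.exp((1 + beta)*(Theta**2).trace() + beta/2*Theta.trace()**2)

def det_op(g):
    # det(x I_2 + D_Theta), as in the Lemma 4 check above
    return (x**2*g + x*(sp.diff(g, a) + sp.diff(g, c))
            + sp.diff(g, a, c) - sp.Rational(1, 4)*sp.diff(g, b, 2))

at0 = {a: 0, b: 0, c: 0}
print(sp.simplify(det_op(E).subs(at0)))                        # -> x**2 - 1, free of beta
print(sp.simplify(det_op(E*(Theta**2).trace()).subs(at0)))     # -> -1, free of beta
```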

Proof of Lemma 3. Recall that $\gamma=\sqrt{\alpha/2-\beta}$ . Equation (A.5) from Lemma 5 with $\beta\,:\!=\,\beta/\gamma^2$ , $\Theta\,:\!=\,{\sqrt{-1}}\gamma\Theta$ , and $x\,:\!=\,-x$ yields (5.15).

A.2. Conditional asymptotic expansion and proof of Lemma 2

We begin by summarizing the asymptotic expansion of the probability density function and the moments in the i.i.d. setting of [Reference Bhattacharya and Rao4].

Let $q_1(x)$ , $x\in\mathbb{R}^k$ , be the probability density of a random vector X with zero mean and covariance $\Sigma\succ 0$ . Let $\psi_1(t)$ be the characteristic function of X. Let $q_N(x)$ , $x\in\mathbb{R}^k$ , be the probability density with the characteristic function $\psi_N(t)=\psi_1(t/\sqrt{N})^{N}$ . Let $\phi_k(x;\Sigma)$ be the probability density of the Gaussian distribution $\mathcal{N}_k(0,\Sigma)$ . Assume that the sth moment exists under $q_1$ . Then

\[ \log\psi_N(t) = N \log\psi_1\Bigl(\frac{t}{\sqrt{N}}\Bigr) = -\frac{1}{2}t^\top\Sigma t + \sum_{j=3}^s \frac{N^{-\frac{1}{2}(j-2)}{\sqrt{-1}}^j}{j!} \sum_{i:|i|=j} c_i t^i + N r_s\Bigl(\frac{t}{\sqrt{N}}\Bigr),\]

where $t=\left(t^1,\ldots,t^k\right)^\top$ , $i=(i_1,\ldots,i_k)$ is a multi-index with $|i|=\sum_a i_a$ , $c_i\in\mathbb{R}$ are constants determined by the cumulants of $q_1$ , $t^i=(t^1)^{i_1}\cdots (t^k)^{i_k}$ , and $r_s(t)$ is a function such that $r_s(t)=o\left(|t|^{s-2}\right)$ . Let

(A.6) \begin{equation} \widehat\psi_N^{(s)}(t) = e^{-\frac{1}{2}t^\top\Sigma t}\Biggl(1 + \sum_{j=3}^s N^{-\frac{1}{2}(j-2)} {\sqrt{-1}}^j F_{3(j-2)}(t)\Biggr),\end{equation}

where the expression inside the parentheses represents the expansion of

\[ \exp\Biggl(\sum_{j=3}^s \frac{N^{-\frac{1}{2}(j-2)}{\sqrt{-1}}^j}{j!} \sum_{i:|i|=j} c_i t^i\Biggr)\]

around $N=\infty$ up to the order of $N^{-\frac{1}{2}(s-2)}$ . Here, $F_{3(j-2)}(t)$ is an even or odd polynomial in t of degree $3(j-2)$ . Let

(A.7) \begin{equation} \widehat q_N^{(s)}(x) = \phi_k(x;\Sigma)\biggl(1 + \sum_{j=3}^s N^{-\frac{1}{2}(j-2)} G_{3(j-2)}(x)\biggr), \quad x=(x_1,\ldots,x_k),\end{equation}

be the Fourier inversion of $\widehat\psi_N^{(s)}(t)$ . Here, $G_{3(j-2)}(x)$ is an even or odd polynomial in x of degree $3(j-2)$ .

Proposition 3. (Corollary to [Reference Bhattacharya and Rao4, Theorems 19.1 and 19.2].) Assume that the probability density function $q_N(x)$ exists for $N\ge 1$ , and is bounded for some N. Assume that $\mathbb{E}[\Vert X\Vert^s]<\infty$ under $q_1$ . Then $q_N(x)$ is continuous for sufficiently large N, and

\[ \sup_{x\in\mathbb{R}^k}\!(1+\Vert x\Vert^s)\bigl|q_N(x)-\widehat q_N^{(s)}(x)\bigr| = o\!\left(N^{-\frac{1}{2}(s-2)}\right) \quad \textit{as}\ N\to\infty.\]
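To illustrate Proposition 3 in the simplest case (our numerical sketch, not from [Reference Bhattacharya and Rao4]): for $k=1$ and centered Exp(1) summands, $q_N$ is a shifted and scaled Gamma density, the third cumulant equals 2, and the sup-norm distance between $q_N$ and the order- $s=3$ expansion $\widehat q_N^{(3)}$ in (A.7) decays at the rate $N^{-1}$ , as the next (omitted) term of the expansion predicts.

```python
import numpy as np
from scipy.stats import gamma, norm
from numpy.polynomial.hermite_e import HermiteE  # probabilists' Hermite He_n

# i.i.d. X_i = E_i - 1 with E_i ~ Exp(1): kappa_3 = 2, and the density of
# S_N = sum(X_i)/sqrt(N) is a shifted/scaled Gamma(N, 1) density.
He3 = HermiteE([0, 0, 0, 1])                             # He_3(x) = x^3 - 3x
x = np.linspace(-3, 3, 601)
for N in (10, 100, 1000):
    qN = np.sqrt(N) * gamma.pdf(np.sqrt(N)*x + N, a=N)
    edge = norm.pdf(x) * (1 + 2*He3(x)/(6*np.sqrt(N)))   # hat q_N^{(3)} in (A.7)
    print(N, np.max(np.abs(qN - edge)))                  # decays roughly like 1/N
```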

Write $x=(x_1,x_2)$ , $x_1\in\mathbb{R}^{k_1}$ , $x_2\in\mathbb{R}^{k_2}$ ( $k_1+k_2=k$ ). In the following, $x_2$ is assumed to be a constant vector $x_{20}$ .

Corollary 1. Let $f(x_1)$ be a function such that $|f(x_1)|\le C (1+\Vert x_1\Vert^{s_1})$ . For $s\ge s_1+k_1+1$ and for $s_0\le s$ ,

\[ \int_{\mathbb{R}^{k_1}}f(x_1) q_N(x_1,x_{20}) \mathrm{d} x_1 = \int_{\mathbb{R}^{k_1}}f(x_1) \widehat q_N^{(s_0)}(x_1,x_{20}) \mathrm{d} x_1 +o\!\left(N^{-\frac{1}{2}(s_0-2)}\right).\]

Proof. We have

\begin{align*} \biggl|\int_{\mathbb{R}^{k_1}} f(x_1) q_N(x_1,x_{20}) \mathrm{d} x_1 - & \int_{\mathbb{R}^{k_1}}f(x_1) \widehat q_N^{(s_0)}(x_1,x_{20}) \mathrm{d} x_1 \biggr| \\\le& \int_{\mathbb{R}^{k_1}}|f(x_1)| |q_N(x_1,x_{20}) - \widehat q_N^{(s)}(x_1,x_{20}) | \mathrm{d} x_1\\ &+ \int_{\mathbb{R}^{k_1}}|f(x_1)| |\widehat q_N^{(s)}(x_1,x_{20}) - \widehat q_N^{(s_0)}(x_1,x_{20})| \mathrm{d} x_1.\end{align*}

The first term is bounded above by

\[ o\!\left(N^{-\frac{1}{2}(s-2)}\right)\times C \int_{\mathbb{R}^{k_1}} \frac{1+\Vert (x_1,x_{20})\Vert^{s_1}}{1+\Vert (x_1,x_{20})\Vert^{s}} \mathrm{d} x_1.\]

This integral exists when $s-s_1-(k_1-1)>1$ .

For the second term, because $\widehat q_N^{(s)}(x_1,x_{20}) - \widehat q_N^{(s_0)}(x_1,x_{20})$ is

\[ \phi_k((x_1,x_{20});\Sigma)\times (\mbox{a polynomial in $x_1$}),\]

the integral exists, and the coefficients of the polynomials are multiples of $N^{-\frac{1}{2}(s_0+1-2)},\ldots,N^{-\frac{1}{2}(s-2)}$ (if $s_0<s$ ), or 0 (if $s_0=s$ ). Therefore, the second term is $O\bigl(N^{-\frac{1}{2}(s_0-1)}\bigr)$ when $s_0<s$ .

The sum of the first and second terms is $o\bigl(N^{-\frac{1}{2}(s-2)}\bigr) + O\bigl(N^{-\frac{1}{2}(s_0-1)}\bigr)\mathbb{1}_{\{s_0<s\}}=o\bigl(N^{-\frac{1}{2}(s_0-2)}\bigr)$ .

Proof of Lemma 2. We apply Corollary 1 to evaluate (2.9). The characteristic function $\psi_N(t)$ and its truncated version $\widehat\psi_N^{(s)}(t)$ in (A.6) are given by $\psi_N(\widetilde s,T,\Theta)$ in (5.5) and $\widehat\psi_N(\widetilde s,T,\Theta)$ in (5.6), respectively. The probability density function $q_N(x)$ and its truncated version $\widehat q_N^{(s)}(x)$ in (A.7) are respectively the $p_N(x,V,R)$ and $\widehat p_N(x,V,R)$ used in (5.8). Let $x_1=(X,R)$ , $x_2=V=x_{20}=0$ , and $f(x_1)=\mathbb{1}_{\{X\ge x\}}\det({-}R+\gamma X I)$ . Then

\[ \int_{\mathbb{R}^{k_1}} f(x_1) q_N(x_1,0) \mathrm{d} x_1 = \Xi_{n,N}(x)\]

and

\[ \int_{\mathbb{R}^{k_1}} f(x_1) \widehat q_N(x_1,0) \mathrm{d} x_1 = \widehat \Xi_{n,N}(x).\]

Here, $k_1=1+n(n+1)/2$ , $k_2=n$ , and $s_1=n$ . Note that $s_1+k_1+1=\binom{n+2}{2}+1$ . The constant C in $|f(x_1)|\le C(1+\Vert x_1\Vert^{s_1})$ in Corollary 1 can be chosen independently of x. Hence, the remainder term is independent of x.

In Theorem 1, we choose $s_0=4$ . If the joint density is bounded and has moments of order $s\ge \max\big(\binom{n+2}{2}+1,s_0\big)=\binom{n+2}{2}+1$ , then the remainder term is $o\big(N^{-\frac{1}{2}(s_0-2)}\big)=o(N^{-1})$ , and the manipulations in the asymptotic expansion are justified.

Acknowledgements

The authors thank the editors and the referees for their suggestions and constructive comments, which significantly improved the original manuscript. The authors are also grateful to Robert Adler and Jonathan Taylor for bringing [Reference Chamandy, Worsley, Taylor and Gosselin5] to their attention; to Nakahiro Yoshida, Shiro Ikeda, and Tsutomu T. Takeuchi for their valuable comments; and to Miho Suzuki for preparing figures.

Funding information

This work was partially supported by JSPS KAKENHI Grants No. JP21H03403 and No. JP19K03835.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Adler, R. J. (1981). The Geometry of Random Fields. John Wiley, Chichester.
Adler, R. J. and Taylor, J. E. (2007). Random Fields and Geometry. Springer, New York.
Adler, R. J. and Taylor, J. E. (2011). Topological Complexity of Smooth Random Functions. Springer, Heidelberg.
Bhattacharya, R. N. and Rao, R. R. (2010). Normal Approximation and Asymptotic Expansions. Society for Industrial and Applied Mathematics, Philadelphia.
Chamandy, N., Worsley, K. J., Taylor, J. and Gosselin, F. (2008). Tilted Euler characteristic densities for central limit random fields, with application to 'bubbles'. Ann. Statist. 36, 2471–2507.
Cheng, D. and Schwartzman, A. (2018). Expected number and height distribution of critical points of smooth isotropic Gaussian random fields. Bernoulli 24, 3422–3446.
Davis, A. W. (1980). Invariant polynomials with two matrix arguments, extending the zonal polynomials. In Multivariate Analysis V: Proceedings of the Fifth International Symposium on Multivariate Analysis, North-Holland, Amsterdam, pp. 287–299.
Fantaye, Y., Marinucci, D., Hansen, F. and Maino, D. (2015). Applications of the Gaussian kinematic formula to CMB data analysis. Phys. Rev. D 91, article no. 063501.
Hikage, C., Komatsu, E. and Matsubara, T. (2006). Primordial non-Gaussianity and analytical formula for Minkowski functionals of the cosmic microwave background and large-scale structure. Astrophys. J. 653, 11–26.
Hug, D. and Schneider, R. (2002). Kinematic and Crofton formulae of integral geometry: recent variants and extensions. In Homenatge al Professor Lluís Santaló i Sors, ed. C. Barceló i Vidal, Universitat de Girona, pp. 51–80.
Kuriki, S., Takemura, A. and Taylor, J. E. (2022). The volume-of-tube method for Gaussian random fields with inhomogeneous variance. J. Multivariate Anal. 188, article no. 104819.
Marinucci, D. and Peccati, G. (2011). Random Fields on the Sphere: Representation, Limit Theorems and Cosmological Applications. Cambridge University Press.
Matsubara, T. (2003). Statistics of smoothed cosmic fields in perturbation theory. I. Formulation and useful formulae in second-order perturbation theory. Astrophys. J. 584, 1–33.
Matsubara, T. (2010). Analytic Minkowski functionals of the cosmic microwave background: second-order non-Gaussianity with bispectrum and trispectrum. Phys. Rev. D 81, article no. 083505.
Matsubara, T. and Kuriki, S. (2021). Weakly non-Gaussian formula for the Minkowski functionals in general dimensions. Phys. Rev. D 104, article no. 103522.
Matsubara, T. and Yokoyama, J. (1996). Genus statistics of the large-scale structure with non-Gaussian density fields. Astrophys. J. 463, 409–419.
McCullagh, P. (1987). Tensor Methods in Statistics. Chapman and Hall, London.
Panigrahi, S., Taylor, J. and Vadlamani, S. (2019). Kinematic formula for heterogeneous Gaussian related fields. Stoch. Process. Appl. 129, 2437–2465.
Planck Collaboration (2014). Planck 2013 results. XXIV. Constraints on primordial non-Gaussianity. A&A 571, article no. A24.
Pranav, P. et al. (2019). Topology and geometry of Gaussian random fields I: on Betti numbers, Euler characteristic, and Minkowski functionals. Monthly Notices R. Astronom. Soc. 485, 4167–4208.
Schmalzing, J. and Buchert, T. (1997). Beyond genus statistics: a unifying approach to the morphology of cosmic structure. Astrophys. J. 482, L1–L4.
Schmalzing, J. and Górski, K. M. (1998). Minkowski functionals used in the morphological analysis of cosmic microwave background anisotropy maps. Monthly Notices R. Astronom. Soc. 297, 355–365.
Schneider, R. and Weil, W. (2008). Stochastic and Integral Geometry. Springer, Berlin.
Takemura, A. and Kuriki, S. (2002). On the equivalence of the tube and Euler characteristic methods for the distribution of the maximum of Gaussian fields over piecewise smooth domains. Ann. Appl. Prob. 12, 768–796.
Taylor, J. E. (2006). A Gaussian kinematic formula. Ann. Prob. 34, 122–158.
Tomita, H. (1986). Curvature invariants of random interface generated by Gaussian fields. Progress Theoret. Phys. 76, 952–955.
Worsley, K. J. (1994). Local maxima and the expected Euler characteristic of excursion sets of $\chi^{2}$, F and t fields. Adv. Appl. Prob. 26, 13–42.
Worsley, K. J. (1995). Boundary corrections for the expected Euler characteristic of excursion sets of random fields, with an application to astrophysics. Adv. Appl. Prob. 27, 943–959.
Figure 3.1. Diagrams with cycle ((a)–(c)) and without cycle ((d)). (a) $\kappa^{(3)}_{(12),(12),(13)}(0)$, (b) $\kappa^{(3)}_{(12),(13),(23)}(0)$, (c) $\kappa^{(6)}_{(12),(13),(14),(23),(45),(46)}(0)$, (d) $\kappa^{(6)}_{(12),(13),(14),(45),(46)}(0)$.

Table 3.1. Derivatives of cumulant functions $\kappa^{(k)}$ with cycle-free diagram ($k=2,3,4$).

Figure 4.1. Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with N degrees of freedom (dotted line, $N=10$; dashed line, $N=100$; solid line, $N=\infty$).

Figure 4.2. Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with 100 degrees of freedom, and its approximations (dot-dashed line, true curve; dotted line, Gaussian approximation; dashed line, first approximation; solid line, second approximation).

Figure 4.3. Approximation error of Euler characteristic density for a chi-squared random field on $\mathbb{R}^4$ with 100 degrees of freedom (dotted line, Gaussian approximation; dashed line, first approximation; solid line, second approximation).