
Fluctuations of subgraph counts in graphon based random graphs

Published online by Cambridge University Press:  09 December 2022

Bhaswar B. Bhattacharya*
Affiliation:
Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104, USA
Anirban Chatterjee
Affiliation:
Department of Statistics and Data Science, University of Pennsylvania, Philadelphia, PA 19104, USA
Svante Janson
Affiliation:
Department of Mathematics, Uppsala University, PO Box 480, SE-751 06 Uppsala, Sweden
*Corresponding author. Email: bhaswar@wharton.upenn.edu

Abstract

Given a graphon $W$ and a finite simple graph $H$ , with vertex set $V(H)$ , denote by $X_n(H, W)$ the number of copies of $H$ in a $W$ -random graph on $n$ vertices. The asymptotic distribution of $X_n(H, W)$ was recently obtained by Hladký, Pelekis, and Šileikis [17] in the case where $H$ is a clique. In this paper, we extend this result to any fixed graph $H$ . Towards this we introduce a notion of $H$ -regularity of graphons and show that if the graphon $W$ is not $H$ -regular, then $X_n(H, W)$ has Gaussian fluctuations with scaling $n^{|V(H)|-\frac{1}{2}}$ . On the other hand, if $W$ is $H$ -regular, then the fluctuations are of order $n^{|V(H)|-1}$ and the limiting distribution of $X_n(H, W)$ can have both Gaussian and non-Gaussian components, where the non-Gaussian component is a (possibly) infinite weighted sum of centred chi-squared random variables with the weights determined by the spectral properties of a graphon derived from $W$ . Our proofs use the asymptotic theory of generalised $U$ -statistics developed by Janson and Nowicki [22]. We also investigate the structure of $H$ -regular graphons for which either the Gaussian or the non-Gaussian component of the limiting distribution (but not both) is degenerate. Interestingly, there are also $H$ -regular graphons $W$ for which both the Gaussian and the non-Gaussian components are degenerate, that is, $X_n(H, W)$ has a degenerate limit even under the scaling $n^{|V(H)|-1}$ . We give an example of this degeneracy with $H=K_{1, 3}$ (the 3-star) and also establish non-degeneracy in a few examples. This naturally leads to interesting open questions on higher order degeneracies.

Type
Paper
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

A graphon is a measurable function $W\;:\; [0,1]^2\rightarrow [0,1]$ which is symmetric, that is, $W(x,y)=W(y,x)$ , for all $x,y\in [0,1]$ . Graphons arise as the limit objects of sequences of large graphs and have received phenomenal attention over the last few years. They provide a bridge between combinatorics and analysis and have found applications in several disciplines including statistical physics, probability, and statistics; see for example [3, 9–12]. For a detailed exposition of the theory of graph limits, we refer to Lovász [25]. Graphons provide a natural sampling procedure for generating inhomogeneous variants of the classical Erdős–Rényi random graph, a concept that has been proposed independently by various authors (see [7, 8, 14, 26] among others). Formally, given a graphon $W\;:\;[0,1]^{2}\rightarrow [0,1]$ , a $W$ -random graph on the set of vertices $[n]\;:\!=\;\{1,2, \ldots, n\}$ , hereafter denoted by $G(n, W)$ , is obtained by connecting the vertices $i$ and $j$ with probability $W(U_{i},U_{j})$ independently for all $1 \leq i \lt j \leq n$ , where $\{U_{i}\;:\; 1 \leq i \leq n\}$ is an i.i.d. sequence of $U[0,1]$ random variables. An alternative way to achieve this sampling is to generate i.i.d. sequences $\{U_{i}\;:\; 1 \leq i \leq n\}$ and $\{Y_{ij} \;:\; 1\leq i\lt j \leq n\}$ of $U[0,1]$ random variables and then assign the edge $(i, j)$ whenever $Y_{ij}\leq W(U_{i},U_{j})$ , for $1 \leq i \lt j \leq n$ . Observe that setting $W = W_p \equiv p\in [0,1]$ gives the classical (homogeneous) Erdős–Rényi random graph model, where every edge is present independently with constant probability $p$ .
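For readers who want to experiment with this sampling scheme, the following short Python sketch (not part of the paper) implements it directly; the graphon $W(x,y)=xy$ and the parameter values are placeholder choices used only for illustration.

```python
# Illustrative sketch (not from the paper): sampling a W-random graph G(n, W)
# exactly as described above, using U_1, ..., U_n and Y_ij uniform on [0, 1].
import numpy as np

def sample_W_random_graph(n, W, rng=None):
    """Return the n x n adjacency matrix of G(n, W)."""
    rng = np.random.default_rng(rng)
    U = rng.uniform(size=n)                 # latent vertex labels U_i
    Y = rng.uniform(size=(n, n))            # edge coin flips Y_ij (upper triangle used)
    P = W(U[:, None], U[None, :])           # P[i, j] = W(U_i, U_j)
    A = np.triu((Y <= P).astype(int), k=1)  # keep edge (i, j), i < j, iff Y_ij <= W(U_i, U_j)
    return A + A.T                          # symmetrise

if __name__ == "__main__":
    A = sample_W_random_graph(200, lambda x, y: x * y, rng=0)
    print("edge density:", A.sum() / (200 * 199))  # close to t(K_2, W) = 1/4 for W(x,y) = xy
```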

Counts of subgraphs encode important structural information about the geometry of a network. In fact, the convergence of a sequence of finite graphs to a graphon is precisely determined by the convergence of its subgraph densities. As a consequence, understanding the asymptotic properties of subgraph counts in $W$ -random graphs is a problem of central importance in graph limit theory. To this end, given a finite graph $H=(V(H), E(H))$ denote by $X_{n}(H,W)$ the number of copies of $H$ in the $W$ -random graph $G(n,W)$ . More formally,

(1.1) \begin{align} X_{n}(H,W)=\sum _{1\leq i_{1}\lt \cdots \lt i_{|V(H)|}\leq n}\sum _{H'\in \mathscr{G}_H(\{i_{1},\ldots, i_{|V(H)|} \}) } \prod _{(i_{s}, i_{t}) \in E(H')} \textbf{1}\left \{Y_{i_{s}i_{t}}\leq W(U_{i_{s}},U_{i_{t}})\right \}, \end{align}

where for any set $S \subseteq [n]$ , $\mathscr G_H(S)$ denotes the collection of all subgraphs of the complete graph $K_{|S|}$ on the vertex set $S$ which are isomorphic to $H$ . (We count unlabelled copies of $H$ . Several other authors count labelled copies, which multiplies $X_n(H,W)$ by $|\textrm{Aut}(H)|$ , cf. (2.7).) The asymptotic distribution of $X_n(H,W_p)$ in the Erdős–Rényi model, where $W = W_p \equiv p$ , has been classically studied (in general with $p=p(n)$ ) using various tools such as $U$ -statistics [28, 29], method of moments [30], Stein’s method [2], and martingales [18, 19], see also [21, Chapter 6], and the precise conditions under which $X_n(H, W_p)$ is asymptotically normal are well understood [30]. In particular, when $p \in (0, 1)$ is fixed, $X_n(H, W_p)$ is asymptotically normal for any finite graph $H$ that is non-empty, i.e., has at least one edge.
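The next sketch (again illustrative, not from the paper) evaluates $X_n(H,W)$ from an adjacency matrix by brute force, counting injective edge-preserving maps $V(H)\to [n]$ and dividing by $|\textrm{Aut}(H)|$ , which matches the unlabelled count in (1.1); the Erdős–Rényi input and the choice $H=K_{1,2}$ are placeholders, and the enumeration is only feasible for small $n$ and small $H$ .

```python
# Illustrative sketch (not from the paper): brute-force evaluation of the subgraph count in (1.1).
from itertools import permutations
import numpy as np

def aut_size(H_edges, k):
    """|Aut(H)| for a graph H on vertices 0, ..., k-1."""
    E = {frozenset(e) for e in H_edges}
    return sum(all(frozenset((p[a], p[b])) in E for a, b in E) for p in permutations(range(k)))

def count_copies(A, H_edges, k):
    """Number of unlabelled copies of H in the graph with adjacency matrix A."""
    n = A.shape[0]
    labelled = sum(
        all(A[phi[a], phi[b]] for a, b in H_edges) for phi in permutations(range(n), k)
    )
    return labelled // aut_size(H_edges, k)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, p = 30, 0.3
    A = np.triu((rng.uniform(size=(n, n)) < p).astype(int), k=1)
    A = A + A.T                                       # an Erdos-Renyi graph G(n, p)
    star2 = [(0, 1), (0, 2)]                          # H = K_{1,2}, the 2-star
    expected = 3 * n * (n - 1) * (n - 2) / 6 * p**2   # (2.8): (n)_3 / |Aut(K_{1,2})| * p^2
    print(count_copies(A, star2, 3), "2-stars; expected value", round(expected, 1))
```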

In this paper, we study the asymptotic distribution of $X_n(H, W)$ for general graphons $W$ . This problem has received significant attention recently, beginning with the work of Féray, Méliot, and Nikeghbali [16], where the asymptotic normality for homomorphism densities in general $W$ -random graphs was derived using the framework of mod-Gaussian convergence. Using this machinery the authors also obtained moderate deviation principles and local limit theorems for the homomorphism densities in this regime. Very recently, using Stein’s method, rates of convergence to normality (Berry–Esseen type bounds) have been derived as well, see [24] (which also contains further related results) and [31]. See also [13] and the references therein for further results.

However, interestingly, the limiting normal distribution of the subgraph counts obtained in [16] can be degenerate depending on the structure of the graphon $W$ . This phenomenon was observed in [16], and it was explored in detail in the recent paper of Hladký, Pelekis, and Šileikis [17] for the case where $H=K_r$ is the $r$ -clique, for some $r \geq 2$ . They showed that the usual Gaussian limit is degenerate when a certain regularity function, which encodes the homomorphism density of $K_r$ incident to a given ‘vertex’ of $W$ , is constant almost everywhere (a.e.). In this case, the graphon $W$ is said to be $K_r$ -regular and the asymptotic distribution of $X_n(K_r, W)$ (with another normalisation, differing by a factor $n^{1/2}$ ) has both Gaussian and non-Gaussian components. In the present paper, we extend this result to any fixed graph $H$ . To this end, we introduce the analogous notion of $H$ -regularity and show that the fluctuations of $X_n(H, W)$ depend on whether or not $W$ is $H$ -regular. In particular, if $W$ is not $H$ -regular, then $X_n(H, W)$ is asymptotically Gaussian, using a normalisation factor $n^{|V(H)|-1/2}$ . However, if $W$ is $H$ -regular, then the normalisation factor becomes $n^{|V(H)|-1}$ and yields a limiting distribution of $X_n(H, W)$ that has, in general, a Gaussian component and another independent (non-Gaussian) component which is a (possibly) infinite weighted sum of centred chi-squared random variables. Here, the weights are determined by the spectrum of a graphon obtained from the 2-point conditional densities of $H$ in $W$ , that is, the density of $H$ in $W$ when two vertices of $H$ are mapped to two ‘vertices’ of $W$ , averaged over all pairs of vertices of $H$ . The results are formally stated in Theorem 2.9. Unlike [17], which uses the method of moments, our proofs employ the orthogonal decomposition for generalised $U$ -statistics developed by Janson and Nowicki [22] (see also [20, Chapter 11.3]). This avoids cumbersome moment calculations and provides a more streamlined framework for dealing with the asymmetries of general subgraphs.

There are also exceptional cases, where $W$ is $H$ -regular and normalisation of $X_n(H,W)$ by $n^{|V(H)|-1}$ also yields a degenerate limit; then a non-trivial limit can be found by another normalisation. (We ignore trivial cases when $X_n(H,W)$ is deterministic.) This cannot happen when $H=K_r$ as shown in [17], but we give an example of this degeneracy with $H=K_{1,3}$ (the 3-star); see Example 4.6. We also show that this higher order degeneracy cannot happen for $H=C_4$ (the 4-cycle) and $H=K_{1,2}$ (the 2-star); see Theorems 4.8 and 4.10, respectively. It is an open problem to decide for which graphs $H$ such higher order degeneracies may occur.

We also study the structure of $W$ when it is $H$ -regular and one (but not both) of the two components of the limit distribution in Theorem 2.9(2) vanishes, so that the limit distribution either is normal or lacks a normal component. In particular, we show that if $H$ is bipartite and $W$ is $H$ -regular, then the limit lacks a normal component if and only if $W$ is $\{0, 1\}$ -valued almost everywhere (Theorem 4.3).

1.1. Organisation

The rest of the paper is organised as follows. The limit theorems for the subgraph counts are presented in Section 2. We compute the limits in some examples in Section 3. Degeneracies of the asymptotic distributions are discussed in Section 4. The main results are proved in Sections 5–8.

2. Asymptotic distribution of subgraph counts in $W$ -random graphs

In this section, we will state our main result on the asymptotic distribution of $X_n(H, W)$ . The section is organised as follows: In Section 2.1, we recall some basic definitions about graphons. The notions of conditional homomorphism density and $H$ -regularity are introduced in Section 2.2. Some spectral properties of the integral operator corresponding to a graphon are described in Section 2.3. The result is formally stated in Section 2.4.

2.1. Preliminaries

A quantity that will play a central role in our analysis is the homomorphism density of a fixed multigraph $F=(V(F), E(F))$ (without loops) in a graphon $W$ , which is defined as

(2.1) \begin{align} t(F,W)=\int _{[0,1]^{|V(F)|}}\prod _{(s, t) \in E(F)}W(x_{s},x_{t})\prod _{a=1}^{|V(F)|}\,\textrm{d} x_{a}. \end{align}

Note that this is the natural continuum analogue of the homomorphism density of a fixed graph $F=(V(F), E(F))$ into a finite (unweighted) graph $G=(V(G), E(G))$ which is defined as

(2.2) \begin{align} t(F, G) \;:\!=\;\frac{|\hom (F,G)|}{|V (G)|^{|V (F)|}}, \end{align}

where $|\hom (F,G)|$ denotes the number of homomorphisms of $F$ into $G$ . In fact, it is easy to verify that $t(F, G) = t(F, W^{G})$ , where $W^{G}$ is the empirical graphon associated with the graph $G$ , which is defined as

(2.3) \begin{align} W^G(x, y) =\boldsymbol 1\{(\lceil |V(G)|x \rceil, \lceil |V(G)|y \rceil )\in E(G)\}. \end{align}

(In other words, to obtain the empirical graphon $W^G$ from the graph $G$ , partition $[0, 1]^2$ into $|V(G)|^2$ squares of side length $1/|V(G)|$ , and let $W^G(x, y)=1$ in the $(i, j)$ -th square if $(i, j)\in E(G)$ , and 0 otherwise.)
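As a small illustration (not from the paper), the sketch below builds the empirical graphon $W^G$ of (2.3) from an adjacency matrix and checks the identity $t(K_2, G)=t(K_2, W^{G})$ , i.e. that the edge density is preserved; the 3-vertex path used as input is a placeholder.

```python
# Illustrative sketch (not from the paper): the empirical graphon W^G of (2.3).
import numpy as np

def empirical_graphon(A):
    """Return W^G as a {0,1}-valued function on [0,1]^2 built from the adjacency matrix A."""
    n = A.shape[0]
    def WG(x, y):
        i = min(int(np.ceil(n * x)) - 1, n - 1) if x > 0 else 0
        j = min(int(np.ceil(n * y)) - 1, n - 1) if y > 0 else 0
        return A[i, j]
    return WG

if __name__ == "__main__":
    A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])    # a path on 3 vertices
    WG = empirical_graphon(A)
    grid = (np.arange(3) + 0.5) / 3                    # midpoints of the 3 blocks
    t_K2_WG = np.mean([[WG(x, y) for y in grid] for x in grid])
    print(t_K2_WG, "=", 2 * 2 / 3**2)                  # t(K_2, G) = 2|E(G)| / |V(G)|^2 = 4/9
```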

Let $H=(V(H), E(H))$ be a simple graph. For convenience, we will throughout the paper assume that $V(H)=\{1, 2, \ldots,|V(H)|\}$ . Then, the homomorphism density defined in (2.1) can also be interpreted as the probability that a $W$ -random graph on $|V(H)|$ vertices contains $H$ , that is,

(2.4) \begin{align} t(H,W)=\mathbb{P}(G(|V(H)|,W)\supseteq H). \end{align}

To see this, recall the construction of a $W$ -random graph and note from (2.1) that,

(2.5) \begin{align} t(H,W)=\mathbb{E}\left [\prod _{(a,b)\in E(H)}W(U_{a},U_{b})\right ] & = \mathbb{E}\left [\prod _{(a,b)\in E(H)} \textbf{1}\{ Y_{ab}\leq W(U_{a},U_{b}) \}\right ] \nonumber \\[5pt] & = \mathbb{E}\left [ \textbf{1}\{G(|V(H)|,W)\supseteq H \}\right ]. \end{align}

Next, recalling (1.1) note that

(2.6) \begin{align} \mathbb{E}X_{n}(H,W) &=\sum _{1\leq i_{1}\lt \cdots \lt i_{|V(H)|}\leq n}\sum _{H'\in \mathscr{G}_H(\{i_{1},\ldots, i_{|V(H)|}\})}t(H,W) \nonumber \\[5pt] &= \binom{n}{|V(H)|} \left |\mathscr{G}_H( \{ 1,\ldots, |V(H)| \} ) \right | \cdot t(H,W) \end{align}

where the last equality follows since the number of subgraphs of $K_{|V(H)|}$ on $\{i_{1},\ldots, i_{|V(H)|}\}$ isomorphic to $H$ is the same for any collection of distinct indices $1\leq i_{1}\lt \cdots \lt i_{|V(H)|}\leq n$ . Clearly,

(2.7) \begin{align} \left |\mathscr{G}_H( \{ 1,\ldots, |V(H)| \} ) \right | = \frac{|V(H)|!}{|\textrm{Aut}(H)|}, \end{align}

where $\textrm{Aut}(H)$ is the collection of all automorphisms of $H$ , that is, the collection of permutations $\sigma$ of the vertex set $V(H)$ such that $(x, y) \in E(H)$ if and only if $(\sigma (x), \sigma (y)) \in E(H)$ . This implies, from (2.6),

(2.8) \begin{align} \mathbb{E}X_{n}(H,W) = \frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W), \end{align}

where $(n)_{|V(H)|}\;:\!=\;n(n-1)\cdots (n-|V(H)|+1)$ .

2.2. Conditional homomorphism densities and $H$ -regularity

In this section, we will formalise the notion of $H$ -regularity of a graphon $W$ . To this end, we need to introduce the notion of conditional homomorphism densities. Throughout, we will assume $H=(V(H), E(H))$ is a non-empty simple graph with vertices labelled $V(H)=\{1, 2, \ldots, |V(H)|\}$ .

Definition 2.1. Fix $1 \leq K \leq |V(H)|$ and an ordered set $\boldsymbol{a}= (a_1, a_2, \ldots, a_K)$ of distinct vertices $a_1, a_2, \ldots, a_K \in V(H)$ . Then the $K$ -point conditional homomorphism density function of $H$ in $W$ given $\boldsymbol{a}$ is defined as

(2.9) \begin{align} t_{\boldsymbol{a}}(\boldsymbol{x},H,W) &\;:\!=\;\mathbb{E}\left [\prod _{(a,b)\in E(H)}W(U_{a},U_{b})\Bigm |U_{a_{j}}=x_{j}, \text{ for } 1\leq j \leq K \right ] \notag \\[5pt] &= \mathbb{P}\left (G(|V(H)|,W)\supseteq H \bigm | U_{a_{j}}=x_{j}, \text{ for } 1\leq j \leq K \right ), \end{align}

where $\boldsymbol{x}= (x_1, x_2, \ldots, x_K)$ . In other words, $t_{\boldsymbol{a}}(\boldsymbol{x},H,W)$ is the homomorphism density of $H$ in the graphon $W$ when the vertex $a_j \in V(H)$ is marked with the value $x_j \in [0, 1]$ , for $1 \leq j \leq K$ .

The conditional homomorphism densities will play a crucial role in the description of the limiting distribution of $X_n(H, W)$ . In particular, the $H$ -regularity of a graphon $W$ is determined by the 1-point conditional homomorphism densities, which we formalise below:

Definition 2.2. ( $H$ -regularity of a graphon) A graphon $W$ is said to be $H$ -regular if

(2.10) \begin{align} \overline{t}(x,H,W) \;:\!=\; \frac{1}{|V(H)|}\sum _{a=1}^{|V(H)|}t_a(x,H,W)=t(H,W), \end{align}

for almost every $x \in [0, 1]$ .

Note that in (2.10) it is enough to assume that $\overline{t}(x,H,W)$ is a constant for almost every $x \in [0, 1]$ . This is because

(2.11) \begin{align} \int _0^1 t_{a}(x,H,W) \,\textrm{d} x = t(H, W), \end{align}

for all $a \in V(H)$ . Hence, if $\overline{t}(x,H,W)$ is a constant a.e., then the constant must be $t(H, W)$ . In other words, a graphon $W$ is $H$ -regular if the homomorphism density of $H$ in $W$ , when one of the vertices of $H$ is marked (and averaged over the choice of the marked vertex), is a constant independent of the value of the marking.

Remark 2.3. Note that when $H=K_r$ is the $r$ -clique, for some $r \geq 2$ , then $t_{a}(x,H,W)=t_{b}(x,H,W)$ , for all $1 \leq a \ne b \leq r$ . Hence, (2.10) simplifies to

(2.12) \begin{align} t_{1}(x,K_r,W) = \mathbb{E}\left [\prod _{1\leq a \lt b \leq r}W(U_{a},U_{b})\biggm |U_{1}=x\right ] = t(H,W), \text{ for almost every } x \in [0, 1], \end{align}

which is precisely the notion of $K_{r}$ -regularity defined in [Reference Hladký, Pelekis and Šileikis17].

Remark 2.4. Recall that the degree function of a graphon $W$ is defined as

(2.13) \begin{align} d_{W}(x)\;:\!=\;\int _{[0,1]}W(x,y) \,\textrm{d} y. \end{align}

Note that for $H=K_2$ , (2.9) yields

(2.14) \begin{align} t_{1}(x,K_2,W) = \mathbb{E}\left [W(U_{1},U_{2})\bigm |U_{1}=x\right ] = \int _{[0,1]}W(x,y) \,\textrm{d} y = d_{W}(x). \end{align}

Hence, the notion of $K_2$ -regularity coincides with the standard notion of degree regularity, where the degree function $ d_{W}(x)\;:\!=\;\int _{[0,1]}W(x,y) \,\textrm{d} y$ is constant a.e.
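A quick numerical illustration (not part of the paper) of the degree function and of $K_2$ -regularity: the sketch below evaluates $d_W$ on a grid for two placeholder graphons, one of which is degree regular and one of which is not.

```python
# Illustrative sketch (not from the paper): evaluating the degree function d_W of (2.13) on a
# grid.  W(x, y) = x * y is not degree regular; the 2-block graphon below (cf. (3.15) with
# p = 1/2) is degree regular, hence K_2-regular.
import numpy as np

grid = (np.arange(2000) + 0.5) / 2000                   # midpoint rule on [0, 1]

def degree(W, x):
    return W(x, grid).mean()                            # d_W(x) = int_0^1 W(x, y) dy

W_product = lambda x, y: x * y                          # d_W(x) = x / 2
W_block = lambda x, y: 0.5 * ((x <= 0.5) == (y <= 0.5)) # d_W(x) = 1 / 4

for name, W in [("x*y", W_product), ("2-block", W_block)]:
    print(name, [round(degree(W, x), 3) for x in (0.1, 0.5, 0.9)])
```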

2.3. Spectrum of graphons and 2-point conditional densities

Hereafter, we denote by $\mathcal{W}_0$ the space of all graphons, which is the collection of all symmetric, measurable functions $W \;:\;[0, 1]^2\rightarrow [0, 1]$ . We also let $\mathcal{W}_1$ be the space of all bounded, symmetric, measurable functions $W \;:\; [0, 1]^2\rightarrow [0, \infty )$ . Every graphon $W\in \mathcal{W}_0$ , or more generally $W\in \mathcal{W}_1$ , defines an operator $T_{W}\;:\;L^{2}[0,1]\rightarrow L^{2}[0,1]$ as follows:

(2.15) \begin{equation} (T_Wf)(x)=\int _0^1W(x, y)f(y)\,\textrm{d} y, \end{equation}

for each $f\in L^{2}[0,1]$ . $T_W$ is a symmetric Hilbert–Schmidt operator; thus it is compact and has a discrete spectrum, that is, it has a countable multiset of non-zero real eigenvalues, which we denote by ${\textrm{Spec}}(W)$ , with

(2.16) \begin{align} \sum _{\lambda \in{\textrm{Spec}}(W)}\lambda ^2=\iint W(x,y)^2\,\textrm{d} x\,\textrm{d} y\lt \infty . \end{align}

Moreover, a.e.,

(2.17) \begin{align} (T_{W}f)(x)=\sum _{\lambda \in{\textrm{Spec}}(W)}\lambda \langle f,\phi _{\lambda }\rangle \phi _{\lambda }(x) \end{align}

and

(2.18) \begin{align} W(x,y)=\sum _{\lambda \in{\textrm{Spec}}(W)}\lambda \phi _{\lambda }(x)\phi _{\lambda }(y), \end{align}

where $\{\phi _{\lambda }\}_{\lambda \in{\textrm{Spec}}(W)}$ denotes an orthonormal system of eigenfunctions associated with ${\textrm{Spec}}(W)$ . For a more detailed discussion on the spectral properties of graphons and their role in graph limit theory, see [25, Chapters 7, 11].
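Numerically, ${\textrm{Spec}}(W)$ can be approximated by discretising $W$ on a uniform grid and taking matrix eigenvalues; the sketch below (not from the paper) does this for the placeholder rank-one graphon $W(x,y)=xy$ , whose only non-zero eigenvalue is $\int _0^1 x^2\,\textrm{d} x=1/3$ .

```python
# Illustrative sketch (not from the paper): approximating Spec(W) by the eigenvalues of the
# m x m matrix (W(x_i, x_j)) / m, with x_i the midpoints of a uniform grid.
import numpy as np

def approx_spectrum(W, m=400, top=5):
    x = (np.arange(m) + 0.5) / m
    K = W(x[:, None], x[None, :]) / m           # discretised kernel of T_W
    eig = np.linalg.eigvalsh(K)                 # symmetric, so real eigenvalues
    return sorted(eig, key=abs, reverse=True)[:top]

if __name__ == "__main__":
    print(approx_spectrum(lambda x, y: x * y))  # ~ [0.3333, 0, 0, 0, 0]
```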

To describe the limiting distribution of $X_n(H, W)$ when $W$ is $H$ -regular, we will need to understand the spectral properties of the following graphon obtained from the 2-point conditional densities:

Definition 2.5. Given a graphon $W \in \mathcal{W}_0$ and a simple connected graph $H=(V(H), E(H))$ , the 2-point conditional graphon induced by $H$ is defined as

(2.19) \begin{align} W_{H}(x,y)=\frac{1}{2|\textrm{Aut}(H)|}\sum _{1\leq a\neq b\leq |V(H)|}t_{a, b}((x,y),H,W), \end{align}

where $t_{a, b}((x,y),H,W)$ is the $2$ -point conditional homomorphism density function of $H$ in $W$ given the vertices $(a, b)$ , as in Definition 2.1. (The normalisation factor in (2.19) is chosen for later convenience in e.g. (2.25).)

Intuitively, $W_{H}(x,y)$ can be interpreted as the homomorphism density of $H$ in $W$ containing the ‘vertices’ $x, y \in [0, 1]$ .

Note that a graphon $W$ is $H$ -regular (see Definition 2.2) if and only if the 2-point conditional graphon $W_H$ is degree regular (see Remark 2.4). This is because, for all $x \in [0, 1]$ ,

(2.20) \begin{align} \int _0^1 W_{H}(x,y)\,\textrm{d} y & = \frac{|V(H)|-1}{2|\textrm{Aut}(H)|}\sum _{a=1}^{|V(H)|}t_{a}(x,H,W), \end{align}

and the RHS of (2.20) is a constant if and only if $W$ is $H$ -regular. In fact, if $W$ is $H$ -regular, then $\frac{1}{|V(H)|} \sum _{a=1}^{|V(H)|}t_{a}(x,H,W) = t(H, W)$ a.e.; hence, the degree of $W_H$ becomes

(2.21) \begin{align} \int _0^1 W_{H}(x,y)\,\textrm{d} y = \frac{|V(H)|(|V(H)|-1)}{2|\textrm{Aut}(H)|} \cdot t(H, W) & \;:\!=\; d_{W_H}, \end{align}

for almost every $x \in [0, 1]$ . This implies that, if $W$ is $H$ -regular, then $d_{W_H}$ is an eigenvalue of the operator $T_{W_H}$ (recall (2.15)) and $\phi \equiv 1$ is a corresponding eigenvector. In this case, we will use ${\textrm{Spec}}^{-}(W_{H})$ to denote the collection ${\textrm{Spec}}(W_H)$ with the multiplicity of the eigenvalue $d_{W_H}$ decreased by $1$ . (Note that $d_{W_H}\gt 0$ by (2.21) unless $t(H,W)=0$ , or $|V(H)|=1$ ; these cases are both trivial, see Remark 2.10.)
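The 2-point conditional graphon of Definition 2.5 can be estimated by straightforward Monte Carlo, averaging the conditional homomorphism densities of Definition 2.1 over all ordered pairs of vertices; the following sketch (not from the paper) does this for the placeholder choices $H=K_3$ and $W\equiv 0.2$ , for which $W_{K_3}(x,y)=p^3/2$ (cf. (3.3) with $r=3$ ).

```python
# Illustrative sketch (not from the paper): Monte Carlo evaluation of W_H(x, y) in (2.19).
import numpy as np
from itertools import permutations

def aut_size(edges, k):
    E = {frozenset(e) for e in edges}
    return sum(all(frozenset((p[a], p[b])) in E for a, b in E) for p in permutations(range(k)))

def two_point_graphon(x, y, edges, k, W, n_mc=20_000, rng=None):
    """Monte Carlo estimate of W_H(x, y) as defined in (2.19)."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for a in range(k):
        for b in range(k):
            if a == b:
                continue
            U = rng.uniform(size=(n_mc, k))
            U[:, a], U[:, b] = x, y            # condition on U_a = x and U_b = y
            # estimate t_{a,b}((x, y), H, W) of Definition 2.1
            total += np.prod([W(U[:, u], U[:, v]) for u, v in edges], axis=0).mean()
    return total / (2 * aut_size(edges, k))

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (0, 2)]            # H = K_3
    W_const = lambda x, y: 0.2 + 0.0 * x           # W = p = 0.2 (placeholder)
    print(two_point_graphon(0.3, 0.7, triangle, 3, W_const), 0.2**3 / 2)
```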

Figure 1. The $(a, b)$ -vertex join of the graphs $H_1$ and $H_2$ .

2.4. Statement of the main result

To state our results on the asymptotic distribution of $X_{n}(H,W)$ , we need to define a few basic graph operations.

Definition 2.6. For a graph $H=(V(H),E(H))$ on the vertex set $\{1,2,\ldots, r\}$ , define

(2.22) \begin{align} E^{+}(H)=\{(a,b)\;:\;1\leq a\neq b\leq r, (a,b) \text{ or } (b,a)\in E(H)\}. \end{align}

Definition 2.7. Fix $r \geq 1$ and consider two graphs $H_{1}$ and $H_{2}$ on the vertex set $\{1,2,\cdots,r\}$ and edge sets $E(H_1)$ and $E(H_2)$ , respectively.

  • Vertex Join: For $a, b \in \{1,2,\cdots,r\}$ , the $(a, b)$ -vertex join of $H_1$ and $H_2$ is the graph obtained by identifying the $a$ -th vertex of $H_1$ with the $b$ -th vertex of $H_2$ (see Figure 1 for an illustration). The resulting graph will be denoted by

    \begin{align*} H_{1}\bigoplus _{a,b}H_{2}. \end{align*}
  • Weak Edge Join: For $(a,b) \in E^{+}(H_1)$ and $(c,d) \in E^{+}(H_2)$ , with $1 \leq a\neq b\leq r$ and $1 \leq c\neq d \leq r$ , the $(a, b), (c, d)$ -weak edge join of $H_1$ and $H_2$ is the graph obtained by identifying the vertices $a$ and $c$ and the vertices $b$ and $d$ and keeping a single edge between the two identified vertices (see Figure 2 for an illustration). The resulting graph will be denoted by

    \begin{align*} H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2} . \end{align*}
  • Strong Edge Join: For $(a,b) \in E^{+}(H_1)$ and $(c,d) \in E^{+}(H_2)$ , with $1 \leq a\neq b\leq r$ and $1 \leq c\neq d \leq r$ , the $(a, b), (c, d)$ -strong edge join of $H_1$ and $H_2$ is the multi-graph obtained by identifying the vertices $a$ and $c$ and the vertices $b$ and $d$ and keeping both the edges between the two identified vertices (see Figure 2 for an illustration). The resulting graph will be denoted by

    \begin{align*} H_{1}\bigoplus _{(a,b),(c,d)} H_{2}. \end{align*}

Figure 2. The weak and strong edge joins of the graphs $H_1$ and $H_2$ .

Remark 2.8. We note that both the weak and strong edge join operations can be extended to arbitrary $(a,b)\in V(H_{1})^{2}$ and $(c,d)\in V(H_{2})^2$ with $ a\neq b$ and $ c\neq d$ ; in the strong join we keep all edges, but in the weak join we keep the join simple by merging any resulting double edge. (Thus, if either $(a,b)\not \in E^{+}(H_{1})$ or $(c,d)\not \in E^{+}(H_{2})$ , then the weak and strong edge joins are the same graph.)

Having introduced the framework and the relevant definitions, we are now ready to state our main result regarding the asymptotic distribution of $X_n(H, W)$ , the number of copies of $H$ in the $W$ -random graph $G(n, W)$ .

Theorem 2.9. Fix a graphon $W \in \mathcal{W}_0$ and a simple graph $H=(V(H), E(H))$ with vertices labelled $V(H)=\{1, 2, \ldots, |V(H)|\}$ . Then for $X_{n}(H,W)$ as defined in (1.1) the following hold, as $n\to \infty$ :

  1. (1) For any $W$ ,

    (2.23) \begin{align} \frac{X_{n}(H,W)-\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W)}{n^{|V(H)|-\frac{1}{2}}}\overset{D}{\longrightarrow }\textsf{N}(0, \tau ^{2}_{H, W}), \end{align}
    where
    (2.24) \begin{align} \tau ^{2}_{H, W} \;:\!=\;\frac{1}{|\textrm{Aut}(H)|^{2}}\left [\sum _{1\leq a, b\leq |V(H)|}t\left (H\bigoplus _{a,b}H,W\right ) - |V(H)|^{2}t(H,W)^{2}\right ] \ge 0. \end{align}
    Moreover, $\tau ^2_{H,W}\gt 0$ if and only if W is not $H$ -regular. Thus, if W is not $H$ -regular, then $X_n(H,W)$ is asymptotically normal.
  2. (2) If W is $H$ -regular, then

    (2.25) \begin{align} \frac{X_{n}(H,W)-\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W)}{n^{|V(H)|-1}} \overset{D}{\longrightarrow }\sigma _{H,W} \cdot Z+\sum _{\lambda \in{\textrm{Spec}}^{-}(W_{H})}\lambda (Z_{\lambda }^{2}-1), \end{align}
    where $Z$ and $\{Z_{\lambda }\}_{\lambda \in{\textrm{Spec}}^{-}(W_{H})}$ all are independent standard Gaussians,
    (2.26) \begin{align} \sigma _{H,W}^{2}\;:\!=\;\frac{1}{2|\textrm{Aut}(H)|^{2}}\sum _{(a,b),(c,d)\in E^{+}(H)}\left [t\left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right )-t\left (H\bigoplus _{(a,b),(c,d)} H,W\right )\right ] \ge 0, \end{align}
    and ${\textrm{Spec}}^{-}(W_{H})$ is the multiset ${\textrm{Spec}}(W_H)$ with the multiplicity of the eigenvalue $d_{W_H}$ (recall (2.21)) decreased by $1$ .

The sum in (2.25) may be infinite, but it converges in $L^2$ and a.s. by (2.16) (see [23, Lemma 4.16]). The proof of Theorem 2.9 uses the projection method for generalised $U$ -statistics developed in Janson and Nowicki [22], which allows us to decompose $X_{n}(H,W)$ over sums of increasing complexity. (See also [20, Chapter 11.3] and [24].) The terms in the expansion are indexed by subgraphs (given by their vertices and edges) of the complete graph, of increasing sizes, and the asymptotic behaviour of $X_{n}(H,W)$ is determined by the non-zero terms indexed by the graphs of smallest size. Details of the proof are given in Section 5. Various examples are discussed in Section 3.
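Once $\sigma _{H,W}$ and (a truncation of) ${\textrm{Spec}}^{-}(W_H)$ are available, the limit law in (2.25) is easy to sample; the sketch below (not part of the paper) does exactly this, with placeholder values for $\sigma$ and the eigenvalues.

```python
# Illustrative sketch (not from the paper): drawing samples from the limit law in (2.25),
# sigma * Z + sum_lambda lambda * (Z_lambda^2 - 1), given sigma and a finite list of eigenvalues.
import numpy as np

def sample_limit(sigma, eigenvalues, size=100_000, rng=None):
    rng = np.random.default_rng(rng)
    out = sigma * rng.standard_normal(size)
    for lam in eigenvalues:
        out += lam * (rng.standard_normal(size) ** 2 - 1.0)
    return out

if __name__ == "__main__":
    samples = sample_limit(sigma=0.1, eigenvalues=[0.3, -0.1], rng=0)   # placeholder values
    # mean ~ 0, variance ~ sigma^2 + 2 * sum(lambda^2) = 0.01 + 2 * (0.09 + 0.01) = 0.21
    print(samples.mean(), samples.var())
```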

Remark 2.10. We note some trivial cases, where $X_n(H,W)$ is deterministic. First, $t(H,W)=1$ if and only if $H$ is empty (has no edge), or $W$ is complete, that is, $W\equiv 1$ . In both cases, almost surely $X_{n}(H,W)=\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}$ . Similarly, if $W$ is $H$ -free, that is, $t(H,W)=0$ , then almost surely $X_{n}(H,W)=0$ . Note also that in these cases with $t(H,W)\in{\{0,1\}}$ , we have $\overline{t}(x,H,W) = t(H,W)$ a.e., e.g. by (2.11), and thus $W$ is $H$ -regular. Theorem 2.9 is valid for these cases too (with limits 0), but is not very interesting, and we may without loss of generality exclude these cases and assume $0\lt t(H,W)\lt 1$ .

Remark 2.11. As mentioned earlier, the result in Theorem 2.9(1) has been proved recently by Féray, Méliot, and Nikeghbali [16, Theorem 21] using the machinery of mod-Gaussian convergence (see also [1, Section 8] for connections to exchangeability). They noted that the limiting distribution in [16, Theorem 21] might be degenerate, that is, $\tau _{H, W}=0$ , and called this case singular. (This is thus our $H$ -regular case.) Méliot [27] studied the (globally) singular graphons, i.e., the graphons $W$ for which $\tau _{H, W}=0$ , for all graphs $H$ . For such graphons [27] derived the order of fluctuations for the homomorphism densities, but did not identify the limiting distribution.

The main emphasis of the present paper is Theorem 2.9(2), for $H$ -regular graphons, where the more interesting non-Gaussian fluctuation emerges. Moreover, it turns out that there are non-trivial cases where also the limit in Theorem 2.9(2) is degenerate. We discuss this further in Section 4, where we give both an example of such a higher order degeneracy, and examples of graphs $H$ for which this cannot happen for any $W$ . We will also study when one of the two components of the limit (the normal and the non-normal component) vanishes. In particular, in the classical Erdős–Rényi case $W\equiv p$ , Theorem 2.9(2) applies to every $H$ with the non-normal component vanishing, so the limit is normal, which is a classical result; see further Example 3.3.

Remark 2.12. For the closely related problem of counting induced subgraphs isomorphic to $H$ , limit distributions of the type in Theorem 2.9(2) with a non-normal component occur (for special $H$ ) even in the Erdős–Rényi case $W\equiv p$ , but then with normalisation by $n^{|V(H)|-2}$ , see [2, 22]. It seems interesting to study induced subgraph counts in $G(n,W)$ for general graphons $W$ with our methods, but we have not pursued this.

Finally, it is worth mentioning that limiting distributions very similar to that in Theorem 2.9(2) also appear in the context of counting monochromatic subgraphs in uniform random colourings of sequences of dense graphs [4, 5]. Although this is a fundamentally different problem, the appearance of similar limiting objects in both situations is interesting.

3. Examples

In this section we compute the limiting distribution of $X_{n}(H,W)$ for various specific choices of $H$ and $W$ using Theorem 2.9.

Example 3.1. (Cliques) Suppose $H=K_r$ , the complete graph on $r$ vertices, for some $r \geq 2$ . This is the case that was studied in [17]. To see that Theorem 2.9 indeed recovers the main result in [17], first recall Remark 2.3, which shows that our notion of $H$ -regularity matches with the notion of $K_r$ -regularity defined in [17]. Next, note that by the symmetry of the vertices of a clique,

(3.1) \begin{align} t\left (H\bigoplus _{a,b}H,W\right ) = t\left (H\bigoplus _{1,1}H,W\right ), \end{align}

for $1\leq a, b\leq |V(H)|$ , and $|\textrm{Aut}(K_r)|= r!$ . Therefore, Theorem 2.9(1) implies, when $W$ is not $K_r$ -regular,

(3.2) \begin{align} \frac{X_{n}(K_r, W) -\binom{n}{r} t(K_r,W)}{n^{r-\frac{1}{2}}} \overset{D}{\rightarrow } \textsf{N}\left (0, \frac{1}{(r-1)!^2}\left [ t\left (K_r \bigoplus _{1, 1} K_r, W\right ) - t(K_r,W)^{2}\right ] \right ), \end{align}

which is precisely the result in [17, Theorem 1.2(b)]. For the $K_r$ -regular case, note that by the symmetry of the edges of a clique, the 2-point conditional graphon induced by $K_r$ (recall Definition 2.5) simplifies to

(3.3) \begin{align} W_{K_r}(x,y)=\frac{1}{2(r-2)!} t_{1,2}((x,y),K_r,W). \end{align}

Moreover, for all $(a, b), (c, d) \in E(K_r)$ ,

(3.4) \begin{align} t\left (K_r \mathop{\ominus }\limits _{(a,b),(c,d)} K_r, W\right ) = t\left (K_r \mathop{\ominus }\limits _{(1,2),(1,2)} K_r, W\right ), \end{align}

and similarly for the strong edge-join operation. Hence, Theorem 2.9(2) implies

(3.5) \begin{align} \frac{X_{n}(K_r, W) -\binom{n}{r} t(K_r,W)}{n^{r-1}} \overset{D}{\rightarrow }\sigma _{K_r,W} \cdot Z+\sum _{\lambda \in{\textrm{Spec}}^{-}(W_{K_r})}\lambda (Z_{\lambda }^{2}-1) \end{align}

with

(3.6) \begin{align} \sigma _{K_r,W}^{2}=\frac{1}{2(r-2)!^{2}} \left \{t\left (H\mathop{\ominus }\limits _{(1,2),(1,2)} H,W\right )-t\left (H\bigoplus _{(1,2),(1,2)} H,W\right )\right \}, \end{align}

as shown in [17, Theorem 1.2(c)].

Example 3.2. (2-Star) Suppose $H=K_{1,2}$ with the vertices labelled $\{1, 2, 3\}$ as shown in Figure 3. In this case, for any graphon $W \in \mathcal W_0$ ,

(3.7) \begin{align} t_{1}(x,K_{1,2},W)=\int _{[0,1]^2} W(x,y)W(x,z)\,\textrm{d} y\,\textrm{d} z = d_{W}(x)^2, \end{align}

where the degree function $d_{W}(x)$ is defined in (2.13), and

(3.8) \begin{align} t_{2}(x,K_{1,2},W)=t_{3}(x,K_{1,2},W)=\int _{[0,1]^2} W(x,y)W(y,z)\,\textrm{d} y\,\textrm{d} z = \int _0^1 W(x,y)d_{W}(y) \,\textrm{d} y. \end{align}

Then by Definition 2.2, (3.7) and (3.8), $W$ is $K_{1,2}$ -regular if and only if

(3.9) \begin{align} d_{W}(x)^{2} + 2\int _{0}^{1}W(x,y)d_{W}(y)dy = 3t\bigl (K_{1,2},W\bigr ), \quad \text{for a.e. }x\in [0,1] . \end{align}

In particular, if $W$ is degree regular, then the left-hand side of (3.9) is constant, and thus $W$ is $K_{1,2}$ -regular. (We conjecture that the converse holds too, but we have not verified this.)

Figure 3. The different non-isomorphic graphs that can be obtained by the vertex join of two copies of $K_{1, 2}$ (with vertices labelled $\{1, 2, 3\}$ as in the inset).

Figure 4. (a) The weak edge join of two copies of $K_{1,2}$ and (b) the strong edge join of two copies of $K_{1,2}$ .

Therefore, from Theorem 2.9 we have the following:

  • If (3.9) does not hold, then

    (3.10) \begin{align} \frac{X_{n}(K_{1, 2},W)- 3\binom{n}{3} t(K_{1,2},W)}{n^{\frac{5}{2}}}\overset{D}{\rightarrow }\textsf{N}(0, \tau ^{2}_{K_{1, 2}, W}) \end{align}
    with
    (3.11) \begin{align} \tau ^{2}_{K_{1, 2}, W} \;:\!=\;\frac{1}{4}\Big \{t(K_{1, 4},W) + 4 t(P_4, W) + 4 t(B_4, W) - 9 t(K_{1, 2},W)^{2} \Big \}, \end{align}
    where the graphs $K_{1, 4}$ , $P_4$ , and $B_4$ are as shown in Figure 3. Note that $K_{1, 4}$ is the 4-star (obtained by joining the two central vertices of the 2-stars), $P_4$ is the path with 4 edges (obtained by joining a leaf vertex of one 2-star with a leaf vertex of another), and $B_4$ is the graph obtained by joining the central vertex of one 2-star with a leaf vertex of another. For a concrete example of a graphon which is not $K_{1, 2}$ -regular, consider $W_0(x,y)\;:\!=\;xy$ . In this case, $d_{W_0}(x) = \frac 12x$ , for all $x\in [0,1]$ , and (3.9) does not hold; hence, $W_0$ is not $K_{1,2}$ -regular.
  • On the other hand, when (3.9) holds,

    (3.12) \begin{align} \frac{X_{n}(K_{1, 2},W)- 3\binom{n}{3} t(K_{1,2},W)}{n^{2}} \overset{D}{\rightarrow }\sigma _{K_{1, 2},W} \cdot Z+\sum _{\lambda \in{\textrm{Spec}}^{-}(W_{K_{1, 2}})}\lambda (Z_{\lambda }^{2}-1), \end{align}
    with
    (3.13) \begin{align} \sigma _{K_{1, 2},W}^{2}\;:\!=\;t(K_{1, 3}, W )+t(P_3,W)-t(K_{1, 3}^+,W)-t(P_3^+,W), \end{align}
    where $K_{1, 3}$ is the 3-star and $P_3$ is the path shown in Figure 4(a) (obtained by the weak edge-join of two copies of $K_{1, 2}$ using the edges $(1,2),(1,2)$ and $(1,2),(2,1)$ , respectively), and the $K_{1, 3}^{+}$ and $P_3^{+}$ are the multigraphs shown in Figure 4(b) (obtained by the strong edge-join of two copies of $K_{1, 2}$ using the edges $(1,2),(1,2)$ and $(1,2),(2,1)$ , respectively). Moreover, in this case, the 2-point conditional graphon $W_{K_{1, 2}}$ simplifies to:
    (3.14) \begin{align} W_{K_{1, 2}}(x, y)= \frac{1}{2} \left \{ W(x, y)( d_W(x) + d_W(y) ) + \int W(x, z) W(y, z) \,\textrm{d} z \right \}, \end{align}
    since $t_{1,2}(x, y, K_{1, 2}, W) = t_{1, 3}(x, y, K_{1, 2}, W)= W(x, y) d_W(x)$ and $t_{2, 3}(x, y, K_{1, 2}, W) = \int _{[0, 1]} W(x, z) W(y, z) \,\textrm{d} z$ , and similarly for the others. For a concrete example of a graphon which is $K_{1, 2}$ -regular, consider
    (3.15) \begin{align} \tilde{W}(x,y)\;:\!=\; \begin{cases} p & \text{ if }(x,y)\in \left [0, \frac{1}{2}\right ]^2 \bigcup \left [\frac{1}{2}, 1\right ]^2, \\[5pt] 0 & \text{ otherwise} . \end{cases} \end{align}
    Note that this is a 2-block graphon (with equal block sizes) taking value $p$ in the diagonal blocks and zero in the off-diagonal blocks. (One can think of this as the ‘disjoint union of two Erdős–Rényi graphons’.) It is easy to check that this graphon is degree regular, hence $K_{1, 2}$ -regular. In fact, in this case
    (3.16) \begin{align} \tilde{W}_{K_{1,2}}(x,y)= \begin{cases} \frac{3p^{2}}{4} & \text{ if }(x,y)\in \left [0, \frac{1}{2}\right ]^2 \bigcup \left [\frac{1}{2}, 1\right ]^2, \\[5pt] 0 & \text{ otherwise} . \end{cases} \end{align}
    and $\sigma _{K_{1, 2},\tilde{W}}^{2} =\frac{1}{4}p^3(1-p)$ . Moreover,
    (3.17) \begin{align}{\textrm{Spec}}(\tilde{W}_{K_{1, 2}})=\{3p^{2}/8,3p^{2}/8\}, \end{align}
    with the eigenfunctions $1$ and $\boldsymbol{1}\{[0,1/2]\} - \boldsymbol{1}\{[1/2,1]\}$ , respectively. In particular, $d_{W_{K_{1,2}}}=3p^2/8$ in agreement with (2.21). Consequently, ${\textrm{Spec}}^{-}(\tilde{W}_{K_{1, 2}})=\{3p^{2}/8\}$ .
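As a quick numerical check of (3.16)–(3.17) (not part of the paper), one can discretise $\tilde{W}_{K_{1,2}}$ on a grid and compute the eigenvalues of the resulting matrix; both leading eigenvalues come out as $3p^2/8$ .

```python
# Illustrative sketch (not from the paper): numerically recovering the spectrum (3.17) of the
# 2-point conditional graphon in (3.16) by discretising it on a uniform grid.
import numpy as np

p, m = 0.6, 400
x = (np.arange(m) + 0.5) / m
same_block = (x[:, None] <= 0.5) == (x[None, :] <= 0.5)
WK12 = np.where(same_block, 3 * p**2 / 4, 0.0)                      # the graphon in (3.16)
eig = np.linalg.eigvalsh(WK12 / m)
print(sorted(eig, key=abs, reverse=True)[:2], "vs", 3 * p**2 / 8)   # both ~ 3 p^2 / 8 = 0.135
```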

Example 3.3. (Erdős–Rényi graphs) Suppose that $W=W_p\equiv p$ for some $p\in (0,1)$ . By symmetry, $ \overline{t}(x,H,W)$ does not depend on $x$ , and thus $W_p$ is $H$ -regular for every $H$ . Furthermore, by (2.19), also the 2-point conditional graphon $W_H$ is constant, which implies (see also Proposition 4.1) that ${\textrm{Spec}}^-(W_H)=\emptyset$ and thus the limit in Theorem 2.9(2) is normal for every non-empty $H$ . (We have $\sigma ^2_{H,W}\gt 0$ by (2.26).) As said earlier, this is a classical result, see e.g. [2, 18, 19, 21, 28–30].
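The Erdős–Rényi case can also be checked by simulation. For $H=K_{1,2}$ and $W\equiv p$ , the densities entering (3.13) are $t(K_{1,3},W_p)=t(P_3,W_p)=p^3$ and $t(K_{1,3}^{+},W_p)=t(P_3^{+},W_p)=p^4$ , so $\sigma ^2_{K_{1,2},W_p}=2p^3(1-p)$ ; the Monte Carlo sketch below (not from the paper, with placeholder $n$ and $p$ ) compares this with the empirical variance of the normalised 2-star count.

```python
# Illustrative sketch (not from the paper): sanity check of the normal limit for H = K_{1,2}
# and W = p, using sigma^2 = 2 p^3 (1 - p) obtained from (3.13) as explained above.
import numpy as np

def two_stars(A):
    deg = A.sum(axis=1)
    return (deg * (deg - 1) // 2).sum()              # X_n(K_{1,2}, .) = sum_i C(d_i, 2)

rng = np.random.default_rng(0)
n, p, reps = 300, 0.3, 500
vals = []
for _ in range(reps):
    A = np.triu((rng.uniform(size=(n, n)) < p).astype(int), k=1)
    A = A + A.T
    vals.append(two_stars(A))
vals = np.array(vals, dtype=float)
mean = 3 * (n * (n - 1) * (n - 2) / 6) * p**2        # (2.8) with |Aut(K_{1,2})| = 2
print(np.var((vals - mean) / n**2), "vs", 2 * p**3 * (1 - p))
```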

4. Degeneracies of the asymptotic distribution

In this section, we will discuss the degeneracies of the asymptotic distribution when $W$ is $H$ -regular; we will throughout the section tacitly ignore the trivial cases in Remark 2.10, i.e., we assume that $0\lt t(H,W)\lt 1$ . Towards this, denote

(4.1) \begin{align} Z_n(H, W) \;:\!=\; \frac{X_{n}(H,W)-\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W)}{n^{|V(H)|-1}} . \end{align}

Theorem 2.9(2) shows that when $W$ is $H$ -regular,

(4.2) \begin{align} Z_n(H, W) \overset{D}{\rightarrow }\sigma _{H,W} \cdot Z +\sum _{\lambda \in{\textrm{Spec}}^{-}(W_{H})}\lambda (Z_{\lambda }^{2}-1), \end{align}

where $Z, \{Z_{\lambda }\}_{\lambda \in{{\textrm{Spec}}}^{-}(W_{H})}$ are all independent standard Gaussians, and $\sigma _{H,W}^{2}$ is as defined in Theorem 2.9. This raises the following natural questions:

$\bullet$ Is the limiting distribution of $Z_n(H, W)$ non-degenerate? Given the result in Theorem 2.9, it is natural to wonder whether, when $W$ is $H$ -regular, the limiting distribution of $Z_n(H, W)$ in (4.2) is always non-degenerate. This is indeed the case for cliques: if $H=K_r$ for some $r \geq 2$ , then it was shown in [17, Remark 1.6] that the limit in (4.2) is never degenerate. However, for general graphs $H$ the situation is surprisingly more complicated. It turns out that there are graphs $H$ for which there exists an $H$ -regular graphon $W$ , with $0\lt t(H, W)\lt 1$ , such that the limit in (4.2) is degenerate (see Example 4.6). Naturally this raises the question: For which graphs $H$ is the limiting distribution of $Z_n(H, W)$ always non-degenerate? In Section 4.3, we answer this question in the affirmative when $H=C_4$ is the 4-cycle and $H=K_{1, 2}$ is the 2-star.

In cases when the limit in (4.2) is non-degenerate, we can ask about the structure of $W$ when one of the components of the limit vanishes:

  • When is the limiting distribution of $Z_n(H, W)$ normal? Note from (4.2) that $Z_n(H, W)$ is asymptotically Gaussian if and only if the non-Gaussian component

    \begin{align*} \sum _{\lambda \in{\textrm{Spec}}^{-}(W_{H})}\lambda (Z_{\lambda }^{2}-1) \end{align*}
    is degenerate. We show in Proposition 4.1 that this happens precisely when the 2-point conditional graphon $W_H$ is constant a.e.
  • When is the limiting distribution of $Z_n(H, W)$ normal-free? Clearly, the limit (4.2) has no Gaussian component whenever $\sigma _{H, W}=0$ . In Theorem 4.3 we characterise the structure of such graphons when $H$ is bipartite: we show that if $H$ is bipartite, then the limit in (4.2) is normal-free if and only if $W(x,y)\in \{0,1\}$ a.e. (that is, $W$ is random-free). We also show that there are non-bipartite graphs $H$ and graphons $W$ which are not random-free for which $\sigma _{H, W}=0$ (Example 6.1).

4.1. Degeneracy of the Non-Gaussian Component

The following proposition characterises when the limit in (4.2) is Gaussian. It extends the special case $H=K_r$ which was shown in [17, Theorem 1.3].

Proposition 4.1. Let $H$ be a simple graph and let $W$ be an $H$ -regular graphon. Then the following are equivalent:

  1. (1) $Z_n(H, W) \stackrel{D}\rightarrow N(0, \sigma ^2_{H, W})$ .

  2. (2) $\sum _{\lambda \in{\textrm{Spec}}^{-}(W_{H})}\lambda (Z_{\lambda }^{2}-1)$ is degenerate.

  3. (3) ${\textrm{Spec}}^-(W_{H})=\emptyset$ .

  4. (4) $W_H(x,y)=d_{W_H}$ a.e., where $d_{W_H}=\frac{|V(H)|(|V(H)|-1)}{2 |\textrm{Aut}(H)|}\cdot t(H,W)$ is as defined in (2.21).

Proof. From (4.2) it is clear that (1), (2) and (3) are equivalent. Next, recalling the discussion following (2.21), ${\textrm{Spec}}^-(W_H)=\emptyset$ if and only if ${\textrm{Spec}}(W_H)=\{d_{W_H}\}$ ; furthermore, since $W$ is $H$ -regular, $W_H$ is degree regular and, hence, $\phi \equiv 1$ is an eigenfunction corresponding to $d_{W_H}$ . Therefore, by (2.18), if ${\textrm{Spec}}(W_H)=\{d_{W_H}\}$ , then

(4.3) \begin{align} W_H(x,y)= d_{W_H}\phi (x)\phi (y) = d_{W_H} \qquad \text{a.e.} \end{align}

Conversely, $W_H(x,y)=d_{W_H}$ a.e. implies that $d_{W_H}$ is the only non-zero eigenvalue of $T_{W_H}$ , and thus ${\textrm{Spec}}^-(W_H)=\emptyset$ . This establishes that (3) and (4) are equivalent.

4.2. Degeneracy of the Gaussian Component

The Gaussian component in the limit (4.2) is degenerate when $\sigma _{H,W}^2 = 0$ . To study the structure of such graphons, we need a few definitions. For a graph $F=(V(F), E(F))$ and $S\subseteq V(F)$ , the neighbourhood of $S$ in $F$ is $N_F(S)=\{v \in V(F) \;:\; \exists u \in S \text{ such that } (u, v) \in E(F)\}$ . Moreover, for $u, v \in V(F)$ , $F\backslash \{u, v\}$ is the graph obtained by removing the vertices $u, v$ and all the edges incident to them. For notational convenience, we introduce the following definition:

Definition 4.2. Let $H$ be a labelled finite simple graph and $W$ a graphon. Then, for $1 \leq u \ne v \leq |V(H)|$ , the function $t_{u, v}^{-}(\cdot, \cdot, H, W) \;:\; [0, 1]^2 \rightarrow [0, 1]$ is defined as

(4.4) \begin{align} & t_{u, v}^{-}(x, y, H, W) \notag \\[5pt] &= \int _{[0,1]^{|V(H)|-2}} \prod _{r \in N_H(u)\backslash \{v\}}W(x, z_r) \prod _{s \in N_H(v) \backslash \{u\}} W(y, z_s) \prod _{(r, s) \in E(H\backslash \{u, v\}) } W(z_r, z_s) \prod _{r \notin \{u, v\}} \,\textrm{d} z_r . \end{align}

Thus, if $(u,v)\in E(H)$ , then

(4.5) \begin{align} t_{u, v}(x, y, H, W) = W(x,y) t_{u, v}^{-}(x, y, H, W) . \end{align}

Note that

(4.6) \begin{align} \sigma _{H,W}^2 & = c_H\sum _{(a,b),(c,d)\in E^{+}(H)} \int t_{a, b}^{-}(x, y, H, W) t_{c, d}^{-}(x, y, H, W) W(x,y)(1-W(x,y)) \,\textrm{d} x \,\textrm{d} y, \end{align}

where $c_H\;:\!=\; \frac{1}{2 |\textrm{Aut} (H)|^2}$ . (This follows from (2.26) and (4.5), since for each pair $((a,b),(c,d))$ the weak and strong edge joins differ only in whether the identified edge contributes a factor $W(x,y)$ or $W(x,y)^2$ .) It is clear from (4.6) that if $W$ is random-free, then $\sigma _{H,W}^2=0$ and hence, if $W$ is $H$ -regular, the asymptotic distribution does not have a normal component. Interestingly, the converse is also true whenever $H$ is bipartite. This is formulated in the following theorem:

Theorem 4.3. If $H$ is a non-empty bipartite graph with $t(H,W)\gt 0$ , then $\sigma _{H,W}^2 = 0$ if and only if $W$ is random-free.

The proof of Theorem 4.3 is given in Section 6. It entails showing, using the bipartite structure of $H$ , that for almost every $(x, y)$ such that $W(x, y) \in (0, 1)$ , we have $t_{a, b}^{-}(x, y, H, W) \gt 0$ , for $a\neq b\in V(H)$ such that $(a,b)\in E(H)$ . Consequently, from (4.6), $\sigma _{H,W}^2 \gt 0$ whenever the set $\{(x, y) \in [0, 1]^2 \;:\; W(x, y) \in (0, 1)\}$ has positive Lebesgue measure. An immediate consequence of Theorem 4.3 is that for a bipartite graph $H$ and an $H$ -regular $W$ , the asymptotic distribution of $Z_n(H, W)$ is non-degenerate whenever $W$ is not random-free.

Remark 4.4. The bipartite assumption in Theorem 4.3 is necessary, in the sense that there exist non-bipartite graphs $H$ and graphons $W$ with $t(H,W)\gt 0$ such that $\sigma _{H,W}^2 = 0$ , but $W$ is not random-free. We discuss this in Example 6.1.

For non-bipartite $H$ , we note only the following, which extends [17, Proposition 1.5].

Proposition 4.5. We have $\sigma ^2_{H,W}=0$ if and only if $W(x,y)=1$ for a.e. $(x,y)$ such that $t_{a, b}(x, y, H, W)\gt 0$ for some $(a,b)\in E^+(H)$ .

Proof. An immediate consequence of (4.6) and (4.5).

4.3. Degeneracy of the Limit in (4.2)

We begin with an example where the limit in (4.2) is degenerate.

Example 4.6. Let $H=K_{1,3}$ be the 3-star on vertex set $\{1,2,3,4\}$ , where the root (non-leaf) vertex is labelled $1$ . Further, suppose that $W$ is the complete balanced bipartite graphon:

(4.7) \begin{align} W(x,y)\;:\!=\; \begin{cases} 0 & \text{ if }(x,y)\in \left [0, \frac{1}{2}\right ]^2 \bigcup \left (\frac{1}{2}, 1\right ]^2, \\[5pt] 1 & \text{ otherwise} . \end{cases} \end{align}

To begin with note that $d_{W}(x)=\int _{0}^{1}W(x,y) \,\textrm{d} y=\frac{1}{2}$ , for all $x\in [0,1]$ . Therefore,

(4.8) \begin{align} \frac{1}{4}\sum _{i=1}^{4}t_{i}\left (x,K_{1,3},W\right ) &=\frac{1}{4}\left [d_W(x)^3+3\int W(x,t)d_W(t)^2 \,\textrm{d} t\right ]= \frac{1}{8} . \end{align}

This establishes that $W$ is $K_{1,3}$ -regular, and that $t(K_{1,3},W)=1/8$ . Next, since $W$ is $\{0,1\}$ -valued, by Theorem 4.3, $\sigma _{K_{1,3},W}^{2}=0$ . Hence, to show that the limit distribution of $Z_n(K_{1, 3}, W)$ is degenerate it suffices to check that $\sum _{\lambda \in \text{Spec}^{-}(W_{K_{1,3}})}\lambda ^{2} = 0 .$ By Proposition 4.1, this is equivalent to showing

(4.9) \begin{align} W_{K_{1,3}}(x,y) = \frac{12}{ 2\left |{\textrm{Aut}}(K_{1,3})\right |}t\left (K_{1,3},W\right ) = \frac{1}{8}, \end{align}

for a.e. $(x,y)\in [0,1]^2$ (since $ |{\textrm{Aut}}(K_{1,3}) | = 3! = 6$ ). Towards this recall (2.19), which yields

(4.10) \begin{align} W_{K_{1,3}}(x,y) & =\frac{1}{2|\textrm{Aut}(K_{1,3})|}\sum _{1 \leq a\neq b \leq 4}t_{a,b}\left (x,y,K_{1,3},W\right ) \nonumber \\[5pt] & =\frac{1}{12}\bigg [3W(x,y)\int W(x,z)W(x,t) \,\textrm{d} z \,\textrm{d} t + 3W(x,y)\int W(y,z)W(y,t)\,\textrm{d} z \,\textrm{d} t \nonumber \\[5pt] & \hskip 5em + 6\int W(x,t)W(y,t)W(z,t) \,\textrm{d} z \,\textrm{d} t\bigg ] \nonumber \\[5pt] &=\frac{1}{12}\left [3W(x,y)d_W(x)^2 + 3W(x, y)d_W(y)^2 + 6 \int d_W(t) W(x,t)W(y, t) \,\textrm{d} t\right ] \nonumber \\[5pt] &=\frac{1}{12}\left [\frac{3}{2}W(x,y) + 3\int W(x,t)W(y, t) \,\textrm{d} t\right ]. \end{align}

Now, observe that if $W(x,y) = 0$ then $\int W(x,t)W(y, t) \,\textrm{d} t = \frac{1}{2}$ , which implies, from (4.10), $W_{K_{1,3}}(x,y) = \frac{1}{8}$ . Further, when $W(x,y) = 1$ , then $\int W(x,t)W(y, t) \,\textrm{d} t = 0$ , which again implies $W_{K_{1,3}}(x,y) = \frac{1}{8}$ . Thus, for all $(x,y)\in [0,1]^2$ , $W_{K_{1,3}}(x,y)=1/8$ , which establishes (4.9). This shows that the limiting distribution of $Z_n(K_{1, 3}, W)$ is degenerate for $W$ as in (4.7).

In fact, in this example, we can easily find the asymptotic distribution of $X_{n}(K_{1,3}, W)$ directly. Let $M\;:\!=\;\bigl |{\{i:U_i\le \frac 12\}}\bigr |\sim \operatorname{Bin}\bigl (n,\frac 12\bigr )$ , and $\hat M\;:\!=\;M-n/2$ . Then

(4.11) \begin{align} X_n(K_{1,3},W)&=M\binom{n-M}3 + (n-M)\binom M3 \notag \\[5pt] & =\frac{1}{6}\Bigl (M(n-M)\bigl ((n-M)^2-3(n-M)+2\bigr )+(n-M)M\bigl (M^2-3M+2\bigr )\Bigr ) \notag \\[5pt] & =\frac{1}{6}M(n-M)\bigl ((n-M)^2+M^2-3n+4\bigr ) \notag \\[5pt] & =\frac{1}{6}\Bigl (\frac{n}2+\hat M\Bigr )\Bigl (\frac{n}2-\hat M\Bigr ) \Bigl (\Bigl (\frac{n}2-\hat M\Bigr )^2+\Bigl (\frac{n}2+\hat M\Bigr )^2-3n+4\Bigr ) \notag \\[5pt] & =\frac{1}{6}\Bigl (\Bigl (\frac{n}2\Bigr )^2-\hat M^2\Bigr ) \Bigl (2\Bigl (\frac{n}2\Bigr )^2+2\hat M^2 -3n+4\Bigr ) \notag \\[5pt] & =\frac{1}{3}\Bigl (\Bigl (\frac{n}2\Bigr )^4-\hat M^4\Bigr ) -\frac{3n-4}{6}\Bigl (\Bigl (\frac{n}2\Bigr )^2-\hat M^2\Bigr ). \end{align}

Hence, subtracting the mean and using (2.8),

(4.12) \begin{align} \frac{X_n(K_{1,3},W)-\frac{(n)_4}{48}}{n^2}& =-\frac{\hat M^4-\mathbb{E} \hat M^4}{3n^2} +\frac{3n-4}{6n}\cdot \frac{\hat M^2-\mathbb{E}\hat M^2}{n}. \end{align}

Since the central limit theorem yields $\hat M/n^{1/2}\overset{D}{\rightarrow } Z/2$ , with all moments, where $Z\sim N(0,1)$ , (4.12) yields

(4.13) \begin{align} \frac{X_n(K_{1,3},W)-\frac{(n)_4}{48}}{n^2}& \overset{D}{\longrightarrow } -\frac{Z^4-3}{48} +\frac{Z^2-1}8 =-\frac{1}{48}\bigl (Z^4-6 Z^2+3\bigr ) =-\frac{1}{48}h_4(Z), \end{align}

where $h_4$ is the 4th Hermite polynomial (using the normalisation in, e.g. [20, Example 3.18]). Consequently, in this example, the correct normalisation is by $n^2=n^{|V(H)|-2}$ , and the limit distribution is given by a fourth-degree polynomial of a Gaussian variable.
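The closed form (4.11) makes this example easy to check by simulation: $X_n(K_{1,3},W)$ is a function of $M\sim \operatorname{Bin}(n,\frac 12)$ alone, so one can compare the $n^2$ -normalised statistic in (4.12) with the limit $-h_4(Z)/48$ of (4.13). The sketch below (not part of the paper, with placeholder $n$ and sample sizes) does this.

```python
# Illustrative sketch (not from the paper): Monte Carlo check of (4.11)-(4.13).
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2000, 200_000
M = rng.binomial(n, 0.5, size=reps).astype(float)   # M = #{i : U_i <= 1/2} ~ Bin(n, 1/2)
c3 = lambda k: k * (k - 1) * (k - 2) / 6             # C(k, 3)
X = M * c3(n - M) + (n - M) * c3(M)                  # X_n(K_{1,3}, W), cf. (4.11)
mean = n * (n - 1) * (n - 2) * (n - 3) / 48          # E X_n = (n)_4 / 48, cf. (2.8)
emp = (X - mean) / n**2                              # the statistic in (4.12)
Z = rng.standard_normal(size=reps)
lim = -(Z**4 - 6 * Z**2 + 3) / 48                    # -h_4(Z)/48, cf. (4.13)
print("variances:", emp.var(), lim.var())            # both close to Var(h_4(Z))/48^2 = 1/96
print("deciles:", np.quantile(emp, [0.1, 0.5, 0.9]), np.quantile(lim, [0.1, 0.5, 0.9]))
```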

The example above raises the question for which graphs $H$ is the limiting distribution of $Z_n(H, W)$ in Theorem 2.9(2) non-degenerate for all graphons $W$ . In the following we will show that the limit is always non-degenerate when $H=C_{4}$ or $H=K_{1, 2}$ (the 4-cycle and the 2-star). Our proofs use the specific structure of the 4-cycle and 2-star, and it remains unclear for what other graphs one can expect the non-degeneracy result to hold.

Non-Degeneracy of the Limit for the 4-Cycle: We begin by deriving explicit conditions for degeneracy of the two components of the limiting distribution of $Z_n(C_4, W)$ . (For the normal part, we can also use Theorem 4.3, but we find it interesting to first make a direct evaluation of the condition $\sigma ^2_{H,W}=0$ .) Towards this define:

(4.14) \begin{align} U_1(x,y)\;:\!=\;\int _{[0, 1]} W(x,s)W(y,s) \,\textrm{d} s \quad \text{and}\quad U_{2}(x,y)\;:\!=\;\int _{[0, 1]^2} W(x,s)W(s,t)W(y,t) \,\textrm{d} s \,\textrm{d} t. \end{align}

Lemma 4.7. Suppose $W$ is a $C_{4}$ -regular graphon with $t(C_{4},W)\gt 0$ . Then the following hold:

  1. (a) ${\textrm{Spec}}^-(W_{C_4})=\emptyset$ if and only if

    (4.15) \begin{align} U_1(x, y)^2 + 2W(x,y)U_{2}(x,y) = 3 t(C_4, W), \quad \text{a.e. } (x, y) \in [0,1]^2. \end{align}
  2. (b) $\sigma ^2_{C_4,W}=0$ if and only if

    (4.16) \begin{align} \int _{[0, 1]^2} U_2^{2}(x,y) \left (W(x, y) - W^{2}(x,y) \right ) \,\textrm{d} x \,\textrm{d} y =0. \end{align}

As a consequence, the limit of $Z_{n}(C_{4},W)$ in (4.2) is degenerate if and only if (4.15) and (4.16) hold.

Proof. Since all the vertices of the 4-cycle are symmetric, from Definition 2.2 we have the following: The graphon $W$ is $C_4$ -regular if

(4.17) \begin{align} \int _{[0, 1]^3} W(x,y)W(y,z)W(z,t)W(t,x) \,\textrm{d} y \,\textrm{d} z \,\textrm{d} t=t(C_4, W) \text{ a.e. } x \in [0, 1] . \end{align}

Moreover, since $|\textrm{Aut}(C_4)|=8$ , by Definition 2.5, the 2-point conditional graphon induced by $C_4$ is given by

(4.18) \begin{align} W_{C_4}(x, y)=\frac{ 4 U_1(x, y)^2 + 8 W(x, y) U_2(x, y) }{2 |\textrm{Aut}(C_4)|} = \frac{ U_1(x, y)^2 + 2 W(x, y) U_2(x, y) }{4}, \end{align}

where $U_1, U_2$ are as defined in (4.14). Hence, Proposition 4.1 shows that ${\textrm{Spec}}^-(W_{C_4})=\emptyset$ if and only if (4.15) holds.

Next, since all the edges of $C_4$ are symmetric, the weak edge join of 2 copies of $C_4$ is always isomorphic to graph $F_1$ in Figure 5(a). Similarly, the strong edge join of 2 copies of $C_4$ is always isomorphic to graph $F_2$ in Figure 5(b).

Figure 5. (a) The weak and (b) the strong edge join of two copies of $C_4$ .

Therefore, using $|E^+(C_4)|=8$ and $|\textrm{Aut}(C_4)|=8$ in (2.26), we find that $ \sigma ^2_{C_4, W}$ simplifies to

(4.19) \begin{align} \sigma ^2_{C_4, W} & = \frac{1}{2} \left ( t(F_1, W) - t(F_2, W) \right ) \nonumber \\[5pt] & = \frac{1}{2} \left (\int _{[0, 1]^2} W(x,y)U_2^{2}(x,y) \,\textrm{d} x \,\textrm{d} y -\int _{[0, 1]^2} W^{2}(x,y)U_2^{2}(x,y) \,\textrm{d} x \,\textrm{d} y \right ). \end{align}

Hence,

(4.20) \begin{align} \sigma ^2_{C_4, W} = 0 \iff \int _{[0,1]^2} U_2^{2}(x,y) \left ( W(x, y) - W^{2}(x,y) \right ) \,\textrm{d} x \,\textrm{d} y = 0, \end{align}

which completes the proof.
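As a quick illustration of Lemma 4.7 (not part of the original argument), consider the constant graphon $W\equiv p$ with $p\in (0,1)$ , which is $C_4$ -regular by Example 3.3. Here $U_1(x,y)=p^2$ and $U_2(x,y)=p^3$ , so the left-hand side of (4.15) equals $p^4+2p\cdot p^3=3p^4=3t(C_4,W)$ and ${\textrm{Spec}}^-(W_{C_4})=\emptyset$ , while the integral in (4.16) equals $p^{6}\cdot p(1-p)\gt 0$ ; hence $\sigma ^2_{C_4,W}\gt 0$ and the limit of $Z_n(C_4,W)$ in (4.2) is a non-degenerate normal, in agreement with Example 3.3 and Theorem 4.8.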

The following theorem shows that (if we ignore the trivial cases in Remark 2.10), whenever $W$ is $C_{4}$ -regular, the limiting distribution of $Z_n(C_4, W)$ is always non-degenerate. Hence, for $H=C_4$ , Theorem 2.9(1) or (2) will give a non-degenerate limit. By Lemma 4.7, Theorem 4.8 is equivalent to the claim that whenever $W$ is $C_{4}$ -regular, (4.15) and (4.16) cannot occur simultaneously. The proof of Theorem 4.8 is given in Section 7.

Theorem 4.8. Suppose $W$ is a $C_{4}$ -regular graphon with $t(C_{4},W)\gt 0$ and $W$ is not identically $1$ a.e. Then, the limit of $Z_{n}(C_{4},W)$ in (4.2) is non-degenerate.

Non-Degeneracy of the Limit for the 2-Star: As in Lemma 4.7, we first derive conditions which are equivalent to degeneracy of the two components of the limiting distribution of $Z_n(K_{1, 2}, W)$ .

Lemma 4.9. Suppose $W$ is a $K_{1, 2}$ -regular graphon with $t(K_{1, 2},W)\gt 0$ . Then the following hold:

  1. (a) ${\textrm{Spec}}^-(W_{K_{1, 2}})=\emptyset$ if and only if

    (4.21) \begin{align} W(x,y)\left (d_{W}(x)+d_{W}(y)\right ) + U_1(x, y) = 3\int d_{W}^{2}(z)\,\textrm{d} z, \quad \text{a.e. }(x,y)\in [0,1]^{2}, \end{align}
    where $U_1(x, y)$ is as defined in (4.14).
  2. (b) $\sigma ^2_{K_{1, 2}, W}=0$ if and only if

    (4.22) \begin{align} \int \left \{ d_W(x) d_W(y) + d_W(x)^2 \right \} W(x,y)(1-W(x,y))\,\textrm{d} x \,\textrm{d} y = 0 . \end{align}

As a consequence, the limit of $Z_{n}(K_{1, 2},W)$ in (4.2) is degenerate if and only if (4.21) and (4.22) hold.

Proof. From (3.14) the 2-point conditional graphon induced by $K_{1, 2}$ is given by

(4.23) \begin{align} W_{K_{1, 2}}(x,y) = \frac{1}{2}\left \{W(x,y)\left (d_{W}(x)+d_{W}(y)\right ) + U_1(x, y) \right \} . \end{align}

Furthermore, (2.21) yields $d_{W_{K_{1, 2}}}=\frac{6}{4}t(K_{1,2},W)=\frac{3}{2}\int _0^1 d_W(x)^2\,\,\textrm{d} x.$ Hence, Proposition 4.1 shows that ${\textrm{Spec}}^-(W_{K_{1, 2}})=\emptyset$ if and only if (4.21) holds.

Furthermore, recalling (3.13) we have,

(4.24) \begin{align} \sigma _{K_{1, 2},W}^{2} = t\left (K_{1,3},W\right ) + t(P_3,W)-t\bigl (K_{1, 3}^{+},W\bigr )-t\left (P_3^{+},W\right ) \end{align}

where the graphs $K_{1,3},K_{1,3}^{+},P_3$ and $P_3^{+}$ are as shown in Figure 4. By evaluating the densities in (4.24), we obtain

(4.25) \begin{align} \sigma _{K_{1, 2},W}^{2} = \int \left \{ d_W(x) d_W(y) + d_W(x)^2 \right \} W(x,y)(1-W(x,y))\,\textrm{d} x \,\textrm{d} y . \end{align}

This shows that $\sigma _{K_{1, 2},W}^{2} =0$ is equivalent to (4.22).

The following theorem is the counterpart of Theorem 4.8 for $K_{1,2}$ and shows that for $H=K_{1,2}$ , Theorem 2.9(1) or (2) will give a non-degenerate limit. By Lemma 4.9, Theorem 4.10 is equivalent to the claim that whenever $W$ is $K_{1,2}$ -regular, (4.21) and (4.22) cannot occur simultaneously. The proof of Theorem 4.10 is given in Section 8.

Theorem 4.10. Suppose $W$ is a $K_{1, 2}$ -regular graphon with $t(K_{1, 2},W)\gt 0$ and $W$ is not identically $1$ a.e. Then, the limit of $Z_{n}(K_{1, 2},W)$ in (4.2) is non-degenerate.

5. Proof of Theorem 2.9

Fix a graphon $W \in \mathcal{W}_0$ and a non-empty simple graph $H=(V(H), E(H))$ with vertices labelled $V(H)=\{1, 2, \ldots, |V(H)|\}$ and recall the definition of $X_{n}(H,W)$ from (1.1). To express $X_{n}(H,W)$ as a generalised $U$ -statistic note that

(5.1) \begin{align} X_{n}(H,W)=\sum _{1\leq i_{1}\lt \cdots \lt i_{|V(H)|}\leq n}f(U_{i_{1}},\cdots, U_{i_{|V(H)|}},Y_{i_{1}i_{2}},\cdots, Y_{i_{|V(H)|-1}i_{|V(H)|}}) \end{align}

where $\mathscr{G}_H \;:\!=\; \mathscr{G}_H(\{1, 2, \ldots, |V(H)|\})$ and

(5.2) \begin{align} f(U_{1},\cdots, U_{|V(H)|},Y_{12},\cdots, Y_{|V(H)|-1 |V(H)|})=\sum _{H'\in \mathscr{G}_H}\prod _{(a, b) \in E(H')} \textbf{1}\left \{Y_{ab}\leq W(U_{a},U_{b}) \right \}. \end{align}

This is exactly in the framework of generalised $U$ -statistics considered in [22]. Therefore, we can now orthogonally expand the function $f$ as a sum over subgraphs of the complete graph as explained in the section below.

5.1. Orthogonal Decomposition of Generalized $U$ -Statistics

We recall some notations and definitions from [22]. Suppose $\{U_{i} \;:\; 1 \leq i \leq n\}$ and $\{Y_{ij} \;:\; 1\leq i\lt j \leq n\}$ are i.i.d. sequences of $U[0,1]$ random variables. Denote by $K_n$ the complete graph on the set of vertices $\{1, 2, \ldots, n\}$ and let $G= (V(G), E(G))$ be a subgraph of $K_{n}$ . Let $\mathcal{F}_{G}$ be the $\sigma$ -algebra generated by the collections $\{U_{i}\}_{i\in V(G)}$ and $\{Y_{ij}\}_{ij\in E(G)}$ and let $L^{2}(G)=L^2(\mathcal{F}_G)$ be the space of all square integrable random variables that are functions of $\{U_{i} \;:\; i\in V(G)\}$ and $\{Y_{ij} \;:\; ij\in E(G)\}$ . Now, consider the following subspace of $L^{2}(G)$ :

(5.3) \begin{align} M_{G}\;:\!=\;\{Z\in L^{2}(G) \;:\; \mathbb{E}[ZV]=0\text{ for every }V\in L^{2}(H)\text{ such that }H\subset G\}. \end{align}

(For the empty graph, $M_{\emptyset }$ is the space of all constants.) Equivalently, $Z\in M_{G}$ if and only if $Z\in L^{2}(G)$ and

(5.4) \begin{align} \mathbb{E}\left [Z\mid U_{i},Y_{ij} \;:\; i \in V(H), (i, j) \in E(H)\right ] = 0, \quad \text{for all } H\subset G. \end{align}

Then, we have the orthogonal decomposition [Reference Janson and Nowicki22, Lemma 1]

(5.5) \begin{align} L^{2}(G)=\bigoplus _{H\subseteq G} M_H, \end{align}

that is, $L^{2}(G)$ is the orthogonal direct sum of $M_{H}$ for all subgraphs $H\subseteq G$ . This allows us to decompose any function in $L^{2}(G)$ as the sum of its projections onto $M_{H}$ for $H\subseteq G$ . For any closed subspace $M$ of $L^2(K_n)$ , denote the orthogonal projection onto $M$ by $P_M$ . Then, in particular, for $f$ as in (5.2), we have the decomposition

(5.6) \begin{align} f=\sum _{H\subseteq G} f_{H}, \end{align}

where $f_{H}=P_{M_H}f$ is the orthogonal projection of $f$ onto $M_{H}$ . Further, for $1 \leq s \leq |V(H)|$ , define

(5.7) \begin{align} f_{(s)}\;:\!=\;\sum _{H\subseteq G \;:\; |V(H)|=s} f_{H} . \end{align}

The smallest positive $d$ such that $f_{(d)}\neq 0$ is called the principal degree of $f$ . The asymptotic distribution of $X_n(H, W)$ depends on the principal degree of $f$ and the geometry of the subgraphs which appear in its decomposition.

For any graph $G\subseteq K_n$ , the orthogonal projection onto $L^2(G)=L^2(\mathcal{F}_G)$ equals the conditional expectation $\mathbb{E}(\cdot \mid \mathcal{F}_G)$ , i.e.,

(5.8) \begin{align} P_{L^2(G)}=\mathbb{E}[\cdot \mid \mathcal{F}_G]. \end{align}

Moreover, by (5.5), we have

(5.9) \begin{align} P_{L^{2}(G)}=\sum _{H\subseteq G} P_{M_H}. \end{align}

The equations (5.8)–(5.9) enable us to express any $P_{M_H}$ as a linear combination of conditional expectations. We will do this explicitly for the simplest cases in lemmas below.

5.2. Proof of Theorem 2.9(1)

Recall the definition of the function $f$ from (5.2) and consider its decomposition as in (5.6). Then (5.7) for $s=1$ gives,

(5.10) \begin{align} f_{(1)}=\sum _{a=1}^{|V(H)|}f_{K_{\{a\}}}, \end{align}

where $K_{\{a\}}$ is the graph with the single vertex $a$ and $f_{K_{\{a\}}}$ is the projection of $f$ onto the space $M_{K_{\{a\}}}$ , for $1 \leq a \leq |V(H)|$ . We will calculate $f_{{K_{{\{a\}}}}}$ using the following lemma, which we state for general functions $F$ .

Lemma 5.1. For $1 \leq a \leq |V(H)|$ , and any $F\in L^2$ , the projection of $F$ onto the space $M_{K_{\{a\}}}$ is given by

(5.11) \begin{align} F_{K_{\{a\}}}=\mathbb{E}\left [F\mid U_{a}\right ]-\mathbb{E}[F]. \end{align}

Proof. By (5.9) and (5.8),

(5.12) \begin{align} F_{{K_{{\{a\}}}}}\;:\!=\; P_{ M_{{K_{{\{a\}}}}}} F = P_{L^2({K_{{\{a\}}}})}F-P_{M_\emptyset }F = \mathbb{E}[F\mid U_a]-\mathbb{E}[F]. \end{align}

Applying Lemma 5.1 to $f$ defined in (5.2), we obtain

(5.13) \begin{align} f_{K_{\{a\}}} &=\sum _{H'\in \mathscr{G}_H}\mathbb{E}\left [\prod _{(b, c) \in E(H')} \textbf{1}\left \{Y_{bc}\leq W(U_{b},U_{c})\right \}\,\middle |\, U_{a}\right ]-\mathbb{E}[f] \notag \\[5pt] &=\sum _{H'\in \mathscr{G}_H}\mathbb{E}\left [\prod _{(b, c)\in E(H')}W(U_{b},U_{c})\, \middle |\, U_{a}\right ]-\mathbb{E}[f]\notag \\[5pt] &=\sum _{H'\in \mathscr{G}_H}t_{a}(U_{a},H',W)-\mathbb{E}[f], \end{align}

where the last step follows from the definition of the 1-point conditional homomorphism density function (recall Definition 2.1). Then from (5.10),

(5.14) \begin{align} f_{(1)}=\sum _{a=1}^{|V(H)|}\left (\sum _{H'\in \mathscr{G}_H}t_{a}(U_{a},H',W)-\mathbb{E}[f]\right ). \end{align}

We now proceed to compute $\operatorname{Var} f_{(1)}$ .

For this, we need the following combinatorial identity.

Lemma 5.2. For the vertex join operation $\bigoplus _{a,b}$ as in Definition 2.7 the following holds:

(5.15) \begin{align} |\mathscr{G}_H|^{2}\sum _{1\leq a,b\leq |V(H)|}t\left (H\bigoplus _{a,b} H, W\right ) &=|V(H)|^{2}\sum _{H_{1},H_{2}\in \mathscr{G}_H}t\left (H_{1}\bigoplus _{1,1}H_{2},W\right ). \end{align}

Proof. For any permutation $\phi \;:\; V(H)\rightarrow V(H)$ , we define the permuted graph $\phi (H)\;:\!=\;(\phi (V(H)), \phi (E(H)))$ , where $\phi (V(H))=\{\phi (a) \;:\; 1\leq a \leq |V(H)|\}$ and $\phi (E(H))=\{(\phi (a), \phi (b)) \;:\; (a, b) \in E(H)\}$ .

First, fix $(a,b)\in V(H)^{2}$ and consider two permutations, $\phi _{a} \;:\; V(H)\rightarrow V(H)$ and $\phi _{b} \;:\; V(H)\rightarrow V(H)$ such that $\phi _{a}(a)=\phi _{b}(b)=1$ . Then

(5.16) \begin{align} \sum _{1\leq a,b\leq |V(H)|}\sum _{H_{1},H_{2}\in \mathscr{G}_H}t\left (H_{1}\bigoplus _{a,b}H_{2},W\right ) &=\sum _{1\leq a,b\leq |V(H)|}\sum _{H_{1},H_{2}\in \mathscr{G}_H}t\left (\phi _{a}(H_{1})\bigoplus _{1,1}\phi _{b}(H_{2}),W\right ) \nonumber \\[5pt] &=\sum _{1\leq a,b\leq |V(H)|}\sum _{H_{1},H_{2}\in \mathscr{G}_H}t\left (H_{1}\bigoplus _{1,1}H_{2},W\right ) \nonumber \\[5pt] &=|V(H)|^{2}\sum _{H_{1},H_{2}\in \mathscr{G}_H}t\left (H_{1}\bigoplus _{1,1}H_{2},W\right ), \end{align}

where the second equality follows, since the map $(H_1, H_2) \rightarrow (\phi _{a}(H_{1}),\phi _{b}(H_{2}))$ is a bijection from $\mathscr{G}_H^{2}$ to $\mathscr{G}_H^{2}$ , for all $1 \leq a, b \leq |V(H)|$ .

Next, fix $H_{1},H_{2}\in \mathscr{G}_H$ . Then consider isomorphisms $\phi _{1}, \phi _{2} \;:\; V(H)\rightarrow V(H)$ such that $\phi _{1}(H_{1})=H$ and $\phi _{2}(H_{2})=H$ . Thus,

(5.17) \begin{align} \sum _{H_{1},H_{2}\in \mathscr{G}_H}\sum _{1\leq a, b\leq |V(H)|}t\left (H_{1}\bigoplus _{a,b}H_{2},W\right ) &=\sum _{H_{1},H_{2}\in \mathscr{G}_H}\sum _{1\leq a, b\leq |V(H)|}t\left (H\bigoplus _{\phi _{1}(a),\phi _{2}(b)}H,W\right ) \nonumber \\[5pt] &=\sum _{H_{1},H_{2}\in \mathscr{G}_H}\sum _{1\leq a, b\leq |V(H)|}t\left (H\bigoplus _{a,b}H, W \right ) \nonumber \\[5pt] &=|\mathscr{G}_H|^{2}\sum _{1\leq a, b\leq |V(H)|}t\left (H\bigoplus _{a,b}H, W\right ). \end{align}

Here, the second equality follows since $(a, b) \rightarrow (\phi _{1}(a),\phi _{2}(b))$ is a bijection from $V(H)^2$ to $V(H)^2$ .

Combining (5.16) and (5.17) the identity in (5.15) follows.

Lemma 5.3.

(5.18) \begin{align} \operatorname{Var}[f_{(1)}]& =|V(H)| |\mathscr{G}_H|^{2} \left \{ \frac{1}{|V(H)|^2} \sum _{1\leq a,b\leq |V(H)|}t\left (H\bigoplus _{a,b}H,W\right ) - t(H,W)^{2} \right \} \notag \\[5pt] &= \frac{|V(H)|!\,(|V(H)|-1)!}{|\textrm{Aut}(H)|^2} \left \{ \sum _{1\leq a,b\leq |V(H)|}t\left (H\bigoplus _{a,b}H,W\right ) -{|V(H)|^2} t(H,W)^{2} \right \} . \end{align}

Proof. Recalling (5.14) gives, since the terms in the outer sum there are independent,

(5.19) \begin{align} \text{Var}[f_{(1)}]=\sum _{a=1}^{|V(H)|}\text{Var}\left [\sum _{H'\in \mathscr{G}_H}t_{a}(U_{a},H',W)\right ]. \end{align}

Consider the term corresponding to $a=1$ in the sum above. For any $H_{1},H_{2}\in \mathscr{G}_H$ ,

(5.20) \begin{align} \mathbb{E}\left [t_{1}(U_{1},H_{1},W)t_{1}(U_{1},H_{2},W)\right ] &=t\left (H_{1}\bigoplus _{1,1}H_{2},W\right ) . \end{align}

Hence,

(5.21) \begin{align} \text{Var}\left [\sum _{H'\in \mathscr{G}_H}t_{1}(U_{1},H',W)\right ] &=\sum _{H_{1},H_{2}\in \mathscr{G}_H}\text{Cov}\left [t_{1}(U_{1},H_{1},W),t_{1}(U_{1},H_{2},W)\right ] \nonumber \\[5pt] &=\sum _{H_{1},H_{2}\in \mathscr{G}_H}\left (t\left (H_{1}\bigoplus _{1,1}H_{2},W\right )-t(H,W)^{2}\right ) . \end{align}

Now, an argument similar to Lemma 5.2 shows that

(5.22) \begin{align} \sum _{H'\in \mathscr{G}_H}t_{a}(x,H',W)=\sum _{H'\in \mathscr{G}_H}t_{b}(x,H',W), \end{align}

for all $x\in [0,1]$ and $1\leq a, b\leq |V(H)|$ . Hence, (5.19) and (5.21) imply

(5.23) \begin{align} \text{Var}[f_{(1)}]&= |V(H)| \sum _{H_{1},H_{2}\in \mathscr{G}_H}\left (t\left (H_{1}\bigoplus _{1,1}H_{2},W\right )-t(H,W)^{2}\right ), \end{align}

and the result follows by Lemma 5.2, using (2.7) for the second equality.

Note that $\mathbb{E} f_{(1)}= 0$ by (5.7). Hence, $\operatorname{Var} f_{(1)}=0$ if and only if $f_{(1)}=0$ a.s.

Lemma 5.4. $\operatorname{Var}f_{(1)}=0$ if and only if $W$ is $H$ -regular.

Proof. Lemma 5.3 shows that $\text{Var}[f_{(1)}]$ is zero if and only if

(5.24) \begin{align} \frac{1}{|V(H)|^{2}}\sum _{1\leq a,b\leq |V(H)|}t\left (H\bigoplus _{a,b}H,W\right )=t(H,W)^{2} . \end{align}

Now observe,

(5.25) \begin{align} \sum _{1\leq a,b\leq |V(H)|}t\left (H\bigoplus _{a,b}H,W\right ) &=\sum _{1\leq a,b\leq |V(H)|}\int t_{a}(x,H,W)t_{b}(x,H,W)\,\textrm{d} x \notag \\[5pt] &=\int \left (\sum _{1\leq a\leq |V(H)|}t_{a}(x,H,W)\right )^{2}\,\textrm{d} x . \end{align}

Thus (5.24) becomes, using also (2.11),

(5.26) \begin{align} \int \left (\sum _{1\leq a\leq |V(H)|}t_{a}(x,H,W)\right )^{2}\,\textrm{d} x-\left ( \int \sum _{1\leq a\leq |V(H)|}t_{a}(x,H,W)\,\textrm{d} x\right )^{2} = 0, \end{align}

which is equivalent to $\text{Var} [\Delta (U) ]=0$ , where we define

(5.27) \begin{align} \Delta (x)\;:\!=\;\sum _{1\leq a\leq |V(H)|}t_{a}(x,H,W) \end{align}

and let $U\sim \operatorname{Uniform}[0,1]$ . Hence, $\text{Var}[f_{(1)}]=0$ if and only if $\Delta (U)$ is constant a.s. Therefore, since $\mathbb{E}\Delta (U)= |V(H)| t(H,W)$ , we see that $\text{Var}[f_{(1)}]=0$ if and only if

(5.28) \begin{align} \frac{1}{|V(H)|}\sum _{1\leq a\leq |V(H)|}t_{a}(x,H,W)=t(H,W) \text{ for almost every } x \in [0, 1]. \end{align}

By Definition 2.2, (5.28) says that $W$ is $H$ -regular.
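For orientation, a simple special case of (5.28) (not needed in the sequel): when $H=K_2$, Definition 2.1 gives $t_1(x,K_2,W)=t_2(x,K_2,W)=d_W(x)$, so (5.28) reads $d_W(x)=t(K_2,W)$ for almost every $x$; that is, $K_2$ -regularity of $W$ is precisely degree regularity.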

Proof of Theorem 2.9(1). Lemma 5.4 shows that if $W$ is not $H$ -regular, then the principal degree of $f$ is 1. Thus, [Reference Janson and Nowicki22, Theorem 1] yields

(5.29) \begin{align} \frac{X_{n}(H,W)-\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W)}{n^{|V(H)|-\frac{1}{2}}} \overset{D}{\rightarrow }\textsf{N}(0, \tau ^{2}), \end{align}

where using also (5.18) and (2.24),

(5.30) \begin{align} \tau ^{2} & = \frac{1}{|V(H)|!\,(|V(H)|-1)!}\text{Var}[f_{(1)}] = \tau ^2_{H, W}. \end{align}

This completes the proof of Theorem 2.9(1) when $W$ is not $H$ -regular.

In fact, (5.29)–(5.30) hold also when $W$ is $H$ -regular, with $f_{(1)}=0$ and $\tau ^2=0$ . Although this case is not included in the statement of [Reference Janson and Nowicki22, Theorem 1], it follows by its proof, as a consequence of [Reference Janson and Nowicki22, Lemma 2]; see also [Reference Janson20, Corollary 11.36]. Consequently, Theorem 2.9(1) holds for any $W \in \mathcal{W}_0$ .
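As a purely illustrative sanity check of (5.29) (not a computation from the paper), the following Python sketch simulates the centred count for $H=K_{1,2}$ under the graphon $W(x,y)=xy$, which is not $K_{1,2}$ -regular, and rescales by $n^{|V(H)|-\frac{1}{2}}=n^{5/2}$. Here $t(K_{1,2},W)=\int _0^1 (x/2)^2\,\textrm{d} x=1/12$ and $|\textrm{Aut}(K_{1,2})|=2$; the sketch only checks that the rescaled statistic has small mean and skewness, consistent with a Gaussian limit, and does not compute $\tau ^2_{K_{1,2},W}$. All parameter choices are arbitrary.

```python
import numpy as np
from math import comb

# Illustrative Monte Carlo for (5.29) with H = K_{1,2} and W(x, y) = x*y
# (not K_{1,2}-regular). All names and parameter choices here are our own.
def two_star_count(n, rng):
    U = rng.random(n)
    Y = np.triu(rng.random((n, n)), 1)          # Y_ij for i < j
    A = (Y <= np.outer(U, U)) & (Y > 0)         # edge iff Y_ij <= U_i * U_j
    A = A | A.T
    return sum(comb(int(d), 2) for d in A.sum(axis=1))

rng = np.random.default_rng(2)
n, reps = 200, 500
t_H = 1.0 / 12.0                                # t(K_{1,2}, W) for W(x,y) = x*y
mean = n * (n - 1) * (n - 2) / 2 * t_H          # (n)_3 / |Aut(K_{1,2})| * t(H, W)
Z = np.array([(two_star_count(n, rng) - mean) / n ** 2.5 for _ in range(reps)])
# Empirical mean ~ 0 and skewness ~ 0, consistent with a Gaussian limit.
print(Z.mean(), Z.std(), ((Z - Z.mean()) ** 3).mean() / Z.std() ** 3)
```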

5.3. Proof of Theorem 2.9(2)

In this case, $W$ is $H$ -regular, hence $f_{(1)}\equiv 0$ by Lemma 5.4. Therefore, we consider $f_{(2)}$ (recall (5.7)), which can be written as

(5.31) \begin{align} f_{(2)}=\sum _{1\leq a\lt b\leq |V(H)|} \left ( f_{{E_{{\{a,b\}}}}} +f_{{K_{\{a,b\}}}} \right ), \end{align}

where ${E_{{\{a,b\}}}} = (\{a, b\}, \emptyset )$ is the graph with two vertices $a$ and $b$ and no edges, and ${K_{\{a,b\}}} = (\{a, b\}, \{ (a, b) \} )$ is the complete graph with vertices $a$ and $b$ . As for $f_{(1)}$ , we have $\mathbb{E}f_{(2)}=0$ , and thus $\operatorname{Var} f_{(2)}=0\iff f_{(2)}=0$ a.s.

If $\operatorname{Var}f_{(2)}\neq 0$ , then $f$ has principal degree 2, and we can apply [Reference Janson and Nowicki22, Theorem 2], which shows that

(5.32) \begin{align} \frac{X_{n}(H,W)-\frac{(n)_{|V(H)|}}{|\textrm{Aut}(H)|}t(H,W)}{n^{|V(H)|-{1}}} \overset{D}{\rightarrow } \sigma Z+\sum _{\lambda \in \Lambda }\lambda (Z_\lambda ^2-1), \end{align}

where $Z$ and ${\{Z_\lambda \}}_{\lambda \in \Lambda }$ are independent standard Gaussians,

(5.33) \begin{align} \sigma ^2=\frac{1}{2(|V(H)|-2)!^2}\mathbb{E} \bigl [f_{{K_{\{1,2\}}}}^2\bigr ] \end{align}

and $\Lambda$ is the multiset of (non-zero) eigenvalues of a certain integral operator $T$ .

Moreover, if $\operatorname{Var}f_{(2)}=0$ , so $f_{(2)}=0$ a.s., then the conclusion of [Reference Janson and Nowicki22, Theorem 2] still holds (with a trivial limit 0), again as a consequence of [Reference Janson and Nowicki22, Lemma 2]. (See also the more general [Reference Janson20, Theorem 11.35].) Hence, (5.32) holds in any case.

It remains to show that $\sigma ^2=\sigma ^2_{H,W}$ in (2.26), and that $\Lambda$ equals ${\textrm{Spec}}^-(W_H)$ ; then (5.32) yields (2.25). We begin by finding $f_{E_{{\{a,b\}}}}$ and $f_{K_{\{a,b\}}}$ .

Lemma 5.5. For $1\leq a\lt b\leq |V(H)|$ and any $F\in L^2$ , the projection of $F$ onto the space $M_{{E_{{\{a,b\}}}}}$ is given by

(5.34) \begin{align} F_{{E_{{\{a,b\}}}}}=\mathbb{E}\left [F\mid U_{a},U_{b}\right ]-\mathbb{E}[F\mid U_{a}]-\mathbb{E}[F\mid U_{b}]+\mathbb{E}[F] . \end{align}

Proof. By (5.9),

(5.35) \begin{align} F_{{E_{{\{a,b\}}}}} \;:\!=\; P_{ M_{{E_{{\{a,b\}}}}}} F &= P_{L^2({E_{{\{a,b\}}}})}F - P_{M_{K_{{\{a\}}}}}F - P_{M_{{K_{{\{b\}}}}}}F -P_{M_\emptyset }F \notag \\[5pt] & = P_{L^2({E_{{\{a,b\}}}})}F - P_{L^2({K_{{\{a\}}}})}F - P_{L^2({K_{{\{b\}}}})}F +P_{M_\emptyset }F \end{align}

and the result follows by (5.8).

Lemma 5.6. For $1\leq a\lt b\leq |V(H)|$ and any $F\in L^2$ , the projection of $F$ onto the space $M_{{K_{\{a,b\}}}}$ is given by

(5.36) \begin{align} F_{{K_{\{a,b\}}}}=\mathbb{E}\left [F\mid U_{a},U_{b},Y_{ab}\right ] - \mathbb{E}\left [F\mid U_{a},U_{b}\right ]. \end{align}

Proof. The subgraphs of $K_{\{a,b\}}$ are $E_{{\{a,b\}}}$ , $K_{{\{a\}}}$ , $K_{{\{b\}}}$ and $\emptyset$ , and thus (5.9) yields

(5.37) \begin{align} F_{{K_{\{a,b\}}}} \;:\!=\; P_{ M_{{K_{\{a,b\}}}}} F& = P_{L^2({K_{\{a,b\}}})}F -P_{M_{E_{{\{a,b\}}}}}F- P_{M_{K_{{\{a\}}}}}F - P_{M_{{K_{{\{b\}}}}}}F -P_{M_\emptyset }F \notag \\[5pt] & = P_{L^2({K_{\{a,b\}}})}F - P_{L^2({E_{{\{a,b\}}}})}F, \end{align}

and the result follows by (5.8).

Specialising to $f$ defined in (5.2), we found $f_{K_{{\{a\}}}}=\mathbb{E} [f\mid U_a]-\mathbb{E} f$ in (5.13). Furthermore, the same argument yields, recalling (2.9) and (4.4),

(5.38) \begin{align} \mathbb{E}[f\mid U_a,U_b]= \sum _{H'\in \mathscr{G}_H}t_{a,b}(U_{a},U_b,H',W) \end{align}

and

(5.39) \begin{align} \mathbb{E}[f\mid U_a,U_b,Y_{ab}] &= \sum _{H'\in \mathscr{G}_H}t^-_{a,b}(U_{a},U_b,H',W) Z_{H',{\{a,b\}}}(Y_{ab},U_{a},U_{b}), \end{align}

where

(5.40) \begin{align} Z_{H',{\{a,b\}}}(Y_{ab},U_{a},U_{b}) \;:\!=\; \begin{cases} \textbf{1}\{Y_{ab}\leq W(U_{a},U_{b})\} & \text{ if } (a, b) \in E(H'),\\[5pt] 1 & \text{ otherwise. } \end{cases} \end{align}

Let also

(5.41) \begin{align} \overline W_{H', \{a, b\}}(x, y) \;:\!=\; \begin{cases} W(x, y) & \text{ if } (a, b)\in E(H'),\\[5pt] 1 & \text{ otherwise}, \end{cases} \end{align}

and $\mathscr{G}_{H, \{a, b\}}\;:\!=\;\{H'\in \mathscr{G}_H \;:\; (a, b) \in E(H')\}$ . Then, (5.36), (5.38) and (5.39) yield, using also (4.5),

(5.42) \begin{align} f_{K_{\{a,b\}}} &= \sum _{H'\in \mathscr{G}_H}t^-_{a,b}(U_{a},U_b,H',W) \Bigl ( Z_{H',{\{a,b\}}}(Y_{ab},U_{a},U_{b}) - \overline W_{H', \{a, b\}}(U_a, U_b) \Bigr ) \notag \\[5pt] &= \sum _{H'\in \mathscr{G}_{H,{\{a,b\}}}} t^-_{a,b}(U_{a},U_b,H',W) \Bigl ( \textbf{1}\{Y_{ab}\leq W(U_{a},U_{b})\} -W(U_a,U_b)\Bigr ). \end{align}

To compute the variance of $f_{{K_{\{1,2\}}}}$ , we recall the notions of weak and strong edge joins from Definition 2.7 and introduce a few definitions. Let $(V_H)_2 \;:\!=\;\{(a,b)\in V(H)^{2} \;:\; a\neq b\}$ . For $(a,b),(c,d)\in (V_H)_2$ define

(5.43) \begin{align} \underline{t}\left (H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2},W\right ) = t\left (H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2},W\right ) \boldsymbol 1\{(a,b) \in E^{+}(H_{1}) \text{ and }(c,d) \in E^{+}(H_{2})\} \end{align}

and similarly,

(5.44) \begin{align} \underline{t}\left (H_{1}\bigoplus _{(a,b),(c,d)} H_{2},W\right ) = t\left (H_{1}\bigoplus _{(a,b),(c,d)} H_{2},W \right ) \boldsymbol 1\{(a,b) \in E^{+}(H_{1}) \text{ and }(c,d) \in E^{+}(H_{2})\} . \end{align}

Then we have the following identities, similar to Lemma 5.2:

Lemma 5.7. Let $(V_H)_2$ be as defined above, and let $K_H\;:\!=\;|(V_H)_2|^2=|V(H)|^{2}(|V(H)|-1)^{2}$ . Then

(5.45) \begin{align} K_H \sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\mathop{\ominus }\limits _{(1,2),(1,2)} H_{2},W\right ) & = |\mathscr{G}_H|^{2}\sum _{(a,b),(c,d)\in (V_H)_2 } \underline{t} \left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right ). \end{align}

and, similarly,

(5.46) \begin{align} K_H \sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\bigoplus _{(1,2),(1,2)} H_{2},W\right ) & = |\mathscr{G}_H|^{2}\sum _{(a,b),(c,d)\in (V_H)_2 } \underline{t} \left (H\bigoplus _{(a,b),(c,d)} H,W\right ). \end{align}

Proof. We will first show that

(5.47) \begin{align} \sum _{(a,b),(c,d)\in (V_H)_2}\sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2},W\right ) &= K_H \sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\mathop{\ominus }\limits _{(1,2),(1,2)} H_{2},W\right ) . \end{align}

For this consider permutations $\phi _{(a,b)}, \phi _{(c,d)} \;:\; V(H) \rightarrow V(H)$ such that $\phi _{(a,b)}(a)=1$ and $\phi _{(a,b)}(b)=2$ and $\phi _{(c,d)}(c)=1$ and $\phi _{(c, d)}(d)=2$ . Then

(5.48) \begin{align} \sum _{(a,b),(c,d)\in (V_H)_2}\sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2},W\right ) &=\sum _{(a,b),(c,d)\in (V_H)_2}\sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (\phi _{(a,b)}(H_{1})\mathop{\ominus }\limits _{(1,2),(1,2)} \phi _{(c,d)}(H_{2}),W\right ) \notag \\[5pt] &= K_H \sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t} \left (H_{1}\mathop{\ominus }\limits _{(1,2),(1,2)} H_{2},W\right ), \end{align}

where the last equality follows from the observation that $(H_{1},H_{2}) \rightarrow (\phi _{(a,b)}(H_{1}),\phi _{(c,d)}(H_{2}))$ is a bijection from $\mathscr{G}_H^2$ to $\mathscr{G}_H^2$ , for all $(a,b),(c,d)\in (V_H)_2$ .

Now by considering isomorphisms $\phi _{1}$ and $\phi _{2}$ such that $\phi _{1}(H_{1})=H$ and $\phi _{2}(H_{2})=H$ , a similar argument as above shows that

(5.49) \begin{align} \sum _{(a,b),(c,d)\in (V_H)_2}\sum _{H_{1},H_{2}\in \mathscr{G}_H} \underline{t}\left (H_{1}\mathop{\ominus }\limits _{(a,b),(c,d)} H_{2},W\right ) &=|\mathscr{G}_H|^{2}\sum _{(a,b),(c,d)\in (V_H)_2 } \underline{t} \left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right ). \end{align}

Combining (5.47) and (5.49) yields the identity (5.45). The identity (5.46) follows by the same proof with only notational differences.

With the above definitions and identities we now proceed to compute the variance of $f_{{K_{\{1,2\}}}}$ .

Lemma 5.8. We have

(5.50) \begin{align} \operatorname{Var}[f_{{K_{\{1,2\}}}}]=\frac{(|V(H)|-2)!^{2}}{|\textrm{Aut}(H)|^{2}}\sum _{(a,b),(c,d)\in E^{+}(H)}\left (t\left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right )-t\left (H\bigoplus _{(a,b),(c,d)} H,W\right )\right ) . \end{align}

Proof. We specialise (5.42) to $(a,b)=(1,2)$ and write for convenience

(5.51) \begin{align} h(U_{1},U_{2}, H_{1}, H_{2},W) \;:\!=\;t_{1,2}^-(U_{1},U_{2},H_{1},W)\, t_{1,2}^-(U_{1},U_{2},H_{2},W). \end{align}

This yields,

(5.52) \begin{align} \mathbb{E}[f_{{K_{\{1,2\}}}}^2] & = \sum _{H_{1},H_{2}\in \mathscr{G}_{H, \{1, 2\}}}\mathbb{E} \left [ h(U_{1},U_{2}, H_{1}, H_{2},W) \bigl ( \textbf{1}\{Y_{12}\leq W(U_{1},U_{2})\} - W(U_1, U_2) \bigr )^2 \right ] \nonumber \\[5pt] & = \sum _{H_{1},H_{2}\in \mathscr{G}_{H, \{1, 2\}}}\mathbb{E}\bigl [ h(U_{1},U_{2},H_{1}, H_{2}, W) W(U_{1},U_{2}) (1-W(U_{1},U_{2}))\bigr ] \nonumber \\[5pt] & =\sum _{H_{1},H_{2}\in \mathscr{G}_{H, \{1, 2\}}} \left ( t\left (H_{1}\mathop{\ominus }\limits _{(1,2),(1,2)} H_{2},W\right ) -t\left (H_{1}\bigoplus _{(1,2),(1,2)} H_{2},W\right ) \right ). \end{align}

Now, using the notations introduced in (5.43) and (5.44), the identity (5.52) can be written as

(5.53) \begin{align} \mathbb{E}[f_{{K_{\{1,2\}}}}^2] & = \sum _{H_{1},H_{2} \in \mathscr{G}_H}\left (\underline{t}\left (H_{1}\mathop{\ominus }\limits _{(1,2),(1,2)} H_{2},W\right )-\underline{t}\left (H_{1}\bigoplus _{(1,2),(1,2)} H_{2},W\right )\right ) \notag \\[5pt] & = \frac{(|V(H)|-2)!^2}{|\textrm{Aut}(H)|^2}\sum _{ (a,b),(c,d)\in (V_H)_2 }\left ( \underline{t}\left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right ) - \underline{t}\left (H\bigoplus _{(a,b),(c,d)} H,W\right ) \right ) \notag \\[5pt] & = \frac{(|V(H)|-2)!^2}{|\textrm{Aut}(H)|^2}\sum _{(a,b),(c,d)\in E^{+}(H)}\left (t\left (H\mathop{\ominus }\limits _{(a,b),(c,d)} H,W\right )-t\left (H\bigoplus _{(a,b),(c,d)} H,W\right )\right ), \end{align}

where the second equality uses the identities from Lemma 5.7 and (2.7), and the third equality follows from the definitions in (5.43) and (5.44). This yields the result (5.50), since $\mathbb{E} f_{K_{\{a,b\}}}=0$ .

Lemma 5.8 and (5.33) show that

(5.54) \begin{align} \sigma ^2=\sigma ^2_{H,W}, \end{align}

as defined in (2.26).

Next, we compute the Hilbert–Schmidt operator $T$ as defined in [Reference Janson and Nowicki22, Theorem 2]. Note first that in our case this operator is defined on the space $M_{K_{\{1\}}}$ . Recall that $M_{K_{{\{1\}}}}\subset L^2({K_{{\{1\}}}})$ , where $L^2({K_{{\{1\}}}})$ is the space of all square integrable random variables of the form $g(U_1)$ . We may identify $L^2({K_{{\{1\}}}})$ and $L^2[0,1]$ , and then (5.5) yields the orthogonal decomposition

(5.55) \begin{align} L^{2}[0,1]=M_{K_{{\{1\}}}} \bigoplus M_{\emptyset }, \end{align}

where $M_{\emptyset }$ is the one-dimensional space of all constants. Hence, $M_{K_{{\{1\}}}}$ is identified with the subspace of $L^2[0,1]$ orthogonal to constants, i.e., $M_{K_{{\{1\}}}}={ \{g\in L^2[0,1]\;:\;\int _0^1 g=0 \}}$ .

Then, taking $g,h\in M_{K_{\{1\}}}\subset L^2[0,1]$ , the definitions given in [Reference Janson and Nowicki22, Theorem 2] yield

(5.56) \begin{align} \langle Tg,h\rangle &= \frac{1}{2(|V(H)|-2)!}\mathbb{E}\left [fg(U_{1})h(U_{2})\right ] . \end{align}

Recall the operator $T_{W_H}$ defined on $L^2[0,1]$ by (2.15) and (2.19).

Lemma 5.9. If $W$ is $H$ -regular, then the operator $T$ on $M_{K_{{\{1\}}}}$ defined in (5.56) equals the operator $T_{W_H}$ restricted to the space $M_{K_{\{1\}}}$ . Moreover, then the multiset of non-zero eigenvalues of $T$ is equal to ${\textrm{Spec}}^{-}(W_{H})$ .

Proof. We may replace $f$ by $\mathbb{E}[f\mid U_1,U_2]$ in (5.56), which by (5.38) yields

(5.57) \begin{align} \langle Tg,h\rangle & = \frac{1}{2(|V(H)|-2)!}\mathbb{E}\bigl [\mathbb{E}[f\mid U_1,U_2]g(U_{1})h(U_{2})\bigr ] \notag \\[5pt] &=\frac{1}{2(|V(H)|-2)!}\mathbb{E}\left [\sum _{H'\in \mathscr{G}_H}t_{1,2}(U_{1},U_{2},H',W)g(U_{1})h(U_{2})\right ]\nonumber \\[5pt] &=\left \langle \frac{1}{2(|V(H)|-2)!}\int \sum _{H'\in \mathscr{G}_H}t_{1,2}(x,\cdot,H',W)g(x)\textrm{d}x,h(\cdot )\right \rangle . \end{align}

Denote by $S_{|V(H)|}$ the set of all $|V(H)|!$ permutations of $V(H)$ . Then it is easy to observe that

(5.58) \begin{align} \sum _{\phi \in S_{|V(H)|}}t_{1,2}(x,y,\phi (H),W)=|\textrm{Aut}(H)|\sum _{H'\in \mathscr{G}_H}t_{1,2}(x,y,H',W) . \end{align}

Also,

(5.59) \begin{align} \sum _{\phi \in S_{|V(H)|}}t_{1,2}(x,y,\phi (H),W) &=\sum _{1\leq a\neq b\leq |V(H)|}\sum _{\substack{\phi \in S_{|V(H)|}\\ \phi (a)=1,\phi (b)=2}}t_{1,2}(x,y,\phi (H),W)\nonumber \\[5pt] &=\sum _{1\leq a\neq b\leq |V(H)|}\sum _{\substack{\phi \in S_{|V(H)|}\\ \phi (a)=1,\phi (b)=2}}t_{\phi ^{-1}(1),\phi ^{-1}(2)}(x,y,H,W)\nonumber \\[5pt] &=\sum _{1\leq a\neq b\leq |V(H)|}\sum _{\substack{\phi \in S_{|V(H)|}\\ \phi (a)=1,\phi (b)=2}}t_{a, b}(x,y,H,W)\nonumber \\[5pt] &=(|V(H)|-2)!\sum _{1\leq a\neq b\leq |V(H)|}t_{a,b}(x,y,H,W) . \end{align}

Combining (5.58) and (5.59), we have, recalling (2.19),

(5.60) \begin{align} \frac{1}{2(|V(H)|-2)!}\sum _{H'\in \mathscr{G}_H}t_{1,2}(x,y,H',W)=\frac{1}{2|\textrm{Aut}(H)|}\sum _{1\leq a\neq b\leq |V(H)|}t_{a, b}(x,y,H,W)=W_{H}(x,y). \end{align}

Consequently, combining (5.57), (5.60) and (2.15), we obtain

(5.61) \begin{align} \langle Tg,h\rangle =\langle T_{W_H} g,h\rangle, \qquad g,h\in M_{K_{{\{1\}}}}. \end{align}

Furthermore, since $W$ is $H$ -regular, $W_H$ is degree regular and (2.21) shows that

(5.62) \begin{align} T_{W_H}1=d_{W_H}=d_{W_H}\cdot 1. \end{align}

Hence, $T_{W_H}$ maps the space $M_\emptyset$ of constant functions into itself. By (5.55), $M_{K_{{\{1\}}}}$ is the orthogonal complement of $M_\emptyset$ , and thus, since $T_{W_H}$ is a symmetric operator, $T_{W_H}$ also maps $M_{K_{{\{1\}}}}$ into itself. Hence, both $T$ and $T_{W_H}$ map $M_{K_{{\{1\}}}}$ into itself, and thus (5.61) shows that $T=T_{W_H}$ on $M_{K_{{\{1\}}}}$ .

Finally, recall that $\Lambda$ in (5.32) is the multiset of non-zero eigenvalues of $T$ , which we just have shown equals the multiset of eigenvalues of $T_{W_H}$ on $M_{K_{{\{1\}}}}$ . Moreover, on $M_\emptyset$ , $T_{W_H}$ has the single eigenvalue $d_{W_H}$ by (5.62). Hence, ${\textrm{Spec}}(W_H)=\Lambda \cup{\{d_{W_H}\}}$ , and thus ${\textrm{Spec}}^-(W_H)=\Lambda$ by the definition after (2.21).

Proof of Theorem 2.9(2). The result now follows by (5.32), (5.54), and Lemma 5.9.

5.4. Higher Order Limits

In the case where the limit in Theorem 2.9(2) is degenerate (as in Example 4.6), the function $f$ in (5.2) has principal degree $d \gt 2$ . In this case, [Reference Janson and Nowicki22, Theorem 3] shows that $(X_n(H, W) - \mathbb{E} X_n(H, W))/n^{|V(H)| - d/2}$ has a (non-degenerate) limit distribution, which can be expressed as a polynomial of degree $d$ in (possibly infinitely many) independent standard Gaussian variables. The expression in [Reference Janson and Nowicki22, Theorem 3] uses Wick products of Gaussian variables; these can be expressed using Hermite polynomials, see [Reference Janson20, Theorems 3.19 and 3.21]. One simple illustration (with $d=4$ ) is given in Example 4.6. This leads to the following natural open questions:

Problem 5.10. For which graphs $H$ can such higher order limits (i.e., with $d\ge 3$ ) occur?

Problem 5.11. Is it possible to have arbitrarily high-order principal degree $d$ ?
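For the reader's orientation (a standard fact recalled here, not a new result): for a single standard Gaussian $Z$ , the Wick powers mentioned above are given by the Hermite polynomials, for example $:\!Z^{2}\!: \, = Z^{2}-1$ , $:\!Z^{3}\!: \, = Z^{3}-3Z$ and $:\!Z^{4}\!: \, = Z^{4}-6Z^{2}+3$ ; in particular, the quadratic case recovers the centred chi-squared variables appearing in (5.32).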

6. Proof of Theorem 4.3

It is obvious from (4.6) that if $W$ is random-free, then $\sigma _{H,W}^2=0$ . For the converse, suppose that $W$ is not random-free. Then the set $P\;:\!=\;{ \{(x,y)\in [0,1]^2:0\lt W(x,y)\lt 1 \}}$ has $|P|\gt 0$ , where $|\cdot |$ denotes the Lebesgue measure. Let $(x_{0},y_{0})$ be a Lebesgue point of $P$ . Then we can find intervals $I$ and $J$ containing $x_{0}$ and $y_{0}$ , respectively, such that $|P\bigcap (I\times J)|\gt (1-\varepsilon ) |I\times J|$ ( $\varepsilon \gt 0$ to be chosen later). Define,

(6.1) \begin{align} P_{x}\;:\!=\;\left \{y\in J\;:\;(x,y)\in P\right \} \quad \text{ and } \quad I'\;:\!=\;\left \{x\in I:|P_{x}|\gt (1-\delta )|J|\right \}, \end{align}

where $\delta \gt 0$ will be chosen later. Then,

(6.2) \begin{align} \delta |J|\left |I\setminus I'\right | \leq \int _{I\setminus I'}\left |J\setminus P_{x}\right |\,\textrm{d} x \leq \int _{I}\left |J\setminus P_{x}\right | \,\textrm{d} x &=\int _{I}|J| \,\textrm{d} x-\int _{I}|P_{x}| \,\textrm{d} x \notag \\[5pt] & =|I||J|-\int _{I}\int _{P_x} \,\textrm{d} z \,\textrm{d} x \notag \\[5pt] & =|I||J|- |P \bigcap \left (I\times J\right ) | \notag \\[5pt] & \lt \varepsilon \left |I\times J\right |=\varepsilon |I||J| . \end{align}

This implies,

(6.3) \begin{align} \left |I\setminus I'\right |\leq \frac{\varepsilon }{\delta }|I| . \end{align}

Similarly, defining $P^{y}\;:\!=\; \{x\in I\;:\;(x,y)\in P \}$ and $J'\;:\!=\; \{y\in J \;:\; |P^{y} |\gt (1-\delta )|I| \}$ we have,

(6.4) \begin{align} \left |J\setminus J'\right |\leq \frac{\varepsilon }{\delta }|J| . \end{align}

Next, fix $a\lt b\in V(H)$ such that $(a,b)\in E(H)$ . Suppose $H$ has bipartition $(A, B)$ and without loss of generality consider $a\in A$ and $b\in B$ . Then from (4.6) it follows that,

(6.5) \begin{align} \sigma _{H,W}^2 & \geq c_H \int _{[0, 1]^2} t_{a, b}^{-}(x, y, H, W)^2 W(x,y)(1-W(x,y)) \,\textrm{d} x \,\textrm{d} y . \end{align}

Define,

(6.6) \begin{align}{\mathcal{S}}\;:\!=\;\bigg \{\boldsymbol{z}_{-(a,b)}\;:\!=\;(z_{1}, \cdots,z_{a-1},z_{a+1}, & \cdots,z_{b-1},z_{b+1}, \cdots,z_{|V(H)|}) \nonumber \\[5pt] & \;:\; z_{v}\in I \text{ if } v \in A \backslash \{ a \} \text{ and } z_{v}\in J\text{ if } v\in B\backslash \{b\} \bigg \} . \end{align}

and

(6.7) \begin{align} t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) = \prod _{r \in N_H(a)\backslash \{b\}}W(x, z_r) \prod _{s \in N_H(b) \backslash \{a\}} W(y, z_s) \prod _{(r, s) \in E(H\backslash \{a, b\}) } W(z_r, z_s) . \end{align}

Note that

(6.8) \begin{align} \int _{[0, 1]^{|V(H)|-2}}t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) \prod _{r \notin \{a, b\}} \,\textrm{d} z_r = t_{a, b}^{-}(x, y, H, W) . \end{align}

It is easy to see that $|{\mathcal{S}}|=|I|^{|A|-1}|J|^{|B|-1}$ . Now, fix $(x,y)\in I'\times J'$ . Then

(6.9) \begin{align} \mathcal Q_0 \;:\!=\; \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}} \;:\; t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) =0\right \}\right | & \leq T_1 + T_2 + T_3, \end{align}

where

(6.10) \begin{align} T_1 & \;:\!=\; \sum _{r \in N_H(a)\backslash \{b\}} \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}} \;:\; W(x, z_r) =0\right \}\right |, \end{align}
(6.11) \begin{align} T_2 & \;:\!=\; \sum _{s \in N_H(b)\backslash \{a\}} \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}} \;:\; W(y, z_s) =0\right \}\right |, \end{align}
(6.12) \begin{align} T_3 & \;:\!=\; \sum _{ (r, s) \in E(H\backslash \{a, b\}) } \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}} \;:\; W(z_r, z_s) =0\right \}\right | . \end{align}

Let us now look at each term separately. We begin with $T_1$ . Note that for $r \in N_H(a)\backslash \{b\}$ ,

(6.13) \begin{align} \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}}\;:\;W(x,z_{r})=0\right \}\right | &= \left |\left \{z_{r}\in J\;:\;W(x, z_{r})=0 \right \}\right ||I|^{|A|-1}|J|^{|B|-2} \notag \\[5pt] & \leq \left |J\setminus P_{x}\right ||I|^{|A|-1}|J|^{|B|-2} \notag \\[5pt] & \lt \delta |I|^{|A|-1}|J|^{|B|-1} \end{align}

where the last inequality follows from our assumption $x\in I'$ and (6.1). This implies,

(6.14) \begin{align} T_1 \lt (d_a -1 ) \delta |I|^{|A|-1}|J|^{|B|-1}, \end{align}

where $d_a$ is the degree of the vertex $a$ in $H$ . Similarly,

(6.15) \begin{align} T_2 \lt (d_b -1 ) \delta |I|^{|A|-1}|J|^{|B|-1} . \end{align}

Finally, consider $T_3$ . Suppose $(r, s) \in E(H\backslash \{a, b\})$ and assume without loss of generality $r \in A$ and $s\in B$ . Then,

(6.16) \begin{align} \left |\left \{\boldsymbol{z}_{-(a,b)}\in{\mathcal{S}}\;:\;W(z_{r},z_{s})=0\right \}\right | &= \left |\left \{z_{r}\in I, z_{s}\in J \;:\; W(z_{r},z_{s})=0\right \}\right ||I|^{|A|-2}|J|^{|B|-2} \notag \\[5pt] & \leq \left |\left (I\times J\right )\setminus \left ( P \bigcap \left (I\times J\right )\right )\right ||I|^{|A|-2}|J|^{|B|-2} \notag \\[5pt] & \lt \varepsilon |I\times J||I|^{|A|-2}|J|^{|B|-2} \notag \\[5pt] & =\varepsilon |I|^{|A|-1}|J|^{|B|-1} . \end{align}

This implies,

(6.17) \begin{align} T_3 \leq (|E(H)|-d_{a}-d_{b}+1) \varepsilon |I|^{|A|-1}|J|^{|B|-1} . \end{align}

Combining (6.14), (6.15), and (6.17) with (6.9) gives,

(6.18) \begin{align} \mathcal Q_0 &\leq \left [\left (d_{a}+d_{b}-2\right )\delta + \left (|E(H)|-d_{a}-d_{b}+1\right )\varepsilon \right ]|I|^{|A|-1}|J|^{|B|-1}\notag \\[5pt] &\lt 2 \left |E(H)\right |(\delta +\varepsilon )|I|^{|A|-1}|J|^{|B|-1} . \end{align}

Choosing $\delta =10 \varepsilon$ and $\varepsilon \lt \frac{1}{100 |E(H)|}$ gives $ 2|E(H) |(\delta +\varepsilon )|I|^{|A|-1}|J|^{|B|-1}\lt |I|^{|A|-1}|J|^{|B|-1}.$ Thus,

(6.19) \begin{align} \mathcal Q_0 \lt |I|^{|A|-1}|J|^{|B|-1}, \end{align}

and hence, $|\mathcal S \backslash \mathcal Q_0| \gt 0$ . This implies, recalling (6.8),

(6.20) \begin{align} t_{a, b}^{-}(x, y, H, W) & = \int _{[0, 1]^{|V(H)|-2}}t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) \prod _{r \notin \{a, b\}} \,\textrm{d} z_r \nonumber \\[5pt] & \geq \int _{\mathcal S \backslash \mathcal Q_0} t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) \prod _{r \notin \{a, b\}} \,\textrm{d} z_r \gt 0, \end{align}

since $t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, H, W) \gt 0$ on $\mathcal S \backslash \mathcal Q_0$ . Recall that $(x,y)\in I'\times J'$ was chosen arbitrarily; hence (6.20) is true for all $(x,y)\in I'\times J'$ . Further observe that

(6.21) \begin{align} I'\times J'\subseteq \left \{P\bigcap \left (I'\times J'\right )\right \}\bigcup \left \{\left (I\times J\right )\setminus \left (P\bigcap \left (I\times J\right )\right )\right \}, \end{align}

implying

\begin{align*} \left |P\bigcap \left (I'\times J'\right )\right | & \geq |I'||J'|-\left |\left (I\times J\right )\setminus \left (P\bigcap \left (I\times J\right )\right )\right | \\[5pt] & \geq |I'||J'|-\varepsilon |I||J| \\[5pt] &\geq \left (\left (1-\frac{\varepsilon }{\delta }\right )^2-\varepsilon \right )|I||J| \tag *{(by (6.3) and (6.4))} \\[5pt] & =\left (0.81-\varepsilon \right )|I||J|\gt 0 . \end{align*}

Therefore, recalling (6.5)

(6.22) \begin{align} \sigma _{H,W}^2 & \geq c_H\int _{P\bigcap \left (I'\times J'\right )} t_{a, b}^{-}(x, y, H, W)^2 W(x,y)(1-W(x,y)) \,\textrm{d} x \,\textrm{d} y \nonumber \\[5pt] & \gt 0, \end{align}

since by (6.20) and the definition of the set $P$ , $t_{a, b}^{-}(x, y, H, W)^2 W(x,y)(1-W(x,y)) \gt 0$ for all $(x,y)\in P \bigcap (I'\times J' )$ . This shows that if $\sigma _{H, W}^2 = 0$ then $W$ is random-free.

We conclude this section with an example (which generalises the construction in [Reference Hladký, Pelekis and Šileikis17, Figure  1] for triangles to general cliques) illustrating that Theorem 4.3 does not hold if the bipartite assumption is dropped (as mentioned in Remark 4.4).

Example 6.1. Suppose $H=K_r$ is the $r$ -clique, for $r \geq 3$ . Partition $[0, 1]$ into $2r$ intervals of measure $\frac{1}{2 r}$ each. Denote the first $r$ sets by $I_1, I_2, \ldots, I_{r}$ and the next $r$ sets by $J_1, J_2, \ldots, J_{r}$ . Consider the following graphon:

(6.23) \begin{align} W(x, y) = \left \{ \begin{array}{cl} 1 & \text{ for } (x, y) \in (I_a \times I_b) \text{ such that } 1 \leq a \ne b \leq r, \\[5pt] 1 & \text{ for } (x, y) \in (J_a \times J_b) \text{ such that } 1 \leq a \ne b \leq r, \\[5pt] \frac{1}{2} & \text{ for } (x,y)\in (I_1 \times J_1) \cup (J_1 \times I_1), \\[5pt] 0 & \text{ otherwise. } \end{array} \right . \end{align}

In other words, $W$ is obtained by taking 2 disjoint graphon representations of $K_r$ (which corresponds to the complete $r$ -partite graphon) inside $[0, \frac{1}{2}]^2$ and $[\frac{1}{2}, 1]^2$ , respectively, and connecting the edges between the sets $I_1$ and $J_1$ with probability $\frac{1}{2}$ . Note that $t(K_r, W) \gt 0$ . Denote $R \;:\!=\; (I_1 \times J_1) \cup (J_1 \times I_1)$ . By (4.6),

(6.24) \begin{align} \sigma _{H,W}^2= \frac{c_{K_r}}{4}&\sum _{\substack{ 1 \leq a \ne b \leq r \\ 1 \leq c \ne d \leq r }} \int _{R} t_{a, b}^{-}(x, y, K_r, W) t_{c, d}^{-}(x, y, K_r, W) \,\textrm{d} x \,\textrm{d} y . \end{align}

Next, fix $1 \leq a \ne b \leq r$ . If $(x,y)\in R$ , then, using the notation (6.7),

(6.25) \begin{align} t_{a, b}^{-}(\boldsymbol z_{-(a, b)}, x, y, K_r, W) = 0, \end{align}

for all $\boldsymbol z_{-(a, b)}\in [0,1]^{r-2}$ . Hence, for every $(x,y)\in R$ , we have $t_{a, b}^{-}(x, y, K_r, W) =0$ by (6.8). Consequently, it follows from (6.24) that $ \sigma _{H,W}^2 = 0$ . (In fact, $i_1,\dots,i_r$ can form an $r$ -clique in $G(n,W)$ only if $U_{i_1},\dots,U_{i_r}$ all belong to either $\bigcup _a I_a$ or $\bigcup _a J_a$ ; hence, the value of $W$ on $I_1\times J_1$ does not matter for $X_n(K_r,W)$ .) Moreover, (6.25) also implies that $\overline{t}(x, K_r, W)$ is constant a.e., that is, $W$ is $K_r$ -regular.
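The parenthetical remark above can also be checked by a small coupled simulation. The following Python sketch (our own illustration, with $r=3$ and arbitrary parameters and helper names) samples the $U_i$ and $Y_{ij}$ once and counts triangles for the graphon (6.23) and for the modified graphon whose $I_1\times J_1$ block is set to $0$; with the same randomness the two counts coincide.

```python
import numpy as np
from itertools import combinations

# Illustrative check for Example 6.1 with r = 3: the value of W on I_1 x J_1
# does not affect X_n(K_3, W).  Helper names and parameters are our own.
def W_example(x, y, r=3, c=0.5):
    K = 2 * r
    a = min(int(x * K), K - 1)                  # index of the interval containing x
    b = min(int(y * K), K - 1)
    if a < r and b < r and a != b:              # I_a x I_b with a != b
        return 1.0
    if a >= r and b >= r and a != b:            # J_a x J_b with a != b
        return 1.0
    if {a, b} == {0, r}:                        # I_1 x J_1 (and its mirror)
        return c
    return 0.0

def triangle_count(U, Y, W):
    n = len(U)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = (Y[i, j] <= W(U[i], U[j]))
    return sum(1 for i, j, k in combinations(range(n), 3)
               if A[i, j] and A[j, k] and A[i, k])

rng = np.random.default_rng(0)
n = 60
U, Y = rng.random(n), rng.random((n, n))
# Coupled counts: same U's and Y's, W differing only on I_1 x J_1.
print(triangle_count(U, Y, lambda x, y: W_example(x, y, c=0.5)),
      triangle_count(U, Y, lambda x, y: W_example(x, y, c=0.0)))
```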

7. Proof of Theorem 4.8

In the proof we will consider many equations or other relations that hold a.e. in $[0,1]$ or $[0,1]^2$ . For this we use the notation that, for example, ${\mathcal{S}}_{(4.15)}$ denotes the set of all $(x,y)\in [0,1]^2$ such that the equation in (4.15) holds, and $\overline{\mathcal{S}}_{(4.15)}$ denotes $\{x\in [0,1]\;:\;(x,y)\in{\mathcal{S}}_{(4.15)}\text{ for a.e.}\ y\in [0,1]\}$ . We use this notation only for sets ${\mathcal{S}}_{(\cdot )}$ with full measure in $[0,1]^2$ ; note that then, by a standard application of Fubini’s theorem, $\overline{\mathcal{S}}_{(\cdot )}$ has full measure in $[0,1]$ , that is, $x\in \overline{\mathcal{S}}_{(\cdot )}$ for a.e. $x\in [0,1]$ . Similarly, for relations with a single variable, we let, for example, $\overline{\mathcal{S}}_{(7.3)}$ be the set of $x\in [0,1]$ such that the inequality in (7.3) holds.

We tacitly assume $x,y,z\in [0,1]$ throughout the proof. However, for notational convenience, we may write integrals with limits that might be outside $[0,1]$ ; $\int _a^b$ should always be interpreted as $\int _{[a,b]\cap [0,1]}$ .

For all $x\in [0,1]$ , define $W_{x}\;:\;[0,1]\rightarrow [0,1]$ as

(7.1) \begin{align} W_{x}(y)\;:\!=\;W(x,y). \end{align}

We regard $W_x$ as an element of $L^{2}[0,1]$ . Note that this means, in particular, that $W_x=W_y$ means $W(x,z)=W(y,z)$ for a.e. $z$ . Since $W(x,y)$ is measurable and bounded, it is well known that the mapping $x\mapsto W_x$ is a measurable, and (Bochner) integrable, map $[0,1]\to L^2[0,1]$ , see [Reference Dunford and Schwartz15, Lemma III.11.16(b)]. The Lebesgue differentiation theorem holds for Bochner integrable Banach space value functions, see [Reference Bochner6, §5.V]; hence, a.e. $x\in [0,1]$ is a Lebesgue point of $x\mapsto W_x$ . We will use $\|\cdot \|_2$ and $\langle \cdot,\cdot \rangle$ for the norm and inner product in $L^2[0,1]$ .

We will denote $t\;:\!=\; t(C_4, W)$ . Suppose (to obtain a contradiction) that $t\gt 0$ , $W\not \equiv 1$ , but that the limit in (4.2) is degenerate, that is, ${\textrm{Spec}}^-(W_{C_4})=\emptyset$ and $\sigma ^2_{C_4,W}=0$ . Then (4.15) and (4.16) both hold by Lemma 4.7, and $W$ is random-free by Theorem 4.3, that is,

(7.2) \begin{align} W(x,y)\in \{0,1\}, \qquad \text{a.e. } \; x,y. \end{align}

We now separate the proof of the theorem into a sequence of claims.

Claim 7.1. For a.e. $x\in [0,1]$ and $W_x$ as defined in (7.1),

(7.3) \begin{align} \|W_x\|_2\le (3t)^{1/4} . \end{align}

Proof. By (4.14) and (4.15), for a.e. $(x,y)$ ,

(7.4) \begin{align} \langle W_x,W_y\rangle = U_1(x,y) \le (3t)^{1/2}. \end{align}

In particular, if $x\in \overline{\mathcal{S}}_{(7.4)}$ , then for every $\delta \gt 0$ ,

(7.5) \begin{align} \left \langle W_x, \frac{1}{2\delta }\int _{x-\delta }^{x+\delta } W_y \,\textrm{d} y\right \rangle = \frac{1}{2\delta }\int _{x-\delta }^{x+\delta } \langle W_x,W_y\rangle \,\textrm{d} y\le (3t)^{1/2}. \end{align}

If, furthermore, $x$ is a Lebesgue point of $x\mapsto W_x$ , then it follows by letting $\delta \to 0$ that $\|W_x\|_2^2\le (3t)^{1/2}$ .

Claim 7.2. For a.e. $(x,y)\in [0,1]^2$ ,

(7.6) \begin{align} W(x,y)=0 \implies W_x=W_y \text{ in } L^2[0,1] \text{ and\/ } \|W_x\|_2=\|W_y\|_2=(3t)^{1/4}. \end{align}

Proof. By (4.14) and (4.15), if $(x,y)\in{\mathcal{S}}_{(4.15)}$ and $W(x,y)=0$ , then

(7.7) \begin{align} \langle W_x,W_y\rangle = U_{1}(x,y)=(3t)^{1/2}. \end{align}

If, furthermore, $x,y\in \overline{\mathcal{S}}_{(7.3)}$ , then the Cauchy–Schwarz inequality yields

(7.8) \begin{align} (3t)^{1/2}=\langle W_x,W_y\rangle \le \|W_x\|_2\|W_y\|_2\le (3t)^{1/2}. \end{align}

Hence, we must have equalities, and thus $\|W_x\|_2=\|W_y\|_2=(3t)^{1/4}$ ; moreover, equality in the Cauchy–Schwarz inequality implies $W_x=W_y$ .

Claim 7.3. We have $(3t)^{1/2}\lt 1$ .

Proof. Let $Z\;:\!=\;{\{(x,y)\;:\;W(x,y)=0\}}$ and $Z'\;:\!=\;Z\cap{\mathcal{S}}_{(7.6)}$ . By (7.2) and the assumption that $W$ is not a.e. 1, we have $|Z'|=|Z|\gt 0$ . For $x\in [0,1]$ , let $Z'_x\;:\!=\;{\{y:(x,y)\in Z'\}}$ . By Fubini’s theorem, $\int _0^1|Z'_x|\,\textrm{d} x=|Z'|\gt 0$ , and thus there exists $x$ such that $|Z'_x|\gt 0$ . Fix one such $x$ . Then there exists $y\in Z'_x$ , and thus $(x,y)\in Z'=Z\cap{\mathcal{S}}_{(7.6)}$ . Consequently, (7.6) applies and yields $\|W_x\|_2=(3t)^{1/4}$ . Furthermore, $W(x,y)=0$ for all $y\in Z'_x$ , and thus

(7.9) \begin{align} (3t)^{1/2}= \|W_x\|_2^2=\int _0^1 W(x,y)^2\,\textrm{d} y \le 1-|Z'_x|\lt 1. \end{align}

Claim 7.4. For a.e. $x\in [0,1]$ ,

(7.10) \begin{align} \|W_x\|_2= (3t)^{1/4}\lt 1 . \end{align}

Proof. Suppose $x\in \overline{\mathcal{S}}_{(7.2)} \cap \overline{\mathcal{S}}_{(7.3)}$ . Then, using Claim 7.3,

(7.11) \begin{align} |{\{y\;:\;W(x,y)\gt 0\}}| = |{\{y\;:\;W(x,y)=1\}}| = \int _0^1 W(x,y)^2\,\textrm{d} y = \|W_x\|_2^2\le (3t)^{1/2}\lt 1. \end{align}

If, furthermore, $x\in \overline{\mathcal{S}}_{(7.6)}$ , this implies that there exists $y$ such that $W(x,y)=0$ and $(x,y)\in{\mathcal{S}}_{(7.6)}$ , and thus, in particular, $ \|W_x\|_2= (3t)^{1/4}$ . The result (7.10) follows by Claim 7.3.

Claim 7.5. For a.e. $(x,y)$ ,

(7.12) \begin{align} W(x,y)=1 \implies U_2(x,y)\gt 0. \end{align}

Proof. Let

(7.13) \begin{align} L_1\;:\!=\;{\{(x,y)\in (0, 1)^2 \;:\; y \text{ is a Lebesgue point of }y\mapsto W(x,y)\}}. \end{align}

Then $L_1$ is measurable. To see this observe that

(7.14) \begin{align} L_{1} & = \left \{(x,y)\in (0,1)^2 \;:\; \lim _{n\rightarrow \infty }f_{n}(x,y) = 0\right \}, \end{align}

where

(7.15) \begin{align} f_{n}(x,y) \;:\!=\; \frac{n}{2} \int _{-1/n}^{1/n} |W(x,y+t)-W(x,y)| \,\textrm{d} t . \end{align}

Since $\{f_{n}\;:\;n\geq 1\}$ is a sequence of measurable functions, $\limsup _{n}f_{n}$ is also measurable; moreover, as $f_{n}\geq 0$ , we have $L_{1}=\{(x,y)\in (0,1)^2\;:\;\limsup _{n}f_{n}(x,y)=0\}$ , which implies $L_1$ is measurable. Also, for any given $x$ , we have $(x,y)\in L_1$ for a.e. $y$ . Hence, by Fubini’s theorem, we have $|L_1|=1$ , that is, a.e. $(x,y)\in L_1$ . Now, assume that $(x,y)\in L_1$ , $(y,x)\in L_1$ and that $(x,y)$ is a Lebesgue point of the set $\{(s,t)\;:\;W(s,t)=1\}$ . (In particular, $W(x,y)=1$ .) Let $\delta \gt 0$ and let $I\;:\!=\;(x-\delta,x+\delta )$ and $J\;:\!=\;(y-\delta,y+\delta )$ . Then, if $\delta$ is small enough,

(7.16) \begin{align} |{\{s\in J\;:\;W(x,s)=0\}}|& \lt 0.1|J|, \end{align}
(7.17) \begin{align} |{\{t\in I\;:\;W(t, y)=0\}}|& \lt 0.1|I|, \end{align}
(7.18) \begin{align} |{\{(s,t)\in J\times I\;:\;W(s,t)=0\}}|& \lt 0.1|I||J| . \end{align}

Then $W(x,s)W(s,t)W(t,y)\gt 0$ for all $(s,t)$ in a subset of $J\times I$ of measure at least $0.7|I||J|\gt 0$ , and thus $U_2(x,y)\gt 0$ .

Claim 7.6. For a.e. $(x,y)$ ,

(7.19) \begin{align} W(x,y)=1- \textbf{1}{\{W_{x}=W_{y}\}}. \end{align}

Proof. Suppose $(x,y)\in{\mathcal{S}}_{(7.12)}\cap{\mathcal{S}}_{(4.15)}$ , and that $W(x,y)=1$ . Then $U_2(x,y)\gt 0$ by (7.12), and thus (4.15) yields

(7.20) \begin{align} \langle W_x,W_y\rangle =U_1(x,y) \lt (3t)^{1/2}. \end{align}

If, furthermore, $x\in \overline{\mathcal{S}}_{(7.10)}$ , it follows that $W_x\neq W_y$ .

On the other hand, if $(x,y)\in{\mathcal{S}}_{(7.6)}$ and $W(x,y)=0$ , then $W_x=W_y$ by (7.6).

In both cases, (7.19) holds, and thus, using (7.2), (7.19) holds a.e.

Since $W_{x}=W_{y}$ is an equivalence relation, there exists a partition (possibly infinite) of $[0,1]=\bigsqcup _{\alpha }B_{\alpha }$ such that if we define $\alpha (x)$ for $x\in [0,1]$ by $x\in B_{\alpha (x)}$ , then $W_x=W_y\iff \alpha (x)=\alpha (y)$ , for all $x,y\in [0,1]$ . Note that each $B_{\alpha }$ is measurable, since $x\mapsto W_x$ is. We can write (7.19) as

(7.21) \begin{align} W(x,y)= \textbf{1}{\{\alpha (x)\neq \alpha (y)\}}, \qquad \text{for a.e. } (x,y). \end{align}

Claim 7.7. For a.e. $x\in [0,1]$ ,

(7.22) \begin{align} |B_{\alpha (x)}|=1-(3t)^{1/2}. \end{align}

Proof. Suppose that $x\in \overline{\mathcal{S}}_{(7.19)} \cap \overline{\mathcal{S}}_{(7.2)}\cap \overline{\mathcal{S}}_{(7.10)}$ . Then,

(7.23) \begin{align} |B_{\alpha (x)}| &= \int _0^1 \textbf{1}{\{y\in B_{\alpha (x)}\}}\,\textrm{d} y = \int _0^1 \textbf{1}{\{W_y=W_x\}}\,\textrm{d} y = \int _0^1\bigl (1-W(x,y)\bigr )\,\textrm{d} y\notag \\[5pt] &= 1-\int _0^1 W(x,y)\,\textrm{d} y = 1-\int _0^1 W(x,y)^2\,\textrm{d} y = 1-(3t)^{1/2}. \end{align}

Figure 6. The graphon $W_{K}$ with $K=4$ .

Since $1-(3t)^{1/2}\gt 0$ by Claim 7.4, there can only be a finite number of parts $B_\alpha$ of measure $1-(3t)^{1/2}$ , and by Claim 7.7, they fill up $[0,1]$ except for a null set. Hence, Claim 7.7 and (7.21) imply that $W$ is a.e. equal to a complete multipartite graphon with equal part sizes (and thus finitely many parts). In other words, after a measure preserving transformation, $W$ equals a.e. the graphon $W_K$ defined as follows, see Figure 6. Given an integer $K \geq 1$ , partition the interval $[0, 1]$ into $K$ intervals $I_1, I_2, \ldots, I_K$ of equal length ${1}/{K}$ , and define

(7.24) \begin{align} W_{K}(x, y) \;:\!=\; \begin{cases} 0 & \text{ if } (x, y) \in \bigcup _{s=1}^K I_s \times I_s, \\[5pt] 1 & \;\text{otherwise.} \end{cases} \end{align}

Claim 7.8. Let $W$ be the complete multipartite graphon $W_K$ with $K\ge 2$ parts of equal sizes $1/K$ . Then (4.15) cannot hold.

Proof. Suppose $W_K$ satisfies (4.15) a.e. Then by Claim 7.7, each part must have size $1-(3t)^{1/2}$ , that is, $1-(3t)^{1/2} = 1/K$ , which yields

(7.25) \begin{align} t(C_{4},W_K)=\frac{(K-1)^2}{3K^2} . \end{align}

On the other hand, a direct calculation shows that

(7.26) \begin{align} t(C_{4},W_K)=\frac{(K-1)^4+(K-1)}{K^4}. \end{align}
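One way to verify (7.26) (a standard computation): since $W_K$ is $\{0,1\}$ -valued and constant on the blocks $I_a\times I_b$ , we have $t(C_{4},W_K)=K^{-4}\,\textrm{tr}\bigl ((J_K-I_K)^4\bigr )$ , where $J_K$ is the all-ones $K\times K$ matrix; the matrix $J_K-I_K$ has eigenvalues $K-1$ (once) and $-1$ (with multiplicity $K-1$ ), so the trace equals $(K-1)^4+(K-1)$ , which gives (7.26).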

We thus must have $\frac{(K-1)^2}{3K^2} = \frac{(K-1)^4+(K-1)}{K^4}$ , which, after multiplying both sides by $3K^4$ and factoring, simplifies to

(7.27) \begin{align} K(K-1)(2K^2-8K+9)=0, \end{align}

which is impossible. (The only real roots to (7.27) are $K=0$ and $K=1$ .)

Claim 7.8 gives the desired contradiction (note that $K=1$ is impossible, since $W_1\equiv 0$ would give $t=0$ , contradicting $t\gt 0$ ) and completes the proof of Theorem 4.8.

8. Proof of Theorem 4.10

The proof is similar to that of Theorem 4.8. Here we will denote $t\;:\!=\; t(K_{1, 2}, W)=\int d_W(x)^2\,\textrm{d} x$ . Suppose that $t\gt 0$ , $W\not \equiv 1$ , but that ${\textrm{Spec}}^-(W_{K_{1,2}})=\emptyset$ and $\sigma ^2_{K_{1,2},W}=0$ . Then (4.21) and (4.22) both hold by Lemma 4.9, and $W$ is random-free by Theorem 4.3, that is, $W(x,y)\in \{0,1\}$ for a.e. $(x,y)\in [0,1]^2$ . Now, recalling the definition of $W_{x}$ from (7.1), we have the following claim, which can be proved by arguments similar to Claims 7.1, 7.2, 7.3 and 7.4.

Claim 8.1. For a.e. $(x,y)\in [0,1]^2$ ,

(8.1) \begin{align} W(x,y)= 0 \implies W_{x}=W_{y}\text{ in }L^{2}[0,1]\text{ and }\|W_{x}\|_{2}=\|W_{y}\|_{2}=(3t)^{1/2}. \end{align}

Moreover, for a.e. $x\in [0,1]$ , $\|W_{x}\|_{2}=(3t)^{1/2}\lt 1$ .

Next, we have the analogue of Claim 7.5 for the 2-star.

Claim 8.2. For a.e. $(x,y)\in [0,1]^2$ ,

(8.2) \begin{align} W(x,y)=1\implies d_{W}(x)+d_{W}(y)\gt 0 . \end{align}

Proof. Similarly to the proof of Claim 7.5, for a.e. $(x,y)\in [0,1]^2$ such that $W(x,y)=1$ , we can choose $\delta \gt 0$ small enough such that for $J=(y-\delta,y+\delta )$ ,

(8.3) \begin{align} \left |\left \{s\in J\;:\;W(x,s)=0\right \}\right |\lt 0.1|J|. \end{align}

This implies that the set $\{s\in [0,1]\;:\;W(x,s)\gt 0\}$ has positive measure, and thus $d_W(x)\gt 0$ .

Now, as in Claim 7.6, it follows that for a.e. $(x,y)\in [0,1]^2$ ,

(8.4) \begin{align} W(x,y) = 1-\boldsymbol{1}\left \{W_{x}=W_{y}\right \}. \end{align}

As in the proof of Theorem 4.8, the equivalence relation $W_{x}=W_{y}$ defines a possibly infinite partition of $[0,1]=\bigsqcup _{\alpha }B_{\alpha }$ . For $x\in [0,1]$ define $\alpha (x)$ to be the index such that $x\in B_{\alpha (x)}$ . Then, by definition, $W_{x}=W_{y}\iff \alpha (x)=\alpha (y)$ , which by (8.4) yields, for a.e. $(x,y)\in [0,1]^2$ ,

(8.5) \begin{align} W(x,y) = \boldsymbol{1}\left \{\alpha (x)\neq \alpha (y)\right \} . \end{align}

Again, similarly to Claim 7.7 we have for a.e. $x\in [0,1]$ ,

(8.6) \begin{align} \left |B_{\alpha (x)}\right | = 1-3t . \end{align}

Note that by Claim 8.1, $1-3t\gt 0$ . Hence, by (8.6), there can only be a finite number of parts $B_{\alpha }$ of positive measure and the remaining parts have together measure $0$ . Therefore, by (8.5) and (8.6), we conclude that after a measure preserving transformation, $W$ must be of the form $W_{K}$ as defined in (7.24) for some $K\geq 1$ . Moreover, $K=1$ would give $W=W_{1}\equiv 0$ and thus $t=0$ ; since $t\gt 0$ , we have $K\gt 1$ .

Claim 8.3. Let $W=W_{K}$ for some $K\geq 2$ . Then (4.21) cannot hold.

Proof. Suppose $W_K$ satisfies (4.21) a.e. Then by (8.6), each part must have size $1-3t$ , that is, $1-3t = 1/K$ . In other words,

(8.7) \begin{align} t(K_{1, 2}, W_K) = \frac{K-1}{3K} . \end{align}

On the other hand, since $d_{W_K}(x) = \frac{K-1}{K}$ a.e.,

(8.8) \begin{align} t(K_{1, 2}, W_K) = \int _0^1 d_{W_K}(x)^2 \,\textrm{d} x = \frac{(K-1)^2}{K^2} . \end{align}

Thus we must have $\frac{K-1}{3K} = \frac{(K-1)^2}{K^2}$ , that is, $K = \frac{3}{2}$ , which is impossible.

Claim 8.3 gives a contradiction and completes the proof of Theorem 4.10.

Acknowledgements

The authors are grateful to an anonymous referee for their thoughtful comments.

Footnotes

BBB partly supported by NSF CAREER Grant DMS-2046393 and a Sloan research fellowship. SJ partly supported by the Knut and Alice Wallenberg Foundation.

1 Strictly speaking, $W_H$ is in general not a graphon in $\mathcal{W}_0$ because it can take values greater than 1. However, $W_H\in \mathcal{W}_1$ , and we still call it a graphon.

References

Austern, M. and Orbanz, P. (2022) Limit theorems for distributions invariant under groups of transformations. Ann. Stat. 50 1960–1991.
Barbour, A. D., Karoński, M. and Ruciński, A. (1989) A central limit theorem for decomposable random variables with applications to random graphs. J. Comb. Theory B 47 125–145.
Basak, A. and Mukherjee, S. (2017) Universality of the mean-field for the Potts model. Probab. Theory Related Fields 168(3) 557–600.
Bhattacharya, B. B. and Mukherjee, S. (2019) Monochromatic subgraphs in randomly colored graphons. Eur. J. Comb. 81 328–353.
Bhattacharya, B. B., Diaconis, P. and Mukherjee, S. (2017) Universal limit theorems in graph coloring problems with connections to extremal combinatorics. Ann. Appl. Prob. 27 337–394.
Bochner, S. (1933) Integration von Funktionen, deren Werte die Elemente eines Vektorraumes sind. Fund. Math. 20 262–276.
Boguná, M. and Pastor-Satorras, R. (2003) Class of correlated random networks with hidden variables. Phys. Rev. E 68 036112.
Bollobás, B., Janson, S. and Riordan, O. (2007) The phase transition in inhomogeneous random graphs. Random Struct. Algor. 31 3–122.
Borgs, C., Chayes, J. T., Lovász, L., Sós, V. T. and Vesztergombi, K. (2008) Convergent sequences of dense graphs I: Subgraph frequencies, metric properties and testing. Adv. Math. 219 1801–1851.
Borgs, C., Chayes, J. T., Lovász, L., Sós, V. T. and Vesztergombi, K. (2012) Convergent sequences of dense graphs II. Multiway cuts and statistical physics. Ann. Math. 176 151–219.
Chatterjee, S. and Diaconis, P. (2013) Estimating and understanding exponential random graph models. Ann. Stat. 41 2428–2461.
Chatterjee, S. and Varadhan, S. S. (2011) The large deviation principle for the Erdős-Rényi random graph. Eur. J. Comb. 32 1000–1017.
Delmas, J.-F., Dhersin, J.-S. and Sciauveau, M. (2021) Asymptotic for the cumulative distribution function of the degrees and homomorphism densities for random graphs sampled from a graphon. Random Struct. Algor. 58(1) 94–149.
Diaconis, P. and Freedman, D. (1981) On the statistics of vision: The Julesz conjecture. J. Math. Psychol. 24 112–138.
Dunford, N. and Schwartz, J. T. (1958) Linear Operators, Part I: General Theory. Wiley-Interscience.
Féray, V., Méliot, P.-L. and Nikeghbali, A. (2020) Graphons, permutons and the Thoma simplex: three mod-Gaussian moduli spaces. Proc. Lond. Math. Soc. 121 876–926.
Hladký, J., Pelekis, C. and Šileikis, M. (2021) A limit theorem for small cliques in inhomogeneous random graphs. J. Graph Theory 97 578–599.
Janson, S. (1990) A functional limit theorem for random graphs with applications to subgraph count statistics. Random Struct. Algor. 1(1) 15–37.
Janson, S. (1994) Orthogonal decompositions and functional limit theorems for random graph statistics. Memoirs Amer. Math. Soc. 111(534).
Janson, S. (1997) Gaussian Hilbert Spaces. Cambridge University Press.
Janson, S., Łuczak, T. and Rucinski, A. (2000) Random Graphs. John Wiley & Sons.
Janson, S. and Nowicki, K. (1991) The asymptotic distributions of generalized $U$ -statistics with applications to random graphs. Probab. Theory Related Fields 90(3) 341–375.
Kallenberg, O. (2002) Foundations of Modern Probability. 2nd ed. Springer.
Kaur, G. and Röllin, A. (2021) Higher-order fluctuations in dense random graph models. Electron. J. Probab. 26 1–36.
Lovász, L. (2012) Large Networks and Graph Limits. American Mathematical Society.
Lovász, L. and Szegedy, B. (2006) Limits of dense graph sequences. J. Comb. Theory B 96(6) 933–957.
Méliot, P. L. (2021) A central limit theorem for singular graphons. arXiv preprint arXiv:2103.15741.
Nowicki, K. (1989) Asymptotic normality of graph statistics. J. Stat. Plan. Infer. 21(2) 209–222.
Nowicki, K. and Wierman, J. C. (1988) Subgraph counts in random graphs using incomplete $U$ -statistics methods. Discrete Math. 72(1-3) 299–310.
Ruciński, A. (1988) When are small subgraphs of a random graph normally distributed? Probab. Theory Related Fields 78(1) 1–10.
Zhang, Z.-S. (2022) Berry–Esseen bounds for generalized $U$ -statistics. Electron. J. Probab. 27 1–36.