
Sandwiching biregular random graphs

Published online by Cambridge University Press:  06 June 2022

Tereza Klimošová
Affiliation:
Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
Christian Reiher
Affiliation:
Fachbereich Mathematik, Universität Hamburg, Hamburg, Germany
Andrzej Ruciński
Affiliation:
Faculty of Mathematics and Computer Science, Adam Mickiewicz University, Poznań, Poland
Matas Šileikis*
Affiliation:
Institute of Computer Science of the Czech Academy of Sciences, Prague, Czech Republic
*Corresponding author. Email: matas@cs.cas.cz

Abstract

Let ${\mathbb{G}(n_1,n_2,m)}$ be a uniformly random m-edge subgraph of the complete bipartite graph ${K_{n_1,n_2}}$ with bipartition $(V_1, V_2)$ , where $n_i = |V_i|$ , $i=1,2$ . Given a real number $p \in [0,1]$ such that $d_1 \,{:\!=}\, pn_2$ and $d_2 \,{:\!=}\, pn_1$ are integers, let $\mathbb{R}(n_1,n_2,p)$ be a random subgraph of ${K_{n_1,n_2}}$ with every vertex $v \in V_i$ of degree $d_i$ , $i = 1, 2$ . In this paper we determine sufficient conditions on $n_1,n_2,p$ and m under which one can embed ${\mathbb{G}(n_1,n_2,m)}$ into $\mathbb{R}(n_1,n_2,p)$ and vice versa with probability tending to 1. In particular, in the balanced case $n_1=n_2$ , we show that if $p\gg\log n/n$ and $1 - p \gg \left(\log n/n \right)^{1/4}$ , then for some $m\sim pn^2$ , asymptotically almost surely one can embed ${\mathbb{G}(n_1,n_2,m)}$ into $\mathbb{R}(n_1,n_2,p)$ , while for $p\gg\left(\log^{3} n/n\right)^{1/4}$ and $1-p\gg\log n/n$ the opposite embedding holds. As an extension, we confirm the Kim–Vu Sandwich Conjecture for degrees growing faster than $(n \log n)^{3/4}$ .


Type
Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2022. Published by Cambridge University Press

1. Introduction

1.1. History and motivation

The Sandwich Conjecture of Kim and Vu [6] claims that if $d \gg \log n$ , then for some sequences $p_1 = p_1(n) \sim d/n$ and $p_2 = p_2(n) \sim d/n$ there is a joint distribution of a random d-regular graph $\mathbb{R}(n,d)$ and two binomial random graphs $\mathbb{G}(n,p_1)$ and $\mathbb{G}(n,p_2)$ such that with probability tending to 1

\begin{equation*} \mathbb{G}(n,p_1) \subseteq \mathbb{R}(n,d) \subseteq \mathbb{G}(n,p_2).\end{equation*}

If true, the Sandwich Conjecture would essentially reduce the study of any monotone graph property of the random graph $\mathbb{R}(n,d)$ in the regime $d \gg \log n$ to the more manageable $\mathbb{G}(n,p)$ .

For $\log n \ll d \ll n^{1/3}(\log n)^{-2}$ , Kim and Vu proved the embedding $\mathbb{G}(n,p_1) \subseteq \mathbb{R}(n,d)$ as well as an imperfect embedding $\mathbb{R}(n,d) \setminus H \subseteq \mathbb{G}(n,p_2)$ , where H is a relatively sparse subgraph of $\mathbb{R}(n,d)$ . In [2] the lower embedding was extended to $d \ll n$ (and, in fact, to uniform hypergraph counterparts of the models $\mathbb{G}(n,p)$ and $\mathbb{R}(n,d)$ ). Recently, Gao, Isaev and McKay [4] proved a result which confirms the conjecture for $d \gg n/\sqrt{\log n}$ ([4] is the first paper that gives the (perfect) embedding $\mathbb{R}(n,d) \subseteq \mathbb{G}(n,p_2)$ for some range of d) and, subsequently, Gao [3] greatly extended this range to $d=\Omega(\log^7n)$ .

Initially motivated by a paper of Perarnau and Petridis [12] (see Section 9), we consider sandwiching for bipartite graphs, in which the natural counterparts of $\mathbb{G}(n,p)$ and $\mathbb{R}(n,d)$ are random subgraphs of the complete bipartite graph ${K_{n_1,n_2}}$ rather than of $K_n$ .

1.2. New results

We consider three models of random subgraphs of ${K_{n_1,n_2}}$ , the complete bipartite graph with bipartition $(V_1, V_2)$ , where $|V_1| = n_1, |V_2| = n_2$ . Given an integer $m \in [0, n_1n_2]$ , let ${\mathbb{G}(n_1,n_2,m)}$ be an m-edge subgraph of ${K_{n_1,n_2}}$ chosen uniformly at random (the bipartite Erdős–Rényi model). Given a number $p \in [0,1]$ , let ${\mathbb{G}(n_1,n_2,p)}$ be the binomial bipartite random graph where each edge of ${K_{n_1,n_2}}$ is included independently with probability p. Note that in the latter model, $pn_{3-i}$ is the expected degree of each vertex in $V_i$ , $i=1,2$ . If, in addition,

\begin{equation*} d_1 \,{:\!=}\, pn_2\qquad\text{and}\qquad d_2 \,{:\!=}\, pn_1\end{equation*}

are integers (we shall always make this implicit assumption), we let ${{\mathcal{R}}(n_1,n_2,p)}$ be the class of subgraphs of ${K_{n_1,n_2}}$ such that every $v \in V_i$ has degree $d_i$ , for $i = 1, 2$ (it is an easy exercise to show that ${{\mathcal{R}}(n_1,n_2,p)}$ is non-empty). We call such graphs p-biregular. Let $\mathbb{R}(n_1,n_2,p)$ be a random graph chosen uniformly from ${{\mathcal{R}}(n_1,n_2,p)}$ .
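As an aside, the non-emptiness of ${{\mathcal{R}}(n_1,n_2,p)}$ can be witnessed by an explicit construction. The following minimal Python sketch (one possible construction, not necessarily the exercise the text has in mind) assigns to the i-th vertex of $V_1$ a block of $d_1$ cyclically consecutive vertices of $V_2$; since $n_1 d_1 = d_2 n_2$, the blocks tile $V_2$ evenly and every vertex of $V_2$ receives degree exactly $d_2$.

```python
from collections import Counter

def biregular(n1, n2, d1):
    """Build a biregular bipartite graph: vertex i of V1 is joined to the block
    of d1 consecutive vertices of V2 starting at i*d1 (indices mod n2).
    Requires n2 to divide n1*d1, i.e. d2 = n1*d1/n2 to be an integer."""
    assert (n1 * d1) % n2 == 0
    return {(i, (i * d1 + k) % n2) for i in range(n1) for k in range(d1)}

# Example with p = 1/2: n1 = 6, n2 = 4, so d1 = 2 and d2 = 3.
E = biregular(6, 4, 2)
degV1 = Counter(i for i, j in E)
degV2 = Counter(j for i, j in E)
assert set(degV1.values()) == {2}  # every vertex of V1 has degree d1
assert set(degV2.values()) == {3}  # every vertex of V2 has degree d2
```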

In this paper we establish an embedding of $\mathbb{G}(n_1,n_2,m)$ into $\mathbb{R}(n_1,n_2,p)$ . This easily implies an embedding of ${\mathbb{G}(n_1,n_2,p)}$ into $\mathbb{R}(n_1,n_2,p)$ . Moreover, by taking complements, our result translates immediately to the opposite embedding of $\mathbb{R}(n_1,n_2,p)$ into $\mathbb{G}(n_1,n_2,m)$ (and thus into ${\mathbb{G}(n_1,n_2,p)}$ ). This idea was first used in [4] to prove $\mathbb{R}(n,d) \subseteq \mathbb{G}(n,p)$ for $p \gg \frac{1}{\sqrt{\log n}}$ . In particular, in the balanced case ( $n_1=n_2\,{:\!=}\,n$ ), we prove this opposite embedding for $p \gg \left( \log^3 n / n \right)^{1/4}$ .

The proof is far from a straightforward adaptation of the proof in [2]. The common aspect shared by the proofs is that the edges of $\mathbb{R}(n_1,n_2,p)$ are revealed in a random order, giving a graph process which turns out to be, for most of the time, similar to the basic Erdős–Rényi process that generates ${\mathbb{G}(n_1,n_2,m)}$ . The rest of the current proof is different in that it avoids using the configuration model. Instead, we focus on showing that both $\mathbb{R}(n_1,n_2,p)$ and its random t-edge subgraphs are pseudorandom. We achieve this by applying the switching method (when $\min \left\{ p, 1-p \right\}$ is small) and otherwise via the asymptotic enumeration of bipartite graphs with a given degree sequence proved in [1] (see Theorem 5).

In addition, for $p > 0.49$ , we rely on a non-probabilistic result about the existence of alternating cycles in 2-edge-coloured pseudorandom graphs (Lemma 20), which might be of separate interest.

Throughout the paper we assume that the underlying complete bipartite graph ${K_{n_1,n_2}}$ grows on both sides, that is, $\min\{n_1,n_2\} \to \infty$ , and any parameters (e.g., p, m), events, random variables, etc., are allowed to depend on $(n_1,n_2)$ . In most cases we will make the dependence on $(n_1,n_2)$ implicit, with all limits and asymptotic notation like $O, \Omega, \sim$ considered with respect to $\min\{n_1,n_2\} \to \infty$ . We say that an event $\mathcal E = \mathcal E(n_1,n_2)$ holds asymptotically almost surely (a.a.s.) if $\mathbb{P}\left(\mathcal E\right) \to 1$ .

Our main results are Theorem 2 below and its immediate Corollary 3. For a gentle start we first state an abridged version of both in the balanced case $n_1 = n_2 = n$ . Note that $pn^2$ is the number of edges in $\mathbb{R}(n,n,p)$ .

Theorem 1. If $p\gg\frac{\log n}{n}$ and $1 - p \gg \left( \frac{\log n}{n} \right)^{1/4}$ , then for some $m \sim pn^2$ there is a joint distribution of random graphs $\mathbb{G}(n,n,m)$ and $\mathbb{R}(n,n,p)$ such that

(1) \begin{equation} \mathbb{G}(n,n,m) \subseteq \mathbb{R}(n,n,p) \qquad \text{a.a.s.} \end{equation}

If $p\gg\left(\frac{\log^3 n}{n}\right)^{1/4}$ , then for some $m \sim pn^2$ there is a joint distribution of random graphs $\mathbb{G}(n,n,m)$ and $\mathbb{R}(n,n,p)$ such that

(2) \begin{equation} \mathbb{R}(n,n,p) \subseteq \mathbb{G}(n,n,m) \qquad \text{a.a.s.} \end{equation}

Moreover, in (1) and (2) one can replace $\mathbb{G}(n,n,m)$ by the binomial random graph $\mathbb{G}(n,n,p')$ , for some $p' \sim p$ .

The condition $p \gg (\log n)/n$ is necessary for (1) to hold with $m \sim pn^2$ (see Remark 26), since otherwise the maximum degree of $\mathbb{G}(n,n,m)$ is a.a.s. no longer $pn(1+o(1))$ . We believe that the maximum degree is, roughly speaking, the only obstacle to embedding $\mathbb{G}(n,n,m)$ (or $\mathbb{G}(n,n,p')$ ) into $\mathbb{R}(n,n,p)$ ; see Conjecture 1 in Section 10.

We further write $N \,{:\!=}\, n_1n_2$ , $q \,{:\!=}\, 1 - p$ , $\hat n \,{:\!=}\, \min \left\{ n_1, n_2 \right\}$ , $\hat p \,{:\!=}\, \min \{p, q\}$ , and let

(3) \begin{equation} \mathbb{I} \,{:\!=}\,\mathbb{I}(n_1,n_2,p)= \begin{cases} 1, & \hat p < 2\dfrac{n_1n_2^{-1} + n_1^{-1}n_2}{\log N}\\ \\[-8pt] 0, & \hat p \ge 2\dfrac{n_1n_2^{-1} + n_1^{-1}n_2}{\log N}. \end{cases}\end{equation}

Note that $\mathbb{I}=0$ entails that the vertex classes are rather balanced: the ratio of their sizes cannot exceed $\tfrac 1 4 \log N$ . Moreover, $\mathbb{I}=0$ implies that $\hat p \ge 4/\log N$ .

Theorem 2. For every constant $C > 0$ , there is a constant $C^*$ such that whenever the parameter $p \in [0,1]$ satisfies

(4) \begin{equation}q \ge 680\left( \frac{3(C + 4)\log N}{\hat n}\right)^{1/4} \end{equation}

and

(5) \begin{equation}1\ge\gamma \,{:\!=}\, \begin{cases} C^* \left( p^2 \mathbb{I} + \sqrt{\frac{\log N}{p\hat n }}\right), \quad & p \le 0.49 \\ \\[-8pt] C^* \left( q^{3/2}\mathbb{I} + \left(\frac{\log N}{\hat n} \right)^{1/4} + \frac1q\sqrt{\frac{\log N}{\hat n}} \log \frac{\hat n}{\log N}\right), \quad & p > 0.49, \end{cases}\end{equation}

there is, for $m \,{:\!=}\, \lceil(1-\gamma)pN\rceil$ , a joint distribution of random graphs ${\mathbb{G}(n_1,n_2,m)}$ and $\mathbb{R}(n_1,n_2,p)$ such that

(6) \begin{equation} \mathbb{P}\left({\mathbb{G}(n_1,n_2,m)} \subseteq \mathbb{R}(n_1,n_2,p)\right) = 1 - O(N^{-C}). \end{equation}

If, in addition $\gamma \le 1/2$ , then for $p' \,{:\!=}\, (1 - 2\gamma)p$ , there is a joint distribution of $\mathbb{G}(n_1,n_2,p')$ and $\mathbb{R}(n_1,n_2,p)$ such that

(7) \begin{equation} \mathbb{P}\left(\mathbb{G}(n_1,n_2,p') \subseteq \mathbb{R}(n_1,n_2,p)\right) = 1 - O(N^{-C}). \end{equation}

By inflating $C^*$ , the constant $0.49$ in (5) can be replaced by any constant smaller than $1/2$ . It can be shown (see Remark 27 in Section 10) that if $p \le 1/4$ and $\gamma = \Theta\left( \sqrt{\frac{\log N}{p\hat n }} \right)$ , then $\gamma$ has optimal order of magnitude. For more remarks about the conditions of Theorem 2 and the role of the indicator $\mathbb{I}$ , see Section 10.

By taking the complements of $\mathbb{R}(n_1,n_2,p)$ and $\mathbb{G}(n_1,n_2,m)$ and swapping p and q, we immediately obtain the following consequence of Theorem 2 which provides the opposite embedding.

Corollary 3. For every constant $C > 0$ there is a constant $C^*$ such that whenever the parameter $p \in [0,1]$ satisfies

(8) \begin{equation} p \ge 680\left( 3(C + 4)\hat n^{-1}\log N\right)^{1/4} \end{equation}

and

\begin{equation*} 1 \ge \bar\gamma\,{:\!=}\, \begin{cases} C^*\left( p^{3/2}\mathbb{I} + \left(\frac{\log N}{\hat n} \right)^{1/4} + \frac 1p\sqrt{\frac{\log N}{\hat n}} \log \frac{\hat n}{\log N}\right), \quad p < 0.51, \\ \\[-9pt] C^*\left( q^2 \mathbb{I} + \sqrt{\frac{\log N}{q\hat n}}\right), \quad p \ge 0.51, \end{cases} \end{equation*}

there is, for $\bar m=\lfloor(p+\bar\gamma q)N\rfloor$ , a joint distribution of random graphs $\mathbb{G}(n_1,n_2,\bar m)$ and $\mathbb{R}(n_1,n_2,p)$ such that

(9) \begin{equation} \mathbb{P}\left(\mathbb{R}(n_1,n_2,p) \subseteq \mathbb{G}(n_1,n_2,\bar m)\right)=1 - O(N^{-C}). \end{equation}

If, in addition, $\bar \gamma \le 1/2$ , then for $p'' \,{:\!=}\, p + 2\bar{\gamma} q$ there is a joint distribution of $\mathbb{G}(n_1,n_2,p'')$ and $\mathbb{R}(n_1,n_2,p)$ such that

(10) \begin{equation} \mathbb{P}\left(\mathbb{R}(n_1,n_2,p) \subseteq \mathbb{G}(n_1,n_2,p'')\right) = 1 - O(N^{-C}). \end{equation}

Proof. The assumptions of Corollary 3 yield the assumptions of Theorem 2 with $\gamma=\bar\gamma$ and with p and q swapped. Note also that

\begin{equation*} m = \lceil(1-\bar\gamma)qN\rceil =\lceil N-pN-\bar\gamma q N\rceil= N - \lfloor(p+\bar\gamma q)N\rfloor=N-\bar m. \end{equation*}

Thus, by Theorem 2, with probability $1 - O(N^{-C})$ we have $\mathbb{G}(n_1,n_2,N - \bar m) \subseteq \mathbb{R}(n_1,n_2,q)$ , which, by taking complements, translates into (9). Similarly

\begin{equation*} p' = (1 - 2\bar \gamma)q = 1 - (p + 2\bar \gamma q) = 1 - p''. \end{equation*}

Thus, by Theorem 2, with probability $1 - O(N^{-C})$ we have $\mathbb{G}(n_1,n_2,1-p'') \subseteq \mathbb{R}(n_1,n_2,q)$ , which, by taking complements, yields embedding (10).
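The rounding identity $m = N - \bar m$ used in the proof above can be sanity-checked numerically; a quick sketch (the parameter values are arbitrary):

```python
import math

# With m = ceil((1 - g)*q*N) and m_bar = floor((p + g*q)*N), we have m = N - m_bar,
# since (1 - g)*q*N = N - (p + g*q)*N and N is an integer.
for (p, g, N) in [(0.3, 0.1, 101), (0.7, 0.05, 1234), (0.5, 0.2, 999)]:
    q = 1 - p
    m = math.ceil((1 - g) * q * N)
    m_bar = math.floor((p + g * q) * N)
    assert m == N - m_bar
```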

Proof of Theorem 1. We apply Theorem 2 and Corollary 3 with $C=1$ and the corresponding $C^*$ . Note that for $n_1=n_2=n$ , the ratio in (3) equals $4/\log N = 2/\log n$ , so

(11) \begin{equation}\mathbb{I}=1\quad\text{ if and only if }\quad\hat p<2/\log n.\end{equation}

To prove (1), assume $p\gg\log n/n$ and $q \gg \left( \log n/n \right)^{1/4}$ and apply Theorem 2. Note that condition (4) holds and it is straightforward to check that, regardless of whether $p \le 0.49$ or $p > 0.49$ , $\gamma \to 0$ . In particular, $\gamma \le 1/2$ . We conclude that, indeed, embedding (1) holds with $m = \lceil (1 - \gamma)p N \rceil \sim pn^2$ and (1) still holds if we replace $\mathbb{G}(n,n,m)$ by $\mathbb{G}(n,n,p')$ with $p' = (1 - 2\gamma)p \sim p$ .

For (2) first note that when $p \to 1$ , embedding (2) holds trivially with $m = n^2$ (though a nontrivial embedding follows in this case under the additional assumption $q \gg \log n/n$ ). Hence, we may further assume $q = \Omega(1)$ and $p\gg\left(\log^3 n / n\right)^{1/4}$ , and apply Corollary 3. Note that condition (8) holds. It is routine to check, taking into account (11), that $\bar\gamma\le 1/2$ and, moreover, $\bar\gamma q = o(p)$ . We conclude that (2) holds with $m = \lfloor(p + \bar\gamma q)N \rfloor \sim Np = n^2p$ and that (2) still holds if $\mathbb{G}(n,n,m)$ is replaced by $\mathbb{G}(n,n,p'')$ with $p'' = p + 2 \bar \gamma q \sim p$ .

1.3. A note on the second version of the manuscript

This project was initially aimed at extending the result in [2] to bipartite graphs and, thus, limited to the lower embedding $\mathbb{G}(n,n,p_1) \subseteq \mathbb{R}(n,n,p)$ only. While it was in progress, Gao, Isaev and McKay [4] made an improvement on the Sandwich Conjecture by using a surprisingly fruitful idea of taking complements to obtain the upper embedding $\mathbb{R}(n,d) \subseteq \mathbb{G}(n,p_2)$ directly from the lower embedding $\mathbb{G}(n,p_1) \subseteq \mathbb{R}(n,d)$ . We then decided to borrow this idea (but nothing else) and strengthen some of our lemmas to get a significantly broader range of p for which the upper embedding (i.e., Corollary 3) holds. It turned out that our approach works for non-bipartite regular graphs, too. Therefore, prompted by the recent substantial progress of Gao [3] on the Sandwich Conjecture (which appeared on arXiv after the first version of this manuscript), in the current version of the manuscript we added Section 7, which outlines how to modify our proofs to get a corresponding sandwiching for non-bipartite graphs. This improves upon the results in [4] (for regular graphs), but is now superseded by [3].

1.4. Organisation

In Section 2 we introduce the notation and tools used throughout the paper: the switching technique, probabilistic inequalities and an enumeration result for bipartite graphs with a given degree sequence. In Section 3 we state a crucial Lemma 6 and show how it implies Theorem 2.

In Section 4 we give a proof of Lemma 6 based on two technical lemmas, one about the concentration of a degree-related parameter (Lemma 9), the other (Lemma 10) facilitating the switching technique used in the proof of Lemma 6. Lemma 9 is proved in Section 5, after giving some auxiliary results establishing the concentration of degrees and co-degrees in $\mathbb{R}(n_1,n_2,p)$ as well as in its conditional versions. In Section 6, we present a proof of Lemma 10, preceded by a purely deterministic result about alternating cycles in 2-edge-coloured pseudorandom graphs. We defer some technical but straightforward results and their proofs (e.g., the proof of Claim 7) to Section 8. A flowchart of the results ultimately leading to the proof of Theorem 2 is presented in Figure 1.

Figure 1. The structure of the proof of Theorem 2. An arrow from statement A to statement B means that A is used in the proof of B. The numbers in the brackets point to the section where a statement is formulated and where it is proved (unless the proof follows the statement immediately); external results have instead an article reference in square brackets.

The contents of Section 7 were already described (see Subsection 1.3 above). Section 9 contains an application of our main Theorem 2, which was part of the motivation for our research. In Section 10 we present some concluding remarks and our version of the (bipartite) sandwiching conjecture.

2. Preliminaries

2.1. Notation

Recall that

\begin{equation*} d_1 = pn_2\qquad\text{and}\qquad d_2 = pn_1\end{equation*}

are the degrees of vertices in a p-biregular graph, and thus, the number of edges in any p-biregular graph $H \in {{\mathcal{R}}(n_1,n_2,p)}$ is

\begin{equation*} M\,{:\!=}\,pN\qquad\text{where}\qquad N=n_1n_2.\end{equation*}

Throughout the proofs we also use shorthand notation

(12) \begin{equation} q = 1 - p, \quad \hat p = \min \left\{ p, q \right\}, \quad \hat n = \min \left\{ n_1,n_2 \right\},\end{equation}

and $[n] \,{:\!=}\, \left\{ 1, \dots, n \right\}$ . All logarithms appearing in this paper are natural.

By $\Gamma_{G}(v)$ we denote the set of neighbours of a vertex v in a graph G.

2.2. Switchings

The switching technique is used to compare the sizes of two classes of graphs, say ${\mathcal{R}}$ and ${\mathcal{R}}'$ , by defining an auxiliary bipartite graph $B\,{:\!=}\,B({\mathcal{R}},{\mathcal{R}}')$ , in which two graphs $H \in {\mathcal{R}}$ , $H' \in {\mathcal{R}}'$ are connected by an edge whenever H can be transformed into $H'$ by some operation (a forward switching) that deletes and/or creates some edges of H. By counting the number of edges of $B({\mathcal{R}},{\mathcal{R}}')$ in two ways, we see that

(13) \begin{equation} \sum_{H\in{\mathcal{R}}}\deg_B(H) = \sum_{H'\in{\mathcal{R}}'}\deg_B(H'),\end{equation}

which easily implies that

(14) \begin{equation}\frac{\min_{H' \in {\mathcal{R}}'} \deg_B(H')}{\max_{H \in {\mathcal{R}}} \deg_B(H)} \le \frac{|{\mathcal{R}}|}{|{\mathcal{R}}'|} \le \frac{\max_{H' \in {\mathcal{R}}'} \deg_B(H')}{\min_{H \in {\mathcal{R}}} \deg_B(H)}.\end{equation}
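Identity (13) and the resulting bounds (14) are elementary double counting; the following small Python sketch illustrates them (an arbitrary random bipartite graph stands in for the switching graph B, padded so that every vertex has degree at least 1):

```python
import random

random.seed(0)
R, Rp = range(5), range(7)  # the two classes of "graphs", here just labels
# a random bipartite "switching graph" B between R and R'; extra edges
# guarantee minimum degree at least 1 on both sides
E = {(h, hp) for h in R for hp in Rp if random.random() < 0.6}
E |= {(h, h % 7) for h in R} | {(hp % 5, hp) for hp in Rp}
deg_R = {h: sum(1 for (a, b) in E if a == h) for h in R}
deg_Rp = {hp: sum(1 for (a, b) in E if b == hp) for hp in Rp}
# (13): both degree sums count the edges of B
assert sum(deg_R.values()) == sum(deg_Rp.values()) == len(E)
# (14): |R|/|R'| is sandwiched between the two degree ratios
ratio = len(R) / len(Rp)
assert min(deg_Rp.values()) / max(deg_R.values()) <= ratio
assert ratio <= max(deg_Rp.values()) / min(deg_R.values())
```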

The reverse operation, mapping $H' \in {\mathcal{R}}'$ to its neighbours in the graph B, is called a backward switching. Usually, one defines the forward switching in such a way that the backward switching can be easily described.

All switchings used in this paper follow the same pattern. For a fixed graph $G \subseteq K$ (possibly empty), where $K \,{:\!=}\, {K_{n_1,n_2}}$ , the families ${\mathcal{R}}, {\mathcal{R}}'$ will be subsets of

(15) \begin{equation} {\mathcal{R}}_G \,{:\!=}\, \{ H \in {\mathcal{R}}(n_1,n_2,p)\ \, :\, \ G \subseteq H \}.\end{equation}

Every $H \in {\mathcal{R}}_G$ will be interpreted as a blue-red colouring of the edges of $K\setminus G$ : those in $H\setminus G$ will be coloured blue and those in $K \setminus H$ red. Given $H\in{\mathcal{R}}$ , consider a subset S of the edges of $K\setminus G$ in which for every vertex $v \in V_1 \cup V_2$ the blue degree equals the red degree, i.e., $\deg_{(H\setminus G)\cap S}(v)=\deg_{(K\setminus H)\cap S}(v)$ . Then switching the colours within S produces another graph $H' \in {\mathcal{R}}_G$ . Formally, $E(H') = E(H) \triangle S$ . In each application of the switching technique, we will restrict the choices of S to make sure that $H' \in {\mathcal{R}}'$ .
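In code, a single switching is just a symmetric difference of edge sets. A minimal Python sketch (a toy example on $K_{2,2}$ with G empty, not part of any proof): S is an alternating 4-cycle, so the blue and red degrees agree at every vertex, and flipping the colours along S preserves all degrees.

```python
from collections import Counter

def switch(H, S):
    """Apply a switching: flip edge membership on S, i.e. H' = H symmetric-difference S."""
    return H ^ S

# H is the blue matching {u1v1, u2v2}; S is the alternating 4-cycle on all
# four edges of K_{2,2} (two blue edges and two red edges).
H = {("u1", "v1"), ("u2", "v2")}
S = {("u1", "v1"), ("u1", "v2"), ("u2", "v1"), ("u2", "v2")}
H2 = switch(H, S)

def degrees(edges):
    c = Counter()
    for u, v in edges:
        c[u] += 1
        c[v] += 1
    return c

assert degrees(H) == degrees(H2)               # the switching preserves every degree
assert H2 == {("u1", "v2"), ("u2", "v1")}      # ... and yields the other perfect matching
```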

As an elementary illustration of this technique, which nevertheless turns out to be useful in Section 6, we prove here the following result. A cycle in $K\setminus G$ is alternating if it is a union of a blue matching and a red matching. Note that the definition depends on H. We will omit mentioning this dependence, as H will always be clear from the context. Given $e \in K\setminus G$ , set

(16) \begin{equation} {\mathcal{R}}_{G,e} \,{:\!=}\, \left\{ H \in {\mathcal{R}}_G\ \, :\, \ e \in H \right\}\qquad\text{and }\qquad {\mathcal{R}}_{G,\neg e} \,{:\!=}\, \left\{ H \in {\mathcal{R}}_G\ \, :\, \ e \not\in H \right\}.\end{equation}

Proposition 4. Let a graph $G\subseteq K$ be such that ${\mathcal{R}}_G\neq\emptyset$ and let $e\in K\setminus G$ . Assume that for some number $D > 0$ and every $H\in {\mathcal{R}}_{G}$ the edge e is contained in an alternating cycle of length at most 2D. Then ${\mathcal{R}}_{G,\neg e}\neq\emptyset$ , ${\mathcal{R}}_{G, e}\neq\emptyset$ , and

\begin{equation*} \frac{1}{N^D - 1} \le \frac{|{\mathcal{R}}_{G,\neg e}|}{|{\mathcal{R}}_{G,e}|} \le N^D - 1. \end{equation*}

Proof. Let $B = B({\mathcal{R}}_{G,e}, {\mathcal{R}}_{G,\neg e})$ be the switching graph corresponding to the following forward switching: choose an alternating cycle S of length at most 2D containing e and switch the colours of the edges within S. The backward switching does precisely the same. Note that, by the assumption, the minimum degree $\delta(B)$ is at least 1.

Since ${\mathcal{R}}_G={\mathcal{R}}_{G, e}\cup{\mathcal{R}}_{G, \neg e} \neq \emptyset$ , one of the classes ${\mathcal{R}}_{G, e}$ and ${\mathcal{R}}_{G, \neg e}$ is non-empty and, in view of $\delta(B) \ge 1$ , the other one is non-empty as well. For $\ell = 2, \dots, D$ , the number of cycles of length $2\ell$ containing e is, crudely, at most $n_1^{\ell - 1} n_2^{\ell - 1} = N^{\ell - 1}$ , hence the maximum degree is

\begin{equation*} \Delta(B)\le N + N^2 + \cdots + N^{D - 1} \le N^D - 1. \end{equation*}

Thus, by (14), we obtain the claimed bounds.

2.3. Probabilistic inequalities

We first state a few basic concentration inequalities that we apply in our proofs. For a sum X of independent Bernoulli (not necessarily identically distributed) random variables, writing $\mu = \operatorname{\mathbb{E}} X$ , we have (see Theorem 2.8, (2.5), (2.6), and (2.11) in [5]) that

(17) \begin{equation} \mathbb{P}\left(X \ge \mu + t\right) \le \exp \left\{ - \frac{t^2}{2\left( \mu + t/3 \right)} \right\}, \qquad t \ge 0, \end{equation}
(18) \begin{equation} \mathbb{P}\left(X \le \mu - t\right) \le \exp \left\{ - \frac{t^2}{2\mu} \right\}, \qquad t \ge 0. \end{equation}
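As a quick empirical illustration of the upper-tail bound (17), here is a Monte Carlo sketch with arbitrary parameters (of course, a simulation is not a proof):

```python
import math
import random

random.seed(3)
n, p, t, trials = 200, 0.1, 15, 10000
mu = n * p
# empirical frequency of the upper-tail event {X >= mu + t} for X ~ Bin(n, p),
# a sum of n independent Bernoulli(p) variables
hits = sum(1 for _ in range(trials)
           if sum(random.random() < p for _ in range(n)) >= mu + t)
bound = math.exp(-t ** 2 / (2 * (mu + t / 3)))
assert hits / trials <= bound  # the observed frequency respects (17)
```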

Let $\Gamma$ be a set of size $|\Gamma|=g$ and let $A \subseteq \Gamma$ , $|A|=a \ge1$ . For an integer $r \in [0,g]$ , choose uniformly at random a subset $R \subseteq \Gamma$ of size $|R|=r$ . The random variable $Y = |A \cap R|$ has then the hypergeometric distribution $\operatorname{Hyp}(g, a, r)$ with expectation $\mu \,{:\!=}\, \operatorname{\mathbb{E}} Y = ar/g$ . By Theorem 2.10 in [5], inequalities (17) and (18) hold for Y, too.

Moreover, by Remark 2.6 in [5], inequalities (17) and (18) also hold for a random variable Z which has the Poisson distribution $\operatorname{Po}(\mu)$ with expectation $\mu$ . In this case, we also have the following simple fact. For $k\ge0$ , set $q_k=\mathbb{P}\left(Z=k\right)$ . Then $q_k/q_{k-1}=\mu/k$ , and hence $k = \lfloor\mu\rfloor$ maximises $q_k$ (we say that such k is a mode of Z). Since $\operatorname{Var} Z = \mu$ , by Chebyshev’s inequality, $\mathbb{P}\left(|Z-\mu|<\sqrt{2\mu}\right)\ge1/2$ . Moreover, the interval $(\mu - \sqrt{2 \mu}, \mu + \sqrt{2 \mu})$ contains at most $\lceil \sqrt{8\mu}\:\rceil$ integers, hence it follows that

(19) \begin{equation} q_{\lfloor \mu \rfloor} \ge \frac{1/2}{\lceil \sqrt{8\mu}\: \rceil}.\end{equation}
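Both the mode property and the lower bound (19) are easy to check numerically; a small sketch (a handful of arbitrary values of $\mu$):

```python
import math

def poisson_pmf(k, mu):
    """q_k = P(Z = k) for Z ~ Po(mu)."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

for mu in [1.0, 2.5, 7.0, 40.0]:
    k0 = math.floor(mu)
    # k = floor(mu) is a mode: the ratio q_k / q_{k-1} = mu / k crosses 1 there
    assert all(poisson_pmf(k0, mu) >= poisson_pmf(k, mu)
               for k in range(0, int(4 * mu) + 5))
    # the bound (19): q_{floor(mu)} >= (1/2) / ceil(sqrt(8*mu))
    assert poisson_pmf(k0, mu) >= 0.5 / math.ceil(math.sqrt(8 * mu))
```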

2.4. Asymptotic enumeration of dense bipartite graphs

To estimate the co-degrees of $\mathbb{R}(n,n,p)$ we will use the following asymptotic formula by Canfield, Greenhill and McKay [1]. We reformulate it slightly for our convenience.

Given two vectors ${\mathbf{d}}_1 = (d_{1,v}, v \in V_1)$ and ${\mathbf{d}}_2 = (d_{2,v}, v \in V_2)$ of positive integers such that $\sum_{v \in V_1} d_{1, v} = \sum_{v \in V_2} d_{2,v}$ , let ${{\mathcal{R}}({\mathbf{d}}_1,{\mathbf{d}}_2)}$ be the class of bipartite graphs on $(V_1, V_2)$ with vertex degrees $\deg(v) = d_{i,v}, v \in V_i, i = 1,2$ . Let $|V_i| = n_i$ and write $\bar d_i = {n_i}^{-1}\sum_{v \in V_i} d_{i,v}$ , $D_i = \sum_{v \in V_i} (d_{i,v} - \bar d_i)^2$ , $p = \bar d_1/n_2 = \bar d_2/n_1$ , and $q = 1 - p$ .

Theorem 5. ([1]) Given any positive constants a, b, C such that $a+b < 1/2$ , there exists a constant $\epsilon > 0$ so that the following holds. Consider the set of degree sequences ${\mathbf{d}}_1,{\mathbf{d}}_2$ satisfying

  i. $\max_{v \in V_1}|d_{1,v} - \bar d_1| \le Cn_2^{1/2 + \epsilon}$ , $\max_{v \in V_2} |d_{2,v} - \bar d_2| \le Cn_1^{1/2 + \epsilon}$ ;

  ii. $\max \{n_1, n_2\} \le C(pq)^2(\min \left\{ n_1, n_2 \right\})^{1 + \epsilon}$ ;

  iii. $ \frac{(1 - 2p)^2}{4pq} \left( 1 + \frac{5n_1}{6n_2} + \frac{5n_2}{6n_1} \right) \le a \log \max \left\{ n_1, n_2 \right\}$ .

If $\min\{n_1, n_2\} \to \infty$ , then uniformly for all such ${\mathbf{d}}_1,{\mathbf{d}}_2$

(20) \begin{align} |{\mathcal{R}}({\mathbf{d}}_1,{\mathbf{d}}_2)| &= \binom{n_1n_2}{p n_1n_2}^{-1} \prod_{v \in V_1}\binom {n_2}{d_{1,v}} \prod_{v \in V_2}\binom {n_1}{d_{2,v}} \times \nonumber\\&\quad \times \exp \left[ -\frac{1}{2}\left( 1- \frac{D_1}{pqn_1n_2} \right)\left(1- \frac{D_2}{pqn_1n_2} \right) + O\left((\max\left\{ n_1,n_2 \right\})^{-b}\right) \right],\end{align}

where the constants implicit in the error term may depend on a,b,C.

Note that condition (ii) of Theorem 5 implies the corresponding condition in [1] after adjusting $\epsilon$ . Also, the uniformity of the bound is not explicitly stated in [1], but, given $n_1, n_2$ , one should take ${\mathbf{d}}_1, {\mathbf{d}}_2$ with the worst error and apply the result in [1].

3. A crucial lemma

3.1. The set-up

Recall that $K = {K_{n_1,n_2}}$ has $N = n_1n_2$ edges. Consider a sequence of graphs $\mathbb{G}(t) \subseteq K$ for $t = 0, \dots , N$ , where $\mathbb{G}(0)$ is empty and, for $t < N$ , $\mathbb{G}(t + 1)$ is obtained from $\mathbb{G}(t)$ by adding an edge $\varepsilon_{t + 1}$ chosen from $K \setminus \mathbb{G}(t)$ uniformly at random, that is, for every graph $G \subseteq K$ of size t and every edge $e \in K \setminus G$

(21) \begin{equation}\mathbb{P}\left({\varepsilon_{t+1} = e}\,|\,{\mathbb{G}(t) = G}\right) = \frac{1}{N - t}.\end{equation}

Of course, $(\varepsilon_1, \dots, \varepsilon_N)$ is just a uniformly random ordering of the edges of K.

Our approach to proving Theorem 2 is to represent the random regular graph $\mathbb{R}(n_1,n_2,p)$ as the outcome of a random process which behaves similarly to $(\mathbb{G}(t))_t$ . Recalling that a p-biregular graph has $M = pN$ edges, let

\begin{equation*}(\eta_1, \dots, \eta_M)\end{equation*}

be a uniformly random ordering of the edges of $\mathbb{R}(n_1,n_2,p)$ . By taking the initial segments, we obtain a sequence of random graphs

\begin{equation*} \mathbb{R}(t) = \{\eta_1, \dots, \eta_t\}, \qquad t = 0, \dots, M.\end{equation*}

For convenience, we shorten

\begin{equation*} \mathbb{R}\,{:\!=}\, \mathbb{R}(M) = \mathbb{R}(n_1,n_2,p)\;.\end{equation*}

Let us mention here that for a fixed $H \in {{\mathcal{R}}(n_1,n_2,p)}$ , conditioning on $\mathbb{R} = H$ , the edge set of the random subgraph $\mathbb{R}(t)$ is a uniformly random t-element subset of the edge set of H. This observation often leads to a hypergeometric distribution and will be utilised several times in our proofs.
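This observation can be illustrated with a short Monte Carlo sketch (a toy 2-regular H, not part of the proof): conditionally on $\mathbb{R} = H$, the edge set of $\mathbb{R}(t)$ is a uniform t-subset of $E(H)$, so the degree of a fixed vertex in $\mathbb{R}(t)$ is hypergeometric, with mean $\deg_H(v)\, t/M$.

```python
import random

# Toy H: a 2-regular bipartite graph on 3+3 vertices (a 6-cycle), M = 6 edges.
# Truncating a uniform ordering of E(H) at t gives a uniform t-subset, so
# deg_{R(t)}(v) ~ Hyp(M, deg_H(v), t).
H = [(0, "a"), (0, "b"), (1, "b"), (1, "c"), (2, "c"), (2, "a")]
M, t, trials = len(H), 3, 10000
random.seed(1)
total = 0
for _ in range(trials):
    Rt = random.sample(H, t)                   # the first t edges of a uniform ordering
    total += sum(1 for e in Rt if e[0] == 0)   # degree of vertex 0 in R(t)
mean = total / trials
assert abs(mean - 2 * t / M) < 0.05            # hypergeometric mean a*r/g = 2*3/6 = 1
```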

We say that a graph G with t edges is admissible, if the family ${\mathcal{R}}_G$ (see definition (15)) is non-empty, or, equivalently,

\begin{equation*} \mathbb{P}\left(\mathbb{R}(t) = G\right) > 0.\end{equation*}

For an admissible graph G with t edges and any edge $ e \in K \setminus G$ , let

(22) \begin{equation}p_{t+1}(e,G) \,{:\!=}\, \mathbb{P}\left({\eta_{t+1} = e}\,|\,{\mathbb{R}(t) = G}\right).\end{equation}

The conditional space underlying (22) can be described as first extending G uniformly at random to an element of ${\mathcal{R}}_G$ and then randomly permuting the new $M-t$ edges.

The main idea behind the proof of Theorem 2 is that the conditional probabilities in (22) behave similarly to those in (21). Observe that $p_{t+1}(e,\mathbb{R}(t)) = \mathbb{P}\left({\eta_{t+1} = e}\,|\,{\mathbb{R}(t)}\right)$ . Given a real number $\chi \ge 0$ and $t \in \{0, \dots, M - 1\}$ , we define an $\mathbb{R}(t)$ -measurable event

\begin{equation*} {\mathcal{A}}(t,\chi) \,{:\!=}\, \left\{ p_{t+1}(e,\mathbb{R}(t)) \ge \frac{1 - \chi}{N - t} \text{ for every } e \in K \setminus \mathbb{R}(t) \right\}.\end{equation*}

In the crucial lemma below, we are going to show that for suitably chosen $\gamma_0, \gamma_1, \dots$ , a.a.s. the events ${\mathcal{A}}(t,\gamma_t)$ occur simultaneously for all $t = 0,\dots,t_0-1$ where $t_0$ is quite close to M. Postponing the choice of $\gamma_t$ and $t_0$ , we define an event

(23) \begin{equation} {\mathcal{A}} \,{:\!=}\, \bigcap_{t = 0}^{ t_0 - 1}{\mathcal{A}}(t, \gamma_t).\end{equation}

Intuitively, the event ${\mathcal{A}}$ asserts that up to time $t_0$ the process $\mathbb{R}(t)$ stays ‘almost uniform’, which will enable us to embed ${\mathbb{G}(n_1,n_2,m)}$ into $\mathbb{R}(t_0)$ .

To define the time $t_0$ , it is convenient to parametrise the time by the proportion of edges of $\mathbb{R}$ ‘not yet revealed’ after t steps. To this end, we set

(24) \begin{equation} \tau = \tau(t) \,{:\!=}\, 1 - \frac{t}{M} \in [0,1]\qquad\text{and so}\qquad t = (1 - \tau)M.\end{equation}

Given a constant $C > 0$ , we define (recalling the notation in (12)) the ‘final’ value $\tau_0$ of $\tau$ as

(25) \begin{equation} \tau_0 \,{:\!=}\, \begin{cases} \dfrac{3 \cdot 3240^2(C + 4)\log N}{p \hat n}, \quad & p \le 0.49, \\ \\[-9pt] 700(3(C + 4))^{1/4} \left( q^{3/2}\mathbb{I} + \left(\frac{\log N}{\hat n}\right)^{1/4}\right), \quad & p > 0.49. \end{cases}\end{equation}

(Some of the constants appearing here and below are sharp or almost sharp, but others have room to spare as we round them up to the nearest ‘nice’ number.)

Consider the following assumptions on p (which we will later show to follow from the assumptions of Theorem 2):

(26) \begin{equation} \hat p \ge \frac{3 \cdot 3240^2(C+4)\log N}{\hat n},\end{equation}
(27) \begin{equation} \hat p \cdot \mathbb{I} \le \frac{49}{51}\cdot\frac1{340^2 (C + 4)^{1/6}}, \end{equation}

and

(28) \begin{equation} q \ge 680\left( \frac{3(C+4)\log N}{\hat{n}}\right)^{1/4}. \end{equation}

At the end of this subsection we show that these three assumptions imply

(29) \begin{equation} \tau_0 \le 1,\end{equation}

so that

\begin{equation*} t_0 \,{:\!=}\, \lfloor (1 - \tau_0)M \rfloor\end{equation*}

is a non-negative integer. Further, for $t = 0, \dots, M - 1$ , define

(30) \begin{equation} \gamma_t \,{:\!=}\, 1080 \hat p^2 \mathbb{I} + \begin{cases} 3240 \sqrt{\dfrac{2(C + 3)\log N}{\tau p \hat n}}, \quad & p \le 0.49, \\ \\[-9pt] 25,000 \sqrt{\dfrac{(C + 3)\log N}{ \tau^2q^2 \hat n}} , \quad & p > 0.49. \end{cases}\end{equation}

Taking (29) for granted, we now state our crucial lemma, which is proved in Section 4.

Lemma 6. For every constant $C > 0$ , if assumptions (26), (27) and (28) hold, then

(31) \begin{equation} \mathbb{P}\left({\mathcal{A}}\right) = 1- O(N^{-C}),\end{equation}

where the constant implicit in the O-term in (31) may also depend on C.

It remains to show (29). When $p \le 0.49$ , inequality (29) is equivalent to (26). For $p > 0.49$ , we have $q \le 51\hat p/49$ , which together with assumptions (27) and (28) implies that

\begin{equation*} \tau_0 \le 700 \left( 3(C+4) \right)^{1/4} \left( \frac{51}{49}\hat p \cdot \mathbb{I} \right)^{3/2} + 700 \cdot \frac{q}{680} \le \frac{700 \cdot 3^{1/4}}{340^3} + \frac{700 \cdot 0.51}{680} < 1.\end{equation*}
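The final numerical step can be checked mechanically. A minimal sketch (the constants are taken verbatim from the display above):

```python
# Numeric check of the last display: for p > 0.49 the two terms
# bounding tau_0 sum to well below 1.
term1 = 700 * 3 ** 0.25 / 340 ** 3   # 700 * 3^(1/4) / 340^3
term2 = 700 * 0.51 / 680             # 700 * 0.51 / 680
assert term1 + term2 < 1
print(round(term1 + term2, 3))       # ≈ 0.525
```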

3.2. Proof of Theorem 2

From Lemma 6 we are going to deduce Theorem 2 using a coupling argument similar to the one which was employed by Dudek, Frieze, Ruciński and Šileikis [2], but with an extra tweak (inspired by Kim and Vu [6]) of letting the probabilities $\gamma_t$ of Bernoulli random variables depend on t, which reduces the error $\gamma$ in (5).

It is easy to check that the assumptions of Lemma 6 follow from the assumptions of Theorem 2. Indeed, (28) coincides with assumption (4), while (26) and (27) follow from the assumption $\gamma \le 1$ (see (5)) for sufficiently large $C^*$ .

Our aim is to couple $(\mathbb{G}(t))_t$ and $(\mathbb{R}(t))_t$ on the event ${\mathcal{A}}$ defined in (23). For this we will define a graph process $\mathbb{R}'(t) \,{:\!=}\, \{\eta^{\prime}_1, \dots, \eta^{\prime}_t\}$ , $t = 0, \dots, t_0$ , so that for every $t \in [0, M -1]$ , every admissible graph G with t edges, and every $e \in K \setminus G$

(32) \begin{equation} \mathbb{P}\left({\eta^{\prime}_{t+1} = e}\,|\,{\mathbb{R}'(t) = G}\right) = p_{t+1}(e, G),\end{equation}

where $p_{t+1}(e, G)$ was defined in (22). Note that $\mathbb{R}'(0)$ is an empty graph. Since the distribution of the process $(\mathbb{R}(t))_t$ is determined by the conditional probabilities (22), in view of (32), the distribution of $\mathbb{R}'(t_0)$ is the same as that of $\mathbb{R}(t_0)$ and therefore we will identify $\mathbb{R}'(t_0)$ with $\mathbb{R}(t_0)$ . As the second step, we will show that a.a.s. ${\mathbb{G}(n_1,n_2,m)}$ can be sampled from $\mathbb{R}'(t_0) = \mathbb{R}(t_0)$ .

Proceeding with the definition, set $\mathbb{R}'(0)$ to be the empty graph and define graphs $\mathbb{R}'(t)$ , $t = 1, \dots, t_0$ , inductively, as follows. To this end, fix $t \in [0, t_0-1]$ and suppose that

\begin{equation*} \mathbb{R}'(t)=R_t\quad\mbox{and}\quad \mathbb{G}(t)=G_t\end{equation*}

have been already chosen. Our immediate goal is to select a random pair of edges $\varepsilon_{t+1}$ and $\eta_{t+1}^{\prime}$ , according to, resp., (21) and (32), in such a way that the event $\varepsilon_{t+1}\in \mathbb{R}'(t+1)$ is quite likely.

To this end, draw $\varepsilon_{t + 1}$ uniformly at random from $K \setminus G_t$ and, independently, generate a Bernoulli random variable $\xi_{t+1}$ with the probability of success $1 - \gamma_t$ (which is in [0,1] by (146)). If event ${\mathcal{A}}(t, \gamma_t)$ has occurred, that is, if

(33) \begin{equation}p_{t+1}(e, R_t) \ge \frac{1 - \gamma_t}{N - t} \quad \text{ for every } \quad e \in K\setminus R_t,\end{equation}

then draw a random edge $\zeta_{t+1} \in K \setminus R_t$ according to the distribution

\begin{equation*}\mathbb{P}\left({\zeta_{t+1} = e}\,|\,{\mathbb{R}'(t) = R_t}\right) \,{:\!=}\, \frac{p_{t+1}(e, R_t) - (1 - \gamma_t)/(N - t) }{\gamma_t} \ge 0,\end{equation*}

where the inequality holds by (33). Observe also that

\begin{equation*}\sum_{e\in K \setminus R_t}\mathbb{P}\left({\zeta_{t+1} = e}\,|\,{\mathbb{R}'(t) = R_t}\right) = 1,\end{equation*}

so $\zeta_{t+1}$ has a properly defined distribution. Finally, fix an arbitrary bijection

\begin{equation*} f_{R_t, G_t}\ \, :\, \ R_t \setminus G_t \to G_t \setminus R_t\end{equation*}

between the sets of edges and define

\begin{equation*}\eta^{\prime}_{t+1} = \begin{cases}\varepsilon_{t+1}, &\text{ if } \xi_{t + 1} = 1, \varepsilon_{t+1} \in K \setminus R_t,\\ \\[-9pt] f_{R_t, G_t}(\varepsilon_{t+1}), &\text{ if } \xi_{t + 1} = 1, \varepsilon_{t+1} \in R_t,\\ \\[-9pt] \zeta_{t+1}, &\text{ if } \xi_{t + 1} = 0. \\\end{cases}\end{equation*}

On the other hand, if event ${\mathcal{A}}(t, \gamma_t)$ has failed, then $\eta^{\prime}_{t+1}$ is sampled directly (without defining $\zeta_{t+1}$ ) according to the distribution (32). With this definition of $(\mathbb{R}'(t))_{t= 0}^{t_0}$ , it is easy to check that for $\eta^{\prime}_{t+1}$ defined above, (32) indeed holds, so from now on we drop the prime $^{\prime}$ and identify $\mathbb{R}'(t)$ with $\mathbb{R}(t)$ , which is a subset of $\mathbb{R}(n_1,n_2,p)$ .
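One step of this coupling can be sketched in code. The sketch below is illustrative only: the dictionary `p_next`, standing in for the conditional probabilities $p_{t+1}(\cdot, R_t)$ , and the bijection `f_bij` are hypothetical inputs. The point is the mixture decomposition of $p_{t+1}$ into a uniform part of weight $1-\gamma_t$ and a residual part of weight $\gamma_t$ :

```python
import random

def coupling_step(K, G_t, R_t, p_next, gamma, f_bij):
    """One step of the coupling: returns (eps, eta), where eps is uniform
    on K \\ G_t and eta is distributed according to p_next on K \\ R_t;
    on the event {xi = 1} the new R-edge is determined by eps."""
    not_G = sorted(K - G_t)
    not_R = sorted(K - R_t)
    eps = random.choice(not_G)            # uniform edge for the G-process
    xi = random.random() < 1 - gamma      # Bernoulli(1 - gamma) coin
    if xi:
        # accept eps, redirected by the bijection f if it already lies in R_t
        eta = eps if eps not in R_t else f_bij[eps]
    else:
        # residual distribution of zeta: (p_next(e) - (1 - gamma)/(N - t)) / gamma,
        # nonnegative whenever the event A(t, gamma_t) holds (inequality (33))
        base = (1 - gamma) / len(not_R)
        weights = [(p_next[e] - base) / gamma for e in not_R]
        eta = random.choices(not_R, weights=weights)[0]
    return eps, eta

# demo: with gamma = 0 the coin always succeeds and eta simply follows eps,
# which is the content of implication (34)
K22 = {(i, j) for i in (0, 1) for j in ('a', 'b')}
uniform = {e: 1 / 4 for e in K22}
eps, eta = coupling_step(K22, set(), set(), uniform, 0.0, {})
assert eps == eta
```

With `gamma = 1` the coin always fails and `eta` is drawn from the residual distribution alone, mirroring the case analysis in the text.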

Most importantly, we conclude that, for $t = 0, \dots, t_0 - 1$ ,

(34) \begin{equation}{\mathcal{A}}(t, \gamma_t) \cap \{\xi_{t+1} = 1\} \quad \implies \quad \varepsilon_{t+1} \in \mathbb{R}(t+1).\end{equation}

In view of this, define

\begin{equation*} S \,{:\!=}\, \left\{ t \in [t_0]\ \, :\ \, \xi_t = 1 \right\}\end{equation*}

and recall that $m=\lceil (1-\gamma)M\rceil$ . If $|S| \ge m$ , define ${\mathbb{G}(n_1,n_2,m)}$ as, say, the edges indexed by the smallest m elements of S (note that since the vectors $(\xi_i)$ and $(\varepsilon_i)$ are independent, after conditioning on S, these m edges are uniformly distributed), and if $|S| < m$ , then define ${\mathbb{G}(n_1,n_2,m)}$ as, say, the graph with edges $\{\varepsilon_1, \dots, \varepsilon_m\}$ . Recalling the definition (23) of the event ${\mathcal{A}}$ , by (34) we observe that ${\mathcal{A}}$ implies the inclusion $\left\{ \varepsilon_t\ \, :\, \ t \in S \right\} \subseteq \mathbb{R}(t_0) \subset \mathbb{R}(M) = \mathbb{R}(n_1,n_2,p)$ . On the other hand $|S| \ge m$ implies ${\mathbb{G}(n_1,n_2,m)} \subseteq \left\{ \varepsilon_t\ \, :\, \ t \in S \right\}$ , so

\begin{equation*}\mathbb{P}\left({\mathbb{G}(n_1,n_2,m)} \subseteq \mathbb{R}\right) \ge \mathbb{P}\left(\{|S| \ge m\}\cap {\mathcal{A}} \right).\end{equation*}

Since, by Lemma 6, event ${\mathcal{A}}$ holds with probability $1 - O(N^{-C})$ , to complete the proof of (6) it suffices to show that also

\begin{equation*} \mathbb{P}\left(|S| \ge m\right) = 1 - O\left(N^{-C}\right) .\end{equation*}

For this we need the following claim whose technical proof is deferred to Section 8.

Claim 7. We have

(35) \begin{equation} \operatorname{\mathbb{E}} |S| \ge t_0 - \theta M,\end{equation}

where

\begin{equation*} \theta \,{:\!=}\, 1080 \hat p^2 \mathbb{I} + \begin{cases} 6480\sqrt{\dfrac{2(C + 3)\log N}{p\hat n}}, &\quad p \le 0.49, \\ \\[-9pt] 6250\sqrt{\dfrac{(C + 3)\log N}{q^2\hat n } }\log \frac{\hat n}{\log N}, &\quad p > 0.49, \end{cases}\end{equation*}

and, with $\gamma$ as in (5),

(36) \begin{equation} \gamma \ge \tau_0 + \theta + 2/M + \sqrt{\frac{2C \log N}{M}}.\end{equation}

Recalling that $t_0 = \lfloor (1 - \tau_0)M \rfloor$ and $m = \lceil(1-\gamma)M\rceil$ , we have

(37) \begin{equation} \begin{split} t_0 - \theta M - m &\ge (1-\tau_0)M - 1 - \theta M - (1-\gamma)M - 1 \\[3pt] &= (\gamma - \tau_0 - \theta)M - 2 \overset{\mbox{(36)}}{\ge} \sqrt{2C M \log N}. \end{split} \end{equation}

Since $|S|$ is a sum of independent Bernoulli random variables,

\begin{align*} \mathbb{P}\left(|S| < m\right) &= \mathbb{P}\left(|S| < \operatorname{\mathbb{E}}|S|- (\operatorname{\mathbb{E}}|S| - m) \right) \overset{\mbox{(35)}}{\le} \mathbb{P}\left(|S| < \operatorname{\mathbb{E}}|S| -(t_0 - \theta M - m)\right) \\ \fbox{(18), $\operatorname{\mathbb{E}} |S| \le M$}\quad &\overset{\mbox{}}{\le} \exp \left( -\frac{(t_0 - \theta M - m)^2}{2M} \right) \overset{\mbox{(37)}}{\le} \exp \left( - C \log N \right) = N^{-C},\end{align*}

which as mentioned above, implies (6).

Finally, we prove (7) by coupling $\mathbb{G}(n_1, n_2, p')$ with $\mathbb{G}(n_1,n_2, m)$ so that the former is a subset of the latter with probability $1 - O(N^{-C})$ . Denote by $X \,{:\!=}\, e(\mathbb{G}(n_1,n_2,p'))$ the number of edges of $\mathbb{G}(n_1,n_2, p')$ . Whenever $X \le m$ , choose X edges of $\mathbb{G}(n_1,n_2,m)$ at random and declare them to be a copy of $\mathbb{G}(n_1,n_2,p')$ . Otherwise sample $\mathbb{G}(n_1,n_2,p')$ independently from $\mathbb{G}(n_1,n_2,m)$ . Hence it remains to show that $\mathbb{P}\left(X > m\right) = O(N^{-C})$ .

Recalling that $m = \lceil (1-\gamma)pN \rceil$ and $p' = (1 - 2\gamma)p$ , we have $m/N \ge (1 - \gamma)p = p' + \gamma p$ . Since $X \sim \operatorname{Bin}(N,p')$ , Chernoff’s bound (17) implies that

(38) \begin{equation} \mathbb{P}\left(X > m\right) \le \mathbb{P}\left(X > p'N + \gamma p N\right) \le \exp\left\{ -\frac{(\gamma pN)^2}{2(p'N + \gamma pN/3)} \right\} \le e^{-\gamma^2pN/2}. \end{equation}

Choosing $C^*$ sufficiently large, the definition (5) implies $\gamma \ge \sqrt{\frac{2C\log N}{p N}}$ with a lot of room (to see this easier in the case $p > 0.49$ , note that (4) implies $\log(\hat n/\log N) \ge 1$ , say). So (38) implies $\mathbb{P}\left(X > m\right) \le N^{-C}$ .
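Both applications of Chernoff's bound above use the Bernstein form $\mathbb{P}(X > \mathbb{E}X + x) \le \exp\!\big({-}x^2/(2(\mathbb{E}X + x/3))\big)$ . A small sketch comparing this form against the exact binomial upper tail, with illustrative parameters:

```python
from math import comb, exp

def binom_tail(n, p, k):
    """Exact P(X > k) for X ~ Bin(n, p)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k + 1, n + 1))

def bernstein(mu, x):
    """Chernoff bound in the Bernstein form used in (38)."""
    return exp(-x * x / (2 * (mu + x / 3)))

n, p = 500, 0.3
for x in (10, 30, 60):
    # the bound is valid since Var(X) = np(1-p) <= np = E(X)
    assert binom_tail(n, p, int(n * p) + x) <= bernstein(n * p, x)
```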

4. Proof of Lemma 6

4.1. Preparations

Recall our notation $K = {K_{n_1,n_2}}$ , and ${\mathcal{R}}_G$ defined in (15). Fix an integer $t \in [0, t_0)$ . For a graph G with t edges and $e, f \in K\setminus G$ , define

\begin{equation*} {\mathcal{R}}_{G,e,\neg f} \,{:\!=}\, \left\{ H \in {\mathcal{R}}_G\ \, :\, \ e \in H, f \notin H\right\}.\end{equation*}

Skipping a few technical steps, we will see that the essence of the proof of Lemma 6 lies in showing that the ratio ${|{\mathcal{R}}_{G,e,\neg f}|} / {|{\mathcal{R}}_{G,f,\neg e}|}$ is approximately 1 for any pair of edges $e, f \in K \setminus G$ , where G is a ‘typical’ instance of $\mathbb{R}(t)$ .

Recalling our generic definition of switchings from Subsection 2.2, let us treat the graphs in ${\mathcal{R}} \,{:\!=}\, {\mathcal{R}}_{G,e,\neg f}$ and ${\mathcal{R}}' \,{:\!=}\, {\mathcal{R}}_{G,f,\neg e}$ as blue-red edge colourings of the graph $K \setminus G$ . Recall that a path or a cycle in $K \setminus G$ is alternating if no two consecutive edges have the same colour.

When edges e, f are disjoint, we define the switching graph $B = B({\mathcal{R}},{\mathcal{R}}')$ by putting an edge between $H \in {\mathcal{R}}$ and $H' \in {\mathcal{R}}'$ whenever there is an alternating 6-cycle containing e and f in H such that switching the colours in the cycle gives $H'$ (see Figure 2). If e and f share a vertex, instead of 6-cycles we use alternating 4-cycles containing e and f.

Figure 2. Switching between H and $H'$ when e and f are disjoint: solid edges are in $H \setminus G$ (or $H' \setminus G$ ) and the dashed ones in $K \setminus H$ (or $K \setminus H'$ ).
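The basic fact behind every switching used here is that exchanging the colours along an alternating cycle, i.e., taking the symmetric difference of the edge set with the cycle, preserves all vertex degrees, and hence maps biregular graphs to biregular graphs. A toy sketch in $K_{2,2}$ (with $G = \emptyset$ for simplicity):

```python
def swap_cycle(H, cycle):
    """Symmetric difference of an edge set with an alternating cycle:
    edges of the cycle inside H leave H, the others enter it."""
    return H ^ cycle

def degrees(E):
    d = {}
    for (a, b) in E:
        d[a] = d.get(a, 0) + 1
        d[b] = d.get(b, 0) + 1
    return d

H = {(0, 'a'), (1, 'b')}                            # 1-regular in K_{2,2}
cycle = {(0, 'a'), (0, 'b'), (1, 'b'), (1, 'a')}    # alternating 4-cycle
H2 = swap_cycle(H, cycle)
assert H2 == {(0, 'b'), (1, 'a')} and degrees(H2) == degrees(H)
```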

It is easy to describe the vertex degrees in B. For distinct $u,v\in V_i$ , let $\theta_{G,H}(u,v)$ be the number of (alternating) paths uxv such that ux is blue and xv is red. Note that $\theta_{G,H}(u,v) = |\Gamma_{H\setminus G}(u)\cap\Gamma_{K\setminus H}(v)|$ . Then, setting $f = u_1u_2$ and $e = v_1v_2$ , where $u_i,v_i\in V_i$ , $i = 1,2$ ,

(39) \begin{equation} \deg_B(H) = \prod_{i: u_i \neq v_i}\theta_{G,H}(u_i,v_i), \quad H \in {\mathcal{R}}\end{equation}

and

(40) \begin{equation} \deg_B(H') = \prod_{i: u_i \neq v_i}\theta_{G,H'}(v_i,u_i), \quad H' \in {\mathcal{R}}'.\end{equation}

Note that equation (13) is equivalent to

(41) \begin{equation} \frac{|{\mathcal{R}}|}{|{\mathcal{R}}'|} = \frac{\frac1{|{\mathcal{R}}'|}\sum_{H'\in{\mathcal{R}}'}\deg_B(H')}{\frac1{|{\mathcal{R}}|}\sum_{H\in{\mathcal{R}}}\deg_B(H)}.\end{equation}

In view of (39) and (40), the denominator of the RHS above is the (conditional) expectation of the random variable $\prod_{i : u_i \neq v_i} \theta_{G,\mathbb{R}}(u_i,v_i)$ , given that $\mathbb{R}$ contains G and e, but not f (and similarly for the numerator).
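The quantity $\theta_{G,H}(u,v)$ can be computed directly from its set description $|\Gamma_{H\setminus G}(u)\cap\Gamma_{K\setminus H}(v)|$ . A minimal sketch, with graphs represented as edge sets of pairs:

```python
def nbrs(E, v):
    """Neighbours of v in the edge set E (edges are unordered pairs)."""
    return {b for (a, b) in E if a == v} | {a for (a, b) in E if b == v}

def theta(G, H, K, u, v):
    """theta_{G,H}(u, v): common endpoints of a 'blue' edge at u
    (in H but not in G) and a 'red' edge at v (in K but not in H)."""
    return len(nbrs(H - G, u) & nbrs(K - H, v))

K = {(0, 'a'), (0, 'b'), (1, 'a'), (1, 'b')}   # K_{2,2}
H = {(0, 'a'), (1, 'b')}                       # a biregular subgraph
assert theta(set(), H, K, 0, 1) == 1           # the single path 0-a-1
```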

To get an idea of how large that expectation could be, let us focus on one factor, say, $\theta_{G,\mathbb{R}}(u_1,v_1)$ , assuming $u_1 \neq v_1$ . Clearly, the red degree of $v_1$ equals $|\Gamma_{K\setminus\mathbb{R}}(v_1)| = n_2 - d_1 = qn_2$ . Since $|\Gamma_{\mathbb{R}}(u_1)| = pn_2$ , viewing $\mathbb{R} \setminus G$ as a $\tau$-dense subgraph of $\mathbb{R}$ (see (24)) we expect that the blue neighbourhood $|\Gamma_{\mathbb{R}\setminus G}(u_1)|$ is approximately $\tau p n_2$ . It is reasonable to expect that for typical G and $\mathbb{R}$ the red and blue neighbourhoods intersect proportionally, that is, on a set of size about $q \cdot \tau p \cdot n_2 = \tau qd_1$ .

Inspired by this heuristic, we say that, for $\delta > 0$ , an admissible graph G with t edges is $\delta$ -typical if

(42) \begin{equation} \max_{u_1u_2, v_1v_2 \in K \setminus G} \mathbb{P}\left({\max_{i \in [2] : u_i \neq v_i}\left|\frac{\theta_{G,\mathbb{R}}(u_i,v_i)}{\tau qd_i } - 1\right| > \delta }\,|\,{G \subseteq \mathbb{R}, v_1v_2 \in \mathbb{R}, u_1u_2 \notin \mathbb{R}}\right) \le \tau^2 \delta,\end{equation}

where the outer maximum is taken over distinct pairs of edges. (The bound $\tau^2 \delta$ carries little intuition, but it is simple and sufficient for our purposes.)

Lemma 6 is a relatively easy consequence of the upcoming Lemma 8, which states that for a suitably chosen function $\delta(t)$ it is very likely that the initial segments of $\mathbb{R}$ are $\delta(t)$ -typical.

For each $t = 0, \dots, t_0 - 1$ , define

(43) \begin{equation} \delta(t) \,{:\!=}\, 120\hat p^2 \mathbb{I} + 360 \sqrt{\frac{(C + 3)\lambda(t)}{6\tau pq\hat n }} ,\end{equation}

where

(44) \begin{equation} \lambda(t) \,{:\!=}\,\begin{cases} 6 \log N, \quad &p \le 0.49, \\ \\[-9pt] 6\log N + \dfrac{64\log N}{\tau p q}, \quad &p > 0.49.\end{cases}\end{equation}

For future reference, note that

(45) \begin{equation}\delta(t) \le \gamma_t/9.\end{equation}

Indeed, recalling (30), for $p\le0.49$ ,

\begin{equation*} 9\delta(t) \le 1080\hat p^2 \mathbb{I} + 3240 \sqrt{\frac{2(C + 3)\log N}{\tau p\hat n }} =\gamma_t,\end{equation*}

while, for $p > 0.49$ , noting that $\lambda(t)\le 70\log N/(\tau pq)$ , we have

\begin{equation*}9\delta(t) \le 1080\hat p^2 \mathbb{I} + 3240 \sqrt{\frac{70(C + 3)\log N}{0.49^2 \cdot 6\tau^2q^2\hat n}} \le \gamma_t.\end{equation*}

Lemma 8. For every constant $C > 0$ , under the conditions of Lemma 6,

\begin{align*} \mathbb{P}\left(\mathbb{R}(t) \text{ is } \delta(t)\text{-typical for all } t < t_0\right) = 1 - O(N^{-C}).\\[-25pt]\end{align*}

Note that the LHS of (42) is a function of graph G, say f(G). Hence, Lemma 8 asserts that, with high probability, $f(\mathbb{R}(t))$ is small for all $t < t_0$ . The main idea of the proof of Lemma 8, which we defer to Subsection 4.3, is to bound $f(\mathbb{R}(t))$ by a ratio of two simpler functions (again conditional probabilities) of $\mathbb{R}(t)$ . Lemmas 9 and 10 below bound each of these conditional probabilities separately.

For any $t = 0, \dots, M$ and distinct $u_i, v_i \in V_i$ , $i = 1,2$ , set

(46) \begin{equation}\theta_t(u_i,v_i) \,{:\!=}\, \theta_{\mathbb{R}(t),\mathbb{R}}(u_i,v_i),\end{equation}

where $\theta_{G,H}(u_i,v_i) = |\Gamma_{H\setminus G}(u_i)\cap\Gamma_{K\setminus H}(v_i)|$ was introduced earlier in this subsection.

Lemma 9. For every constant $C>0$ , under the conditions of Lemma 6, for $i \in [2]$ we have that with probability $1 - O(N^{-C})$ ,

(47) \begin{equation} \mathbb{P}\left({ \max_{u,v \in V_i, u \neq v} \left|\frac{\theta_t(u,v)}{\tau qd_{i} } - 1\right| > \delta(t)}\,|\,{\mathbb{R}(t)}\right) \le e^{ - 2\lambda(t) }, \qquad \text{for all } t < t_0. \end{equation}

Lemma 10. For every constant $C > 0$ , under the conditions of Lemma 6, with probability $1 - O(N^{-C})$

(48) \begin{equation} \min_{e, f \in K \setminus \mathbb{R}(t), e \neq f}\mathbb{P}\left({e \in \mathbb{R}, f \notin \mathbb{R}}\,|\,{\mathbb{R}(t)}\right) \ge e^{ - \lambda(t) }, \qquad \text{for all } t < t_0.\end{equation}

4.2. Proof of Lemma 6

In view of Lemma 8, it suffices to show that if $t < t_0$ , and a t-edge graph G is $\delta(t)$ -typical, then

\begin{equation*} \min_{e \in K \setminus G}p_{t+1}(e,G) \ge \frac{1 - \gamma_t}{N - t}.\end{equation*}

Fix $f \in K \setminus G$ which maximises $p_{t+1}(f,G)$ . Since the average of $p_{t+1}(e,G)$ over $e \in K \setminus G$ is exactly $\frac{1}{N - t}$ , we have $p_{t+1}(f,G) \ge \frac{1}{N - t}$ and therefore it is enough to prove that, for every $e \in K \setminus (G \cup \{f\})$ ,

\begin{equation*}\frac{p_{t+1}(e,G)}{p_{t+1}(f,G)} \ge 1 - \gamma_t.\end{equation*}

Recall the definitions of ${\mathcal{R}}_G$ in (15), ${\mathcal{R}}_{G,e}, {\mathcal{R}}_{G, \neg e}$ in (16), and ${\mathcal{R}} = {\mathcal{R}}_{G,e,\neg f}$ and ${\mathcal{R}}' = {\mathcal{R}}_{G, f, \neg e}$ from Subsection 4.1 and observe that ${\mathcal{R}} = {\mathcal{R}}_{G\cup \{e\},\neg f}$ too. In view of the remark immediately following (22),

\begin{equation*} p_{t+1}(e,G) = \mathbb{P}\left({e \in \mathbb{R}}\,|\,{\mathbb{R}(t) = G}\right) \cdot \mathbb{P}\left({\eta_{t+1} = e}\,|\,{\mathbb{R}(t) = G, e \in \mathbb{R}}\right) = \frac{|{\mathcal{R}}_{G,e}|}{|{\mathcal{R}}_G|} \cdot \frac{1}{M-t}.\end{equation*}

Therefore,

\begin{equation*} 1 \ge \frac{p_{t+1}(e,G)}{p_{t+1}(f,G)} = \frac{|{\mathcal{R}}_{G,e}|}{|{\mathcal{R}}_{G,f}|} = \frac{|{\mathcal{R}}_{G \cup \{e,f\}}| + |{\mathcal{R}}_{G,e,\neg f}|}{|{\mathcal{R}}_{G \cup \{e, f\}}| + |{\mathcal{R}}_{G,f,\neg e}| } \ge \frac{|{\mathcal{R}}|}{|{\mathcal{R}}'|}.\end{equation*}

Write $e = v_1v_2$ and $f = u_1u_2$ and for simplicity assume that both $u_1 \neq v_1$ and $u_2 \neq v_2$ (otherwise the proof goes mutatis mutandis and is, in fact, a bit simpler). By (39), (40), and (41),

\begin{equation*} \frac{|{\mathcal{R}}|}{|{\mathcal{R}}'|} = \frac{\operatorname{\mathbb{E}}_{\text{top}}}{\operatorname{\mathbb{E}}_{\text{bottom}}},\end{equation*}

where

\begin{equation*} \operatorname{\mathbb{E}}_{\text{top}} \,{:\!=}\, \frac1{|{\mathcal{R}}'|}\sum_{H'\in{\mathcal{R}}'}\deg_B(H')= \operatorname{\mathbb{E}}\left[{\theta_{G,\mathbb{R}}(v_1,u_1)\theta_{G,\mathbb{R}}(v_2,u_2)}\,|\,{G \subseteq \mathbb{R}, f \in \mathbb{R}, e \notin \mathbb{R}}\right]\end{equation*}

and

\begin{equation*} \operatorname{\mathbb{E}}_{\text{bottom}} \,{:\!=}\, \frac1{|{\mathcal{R}}|}\sum_{H\in{\mathcal{R}}}\deg_B(H)= \operatorname{\mathbb{E}}\left[{\theta_{G,\mathbb{R}}(u_1,v_1)\theta_{G,\mathbb{R}}(u_2,v_2)}\,|\,{G \subseteq \mathbb{R}, e \in \mathbb{R}, f \notin \mathbb{R}}\right].\end{equation*}

Since G is $\delta$ -typical with $\delta = \delta(t)$ , denoting the LHS of (42) as $p_*$ , we have $p_* \le \tau^2 \delta$ , so

(49) \begin{align} \notag \operatorname{\mathbb{E}}_{\text{top}} &\ge (1 - \delta) \tau qd_1 \cdot (1 - \delta)\tau qd_2 \cdot (1 - p_*) + 0 \cdot p_* \\[3pt] &\ge (1 - \delta)^2 \tau^2p^2q^2n_1n_2(1 - \tau^2\delta ) \ge (1 - \delta)^3 \tau^2p^2q^2N.\end{align}

Moreover, since, deterministically,

\begin{equation*} \theta_{G,\mathbb{R}}(u_1, v_1)\theta_{G, \mathbb{R}}(u_2, v_2) \le \min\{p,q\} n_2\cdot \min\{p,q\}n_1 \le 4p^2q^2N,\end{equation*}

using (42) again, we infer that

(50) \begin{align} \operatorname{\mathbb{E}}_{\text{bottom}} \notag &\le (1 + \delta) \tau qd_1 \cdot (1 + \delta)\tau qd_2 \cdot (1 - p_*) + 4p^2q^2N \cdot p_*\\[3pt] &\le (1 + \delta)^2\tau^2p^2q^2N + 4p^2q^2N \cdot \tau^2\delta \le (1 + 6\delta + \delta^2)\tau^2p^2q^2N .\end{align}

Finally, combining the bounds on $\operatorname{\mathbb{E}}_{\text{top}}$ and $\operatorname{\mathbb{E}}_{\text{bottom}}$ and using (45), we conclude that

(51) \begin{equation} \frac{p_{t+1}(e,G)}{p_{t+1}(f,G)} \ge \frac{(1 - \delta)^3}{(1 + 6 \delta + \delta^2)} \ge 1 - 9\delta \ge 1 - \gamma_t,\end{equation}

and the proof of Lemma 6 is complete.
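The elementary inequality used in the last step of (51) holds for all $\delta \ge 0$ , since $(1-\delta)^3 - (1-9\delta)(1+6\delta+\delta^2) = 56\delta^2 + 8\delta^3 \ge 0$ . A quick numeric sketch of this identity:

```python
# check (1 - d)^3 >= (1 - 9d)(1 + 6d + d^2) on a grid; the difference
# equals 56 d^2 + 8 d^3, which is nonnegative for d >= 0
for i in range(0, 1001):
    d = i / 1000
    lhs = (1 - d) ** 3
    rhs = (1 - 9 * d) * (1 + 6 * d + d ** 2)
    assert lhs - rhs >= 0
    assert abs((lhs - rhs) - (56 * d ** 2 + 8 * d ** 3)) < 1e-9
```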

4.3. Proof of Lemma 8

By Lemmas 9 and 10 the events (47) (for each $i \in [2]$ ) and (48) hold simultaneously with probability $1 - O(N^{-C})$ . Hence it suffices to fix an arbitrary integer $t < t_0$ and a realisation of $\mathbb{R}(t)$ , say $\mathbb{R}(t) = G$ satisfying inequalities (47) (for each $i \in [2]$ ) and (48) and to prove that G is $\delta(t)$ -typical. Fix any such G and define events

\begin{equation*} {\mathcal{E}} \,{:\!=}\, \bigcup_{i \in [2]} \left\{ \max_{u,v \in V_i, u \neq v}\left|\frac{\theta_{G,\mathbb{R}}(u,v)}{\tau qd_i } - 1\right| > \delta(t) \right\} \quad\text{and}\quad {\mathcal{F}}_{e,f} \,{:\!=}\, \{ e\in \mathbb{R}, f \notin \mathbb{R} \}.\end{equation*}

Since, conditioned on the event $\mathbb{R}(t) = G$ , we have $\theta_{G,\mathbb{R}}(u_i,v_i) = \theta_t(u_i,v_i)$ , the choice of G guarantees that

(52) \begin{equation} \mathbb{P}\left({{\mathcal{E}}}\,|\,{\mathbb{R}(t) = G}\right) \le 2e^{-2\lambda(t)}\end{equation}

and

(53) \begin{equation} \min_{e,f\in K \setminus G}\mathbb{P}\left({{\mathcal{F}}_{e,f}}\,|\,{\mathbb{R}(t) = G}\right) \ge e^{-\lambda(t)}.\end{equation}

Note that the probability on the LHS of (42) does not change if we replace $G \subseteq \mathbb{R}$ by $\mathbb{R}(t) = G$ , since conditioning on either event makes $\mathbb{R}$ uniformly distributed over ${\mathcal{R}}_G$ (i.e., biregular graphs containing G) and the random variables $\theta_{G,\mathbb{R}}(u_i,v_i)$ do not depend on the random ordering of the edges of $\mathbb{R}$ . Hence it remains to prove that (52) and (53) imply, for any distinct edges $e = v_1v_2,$ $f = u_1u_2 \in K \setminus G$ , that

(54) \begin{equation} \mathbb{P}\left({\max_{i \in [2] : u_i \neq v_i}\left|\frac{\theta_{G,\mathbb{R}}(u_i,v_i)}{\tau qd_i } - 1\right| > \delta(t) }\,|\,{\mathbb{R}(t) = G, e \in \mathbb{R}, f \notin \mathbb{R}}\right) \le \tau^2 \delta(t).\end{equation}

The probability on the LHS of (54) is at most $\mathbb{P}\left({{\mathcal{E}}}\,|\,{\mathbb{R}(t) = G, {\mathcal{F}}_{e,f}}\right)$ . Inequalities (52) and (53) imply that

\begin{equation*} \mathbb{P}\left({{\mathcal{E}}}\,|\,{\mathbb{R}(t) = G, {\mathcal{F}}_{e,f}}\right) = \frac{\mathbb{P}\left({{\mathcal{E}}\cap {\mathcal{F}}_{e,f}}\,|\,{\mathbb{R}(t) = G}\right)}{\mathbb{P}\left({{\mathcal{F}}_{e,f}}\,|\,{\mathbb{R}(t) = G}\right)} \le \frac{\mathbb{P}\left({{\mathcal{E}}}\,|\,{\mathbb{R}(t) = G}\right)}{\mathbb{P}\left({{\mathcal{F}}_{e,f}}\,|\,{\mathbb{R}(t) = G}\right)} \le 2 e^{ - \lambda(t)}. \end{equation*}

Finally, even a quick glance at the definitions of $\delta(t)$ and $\tau_0$ (see (43) and (25)) reveals that $\min \{ \tau_0,\delta(t) \} \ge 1/\hat n$ . Thus, we get, with a huge margin,

\begin{align*} 2e^{-\lambda(t)}\le 2/N^6 \le 1/N^2 \le 1/\hat n^4 \le \tau_0^2\delta(t) \le \tau^2\delta(t).\\[-30pt]\end{align*}

5. Degrees and co-degrees

In this section we prove facts about the neighbourhood structure of $\mathbb{R}(t)$ , one of which will be enough to deduce Lemma 9, while the other two will be used in the proof of Lemma 10 in Section 6. We start, in Subsection 5.1, with a tail bound for the co-degrees in $\mathbb{R} = \mathbb{R}(n_1,n_2,p)$ (Lemma 11). In Subsection 5.2 we analyse the process $(\mathbb{R}(t))_t$ . First, we show that the vertex degrees in the process grow proportionally until almost the very end (Lemma 13). Then, conditioning on $\mathbb{R}$ having concentrated co-degrees, we prove that the co-degrees in $\mathbb{R}(t)$ do not exceed their expectation too much (Lemma 14). Finally, in Subsection 5.3, we present a proof of Lemma 9 based on Lemma 11.

5.1. Co-degrees in the random biregular graph

Recall that $\Gamma_F(v)$ is the set of neighbours of a vertex v in a graph F. We define the co-degree of two distinct vertices u, v as

(55) \begin{equation} \operatorname{cod}_F(u,v) \,{:\!=}\, |\Gamma_F(u) \cap \Gamma_F(v)|.\end{equation}

A few times we will use the following simple observation: for $F \subseteq K$ and distinct $u,v\in V_1$ ,

(56) \begin{equation} \begin{split} \operatorname{cod}_{F}(u, v) &= |\Gamma_{F}(u)| + |\Gamma_{F}(v)| - |\Gamma_{F}(u) \cup \Gamma_{F}(v)| \\[3pt] &= \deg_F(u) + \deg_F(v) - \left( n_2 - \operatorname{cod}_{K \setminus F}(u,v) \right), \end{split}\end{equation}

where, recall, $K = {K_{n_1,n_2}}$ is the complete bipartite graph.
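Identity (56) is plain inclusion–exclusion combined with complementation in the second coordinate. A small randomised sketch (illustrative parameters):

```python
import random
from itertools import combinations

random.seed(1)
n1, n2 = 5, 7
V1, V2 = range(n1), range(n1, n1 + n2)
K = {(a, b) for a in V1 for b in V2}
F = {e for e in K if random.random() < 0.4}    # an arbitrary subgraph of K

def nbrs(E, v):
    return {b for (a, b) in E if a == v}

for u, v in combinations(V1, 2):
    cod_F = len(nbrs(F, u) & nbrs(F, v))
    cod_complement = len(nbrs(K - F, u) & nbrs(K - F, v))
    # identity (56): cod_F(u,v) = deg(u) + deg(v) - (n2 - cod_{K\F}(u,v))
    assert cod_F == len(nbrs(F, u)) + len(nbrs(F, v)) - (n2 - cod_complement)
```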

Due to symmetry, we prove the following concentration result for pairs of vertices on one side of the bipartition only.

Lemma 11. Suppose that $\hat p\hat n \to \infty$ and let $\lambda = \lambda(n_1,n_2)$ be such that $\lambda \to \infty$ . Then, for any distinct $u_1, v_1 \in V_1$ ,

\begin{equation*} \mathbb{P}\left( |\operatorname{cod}_{\mathbb{R}}(u_1,v_1) - p^2n_{2}| > 20\left( \hat p^3n_{2}\mathbb{I} + \frac{\hat pn_{2}}{\hat n} + \sqrt{\lambda \hat p^2 n_2} \right) + \lambda \right) = O\left( \sqrt N e^{ - \lambda } \right),\end{equation*}

where $\mathbb{I}$ is defined in (3).

Proof. We first claim that it is sufficient to assume $p \le 1/2$ and prove that for any distinct $u_1, v_1 \in V_1$

(57) \begin{equation} \mathbb{P}\left( |\operatorname{cod}_{\mathbb{R}}(u_1,v_1) - p^2n_2| > 20\left(p^3n_2\mathbb{I} + \frac{pn_2}{\hat n} + \sqrt{\lambda p^2n_2 } \right) + \lambda \right) = O\left( \sqrt N e^{ - \lambda } \right).\end{equation}

To see this, note that by (56), recalling $q = 1 - p$ ,

\begin{equation*} \operatorname{cod}_{\mathbb{R}}(u_1, v_1) - p^2n_2= 2pn_2 - (n_2 - \operatorname{cod}_{K \setminus \mathbb{R}}(u_1,v_1))- p^2n_2=\operatorname{cod}_{K \setminus \mathbb{R}}(u_1,v_1) - q^2n_2. \end{equation*}

Further, $K \setminus \mathbb{R} = K \setminus \mathbb{R}(n_1,n_2,p)$ has the same distribution as $\mathbb{R}(n_1,n_2,q)$ . So, if $p > 1/2$ , the lemma follows by applying (57) with q instead of p.

The rough idea of the proof comes from our anticipation that $\operatorname{cod}_\mathbb{R}(u_1,v_1)$ behaves similarly to $\operatorname{cod}_{{\mathbb{G}(n_1,n_2,p)}}(u_1,v_1)$ , which is distributed as $\operatorname{Bin}(n_2, p^2)$ or, approximately, as $\operatorname{Po}(p^2n_2)$ . We will show that each tail of $\operatorname{cod}_\mathbb{R}(u_1,v_1)$ is comparable to the tail of $\operatorname{Po}(\mu)$ with expectation $\mu$ fairly close to $p^2n_2$ and then apply the Chernoff bounds (this is packaged in Claim 12 below). Further we consider the cases $\mathbb{I} = 1$ and $\mathbb{I} = 0$ and apply Claim 12 analogously to the proof of Theorem 2.1 in [7]: in the case $\mathbb{I} = 1$ we use switchings and in the case $\mathbb{I} = 0$ we use asymptotic enumeration (Theorem 5).

Fix distinct $u_1, v_1 \in V_1$ and set $X(u_1,v_1) \,{:\!=}\, \operatorname{cod}_{\mathbb{R}}(u_1,v_1)$ . Further, recall that ${{\mathcal{R}}(n_1,n_2,p)}$ is the class of subgraphs of ${K_{n_1,n_2}}$ such that every $v \in V_i$ has degree $d_i$ , for $i = 1, 2$ , and let

\begin{equation*} {\mathcal{R}}_k(u_1,v_1) = \{H \in {{\mathcal{R}}(n_1,n_2,p)}\ \, :\ \, \operatorname{cod}_H(u_1,v_1) = k \}, \qquad k = 0, \dots, d_1. \end{equation*}

Claim 12. Fix two vertices $u_1,v_1\in V_1$ . Suppose that positive numbers $r_k, k = 0, \dots, d_1,$ are such that

\begin{equation*} |{\mathcal{R}}_k(u_1,v_1)| \sim r_k, \text{ uniformly in } k \text{ as } (n_1, n_2) \to \infty, \end{equation*}

and there exist numbers $0 \le \mu_- \le \mu_+ \le N$ such that

\begin{equation*} \frac{r_k}{r_{k-1}} \le \frac{\mu_+}{k}, \text{ for } k \in [\mu_+, d_1] \quad \text{ and } \quad \frac{r_{k-1}}{r_k} \le \frac{k}{\mu_-}, \text{ for } k \in [1, \mu_-]. \end{equation*}

If $x \ge \sqrt{2\mu_+ \lambda} + \lambda$ , then

(58) \begin{equation} \mathbb{P}\left(X(u_1,v_1) \ge \mu_+ + x\right) + \mathbb{P}\left(X(u_1,v_1) \le \mu_- - x\right) = O(\sqrt{N} e^{-\lambda}). \end{equation}

Proof. Note that if $Z_+ \sim \operatorname{Po}(\mu_+)$ , then $\mu_+/k = \mathbb{P}\left(Z_+ = k\right)/\mathbb{P}\left(Z_+ = k - 1\right)$ . Setting $m = \lfloor \mu_+ \rfloor$ , for any integer $i\in\{m,\dots,d_1\}$ , and abbreviating $X\,{:\!=}\,X(u_1,v_1)$ and ${\mathcal{R}}_j\,{:\!=}\,{\mathcal{R}}_j(u_1,v_1)$ , we have

\begin{align*} \mathbb{P}\left(X \ge i\right) &= \frac{\sum_{j \ge i}|{\mathcal{R}}_j| }{|{{\mathcal{R}}(n_1,n_2,p)}|}\le \sum_{j \ge i} \frac{|{\mathcal{R}}_j|}{|{\mathcal{R}}_{m}|} \sim \sum_{j \ge i} \frac{r_j}{r_m} = \sum_{j \ge i} \prod_{k = m + 1}^{j} \frac{r_k}{r_{k-1}} \le \sum_{j \ge i} \prod_{k = m + 1}^{j} \frac{\mu_+}{k}\\ &= \sum_{j \ge i} \prod_{k = m + 1}^{j}\frac{\mathbb{P}\left(Z_+ = k\right)}{\mathbb{P}\left(Z_+ = k-1\right)} = \sum_{j \ge i} \frac{\mathbb{P}\left(Z_+ = j\right)}{\mathbb{P}\left(Z_+ = m\right)} = \frac{\mathbb{P}\left(Z_+ \ge i\right)}{\mathbb{P}\left(Z_+ = m\right)}. \end{align*}

Since $\mu_+ \le N$ , by (19) we have $\mathbb{P}\left(Z_+ = m\right) = \Omega(1/\sqrt{\mu_+}) = \Omega(1/\sqrt{N})$ , and conclude that

(59) \begin{equation} \mathbb{P}\left(X \ge i\right) = O\left( \sqrt{N} \cdot \mathbb{P}\left(Z_+ \ge i\right) \right), \qquad i \ge \mu_+. \end{equation}

Similarly, if $Z_- \sim \operatorname{Po}(\mu_-)$ , then for $i\le m \,{:\!=}\, \lfloor \mu_- \rfloor$ , using $k/\mu_- = \mathbb{P}\left(Z_- = k-1\right)/\mathbb{P}\left(Z_- = k\right)$ ,

\begin{equation*} \mathbb{P}\left(X \le i\right)\le \sum_{j \le i} \frac{|{\mathcal{R}}_j|}{|{\mathcal{R}}_{m}|} \sim \sum_{j \le i}\prod_{k = j + 1}^{ m} \frac{r_{k-1}}{r_{k}} \le \sum_{j \le i}\prod_{k = j + 1}^{ m} \frac{\mathbb{P}\left(Z_- = k-1\right)}{\mathbb{P}\left(Z_- = k\right)} = \frac{\mathbb{P}\left(Z_- \le i\right)}{\mathbb{P}\left(Z_- = m\right)}, \end{equation*}

and therefore, again using (19),

(60) \begin{equation} \mathbb{P}\left(X \le i\right) = O\left( \sqrt{N} \cdot \mathbb{P}\left(Z_- \le i\right) \right), \qquad i \le \mu_-. \end{equation}

Since the RHS of (58) does not depend on x, we can assume the equality: $x = \sqrt{2\mu_+\lambda} + \lambda$ . Inequalities (17) and (18) imply

\begin{equation*} \begin{split} \mathbb{P}\left(Z_+ \ge \mu_+ + x\right) &+ \mathbb{P}\left(Z_- \le \mu_- - x\right) \le \exp \left( - \frac{x^2}{2(\mu_+ + x/3)} \right) + \exp \left( -\frac{x^2}{2\mu_-} \right) \\[3pt] \fbox{$\mu_- \le \mu_+, x > 0$}\quad &\le 2 \exp \left( -\frac{x^2}{2(\mu_+ + x/3)} \right) \\[3pt] \fbox{$x = \sqrt{2 \mu_+ \lambda} + \lambda$}\quad &= 2 \exp \left( -\frac{\lambda(2\mu_+ + 2\sqrt{2\mu_+ \lambda} + \lambda)}{2\mu_+ + 2\sqrt{2\mu_+ \lambda}/3 + 2\lambda/3} \right) \le 2e^{-\lambda}. \end{split}\end{equation*}

Combining this with (59) and (60) yields (58).
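The proof above leans on the elementary identity $\mathbb{P}(Z = k)/\mathbb{P}(Z = k-1) = \mu/k$ for $Z \sim \operatorname{Po}(\mu)$ , which lets the ratios $r_k/r_{k-1}$ telescope into Poisson probabilities. A one-line numeric sketch:

```python
from math import exp, factorial

def pois_pmf(mu, k):
    """P(Z = k) for Z ~ Po(mu)."""
    return exp(-mu) * mu ** k / factorial(k)

mu = 7.3
for k in range(1, 30):
    # consecutive Poisson probabilities differ by exactly the factor mu/k
    assert abs(pois_pmf(mu, k) / pois_pmf(mu, k - 1) - mu / k) < 1e-9
```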

It remains to prove (57). We consider separately the cases $\mathbb{I} = 1$ and $\mathbb{I} = 0$ .

Case $\mathbb{I} = 1$ . If $p \ge 1/5$ , then deterministically $\operatorname{cod}_\mathbb{R}(u_1,v_1) \le pn_2\le 5p^2n_2 \le p^2 n_2 + 20 p^3 n_2$ and $\operatorname{cod}_\mathbb{R}(u_1, v_1) - p^2 n_2 \ge - p^2 n_2 \ge -5p^3 n_2$ , and hence (57) holds trivially. So we further assume $p < 1/5$ .

Setting $r_k = |{\mathcal{R}}_k|$ , we are going to prove bounds on $r_k/r_{k-1}$ , $k\ge 1$ , using a switching between ${\mathcal{R}} = {\mathcal{R}}_k$ and ${\mathcal{R}}' = {\mathcal{R}}_{k-1}$ . To this end, recall our terminology from Subsection 2.2 (with $G = \emptyset$ ): any graph $H \subseteq K$ is interpreted as a colouring of the edges of K, with the edges in H blue and the rest red. We define a forward switching as follows: pick a common blue neighbour $w_2 \in \Gamma_H(u_1) \cap \Gamma_H(v_1)$ and find two alternating 4-cycles $u_1w_2x_1x_2$ and $v_1w_2y_1y_2$ so that $x_1 \neq y_1$ , $x_2 \neq y_2$ . Moreover, we restrict the choice of the cycles to those for which $v_1x_2$ and $u_1y_2$ are red; this is to make sure that swapping the colours on each of the two cycles (but keeping the colour of $v_1x_2,u_1y_2$ red) indeed decreases the co-degree of $u_1$ and $v_1$ by one, thus mapping $H \in {\mathcal{R}}_k$ to $H' \in {\mathcal{R}}_{k-1}$ (see Figure 3).

Figure 3. Switching between $H \in {\mathcal{R}}_k(u_1,v_1)$ and $H' \in {\mathcal{R}}_{k-1}(u_1,v_1)$ : solid edges are in H and $H'$ and the dashed ones in $K \setminus H$ and $K \setminus H'$ , respectively.

Let $B_k \,{:\!=}\, B({\mathcal{R}}_k, {\mathcal{R}}_{k-1})$ be the auxiliary graph corresponding to the described switching. Since the number of choices of $w_2$ is k for any $H \in {\mathcal{R}}_k$ , we will show upper and lower bounds on $\deg_{B_k}(H)/k$ by fixing $w_2$ and proving upper and lower bounds on the number of possible choices of $(x_1, x_2, y_1, y_2)$ .

For the upper bound, since there are $n_2 - 2d_1 + k$ common red neighbours of $u_1$ and $v_1$ (one can use (56) for this), the number of choices of distinct $x_2, y_2$ is exactly $(n_2 - 2d_1 + k)_2$ , while the number of choices of $x_1$ and $y_1$ , as blue neighbours of, resp., $x_2$ and $y_2$ , is at most $d_2^2$ . Ignoring the requirements that $x_1\neq y_1$ and that both $x_1w_2$ and $y_1w_2$ be red, we have shown

(61) \begin{equation} \deg_{B_k}(H)/k \le (n_2 - 2d_1 + k)_2d_2^2.\end{equation}

For the lower bound, we subtract from the upper bound (61) the number of choices of $(x_1, x_2, y_1, y_2)$ for which either $x_1 = y_1$ or at least one of $x_1w_2$ and $y_1w_2$ is blue (the assumption $x_2 \neq y_2$ was already taken into account in the upper bound). The number of choices with $x_1 = y_1$ is at most $n_1d_1^2$ , and the number of those with, say, $x_1w_2$ blue is at most $d_2d_1 \cdot (n_2 - 2d_1 + k)d_2 $ . Indeed, there are $\deg_H(w_2) \le d_2$ candidates for $x_1$ and, then, at most $d_1$ candidates for $x_2$ , $\operatorname{cod}_{K \setminus H}(u_1,v_1) = n_2 - 2d_1 + k$ candidates for $y_2$ , and then $d_2$ choices of $y_1$ . Therefore

\begin{align*} \deg_{B_k}(H)/k &\ge (n_2 - 2d_1 + k)_2d_2^2 - n_1d_1^2 - 2(n_2 - 2d_1 + k)d_1d_2^2 \\[3pt]&= (n_2 - 2d_1 + k)_2 d_2^2 \left( 1 - \frac{n_1d_1^2}{(n_2 - 2d_1 + k)_2 d_2^2} - \frac{2d_1}{n_2 - 2d_1 + k - 1}\right) \\[3pt]\fbox{$d_1 = n_2p, d_2 = n_1p, k \ge 1$}\quad& \ge (n_2 - 2d_1 + k)_2 d_2^2 \left( 1 - \frac{n_2^2}{(n_2 - 2pn_2)^2 n_1} - \frac{2pn_2}{n_2 - 2pn_2}\right) \\[3pt]&= (n_2 - 2d_1 + k)_2 d_2^2 \left( 1 - \frac{1}{(1 - 2p)^2n_1} - \frac{2p}{1 - 2p}\right) \\[3pt]\fbox{$p\le1/5$}\quad &\ge (n_2 - 2d_1 + k)_2 d_2^2 \left( 1 - \left( \frac{25}{9pn_1} + \frac{10}{3} \right) p \right) \\[3pt]\fbox{$\hat p \hat n \to \infty$}\quad &\ge (n_2 - 2d_1 + k)_2 d_2^2 \left( 1 - 4p \right).\end{align*}

A moment of thought (and a glance at Figure 3) reveals that the backward switching corresponds to choosing a common red neighbour $w_2$ of $u_1$ and $v_1$ , choosing alternating cycles $u_1w_2x_1x_2$ and $v_1w_2y_1y_2$ such that $x_1 \neq y_1$ and the edges $v_1x_2$ and $u_1y_2$ are red (note that these assumptions imply $x_2 \neq y_2$ ), and swapping the colours along the cycles.

Since for every $H' \in {\mathcal{R}}_{k-1}$ the number of choices of $x_2 \in \Gamma_{H'}(u_1) \setminus \Gamma_{H'}(v_1)$ and $ y_2 \in \Gamma_{H'}(v_1) \setminus \Gamma_{H'}(u_1)$ is exactly $(d_1 - k + 1)^2$ , we will bound $\deg_{B_k}(H')/(d_1 - k + 1)^2$ from above and below by fixing $x_2, y_2$ and estimating the number of possible triplets $(w_2, x_1,y_1)$ . For the upper bound, we can choose $w_2$ in exactly $\operatorname{cod}_{K \setminus H'}(u_1,v_1) = n_2 - 2d_1 + k - 1$ ways, and a pair $x_1, y_1$ of distinct blue neighbours of $w_2$ in at most $(d_2)_2$ ways (we ignore the requirement that $x_1x_2$ and $y_1y_2$ be red). Thus,

(62) \begin{equation} \deg_{B_k}(H')/(d_1 - k + 1)^2 \le (n_2 - 2d_1 + k - 1)(d_2)_2. \end{equation}

For the lower bound, given $x_2, y_2$ , we need to subtract from the upper bound (62) the number of choices of $w_2, x_1, y_1$ for which $x_1x_2$ or $y_1y_2$ is blue. In the former case, ignoring other constraints, there are at most $d_2$ choices of a blue neighbour $x_1$ of $x_2$ , at most $d_1$ choices of a blue neighbour $w_2$ of $x_1$ , and at most $d_2$ choices of a blue neighbour $y_1$ of $w_2$ . By symmetry, the total number of bad choices is at most $2d_1d_2^2$ , whence

\begin{align*} \deg_{B_k}(H')/(d_1 - k + 1)^2 &\ge (n_2 - 2d_1 + k - 1)(d_2)_2 - 2d_1d_2^2 \\[3pt] \fbox{$k \ge 1$}\quad &\ge (n_2 - 2d_1 + k - 1)d_2^2\left( 1 - \frac{1}{d_2} - \frac{2d_1}{n_2 - 2d_1} \right) \\[3pt] \fbox{$d_1 = n_2p, p \le 1/5$}\quad &\ge (n_2 - 2d_1 + k - 1)d_2^2\left( 1- d_2^{-1} - 4p \right).\end{align*}

Since, by our assumptions, $d_2 \ge \hat p\hat n \to \infty$ and $p \le 1/5$ , it follows that the graphs $B_k$ for $k = 1, \dots, d_1$ have minimum degree at least 1. In particular, this means that starting from any p-biregular graph and applying a suitable number of switchings, we can obtain a p-biregular graph with any prescribed co-degree of $u_1$ and $v_1$ . Since we implicitly assume ${{\mathcal{R}}(n_1,n_2,p)}$ is non-empty, this implies that all classes ${\mathcal{R}}_k$ , $k = 0, \dots, d_1$ , are non-empty, i.e., the numbers $r_k = |{\mathcal{R}}_k|$ are all positive, satisfying one of the conditions of Claim 12.

Using (14) and bounds on the degrees in $B_k$ , for $k = 1, \dots, d_1,$ we have

(63) \begin{align} \notag \frac{r_k}{r_{k-1}} \le \frac{\max_{H' \in {\mathcal{R}}' }\deg_{B_k}(H')}{\min_{H \in {\mathcal{R}}} \deg_{B_k}(H)} &\le \frac{(d_1 - k + 1)^2}{k(n_2 - 2d_1 + k)}\cdot\frac{1}{1 - 4p} \\ &\le \frac{(d_1 - k + 1)^2}{k(n_2 - 2d_1 + k)} \cdot \left( 1 + 20p \right),\end{align}

(where the last inequality holds because $p \le 1/5$ implies $\frac{1}{1 - 4p} = 1 + \frac{4p}{1 - 4p} \le 1 + 20p$ ), and

(64) \begin{equation} \frac{r_k}{r_{k-1}} \ge \frac{\min_{H' \in {\mathcal{R}}' }\deg_{B_k}(H')}{\max_{H \in {\mathcal{R}}} \deg_{B_k}(H)} \ge \frac{(d_1 - k + 1)^2}{k(n_2 - 2d_1 + k)} \left( 1 - d_2^{-1} -4p \right). \end{equation}

Let $\mu$ be a real number such that

(65) \begin{equation}\frac{(d_1 - \mu + 1)^2}{\mu(n_2 - 2d_1 + \mu)} = 1.\end{equation}

After solving for $\mu$ , we have

(66) \begin{equation} \mu = \frac{(d_1 + 1)^2}{n_2 + 2} \in [\:p^2n_2, p^2n_2(1 + 3d_1^{-1})\:]. \end{equation}
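To spell out the elementary computation behind (66): expanding (65) and cancelling the term $\mu^2$ on both sides,

```latex
\begin{align*}
 (d_1 - \mu + 1)^2 = \mu(n_2 - 2d_1 + \mu)
 &\iff (d_1+1)^2 - 2\mu(d_1+1) + \mu^2 = \mu n_2 - 2\mu d_1 + \mu^2 \\
 &\iff (d_1+1)^2 = \mu(n_2 + 2).
\end{align*}
```

The stated inclusion then follows since $d_1 \le n_2$ (for the lower bound) and $(1 + d_1^{-1})^2 \le 1 + 3d_1^{-1}$ (for the upper bound).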

Define

(67) \begin{equation} \mu_+ \,{:\!=}\, \mu(1 + 20p), \quad \text{and} \quad \mu_- \,{:\!=}\, \mu(1 - d_2^{-1} -4p). \end{equation}

From (63), (65), and (67), and since the function $k \mapsto (d_1 - k + 1)^2/(n_2 - 2d_1 + k)$ is nonincreasing, for $\mu \le k \le d_1$ we have

\begin{equation*} \frac{r_k}{r_{k-1}} \le \frac{(d_1 - \mu + 1)^2\left( 1 + 20p \right)}{\mu (n_2 - 2d_1 + \mu)} \cdot \frac{\mu}{k} = \frac{\mu_+}{k}. \end{equation*}

On the other hand, it follows from (64), (65), (67), and the same monotonicity that, for $1\le k \le \mu$ , we have

\begin{equation*} \frac{r_{k-1}}{r_k} \le \frac{\mu (n_2 - 2d_1 + \mu)}{(d_1 - \mu + 1)^2\left( 1 - 1/d_2 -4p \right)} \cdot \frac{k}{\mu} = \frac{k}{\mu_-}. \end{equation*}

Note that, since $\min\{d_1,d_2\} = p\hat n \to \infty$ and $p\le1/5$ , it follows from (66) and (67) that

(68) \begin{equation} p^2n_2(1 - (p\hat n)^{-1} - 4p) \le \mu_-\le \mu_+\le p^2n_2(1 + 15(p\hat n)^{-1} + 20p)\le 6p^2n_2 .\end{equation}

Therefore,

(69) \begin{equation} \mu_+ \le p^2n_2 + 20\left(p^3n_2 + \frac{pn_2}{\hat n} \right) \quad\text{and}\quad \mu_- \ge p^2n_2 - 20\left(p^3n_2 + \frac{pn_2}{\hat n} \right).\end{equation}
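The step from (68) to (69) is a term-by-term comparison; for $\mu_+$ ,

```latex
\begin{equation*}
 p^2n_2\bigl(1 + 15(p\hat n)^{-1} + 20p\bigr)
 = p^2n_2 + \frac{15pn_2}{\hat n} + 20p^3n_2
 \le p^2n_2 + 20\left(p^3n_2 + \frac{pn_2}{\hat n}\right),
\end{equation*}
```

and the bound for $\mu_-$ follows in the same way from the first inequality in (68).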

Consequently, setting

(70) \begin{equation} x \,{:\!=}\, 20 \sqrt{\lambda p^2 n_2} + \lambda,\end{equation}

we have, by (69)

\begin{equation*} \mathbb{P}\left(|X - p^2n_2| \ge 20\left(p^3n_2 + \frac{pn_2}{\hat n}\right) + x\right) \le \mathbb{P}\left(X \ge \mu_+ + x\right) + \mathbb{P}\left(X\le \mu_- - x\right). \end{equation*}

Noting that the last inequality in (68) implies $x \ge \sqrt{2\mu_+\lambda} + \lambda$ , using Claim 12 we obtain (57), completing the proof in the case $\mathbb{I} = 1$ .
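Indeed, this implication is a one-line check: since (68) gives $\mu_+ \le 6p^2n_2$ and $\sqrt{12} < 20$ ,

```latex
\begin{equation*}
 \sqrt{2\mu_+\lambda} + \lambda
 \le \sqrt{12\lambda p^2 n_2} + \lambda
 \le 20\sqrt{\lambda p^2 n_2} + \lambda = x.
\end{equation*}
```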

Case $\mathbb{I} = 0$ . For the switching argument used in the previous case it was crucial that p was small, as otherwise several estimates would be negative and so meaningless. Therefore, it cannot be used now. Fortunately, in this case we are in a position to apply an asymptotic enumeration approach based on Theorem 5 and analogous to the proof of Theorem 2.1 in [7].

Recall the notation ${{\mathcal{R}}({\mathbf{d}}_1,{\mathbf{d}}_2)}$ defined before Theorem 5. Writing $A = \Gamma_H(u_1)$ and $B = \Gamma_H(v_1)$ , we note that every graph in ${\mathcal{R}}_k$ induces an ordered partition of $V_2$ into four sets $A \cap B$ , $A \setminus B$ , $B \setminus A$ and $V_2 \setminus (A \cup B)$ , of sizes, respectively, $k, d_1 - k, d_1 - k$ , and $n_2 - 2d_1 + k$ . There are exactly

\begin{equation*}\Pi(n_2,d_1,d_2,k)\,{:\!=}\,\frac{n_2!}{k!(d_1 - k)!^2(n_2 - 2d_1 + k)!} \end{equation*}

such partitions. After removing vertices $u_1$ and $v_1$ , we obtain a graph $H^* \in {{\mathcal{R}}({\mathbf{d}}_1,{\mathbf{d}}_2)}$ on $(V_1\setminus\{u_1,v_1\}, V_2)$ , with ${\mathbf{d}}_1$ having all its $n_1 - 2$ entries equal to $d_1$ , and entries of ${\mathbf{d}}_2$ determined by the partition: entries equal to $d_2 - 2$ on $A \cap B$ , to $d_2 - 1$ on $A \triangle B$ and the remaining ones equal to $d_2$ . Since $H^*$ together with the ordered partition uniquely determines H, we have

\begin{equation*} |{\mathcal{R}}_k| = \Pi(n_2,d_1,d_2,k)\times |{\mathcal{R}}({\mathbf{d}}_1,{\mathbf{d}}_2)|.\end{equation*}

Let us check that the three assumptions (i)–(iii) of Theorem 5 are satisfied. When doing so, we should remember that we have $n_1 - 2$ instead of $n_1$ , but that the parameter $p = d_1/n_2$ remains intact. With foresight, choose $a = 0.35$ and, say, $b = 0.1$ , and, for convenience, $C = 1$ ; via Theorem 5, these determine an $\epsilon > 0$ .

Note that $\mathbb{I} = 0$ implies

(71) \begin{equation} \hat n \ge \frac{4\max \left\{ n_1,n_2 \right\}} {\log N} \quad \text{and} \quad \frac{n_1}{n_2} + \frac{n_2}{n_1} \le \frac{p \log N}{2} \le p \log \max \left\{ n_1, n_2 \right\}.\end{equation}

As the degrees on one side are all equal and on the other side they differ from each other by at most 2, assumption (i) holds true with a big margin. Using $q \ge 1/2$ and the first inequality in (71), we get

\begin{equation*} (pq)^2\min\{n_1 - 2,n_2\}^{1 + \epsilon} \ge \frac{p^2\hat n^{1 + \epsilon}}{4}(1 + o(1)) = \Omega \left( \frac{ \left(\max \left\{ n_1, n_2 \right\}\right)^{1 + \epsilon}}{(\log N)^{3 + \epsilon}} \right) \gg \max \left\{ n_1, n_2 \right\},\end{equation*}

which implies assumption (ii) for large $\hat n$ . Finally, using elementary inequalities $(1 - 2p)^2 \le q$ (since $p \le 1/2$ ), $1 \le \left( n_1/n_2 + n_2/n_1 \right)/2$ , and the second inequality in (71), we obtain

\begin{align*} \frac{(1 - 2p)^2}{4pq} &\left(1 + \frac{5(n_1 - 2)}{6n_2} + \frac{5n_2}{6(n_1 - 1)}\right) \le \frac{1}{4p}\left( \frac{1}{2} + \frac{5}{6} \right)\left(\frac{n_1}{n_2} + \frac{n_2}{n_1}\right) (1 + o(1)) \\ &\le \frac{1}{3}\log \max \left\{ n_1, n_2 \right\} (1 + o(1)) \ll 0.35 \cdot \log \max \left\{ n_1 - 2, n_2 \right\},\end{align*}

which for large $\hat n$ implies assumption (iii) with $a = 0.35$ .

Note that in (20) we have $D_1 = 0$ , while $D_2 \le 4n_2 \ll pq n_1n_2$ , by the assumption $\hat p\hat n\to\infty$ . Thus, uniformly over k, the exponent in (20) is $-1/2 + o(1)$ and, by Theorem 5,

\begin{equation*} |{\mathcal{R}}_k| \sim \frac {e^{-1/2}n_2!\binom {n_2}{d_1}^{n_1 - 2} \binom {n_1 - 2}{d_{2} - 2}^{k}\binom {n_1 - 2}{d_{2} - 1}^{2d_1 - 2k}\binom {n_1 - 2}{d_{2}}^{n_2 - 2d_1 + k}} {k!(d_1 - k)!^2(n_2 - 2d_1 + k)!\binom{(n_1 - 2)n_2}{(n_1 - 2)d_1}} =: r_k.\end{equation*}

Straightforward calculations yield

\begin{equation*}\frac{r_k}{r_{k-1}} =\frac {(d_1 - k + 1)^2(n_1 - 2 - d_2 + 1)(d_2 - 1)} {k(n_2 - 2d_1 + k)\cdot d_2(n_1 - d_2)} \le \frac{(d_1 - k + 1)^2}{k(n_2 - 2d_1 + k)}\end{equation*}

and, because $d_2/n_1 = p\le1/2$ ,

\begin{equation*}\frac{r_k}{r_{k-1}} \ge \frac{(d_1 - k + 1)^2}{k(n_2 - 2d_1 + k)}\left(1 - 2d_2^{-1} \right).\end{equation*}
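For the record, the "straightforward calculations" behind these ratios reduce to cancelling factorials and simplifying two ratios of binomial coefficients:

```latex
\begin{equation*}
 \frac{\binom{n_1-2}{d_2-2}}{\binom{n_1-2}{d_2-1}} = \frac{d_2-1}{n_1-d_2},
 \qquad
 \frac{\binom{n_1-2}{d_2}}{\binom{n_1-2}{d_2-1}} = \frac{n_1-d_2-1}{d_2},
\end{equation*}
```

while the factorial part contributes $(d_1-k+1)^2/(k(n_2-2d_1+k))$ ; the product of these three factors is the displayed ratio.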

(Compare with (63) and (64) to note the absence of $\Theta(p)$ error terms.) With $\mu$ as in (66), we redefine

(72) \begin{equation}\mu_+ \,{:\!=}\, \mu\quad\text{and}\quad\mu_- \,{:\!=}\, \mu (1 - 2d_2^{-1}).\end{equation}

From (66) and (72) it follows that

(73) \begin{equation} \mu_{+} = \mu \le p^2n_2 (1 + 3d_1^{-1}) = p^2n_2 + 3p\le p^2n_2 + \frac{20pn_2}{\hat n}\end{equation}

and

(74) \begin{equation} \mu_{-} = \mu (1 - 2d_2^{-1}) \ge p^2n_2 (1 - 2d_2^{-1}) = p^2n_2 - \frac{2pn_2}{n_1} \ge p^2n_2 - \frac{20pn_2}{\hat n}.\end{equation}

From (73) it follows that $\mu_+ \le 6p^2n_2$ (since $p\hat n \to \infty$ ), which implies that x defined in (70) satisfies $x \ge \sqrt{2\mu_+\lambda} + \lambda$ . Consequently, using (73), (74) and Claim 12 we infer

\begin{equation*} \mathbb{P}\left(|X - p^2n_2| \ge \frac{20pn_2}{\hat n} + x\right) \le \mathbb{P}\left(X \ge \mu_+ + x\right) + \mathbb{P}\left(X\le \mu_- - x\right) = O\left( \sqrt{N} e^{-\lambda} \right).\end{equation*}

We have obtained (57) in the case $\mathbb{I} = 0$ .

5.2. Degrees and co-degrees in $\mathbb{R}(t)$

For convenience, having fixed $H\in{{\mathcal{R}}(n_1,n_2,p)}$ , we will denote by $\mathbb{P}_H$ and $\operatorname{\mathbb{E}}_H$ the conditional probability and expectation with respect to the event $\mathbb{R} = H$ . In the conditional space defined by such an event, $(\eta_1, \dots, \eta_M)$ is just a uniformly random permutation of the edges of H and therefore $\mathbb{R}(t)$ is a uniformly random t-subset of edges of H.

We first show that the degrees of vertices in the process $\mathbb{R}(t)$ grow proportionally almost until the end. Recall that $d_1 = pn_2,\; d_2 = pn_1$ , while $\tau=\tau(t)$ and $\tau_0$ are defined, respectively, in (24) and (25).

Lemma 13. If $\lambda = \lambda(n_1,n_2) \le \tau_0 p \hat n$ , then for $i = 1, 2$ , with probability $1 - O(N^2e^{-9\lambda/4})$

(75) \begin{equation} \forall t < t_0 \quad \forall v \in V_i \quad |\deg_{\mathbb{R}(t)}(v) - (1 - \tau) d_i| \le 3 \sqrt{\lambda \tau d_i}.\end{equation}

Proof. Fix $t < t_0$ and $v\in V_i$ and set $X_t(v) \,{:\!=}\, \deg_{\mathbb{R} \setminus \mathbb{R}(t)}(v)$ . For any $H \in {{\mathcal{R}}(n_1,n_2,p)}$ , conditioning on $\mathbb{R} = H$ , the random variable $X_t(v) \sim \operatorname{Hyp}(M, d_i, M - t)$ has a hypergeometric distribution. Since the distribution does not depend on H, $X_t(v)$ has the same distribution unconditionally. In view of (24), $\operatorname{\mathbb{E}} X_t(v) = \tau d_i$ , therefore combining (17) and (18), for all $x > 0$ ,

(76) \begin{align}\mathbb{P}\left(|X_t(v) - \tau d_i| \ge x\right)\le 2 \exp \left\{ - \frac{x^2}{2\left(\tau d_i + x/3 \right)} \right\}= 2 \exp \left\{ - \frac{x^2}{2\tau d_i\left(1 + x/(3\tau d_i) \right)} \right\}.\end{align}

Let $x \,{:\!=}\, 3\sqrt{\lambda\tau d_i}$ . Since $\tau \ge \tau_0$ , by the assumption $\lambda\le \tau_0 p\hat n $ ,

(77) \begin{equation} x/(3\tau d_i) = \sqrt{\lambda/(\tau d_i)} \le \sqrt{\tau_0 \hat n p/(\tau_0 d_i)} \le 1.\end{equation}

Consequently, taking the union bound over all $(1 - \tau_0)Mn_i \le N^2$ choices of t and v and using (76) and (77), we conclude that (75) fails with probability at most

\begin{equation*}2N^2\exp \left\{ - \frac{x^2}{2\tau d_i\left(1 + x/(3\tau d_i) \right)} \right\}\le 2N^2\exp \left\{ -\frac{9 \tau d_i \lambda}{4\tau d_i} \right\} = 2N^2e^{-9\lambda/4}.\end{equation*}

In the proof of Lemma 10, to have a pseudorandom-like property of $\mathbb{R} \setminus \mathbb{R}(t)$ for $p > 0.49$ , we will need a bound on the upper tail of co-degrees in $\mathbb{R}(t)$ . Recall the definition of co-degree from (55).

Lemma 14. Assume that $n_1 \ge n_2$ and $p > 0.49$ . If $\lambda = \lambda(n_1,n_2) \ge \log N$ , then, with probability $1 - O(N^3e^{-\lambda/3 })$ ,

(78) \begin{equation} \forall t \le M \quad \forall u_1 \neq v_1 \quad\operatorname{cod}_{\mathbb{R}(t)}(u_1,v_1) \le(1 - \tau)^2p^2 n_{2} + 20 q^3n_2\mathbb{I} + 15\sqrt{\lambda n_{2}}. \end{equation}

Proof. If $\lambda \ge n_2$ , inequality (78) holds trivially, so let us assume $\lambda < n_2$ . We can condition on $\mathbb{R} = H$ satisfying

(79) \begin{align} \notag \forall u_1,v_1 \in V_1 \quad\operatorname{cod}_{H}(u_1,v_1) & \le p^2n_2+ 20\left(\hat p^3n_2\mathbb{I} + n_2\hat p/\hat n + \sqrt{\hat p^2\lambda n_2}\right) + \lambda\\[3pt] \notag \fbox{$\hat p \le \min \{q, 0.5\}, n_2 = \hat n$}\quad&\le p^2n_2+ 20\left(q^3n_2\mathbb{I} + 0.5 + 0.5\sqrt{\lambda n_2}\right) + \lambda\\[3pt] \fbox{$1 \ll \lambda < n_2$}\quad &\le p^2n_2 + 20q^3n_2 \mathbb{I} + 12\sqrt{\lambda n_2}.\end{align}

Indeed, the probability of the opposite event can be bounded, by Lemma 11 and the union bound, by $O\left(n_1^2\sqrt Ne^{-\lambda}\right) \ll N^3e^{-\lambda}$ . Taking this bound into account, it thus suffices to show that if H satisfies (79), then for any distinct $u_1, v_1\in V_1$ and $t \le M$

(80) \begin{equation} \mathbb{P}_H\left({\operatorname{cod}_{\mathbb{R}(t)}(u_1,v_1) > (1 - \tau)^2p^2 n_{2} + 20 q^3n_2\mathbb{I} + 15\sqrt{\lambda n_{2}}}\right) \le 2e^{-\lambda/3}, \end{equation}

since then the proof is completed by a union bound over $O(N^3)$ choices of $t, u_1, v_1$ .

Fix $t \le M, u_1 \neq v_1$ and let $Y \,{:\!=}\, \left| \Gamma_{\mathbb{R}(t)}(u_1) \cap \Gamma_{H}(v_1)\right|$ and $\operatorname{cod}_H \,{:\!=}\, \operatorname{cod}_H(u_1, v_1)$ . The distribution of Y conditioned on $\mathbb{R} = H$ is hypergeometric $\operatorname{Hyp}(M, \operatorname{cod}_H, t)$ , and hence, by (79),

\begin{equation*} \mu_Y \,{:\!=}\, \operatorname{\mathbb{E}}_H{Y} = \frac{t\operatorname{cod}_H}{M} = (1 - \tau) \operatorname{cod}_H \le (1 - \tau)p^2n_2 + 20q^3n_2 \mathbb{I} + 12 \sqrt{\lambda n_2}. \end{equation*}

Using (17), a trivial bound $\mu_Y \le n_2$ , and our assumption $\lambda < n_2$ ,

\begin{equation*} \mathbb{P}_H\left({ Y \ge \mu_Y + \sqrt{\lambda n_2}}\right) \le \exp\left\{-\frac{\lambda n_2}{2(\mu_Y + \sqrt{\lambda n_2}/3)}\right\} \le e^{-3\lambda/8}\le e^{-\lambda/3}. \end{equation*}

Since also trivially $Y \le \min\{d_1, t\}$ , we have shown that given $\mathbb{R} = H$ , with probability at least $1 - e^{-\lambda/3}$ ,

(81) \begin{equation} Y \le y_0 \,{:\!=}\, \min \{\mu_Y + \sqrt{\lambda n_2}, d_1, t\} \le (1 - \tau)p^2n_2 + 20q^3n_2 \mathbb{I} + 13 \sqrt{\lambda n_2}.\end{equation}

Let ${\mathcal{E}}_{(80)}$ be the event that the inequality in (80) holds. By the law of total probability,

\begin{align*} \mathbb{P}_H\left({{\mathcal{E}}_{(80)}}\right) &= \sum_y \mathbb{P}_H\left({{\mathcal{E}}_{(80)}}\,|\,{Y = y}\right)\mathbb{P}_H\left({Y = y}\right) \\[3pt] \fbox{(81)}\quad &\le \sum_{y \le y_0}\mathbb{P}_H\left({{\mathcal{E}}_{(80)}}\,|\,{Y = y}\right)\mathbb{P}_H\left({Y = y}\right) + e^{-\lambda/3}.\end{align*}

Hence, to prove (80), it suffices to show that, for any $ y = 0, \dots, y_0$ ,

(82) \begin{equation}\mathbb{P}_H\left({{\mathcal{E}}_{(80)}}\,|\,{Y = y}\right) \le e^{-\lambda/3}.\end{equation}

Fix an integer $y \in [0, y_0]$ and a set $S\subseteq V_2$ of size $|S| = y \le y_0$ . Under the additional conditioning $\Gamma_{\mathbb{R}(t)}(u_1)\cap \Gamma_H(v_1) = S$ , the set $\mathbb{R}(t)$ is the union of a fixed y-element set $\{u_1w\,\ :\,\ w \in S\}$ and a random $(t - y)$ -element subset of $E(H) \setminus \{u_1w\,\ :\,\ w \in \Gamma_H(u_1)\cap \Gamma_H(v_1)\}$ . Thus, in this conditional space, $X \,{:\!=}\, \operatorname{cod}_{\mathbb{R}(t)}(u_1,v_1)$ counts how many of these $t-y$ random edges fall into the set $\{v_1w\,\ :\,\ w \in S\}$ and therefore $X \sim \operatorname{Hyp}(M - \operatorname{cod}_H, y, t - y)$ . Moreover, the distribution of X is the same for all S of size y, so X has the same distribution when conditioned on $Y = y$ . In particular,

\begin{align*} \mu_X \,{:\!=}\, \operatorname{\mathbb{E}}_H\left({X}\,|\,{Y = y}\right) &= \frac{y(t-y)}{M-\operatorname{cod}_H} \\[3pt] \fbox{$y \ge 0$}\quad &\le \frac{yt}{M} \left(1 + \frac{\operatorname{cod}_H}{M - \operatorname{cod}_H}\right) \\[3pt] \fbox{$M = pn_1n_2, \operatorname{cod}_H \le pn_2$}\quad &\le (1-\tau) y \left(1 + \frac{1}{n_1 - 1}\right) \\[3pt] \fbox{$y \le y_0$ and (81)}\quad &\le (1-\tau)^2p^2n_2 + 20q^3n_2 \mathbb{I} + 13 \sqrt{\lambda n_2} + \frac{n_2}{n_1 - 1} \\[3pt] \fbox{$n_1 \ge n_2$}\quad &\le (1-\tau)^2p^2n_2 + 20q^3n_2 \mathbb{I} + 14 \sqrt{\lambda n_2}. \end{align*}

Using again (17) as well as inequalities $\mu_X \le n_2$ and $\lambda \le n_2$ , we infer that

\begin{align*}\mathbb{P}_H\left({X \ge \mu_X + \sqrt{\lambda n_2}}\,|\,{Y = y}\right) \le \exp\left\{-\frac{\lambda n_2}{2(\mu_X + \sqrt{\lambda n_2}/3)}\right\} \le e^{-\lambda/3} ,\end{align*}

which, together with the above upper bound on $\mu_X$ , implies (82). This completes the proof of Lemma 14.

5.3. Proof of Lemma 9

Recall the definitions of $\tau=\tau(t)$ in (24), $\delta(t)$ in (43), and $\lambda(t)$ in (44). In this proof we utilise some technical bounds on $\tau\,{:\!=}\,\tau(t)$ and $\gamma_t$ proved in Section 8 (see Proposition 23). In particular, the bound (146) on $\gamma_t$ , together with (45), implies that

(83) \begin{equation} \delta(t) \le 1/9\le 1.\end{equation}

(We do not even need to remember what $\gamma_t$ is to see that.)

We now derive an upper bound on $\lambda(t)$ . By (147), with a huge margin,

\begin{equation*} 6 \log N \le \frac{\tau \hat p \hat n}{16(C + 3)} \le \frac{\tau pq\hat n }{8(C+3)}.\end{equation*}

On the other hand, for $p > 0.49$ , squaring and rearranging the inequality (145) implies

\begin{equation*} \frac{64\log N}{\tau p q} \le \frac{\tau pq\hat n }{8(C+3)}.\end{equation*}

Summing up, for any p,

(84) \begin{equation}\lambda(t)\le \frac{\tau pq \hat n}{4(C + 3)}.\end{equation}

Without loss of generality, we assume that $i = 1$ . Let ${\mathcal{F}}_t$ be the family of graphs H satisfying, for any distinct $u_1, v_1 \in V_1$ ,

(85) \begin{equation} |d_1 - \operatorname{cod}_H(u_1, v_1) - pqn_2| \le 20\hat p n_{2} \left(\hat p^2\mathbb{I} + \frac{1}{\hat n} + \sqrt{\frac{(C+3)\lambda(t)}{n_{2}}} + \frac{(C+3)\lambda(t)}{20\hat pn_2}\right).\end{equation}

Lemma 11 with $\lambda = (C+3)\lambda(t)$ and a union bound over the $O(N^2)$ choices of $u_1, v_1$ imply

(86) \begin{equation} \mathbb{P}\left(\mathbb{R} \in {\mathcal{F}}_t\right) = 1 - O(N^{2.5} e^{-(C+3)\lambda(t)}) \overset{{\lambda(t) \ge 6\log N}}{\ge} 1 - O(N^{-(C+1)}e^{-2\lambda(t)}).\end{equation}

Writing $\delta_* = \sqrt{\frac{(C+3)\lambda(t)}{\tau p q \hat n}}$ and noting that (84) implies $\delta_* \le 1/2$ , the sum of the last three terms in the parentheses in (85) is at most

\begin{equation*} 2 \cdot \sqrt{\frac{(C+3)\lambda(t)}{\hat n}} + \frac{(C+3)\lambda(t)}{20\hat p \hat n} {\le} 2\sqrt{pq}\delta_* + \frac{pq\delta_*^2}{20\hat p} {\le} (1 + 1/40) \delta_*\le \frac{3\delta_*}{\sqrt{6}}.\end{equation*}

In addition, the factor in front of the parentheses is at most $40pq n_2$ . Thus, (85) implies

(87) \begin{equation} |d_1 - \operatorname{cod}_H(u_1, v_1) - pqn_2| \le 40 pqn_2 \left( \hat p^2 \mathbb{I} + \frac{3\delta_*}{\sqrt{6}} \right) = pqn_2 \cdot \delta(t)/3.\end{equation}

We claim that for $t = 0, \dots, t_0 - 1$

(88) \begin{equation} \max_{H \in {\mathcal{F}}_t}\mathbb{P}_H\left({\max_{u_1 \ne v_1} \left|\frac{\theta_t(u_1,v_1)}{\tau pqn_2 } - 1\right| \ge \delta(t)}\right) \le 2N^2e^{-(C+3)\lambda(t)}, \end{equation}

and, deferring its proof to the end, we first show how (86) and (88) imply the lemma.

Inequalities (86) and (88) imply

(89) \begin{align} \notag \mathbb{P}\left(\max_{u_1 \ne v_1} \left|\frac{\theta_t(u_1,v_1)}{\tau pqn_2 } - 1\right| \ge \delta(t)\right) &\le 2N^2e^{-(C+3)\lambda(t)} + \mathbb{P}\left(\mathbb{R} \notin {\mathcal{F}}_t\right) \\[3pt] &= O(N^{-(C+1)}e^{-2\lambda(t)}).\end{align}

Consider random variables, for $t = 0, \dots, t_0 - 1$ ,

\begin{equation*} Y_t \,{:\!=}\, \mathbb{P}\left({\max_{u_1 \ne v_1} \left|\frac{\theta_t(u_1,v_1)}{\tau pqn_2 } - 1\right| > \delta(t)}\,|\,{\mathbb{R}(t)}\right). \end{equation*}

Using Markov’s inequality and (89), we infer that

\begin{align*} \mathbb{P}\left(Y_t > e^{-2\lambda(t)}\right) &\le e^{2\lambda(t)}\operatorname{\mathbb{E}} Y_t = e^{2\lambda(t)}\mathbb{P}\left(\max_{u_1 \ne v_1} \left|\frac{\theta_t(u_1,v_1)}{\tau pqn_2 } - 1\right| \ge \delta(t)\right) = O( N^{-(C+1)}), \end{align*}

which, taking the union bound over the O(N) choices of t, implies that (47) holds with the desired probability, completing the proof of the lemma.

Returning to the proof of (88), fix $t < t_0$ , $H \in {\mathcal{F}}_t$ and two distinct vertices $u_1,v_1\in V_1$ . Conditioning on $\mathbb{R} = H$ , note that, recalling (46), the random variable $X \,{:\!=}\, \theta_t(u_1,v_1) = |\Gamma_{H \setminus \mathbb{R}(t)}(u_1)\cap\Gamma_{K\setminus H}(v_1)|$ counts elements in the intersection of two subsets of E(H): a fixed set $ \left\{ (u_1,w)\ \, :\ \, w \in \Gamma_H(u_1) \cap \Gamma_{K \setminus H}(v_1) \right\}$ of size $d_1 - \operatorname{cod}_H(u_1, v_1)$ and a random set $H \setminus \mathbb{R}(t)$ of size $M - t$ . Hence $X \sim \operatorname{Hyp}(M, d_1 - \operatorname{cod}_H(u_1,v_1), M -t)$ has the hypergeometric distribution with expectation

\begin{equation*} \mu_X \,{:\!=}\, \operatorname{\mathbb{E}} X = (d_1 - \operatorname{cod}_H(u_1,v_1))(M - t)/M = \tau (d_1 - \operatorname{cod}_H(u_1,v_1)).\end{equation*}

Note that by (87),

(90) \begin{equation}|\mu_X - \tau p q n_2| \le \tau pqn_2 \cdot \delta(t)/3.\end{equation}

Let $\lambda^* \,{:\!=}\, (3C + 9)\lambda(t)$ . From (90), (83), and (84) it follows that

(91) \begin{equation} \mu_X \ge \frac{26}{27}\tau pqn_{2} \ge \lambda^*.\end{equation}
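In detail, the two inequalities in (91) come from (90) with $\delta(t) \le 1/9$ (see (83)), and from (84) together with $\hat n \le n_2$ :

```latex
\begin{equation*}
 \mu_X \ge \Bigl(1 - \frac{\delta(t)}{3}\Bigr)\tau pqn_2 \ge \frac{26}{27}\,\tau pqn_2,
 \qquad
 \lambda^* = 3(C+3)\lambda(t) \le \frac{3}{4}\,\tau pq\hat n \le \frac{26}{27}\,\tau pqn_2.
\end{equation*}
```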

Note that (43) implies

(92) \begin{equation} \delta(t)^2\ge7200\frac{\lambda^*}{\tau pqn_2}\ge 10\frac{\lambda^*}{\tau pqn_2}. \end{equation}

By (17), (18), and (91),

\begin{equation*} \begin{split} \mathbb{P}_H\left({|X - \mu_X| \ge \sqrt{\mu_X \lambda^*} }\right) &\le 2 \exp \left\{ - \frac{\lambda^*}{2\left(1 + \frac{1}{3}\sqrt{\lambda^*/\mu_X } \right)} \right\} \\[3pt] &\le 2 e^{-3\lambda^*/8}\le 2 e^{-\lambda^*/3} = 2e^{-(C + 3)\lambda(t)}. \end{split}\end{equation*}

Thus, with probability $1 - 2e^{-(C+3)\lambda(t)}$

\begin{align*} |\theta_t(u_1,v_1) - \tau p q n_2| &\le |X - \mu_X| + |\mu_X - \tau p q n_2| \\[4pt] &\le \sqrt{\mu_X \lambda^*} + |\mu_X - \tau p q n_2| \\[4pt] \fbox{(90), (92)}\quad &\le \sqrt{\left( 1 + \delta(t)/3 \right)\tau p q n_2 \cdot \delta^2(t)\tau p q n_2 / 10} + (\delta(t)/3) \tau p q n_2\\[4pt] \fbox{(83)}\quad &\le \delta(t)\tau pqn_2 (\sqrt{(1 + 1/27)/10} + 1/3) \\[4pt] &\le \delta(t)\tau pqn_2.\end{align*}

Hence, applying also the union bound over all $n_1^2 \le N^2$ choices of $u_1,v_1$ , we infer (88).

6. Alternating cycles in regularly 2-edge-coloured jumbled graphs

The ultimate goal of this section is to prove Lemma 10. While for $p\le0.49$ the proof follows relatively easily from Lemma 13 by a standard switching technique, the case $p > 0.49$ is much more involved. To cope with it, we first study the existence of alternating walks and cycles in a class of 2-edge-coloured pseudorandom graphs.

In Subsection 6.1, we define an appropriate notion of pseudorandom bipartite graphs (jumbledness), inspired by a similar notion introduced implicitly by Thomason in [14]. We show that for $p > 0.49$ and suitably chosen parameters, the random graph $K\setminus \mathbb{R}(t)$ is jumbled with high probability (Lemma 17).

The next two subsections are devoted to 2-edge-coloured jumbled graphs which are almost regular in each colour. After proving a technical Lemma 18 in Subsection 6.2, in Subsection 6.3 we show the existence of alternating short walks between any two vertices of almost regular 2-edge-coloured jumbled graphs (Lemma 19).

An immediate consequence of Lemma 19 is Lemma 20, which states that every edge belongs to an alternating short cycle. The latter result together with a standard switching argument (Proposition 4) will be used in the proof of Lemma 10 for $p > 0.49$ . That proof, for both cases $p\le 0.49$ and $p > 0.49$ , is presented in Subsection 6.4.

6.1. Jumbled graphs

Let $K\,{:\!=}\, {K_{n_1,n_2}}$ be the complete bipartite graph with partition classes $V_1$ and $V_2$ , where $|V_i| = n_i$ . Given a bipartite graph $F \subseteq K$ and two subsets $A \subseteq V_1$ , $B \subseteq V_2$ , denote by $e_F(A,B)$ the number of edges of F between A and B. Recall that $N = n_1n_2$ and $M = pN$ .

Given real numbers $\pi,\delta\in(0,1)$ , we say that a graph $F \subseteq K$ is $(\pi, \delta)$ -jumbled if for every $A \subseteq V_1$ and $B \subseteq V_2$

\begin{equation*} |e_F(A,B) - \pi|A||B|| \le \delta \sqrt{N|A||B|}.\end{equation*}

The following result of Thomason [14, Theorem 2], which quantifies a variant of jumbledness in terms of the degrees and co-degrees of a graph, will turn out to be crucial for us.

Theorem 15 ([14]). Let $F \subseteq K$ be a bipartite graph and $\rho \in (0,1)$ and $\mu \ge 0$ be given. If

(93) \begin{equation} \min_{v \in V_1} \deg_F(v) \ge \rho n_2\quad\text{and}\quad\max_{u,v\in V_1: u \neq v} \operatorname{cod}_F(u,v) \le \rho^2n_2 + \mu, \end{equation}

then, for all $A \subseteq V_1$ and $B \subseteq V_2$ ,

\begin{equation*} |e_F(A,B) - \rho|A||B|| \le \sqrt{(\rho n_2 + \mu |A|)|A||B|} + |B|\mathbb{I}_{\left\{ |A|\rho < 1 \right\}}. \end{equation*}

Remark 16. The proof in [14] is given only in the case $n_1 = n_2 = n$ . However, it carries over to this more general setting, as in [14] n always refers to $|V_2|$ .

Recall that $K\setminus \mathbb{R}(t)$ has precisely $N - t = N - (1-\tau)M = (\tau p + q)N$ edges. The following technical result states that, under the conditions of Lemma 6, with high probability, $K\setminus \mathbb{R}(t)$ is jumbled for parameters which are tailored for Lemma 20.

Lemma 17. Let $\alpha = \min \{ \tau p, q \}$ and $\pi \,{:\!=}\, \tau p + q$ . For every constant $C > 0$ and $p > 0.49$ , if assumptions (27) and (28) hold, then, with probability $1 - O(N^{-C })$ , for all $t < t_0$

\begin{equation*} K \setminus \mathbb{R}(t)\quad \text{is} \quad(\pi, \alpha/16)\text{-jumbled}\end{equation*}

and for any $e \in K \setminus \mathbb{R}(t)$

\begin{equation*} K \setminus (\mathbb{R}(t) \cup \{ e \}) \quad \text{is} \quad(\pi, \alpha/16)\text{-jumbled}.\end{equation*}

Proof. W.l.o.g., we assume that $n_1\ge n_2$ . Let $\lambda \,{:\!=}\, 3(C + 4)\log N$ , and

(94) \begin{equation} \delta \,{:\!=}\, 20 \left( \lambda/n_2 \right)^{1/4} + 10q^{3/2}\mathbb{I}. \end{equation}

The plan is to show that

(95) \begin{equation} \delta \le \frac{\alpha}{16}\end{equation}

and that with the correct probability, $K \setminus \mathbb{R}(t)$ and $K \setminus \mathbb{R}(t) \cup \{e\}$ are in fact $(\pi, \delta)$ -jumbled.

We start with the proof of (95). Notice that, by (28),

(96) \begin{equation}\sqrt{\lambda/{n_2}}\le\left(\lambda/{n_2}\right)^{1/4}\le\frac q{680}\le\frac\pi{680}\le\frac1{680}.\end{equation}

Since $\hat p > \tfrac{49}{51}q$ , condition (27) implies that $1/320 \ge q^{1/2} \mathbb{I}$ . After multiplying both sides by 10q, we get

\begin{equation*} \frac q{32} \ge 10\cdot q^{3/2} \mathbb{I},\end{equation*}

which together with the second inequality in (96) implies

\begin{equation*} \frac{q}{16} = \frac{q}{32} + \frac{q}{32} \ge 20 \left( \lambda/{n_2} \right)^{1/4} + 10\cdot q^{3/2} \mathbb{I} = \delta.\end{equation*}

On the other hand, using $p > 0.49$ , $\tau \ge \tau_0$ and the definition (25) of $\tau_0$ , we infer that

\begin{align*} \frac{\tau p}{16} &\ge \frac{0.49 \cdot \tau_0}{16} = \frac{0.49 \cdot 700\cdot (3(C + 4))^{1/4}}{16} \left( \left(\frac{\log N}{n_2}\right)^{1/4} + q^{3/2} \mathbb{I}\right) \\ &\ge 20 \left( \lambda/{n_2} \right)^{1/4} + 10q^{3/2} \mathbb{I} = \delta.\end{align*}

Hence $\alpha/16 = \min \left\{ \tau p/16, q/16 \right\} \ge \delta$ , implying (95).

We now prove the jumbledness, first focusing on $K \setminus \mathbb{R}(t)$ and then indicating the tiny change in calculation for $K \setminus (\mathbb{R}(t) \cup \{e\})$ . Fixing an arbitrary $t < t_0$ , we will first show that, with probability $1 - O(N^{-C -1})$ , conditions (93) of Theorem 15 are satisfied by $F = K \setminus \mathbb{R}(t)$ and $F = K \setminus (\mathbb{R}(t) \cup \{e\})$ for suitably chosen $\rho$ and $\mu$ . Then we will apply Theorem 15 to deduce that $K \setminus \mathbb{R}(t)$ is $(\pi, \delta)$ -jumbled. Lemma 17 will follow by applying the union bound over all (at most $t_0\le M\le N$ ) choices of t.

By (147), $\lambda \le \tau_0 p \hat n$ with room to spare. Note that

(97) \begin{equation} (1-\tau)p = 1 - \pi,\end{equation}

which implies that $|\deg_{K \setminus \mathbb{R}(t)}(v) - \pi n_2|=|\deg_{\mathbb{R}(t)}(v) - (1-\tau)d_1|$ . Hence, by Lemma 13, with probability $1 - O(N^2e^{-\lambda})$ ,

(98) \begin{equation} \max_{v \in V_1} |\deg_{K \setminus \mathbb{R}(t)}(v) - \pi n_2| \le 3\sqrt{\tfrac{4}{9}\lambda\tau d_1} \le 2\sqrt{\lambda n_2} \le 3\sqrt{\lambda n_2}.\end{equation}

Moreover, recalling that $n_1\ge n_2$ and, again using (97), Lemma 14 implies that, with probability $1 - O(N^3e^{-\lambda/3})$ ,

(99) \begin{equation} \max_{u, v \in V_1, u \ne v} \operatorname{cod}_{\mathbb{R}(t)}(u,v) \le (1 - \pi)^2 n_2 + 20q^3n_2\mathbb{I} + 15 \sqrt{\lambda n_2}. \end{equation}

Since $\lambda = 3(C + 4)\log N$ , the intersection of events (98) and (99) holds with probability $1 - O(N^3e^{-\lambda/3}) = 1 - O(N^{-C - 1})$ .

Note that for distinct $u, v \in V_1$ , by (56) and (98)

(100) \begin{equation} \operatorname{cod}_{K \setminus \mathbb{R}(t)}(u, v) \le \operatorname{cod}_{\mathbb{R}(t)}(u,v) + (2 \pi - 1)n_2 + 6\sqrt{\lambda n_2}, \end{equation}

which, by (99), implies that

(101) \begin{equation} \max_{u, v \in V_1, u \ne v} \operatorname{cod}_{K \setminus \mathbb{R}(t)}(u,v) \le \pi^2 n_2 + 20q^3n_2\mathbb{I} + (15 + 6) \sqrt{\lambda n_2}. \end{equation}

Set

\begin{equation*} \rho \,{:\!=}\, \pi - 3\sqrt{\lambda/n_2} \quad \text{and} \quad \mu \,{:\!=}\, 20q^3n_2\mathbb{I} + (15 + 12) n_2\sqrt{\lambda/ n_2},\end{equation*}

and note that by the inequality $\pi\le1$ and by (96), we have $0 < \rho < 1$ . Furthermore, $\rho^2\ge\pi^2-6\sqrt{\lambda/ n_2}$ . Hence, (98) and (101) imply the assumptions (93) for $F = K \setminus \mathbb{R}(t)$ with the above $\rho$ and $\mu$ . Consequently, by Theorem 15 (using $a \le n_1$ and $\rho\le\pi$ ),

\begin{equation*} |e_{K \setminus \mathbb{R}(t)}(A,B) - \rho ab| \le \sqrt{ \left( \pi n_2 + 20q^3n_1n_2\mathbb{I} + (15 + 12)n_1n_2\sqrt{\lambda /n_2}\right)ab} + b. \end{equation*}

Further, since $N=n_1n_2$ , $n_1\ge n_2 \ge b$ and $a,\lambda \ge 1$ , we have, with a big margin,

\begin{equation*} \pi \le1 \le n_1 \sqrt{\lambda/n_2}\quad\text{and}\quad b \le \sqrt{b n_2} \le \sqrt{Nab}(\lambda/n_2)^{1/4}.\end{equation*}

It follows, applying the inequality $\sqrt{x + y} \le \sqrt x+ \sqrt y$ as well as (94), that

(102) \begin{equation} |e_{K \setminus \mathbb{R}(t)}(A,B) - \rho ab| \le \left( (\sqrt{15 + 13} + 1)(\lambda/n_2)^{1/4} + \sqrt{20} q^{3/2}\mathbb{I}\right) \sqrt{Nab} \le \frac{\delta}{2}\sqrt{Nab}. \end{equation}

Moreover, note that using $ab \le n_1n_2 = N$ and the first inequality in (96),

(103) \begin{equation}(\pi - \rho)ab = 3\sqrt{\lambda/n_2} \cdot ab \le 3\left(\lambda/n_2\right)^{1/4} \cdot \sqrt{Nab} \le \frac{\delta}{2}\sqrt{Nab}.\end{equation}

Hence, (102) and (103) imply

\begin{equation*} |e_{K \setminus \mathbb{R}(t)}(A,B) - \pi ab| \le |e_{K \setminus \mathbb{R}(t)}(A,B) - \rho ab| + (\pi - \rho)ab \le \delta \sqrt{Nab},\end{equation*}

meaning that $K \setminus \mathbb{R}(t)$ is $(\pi, \delta)$ -jumbled.

If above we replace $K \setminus \mathbb{R}(t)$ by $K \setminus (\mathbb{R}(t) \cup \left\{ e \right\})$ , the upper bound in (98) and the bound in (101) still hold trivially, while the lower bound $\pi n_2 - 3 \sqrt{\lambda n_2}$ in (98) remains correct, since $\deg_{K \setminus (\mathbb{R}(t) \cup \{e\})}(v) \ge \deg_{K \setminus \mathbb{R}(t)}(v) - 1$ and we have plenty of room in (98). Hence Theorem 15 applies with the same $\rho$ and $\mu$ , implying that $K \setminus (\mathbb{R}(t) \cup \{e\})$ is also $(\pi, \delta)$ -jumbled.

6.2. A technical inequality for blue-red graphs

We find it convenient to introduce relative counterparts of basic graph quantities. Below $i\in\{1,2\}$ . As before, let $K\,{:\!=}\, {K_{n_1,n_2}}$ be the complete bipartite graph with partition classes $V_1$ and $V_2$ , where $|V_i| = n_i$ . The relative size of a subset of vertices $S\subseteq V_i$ is

\begin{equation*} s({S}) \,{:\!=}\, \frac{|S|}{n_i}.\end{equation*}

Further, for $X\subseteq V_1$ and $Y\subseteq V_{2}$ and a subgraph $F \subseteq K$ , we define the relative edge count

\begin{equation*} \varepsilon_F(X,Y) = \varepsilon_F(Y,X) = \frac{e_F(X,Y)}{n_1n_2}.\end{equation*}

Moreover, for $v\in V_i$ and $Y\subseteq V_{3-i}$ , we define the relative degree

\begin{equation*} d_F(v,Y) = \frac{e_F(\left\{ v \right\}, Y)}{n_{3-i}}.\end{equation*}

If $Y = V_{3-i}$ , we shorten $d_F(v,Y)$ to $d_F(v)$ .

In this notation, a graph F is $(\pi,\delta)$ -jumbled if for every $X \subseteq V_1$ , $Y \subseteq V_2$

(104) \begin{equation} |\varepsilon_F(X,Y) - \pi s(X)s(Y)| \le \delta \sqrt{s(X)s(Y)}.\end{equation}

Now, let the edges of a graph F be 2-coloured blue and red, and let B and R be the subgraphs of F induced by the blue and red edges, respectively. We then call $F = B \cup R$ a blue-red graph. Note that

\begin{equation*} \varepsilon_{B\cup R}(X,Y) = \varepsilon_F(X,Y)= \varepsilon_{R}(X,Y) + \varepsilon_{B}(X,Y).\end{equation*}

We say that a blue-red graph F is $(r,b,\delta)$ -regular, if

(105) \begin{equation} b - \delta \le {d_{B}} (v) \le b + \delta, \quad\text{and}\quad r - \delta \le d_{R} (v) \le r + \delta \quad \text{ for every } v\in V_1\cup V_2.\end{equation}

If F is at the same time $(r+b, \delta)$ -jumbled and $(r,b,\delta)$ -regular, as in the technical lemma below, we will sometimes loosely refer to such a graph as regularly jumbled.

Finally, for every $S\subseteq V_i$ , $i = 1,2$ , set $\overline{S}\,{:\!=}\,V_i\setminus S$ .

Lemma 18. Let $r, b \in (0,1)$ be real numbers and define $\alpha \,{:\!=}\, \min \{r,b\}$ . Let $\nu<\alpha/16$ and $\delta \leq \alpha/16$ be positive reals and let $F \subseteq K$ be an $(r, b, \delta)$ -regular, $(b + r, \delta)$ -jumbled bipartite blue-red graph. If sets $X \subseteq V_i$ , $Y \subseteq V_{3-i}$ satisfy

(106) \begin{equation} \varepsilon_{B}(X,\overline{Y}) + \varepsilon_{R}(\overline{X},Y) \le \nu,\end{equation}

and

(107) \begin{equation} \min \{bs(X),rs(Y)\} \leq \frac{rb}{r + b},\end{equation}

then

(108) \begin{equation} \max \{bs(X), rs(Y)\} \le \frac{\nu}{1 - 7\delta/\alpha}.\end{equation}

Proof. Since

\begin{equation*} \varepsilon_{R}(X,Y) = \frac{e_R(X,Y)}{n_1n_2} = \frac{\sum_{v\in X}e_R(\left\{ v \right\}, Y)}{n_1n_2} = \frac1{n_1}\sum_{v\in X}d_R(v,Y),\end{equation*}

from (105) we have

(109) \begin{equation}\varepsilon_{R}(X,Y) + \varepsilon_{R}(X,\overline{Y}) \leq (r + \delta)s(X)\end{equation}

and, similarly,

(110) \begin{equation} \varepsilon_{B}(X,Y) + \varepsilon_{B}(\overline{X},Y) \leq (b + \delta)s(Y).\end{equation}

By summing (106), (109) and (110), we infer that

(111) \begin{equation} \varepsilon_F(X,Y) + \varepsilon_F(X,\overline{Y}) + \varepsilon_F(\overline{X},Y)\leq \nu + (r + \delta)s(X) + (b + \delta)s(Y). \end{equation}

On the other hand, by (105),

\begin{equation*} \varepsilon_F(X,\overline{Y}) = \varepsilon_F(X, V_2) - \varepsilon_F(X,Y) \ge (b + r - 2\delta)s(X) - \varepsilon_F(X,Y)\end{equation*}

and

\begin{equation*} \varepsilon_F(\overline{X},Y) = \varepsilon_F(V_1, Y) - \varepsilon_F(X,Y) \ge (b + r - 2\delta)s(Y) - \varepsilon_F(X,Y).\end{equation*}

Hence, by (104) with $\pi = b + r$ ,

\begin{align*} \varepsilon_F(X,Y) + \varepsilon_F(X,\overline{Y}) + \varepsilon_F(\overline{X},Y) &\geq (b + r - 2\delta) (s(X) + s(Y)) - \varepsilon_F(X, Y) \\[3pt] & \ge (b + r - 2\delta) (s(X) + s(Y)) - (b + r)s(X)s(Y) - \delta \sqrt{s(X)s(Y)}. \end{align*}

Comparing with (111), we obtain the inequality

\begin{equation*} b s(X) + r s(Y) - \frac{(b + r)bs(X) rs(Y)}{br} \le \nu + \delta\left(3s(X) + 3s(Y) + \sqrt{s(X)s(Y)}\right).\end{equation*}

Denoting $x \,{:\!=}\, bs(X)$ and $y \,{:\!=}\, rs(Y)$ and $h \,{:\!=}\, rb/(b + r)$ , this becomes

(112) \begin{equation} x + y - \frac{x y}h \le \nu + \delta \left( \frac{3x}b + \frac{3y}r +\sqrt{\frac{xy}{br}} \right) =: \psi.\end{equation}

Trivially, by the definitions of s(X) and $\alpha$ , and by our assumptions on $\nu$ and $\delta$ , we have

\begin{equation*} \psi \le \nu + 7\delta < \frac12\alpha\le \frac{1}{1/b + 1/r} = h. \end{equation*}

Since our goal—inequality (108)—now reads as

\begin{equation*} \max \left\{ x, y \right\} \le \frac{\nu}{1 - 7\delta /\alpha},\end{equation*}

to complete the proof it is enough to assume, without loss of generality, that $\max \{x, y\} = y$ and show, equivalently, that

(113) \begin{equation} y \le \nu + 7\delta y/\alpha.\end{equation}

By (107) we have $x = \min\{x,y\} \le h$ . Note that $x = h$ cannot hold, since then the LHS of (112) would equal h, contradicting the fact that $\psi < h$ . Hence, we have $x < h$ , which, together with $\psi < h$ and (112), implies that

\begin{align*} y &\le \frac{\psi - x}{1 - x/h} = h - \frac{h - \psi}{1 - x/h} \le h - (h - \psi) = \psi = \nu + \delta \left( \frac{3x}b + \frac{3y}r + \sqrt{\frac{xy}{br}} \right) \le \nu + \frac{7\delta y}{\alpha},\end{align*}

and (113) is proved.
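The last chain of (in)equalities in the proof rests on an elementary algebraic identity, which can be checked numerically. The sketch below (illustrative only, with randomly sampled inputs) verifies that for $0 \le x < h$ and $0 \le \psi < h$ one has $(\psi - x)/(1 - x/h) = h - (h - \psi)/(1 - x/h) \le \psi$ .

```python
# Numerical check (illustrative) of the final chain in the proof of
# Lemma 18: for 0 <= x < h and 0 <= psi < h,
#   (psi - x)/(1 - x/h) = h - (h - psi)/(1 - x/h) <= psi.
import random

random.seed(0)
for _ in range(10_000):
    h = random.uniform(0.1, 1.0)
    psi = random.uniform(0.0, 0.999 * h)
    x = random.uniform(0.0, 0.999 * h)
    lhs = (psi - x) / (1 - x / h)
    rhs = h - (h - psi) / (1 - x / h)
    assert abs(lhs - rhs) < 1e-8          # the algebraic identity
    assert lhs <= psi + 1e-8              # since (h - psi)/(1 - x/h) >= h - psi
print("identity and bound verified")
```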

6.3. Alternating walks and cycles

A cycle in a blue-red bipartite graph is said to be alternating if it is a union of a red matching and a blue matching, that is, every other edge is blue and the remaining edges are red. The ultimate goal of this subsection is to show that for every edge in a regularly jumbled blue-red bipartite graph, there is an alternating cycle of bounded length containing that edge. We are going to achieve it by utilising walks.

Given $x,y\in V_1\cup V_2$ , an alternating walk from x to y in a blue-red graph F is a sequence of (not necessarily distinct) vertices $(v_1 = x,\dots,v_s = y)$ such that for each $i = 1,\dots,s-1$ , $v_iv_{i+1}\in F$ , every other edge is blue and the remaining edges are red. There is no restriction on the colour of the initial edge $v_1v_2$ . The length of a walk is defined as the number of edges, or $s-1$ .

If the vertices $v_1,\dots,v_s$ are all distinct, an alternating walk is called an alternating path. Note also that if F is bipartite, $x\in V_1,\;y\in V_2$ , and the edge xy is, say, blue, then every alternating path from x to y which begins (and thus ends) with a red edge together with xy forms an alternating cycle containing xy.

The first result of this section asserts that regularly jumbled blue-red bipartite graphs have a short alternating walk between any pair of vertices.

Lemma 19. Let $r, b \in (0,1)$ and set $\alpha \,{:\!=}\, \min \{r,b\}$ . Let $\delta \in (0,\alpha/16]$ and let $F \subseteq K$ be an $(r, b, \delta)$ -regular, $(b + r, \delta)$ -jumbled blue-red graph. Let $L = 4\lceil 16/rb \rceil + 1$ . For any $x\in V_i$ and $y\in V_{3-i}$ there exist at least two alternating walks from x to y of length at most L, one starting with a blue edge and another starting with a red edge.

Proof. For $w \in V_1 \cup V_2$ and an integer $k \ge 1$ , define $R^w_k$ and $B^w_k$ as the sets of vertices $v\in V(F)$ such that there is an alternating walk from v to w of length $\ell \le k$ , $\ell\equiv k \pmod{2}$ , starting with, respectively, a red edge and a blue edge. (Note that these definitions concern walks ending with w.)

Clearly, for every $k\ge3$ , $B^w_{k-2}\subseteq B^w_k$ and $R^w_{k-2}\subseteq R^w_k$ . Observe also that for any $k \geq 2$ , by definition the sets $R^w_{k-1}$ and $B^w_k$ are contained in opposite sides of the bipartition $(V_1,V_2)$ and, moreover,

(114) \begin{equation} \varepsilon_{B}\left(\overline{B^w_k}, R^w_{k-1}\right) = 0.\end{equation}

By symmetry, $B_{k-1}^w$ and $R_k^w$ are contained in opposite sides of $(V_1,V_2)$ and

(115) \begin{equation} \varepsilon_{R}\left(\overline{R^w_k}, B^w_{k-1}\right) = 0.\end{equation}

Set $\nu = rb/16$ and note that, since $r,b < 1$ , we have $\nu < \alpha/16$ . There exists an integer $t\leq T \,{:\!=}\, \lceil 1/\nu \rceil = \lceil 16/rb \rceil$ such that

(116) \begin{equation}s\left({R^w_{2t + 1}\setminus R^w_{2t - 1}}\right)\leq \nu,\end{equation}

since otherwise $1 \ge s({R^w_{2T + 1}}) \ge \sum_{i = 1}^{T} s({R^w_{2i+1} \setminus R^w_{2i-1}}) > \nu T \ge 1$ , which is a contradiction.

By (114) and (116),

(117) \begin{equation} \varepsilon_{B}\left(R^w_{2t+1}, \overline{B^w_{2t}}\right) = \varepsilon_{B}\left(R^w_{2t - 1},\overline{B^w_{2t}}\right)+ \varepsilon_{B}\left(R^w_{2t + 1}\setminus R^w_{2t - 1}, \overline{B^w_{2t}}\right)\leq s\left({R^w_{2t + 1}\setminus R^w_{2t - 1}}\right)\leq \nu.\end{equation}

Combining (115) for $k = 2t + 1$ and (117), we get

(118) \begin{equation} \varepsilon_{B}\left(R^w_{2t + 1}, \overline{B^w_{2t}}\right)+ \varepsilon_{R}\left(\overline{R^w_{2t + 1}}, B^w_{2t}\right) \leq \nu.\end{equation}

Set $X = R^w_{2t + 1}, Y = B^w_{2t}$ , for convenience. We claim that

(119) \begin{equation} s(X) > r/(r + b)\quad\text{and}\quad s(Y) > b/(r + b). \end{equation}

Assuming the contrary, we have

\begin{equation*} \min \left\{ bs(X), rs(Y) \right\} \le br/(r + b), \end{equation*}

which, together with (118), constitutes the assumptions of Lemma 18. Applying it, we get

(120) \begin{equation} \max \left\{ bs(X), rs(Y) \right\} \le \frac{\nu}{1 - 7\delta/\alpha} = \frac{rb}{16(1 - 7\delta/\alpha)} \le \frac{rb}{16(1 - 7/16)} = \frac{rb}{9}, \end{equation}

where the second inequality follows by our assumption $\delta \le \alpha/16$ .

On the other hand, since X contains the set $R^w_1 = \Gamma_R(w)$ of red neighbours of w, by $(r, b, \delta)$ -regularity of F we have $s(X) \ge r - \delta \ge \frac{15}{16}r$ , a contradiction with (120). Hence, we have shown (119). Since $X = R^w_{2t + 1}\subseteq R^w_{2T + 1}$ and $Y = B^w_{2t}\subseteq B^w_{2T}$ , we also have

(121) \begin{equation}s\left({R^w_{2T + 1}}\right) > \frac{r}{r + b}, \quad s\left({B^w_{2T}}\right) > \frac{b}{r + b}.\end{equation}

Since we chose w arbitrarily, (121) holds for $w \in \left\{ x,y \right\}$ , implying

(122) \begin{equation} s\left({R^y_{2T + 1}}\right) > \frac{r}{r + b}, \quad s\left({B^x_{2T}}\right) > \frac{b}{r + b}.\end{equation}

Let us assume, without loss of generality, that $x \in V_1$ and $y \in V_2$ . Then, $B^x_{2T}, R^y_{2T + 1}\subseteq V_1$ and, in particular, for every vertex $v\in B^x_{2T}$ there is a walk (of even length at most 2T) from v to x starting with a blue and thus ending with a red edge. By (122), $s({B^x_{2T}}) + s({R^y_{2T + 1}}) > 1$ , so there exists $v \in B^x_{2T} \cap R^y_{2T + 1}$ . This means that there is an alternating walk of length at most $2T + (2T + 1)= 4\lceil 16/rb \rceil + 1 = L$ from x to y (through v) that starts with a red edge.

By an analogous reasoning with the roles of the colours red and blue swapped, $R^x_{2T}\cap B^y_{2T + 1}\neq \emptyset$ , and thus there is an alternating walk of length at most $2T + (2T + 1) = L $ from x to y which starts with a blue edge.
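The sets $B^w_k$ and $R^w_k$ used above are precisely what a breadth-first search over states (vertex, colour of the last traversed edge) explores. The following sketch is illustrative only: the toy 2-colouring of $K_{2,2}$ and all identifiers are ours, not from the paper, but the search implements the same alternation constraint.

```python
from collections import deque

def shortest_alternating_walk(adj, x, y, first):
    """Shortest alternating walk from x to y whose first edge is coloured
    `first`.  adj[v] maps each neighbour of v to 'B' or 'R'; the graph is
    assumed bipartite.  Returns the walk as a vertex list, or None."""
    # BFS over states (vertex, colour of the edge used to reach it); a
    # vertex may be revisited with the other colour, since walks allow repeats.
    parent = {(x, None): None}
    queue = deque([(x, None)])
    while queue:
        v, last = queue.popleft()
        for w, c in adj[v].items():
            if last is None and c != first:
                continue            # first edge must have colour `first`
            if last is not None and c == last:
                continue            # consecutive edges must alternate
            if (w, c) not in parent:
                parent[(w, c)] = (v, last)
                if w == y:          # reconstruct and return the walk
                    walk, s = [], (w, c)
                    while s is not None:
                        walk.append(s[0])
                        s = parent[s]
                    return walk[::-1]
                queue.append((w, c))
    return None

# A 2-coloured K_{2,2} with parts {a0, a1} and {b0, b1}.
adj = {'a0': {'b0': 'B', 'b1': 'R'}, 'a1': {'b0': 'R', 'b1': 'B'},
       'b0': {'a0': 'B', 'a1': 'R'}, 'b1': {'a0': 'R', 'a1': 'B'}}
print(shortest_alternating_walk(adj, 'a0', 'b0', 'R'))  # ['a0', 'b1', 'a1', 'b0']
print(shortest_alternating_walk(adj, 'a0', 'b0', 'B'))  # ['a0', 'b0']
```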

The following result is an easy consequence of Lemma 19.

Lemma 20. Let $r, b \in (0,1)$ , $\alpha \,{:\!=}\, \min \{r,b\}$ , and $\delta\in (0,\alpha/16]$ . If $F \subseteq K$ is an $(r, b, \delta)$ -regular, $(r + b, \delta)$ -jumbled blue-red bipartite graph, then every edge of F belongs to an alternating cycle of length at most 2D, where $D = 2\lceil 16/rb \rceil + 1$ .

Proof. Let xy be an edge with $x\in V_1$ , $y\in V_2$ . Without loss of generality, we assume that xy is blue. Then, by Lemma 19, there exists at least one alternating walk from x to y of length at most $4\lceil 16/rb \rceil + 1 = 2D - 1$ starting with a red edge. Consider a shortest such walk W. We claim W is a path. Indeed, assume that W is not a path and let w be the first repeated vertex on W. If we remove the whole segment of the walk between the first two occurrences of w, what remains is still an alternating walk from x to y starting with a red edge (since this segment has an even number of edges), contradicting the minimality of W. The path W is not just a single edge xy (since xy is blue) and therefore W and xy form an alternating cycle of length at most 2D.

6.4. Proof of Lemma 10

Let $\alpha = \min \left\{ \tau p, q \right\}$ . Applying Lemma 13 with $\lambda = (C + 1)\log N$ (note that the condition $\lambda \le \tau_0p\hat n$ follows generously from (147)) we have that, with probability $1 - O(N^{-C})$ , for every $t < t_0$

(123) \begin{equation} \forall v_i \in V_i \quad \tau d_i (1 - \delta) \le d_i - \deg_{\mathbb{R}(t)}(v_i) \le \tau d_i (1 + \delta), \qquad i \in \left\{ 1,2 \right\},\end{equation}

where

\begin{equation*} \delta = 3\sqrt{(C+1) \log N/(\tau p\hat n )}.\end{equation*}

Whenever $p > 0.49$ , by Lemma 17, with probability $1 - O(N^{-C})$ for every $t < t_0$ we have that

(124) \begin{equation}\begin{split} K \setminus \mathbb{R}(t) \quad &\text{is} \quad(\tau p + q, \alpha/16)\text{-jumbled}, \\[3pt] \forall e \in K \setminus \mathbb{R}(t) \quad K \setminus (\mathbb{R}(t) \cup \{e\}) \quad &\text{is} \quad(\tau p + q, \alpha/16)\text{-jumbled}.\end{split}\end{equation}

Fix $\mathbb{R}(t) = G$ satisfying (123) and (124). It remains to prove that for every pair of distinct edges $e, f \in K \setminus G$

(125) \begin{equation} \mathbb{P}\left({e \in \mathbb{R}, f \notin \mathbb{R}}\,|\,{\mathbb{R}(t) = G}\right) \ge e^{ - \lambda(t) } = N^{-2D},\end{equation}

where $D = 3$ if $p \le 0.49$ and $D = 32/\tau p q + 3$ if $p > 0.49$ . For this, fix distinct edges $e,f\in K\setminus G$ . Aiming to apply Proposition 4, we need to verify the assumption on the existence of alternating cycles containing a given edge.

Given a graph $G'$ and $H\in{\mathcal{R}}_{G'}$ , recall our convention to call the edges of $H\setminus G'$ blue and the edges of $K\setminus H$ red.

Claim 21. Let $G'\in\{ G,\;G\cup \{e\}\}$ . For every $H \in {\mathcal{R}}_{G'}$ , every edge $g \in K \setminus G'$ is contained in an alternating cycle of length at most 2D.

Using Claim 21, we complete the proof of (125) as follows. Since G is admissible, we have ${\mathcal{R}}_G \ne \emptyset$ . Therefore Proposition 4 implies

(126) \begin{equation} {\mathcal{R}}_{G, e} \neq \emptyset, \quad \text{and} \quad \frac{|{\mathcal{R}}_{G,\neg e}|}{|{\mathcal{R}}_{G,e}|} \le N^{D} - 1.\end{equation}

Since (126) implies ${\mathcal{R}}_{G \cup \left\{ e \right\}} = {\mathcal{R}}_{G, e} \ne \emptyset$ , Proposition 4, applied to graph $G \cup \left\{ e \right\}$ and edge f, implies

(127) \begin{equation} {\mathcal{R}}_{G \cup \left\{ e \right\}, \neg f} \neq \emptyset, \quad\text{and} \quad \frac{|{\mathcal{R}}_{G \cup\{e\},f}|}{|{\mathcal{R}}_{G \cup \{e\},\neg f}|} \le N^{D} - 1.\end{equation}

Using (126) and (127), we infer

\begin{align*} \notag &\frac{1}{\mathbb{P}\left({e \in \mathbb{R}, f \notin \mathbb{R}}\,|\,{\mathbb{R}(t) = G}\right)} = \frac{|{\mathcal{R}}_G|}{|{\mathcal{R}}_{G,e,\neg f}|} = \frac{|{\mathcal{R}}_G|}{|{\mathcal{R}}_{G,e}|} \cdot \frac{|{\mathcal{R}}_{G,e}|}{|{\mathcal{R}}_{G,e,\neg f}|} \\[3pt] &= \left( 1 + \frac{|{\mathcal{R}}_{G,\neg e}|}{|{\mathcal{R}}_{G,e}|} \right) \cdot \left( 1 + \frac{|{\mathcal{R}}_{G,e,f}|}{|{\mathcal{R}}_{G,e,\neg f}|} \right) \\[3pt] &= \left( 1 + \frac{|{\mathcal{R}}_{G,\neg e}|}{|{\mathcal{R}}_{G,e}|} \right) \cdot \left( 1 + \frac{|{\mathcal{R}}_{G \cup\{e\},f}|}{|{\mathcal{R}}_{G \cup \{e\},\neg f}|} \right) \le N^{2D},\end{align*}

which implies (125).
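The displayed manipulation only uses the partitions ${\mathcal{R}}_G = {\mathcal{R}}_{G,e} \cup {\mathcal{R}}_{G,\neg e}$ and ${\mathcal{R}}_{G,e} = {\mathcal{R}}_{G,e,f} \cup {\mathcal{R}}_{G,e,\neg f}$ . A quick exact-arithmetic check of the underlying counting identity (illustrative only, over a small grid of hypothetical cardinalities):

```python
# Check (illustrative) of the counting identity behind the last display:
# with a = |R_{G,e}|, a' = |R_{G,not e}|, c = |R_{G,e,f}|, c' = |R_{G,e,not f}|,
# and |R_G| = a + a', a = c + c', one has
#   (a + a') / c' = (1 + a'/a) * (1 + c/c').
from fractions import Fraction
from itertools import product

for a_, c, c_ in product(range(0, 5), range(0, 5), range(1, 5)):
    a = c + c_                              # so that |R_{G,e}| is positive
    lhs = Fraction(a + a_, c_)
    rhs = (1 + Fraction(a_, a)) * (1 + Fraction(c, c_))
    assert lhs == rhs
print("identity holds on the grid")
```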

It remains to prove Claim 21. As a preparation, we derive bounds on the vertex degrees in $G\cup\{e\}$ . Note that the inequality (147) implies

(128) \begin{equation} \delta \le 0.001,\end{equation}

and

(129) \begin{equation} \delta \tau d_i \ge 3 \sqrt{C\tau p\hat n \log N } \ge 3C \log N \ge 1.\end{equation}

The latter, together with (123), implies that, for an arbitrary $e \in K \setminus G$ ,

(130) \begin{equation} \forall v_i \in V_i \quad \tau d_i (1 - 2\delta) \le d_i - \deg_{G\cup \{e\}}(v_i) \le \tau d_i (1 + \delta), \qquad i \in \left\{ 1,2 \right\}.\end{equation}

We consider two cases with respect to p.

Case $\mathbf{p \le 0.49}$ . We first claim that for any two vertices $x_i \in V_i, i = 1,2$ , there is an alternating path $x_1y_2y_1x_2$ such that $x_1y_2$ is red (and thus $y_1y_2$ is blue and $y_1x_2$ is red). The number of ways to choose a blue edge $y_1y_2$ is

\begin{equation*} M - |G'| \ge M - t - 1 = \tau pN - 1. \end{equation*}

We bound the bad choices of $y_1y_2$ which do not give a desired alternating path. These correspond to the walks (we must permit $y_1 = x_1$ and $y_2 = x_2$ ) $x_1y_2y_1$ and $x_2y_1y_2$ whose first edge is non-red, i.e., it belongs to H, while the second one is blue, i.e., it belongs to $H\setminus G'$ . By the second inequalities in (123) and (130), there are at most $d_1 \cdot\tau(1 + \delta)d_2$ choices of such $x_1y_2y_1$ and at most $d_2 \cdot \tau(1 + \delta)d_1$ choices of such $x_2y_1y_2$ , so altogether there are at most

\begin{equation*} 2p^2\tau(1 + \delta)N \le 0.98(1 + \delta) \tau p N \end{equation*}

bad choices of $y_1y_2$ . Thus, noting that $1/\tau p N \le \delta\le 0.001$ (cf. (129) and (128)), the number of good choices of $y_1y_2$ is

\begin{equation*} \tau pN - 1 - 2p^2\tau(1 + \delta)N \ge \tau p N ( 1 - 2 p - 3\delta) > 0,\end{equation*}

implying that there exists a desired path $x_1y_2y_1x_2$ .

This immediately implies that if $g = x_1x_2$ is blue, then g is contained in an alternating 4-cycle. If $g = u_1u_2$ is red, then we choose blue neighbours $x_1 \in \Gamma_{H\setminus G'}(u_2)$ and $x_2 \in \Gamma_{H \setminus G'}(u_1)$ (which exist due to the lower bounds in (123) and (130) being positive). Since there exists an alternating path $x_1y_2y_1x_2$ starting with a red edge, we obtain an alternating 6-cycle containing g. This proves Claim 21 in the case $p \le 0.49$ .
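The lower bound on the number of good choices of $y_1y_2$ can be sanity-checked numerically. The sketch below (illustrative grid; the boundary case $\tau p N = 1/\delta$ is the tightest one) confirms that for $p \le 0.49$ and $1/(\tau p N) \le \delta \le 0.001$ the count is indeed positive.

```python
# Grid check (illustrative) of the count of good choices in the case
# p <= 0.49: if 1/(tau*p*N) <= delta <= 0.001, then
#   tau*p*N - 1 - 2*p^2*tau*(1+delta)*N >= tau*p*N*(1 - 2p - 3*delta) > 0.
for p100 in range(1, 50):            # p = 0.01, ..., 0.49
    p = p100 / 100
    for delta in (1e-6, 1e-4, 1e-3):
        tpN = 1 / delta              # smallest tau*p*N allowed by 1/(tau p N) <= delta
        good = tpN - 1 - 2 * p * (1 + delta) * tpN   # 2 p^2 tau (1+delta) N = 2p(1+delta)*tpN
        lower = tpN * (1 - 2 * p - 3 * delta)
        assert good >= lower - 1e-9
        assert lower > 0
print("count of good choices positive on the grid")
```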

Case $\mathbf{p > 0.49}$ . We aim to apply Lemma 20. We first verify that, for $G'\in\{G, G\cup\{e\}\}$ and every $H\in{\mathcal{R}}_{G'}$ , the blue-red graph $K\setminus G'$ is $(q, \tau p, \alpha/16)$ -regular. This assumption is trivial for the red graph $K\setminus H$ , which is q-biregular regardless of G'. In view of (123) and (130), the relative degrees $d_{H\setminus G'}(v)$ in the blue graph lie in the interval $[\tau p - 2\delta \tau p, \tau p + 2\delta \tau p]$ . Since (28) implies

\begin{equation*} \delta\tau p = 3\sqrt{\frac{(C+1)\tau p \log N }{\hat n}} \le 3\sqrt{\frac{(C+1)\log N }{ \hat n}} \le \sqrt3\left(\frac q{680}\right)^2 \le \frac q{32} \end{equation*}

and (128) implies $\delta \tau p \le \tau p / 32$ , we obtain $2 \delta \tau p \le \alpha/16$ . Hence, indeed, $K\setminus G'$ is $(q, \tau p, \alpha/16)$ -regular.

On the other hand, by (124), $K \setminus G'$ is also $(\tau p + q, \alpha/16)$ -jumbled. Hence, by Lemma 20 with $F = K\setminus G'$ , $r = q$ and $b = \tau p$ , the edge g belongs to an alternating cycle of length at most 2D with $D = 2\lceil 16/\tau pq \rceil + 1 \le 32/\tau pq + 3$ . Claim 21 is proved.

7. Extension to non-bipartite graphs

Given integers n and d, $0\le d\le n-1$ such that nd is even, define the random regular graph $\mathbb{R}(n,d)$ as a graph selected uniformly at random from the family ${\mathcal{R}}(n,d)$ of all d-regular graphs on an n-vertex set V. To make the comparison with the binomial model $\mathbb{G}(n,p)$ easier, similarly as in the bipartite case, we set $p=\tfrac d{n-1}$ and define $\mathbb{R}(n,p)\,{:\!=}\,\mathbb{R}(n,d)$ . In what follows, we often suppress the parameter d and instead just assume that $0\le p\le 1$ , $p(n-1)$ is an integer, and $p(n-1)n$ is even.

As described below, our proof of Theorem 2 can be adjusted to yield its non-bipartite version and, consequently, also a non-bipartite version of Corollary 3. Instead of formulating these two quite technical results, we limit ourselves to just stating their abridged version, analogous to Theorem 1. It confirms the Sandwich Conjecture of Kim and Vu [6] whenever $d \gg \left( n\log n \right)^{3/4}$ and $n - d \gg n^{3/4} (\log n)^{1/4} $ .

Theorem 1 $^{\prime}$ . If

(131) \begin{equation}p \gg \frac{\log n}{n}\quad\mbox{and}\quad 1-p \gg \left( \frac{\log n}{n}\right)^{1/4}, \end{equation}

then for some $m \sim p\binom n2$ , there is a joint distribution of random graphs $\mathbb{G}(n,m)$ and $\mathbb{R}(n,p)$ such that

\begin{equation*} \mathbb{G}(n,m) \subseteq \mathbb{R}(n,p) \qquad \text{a.a.s.} \end{equation*}

If

\begin{equation*} p\gg\left(\frac{\log^3 n}{n}\right)^{1/4}, \end{equation*}

then for some $m \sim p\binom n2$ , there is a joint distribution of random graphs $\mathbb{G}(n,m)$ and $\mathbb{R}(n,p)$ such that

\begin{equation*} \mathbb{R}(n,p) \subseteq \mathbb{G}(n,m) \qquad \text{a.a.s.} \end{equation*}

Moreover, in both inclusions, one can replace $\mathbb{G}(n,m)$ by the binomial random graph $\mathbb{G}(n,p')$ , for some $p' \sim p$ .

To obtain a proof of Theorem 1 $^{\prime}$ , one would modify the proof of Theorem 2 and its prerequisites, fixing, say, $C = 1$ . For the bulk of the proof (see Sections 3–6) the changes are straightforward and consist mainly of replacing $K = K_{n_1, n_2}$ by $K_n$ , both $n_1$ and $n_2$ by n, $N = n_1 n_2$ by $\binom{n}{2}$ , both $d_1$ and $d_2$ by $p(n-1)$ , as well as of setting $\mathbb{I} = 0$ . In particular, we redefine (cf. (25))

(132) \begin{equation} \tau_0 \,{:\!=}\, C_1\begin{cases} \dfrac{\log n}{p n}, \quad & p \le 0.49, \\ \\[-9pt] \left(\dfrac{\log n}{n}\right)^{1/4}, \quad & p > 0.49, \end{cases}\end{equation}
\begin{equation*} \gamma_t = C_2 \begin{cases} \sqrt{\dfrac{\log n}{\tau p n}}, \quad & p \le 0.49, \\ \\[-9pt] \sqrt{\dfrac{\log n}{\tau^2 q^2 n}}, \quad & p > 0.49, \end{cases}\end{equation*}

and

\begin{equation*} \gamma \,{:\!=}\, C_3 \begin{cases} \sqrt{\dfrac{\log n}{p n}}, \quad & p \le 0.49, \\ \\[-9pt] \left(\dfrac{\log n}{n}\right)^{1/4}, \quad & p > 0.49, \end{cases}\end{equation*}

for some appropriately chosen constants $C_1, C_2, C_3 > 0$ and replace assumptions (26)–(28) by conditions (131). Some other constants appearing in various definitions might need to be updated, too.

The proofs of non-bipartite versions of Theorem 2, Claim 7, Lemmas 8, 9, 13, and 14 follow the bipartite ones in a straightforward way.

The proof of Lemma 6 is modified also in a straightforward way except for one technical change. In the switching graph B we consider 6-circuits rather than 6-cycles (that is, we allow the vertices $x_1$ and $x_2$ in Figure 2 to coincide). With this definition the degrees in the switching graph B (cf. (39)–(40)) are now as follows. If edges $f = u_1u_2$ and $e = v_1v_2$ are disjoint, then

(133) \begin{equation} \deg_B(H) = \theta_{G,H}(u_1,v_1)\theta_{G,H}(u_2,v_2) + \theta_{G,H}(u_1,v_2)\theta_{G,H}(u_2,v_1), \quad H \in {\mathcal{R}}\end{equation}

and

(134) \begin{equation} \deg_B(H') = \theta_{G,H}(v_1,u_1)\theta_{G,H}(v_2,u_2) + \theta_{G,H}(v_1,u_2)\theta_{G,H}(v_2,u_1), \quad H' \in {\mathcal{R}}'.\end{equation}

If e and f share a vertex (without loss of generality, $u_1 = v_1$ ), then

\begin{equation*} \deg_B(H) = \theta_{G,H}(u_2,v_2), \quad H \in {\mathcal{R}}, \quad \text{and} \quad \deg_B(H') = \theta_{G,H'}(v_2,u_2), \quad H' \in {\mathcal{R}}'.\end{equation*}

We modify the definition (42) of a $\delta$ -typical graph by taking the maximum over all pairs $(u, v) \in f \times e$ of distinct vertices. Since we now have two terms in (133) and (134), the bounds in (49) and (50) pick up extra factors of 2, which cancel out, leading to a bound similar to (51), but with the constant 9 inflated.

There is also a little inconvenience related to the analog of formula (56). Namely, now

(135) \begin{align} \operatorname{cod}_{F}(u, v) &= |\Gamma_{F}(u) \setminus \left\{ v \right\}| + |\Gamma_{F}(v) \setminus \left\{ u \right\}| - |\Gamma_{F}(u) \cup \Gamma_{F}(v) \setminus \left\{ u, v \right\}| \nonumber\\[3pt]& = \deg_F(u)+\deg_F(v) - 2\mathbb{I}_{uv \in F} - (n - 2 - \operatorname{cod}_{K_n \setminus F}(u,v)),\end{align}

so the formula acquires an extra additive O(1) term, which turns out to be negligible.
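Identity (135) is elementary inclusion–exclusion and can be verified exhaustively on a small graph. The sketch below (illustrative; the random graph and all identifiers are ours) checks it for every pair of vertices.

```python
# Check (illustrative) of the complement identity (135): for a graph F on
# n vertices and distinct u, v,
#   cod_F(u,v) = deg_F(u) + deg_F(v) - 2*[uv in F] - (n - 2 - cod_{K_n \ F}(u,v)).
import random
from itertools import combinations

random.seed(1)
n = 12
V = range(n)
F = {frozenset(e) for e in combinations(V, 2) if random.random() < 0.5}
comp = {frozenset(e) for e in combinations(V, 2)} - F   # edges of K_n \ F

def nbrs(E, u):
    # neighbourhood of u in the edge set E
    return {w for w in V if frozenset({u, w}) in E}

for u, v in combinations(V, 2):
    cod = len(nbrs(F, u) & nbrs(F, v))            # co-degree in F
    cod_c = len(nbrs(comp, u) & nbrs(comp, v))    # co-degree in K_n \ F
    ind = 2 * (frozenset({u, v}) in F)
    assert cod == len(nbrs(F, u)) + len(nbrs(F, v)) - ind - (n - 2 - cod_c)
print("identity (135) verified on a random graph")
```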

Set $\mathbb{R}\,{:\!=}\,\mathbb{R}(n,p)$ . More substantial modifications needed to prove Theorem 1 $^{\prime}$ (which we discuss in detail below) are the following.

  • Lemma 11 (only the case $\mathbb{I}=0$ remains). Instead of using the asymptotic enumeration formula of Canfield, Greenhill and McKay (Theorem 5), we apply the one of Liebenau and Wormald [8].

  • Lemma 10 in the case $p > 0.49$ . Instead of directly showing the existence of short alternating cycles in $K_n\setminus \mathbb{R}(t)=(K_n\setminus\mathbb{R})\cup(\mathbb{R}\setminus \mathbb{R}(t))$ , we create a blue-red auxiliary bipartite graph from $K_n \setminus \mathbb{R}(t)$ and apply unchanged Lemma 20 to it.

7.1. Sketch of co-degree concentration for regular graphs

The non-bipartite version of Lemma 11 below is obtained by just setting $\mathbb{I}=0$ and replacing $\hat n$ by n (the resulting term $\hat p$ is swallowed by $\lambda$ ).

Lemma 11 $^{\prime}$ . Suppose that $\hat p n \to \infty$ and $\lambda = \lambda(n)\to \infty$ . Then, for any distinct $u, v \in [n]$ ,

(136) \begin{equation} \mathbb{P}\left( |\operatorname{cod}_{\mathbb{R}}(u,v) - p^2 n| \ge 20\hat p\sqrt{ \lambda n} + \lambda\right) = O\left( n e^{ - \lambda } \right). \end{equation}

As before, it is sufficient to assume that $p \le 1/2$ and thus replace $\hat p$ by p in (136). Indeed, by (135) and the identity $2p-1=p^2-q^2$ ,

\begin{equation*} \begin{split} \operatorname{cod}_{\mathbb{R}}(u, v)-p^2n = 2(p(n-1) - \mathbb{I}_{uv \in \mathbb{R}}) - (n - 2 - \operatorname{cod}_{K_n \setminus \mathbb{R}}(u,v))-p^2n \\ = \operatorname{cod}_{K_n \setminus \mathbb{R}}(u,v) - q^2n + O(1). \end{split} \end{equation*}

This allows, for $p>1/2$ , to replace p by q, as explained in the proof of Lemma 11 (the error O(1) is absorbed by the term $\lambda$ ).

The proof of (136) for $p \le 1/2$ is based on the following enumeration result by Liebenau and Wormald [8] (see Cor. 1.5 and Conj. 1.2 therein), proved for some ranges of d already by McKay and Wormald [10, 11].

Given a sequence $\mathbf{d}=(d_1,\dots,d_n)$ , let $g(\mathbf{d})$ denote the number of graphs G on the vertex set $V=(v_1,\dots,v_n)$ whose degree sequence is $\mathbf{d}$ , that is, $\deg_G(v_i)=d_i$ , $i=1,\dots,n$ . Further, let

\begin{equation*} \bar d=\frac1n\sum_{i=1}^nd_i\;,\quad\mu=\frac{\bar d}{n-1}\;,\quad \gamma_2=\frac1{(n-1)^2}\sum_{i=1}^n(d_i-\bar d)^2\;,\quad\hat d=\min(\bar d,n-1-\bar d).\end{equation*}

Theorem 22 ([8]). For some absolute constant $\varepsilon > 0$ , if $\mathbf{d}=\mathbf{d}(n)$ satisfies $\max_i|d_i-\bar d|=o(n^\varepsilon\hat d)$ , $n\hat d\to\infty$ , as $n\to\infty$ , and $\sum_{i=1}^nd_i$ is even, then

\begin{equation*} g(\mathbf{d})\sim\sqrt2\exp\left(-\frac14-\frac{\gamma_2^2}{4\mu^2(1-\mu)^2}\right)\left(\mu^\mu(1-\mu)^{1-\mu}\right)^{\binom n2}\prod_{i=1}^n\binom{n-1}{d_i}.\end{equation*}

The proof of Lemma 11 $^{\prime}$ follows the lines of the proof of Lemma 11 in the case $\mathbb{I}=0$ . For fixed distinct $u,v\in V$ we define, as before, ${\mathcal{R}}_k = \{ G \in {\mathcal{R}}(n,d)\,\ :\,\ \operatorname{cod}_G(u,v) = k \}$ , $0\le k\le d$ . Using Theorem 22, it is tedious but straightforward to find a sequence $r_k\,{:\!=}\,r_k(n,d)$ such that $|{\mathcal{R}}_k|\sim r_k$ . We have $r_k = r_k^0 + r_k^1$ , as the formula for $|{\mathcal{R}}_k|$ breaks into two, according to whether uv is an edge ( $r_k^1)$ or not ( $r_k^0$ ). It can be checked that $r_0, \dots, r_d > 0$ and, for $1\le k\le d$ ,

\begin{equation*} \frac{r_k^0}{r_{k-1}^0}=\frac{(d-k+1)^2}{k(n-2-2d+k)}\left(1-\frac1d\right)\left(1-\frac1{n-1-d}\right)\end{equation*}

as well as

\begin{equation*} \frac{r_k^1}{r_{k-1}^1}=\frac{(d-k)^2}{k(n-2d+k)}\left(1-\frac1d\right)\left(1-\frac1{n-1-d}\right). \end{equation*}

Let $\mu^0 \,{:\!=}\, (d+1)^2/n$ and $\mu^1 \,{:\!=}\, d^2/n$ . For $\mu^0\le k\le d$ and, respectively, for $\mu^1\le k\le d$ ,

\begin{equation*} \frac{r_k^0}{r_{k-1}^0}\le\frac{\mu^0}k\quad\mbox{and}\quad\frac{r_k^1}{r_{k-1}^1}\le\frac{\mu^1}k\le\frac{\mu^0}k.\end{equation*}

Thus, for $\mu^0\le k\le d$ ,

\begin{equation*}\frac{r_k}{r_{k-1}}=\frac{r_k^0+r_k^1}{r_{k-1}^0+r_{k-1}^1}\le\frac{\mu^0}k.\end{equation*}
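Here, and again below, we use the mediant inequality: for positive reals $a,b,c,d$ ,

\begin{equation*} \frac{a+b}{c+d}\le\max\left(\frac ac,\frac bd\right),\end{equation*}

applied with $a = r_k^0$ , $b = r_k^1$ , $c = r_{k-1}^0$ , and $d = r_{k-1}^1$ .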

Similarly, setting $\rho=\left(1-\frac1d\right)\left(1-\frac1{n-1-d}\right)$ , for $1\le k\le\mu^0$ and, respectively, $1\le k\le\mu^1$ ,

\begin{equation*} \frac{r_{k-1}^0}{r_{k}^0}\le\frac k{\mu^0\rho}\le\frac k{\mu^1\rho}\quad\mbox{and}\quad\frac{r_{k-1}^1}{r_{k}^1}\le\frac k{\mu^1\rho}.\end{equation*}

So, for $1\le k\le\mu^1$ ,

\begin{equation*}\frac{r_{k-1}}{r_k}=\frac{r_{k-1}^0+r_{k-1}^1}{r_k^0+r_k^1}\le\frac k{\mu^1\rho}.\end{equation*}

Conveniently setting $\mu_+\,{:\!=}\,\mu^0$ and $\mu_-\,{:\!=}\,\mu^1\rho$ , we may now apply Claim 12, which extends straightforwardly to the non-bipartite setting.

We have, using that $d \ge 1$ ,

(137) \begin{equation} \mu_+ \le \frac{4d^2}{n} \le 4p^2n.\end{equation}

Also, using $p \le 1/2$ ,

\begin{equation*} \mu_+\le \frac{(pn+1)^2}n\le p^2n+2 \quad \text{ and } \quad \mu_-=\mu^1\rho\ge\frac{d^2}n\left(1-\frac1d\right)^2\ge\frac{(pn-2)^2}n\ge p^2n-2,\end{equation*}

whence, setting $X = \operatorname{cod}_{\mathbb{R}}(u,v)$ and $x = 20\sqrt{p^2 n \lambda} + \lambda - 2$ ,

(138) \begin{equation} \mathbb{P}\left(|X - p^2n| \ge 20p\sqrt{\lambda n} + \lambda\right) \le \mathbb{P}\left(X \ge \mu_+ + x\right) + \mathbb{P}\left(X\le \mu_- - x\right). \end{equation}

We now bound the RHS of (138) using Claim 12 with $\lambda - 2$ instead of $\lambda$ . Noting that (137) implies $x \ge \sqrt{2 \mu_+ (\lambda - 2)} + (\lambda - 2)$ , we have

\begin{equation*} \mathbb{P}\left(X \ge \mu_+ + x\right) + \mathbb{P}\left(X\le \mu_- - x\right) = O\left( \sqrt{N} e^{-(\lambda - 2)} \right).\end{equation*}
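Indeed, (137) gives $2\mu_+(\lambda - 2) \le 8p^2n\lambda$ , so the assumption of Claim 12 on $x$ is satisfied:

\begin{equation*} \sqrt{2\mu_+(\lambda - 2)} + (\lambda - 2) \le \sqrt{8p^2n\lambda} + (\lambda - 2) \le 20\sqrt{p^2 n \lambda} + \lambda - 2 = x.\end{equation*}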

Since $\sqrt{N} e^{-(\lambda -2)} = \Theta(n e^{-\lambda})$ , this completes the proof of Lemma 11$'$ .

7.2. Sketch of the proof of Lemma 10$'$ , case $p > 0.49$

For completeness, we state here the non-bipartite counterpart of Lemma 10, which on the surface looks almost identical.

Lemma 10$'$ . Assuming (131), we have, a.a.s.,

\begin{equation*} \min_{e, f \in K_n \setminus \mathbb{R}(t), e \neq f}\mathbb{P}\left({e \in \mathbb{R}, f \notin \mathbb{R}}\,|\,{\mathbb{R}(t)}\right) \ge e^{ - \lambda(t) }, \qquad \text{for } t \le t_0. \end{equation*}

Looking at the diagram in Figure 1, we see that the proof of Lemma 10 relies on several other results, most notably Lemmas 17 and 20. It would, however, be a very tedious task to come up with a non-bipartite counterpart of Lemma 20 and, consequently, with counterparts of Lemmas 18 and 19. Instead, we convert the non-bipartite case into the bipartite one by a standard probabilistic construction and use Lemmas 17 and 20 practically unchanged.

Given a blue-red graph $H \subseteq K_n$ , let $\operatorname{bip}\!(H)$ be a random blue-red bipartite graph with bipartition $V_1 = \left\{ u_1, \dots, u_n \right\}, V_2 = \left\{ v_1, \dots, v_n \right\}$ , such that for each edge $ij \in E(H)$ we flip a fair coin and include into $\operatorname{bip}\!(H)$ either $u_iv_j$ or $u_jv_i$ (coloured the same colour as ij), with the flips being independent. In particular, $|E(\operatorname{bip}\!(H))| = |E(H)|$ , so the density of $\operatorname{bip}\!(H)$ is exactly half of that of H, while, if the densities of the blue and red subgraphs of H are b and r, then the expected densities of the blue and red subgraphs of $\operatorname{bip}\!(H)$ are $b/2$ and $r/2$ .

Note that if there is an instance of $\operatorname{bip}\!(H)$ in which an edge $u_iv_j$ is contained in an alternating cycle of length at most D, then ij is contained in an alternating circuit of the same length. It is not, in general, a cycle, since some vertices can be repeated, but edges are not, as the edges of $\operatorname{bip}\!(H)$ correspond to different edges of H. Luckily, Proposition 4 actually works for alternating circuits too, since in a circuit the blue and red degrees of each vertex equal each other (see the paragraph following equation (15)) and, similarly as for cycles, in $K_n$ there are at most $n^{2\ell-2}$ circuits of length $\ell$ containing a given edge e. Defining ${\mathcal{R}}_G$ , ${\mathcal{R}}_{G,e}$ , and $ {\mathcal{R}}_{G,\neg e}$ analogously to the bipartite case (cf. (15) and (16)), and making obvious modifications of the proof of Proposition 4, we obtain the following.

Proposition 4$'$ . Let a graph $G\subseteq K_n$ be such that ${\mathcal{R}}_G\neq\emptyset$ and let $e\in K_n\setminus G$ . Assume that for some number $D > 0$ and every $H\in {\mathcal{R}}_{G}$ the edge e is contained in an alternating circuit of length at most 2D. Then ${\mathcal{R}}_{G,\neg e}\neq\emptyset$ , ${\mathcal{R}}_{G, e}\neq\emptyset$ , and

\begin{equation*} \frac{1}{n^{2D} - 1} \le \frac{|{\mathcal{R}}_{G,\neg e}|}{|{\mathcal{R}}_{G,e}|} \le n^{2D} - 1. \end{equation*}

The plan to adapt the proof of Lemma 10 is to condition on $K_n\setminus\mathbb{R}(t)$ having its degrees and co-degrees concentrated (using the non-bipartite counterparts of Lemmas 13 and 14) and then show that there is an instance F of $\operatorname{bip}\!(K_n \setminus \mathbb{R}(t))$ in which the degrees and co-degrees are similarly concentrated, with just a negligibly larger error.

In particular, such an F is $(q/2,\tau p/2,\alpha/32)$ -regular (as before, we denote $\alpha = \min \left\{ \tau p, q \right\}$ ). Then, applying Theorem 15 along the lines of the proof of Lemma 17, we show that F is also $(\pi/2, \alpha/32)$ -jumbled, where, as before, $\pi = \tau p + q$ . Hence, we are in a position to apply Lemma 20, obtaining for every edge of F an alternating cycle of length $O\left( 1/(\tau p q) \right)$ in F. As explained above, this implies alternating circuits in $K_n \setminus \mathbb{R}(t)$ of the same length, and so we may complete the proof of Lemma 10$'$ , based on Proposition 4$'$ , in the same way as in the bipartite case.

Let us now present some more details. We condition on $\mathbb{R}(t) = G$ such that (cf. (75) and (78))

(139) \begin{equation} \forall v \in [n] \quad |\deg_{G}(v) - (1 - \tau) pn| = O(\sqrt{n \log n})\end{equation}

and

(140) \begin{equation} \max_{u, v \in [n], u \ne v} \operatorname{cod}_{G}(u,v) = (1 - \tau)^2p^2 n + O(\sqrt{n \log n}).\end{equation}

Fix an arbitrary $H \in {\mathcal{R}}_G$ . Since the red graph $K_n \setminus H$ is $q(n-1)$ -regular, we have $\deg_{\operatorname{bip}\!(K_n \setminus H)} (v) \sim \operatorname{Bin}(q(n-1), 1/2)$ . Thus, by a routine application of the Chernoff bound, a.a.s.

(141) \begin{equation}\max_{v \in [n]} \left| \deg_{\operatorname{bip}\!(K_n \setminus H)}(v) - qn/2 \right| = O(\sqrt{n \log n}).\end{equation}

By (139), the blue graph $H \setminus G$ has degrees $pn - (1 - \tau)pn + O\left( \sqrt{n \log n} \right) = \tau p n + O\left( \sqrt{n \log n} \right)$ , so the Chernoff bound implies that (using $p > 0.49$ ) a.a.s.

(142) \begin{equation}\max_{v \in [n]} \left|\deg_{\operatorname{bip}\!(H \setminus G)}(v) - \tau pn/2 \right| =O\left( \sqrt{n \log n} \right).\end{equation}

By (131) and (132), since $p>0.49$ and $\tau\ge \tau_0$ ,

\begin{equation*} \alpha=\min\{\tau p,q\} \gg \left(\frac{\log n}n\right)^{1/4}. \end{equation*}

Consequently, $\sqrt{n\log n} \ll \alpha n$ and, with a large margin, we conclude that a.a.s. $\operatorname{bip}\!(K_n \setminus G)$ is $(q/2, \tau p/2, \alpha/32)$ -regular.

Next, we check that $\operatorname{bip}\!(K_n \setminus G)$ is $(\pi/2, \alpha/32)$ -jumbled. Inequalities (141) and (142) imply

(143) \begin{equation} \deg_{\operatorname{bip}\!(K_n\setminus G)}(v) = \frac{\pi n}{2} + O(\sqrt{n \log n}).\end{equation}

Further, by (135), (139), and (140), we have that (cf. (100))

\begin{align*} \max_{u \ne v}\operatorname{cod}_{K_n \setminus G}(u, v) &\le 2\max_{v \in [n]} \deg_{K_n \setminus G}(v) - (n - 2 - \operatorname{cod}_{G}(u,v))\\[3pt] &=(2 \pi - 1)n + (1 - \pi)^2 n + O(\sqrt{n\log n}) = \pi^2 n + O\left( \sqrt{n\log n} \right). \end{align*}
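The first inequality above follows by inclusion–exclusion: we have

\begin{equation*} \operatorname{cod}_{K_n \setminus G}(u,v) = \deg_{K_n\setminus G}(u) + \deg_{K_n\setminus G}(v) - \left|N_{K_n\setminus G}(u)\cup N_{K_n\setminus G}(v)\right|,\end{equation*}

and every vertex $w \notin \{u,v\}$ which is not a common neighbour of $u$ and $v$ in $G$ is adjacent in $K_n \setminus G$ to at least one of $u$ and $v$ , so the union above has at least $n - 2 - \operatorname{cod}_G(u,v)$ elements.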

Since for every $u, v \in [n], u \ne v$ , $ \operatorname{cod}_{\operatorname{bip}\!(K_n \setminus G)}(u,v) \sim \operatorname{Bin}( \operatorname{cod}_{K_n \setminus G}(u,v), \tfrac{1}{4})$ , by a simple application of Chernoff’s inequality and the union bound we show that a.a.s.

(144) \begin{equation} \max_{u,v \in V_1} \operatorname{cod}_{\operatorname{bip} \left( K_n \setminus G \right)} (u,v) = \frac{\pi^2n}{4} + O( \sqrt{n\log n} ).\end{equation}

Now, fix an instance F of the graph $\operatorname{bip}\!(K_n \setminus G)$ for which (141), (142) and (144) hold.

Applying Theorem 15 and performing calculations similar to those in the proof of Lemma 17, we deduce from (143) and (144) that F is $( \pi/2, \alpha/32)$ -jumbled. Since we have already shown that F is $(q/2, \tau p/2, \alpha/32)$ -regular, Lemma 20 implies that in F every edge is contained in an alternating cycle of length $O\left( 1/(\tau p q) \right)$ , and therefore in $K_n \setminus G$ every edge is contained in an alternating circuit of the same length. The same argument implies alternating circuits in $K_n \setminus (G \cup \{e\})$ for an arbitrary edge e, since only the lower bound in (139) has to be decreased by 1, a negligible quantity.

The rest of the proof of Lemma 10$'$ in the case $p > 0.49$ follows along similar lines.

8. Technical facts

Here we collect a few technical or very plausible facts together with their easy proofs. Most of them have already been used in the paper. An exception is Proposition 24, which is used only in Remark 26 in Section 10.

We begin with convenient consequences of the assumptions of Lemma 6.

Proposition 23. For $t = 0, \dots, t_0 - 1$ , the conditions of Lemma 6, namely, (26), (27), and (28), imply that

(145) \begin{equation} \tau q \ge \tau_0 q \ge 700 \cdot 680 \sqrt{\frac{3(C+4)\log N}{\hat n}}, \quad \text{whenever } p > 0.49, \end{equation}
(146) \begin{equation} \gamma_t \le \gamma_{t_0}\le 1, \end{equation}

and

(147) \begin{equation} \tau \hat p \hat n \ge \tau_0 \hat p\hat n \ge 3000^2(C + 4)\log N.\end{equation}

Proof. Since $\tau = \tau(t) \ge \tau(t_0) \ge \tau_0$ and $\gamma_t$ is increasing in t, the first inequalities in (145)–(147) are immediate and it is enough to prove the second inequalities.

Inequality (145) follows from the definition (25) of $\tau_0$ and (28).

To show (146), for $p\le0.49$ , using the definition of $\tau_0$ (see (25)) and (27), we get

\begin{equation*} \gamma_{t_0} \le \frac{1080}{340^4} + \sqrt\frac23 < 0.01 + 0.82 < 1, \end{equation*}

while for $p > 0.49$

\begin{equation*} \gamma_{t_0} \le 0.01 + \frac{25,000}{680 \cdot 700} < 1. \end{equation*}

To see (147), first note that for $p \le 0.49$ it follows directly from the definition of $\tau_0$ in (25). For $p > 0.49$ , observing that (26) implies $\sqrt{\tfrac{3(C+4)\log N}{\hat{n}}} \le \sqrt{\hat p /3240} \le 1/3240$ , we argue that

\begin{equation*} \tau_0 \hat p\ge \frac{49}{51}\tau_0 q \overset{\mbox{(145)}}{\ge} \frac{49}{51}\cdot 700\cdot 680\sqrt{\frac{3(C+4)\log N}{\hat{n}}} \ge 3000^2\frac{(C+4)\log N}{\hat{n}},\end{equation*}

whence (147) follows.

Next, we give a proof of Claim 7 which was instrumental in deducing Theorem 2 from Lemma 6 in Section 3.2.

Proof of Claim 7. Writing $X = t_0 - |S|$ , we have $\operatorname{\mathbb{E}} X = \sum_{t=0}^{t_0 - 1} \gamma_t$ . Denoting $\alpha \,{:\!=}\, 1080\hat p^2 \mathbb{I}$ and

\begin{equation*} \beta \,{:\!=}\, \begin{cases} 3240\sqrt{\dfrac{2(C + 3)\log N}{p\hat n}}, \quad & p \le 0.49, \\ \\[-9pt] 25,000 \sqrt{\dfrac{(C + 3)\log N}{q^2\hat n } } , \quad & p > 0.49, \end{cases} \end{equation*}

we have

\begin{equation*} \gamma_t = \alpha + \begin{cases} \beta \tau^{-1/2}, \quad & p \le 0.49, \\ \\[-9pt] \beta \tau^{-1}, \quad & p > 0.49. \end{cases} \end{equation*}

Since, trivially, $\sum_{t = 0}^{t_0 - 1} \alpha = \alpha t_0 \le \alpha M = 1080\hat p^2 \mathbb{I} \cdot M$ , to prove (35), it suffices to show that

(148) \begin{equation} \sum_{t = 0}^{t_0 - 1} \tau^{-1/2} \le 2M, \quad \text{and} \quad \sum_{t = 0}^{t_0 - 1} \tau^{-1} \le \frac{M}{4}\log \frac{\hat n}{\log N}. \end{equation}

Since $\tau = \tau(t) = 1 - t/M$ is positive and decreasing on [0,M),

\begin{equation*} \sum_{t = 0}^{t_0 - 1} \tau^{-1/2} \le \sum_{t = 0}^{M - 1} \tau^{-1/2} \le \int_0^M \tau^{-1/2} dt = M \int_0^{1} \tau^{-1/2} d\tau = 2M,\end{equation*}

implying the first inequality in (148). On the other hand, recalling $t_0 = \lfloor (1-\tau_0)M \rfloor$ ,

\begin{equation*} \sum_{t = 0}^{t_0 - 1} \tau^{-1} \le \int_0^{t_0} \tau^{-1} dt \le \int_0^{(1-\tau_0)M} \tau^{-1} dt \le M \int_{\tau_0}^{1} \tau^{-1} d\tau = M \log \frac{1}{\tau_0} \le \frac{M}{4} \log \frac{\hat n}{\log N},\end{equation*}

which implies the second inequality in (148).

Checking (36) is a routine but tedious inspection of the definitions using conditions (28)–(29), which the reader might prefer to carry out themselves. We nevertheless spell out the details, starting with the case $p\le 0.49$ . Choosing $C^*$ large enough,

\begin{align*} \gamma &= C^*\left( p^2 \mathbb{I} + \sqrt{\frac{\log N}{p\hat n }}\right) \\[3pt] &\ge 1080 p^2 \mathbb{I} + 6480 \sqrt{\frac{(C + 3)\log N}{p\hat n }} + 3240\sqrt{\frac{3(C + 4)\log N}{p\hat n}} + \sqrt{\frac{\log N}{p\hat n}} + \sqrt{\frac{2C\log N}{p\hat n}}\\[3pt] &\ge \theta + \sqrt{\tau_0} + 2\sqrt{1/M} + \sqrt{\frac{2C\log N}{M}} \overset{\mbox{(29)}}{\ge} \theta + \tau_0 + 2/M + \sqrt{\frac{2C\log N}{M}}.\end{align*}

In the case $p > 0.49$ , note that $q^{3/2}\ge q^2\ge \hat p^2$ . Therefore, assuming $C^*$ is large enough,

\begin{equation*} \gamma/3 \ge 700\left(3(C+4)\right)^{1/4} \left( q^{3/2}\mathbb{I} + \left(\frac{\log N}{\hat n}\right)^{1/4}\right) = \tau_0,\end{equation*}

and

\begin{equation*}\gamma/3 \ge 1080q^{3/2}\mathbb{I} + 6250\sqrt{\frac{(C + 3)\log N}{q^2\hat n}}\log\frac{\hat n}{\log N} \ge \theta.\end{equation*}

Finally, because (28) implies $\hat n / (\log N) \ge 1$ and $d_1, d_2 \ge 1$ implies $M \ge \hat n$ , for large enough $C^*$ ,

\begin{align*} \gamma/3 \ge \frac{C^*}{3}\left( \frac{\log N}{\hat n} \right)^{1/4} &\ge \frac{\log N}{\hat n} + \sqrt{\frac{2C\log N}{\hat n}} \\[3pt] \fbox{$p > 0.49$, $M \ge \hat n$}\quad & \ge \frac{2}{M} + \sqrt{\frac{2C\log N}{M}},\end{align*}

which, with the previous two inequalities, implies (36).

We conclude this technical section with a lower bound on the maximum degree in a binomial random graph.

Proposition 24. If $p' \le 1/4$ and $n_2 \le n_1$ , then a.a.s. the maximum degree of $\mathbb{G}(n_1,n_2,p')$ in $V_1$ is at least $\kappa \,{:\!=}\, \min \{ \lceil{{p'n_2 + \sqrt{p'(1-p')n_2\log n_1}}}\rceil, n_2 \}$ .

Proof. Writing $Z \sim \mathcal{N}(n_2p', n_2p'(1- p'))$ , by Slud’s inequality [Reference Slud13, Theorem 2.1] we have

\begin{equation*} r \,{:\!=}\, \mathbb{P}\left(\operatorname{Bin}(n_2,p') \ge \kappa\right) \ge \mathbb{P}\left(Z \ge \kappa\right) \, \ge \mathbb{P}\left(Z \ge \sqrt{p'(1-p')n_2\log n_1}\right),\end{equation*}

hence by a standard approximation of the normal tail we obtain

\begin{equation*} r \ge \frac{1 + o(1)}{\sqrt{2\pi \log n_1}}e^{-\frac{1}{2}\log n_1} = e^{ - (1/2 + o(1)) \log n_1 }. \end{equation*}
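Here we used the standard lower bound on the tail of the standard normal distribution: for every $x > 0$ ,

\begin{equation*} \mathbb{P}\left(\mathcal{N}(0,1) \ge x\right) \ge \left(\frac{1}{x}-\frac{1}{x^{3}}\right)\frac{e^{-x^{2}/2}}{\sqrt{2\pi}},\end{equation*}

applied with $x = \sqrt{\log n_1} \to \infty$ .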

Therefore, the probability that all vertices in $V_1$ have degrees smaller than $\kappa$ is

\begin{equation*} (1-r)^{n_1} \le e^{ - n_1 r} \le \exp \left( -n_1e^{-(1/2 + o(1))\log n_1} \right) \to 0,\end{equation*}

proving the proposition.

9. Application: perfect matchings between subsets of vertices

In connection with a problem of Plünnecke, Perarnau and Petridis [Reference Perarnau and Petridis12] studied the existence of perfect matchings between fixed subsets of vertices in random biregular graphs. In particular, they proved the following result (which we state in our notation to make it easier to apply Theorem 2).

Theorem 25. (Theorem 2 in [Reference Perarnau and Petridis12]) Let $k>0$ be a constant, and assume $n_2 = kn_1$ and $pn_2 \le n_1$ . Take subsets $A\subseteq V_1$ and $B\subseteq V_2$ of size $pn_2$ and let $\mathcal M_{A,B}$ denote the event that the subgraph of $\mathbb{R}(n_1, n_2, p)$ induced by A and B contains a perfect matching. As $\hat n \to \infty$ , we have

\begin{equation*} \mathbb{P}\left(\mathcal M_{A,B}\right) \to 0, \quad \text{if} \quad p^2n_2 - \log pn_2 \to -\infty \text{ or } pn_2 \text{ is constant}, \end{equation*}

and

(149) \begin{equation} \mathbb{P}\left(\mathcal M_{A,B}\right) \to 1, \quad \text{if} \quad p^2n_2 - \log pn_2 \to \infty. \end{equation}

Perarnau and Petridis [Reference Perarnau and Petridis12] speculated that if the bipartite version of the Sandwich Conjecture were true, then Theorem 25 would follow straightforwardly from the classical result of Erdős and Rényi on perfect matchings in the random bipartite graph (see Theorem 4.1 in [Reference Janson, Łuczak and Rucinski5]). That result implies, in particular, that the bipartite binomial random graph $\mathbb{G}(n',n',p')$ a.a.s. contains a perfect matching whenever $p'n'-\log n'\to\infty$ as $n'\to\infty$ . We show how this, together with Theorem 2, implies the 1-statement in (149), provided that condition (4) of Theorem 2 is satisfied, which in particular implies

(150) \begin{equation} q \ge \left( \frac{\log N}{\hat n} \right)^{1/4}.\end{equation}

By Theorem 2, the random graph $\mathbb{R}(n_1,n_2,p)$ a.a.s. contains a random graph $\mathbb{G}(n_1,n_2,p')$ with

\begin{equation*} p' =(1 - 2\gamma) p, \end{equation*}

where $\gamma$ is defined in (5). In particular, the subgraph of $\mathbb{R}(n_1,n_2,p)$ induced by A and B contains a random graph $\mathbb{G}(pn_2, pn_2, p')$ . To see that the latter random graph contains a perfect matching a.a.s., let us verify the Erdős–Rényi condition

\begin{equation*} p'pn_2 - \log pn_2 \to \infty. \end{equation*}

From the assumption in (149) it follows that

(151) \begin{equation} p\geq n_2^{-1/2} = \Theta\left( \hat n^{-1/2} \right). \end{equation}

For our purposes, it is sufficient to check that

(152) \begin{equation} \gamma \ll (\log N)^{-1}, \end{equation}

since then

\begin{equation*} \gamma \log pn_2 \le \gamma \log N \to 0, \quad \text{and} \quad \gamma \to 0,\end{equation*}

which together with the condition in (149) imply

\begin{equation*} p'pn_2 - \log pn_2 = (1 - 2\gamma)(p^2n_2 - \log pn_2) - 2\gamma \log pn_2 \to \infty.\end{equation*}

To see that (152) holds, first note that $\mathbb{I} = 1$ implies $p = O( (\log N)^{-1})$ for $p \le 0.49$ and $q = O( (\log N)^{-1})$ for $p > 0.49$ ; hence, regardless of p, the first term in the definition of $\gamma$ is $o\left( (\log N)^{-1} \right)$ . The remaining terms are much smaller: for $p \le 0.49$ inequality (151) implies that the second term in the definition of $\gamma$ is $O\left( (\log N)^{1/2}\hat n^{-1/4} \right) \ll (\log N)^{-1}$ , while in the case $p > 0.49$ assumption (150) implies that the last two terms in the definition of $\gamma$ are at most $(\log N)^{O(1)}\hat n^{-1/4} \ll (\log N)^{-1}$ .

10. Concluding remarks

Remark 26. Assume $p \le 1/4$ . If a.a.s. $\mathbb{G}(n_1, n_2, p') \subseteq \mathbb{R}(n_1,n_2,p)$ , then we must have

(153) \begin{equation} p' = p \left(1 - \Omega \left( \min \left\{ \sqrt{(\log N)/(p \hat n)}, 1 \right\}\right)\right). \end{equation}

To see this, assume, without loss of generality, that $n_2 \le n_1$ , and note that by Proposition 24 we must have $p'n_2 + \sqrt{p'(1 - p')n_2\log n_1} \le pn_2$ , and therefore $p/p' \ge 1 + \sqrt{\frac{(1 - p')\log n_1}{p'n_2}} $ . Since $p' \le p \le 1/4$ and $n_2 \le n_1$ , we have $p/p' = 1 + \Omega\left( \sqrt{\tfrac{\log N}{p\hat n }} \right)$ , whence (153) follows.
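Indeed, in the last step, $p' \le p \le 1/4$ and $n_2 = \hat n \le n_1$ give $1 - p' \ge 3/4$ and $\log n_1 = \Theta(\log N)$ , so

\begin{equation*} \sqrt{\frac{(1-p')\log n_1}{p'n_2}} \ge \sqrt{\frac{3\log n_1}{4pn_2}} = \Omega\left(\sqrt{\frac{\log N}{p\hat n}}\right).\end{equation*}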

Remark 27. In view of Remark 26, the error $\gamma$ in Theorem 2 has optimal order of magnitude, whenever $p \le 1/4$ , provided that also $p^2 \mathbb{I} = O\left( \sqrt{\log N / p \hat n} \right)$ (that is, if either $p = O\left( \left( (\log N)/\hat n \right)^{1/5}\right)$ or $\mathbb{I} = 0$ ).

From Remark 26 it also follows that we cannot have $\gamma = o(1)$ for $\log N = \Omega(p\hat n )$ . Theorem 2 does not apply to the case $\log N \ge p\hat n /(C^*)^2$ , but we think it would be interesting to find the largest p for which one can a.a.s. embed $\mathbb{G}(n_1,n_2,p')$ into $\mathbb{R}(n_1,n_2,p)$ even in this case. The tightest embedding one can expect is the one permitted by the maximum degree. We conjecture that such an embedding is possible.

Conjecture 1. Suppose that a sequence of parameters $(n_1, n_2, p, p')$ is such that a.a.s. in $\mathbb{G}(n_1, n_2, p')$ the maximum degree over $V_1$ is at most $d_1 = n_2p$ and the maximum degree over $V_2$ is at most $d_2 = n_1p$ . There is a joint distribution of $\mathbb{G}(n_1, n_2, p')$ and $\mathbb{R}(n_1,n_2,p)$ such that

\begin{equation*} \mathbb{G}(n_1, n_2, p') \subseteq \mathbb{R}(n_1,n_2,p) \quad a.a.s. \quad \end{equation*}

Note that if Conjecture 1 is true, then by taking complements we also have the tightest embedding $\mathbb{R}(n_1,n_2,p) \subseteq \mathbb{G}(n_1,n_2,p'')$ that the minimum degrees of $\mathbb{G}(n_1,n_2,p'')$ permit.

We also propose the following strengthening of the Kim–Vu Sandwich Conjecture.

Conjecture 2. Suppose that a sequence of parameters $(n, p, p')$ is such that a.a.s. in $\mathbb{G}(n, p')$ the maximum degree is at most $d = (n-1)p$ . There is a joint distribution of $\mathbb{G}(n, p')$ and $\mathbb{R}(n,p)$ such that

\begin{equation*} \mathbb{G}(n, p') \subseteq \mathbb{R}(n,p) \quad a.a.s. \quad \end{equation*}

Remark 28. For constant p, to obtain $\gamma = o(1)$ in Theorem 2, we need to assume $\mathbb{I} = 0$ , which requires a rather restricted ratio $n_1/n_2$ . For example, one cannot afford $n_1 = n_2^{1+\delta}$ for any constant $\delta > 0$ . This restriction comes from an enumeration result we use in the proof, namely Theorem 5 (see condition (iii) therein). Should that enumeration result be proved under a relaxed condition, our Theorem 2 would improve automatically.

Remark 29. The terms $ p^2\mathbb{I}$ and $q^{3/2}\mathbb{I}$ in (5) are artifacts of the application of switchings in Lemma 11. In the sparse case ( $\hat p \to 0$ ) it is plausible that the condition $\mathbb{I} = 0$ can be made much milder by using a very recent enumeration result of Liebenau and Wormald [Reference Liebenau and Wormald9] instead of Theorem 5. Due to the schedule of this manuscript, we did not check what this would imply, but readers seeking smaller errors in (5) are encouraged to do so.

Acknowledgement

We thank Noga Alon and Benny Sudakov who, at the conference Random Structures & Algorithms 2019, suggested a way to show the existence of alternating paths in non-bipartite graphs. We are also thankful to the anonymous referee for useful remarks.

Footnotes

Research of TK was supported by the grant no. 19-04113Y of the Czech Science Foundation (GAČR) and the Center for Foundations of Modern Computer Science (Charles Univ. project UNCE/SCI/004).

Research of AR was supported by Narodowe Centrum Nauki, grant 2018/29/B/ST1/00426.

Research of MŠ was supported by the Czech Science Foundation, grant number 20-27757Y, with institutional support RVO:67985807.

References

Canfield, E. R., Greenhill, C. and McKay, B. D. (2008) Asymptotic enumeration of dense 0-1 matrices with specified line sums. J. Comb. Th. A 115 32–66.
Dudek, A., Frieze, A., Ruciński, A. and Šileikis, M. (2017) Embedding the Erdős–Rényi hypergraph into the random regular hypergraph and hamiltonicity. J. Comb. Theory B 122 719–740.
Gao, P. Kim–Vu's sandwich conjecture is true for all $d=\Omega(\log^7 n)$ . arXiv:2011.09449.
Gao, P., Isaev, M. and McKay, B. (2020) Sandwiching random regular graphs between binomial random graphs. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, pp. 690–701.
Janson, S., Łuczak, T. and Rucinski, A. (2000) Random Graphs, Wiley-Interscience Series in Discrete Mathematics and Optimization, Wiley-Interscience, New York.
Kim, J. H. and Vu, V. H. (2004) Sandwiching random graphs: universality between random graph models. Adv. Math. 188(2) 444–469.
Krivelevich, M., Sudakov, B., Vu, V. H. and Wormald, N. (2001) Random regular graphs of high degree. Random Struct. Alg. 18 346–363.
Liebenau, A. and Wormald, N. Asymptotic enumeration of graphs by degree sequence, and the degree sequence of a random graph. arXiv:1702.08373.
Liebenau, A. and Wormald, N. Asymptotic enumeration of digraphs and bipartite graphs by degree sequence. arXiv:2006.15797.
McKay, B. D. and Wormald, N. C. (1990) Asymptotic enumeration by degree sequence of graphs of high degree. Eur. J. Comb. 11 565–580.
McKay, B. D. and Wormald, N. C. (1991) Asymptotic enumeration by degree sequence of graphs with degrees $o(n^{1/2})$ . Combinatorica 11 369–382.
Perarnau, G. and Petridis, G. (2013) Matchings in random biregular bipartite graphs. Electron. J. Combin. 20(1) #P60.
Slud, E. V. (1977) Distribution inequalities for the binomial law. Ann. Prob. 5(3) 404–412.
Thomason, A. (1989) Dense expanders and pseudo-random bipartite graphs. Discrete Math. 75 381–386.

Figure 1. The structure of the proof of Theorem 2. An arrow from statement A to statement B means that A is used in the proof of B. The numbers in the brackets point to the section where a statement is formulated and where it is proved (unless the proof follows the statement immediately); external results have instead an article reference in square brackets.


Figure 2. Switching between H and H$^{\prime}$ when e and f are disjoint: solid edges are in $H \setminus G$ (or $H' \setminus G$) and the dashed ones in $K \setminus H$ (or $K \setminus H'$).


Figure 3. Switching between $H \in {\mathcal{R}}_k(u_1,v_1)$ and $H' \in {\mathcal{R}}_{k-1}(u_1,v_1)$: solid edges are in H and H$^{\prime}$ and the dashed ones in $K \setminus H$ and $K \setminus H'$, respectively.