1 Introduction
1.1 An Informal Introduction
A voting method or social choice function with m candidates and n voters is a function $f\colon \{1,\ldots ,m\}^{n}\to \{1,\ldots ,m\}$.
From the social choice theory perspective, the input of the function f is a list of votes of n people who are choosing between m candidates. The m candidates are labelled by the integers $1,\ldots ,m$. If the votes are $x\in \{1,\ldots ,m\}^{n}$, then $x_{i}$ denotes the vote of person $i\in \{1,\ldots ,n\}$ for candidate $x_{i}\in \{1,\ldots ,m\}$. Given the votes $x\in \{1,\ldots ,m\}^{n}$, $f(x)$ is interpreted as the winner of the election.
It is both natural and desirable to find a voting method whose output is most likely to be unchanged after votes are randomly altered. One could imagine that malicious third parties or miscounting of votes might cause random vote changes, so we desire a voting method f whose output is stable to such changes. In addition to voting motivations, finding a voting method that is stable to noise has applications to the unique games conjecture [KKMO07, MOO10, KM16], to semidefinite programming algorithms such as MAX-CUT [KKMO07, IM12], to learning theory [FGRW12], etc. For some surveys on this and related topics, see [O’D, Kho, Hei20].
The output of a constant function f is never altered by changes to the votes. Also, if the function f only depends on one of its n inputs, then the output of f is rarely changed by independent random changes to each of the votes. In these cases, the function f is rather ‘undemocratic’ from the perspective of social choice theory. In the case of a constant function, the outcome of the election does not depend at all on the votes. In the case of a function that only depends on one of its inputs, the outcome of the election only depends on one voter (so f is called a dictatorship function).
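The contrast between a dictatorship and a method that spreads power evenly can be quantified by the influences (Equation (5) below). The following brute-force sketch is our own illustration, not from the paper, and assumes the standard definition $\mathrm {Inf}_{i}(g)=\mathbb {E}[\mathrm {Var}_{x_{i}}g]$; for $m=2$ and $n=3$, a dictatorship concentrates all influence on one voter, while majority spreads it evenly.

```python
from itertools import product

def influence(g, n, i):
    """Inf_i(g) = E[ Var_{x_i} g(x) ] over uniform x in {1,2}^n (the m = 2 case)."""
    total = 0.0
    for x in product((1, 2), repeat=n):
        vals = []
        for b in (1, 2):
            y = list(x)
            y[i] = b
            vals.append(g(tuple(y)))
        mean = sum(vals) / 2
        total += sum((v - mean) ** 2 for v in vals) / 2
    return total / 2 ** n

dictator = lambda x: x[0]                      # outcome = voter 0's vote
majority = lambda x: max(set(x), key=x.count)  # n odd and m = 2, so no ties

n = 3
inf_dict = [influence(dictator, n, i) for i in range(n)]
inf_maj = [influence(majority, n, i) for i in range(n)]
# dictator: all influence on voter 0; majority: influence spread evenly
assert all(abs(a - b) < 1e-12 for a, b in zip(inf_dict, [0.25, 0.0, 0.0]))
assert all(abs(v - 0.125) < 1e-12 for v in inf_maj)
```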
Among ‘democratic’ voting methods, it was conjectured in [KKMO07] and proven in [MOO10] that the majority voting method is the voting method that best preserves the outcome of the election. The following is an informal statement of the main result of [MOO10].
Theorem 1.1 Majority Is Stablest, Informal Version [MOO10, Theorem 4.4]
Suppose that we run an election with a large number n of voters and $m=2$ candidates. We make the following assumptions about voter behaviour and about the election method.
• Voters cast their votes randomly and independently, with equal probability of voting for either candidate.
• Each voter has a small influence on the outcome of the election. (That is, all influences from Equation 5 are small for the voting method.)
• Each candidate has an equal chance of winning the election.
Under these assumptions, the majority function is the voting method that best preserves the outcome of the election, when votes have been corrupted independently each with probability less than $1/2$.
We say a vote $x_{i}\in \{1,2\}$ is corrupted with probability $0<\delta <1$ when, with probability $\delta $, the vote $x_{i}$ is changed to a uniformly random element of $\{1,2\}$, and with probability $1-\delta $, the vote $x_{i}$ is unchanged.
For a formal statement of Theorem 1.1, see Theorem 1.8 below.
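To make the corruption model concrete, the agreement probability of the three-voter majority can be computed exactly by enumeration and checked against its Fourier expansion. This sketch is our own illustration: it uses the equivalent $\pm 1$ vote encoding and the correlation parameter $\rho $, where retaining each vote with probability $(1+\rho )/2$ corresponds to corruption probability $\delta =1-\rho $.

```python
from itertools import product

def maj(x):
    """Majority of three +/-1 votes."""
    return 1 if sum(x) > 0 else -1

def agree_prob(rho):
    """Exact P(maj(omega) = maj(delta)) when omega is uniform on {-1,1}^3 and,
    independently for each i, delta_i = omega_i with probability (1+rho)/2."""
    p_keep, p_flip = (1 + rho) / 2, (1 - rho) / 2
    total = 0.0
    for w in product((-1, 1), repeat=3):
        for d in product((-1, 1), repeat=3):
            weight = 1.0
            for wi, di in zip(w, d):
                weight *= p_keep if wi == di else p_flip
            if maj(w) == maj(d):
                total += weight / 8  # omega is uniform over 8 outcomes
    return total

rho = 0.3
# Fourier check: MAJ_3 has noise stability (3*rho + rho**3)/4,
# so the agreement probability is (1 + (3*rho + rho**3)/4) / 2
assert abs(agree_prob(rho) - (1 + (3 * rho + rho ** 3) / 4) / 2) < 1e-12
assert agree_prob(1.0) == 1.0  # no corruption, no change
```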
The primary interest of the authors of [KKMO07] in Theorem 1.1 was proving optimal hardness of approximation for the MAX-CUT problem. In the MAX-CUT problem, we are given a finite undirected graph on n vertices, and the objective of the problem is to find a partition of the vertices of the graph into two sets that maximises the number of edges going between the two sets. The MAX-CUT problem is MAX-SNP hard; i.e., if $P\neq NP$, there is no polynomial time (in n) approximation scheme for this problem. Nevertheless, there is a randomised polynomial time algorithm [GW95] that achieves, in expectation, at least $.87856\ldots $ times the maximum value of the MAX-CUT problem. This algorithm uses semidefinite programming. Also, the exact expression for the $.87856\ldots $ constant is $\min _{0<\theta <\pi }\frac {2}{\pi }\cdot \frac {\theta }{1-\cos \theta }$.
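The $.87856\ldots $ constant is the minimum over $\theta \in (0,\pi )$ of $\frac {2}{\pi }\cdot \frac {\theta }{1-\cos \theta }$, a standard fact from the Goemans-Williamson analysis. The following short sketch (ours) evaluates it numerically on a grid:

```python
import math

def alpha_gw(grid=200000):
    """Numerically minimise (2/pi) * theta / (1 - cos(theta)) over (0, pi)."""
    return min(
        (2 / math.pi) * t / (1 - math.cos(t))
        for t in (math.pi * k / grid for k in range(1, grid))
    )

# the Goemans-Williamson approximation ratio, 0.878567...
assert abs(alpha_gw() - 0.878567) < 1e-5
```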
The authors of [KKMO07] showed that, if the Unique Games Conjecture is true, then Theorem 1.1 implies that the Goemans-Williamson algorithm’s $.87856\ldots $ constant of approximation cannot be increased. Assuming the validity of the Unique Games Conjecture is fairly standard in complexity theory, though the conjecture remains open. See [O’D, Kho] and the references therein for more discussion of this conjecture, and see [KMS18] for some recent significant progress.
Theorem 1.1 (i.e. Theorem 1.8) gives a rather definitive statement on the two candidate voting method that is most stable to corruption of votes. Moreover, the application of Theorem 1.1 gives a complete understanding of the optimal algorithm for solving MAX-CUT, assuming the Unique Games Conjecture. Unfortunately, the proof of Theorem 1.1 says nothing about elections with $m>2$ candidates. Moreover, Theorem 1.1 fails to prove optimality of the Frieze-Jerrum [FJ95] semidefinite programming algorithm for the MAX-m-CUT problem. In the MAX-m-CUT problem, we are given a finite undirected graph on n vertices, and the objective of the problem is to find a partition of the vertices of the graph into m sets that maximises the number of edges going between distinct sets. So, MAX-CUT is the same as MAX-2-CUT.
In order to prove the optimality of the Frieze-Jerrum [FJ95] semidefinite programming algorithm for the MAX-m-CUT problem, one would need an analogue of Theorem 1.1 for $m>2$ candidates, where the plurality function replaces the majority function. For this reason, it was conjectured [KKMO07, IM12] that the plurality function is the voting method that is most stable to independent, random vote corruption.
Conjecture 1.2 Plurality Is Stablest, Informal Version [KKMO07], [IM12, Conjecture 1.9]
Suppose we run an election with a large number n of voters and $m\geq 3$ candidates. We make the following assumptions about voter behaviour and about the election method.
• Voters cast their votes randomly, independently, with equal probability of voting for each candidate.
• Each voter has a small influence on the outcome of the election. (That is, all influences from Equation 5 are small for the voting method.)
• Each candidate has an equal chance of winning the election.
Under these assumptions, the plurality function is the voting method that best preserves the outcome of the election when votes have been corrupted independently each with probability less than $1/2$.
We say that a vote $x_{i}\in \{1,\ldots ,m\}$ is corrupted with probability $0<\delta <1$ when, with probability $\delta $, the vote $x_{i}$ is changed to a uniformly random element of $\{1,\ldots ,m\}$, and with probability $1-\delta $, the vote $x_{i}$ is unchanged.
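A minimal sketch of this corruption model (our own illustration; the function name is ours): each vote is re-randomised with probability $\delta $, so a single vote is unchanged with probability $1-\delta +\delta /m$, since a re-randomised vote can land back on its old value.

```python
import random

def corrupt(votes, delta, m, rng):
    """With probability delta, replace each vote (independently) by a uniformly
    random element of {1, ..., m}; otherwise leave it unchanged."""
    return [rng.randrange(1, m + 1) if rng.random() < delta else v for v in votes]

m, delta = 3, 0.4
p_same = 1 - delta + delta / m  # P(corrupted vote == original vote)
rng = random.Random(0)
kept = sum(v == 1 for v in corrupt([1] * 100000, delta, m, rng)) / 100000
assert abs(kept - p_same) < 0.01  # 0.7333... up to Monte Carlo error
```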
In the case that the probability of vote corruption goes to zero, the first author proved the first known case of Conjecture 1.2 in [Hei19], the culmination of a series of previous works [CM12, MR15, BBJ17, Hei17, MN18a, MN18b, Hei18]. Conjecture 1.2 for all fixed parameters $0<\rho <1$ was entirely open until now. Unlike the case of majority is stablest (Theorem 1.8), Conjecture 1.2 cannot hold when the candidates have unequal chances of winning the election [HMN16]. This realization is an obstruction to proving Conjecture 1.2: it suggests that existing proof methods for Theorem 1.8 cannot apply to Conjecture 1.2.
Nevertheless, we are able to overcome this obstruction in the present work.
Theorem 1.3 Main Result, Informal Version
There exists $\varepsilon>0$ such that Conjecture 1.2 holds for $m=3$ candidates, for all $n\geq 1$, when the probability of a single vote being corrupted is any number in the range $(1/2-\varepsilon ,1/2)$.
Theorem 1.3 is the first proven case of the plurality is stablest conjecture (Conjecture 1.2).
1.2 More Formal Introduction
Using a generalization of the central limit theorem known as the invariance principle [MOO10, IM12], there is an equivalence between the discrete problem of Conjecture 1.2 and a continuous problem known as the standard simplex conjecture [IM12]. For more details on this equivalence, see Section 7 of [IM12]. We begin by providing some background for the latter conjecture, stated in Conjecture 1.6.
For any $k\geq 1$, we define the Gaussian density as $\gamma _{k}(x)\colon =(2\pi )^{-k/2}e^{-\left \|x\right \|^{2}/2}$ for all $x\in \mathbb {R}^{k}$.
Let $z_{1},\ldots ,z_{m}\in \mathbb {R}^{n+1}$ be the vertices of a regular simplex in $\mathbb {R}^{n+1}$ centred at the origin. For any $1\leq i\leq m$, define
We refer to any sets satisfying (2) as cones over a regular simplex.
Let $f\colon \mathbb {R}^{n+1}\to [0,1]$ be measurable and let $\rho \in (-1,1)$. Define the Ornstein–Uhlenbeck operator with correlation $\rho $ applied to f by
$T_{\rho }$ is a parametrization of the Ornstein–Uhlenbeck operator, which gives a fundamental solution of the (Gaussian) heat equation
Here $\overline {\Delta }\colon =\sum _{i=1}^{n+1}\partial ^{2}/\partial x_{i}^{2}$ and $\overline {\nabla }$ is the usual gradient on $\mathbb {R}^{n+1}$. $T_{\rho }$ is not a semigroup, but it satisfies $T_{\rho _{1}}T_{\rho _{2}}=T_{\rho _{1}\rho _{2}}$ for all $\rho _{1},\rho _{2}\in (0,1)$. We have chosen this definition because the usual Ornstein–Uhlenbeck operator is only defined for $\rho \in [0,1]$.
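A one-dimensional numerical sketch (ours; the quadrature grid and the test function are arbitrary choices) of the definition $T_{\rho }f(x)=\mathbb {E}\,f(\rho x+\sqrt {1-\rho ^{2}}\,Z)$ with $Z$ standard Gaussian: the Hermite polynomial $\mathrm {He}_{2}(x)=x^{2}-1$ is an eigenfunction of $T_{\rho }$ with eigenvalue $\rho ^{2}$, and the composition rule $T_{\rho _{1}}T_{\rho _{2}}=T_{\rho _{1}\rho _{2}}$ can be checked numerically.

```python
import math

def T(rho, f, x, n=800, L=8.0):
    """One-dimensional Ornstein-Uhlenbeck operator:
    T_rho f(x) = E f(rho*x + sqrt(1-rho^2) * Z), Z ~ N(0,1),
    approximated by a midpoint rule against the Gaussian density on [-L, L]."""
    h = 2 * L / n
    s = math.sqrt(1 - rho ** 2)
    total = 0.0
    for k in range(n):
        z = -L + (k + 0.5) * h
        total += f(rho * x + s * z) * math.exp(-z * z / 2) * h
    return total / math.sqrt(2 * math.pi)

he2 = lambda x: x * x - 1  # He_2; an eigenfunction: T_rho He_2 = rho^2 He_2
x0 = 1.3
assert abs(T(0.6, he2, x0) - 0.36 * he2(x0)) < 1e-7
# composition rule T_{rho1} T_{rho2} = T_{rho1 * rho2}:
lhs = T(0.5, lambda y: T(0.6, he2, y), x0)
assert abs(lhs - T(0.3, he2, x0)) < 1e-6
```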
Definition 1.4 Noise Stability
Let $\Omega \subseteq \mathbb {R}^{n+1}$ be measurable. Let $\rho \in (-1,1)$. We define the noise stability of the set $\Omega $ with correlation $\rho $ to be $\int _{\mathbb {R}^{n+1}}1_{\Omega }(x)T_{\rho }1_{\Omega }(x)\gamma _{n+1}(x)\,\mathrm {d} x$.
Equivalently, if $X=(X_{1},\ldots ,X_{n+1}),Y=(Y_{1},\ldots ,Y_{n+1})\in \mathbb {R}^{n+1}$ are $(n+1)$-dimensional jointly Gaussian distributed random vectors with $\mathbb {E} X_{i}Y_{j}=\rho \cdot 1_{(i=j)}$ for all $i,j\in \{1,\ldots ,n+1\}$, then the noise stability of $\Omega $ with correlation $\rho $ is $\mathbb {P}(X\in \Omega ,\,Y\in \Omega )$.
Maximising the noise stability of a Euclidean partition is the continuous analogue of finding a voting method that is most stable to random corruption of votes among voting methods where each voter has a small influence on the election’s outcome.
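As a sanity check on this definition, a half-space of Gaussian measure $1/2$ has the classical closed-form noise stability $\mathbb {P}(X_{1}\leq 0,Y_{1}\leq 0)=\frac {1}{4}+\frac {\arcsin \rho }{2\pi }$ (Sheppard's formula). A Monte Carlo sketch, our own illustration; only the first coordinate matters for a half-space, so the simulation is one-dimensional:

```python
import math, random

def halfspace_stability(rho, n_samples=200000, seed=1):
    """Monte Carlo estimate of the noise stability of Omega = {x : x_1 <= 0}:
    sample (X_1, Y_1) jointly Gaussian with correlation rho, count both <= 0."""
    rng = random.Random(seed)
    s = math.sqrt(1 - rho ** 2)
    hits = 0
    for _ in range(n_samples):
        x = rng.gauss(0.0, 1.0)
        y = rho * x + s * rng.gauss(0.0, 1.0)  # Corr(x, y) = rho
        hits += (x <= 0) and (y <= 0)
    return hits / n_samples

rho = 0.5
exact = 0.25 + math.asin(rho) / (2 * math.pi)  # Sheppard: 1/4 + arcsin(rho)/(2 pi)
assert abs(halfspace_stability(rho) - exact) < 0.01
```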
Problem 1.5 Standard Simplex Problem [IM12]
Let $m\geq 3$. Fix $a_{1},\ldots ,a_{m}>0$ such that $\sum _{i=1}^{m}a_{i}=1$. Fix $\rho \in (0,1)$. Find measurable sets $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ with $\cup _{i=1}^{m}\Omega _{i}=\mathbb {R}^{n+1}$ and $\gamma _{n+1}(\Omega _{i})=a_{i}$ for all $1\leq i\leq m$ that maximise $\sum _{i=1}^{m}\int _{\mathbb {R}^{n+1}}1_{\Omega _{i}}(x)T_{\rho }1_{\Omega _{i}}(x)\gamma _{n+1}(x)\,\mathrm {d} x$,
subject to the above constraints. (Here $\gamma _{n+1}(\Omega _{i})\colon =\int _{\Omega _{i}}\gamma _{n+1}(x)\,\mathrm {d} x \ \forall \ 1\leq i\leq m$.)
We can now state the continuous version of Conjecture 1.2.
Conjecture 1.6 Standard Simplex Conjecture [IM12]
Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ maximise Problem 1.5. Assume that $m-1\leq n+1$. Fix $\rho \in (0,1)$. Let $z_{1},\ldots ,z_{m}\in \mathbb {R}^{n+1}$ be the vertices of a regular simplex in $\mathbb {R}^{n+1}$ centred at the origin. Then $\exists \ w\in \mathbb {R}^{n+1}$ such that, for all $1\leq i\leq m$,
It is known that Conjecture 1.6 is false when $(a_{1},\ldots ,a_{m})\neq (1/m,\ldots ,1/m)$ [HMN16]. In the remaining case that $a_{i}=1/m$ for all $1\leq i\leq m$, it is assumed that $w=0$ in Conjecture 1.6.
For expositional simplicity, we separately address the case $\rho <0$ of Conjecture 1.6 in Section 7.
1.3 Plurality Is Stablest Conjecture
As previously mentioned, the standard simplex conjecture [IM12] stated in Conjecture 1.6 is essentially equivalent to the plurality is stablest conjecture from Conjecture 1.2. After making several definitions, we state a formal version of Conjecture 1.2 as Conjecture 1.7.
If $g\colon \{1,\ldots ,m\}^{n}\to \mathbb {R}$ and $1\leq i\leq n$, we denote
Define also the $i$th influence of g – that is, the influence of the $i$th voter on g – as
Let $\Delta _{m}\colon =\{x=(x_{1},\ldots ,x_{m})\in \mathbb {R}^{m}\colon \sum _{i=1}^{m}x_{i}=1,\ x_{i}\geq 0\ \forall \ 1\leq i\leq m\}$.
If $f\colon \{1,\ldots ,m\}^{n}\to \Delta _{m}$, we denote the coordinates of f as $f=(f_{1},\ldots ,f_{m})$. For any $\omega \in \mathbb {Z}^{n}$, we denote $\left \|\omega \right \|_{0}$ as the number of nonzero coordinates of $\omega $. The noise stability of $g\colon \{1,\ldots ,m\}^{n}\to \mathbb {R}$ with parameter $\rho \in (-1,1)$ is
Equivalently, conditional on $\omega $, $\mathbb {E}_{\rho }g(\delta )$ is defined so that for all $1\leq i\leq n$, $\delta _{i}=\omega _{i}$ with probability $\frac {1+(m-1)\rho }{m}$, and $\delta _{i}$ is equal to each of the other $m-1$ elements of $\{1,\ldots ,m\}$ with probability $\frac {1-\rho }{m}$ (so that these probabilities sum to $1$), with $\delta _{1},\ldots ,\delta _{n}$ independent.
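A small sanity check (our own sketch) of this transition rule: with staying probability $\frac {1+(m-1)\rho }{m}$ (the sign that makes the probabilities sum to $1$) and moving probability $\frac {1-\rho }{m}$, the kernel is a probability distribution and reduces to the uniform distribution at $\rho =0$.

```python
def noise_kernel(m, rho):
    """Transition probabilities of the discrete noise: delta_i stays equal to
    omega_i with probability (1 + (m-1)*rho)/m and moves to each of the other
    m-1 values with probability (1 - rho)/m."""
    return (1 + (m - 1) * rho) / m, (1 - rho) / m

m, rho = 3, 0.2
p_stay, p_move = noise_kernel(m, rho)
assert abs(p_stay + (m - 1) * p_move - 1) < 1e-12  # a probability distribution
assert abs((p_stay - p_move) - rho) < 1e-12        # stay/move gap is exactly rho
assert noise_kernel(m, 0.0) == (1 / m, 1 / m)      # rho = 0: delta is uniform
```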
The noise stability of $f\colon \{1,\ldots ,m\}^{n}\to \Delta _{m}$ with parameter $\rho \in (-1,1)$ is
Let $m\geq 2$ and $n\geq 1$. For each $j\in \{1,\ldots ,m\}$, let $e_{j}=(0,\ldots ,0,1,0,\ldots ,0)\in \mathbb {R}^{m}$ be the jth unit coordinate vector. Define the plurality function $\mathrm {PLUR}_{m,n}\colon \{1,\ldots ,m\}^{n}\to \Delta _{m}$ for m candidates and n voters so that, for all $\omega \in \{1,\ldots ,m\}^{n}$, $\mathrm {PLUR}_{m,n}(\omega )=e_{j}$ whenever candidate $j$ receives strictly more votes in $\omega $ than every other candidate.
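A sketch implementation of the plurality function (our own illustration; the paper's precise tie-breaking convention is not shown in this excerpt, so ties are broken uniformly at random here, which is one common choice):

```python
import random
from collections import Counter

def plurality(votes, m, rng=random.Random(0)):
    """PLUR_{m,n}: the unit vector e_j in R^m of a candidate j receiving the
    most votes; ties broken uniformly at random (one possible convention)."""
    counts = Counter(votes)
    best = max(counts.values())
    winner = rng.choice(sorted(j for j in counts if counts[j] == best))
    return tuple(1 if j == winner else 0 for j in range(1, m + 1))

assert plurality([1, 2, 2, 3, 2], 3) == (0, 1, 0)  # candidate 2 wins
assert plurality([3, 3, 1], 3) == (0, 0, 1)        # candidate 3 wins
```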
We can now state the more formal version of Conjecture 1.2.
Conjecture 1.7 Plurality Is Stablest, Discrete Version
For any $m\geq 2$, $\rho \in [0,1]$, $\varepsilon>0$, there exists $\tau>0$ such that if $f\colon \{1,\ldots ,m\}^{n}\to \Delta _{m}$ satisfies $\mathrm {Inf}_{i}(f_{j})\leq \tau $ for all $1\leq i\leq n$ and for all $1\leq j\leq m$, and if $\mathbb {E} f=\frac {1}{m}\sum _{i=1}^{m}e_{i}$, then
The main result of the present article (stated in Theorem 1.10) is $\exists \ \rho _{0}>0$ such that Conjecture 1.7 is true for $m=3$ for all $0<\rho <\rho _{0}$, for all $n\geq 1$. The only previously known case of Conjecture 1.7 was the following.
Theorem 1.8 Majority Is Stablest, Formal, Biased Case [MOO10, Theorem 4.4]
Conjecture 1.7 is true when $m=2$.
For an even more general version of Theorem 1.8, see [MOO10, Theorem 4.4]. In particular, the assumption on $\mathbb {E} f$ can be removed, though we know that this cannot be done for $m\geq 3$ [HMN16].
1.4 Our Contribution
The main structure theorem below implies that sets optimising noise stability in Problem 1.5 are inherently low-dimensional. Though this might seem intuitively true, since many inequalities involving the Gaussian measure have low-dimensional optimisers, it had not been proven before. For example, Theorem 1.9 was listed as an open question in [DMN17, DMN18] and [GKR18]. Indeed, the lack of such a structure theorem has been one of the main obstructions to a solution of Conjectures 1.6 and 1.7.
Theorem 1.9 Main Structure Theorem/Dimension Reduction
Fix $\rho \in (0,1)$. Let $m\geq 2$ with $m\leq n+2$. Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ maximise Problem 1.5. Then, after rotating the sets $\Omega _{1},\ldots ,\Omega _{m}$ and applying Lebesgue measure zero changes to these sets, there exist measurable sets $\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{m-1}$ such that $\Omega _{i}=\Omega _{i}'\times \mathbb {R}^{n+2-m}$ for all $1\leq i\leq m$.
In the case $m=2$, Theorem 1.9 is (almost) a variational proof of Borell’s inequality, because it reduces Problem 1.5 to a 1-dimensional problem.
In the case $m=3$, Theorem 1.9 says that Conjecture 1.6 for arbitrary $n+1$ reduces to the case $n+1=2$, which was solved for small $\rho>0$ in [Hei14]. That is, Theorem 1.9 and the main result of [Hei14] imply the following.
Theorem 1.10 Main; Plurality Is Stablest for Three Candidates and Small Correlation
There exists $\rho _{0}>0$ such that Conjecture 1.7 is true for $m=3$ and for all $0<\rho <\rho _{0}$.
In [Hei14] it is noted that $\rho _{0}=e^{-20\cdot 3^{10^{14}}}$ suffices in Theorem 1.10.
We can also prove a version of Theorem 1.9 when $\rho <0$. See Theorem 7.9 and the discussion in Section 7. One difficulty in proving Theorem 1.9 directly for $\rho <0$ is that it is not a priori obvious that a minimiser of Problem 1.5 exists in that case.
1.5 Noninteractive Simulation of Correlated Distributions
As mentioned above, Theorem 1.9 answers a question in [DMN17, DMN18] and [GKR18]. Their interest in Theorem 1.9 stems from the following problem. Let $(X,Y)\in \mathbb {R}^{n}\times \mathbb {R}^{n}$ be a random vector. Let $(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots $ be independent and identically distributed copies of $(X,Y)$. Suppose there are two players A and B. Player A has access to $X_{1},X_{2},\ldots $ and player B has access to $Y_{1},Y_{2},\ldots $. Without communication, what joint distributions can players A and B simulate? For details on the relation of this problem to Theorem 1.9, see [DMN17, DMN18] and [GKR18].
1.6 Outline of the Proof of the Structure Theorem
In this section we outline the proof of Theorem 1.9 in the case $m=2$. The proof loosely follows that of a corresponding statement [MR15, BBJ17] for the Gaussian surface area (which was then adapted to multiple sets in [MN18a, MN18b, Hei18]), with a few key differences. For didactic purposes, we will postpone a discussion of technical difficulties (such as existence and regularity of a maximiser) to Subsection 2.1.
Fix $0<a<1$. Suppose $\Omega ,\Omega ^{c}\subseteq \mathbb {R}^{n+1}$ are measurable sets maximising
subject to the constraint $\gamma _{n+1}(\Omega )=a$. A first variation argument (Lemma 3.1) implies that $\Sigma \colon =\partial \Omega $ is a level set of the Ornstein–Uhlenbeck operator applied to $1_{\Omega }$. That is, there exists $c\in \mathbb {R}$ such that
Because $\Sigma $ is a level set, a vector perpendicular to the level set is also perpendicular to $\Sigma $. Denoting $N(x)\in \mathbb {R}^{n+1}$ as the unit length exterior pointing normal vector to $x\in \partial \Omega $, (7) implies that
(It is not obvious that there must be a negative sign here, but it follows from examining the second variation.) We now observe how the noise stability of $\Omega $ changes as the set is translated infinitesimally. Fix $v\in \mathbb {R}^{n+1}$ and consider the variation of $\Omega $ induced by the constant vector field v. That is, let $\Psi \colon \mathbb {R}^{n+1}\times (-1,1)\to \mathbb {R}^{n+1}$ such that $\Psi (x,0)=x$ and such that $\frac {\mathrm {d}}{\mathrm {d} s}|_{s=0}\Psi (x,s)=v$ for all $x\in \mathbb {R}^{n+1},s\in (-1,1)$. For any $s\in (-1,1)$, let $\Omega ^{(s)}=\Psi (\Omega ,s)$. Note that $\Omega ^{(0)}=\Omega $. Denote $f(x)\colon = \langle v,N(x)\rangle $ for all $x\in \Sigma $. Then define
A second variation argument (Lemma 4.5) implies that, if f is Gaussian volume preserving – that is, $\int _{\Sigma }f(x)\gamma _{n+1}(x)\,\mathrm {d} x=0$ – then
Somewhat unexpectedly, the function $f(x)=\langle v,N(x)\rangle $ is almost an eigenfunction of the operator S (by Lemma 5.1), in the sense that
Equation (10) is the key fact used in the proof of the main theorem, Theorem 1.9. Equation (10) follows from (8) and the divergence theorem (see Lemma 5.1 for a proof of (10)). Plugging (10) into (9),
The set
has dimension at least n, by the rank-nullity theorem. Because $\Omega $ maximises noise stability, the quantity on the right of (11) must be nonpositive for all $v\in V$, implying that $f=0$ on $\Sigma $, except possibly on a set of measure zero on $\Sigma $. (One can show that $\left \|\overline {\nabla }T_{\rho }1_{\Omega }(x)\right \|>0$ for all $x\in \Sigma $; see Lemma 4.8.) That is, for all $v\in V$, $\langle v,N(x)\rangle =0$ for all $x\in \Sigma $, except possibly on a set of measure zero. Because V has dimension at least n, there exists a measurable set $\Omega '\subseteq \mathbb {R}$ such that, after rotating $\Omega $, we have $\Omega =\Omega '\times \mathbb {R}^{n}$, concluding the proof of Theorem 1.9 in the case $m=2$.
Theorem 1.9 follows from the realization that all of the above steps still hold for arbitrary m in Problem 1.5. In particular, the key lemma (10) still holds. See Lemmas 5.1 and 5.4.
Remark 1.11. In the case that we replace the Gaussian noise stability of $\Omega $ with the Euclidean heat content
the corresponding operator $\overline {S}$ from the second variation of the Euclidean heat content satisfies
and then the analogue of (9) for $f(x)\colon =\langle v,N(x)\rangle $ is
so that the second variation corresponding to $f=\langle v,N\rangle $ is automatically zero. This fact is expected, because a translation does not change the Euclidean heat content. However, this example demonstrates that the key property of the above proof is exactly (10). More specifically, f is an ‘almost eigenfunction’ of S with ‘eigenvalue’ $1/\rho $ that is larger than $1$. It seems plausible that other semigroups could also satisfy an identity such as (10), because (10) seems related to hypercontractivity. We leave this open for further research.
2 Existence and Regularity
2.1 Preliminaries and Notation
We say that $\Sigma \subseteq \mathbb {R}^{n+1}$ is an n-dimensional $C^{\infty }$ manifold with boundary if $\Sigma $ can be locally written as the graph of a $C^{\infty }$ function on a relatively open subset of $\{(x_{1},\ldots ,x_{n})\in \mathbb {R}^{n}\colon x_{n}\geq 0\}$. For any $(n+1)$-dimensional $C^{\infty }$ manifold $\Omega \subseteq \mathbb {R}^{n+1}$ such that $\partial \Omega $ itself has a boundary, we denote
We also denote $C_{0}^{\infty }(\Omega )\colon = C_{0}^{\infty }(\Omega ;\mathbb {R})$. We let $\mathrm {div}$ denote the divergence of a vector field in $\mathbb {R}^{n+1}$. For any $r>0$ and for any $x\in \mathbb {R}^{n+1}$, we let $B(x,r)\colon =\{y\in \mathbb {R}^{n+1}\colon \left \|x-y\right \|\leq r\}$ be the closed Euclidean ball of radius r centred at $x\in \mathbb {R}^{n+1}$. Here $\partial \partial \Omega $ refers to the $(n-1)$-dimensional boundary of $\partial \Omega $.
Definition 2.1 Reduced Boundary
A measurable set $\Omega \subseteq \mathbb {R}^{n+1}$ has locally finite surface area if, for any $r>0$,
Equivalently, $\Omega $ has locally finite surface area if $\nabla 1_{\Omega }$ is a vector-valued Radon measure such that, for any $x\in \mathbb {R}^{n+1}$, the total variation
is finite [CL12]. If $\Omega \subseteq \mathbb {R}^{n+1}$ has locally finite surface area, we define the reduced boundary $\partial ^{*} \Omega $ of $\Omega $ to be the set of points $x\in \mathbb {R}^{n+1}$ such that
exists, and it is exactly one element of $S^{n}\colon =\{x\in \mathbb {R}^{n+1}\colon \left \|x\right \|=1\}$.
The reduced boundary $\partial ^{*}\Omega $ is a subset of the topological boundary $\partial \Omega $. Also, $\partial ^{*}\Omega $ and $\partial \Omega $ coincide with the support of $\nabla 1_{\Omega }$, except for a set of n-dimensional Hausdorff measure zero.
Let $\Omega \subseteq \mathbb {R}^{n+1}$ be an $(n+1)$-dimensional $C^{2}$ submanifold with reduced boundary $\Sigma \colon =\partial ^{*} \Omega $. Let $N\colon \Sigma \to S^{n}$ be the unit exterior normal to $\Sigma $. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. We write X in its components as $X=(X_{1},\ldots ,X_{n+1})$, so that $\mathrm {div}X=\sum _{i=1}^{n+1}\frac {\partial }{\partial x_{i}}X_{i}$. Let $\Psi \colon \mathbb {R}^{n+1}\times (-1,1)\to \mathbb {R}^{n+1}$ such that
For any $s\in (-1,1)$, let $\Omega ^{(s)}\colon =\Psi (\Omega ,s)$. Note that $\Omega ^{(0)}=\Omega $. Let $\Sigma ^{(s)}\colon =\partial ^{*}\Omega ^{(s)}$, $\forall \ s\in (-1,1)$.
Definition 2.2. We call $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ as defined above a variation of $\Omega \subseteq \mathbb {R}^{n+1}$. We also call $\{\Sigma ^{(s)}\}_{s\in (-1,1)}$ a variation of $\Sigma =\partial ^{*}\Omega $.
For any $x\in \mathbb {R}^{n+1}$ and any $s\in (-1,1)$, define
Below, when appropriate, we let $\,\mathrm {d} x$ denote Lebesgue measure, restricted to a surface $\Sigma \subseteq \mathbb {R}^{n+1}$.
Lemma 2.3 Existence of a Maximiser
Let $0<\rho <1$ and let $m\geq 2$. Then there exist measurable sets $\Omega _{1},\ldots ,\Omega _{m}$ maximising Problem 1.5.
Proof. Define $\Delta _{m}$ as in (6). Let $f\colon \mathbb {R}^{n+1}\to \Delta _{m}$. We write f in its components as $f=(f_{1},\ldots ,f_{m})$. The set $D_{0}\colon =\{f\colon \mathbb {R}^{n+1}\to \Delta _{m}\}$ is norm closed, bounded and convex; therefore, it is weakly compact and convex. Consider the function
This function is weakly continuous on $D_{0}$, and $D_{0}$ is weakly compact, so there exists $\widetilde {f}\in D_{0}$ such that $C(\widetilde {f})=\max _{f\in D_{0}}C(f)$. Moreover, C is convex because for any $0<t<1$ and for any $f,g\in D_{0}$,
Here we used that
for all measurable $h\colon \mathbb {R}^{n+1}\to [-1,1]$.
Because C is convex, its maximum must be achieved at an extreme point of $D_{0}$. Let $e_{1},\ldots ,e_{m}$ denote the standard basis of $\mathbb {R}^{m}$; an extreme point f of $D_{0}$ takes its values in $\{e_{1},\ldots ,e_{m}\}$. Then, for any $1\leq i\leq m$, define $\Omega _{i}\colon =\{x\in \mathbb {R}^{n+1}\colon f(x)=e_{i}\}$, so that $f_{i}=1_{\Omega _{i}}\ \forall \ 1\leq i\leq m$.
Lemma 2.4 Regularity of a Maximiser
Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ be the measurable sets maximising Problem 1.5, guaranteed to exist by Lemma 2.3. Then the sets $\Omega _{1},\ldots ,\Omega _{m}$ have locally finite surface area. Moreover, for all $1\leq i\leq m$ and for all $x\in \partial \Omega _{i}$, there exists a neighbourhood U of x such that $U\cap \partial \Omega _{i}$ is a finite union of $C^{\infty } \ n$-dimensional manifolds.
Proof. This follows from a first variation argument and the strong unique continuation property for the heat equation. We first claim that there exist constants $(c_{ij})_{1\leq i<j\leq m}$ such that
By the Lebesgue density theorem [Ste70, 1.2.1, Proposition 1], we may assume that, for all $i\in \{1,\ldots ,m\}$, if $y\in \Omega _{i}$, then we have $\lim _{r\to 0}\gamma _{n+1}(\Omega _{i}\cap B(y,r))/\gamma _{n+1}(B(y,r))=1$.
We prove (16) by contradiction. Suppose there exist $c\in \mathbb {R}$, $j,k\in \{1,\ldots ,m\}$ with $j\neq k$ and there exists $y\in \Omega _{j}$ and $z\in \Omega _{k}$ such that
By (3), $T_{\rho }(1_{\Omega _{j}}-1_{\Omega _{k}})(x)$ is a continuous function of x. And by the Lebesgue density theorem, there exist disjoint measurable sets $U_{j},U_{k}$ of positive Lebesgue measure with $U_{j}\subseteq \Omega _{j}$, $U_{k}\subseteq \Omega _{k}$ and $\gamma _{n+1}(U_{j})=\gamma _{n+1}(U_{k})$, such that
We define a new partition of $\mathbb {R}^{n+1}$ such that $\widetilde {\Omega }_{j}\colon = U_{k}\cup \Omega _{j}\setminus U_{j}$, $\widetilde {\Omega }_{k}\colon = U_{j}\cup \Omega _{k}\setminus U_{k}$ and $\widetilde {\Omega }_{i}\colon =\Omega _{i}$ for all $i\in \{1,\ldots ,m\}\setminus \{j,k\}$. Then
This contradicts the maximality of $\Omega _{1},\ldots ,\Omega _{m}$. We conclude that (16) holds.
We now fix $1\leq i<j\leq m$ and we upgrade (16) by examining the level sets of
Fix $c\in \mathbb {R}$ and consider the level set
This level set has Hausdorff dimension at most n by [Che98, Theorem 2.3].
From the strong unique continuation property for the heat equation [Lin90], $T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)$ does not vanish to infinite order at any $x\in \mathbb {R}^{n+1}$, so the argument of [HS89, Lemma 1.9] (see [HL94, Proposition 1.2] and also [Che98, Theorem 2.1]) shows that in a neighbourhood of each $x\in \Sigma $, $\Sigma $ can be written as a finite union of $C^{\infty }$ manifolds. That is, there exists a neighbourhood U of x and there exists an integer $k\geq 1$ such that
Here $D^{i}$ denotes the array of all iterated partial derivatives of order $i\geq 1$. We therefore have
and the lemma follows.
From Lemma 2.4 and Definition 2.1, for all $1\leq i<j\leq m$, if $x\in \Sigma _{ij}$, then the unit normal vector $N_{ij}(x)\in \mathbb {R}^{n+1}$ that points from $\Omega _{i}$ into $\Omega _{j}$ is well defined on $\Sigma _{ij}$, $\big ((\partial \Omega _{i})\cap (\partial \Omega _{j})\big )\setminus \Sigma _{ij}$ has Hausdorff dimension at most $n-1$ and
In Lemma 4.5 we will show that the negative sign holds in (18) when $\Omega _{1},\ldots ,\Omega _{m}$ maximise Problem 1.5.
3 First and Second Variation
In this section, we recall some standard facts for variations of sets with respect to the Gaussian measure. Here is a summary of notation.
Summary of Notation.
• $T_{\rho }$ denotes the Ornstein–Uhlenbeck operator with correlation parameter $\rho \in (-1,1)$.
• $\Omega _{1},\ldots ,\Omega _{m}$ denotes a partition of $\mathbb {R}^{n+1}$ into m disjoint measurable sets.
• $\partial ^{*}\Omega $ denotes the reduced boundary of $\Omega \subseteq \mathbb {R}^{n+1}$.
• $\Sigma _{ij}\colon =(\partial ^{*}\Omega _{i})\cap (\partial ^{*}\Omega _{j})$ for all $1\leq i,j\leq m$.
• $N_{ij}(x)$ is the unit normal vector to $x\in \Sigma _{ij}$ that points from $\Omega _{i}$ into $\Omega _{j}$, so that $N_{ij}=-N_{ji}$.
Throughout the article, unless otherwise stated, we define $G\colon \mathbb {R}^{n+1}\times \mathbb {R}^{n+1}\to \mathbb {R}$ to be the following function. For all $x,y\in \mathbb {R}^{n+1}, \forall \rho \in (-1,1)$, define
We can then rewrite the noise stability from Definition 1.4 as
Our first and second variation formulas for the noise stability will be written in terms of G.
Lemma 3.1 The First Variation [CS07]; also [HMN16, Lemma 3.1, Equation (7)]
Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\Omega \subseteq \mathbb {R}^{n+1}$ be a measurable set such that $\partial \Omega $ is a locally finite union of $C^{\infty }$ manifolds. Let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega $. Then
The following lemma is a consequence of (20) and Lemma 2.4.
Lemma 3.2 The First Variation for Maximisers
Suppose that $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ maximise Problem 1.5. Then for all $1\leq i<j\leq m$, there exists $c_{ij}\in \mathbb {R}$ such that
Proof. Fix $1\leq i<j\leq m$ and denote $f_{ij}(x)\colon =\langle X(x),N_{ij}(x)\rangle $ for all $x\in \Sigma _{ij}$. From Lemma 3.1, if X is nonzero outside of $\Sigma _{ij}$, we get
Above, we used $N_{ij}=-N_{ji}$. If $T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})$ is nonconstant on $\Sigma _{ij}$, then we can construct $f_{ij}$ supported in $\Sigma _{ij}$ with $\int _{\partial ^{*}\Omega _{i'}}f_{ij}(x)\gamma _{n+1}(x)\,\mathrm {d} x=0$ for all $1\leq i'\leq m$ that gives a nonzero derivative, contradicting the maximality of $\Omega _{1},\ldots ,\Omega _{m}$ (as in Lemma 2.4 and (17)).
Theorem 3.3 General Second Variation Formula [Reference Choksi and SternbergCS07, Theorem 2.6]; also [Reference HeilmanHei15, Theorem 1.10]
Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\Omega \subseteq \mathbb {R}^{n+1}$ be a measurable set such that $\partial \Omega $ is a locally finite union of $C^{\infty }$ manifolds. Let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega $. Define V as in (14). Then
4 Noise Stability and the Calculus of Variations
We now further refine the first and second variation formulas from the previous section. The following lemma follows by using $G(x,y)\colon =\gamma _{n+1}(x)\gamma _{n+1}(y)$ for all $x,y\in \mathbb {R}^{n+1}$ in Lemma 3.1 and in Theorem 3.3.
Lemma 4.1 Variations of Gaussian Volume [Reference LedouxLed01]
Let $\Omega \subseteq \mathbb {R}^{n+1}$ be a measurable set such that $\partial \Omega $ is a locally finite union of $C^{\infty }$ manifolds. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega $. Denote $f(x)\colon =\langle X(x),N(x)\rangle $ for all $x\in \Sigma \colon = \partial ^{*}\Omega $. Then
Lemma 4.2 Extension Lemma for Existence of Volume-Preserving Variations [Reference HeilmanHei18, Lemma 3.9]
Let $X'\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$ be a vector field. Define $f_{ij}\colon =\langle X',N_{ij}\rangle \in C_{0}^{\infty }(\Sigma _{ij})$ for all $1\leq i<j\leq m$. If
then $X'|_{\cup _{1\leq i<j\leq m}\Sigma _{ij}}$ can be extended to a vector field $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$ such that the corresponding variations $\{\Omega _{i}^{(s)}\}_{1\leq i\leq m,s\in (-1,1)}$ satisfy
Lemma 4.3. Define G as in (19). Let $f\colon \Sigma \to \mathbb {R}$ be continuous and compactly supported. Then
Proof. If $g\colon \mathbb {R}^{n+1}\to \mathbb {R}$ is continuous and compactly supported, then it is well known that
because, for example, $\frac {G(x,y)}{\gamma _{n+1}(x)\gamma _{n+1}(y)}$ is the Mehler kernel, which can be written as an (infinite-dimensional) positive semidefinite matrix. That is, there exists an orthonormal basis $\{\psi _{i}\}_{i=1}^{\infty }$ of $L_{2}(\gamma _{n+1})$ (of Hermite polynomials) and there exists a sequence of nonnegative real numbers $\{\lambda _{i}\}_{i=1}^{\infty }$ such that the following series converges absolutely pointwise:
From Mercer’s theorem, this is equivalent to: for all $p\geq 1$, for all $z^{(1)},\ldots ,z^{(p)}\in \mathbb {R}^{n+1}$, for all $\beta _{1},\ldots ,\beta _{p}\in \mathbb {R}$,
In particular, this holds for all $z^{(1)},\ldots ,z^{(p)}\in \partial \Omega \subseteq \mathbb {R}^{n+1}$. So, the positive semidefinite property carries over (by restriction) to $\partial \Omega $.
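The two claims of this proof admit a one-dimensional numerical sanity check, assuming the standard Mehler kernel $M_{\rho }(x,y)=(1-\rho ^{2})^{-1/2}\exp (-(\rho ^{2}(x^{2}+y^{2})-2\rho xy)/(2(1-\rho ^{2})))$ with Hermite expansion $\sum _{k\geq 0}\rho ^{k}\mathrm {He}_{k}(x)\mathrm {He}_{k}(y)/k!$; the grid, seed and truncation level below are arbitrary choices for illustration:

```python
import math
import numpy as np

rho = 0.5

def mehler(x, y):
    # Closed form of the Mehler kernel M_rho(x, y) in one dimension.
    return np.exp(-(rho**2 * (x**2 + y**2) - 2 * rho * x * y)
                  / (2 * (1 - rho**2))) / math.sqrt(1 - rho**2)

def hermite_series(x, y, terms=60):
    # Truncated series sum_k rho^k He_k(x) He_k(y) / k!, using the
    # probabilists' Hermite recurrence He_{k+1}(t) = t He_k(t) - k He_{k-1}(t).
    hx_prev, hx = 1.0, x
    hy_prev, hy = 1.0, y
    total = 1.0 + rho * x * y
    fact = 1.0
    for k in range(2, terms):
        hx_prev, hx = hx, x * hx - (k - 1) * hx_prev
        hy_prev, hy = hy, y * hy - (k - 1) * hy_prev
        fact *= k
        total += rho**k * hx * hy / fact
    return total

# The truncated Hermite series matches the closed form on a grid...
grid = np.linspace(-2.0, 2.0, 9)
series_err = max(abs(mehler(x, y) - hermite_series(x, y))
                 for x in grid for y in grid)

# ...and the kernel is positive semidefinite: Gram matrices at arbitrary
# points have no (numerically) negative eigenvalues.
pts = np.random.default_rng(0).uniform(-3, 3, size=40)
gram = mehler(pts[:, None], pts[None, :])
min_eig = float(np.linalg.eigvalsh(gram).min())
```

The positive semidefiniteness of the Gram matrix is exactly the Mercer-type condition displayed above, restricted to finitely many points.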
4.1 Two Sets
For didactic purposes, we first present the second variation of noise stability when $m=2$ in Problem 1.5.
Lemma 4.4 Second Variation of Noise Stability
Let $\Omega \subseteq \mathbb {R}^{n+1}$ be a measurable set such that $\partial \Omega $ is a locally finite union of $C^{\infty }$ manifolds. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega $. Denote $f(x)\colon =\langle X(x),N(x)\rangle $ for all $x\in \Sigma \colon = \partial ^{*}\Omega $. Then
Proof. For all $x\in \mathbb {R}^{n+1}$, we have $V(x,0)\stackrel {(14)}{=}\int _{\Omega }G(x,y)\,\mathrm {d} y\stackrel {(3)}{=}\gamma _{n+1}(x)T_{\rho }1_{\Omega }(x)$. So, from Theorem 3.3,
That is, (22) holds.
Lemma 4.5 Volume-Preserving Second Variation of Maximisers
Suppose that $\Omega ,\Omega ^{c}\subseteq \mathbb {R}^{n+1}$ maximise Problem 1.5 for $0<\rho <1$ and $m=2$. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$ and let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega $. Denote $f(x)\colon =\langle X(x),N(x)\rangle $ for all $x\in \Sigma \colon = \partial ^{*}\Omega $. If
then there exists an extension of the vector field $X|_{\Sigma }$ such that the corresponding variation $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ satisfies
Moreover,
Proof. From Lemma 3.2, $T_{\rho }1_{\Omega }(x)$ is constant for all $x\in \Sigma $. So, from Lemma 4.1 and Lemma 4.2, the last term in (22) vanishes; that is,
(Here $\overline {\nabla }$ denotes the gradient in $\mathbb {R}^{n+1}$.) Because $T_{\rho }1_{\Omega }(x)$ is constant for all $x\in \partial \Omega $ by Lemma 3.2, $\overline {\nabla } T_{\rho }1_{\Omega }(x)$ is parallel to $N(x)$ for all $x\in \partial \Omega $. That is,
In fact, we must have a negative sign in (25); otherwise, we could find a vector field X supported near $x\in \partial \Omega $ such that (25) has a positive sign, and then because G is a positive semidefinite function by Lemma 4.3, we would have
a contradiction. In summary,
4.2 More Than Two Sets
We can now generalise Subsection 4.1 to the case of $m>2$ sets.
Lemma 4.6 Second Variation of Noise Stability, Multiple Sets
Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ be a partition of $\mathbb {R}^{n+1}$ into measurable sets such that $\partial \Omega _{i}$ is a locally finite union of $C^{\infty }$ manifolds for all $1\leq i\leq m$. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\{\Omega _{i}^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega _{i}$ for all $1\leq i\leq m$. Denote $f_{ij}(x)\colon =\langle X(x),N_{ij}(x)\rangle $ for all $x\in \Sigma _{ij}\colon = (\partial ^{*}\Omega _{i})\cap (\partial ^{*}\Omega _{j})$. We let N denote the exterior pointing unit normal vector to $\partial ^{*}\Omega _{i}$ for any $1\leq i\leq m$. Then
Proof. From Lemma 4.4,
Summing over $1\leq i\leq m$ and using $N_{ij}=-N_{ji}$ completes the proof.
Below, we need the following combinatorial lemma, the case $m=3$ being treated in [Reference Hutchings, Morgan, Ritoré and RosHMRR02, Proposition 3.3].
Lemma 4.7 [Reference HeilmanHei19, Lemma 4.6]
Let $m\geq 3$. Let
Let $x\in D_{1}$ and let $y\in D_{2}$. Then $\sum _{1\leq i<j\leq m}x_{ij}y_{ij}=0$.
Proof. $D_{1}$ is defined to be perpendicular to vectors in $D_{2}$ and vice versa. That is, $D_{1}$ and $D_{2}$ are orthogonal complements of each other, and in terms of vector spaces, $D_{1}\oplus D_{2}=\mathbb {R}^{\binom {m}{2}}$. Consequently, the inner product of any $x\in D_{1}$ and $y\in D_{2}$ is zero.
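The orthogonality mechanism of this proof can be checked numerically in a small hypothetical instance: take any spanning set for a subspace $D_{2}$ of $\mathbb {R}^{\binom {m}{2}}$, compute its orthogonal complement $D_{1}$, and verify that all inner products vanish. The specific $D_{1},D_{2}$ of the lemma are one such pair; the matrix below is an arbitrary stand-in:

```python
import numpy as np

# Hypothetical instance: m = 4, so the ambient space is R^{C(4,2)} = R^6.
# D2 is the span of a few arbitrary vectors; D1 is its orthogonal
# complement, read off from the SVD (right-singular vectors beyond the
# rank span the null space of A, i.e., the complement of the row space).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 6))   # rows span D2
_, s, vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())
d1_basis = vt[rank:]              # orthonormal basis of D1
d2_basis = vt[:rank]              # orthonormal basis of D2

# Every x in D1 is orthogonal to every y in D2, so sum_{ij} x_ij y_ij = 0.
max_dot = float(np.abs(d1_basis @ d2_basis.T).max())
```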
Lemma 4.8 Volume-Preserving Second Variation of Maximisers, Multiple Sets
Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ be a partition of $\mathbb {R}^{n+1}$ into measurable sets such that $\partial \Omega _{i}$ is a locally finite union of $C^{\infty }$ manifolds for all $1\leq i\leq m$. Let $X\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\{\Omega _{i}^{(s)}\}_{s\in (-1,1)}$ be the corresponding variation of $\Omega _{i}$ for all $1\leq i\leq m$. Denote $f_{ij}(x)\colon =\langle X(x),N_{ij}(x)\rangle $ for all $x\in \Sigma _{ij}\colon = (\partial ^{*}\Omega _{i})\cap (\partial ^{*}\Omega _{j}) $. We let N denote the exterior pointing unit normal vector to $\partial ^{*}\Omega _{i}$ for any $1\leq i\leq m$. Then
Also,
Moreover, $\|\overline {\nabla } T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)\|>0$ for all $x\in \Sigma _{ij}$, except on a set of Hausdorff dimension at most $n-1$.
Proof. From Lemma 3.2, there exist constants $(c_{ij})_{1\leq i<j\leq m}$ such that $T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)=c_{ij}$ for all $1\leq i<j\leq m$, for all $x\in \Sigma _{ij}$. So, from Lemma 4.6,
The last term then vanishes by Lemma 4.7. That is,
Meanwhile, if $1\leq i<j\leq m$ is fixed, it follows from Lemma 3.2 that
In fact, we must have a negative sign in (29); otherwise, we could find a vector field X supported near $x\in \Sigma _{ij}$ such that (29) has a positive sign, and then because G is a positive semidefinite function by Lemma 4.3, we would have
a contradiction. In summary,
5 Almost Eigenfunctions of the Second Variation
For didactic purposes, we first consider the case $m=2$, and we then later consider the case $m>2$.
5.1 Two Sets
Let $\Sigma \colon =\partial ^{*}\Omega $. For any bounded measurable $f\colon \Sigma \to \mathbb {R}$, define the following function (if it exists):
Lemma 5.1 Key Lemma, $m=2$, Translations as Almost Eigenfunctions
Let $\Omega ,\Omega ^{c}$ maximise Problem 1.5 for $m=2$. Let $v\in \mathbb {R}^{n+1}$. Then
Proof. Because $T_{\rho }1_{\Omega }(x)$ is constant for all $x\in \partial \Omega $ by Lemma 3.2, $\overline {\nabla } T_{\rho }1_{\Omega }(x)$ is parallel to $N(x)$ for all $x\in \partial \Omega $. That is, (24) holds; that is,
From Definition 3, and then using the divergence theorem,
Therefore,
Remark 5.2. To justify the use of the divergence theorem in (32), let $r>0$ and note that we can differentiate under the integral sign of $T_{\rho }1_{\Omega \cap B(0,r)}(x)$ to get
Fix $r'>0$. Fix $x\in \mathbb {R}^{n+1}$ with $\left \|x\right \|<r'$. The last integral in (33) over $\Omega \cap \partial B(0,r)$ goes to zero as $r\to \infty $ uniformly over all such $\left \|x\right \|<r'$. Also, $\overline {\nabla } T_{\rho }1_{\Omega }(x)$ exists a priori for all $x\in \mathbb {R}^{n+1}$, and
And the last integral goes to zero as $r\to \infty $, uniformly over all $\left \|x\right \|<r'$.
Lemma 5.3 Second Variation of Translations
Let $v\in \mathbb {R}^{n+1}$. Let $\Omega ,\Omega ^{c}$ maximise Problem 1.5 for $m=2$. Let $\{\Omega ^{(s)}\}_{s\in (-1,1)}$ be the variation of $\Omega $ corresponding to the constant vector field $X\colon = v$. Assume that
Then
Proof. Let $f(x)\colon =\langle v,N(x)\rangle $ for all $x\in \Sigma $. From Lemma 4.5,
Applying Lemma 5.1, $S(f)(x)=f(x)\frac {1}{\rho }\|\overline {\nabla }T_{\rho }1_{\Omega }(x)\|$ for all $x\in \Sigma $, proving the lemma. Note also that $\int _{\Sigma }\|\overline {\nabla }T_{\rho }1_{\Omega }(x)\|\langle v,N(x)\rangle ^{2}\gamma _{n+1}(x)\,\mathrm {d} x$ is finite a priori by the divergence theorem and (24):
5.2 More Than Two Sets
Let $v\in \mathbb {R}^{n+1}$ and denote $f_{ij}\colon =\langle v,N_{ij}\rangle $ for all $1\leq i,j\leq m$. For simplicity of notation in the formulas below, if $1\leq i\leq m$ and if a vector $N(x)$ appears inside an integral over $\partial \Omega _{i}$, then $N(x)$ denotes the unit exterior pointing normal vector to $\Omega _{i}$ at $x\in \partial ^{*}\Omega _{i}$. Similarly, for simplicity of notation, we denote $\langle v,N\rangle $ as the collection of functions $(\langle v,N_{ij}\rangle )_{1\leq i<j\leq m}$. For any $1\leq i<j\leq m$, define
Lemma 5.4 Key Lemma, $m\geq 2$, Translations as Almost Eigenfunctions
Let $\Omega _{1},\ldots ,\Omega _{m}$ maximise Problem 1.5. Fix $1\leq i<j\leq m$. Let $v\in \mathbb {R}^{n+1}$. Then
Proof. From Lemma 4.8 (i.e., (28)),
From Definition 3, and then using the divergence theorem,
The use of the divergence theorem is justified in Remark 5.2. Therefore,
Lemma 5.5 Second Variation of Translations, Multiple Sets
Let $v\in \mathbb {R}^{n+1}$. Let $\Omega _{1},\ldots ,\Omega _{m}$ maximise Problem 1.5. For each $1\leq i\leq m$, let $\{\Omega _{i}^{(s)}\}_{s\in (-1,1)}$ be the variation of $\Omega _{i}$ corresponding to the constant vector field $X\colon = v$. Assume that
Then
Proof. For any $1\leq i<j\leq m$, let $f_{ij}(x)\colon =\langle v,N_{ij}(x)\rangle $ for all $x\in \Sigma _{ij}$. From Lemma 4.8,
Applying Lemma 5.4, $S_{ij}(\langle v,N\rangle )(x)=f_{ij}(x)\frac {1}{\rho }\|\overline {\nabla }T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)\|$ for all $x\in \Sigma _{ij}$, proving the lemma. Note also that $\sum _{1\leq i<j\leq m}\int _{\Sigma _{ij}}\|\overline {\nabla }T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)\|\langle v,N_{ij}(x)\rangle ^{2}\gamma _{n+1}(x)\,\mathrm {d} x$ is finite a priori by the divergence theorem because
Summing over $1\leq i\leq m$ then gives
6 Proof of the Main Structure Theorem
Proof of Theorem 1.9. Let $m\geq 2$. Let $0<\rho <1$. Fix $a_{1},\ldots ,a_{m}>0$ such that $\sum _{i=1}^{m}a_{i}=1$. Let $\Omega _{1},\ldots ,\Omega _{m}\subseteq \mathbb {R}^{n+1}$ be measurable sets that partition $\mathbb {R}^{n+1}$, satisfy $\gamma _{n+1}(\Omega _{i})=a_{i}$ for all $1\leq i\leq m$ and maximise Problem 1.5. These sets exist by Lemma 2.3, and by Lemma 2.4 their boundaries are locally finite unions of $C^{\infty }$ $n$-dimensional manifolds. Define $\Sigma _{ij}\colon =(\partial ^{*}\Omega _{i})\cap (\partial ^{*}\Omega _{j})$ for all $1\leq i<j\leq m$.
By Lemma 3.2, for all $1\leq i<j\leq m$, there exists $c_{ij}\in \mathbb {R}$ such that
By this condition, the regularity Lemma 2.4 and the last part of Lemma 4.8,
Moreover, by the last part of Lemma 4.8, except for a set $\sigma _{ij}$ of Hausdorff dimension at most $n-1$, we have
Fix $v\in \mathbb {R}^{n+1}$ and consider the variation of $\Omega _{1},\ldots ,\Omega _{m}$ induced by the constant vector field $X\colon = v$. For all $1\leq i<j\leq m$, define $S_{ij}$ as in (34). Define
From Lemma 5.5,
Because $0<\rho <1$, (37) implies
The set V has dimension at least $n+2-m$, by the rank-nullity theorem, because V is the null space of the linear operator $M\colon \mathbb {R}^{n+1}\to \mathbb {R}^{m}$ defined by
and M has rank at most $m-1$ (because $\sum _{i=1}^{m}(M(v))_{i}=0$ for all $v\in \mathbb {R}^{n+1}$). So, by (38), after rotating $\Omega _{1},\ldots ,\Omega _{m}$, we conclude that there exist measurable $\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{m-1}$ such that
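The dimension count here is elementary linear algebra, and can be illustrated with a small hypothetical example; the matrix below is a stand-in satisfying the stated property $\sum _{i=1}^{m}(M(v))_{i}=0$, not the specific M of the proof:

```python
import numpy as np

# Hypothetical stand-in for the operator M: R^{n+1} -> R^m of the proof,
# with n + 1 = 10 and m = 3. The condition sum_i (M(v))_i = 0 for all v
# means the rows of M sum to the zero row, i.e., 1^T M = 0.
rng = np.random.default_rng(2)
n_plus_1, m = 10, 3
M = rng.standard_normal((m, n_plus_1))
M[-1] = -M[:-1].sum(axis=0)      # force the rows to sum to zero

rank = int(np.linalg.matrix_rank(M))
nullity = n_plus_1 - rank        # rank-nullity theorem

# rank <= m - 1, so the null space V has dimension
# >= (n + 1) - (m - 1) = n + 2 - m.
```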
7 The Case of Negative Correlation
In this section, we consider the case that $\rho <0$ in Problem 1.5. When $\rho <0$ and $h\colon \mathbb {R}^{n+1}\to [-1,1]$ is measurable, the quantity
could be negative, so a few parts of the above argument do not work, namely, the existence result of Lemma 2.3. We therefore replace the noise stability with a more general bilinear expression, guaranteeing existence for the corresponding problem. The remaining parts of the argument are essentially identical, mutatis mutandis. We indicate below where the arguments differ in the bilinear case.
When $\rho <0$, we look for a minimum of noise stability, rather than a maximum. Correspondingly, we expect that the plurality function minimises noise stability when $\rho <0$. If $\rho <0$, then (3) implies that
So, in order to understand the minimum of noise stability for negative correlations, it suffices to consider the following bilinear version of the standard simplex problem with positive correlation.
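The identity invoked above (whose display is omitted in this copy) is presumably the reflection property of the Ornstein–Uhlenbeck semigroup; a short derivation, assuming the standard definition $T_{\rho }h(x)=\mathbb {E}\,h(\rho x+\sqrt {1-\rho ^{2}}\,Z)$ with Z a standard Gaussian on $\mathbb {R}^{n+1}$:

```latex
% For measurable h and any x, replacing (x, \rho) by (-x, -\rho)
% leaves the argument of h unchanged:
T_{\rho}h(x)
  = \mathbb{E}\,h\!\left(\rho x + \sqrt{1-\rho^{2}}\,Z\right)
  = \mathbb{E}\,h\!\left((-\rho)(-x) + \sqrt{1-\rho^{2}}\,Z\right)
  = T_{-\rho}h(-x).
% Hence, substituting u = -x in the Gaussian integral,
\int_{\mathbb{R}^{n+1}} 1_{\Omega_{i}}(x)\,T_{\rho}1_{\Omega_{i}}(x)\,\gamma_{n+1}(x)\,\mathrm{d}x
  = \int_{\mathbb{R}^{n+1}} 1_{-\Omega_{i}}(u)\,T_{-\rho}1_{\Omega_{i}}(u)\,\gamma_{n+1}(u)\,\mathrm{d}u.
```

So minimising noise stability with correlation $\rho <0$ over partitions $(\Omega _{i})$ becomes a bilinear problem with correlation $-\rho >0$ applied to the two partitions $(\Omega _{i})$ and $(-\Omega _{i})$, which is the shape of Problem 7.1.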
Problem 7.1 Standard Simplex Problem, Bilinear Version, Positive Correlation [Reference Isaksson and MosselIM12]
Let $m\geq 3$. Fix $a_{1},\ldots ,a_{m}>0$ such that $\sum _{i=1}^{m}a_{i}=1$. Fix $0<\rho <1$. Find measurable sets $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ with $\cup _{i=1}^{m}\Omega _{i}=\cup _{i=1}^{m}\Omega _{i}'=\mathbb {R}^{n+1}$ and $\gamma _{n+1}(\Omega _{i})=\gamma _{n+1}(\Omega _{i}')=a_{i}$ for all $1\leq i\leq m$ that minimise
subject to the above constraints.
Conjecture 7.2 Standard Simplex Conjecture, Bilinear Version, Positive Correlation [Reference Isaksson and MosselIM12]
Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ minimise Problem 7.1. Assume that $m-1\leq n+1$. Fix $0<\rho <1$. Let $z_{1},\ldots ,z_{m}\in \mathbb {R}^{n+1}$ be the vertices of a regular simplex in $\mathbb {R}^{n+1}$ centred at the origin. Then there exists $w\in \mathbb {R}^{n+1}$ such that, for all $1\leq i\leq m$,
In the case that $a_{i}=1/m$ for all $1\leq i\leq m$, it is assumed that $w=0$ in Conjecture 7.2.
Because we consider a bilinear version of noise stability in Problem 7.1, the existence of an optimiser is easier to establish than in Problem 1.5.
Lemma 7.3 Existence of a Minimiser
Let $0<\rho <1$ and let $m\geq 2$. Then there exist measurable sets $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'$ that minimise Problem 7.1.
Proof. Define $\Delta _{m}$ as in (6). Let $f,g\colon \mathbb {R}^{n+1}\to \Delta _{m}$. The set $D_{0}\colon =\{f\colon \mathbb {R}^{n+1}\to \Delta _{m}\}$ is norm closed, bounded and convex; therefore, it is weakly compact and convex. Consider the function
This function is weakly continuous on $D_{0}\times D_{0}$, and $D_{0}\times D_{0}$ is weakly compact, so there exist $\widetilde {f},\widetilde {g}\in D_{0}$ such that $C(\widetilde {f},\widetilde {g})=\min _{f,g\in D_{0}}C(f,g)$. Because C is bilinear and $D_{0}$ is convex, the minimum of C must be achieved at an extreme point of $D_{0}\times D_{0}$. Let $e_{1},\ldots ,e_{m}$ denote the standard basis of $\mathbb {R}^{m}$; at an extreme point, $f,g$ take their values in $\{e_{1},\ldots ,e_{m}\}$. Then, for any $1\leq i\leq m$, define $\Omega _{i}\colon =\{x\in \mathbb {R}^{n+1}\colon f(x)=e_{i}\}$ and $\Omega _{i}'\colon =\{x\in \mathbb {R}^{n+1}\colon g(x)=e_{i}\}$. Note that $f_{i}=1_{\Omega _{i}}$ and $g_{i}=1_{\Omega _{i}'}$ for all $1\leq i\leq m$.
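The extreme-point step admits a finite-dimensional illustration: a bilinear form on a product of simplices attains its minimum at a pair of vertices. The example below is a hypothetical low-dimensional analogue (an arbitrary $3\times 3$ form on $\Delta _{3}\times \Delta _{3}$), not the function C of the proof:

```python
import numpy as np

# Bilinear form C(f, g) = f^T A g on Delta_3 x Delta_3, where Delta_3 is
# the probability simplex in R^3 (a finite-dimensional analogue of D_0 x D_0).
rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))

# Minimum over the vertex pairs (e_i, e_j): this is just the smallest entry.
vertex_min = float(A.min())

# Minimum over many random interior points of the product of simplices:
# it can never beat the vertex minimum, because for fixed g the map
# f -> f^T A g is linear, hence minimised at a vertex, and vice versa.
f = rng.dirichlet(np.ones(3), size=2000)
g = rng.dirichlet(np.ones(3), size=2000)
sampled_min = float(np.einsum('ki,ij,kj->k', f, A, g).min())
```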
Lemma 7.4 Regularity of a Minimiser
Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ be measurable sets minimising Problem 7.1, guaranteed to exist by Lemma 7.3. Then the sets $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'$ have locally finite surface area. Moreover, for all $1\leq i\leq m$ and for all $x\in \partial \Omega _{i}$, there exists a neighbourhood U of x such that $U\cap \partial \Omega _{i}$ is a finite union of $C^{\infty }$ $n$-dimensional manifolds. The same holds for $\Omega _{1}',\ldots ,\Omega _{m}'$.
We denote $\Sigma _{ij}\colon =(\partial ^{*}\Omega _{i})\cap (\partial ^{*}\Omega _{j}), \Sigma _{ij}'\colon =(\partial ^{*}\Omega _{i}')\cap (\partial ^{*}\Omega _{j}')$ for all $1\leq i<j\leq m$.
Lemma 7.5 The First Variation for Minimisers
Suppose that $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ minimise Problem 7.1. Then for all $1\leq i<j\leq m$, there exists $c_{ij},c_{ij}'\in \mathbb {R}$ such that
We denote by $N_{ij}(x)$ the unit exterior normal vector to $\Sigma _{ij}$ and by $N_{ij}'(x)$ the unit exterior normal vector to $\Sigma _{ij}'$, for all $1\leq i<j\leq m$. Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ be two partitions of $\mathbb {R}^{n+1}$ into measurable sets such that $\partial \Omega _{i}$ and $\partial \Omega _{i}'$ are locally finite unions of $C^{\infty }$ manifolds for all $1\leq i\leq m$. Let $X,X'\in C_{0}^{\infty }(\mathbb {R}^{n+1},\mathbb {R}^{n+1})$. Let $\{\Omega _{i}^{(s)}\}_{s\in (-1,1)}$ be the variation of $\Omega _{i}$ corresponding to X for all $1\leq i\leq m$, and let $\{\Omega _{i}^{'(s)}\}_{s\in (-1,1)}$ be the variation of $\Omega _{i}'$ corresponding to $X'$ for all $1\leq i\leq m$. Denote $f_{ij}(x)\colon =\langle X(x),N_{ij}(x)\rangle $ for all $x\in \Sigma _{ij}$ and $f_{ij}'(x)\colon =\langle X'(x),N_{ij}'(x)\rangle $ for all $x\in \Sigma _{ij}'$. We let N denote the exterior pointing unit normal vector to $\partial ^{*}\Omega _{i}$ for any $1\leq i\leq m$, and we let $N'$ denote the exterior pointing unit normal vector to $\partial ^{*}\Omega _{i}'$ for any $1\leq i\leq m$.
Lemma 7.6 Volume-Preserving Second Variation of Minimisers, Multiple Sets
Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ be two partitions of $\mathbb {R}^{n+1}$ into measurable sets such that $\partial \Omega _{i}$ and $\partial \Omega _{i}'$ are locally finite unions of $C^{\infty }$ manifolds for all $1\leq i\leq m$. Then
Also,
Moreover, $\|\overline {\nabla } T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)\|>0$ for all $x\in \Sigma _{ij}'$, except on a set of Hausdorff dimension at most $n-1$, and $\|\overline {\nabla } T_{\rho }(1_{\Omega _{i}'}-1_{\Omega _{j}'})(x)\|>0$ for all $x\in \Sigma _{ij}$, except on a set of Hausdorff dimension at most $n-1$.
Equation (40) and the last assertion require a slightly different argument than previously used. To see the last assertion, note that if there exists $1\leq i<j\leq m$ such that $\left \|\overline {\nabla } T_{\rho }(1_{\Omega _{i}}-1_{\Omega _{j}})(x)\right \|=0$ on an open set in $\Sigma _{ij}'$, then choose $X'$ supported in this open set so that the third term of (39) is zero. Then, choose X such that the sum of the first two terms in (39) is negative. Multiplying X by a small positive constant, and noting that the fourth term in (39) depends quadratically on X, we can create a negative second derivative of the noise stability, a contradiction. We can similarly justify the positive signs appearing in (40) (as opposed to the negative signs from (28)).
Let $v\in \mathbb {R}^{n+1}$. For simplicity of notation, we denote $\langle v,N\rangle $ as the collection of functions $(\langle v,N_{ij}\rangle )_{1\leq i<j\leq m}$ and we denote $\langle v,N'\rangle $ as the collection of functions $(\langle v,N_{ij}'\rangle )_{1\leq i<j\leq m}$. For any $1\leq i<j\leq m$, define
Lemma 7.7 Key Lemma, $m\geq 2$, Translations as Almost Eigenfunctions
Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ minimise Problem 7.1. Fix $1\leq i<j\leq m$. Let $v\in \mathbb {R}^{n+1}$. Then
When compared to Lemma 5.4, Lemma 7.7 has a negative sign on the right side of the equality, resulting from the positive sign in (40) (as opposed to the negative sign on the right side of (28)). Lemmas 7.6 and 7.7 then imply the following.
Lemma 7.8 Second Variation of Translations, Multiple Sets
Let $0<\rho <1$. Let $v\in \mathbb {R}^{n+1}$. Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'$ minimise Problem 7.1. For each $1\leq i\leq m$, let $\{\Omega _{i}^{(s)}\}_{s\in (-1,1)}$ be the variation of $\Omega _{i}$ corresponding to the constant vector field $X\colon = v$. Assume that
Then
Because $\rho \in (0,1)$, $-\frac {1}{\rho }+1<0$. (The analogous inequality in Lemma 5.5 was $\frac {1}{\rho }-1>0$.) Repeating the argument of Theorem 1.9 then gives the following.
Theorem 7.9 Main Structure Theorem/Dimension Reduction, Negative Correlation
Fix $0<\rho <1$. Let $m\geq 2$ with $2m\leq n+3$. Let $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'\subseteq \mathbb {R}^{n+1}$ minimise Problem 7.1. Then, after rotating the sets $\Omega _{1},\ldots ,\Omega _{m},\Omega _{1}',\ldots ,\Omega _{m}'$ and applying Lebesgue measure zero changes to these sets, there exist measurable sets $\Theta _{1},\ldots ,\Theta _{m},\Theta _{1}',\ldots ,\Theta _{m}'\subseteq \mathbb {R}^{2m-2}$ such that
Acknowledgements
SH is supported by NSF Grant CCF 1911216.
Conflicts of interest
None.