1 Introduction and main results
The study of random analytic functions (RAF) has a long and rich history, with a dominating theme being the distribution of their zero sets or, more generally, of their a-values. In this paper, however, our primary concern lies in the metrical aspects of RAF; that is, how does one measure the size of an RAF?
The first significant result along this line is the Littlewood theorem: let $f(z)=a_0+a_1z+a_2z^2+\cdots \in H^2$ be an element of the Hardy space over the unit disk. Let $\{\varepsilon _n\}_{n\geq 0}$ be a sequence of independent, identically distributed Bernoulli random variables, that is, ${\mathbb P}(\varepsilon _n=1)={\mathbb P}(\varepsilon _n=-1)=\frac {1}{2}$ for all $n\geq 0.$ Littlewood's theorem, proven in 1930 [Reference Littlewood17], states that
$$\mathcal{R} f(z) := \sum_{n=0}^{\infty} \varepsilon_n a_n z^n \in H^p$$
almost surely for all $p \ge 2.$ When $f \notin H^2$, for almost every choice of signs, $\mathcal {R} f$ has a radial limit almost nowhere. As a consequence, according to the fundamental theory of the Hardy spaces [Reference Duren5, Theorem 2.2, p. 17], $\mathcal {R} f$ is not in any $H^p$ almost surely. The same holds true for a standard Steinhaus sequence [Reference Littlewood and Offord18, Reference Paley and Zygmund25] and a standard Gaussian sequence [Reference Kahane11, p. 54].
Determining when the random series $\mathcal {R} f$ represents an $H^{\infty }$ -function almost surely is much harder, where $H^{\infty }$ denotes the space of bounded analytic functions over the unit disk $\mathbb {D}$ . Several partial results since the 1930s, encompassing both necessary conditions and sufficient conditions, have been obtained by noted analysts, including Paley, Zygmund, and Salem [Reference Offord24, Reference Salem and Zygmund29]. In [Reference Billard2], Billard demonstrated the equivalence between the Bernoulli case and the Steinhaus case. A remarkable characterization was eventually achieved by Marcus and Pisier in 1978 [Reference Marcus and Pisier20] (see also [Reference Kahane11, Reference Marcus and Pisier21]). Their characterization relies on the celebrated Dudley–Fernique theorem.
Despite its intrinsic interest and the active research on various aspects of random analytic functions, such as the distribution of zeros of canonical Gaussian analytic functions, the study of the metrical properties of RAF has remained largely stagnant in recent decades. An elegant exception is a theorem due to Cochran, Shapiro, and Ullrich concerning the Dirichlet space [Reference Cochran, Shapiro and Ullrich4]. In 1993, they proved that a Dirichlet function with random signs is almost surely a Dirichlet multiplier. This result has been further generalized by Liu in [Reference Liu19]. More recently, Cheng, Liu, and the first author have proved a Littlewood-type theorem for random Bergman functions [Reference Cheng, Fang and Liu3]. The exploration of this theme has been continued by the current authors, particularly for Fock spaces [Reference Fang7].
In this paper, we address a fundamental aspect of complex analysis, namely, the growth rate of analytic functions, for which a gap appears to exist in the literature: while the growth rate of RAF for entire functions has been fairly well understood since the celebrated work of Littlewood and Offord in 1948 [Reference Littlewood and Offord18], relatively little is known about the growth rate of RAF in the unit disk, where a polynomial rate appears to be the most natural. In the deterministic context, a closely related framework is formulated in Section 4.3 of the monograph [Reference Gao8]. In the present paper, we show that, by imposing a polynomial growth rate, a considerably satisfactory theory can be established for random analytic functions over the unit disk. Our key finding suggests that the following three aspects, when suitably formulated, are mutually determinative:
• the (polynomial) growth rate;
• the growth of Taylor coefficients; and
• the asymptotic distribution of the zero sets.
This phenomenon, which we refer to as “rigidity,” stands in contrast to entire functions, where estimates instead of rigidity are often observed.
Let $H({\mathbb D})$ denote the space of analytic functions over the unit disk ${\mathbb D}$ in the complex plane.
Definition 1.1. A function $f \in H({\mathbb D})$ has a polynomial growth rate if there exists a constant $\alpha > 0$ such that
$$|f(z)| = O\left(\frac{1}{(1-|z|)^{\alpha}}\right) \quad \text{as } |z| \to 1^-.$$
In this case, the infimum of such constants $\alpha$ is called the growth rate of f.
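For intuition, here is a minimal numeric sketch (assuming, as in Definition 1.1, that growth is measured against $(1-|z|)^{-\alpha}$): the model function $f(z)=(1-z)^{-\alpha}$ has growth rate exactly $\alpha$, and a log-log slope recovers it.

```python
import math

def growth_exponent(f, radii, n_theta=2000):
    """Estimate the growth rate of f from the log-log slope of
    M(r) = max_{|z|=r} |f(z)| against 1/(1-r)."""
    pts = []
    for r in radii:
        M = max(abs(f(r * complex(math.cos(2 * math.pi * k / n_theta),
                                  math.sin(2 * math.pi * k / n_theta))))
                for k in range(n_theta))
        pts.append((math.log(1.0 / (1.0 - r)), math.log(M)))
    # slope between the two extreme sample radii
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    return (y1 - y0) / (x1 - x0)

alpha = 0.7
f = lambda z: (1 - z) ** (-alpha)             # growth rate alpha
est = growth_exponent(f, [0.9, 0.99, 0.999])
print(round(est, 2))                          # 0.7
```

Here the maximum over each circle is attained at $z=r$, so the slope is exactly $\alpha$; for a general $f$ the same slope estimates the growth rate only up to the $O(\cdot)$ constants.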
Let ${\mathbb G}_{<\infty }$ denote the collection of analytic functions with a finite polynomial growth rate. The motivation for our work is the folklore belief that a randomized summation tends to exhibit enhanced regularity. An exemplary illustration of this enhancement is the randomized p-harmonic series $\sum _{n=1}^\infty \pm \frac {1}{n^p}$, which converges almost surely if and only if $p > \frac {1}{2}$, indicating a $\frac {1}{2}$-order improvement in regularity compared to the deterministic p-harmonic series. The 1930 Littlewood theorem aligns with this perspective as the first example involving analytic functions. Consequently, in terms of growth rates, a natural question arises:
Question A How much growth rate improvement can one gain for the random function
$$\mathcal{R} f(z) := \sum_{n=0}^{\infty} \pm\, a_n z^n$$
when compared with that of $f(z)=\sum _{n=0}^\infty a_n z^n \in H({\mathbb D})$?
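The half-order gain for the randomized p-harmonic series can already be seen at the level of variances: by Kolmogorov's two-series theorem, $\sum \pm n^{-p}$ converges almost surely precisely when the variance $\sum n^{-2p}$ is finite. A quick sketch (the cutoffs and thresholds below are illustrative only):

```python
def tail_variance(p, start=1000, end=1_000_000):
    """The tail sum_{n >= start} ±1/n^p has variance sum_{n >= start} 1/n^{2p};
    a bounded tail variance is exactly what drives a.s. convergence."""
    return sum(n ** (-2.0 * p) for n in range(start, end))

# p = 0.6 (> 1/2): 2p = 1.2 > 1, so the tail variance stays bounded.
# p = 0.4 (< 1/2): 2p = 0.8 < 1, so the partial sums keep growing.
print(tail_variance(0.6) < 1.5, tail_variance(0.4) > 10)   # True True
```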
We shall see that the rate of growth of $\mathcal {R}f$ is indeed always at most that of f, and the amount of improvement in terms of $\alpha $ is at most $\frac {1}{2}$ , which is sharp. On the other hand, this prompts a more fundamental question:
Question B How to characterize those functions $f \in H({\mathbb D})$ such that the random function ${\mathcal R} f$ belongs to ${\mathbb G}_{<\infty }$ almost surely?
Once we address the aforementioned inquiries (see Theorems B and C below), our investigation shifts toward examining the zero sets of the corresponding $\mathcal {R} f$ . The unexpected discovery, at least to us, is that the integrated counting function $N_{{\mathcal R} f}(r)$ exhibits a strong rigidity (Theorem D).
In this paper, we consider the following three types of randomization.
Definition 1.2 A random variable X is called Bernoulli if ${\mathbb P}(X=1) = {\mathbb P}(X=-1) = \frac {1}{2}$, and Steinhaus if it is uniformly distributed on the unit circle. By a standard complex Gaussian, i.e., $N_{\mathbb {C}}(0,1)$, we mean the law of $U+iV$, where $U, V$ are independent, real Gaussian variables with zero mean and $\text {Var}(U)=\text {Var}(V)=\frac {1}{2}$. For X being Bernoulli, Steinhaus, or standard complex Gaussian, a standard X sequence is a sequence of independent, identically distributed X variables, denoted by $(\varepsilon _n)_{n \geq 0}$, $(e^{2\pi i \alpha _n})_{n \geq 0}$, and $(\xi _n)_{n\geq 0}$, respectively. Lastly, a standard random sequence $(X_n)_{n \geq 0}$ refers to either a standard Bernoulli, a standard Steinhaus, or a standard complex Gaussian sequence.
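For concreteness, the three randomizations of Definition 1.2 can be sampled as follows; this is an illustrative sketch using Python's standard library, not part of the paper's formal apparatus:

```python
import cmath, math, random

random.seed(0)

def bernoulli():
    """P(X = 1) = P(X = -1) = 1/2."""
    return random.choice([-1, 1])

def steinhaus():
    """Uniformly distributed on the unit circle."""
    return cmath.exp(2j * math.pi * random.random())

def std_complex_gaussian():
    """U + iV with U, V independent real N(0, 1/2), so E|xi|^2 = 1."""
    s = math.sqrt(0.5)
    return complex(random.gauss(0, s), random.gauss(0, s))

samples = [std_complex_gaussian() for _ in range(20000)]
mean_sq = sum(abs(x) ** 2 for x in samples) / len(samples)
print(abs(abs(steinhaus()) - 1) < 1e-9)   # True: modulus is always 1
print(abs(mean_sq - 1) < 0.05)            # True: E|xi|^2 = 1 empirically
```

Note the normalization: each of the three variables has $\mathbb{E}|X|^2 = 1$, which is what makes the three randomizations directly comparable.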
In the rest of this paper, we shall always assume that $(X_n)_{n \geq 0}$ is a standard random sequence unless otherwise indicated, and for such a sequence and $f(z) = \sum_{n=0}^{\infty} a_n z^n \in H({\mathbb D})$, we define
$$\mathcal{R} f(z) := \sum_{n=0}^{\infty} a_n X_n z^n.$$
An expert might wonder whether the examination of the Steinhaus and Gaussian cases can be simplified by applying Kahane’s reduction principle to the Bernoulli case [Reference Kahane11]. While such an approach may be applicable in certain scenarios, it falls short of attaining the desired level of rigidity, as illustrated, say, by Proposition 2.9.
Let ${\mathcal X}\subset H({\mathbb D})$ be any subspace of analytic functions over ${\mathbb D}$ . We introduce another (deterministic) subspace ${\mathcal X}_*\subset H({\mathbb D})$ , which we call the random symbol space of ${\mathcal X}$ , by
$$\mathcal{X}_* := \left\{ f \in H({\mathbb D}) : {\mathcal R} f \in \mathcal{X} \ \text{almost surely} \right\}.$$
Clearly, ${\mathcal X}_*$ depends on, a priori, the choice of $(X_n)_{n \geq 0}$ .
As a preparatory step, we show that there exists a well-defined notion of growth rate for random analytic functions.
Lemma A (Existence)
For any $f \in H({\mathbb D})$ , the following statements hold:
(a) ${\mathbb P}({\mathcal R} f \ \text{has a polynomial growth rate}) \in \{0, 1\}$.
(b) ${\mathcal R} f$ has a polynomial growth rate almost surely if and only if f has a polynomial growth rate.
(c) If ${\mathcal R} f$ has a polynomial growth rate almost surely, then there exists a constant $\alpha \in [0, \infty )$ such that the growth rate of ${\mathcal R} f$ is almost surely equal to $\alpha$.
An immediate consequence is that
$$\left({\mathbb G}_{<\infty}\right)_* = \bigsqcup_{\alpha \geq 0} \left({\mathbb G}_{\alpha}\right)_*,$$
where $\bigsqcup$ represents the disjoint union, and ${\mathbb G}_{\alpha}$ denotes the collection of analytic functions with growth rate precisely $\alpha$.
Next, we show that a concise characterization of $\left ({\mathbb G}_{\alpha }\right )_*$ , in terms of Taylor coefficients, can be obtained. For this, we need:
Definition 1.3 A sequence of complex numbers $(a_n)_{n\ge 0}$ has a polynomial growth rate if there exists some constant $\alpha > 0$ such that
$$|a_n| = O(n^{\alpha}) \quad \text{as } n \to \infty.$$
The infimum of such constants $\alpha$ is called the growth rate of $(a_n)_{n\ge 0}$.
Theorem B (Characterization)
Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{n=0}^{\infty } a_nz^n \in H({\mathbb D})$ . Then the random function ${\mathcal R} f$ has a growth rate $\alpha $ almost surely if and only if the growth rate of the sequence $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ is also $\alpha $ .
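To see the coefficient side of Theorem B on a concrete (hypothetical) symbol, take $a_n = (n+1)^{\alpha - 1/2}$: then $A_n := \left(\sum_{k=0}^n |a_k|^2\right)^{1/2} \asymp n^{\alpha}$ for $\alpha > 0$, so the theorem predicts that ${\mathcal R}f$ has growth rate $\alpha$ almost surely. A numeric check of the claimed growth rate of $(A_n)$:

```python
import math

alpha = 0.75                                   # target growth rate
a = lambda n: (n + 1) ** (alpha - 0.5)         # hypothetical Taylor coefficients

def A(n):
    """A_n = (sum_{k=0}^n |a_k|^2)^{1/2}, the quantity in Theorem B."""
    return math.sqrt(sum(a(k) ** 2 for k in range(n + 1)))

# The growth rate of (A_n) is the limit of log(A_n)/log(n).
ests = [math.log(A(n)) / math.log(n) for n in (10**3, 10**4, 10**5)]
print([round(e, 2) for e in ests])             # increasing toward 0.75
```

The convergence is logarithmically slow because $A_n \approx \sqrt{2/3}\, n^{\alpha}$ carries a constant factor, but the exponent is already visible.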
Interestingly, our proof of the above theorem relies heavily on Banach space techniques. For Question A, we show next that the rate of ${\mathcal R} f$ does improve, compared with that of f, and the amount of improvement is at most $\frac {1}{2}$ .
Theorem C (Regularity)
Let $\alpha \in [0, \infty )$ .
(a) If $f \in H({\mathbb D})$ has a growth rate $\alpha$, then the growth rate of ${\mathcal R} f$ belongs to $\left[\max\{\alpha - \frac{1}{2}, 0\}, \alpha\right]$ almost surely.
(b) For each $\alpha' \in \left[\max\{\alpha - \frac{1}{2}, 0\}, \alpha\right]$, there is a function $f \in H({\mathbb D})$ with growth rate $\alpha$ such that ${\mathcal R} f$ has growth rate $\alpha'$ almost surely.
The above result is one of the two Littlewood-type theorems which we shall prove in Section 3. The other one is Theorem 3.1.
We now proceed to the second characterization of $ \left ({\mathbb G}_{\alpha }\right )_*$ . In Section 4, we study the zero sets of $\mathcal {R}f$ , a topic with an extensive literature in general; an in-depth analysis when $f \in \left ({\mathbb G}_{\alpha }\right )_*$ might be better conducted in a separate work. In this paper, our focus is on the asymptotic behaviors of the counting function $n_{\mathcal {R}f}(r)$ and the integrated counting function $N_{\mathcal {R}f}(r)$ for $f \in H(\mathbb {D})$ . Here, $n_{f}(r)$ denotes the number of zeros, accounting for multiplicity, of f within the disk $|z| < r$ . We also define
$$N_{f}(r) := \int_0^r \frac{n_{f}(t) - n_{f}(0)}{t}\, dt + n_{f}(0) \log r.$$
Our second characterization of $ \left ({\mathbb G}_{\alpha }\right )_*$ is the following:
Theorem D (Rigidity)
Let $\alpha \in [0, \infty )$ and $f \in H({\mathbb D})$ . Then the following statements are equivalent:
(i) The function f belongs to $({\mathbb G}_{\alpha })_*$.
(ii) $ \displaystyle \limsup _{r \to 1}\frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} = \alpha $ almost surely.
(iii) $\displaystyle \limsup _{r \to 1}\frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} = \alpha$.
In summary, we establish that three aspects of ${\mathcal R} f$ are equivalent in a certain sense: the growth of the random function, the growth of its Taylor coefficients, and the distribution of its zero set.
As for the proof of Theorem D, when the random function ${\mathcal R} f$ is induced by a standard complex Gaussian sequence, one has the advantage of the Edelman–Kostlan formula, from which good control of the expected number of zeros of ${\mathcal R} f$ , i.e., of ${\mathbb E}(n_{{\mathcal R} f}(r)),$ follows quickly, along with several conclusions on its asymptotic behavior. This is, however, far from enough for our purpose. Three other ingredients play important roles in the proof: the estimates obtained by the methods of [Reference Cochran, Shapiro and Ullrich4], the new Banach space $H_\psi $ with $\psi (x)=x^\alpha \log (x),$ and an estimate of [Reference Nazarov, Nishry and Sodin22].
1.1 Blaschke condition
Lastly, we examine various convergence exponents related to the Blaschke condition, which is perhaps the most important geometric condition for zero sets in the unit disk [Reference Duren5, Section 2.2], and is, however, never satisfied for any $f \in ({\mathbb G}_{\alpha })_*$ when $\alpha>0$ , as an immediate consequence of Corollary 4.3 or [Reference Nazarov, Nishry and Sodin22, Theorem 1.3]; that is, the zeros $(z_k)_{k \geq 1}$ of ${\mathcal R} f$ satisfy
$$\sum_{k=1}^{\infty} \left(1 - |z_k|\right) = \infty \quad \text{almost surely}.$$
This prompts us to take a closer look at the Blaschke-type conditions, and we introduce four notions of convergence exponent in Section 4.2. Then, as an application of Corollary 4.3, we show that they are indeed the same and equal to one (almost surely) for any $\alpha>0$ (Corollary 4.4).
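To make the failure of the Blaschke condition concrete, consider the toy (deterministic, purely illustrative) zero configuration $|z_k| = 1 - \frac{1}{k}$, $k \ge 2$: its integrated counting function has the logarithmic growth $N(r) \sim \log\frac{1}{1-r}$ appearing in Theorem D (with $\alpha = 1$), while its Blaschke sum is the harmonic series.

```python
import math

# Toy zero moduli |z_k| = 1 - 1/k, k >= 2 (hypothetical, for illustration):
# the integrated counting function grows like log(1/(1-r)), i.e. alpha = 1.
def N(r):
    """Jensen-type integrated counting function:
    N(r) = sum over zeros with |z_k| < r of log(r / |z_k|)."""
    K = int(1.0 / (1.0 - r))              # |z_k| < r iff k < 1/(1-r)
    return sum(math.log(r / (1.0 - 1.0 / k)) for k in range(2, K))

r = 1 - 1e-6
ratio = N(r) / math.log(1.0 / (1.0 - r))
print(round(ratio, 2))                    # approaches 1 as r -> 1

# Yet the Blaschke sum diverges: sum_k (1 - |z_k|) = sum_k 1/k = infinity.
partial = sum(1.0 / k for k in range(2, 10 ** 6))
print(partial > 10)                       # True: partial sums grow like log
```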
1.2 Methodology
Most of the literature on the growth rate of random analytic functions concerns entire functions, and relatively little is known over the unit disk. Moreover, techniques in the literature are usually complex-analytic, and the conclusions are often approximate. To obtain sharp results such as the rigidity in Theorem D, we devise in this paper a rather different route of proof featuring a functional-analytic approach. The key strategy in our arguments is to introduce an auxiliary class of Banach spaces of analytic functions, which allows us to obtain quantitative estimates such as those by the methods in [Reference Cochran, Shapiro and Ullrich4]. This Banach space approach also allows us to make effective use of entropy integrals, for which we obtain new estimates of independent interest.
The rest of this paper is organized as follows. Section 2 is devoted to Question B. To answer this question, we begin by introducing the Banach spaces $H_\psi $ and analyzing their symbol spaces $(H_\psi )_*$ . Section 3 studies the regularity improvement under randomization, thereby proving Theorem C and answering Question A in particular. The proof of Theorem D is presented in Section 4.1. Section 4.2 defines four convergence exponents related to the classical Blaschke condition, and they are shown to be the same and equal to one when $\alpha>0$ . Finally, the sharpness of various estimates is discussed in Section 4.3.
1.3 Notations
The abbreviation “a.s.” stands for “almost surely.” We assume that all random variables are defined on a probability space $(\Omega , {\mathcal F}, {\mathbb P})$ with expectation denoted by ${\mathbb E}(\cdot )$ . Moreover, $A \lesssim B$ (or, $A \gtrsim B$ ) means that there exists a positive constant C, depending only on parameters such as $\alpha , \beta, \ldots$, such that $A \leq CB$ (or, respectively, $A \geq \frac {B}{C}$ ), and $A \simeq B$ means that both $A \lesssim B$ and $A \gtrsim B$ hold. Lastly, $m(\cdot )$ denotes the Lebesgue measure.
2 The random symbol space $({\mathbb G}_{\alpha })_*$
In this section, we aim to characterize the random symbol space $({\mathbb G}_{\alpha })_*$ using Taylor coefficients. To achieve this, we introduce a family of Banach spaces denoted as $H_\psi $ and study their symbol spaces $(H_\psi )_*$ . Although a complete characterization of $(H_\psi )_*$ remains elusive, we are able to obtain a sufficient condition (Proposition 2.4) and a necessary condition (Proposition 2.5), which are sufficiently close to allow us to derive a precise characterization of $\left ({\mathbb G}_{\alpha }\right )_*$ and, consequently, $\left ({\mathbb G}_{\leq \alpha }\right )_*$ , where ${\mathbb G}_{\leq \alpha }$ denotes the set of analytic functions with a growth rate at most $\alpha $ .
Following [Reference Bennett, Stegenga and Timoney1], by a doubling weight, we mean an increasing function $\psi : [1, \infty ) \to [0, \infty )$ such that $\psi (2x) = O(\psi (x))$ as $x \to \infty $ . For each doubling weight $\psi $ , we introduce a Banach space by
$$H_{\psi} := \left\{ f \in H({\mathbb D}) : \|f\|_{H_{\psi}} := \sup_{z \in {\mathbb D}} \frac{|f(z)|}{\psi\left(\frac{1}{1-|z|}\right)} < \infty \right\}.$$
For the standard weight $\psi _{\alpha }(x): = x^{\alpha }$ with $\alpha> 0$ , we shall write $H_{\alpha }$ instead of $H_{\psi _{\alpha }}$ . Another weight of importance to us (in the proof of Theorem D) is $\psi (x)=x^\alpha \log x$ . Such spaces $H_{\psi }$ were first studied by Shields and Williams [Reference Shields and Williams31, Reference Shields and Williams32] in a different setting.
Our first set of techniques builds on arguments in [Reference Cochran, Shapiro and Ullrich4]. It is worth mentioning that in [Reference Cochran, Shapiro and Ullrich4], the authors considered standard real Gaussian variables. Nevertheless, every outcome in that study can be extended to complex Gaussian variables by treating the real and imaginary components separately. The following result, which extends Theorem 8 in [Reference Cochran, Shapiro and Ullrich4], will be of repeated use.
Proposition 2.1 Let $\psi $ be a doubling weight and $f \in H({\mathbb D})$ . Then the following statements are equivalent:
(i) ${\mathcal R} f \in H_{\psi }$ a.s.
(ii) ${\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) < \infty $ for some $s > 0$.
(iii) ${\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) < \infty $ for any $s > 0$.
Moreover, the quantities $\left ( {\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}) \right )^{\frac {1}{s}}$ are equivalent for all $s> 0$ with some constant depending only on s.
To prove Proposition 2.1, we need an auxiliary lemma which extends [Reference Cochran, Shapiro and Ullrich4, Lemma 10]. Here and in what follows, for an analytic function $f(z) = \sum _{n=0}^{\infty }a_nz^n$ , let $s_nf(z): = \sum _{k=0}^n a_k z^k$ denote its nth Taylor polynomial.
Lemma 2.2 Let $\psi $ be a doubling weight and $(X_n)_{n \geq 0}$ be a sequence of independent and symmetric random variables. If a random function ${\mathcal R} f(z): = \sum _{n=0}^{\infty } a_n X_n z^n$ belongs to the space $H_{\psi }$ a.s., then its Taylor series $(s_n{\mathcal R} f)_{n \ge 0}$ is a.s. bounded in $H_{\psi }$ , i.e., $\sup _{n \geq 0} \|s_n{\mathcal R} f\|_{H_{\psi }} < \infty $ a.s.
Proof Since the arguments are similar to those of the proof of [Reference Cochran, Shapiro and Ullrich4, Lemma 10], here we only outline the key points and indicate the differences. We fix an increasing sequence of positive numbers $r_m \to 1$ as $m \to \infty $ . Then, for each $m \geq 1$ , the function ${\mathcal R} f_{r_m}(z): = \sum _{n=0}^{\infty } r_m^n a_n X_n z^n$ belongs to $H_{\psi }$ a.s.; moreover, it is not difficult to see that $ \|{\mathcal R} f_{r_m}\|_{H_{\psi }} \to \|{\mathcal R} f\|_{H_{\psi }}$ as $m \to \infty $ and the Taylor polynomials $(s_n {\mathcal R} f_{r_m})_{n \geq 0}$ of ${\mathcal R} f_{r_m}$ converge to ${\mathcal R} f_{r_m}$ in $H_{\psi }$ . From this and the A-bounded Marcinkiewicz–Zygmund–Kahane theorem [Reference Li and Queffélec16, Theorem II.4], we conclude that the Taylor series $(s_n{\mathcal R} f)_{n \ge 0}$ is a.s. bounded in $H_{\psi }$ .
The Proof of Proposition 2.1
Again, the proof relies on arguments in [Reference Cochran, Shapiro and Ullrich4], which the reader may consult for more details, since only the key points are outlined here. That (iii) $\Longrightarrow $ (ii) $\Longrightarrow $ (i) is trivial, and (ii) $\Longrightarrow $ (iii) follows from [Reference Cochran, Shapiro and Ullrich4, Lemma 11]. It remains to prove (i) $\Longrightarrow $ (ii). If ${\mathcal R} f \in H_{\psi }$ a.s., then, by Lemma 2.2, the Taylor series $(s_n {\mathcal R} f)_{n \geq 0}$ is a.s. bounded in $H_{\psi }$ , i.e., ${\mathbb P}(M < \infty ) = 1$ , where $M: = \sup _{n \geq 0} \|s_n {\mathcal R} f\|_{H_{\psi }}$ . Thus, by [Reference Cochran, Shapiro and Ullrich4, Lemma 9], for a small enough constant $\lambda> 0$ , one has $ {\mathbb E}\left (\exp \left ( \lambda \|{\mathcal R} f\|_{H_{\psi }}\right ) \right ) \leq {\mathbb E}(\exp (\lambda M)) < \infty , $ from which and Jensen’s inequality (ii) follows. Moreover, [Reference Cochran, Shapiro and Ullrich4, Lemma 11] implies that the quantities $({\mathbb E}(\|{\mathcal R} f\|^s_{H_{\psi }}))^{\frac {1}{s}}$ are equivalent for all $s> 0$ with a constant depending only on s.
The next set of techniques we shall need consists of estimates for Dudley–Fernique-type entropy integrals, which perhaps deserve more attention in complex analysis and for which we derive new estimates of independent interest. Recall that the nondecreasing rearrangement of a nonnegative function $\rho : [0,1] \to {\mathbb R}^+$ is defined as $ \overline {\rho }(s): = \sup \{y: m(\{t: \rho (t) < y\}) < s\}. $ For a sequence of complex numbers $(a_n)_{n \geq 0}$ , as in [Reference Marcus and Pisier21, p. 8], one defines translation-invariant pseudometrics on the unit circle $\mathbb {T}$ , i.e., $\rho (t+s, s) = \rho (t, 0) =: \rho (t)$ , by
$$\rho_n(t) := \left( \sum_{k=1}^{n} |a_k|^2 \left| e^{2\pi i k t} - 1 \right|^2 \right)^{\frac{1}{2}}.$$
Lemma 2.3 Let $(a_n)_{n \geq 0}$ be a sequence of complex numbers. Then the following holds for every $n \geq 2$ :
Proof Firstly, we observe that $\displaystyle \sup _{1 \leq k \leq n}|a_k| \lesssim \int _0^1 \dfrac {\overline {\rho _n}(t)}{t\sqrt {\log e/t}} dt.$ By a result of Marcus and Pisier for $\overline {\rho _n}$ [Reference Marcus and Pisier21, Theorem 1.2, p. 126],
here and throughout the paper, $(a_k^*)_{k \geq 1}$ denotes the nonincreasing rearrangement of the sequence $(|a_k|)_{k \geq 1}$ if existing. Consequently,
Then, by a result of Jain and Marcus [Reference Jain and Marcus10, Corollary 2.5],
which is dominated by $ \sqrt {\log n}\left (\sum _{k=1}^n|a_k|^2\right )^{\frac {1}{2}}.$ The proof of Lemma 2.3 is complete now.
Proposition 2.4 Let $\psi $ be a doubling weight and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . If
then $f \in \left (H_{\psi }\right )_*$ .
Proof By [Reference Jain and Marcus10, Proposition 2], we get
where, for a random variable x,
$$\|x\|_{\Psi} := \inf \left\{ c > 0 : {\mathbb E}\, \Psi\!\left( \frac{|x|}{c} \right) \leq 1 \right\}, \qquad \Psi(t) := e^{t^2} - 1,$$
is the Orlicz norm of x (see [Reference Krasnoselski and Ruticki12, Reference Rudin28]). Indeed, [Reference Jain and Marcus10, Proposition 2] treats only the Bernoulli case, but the case of the Steinhaus sequence is essentially the same and the case of the Gaussian sequence is even simpler. Using this, together with the following form of Markov’s inequality
we get, except on an event with probability at most $\frac {2}{n^2}$ ,
which, by the assumption and Lemma 2.3, implies that
for every $n \in {\mathbb N}$ and for some $C_1> 0$ . Thus, for every $m \in {\mathbb N}$ , we obtain
on an event with probability at least
where the last inequality above follows from arguments as in [Reference Shields and Williams32, Lemma 1] and [Reference Bennett, Stegenga and Timoney1, Lemma 2.1]. The proof is complete now.
Proposition 2.5 Let $\psi $ be a doubling weight and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . If $f \in \left (H_{\psi }\right )_*$ , then
Proof By Lemma 2.2, the Taylor series $(s_n{\mathcal R} f)_{n \geq 0}$ of ${\mathcal R} f$ is a.s. bounded in $H_{\psi }$ . As in the proof of Proposition 2.1, by [Reference Cochran, Shapiro and Ullrich4, Lemma 9] and Jensen’s inequality, for a small enough constant $\lambda> 0$ , we get
For each $n \in {\mathbb N}$ , using [Reference Marcus and Pisier21, Theorem 1.4, p. 11], we get
Moreover, applying the contraction principle [Reference Li and Queffélec15, Theorem IV.3, p. 136] step by step, together with the fact that $ e \left (1 - \frac {1}{n+1}\right )^k \geq 1 $ for every $k \leq n, $ we get
From this and the doubling assumption, it follows that
From Lemma 2.3 and Propositions 2.4 and 2.5, we conclude the following for the space $(H_{\alpha })_*$ .
Corollary 2.6 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ .
(a) If $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}} = O\left (\frac {n^{\alpha }}{\log n} \right )$, then $f \in (H_{\alpha })_*$.
(b) If $f \in (H_{\alpha })_*$, then
$$ \begin{align*}\left(\sum_{k=0}^n|a_k|^2\right)^{\frac{1}{2}} = O\left(n^{\alpha} \right) \quad \text{ and } \quad \int_0^{1}\frac{\overline{\rho_n}(t)}{t\sqrt{\log e/t}}dt = O\left(n^{\alpha} \right). \end{align*} $$
Now we are ready to prove Lemma A and Theorem B. First, for any $\alpha> 0$ and $f \in H({\mathbb D})$ , by [Reference Cochran, Shapiro and Ullrich4, Lemma 4], one has
The proofs of Lemma A and Theorem B follow from scrutinizing the following decomposition, with the aid of Corollary 2.6:
and
In detail, for Lemma A, we first observe that ${\mathbb P} ({\mathcal R} f \in {\mathbb G}_{<\infty }) \in \{0, 1\}$ follows from (2.1) and (2.2). Now, letting $f(z) = \sum _{n=0}^{\infty } a_n z^n \in H({\mathbb D})$ , it is an exercise that f has a polynomial growth rate if and only if so does the sequence of its Taylor coefficients $(a_n)_{n \geq 0}$ . Indeed, the direct implication follows from Cauchy’s inequality for the Taylor coefficients, while the other direction follows from [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)] and the fact that the sequence $\sum _{k=0}^n|a_k|$ has a polynomial growth rate whenever so does $(a_n)_{n \geq 0}$ .
If $f \in ({\mathbb G}_{<\infty })_*$ , then $f \in (H_{\alpha })_*$ for some $\alpha $ . This, together with Corollary 2.6, implies that $|a_n| = O(n^{\alpha })$ . Conversely, if $|a_n| = O(n^{\alpha })$ for some $\alpha> 0$ , then, again by Corollary 2.6(a), $f \in (H_{\beta })_*$ for $\beta> \alpha + \frac {1}{2}$ , hence $f \in ({\mathbb G}_{<\infty })_*$ . Lastly, ${\mathbb P} ({\mathcal R} f \in {\mathbb G}_{<\infty }) = 1$ implies that, for some $\alpha _1\ge 0$ , ${\mathbb P} ({\mathcal R} f \in H_{\alpha }) = 1$ for all $\alpha \geq \alpha _1$ . We denote by $\alpha _0$ the infimum of all such constants $\alpha _1$ . Then one checks that, by (2.3), $f \in ({\mathbb G}_{\alpha _0})_*$ . This yields Lemma A.
As for Theorem B, we will verify the following claim: Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if the sequence $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ). It suffices to consider $\alpha> 0$ . The case $\alpha = 0$ is similar and indeed simpler. Let $A_n: = \left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}}$ . By (2.1) and (2.3), together with Corollary 2.6, $f \in ({\mathbb G}_{\alpha })_*$ if and only if $ A_n = O(n^{\beta }) $ and $ A_n \neq O(n^{\gamma })$ for all $ \beta> \alpha $ and $ \gamma < \alpha $ , i.e., $(A_n)_{n \geq 0}$ has the growth rate $\alpha $ . Similarly, using (2.3) and Corollary 2.6, we can check that $f \in ({\mathbb G}_{\leq \alpha })_*$ if and only if $A_n = O(n^{\beta })$ for all $ \beta> \alpha $ , i.e., $(A_n)_{n \geq 0}$ has a growth rate at most $\alpha $ .
We end this section with two classes of examples of general interest: lacunary series and series with monotone coefficients. A side result is that the random symbol space $(H_\psi )_*$ induced by a general normal weight $\psi $ does depend on the choice of the randomization sequence $(X_n)_{n \geq 0}$ . This stands in contrast with $(H^{\infty })_*$ , which is the same for any standard random sequence [Reference Kahane11, Theorem 7, p. 231]. Later, we shall use these examples to illustrate the sharpness of Theorem C, Theorem D, and Corollary 4.3.
We say that a sequence $(b_k)_{k \geq 1}$ has a polynomial growth rate with respect to a sequence $(n_k)_{k \geq 1}$ if there exists a number $\alpha> 0 $ such that $ |b_k| = O(n_k^{\alpha }) $ as $ k \to \infty .$ In this case, the infimum of such constants $\alpha $ is called the growth rate of $(b_k)_{k \geq 1}$ with respect to $(n_k)_{k \geq 1}$ .
Proposition 2.7 Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k} \in H({\mathbb D})$ with a Hadamard lacunary sequence $(n_k)_{k \geq 1}$ , i.e., $\inf _{k\geq 1} n_{k+1}/n_k> 1$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if $(b_k)_{k \geq 1}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ) with respect to $(n_k)_{k \geq 1}$ .
The proof is a straightforward application of Theorem B.
Recall that, according to [Reference Shields and Williams31, p. 291] and [Reference Yang and Xu35, p. 152], an increasing continuous function $\psi : [1, \infty ) \to [1, \infty )$ is called a normal weight if there are constants $0 < \alpha < \beta $ and $x_0> 0$ such that
$$\frac{\psi(x)}{x^{\alpha}} \ \text{is increasing on } [x_0, \infty) \quad \text{and} \quad \frac{\psi(x)}{x^{\beta}} \ \text{is decreasing on } [x_0, \infty).$$
By [Reference Shields and Williams32, Remark 1], normal weights are doubling ones.
Lemma 2.8 Let $(X_k)_{k\geq 1}$ be a standard complex Gaussian sequence and $(c_k)_{k\geq 1}$ a sequence of positive numbers. Then $|X_k| = O(c_k)$ a.s. if and only if there is a constant $a> 0$ such that $\sum _{k=1}^{\infty } \exp \left (-a c_k^2\right ) < \infty .$
Proof For each $n, k \in {\mathbb N}$ , let us consider the following events:
and
It is clear that
In addition, since $\{E_{n,k}, k \geq 1\}$ are independent, by an application of the (second) Borel–Cantelli lemma, we get
where
From this, the conclusion follows when n is large enough.
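Lemma 2.8 can be sanity-checked numerically. Taking $c_k = \sqrt{\log k}$ and $a = 2$ gives $\sum_k \exp(-a c_k^2) = \sum_k k^{-2} < \infty$, so the lemma yields $|X_k| = O(\sqrt{\log k})$ almost surely, the classical growth of a maximum of independent Gaussians. A sketch (the sample sizes and the bound $4$ are illustrative):

```python
import math, random

def series_value(a, c, K=200000):
    """Partial sum of sum_{k>=2} exp(-a * c_k^2), the criterion of Lemma 2.8."""
    return sum(math.exp(-a * c(k) ** 2) for k in range(2, K))

c = lambda k: math.sqrt(math.log(k))      # candidate envelope c_k
# a = 2 turns the summand into k^{-2}, so the series converges:
print(series_value(2.0, c) < 1.0)         # True (the sum is about 0.64)

# Empirically, |X_k| for standard complex Gaussians stays below 4 * c_k:
random.seed(1)
s = math.sqrt(0.5)
ratios = [abs(complex(random.gauss(0, s), random.gauss(0, s))) / c(k)
          for k in range(2, 100000)]
print(max(ratios) < 4.0)                  # True (with overwhelming probability)
```

Since $|X_k|^2$ is exponentially distributed here, ${\mathbb P}(|X_k| > 4\sqrt{\log k}) = k^{-16}$, so the second check fails only on an event of tiny probability.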
Proposition 2.9 Let $\psi $ be a normal weight, and $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k} \in H({\mathbb D})$ with a Hadamard lacunary sequence $(n_k)_{k \geq 1}$ .
(a) If $(X_k)_{k \geq 1}$ is a standard Bernoulli or Steinhaus sequence, then $f \in (H_{\psi })_*$ if and only if $|b_k| = O(\psi (n_k))$.
(b) If $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence, then $f \in (H_{\psi })_*$ if and only if there is a constant $a > 0$ such that
(2.4) $$ \begin{align} \sum_{k=1}^{\infty} \exp \left( - a\, |b_k|^{-2} \psi^2(n_k) \right) < \infty. \end{align} $$
Proof By [Reference Yang and Xu35, Theorem 2.3], ${\mathcal R} f(z) = \sum _{k=1}^{\infty }b_k X_k z^{n_k} \in H_{\psi }$ a.s. if and only if $|b_kX_k| = O(\psi (n_k))$ a.s., which immediately implies Part (a) and, together with Lemma 2.8, yields Part (b).
Proposition 2.10 Let $\alpha \in [0,\infty )$ and $f(z) = \sum _{n=0}^{\infty }a_n z^{n} \in H({\mathbb D})$ with a monotone sequence $(|a_n|)_{n \ge 0}$ . Then $f \in ({\mathbb G}_{\alpha })_*$ (or, $f \in ({\mathbb G}_{\leq \alpha })_*$ ) if and only if $(a_n\sqrt {n})_{n \geq 0}$ has the growth rate $\alpha $ (or, respectively, a growth rate at most $\alpha $ ).
The proof follows from Theorem B, together with some elementary manipulation of the coefficients, and is hence skipped.
3 Littlewood-type theorems
In this section, we present further applications of Corollary 2.6. Namely, we prove two Littlewood-type theorems that address the improvement of regularity through randomization, with the first one (Theorem C) for ${\mathbb G}_{\alpha }$ and the second one (Theorem 3.1) for $H_{\alpha }$ .
Proof of Theorem C
(a) By Lemma A, we assume that f has a growth rate $\alpha $ and ${\mathcal R} f$ a growth rate $\alpha _0$ almost surely. By (2.3) and [Reference Bennett, Stegenga and Timoney1, Theorem 1.8(a)], $ \left (\sum _{k=0}^n |a_k|^2\right )^{\frac {1}{2}} = O(n^{\beta }), $ which, together with Corollary 2.6(a), implies that ${\mathcal R} f \in H_{\beta }$ a.s. for every $\beta> \alpha $ , hence, $\alpha _0 \leq \alpha $ . To show $\alpha _0 \geq \max \{\alpha - \frac {1}{2}, 0\}$ , it suffices to assume $\alpha> \frac {1}{2}$ . For every $\gamma> \alpha _0$ , ${\mathcal R} f \in H_{\gamma }$ a.s., and hence, by Corollary 2.6(b), $\left (\sum _{k=0}^n |a_k|^2\right )^{\frac {1}{2}} = O(n^{\gamma })$ , which by the Cauchy–Schwarz inequality implies that $ \sum _{k=0}^n |a_k| = O(n^{\gamma + \frac {1}{2}})$ . This, together with [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)], implies that the function $g(z): = \sum _{n=0}^{\infty } |a_n| z^n$ belongs to $H_{\gamma + 1/2}$ . Thus, $f \in H_{\gamma + 1/2}$ , and hence the growth rate $\alpha $ of f is no greater than $\alpha _0 + \frac {1}{2}$ .
(b) For each $\alpha ' \in [\max \{\alpha - \frac {1}{2},0\}, \alpha ]$ , we take a sequence $(n_k)_{k \ge 1}$ as follows:
We define $ f(z) := \sum _{k=1}^{\infty } b_k z^{n_k}$ with $ b_k := n_k^{\alpha } - n_{k-1}^{\alpha }. $ By [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)], f has a growth rate $\alpha $ . Moreover, one checks that $ \left (\sum _{j=1}^k b_j^2\right )^{\frac {1}{2}} \simeq n_k^{\alpha '}. $ Now, an application of Theorem B completes the proof.
In view of Theorem C, for simplicity, we denote $\alpha _*: = \max \{\alpha - \frac {1}{2},0\}$ . Then,
This implies that $ {\mathbb G}_{\alpha } \cap ({\mathbb G}_{\beta })_* \neq \emptyset $ if and only if $ \beta \in [\alpha _*, \alpha ].$
Naturally, one may wonder what happens in terms of ${\mathbb G}_{\leq \alpha }$ , and we claim
Indeed, if $\alpha _1 \leq \alpha _2$ , then the conclusion follows readily from (2.3), Corollary 2.6(a), and [Reference Bennett, Stegenga and Timoney1, Theorem 1.8(a)]. For $\alpha _1> \alpha _2$ , we take a Hadamard lacunary function $f_0(z):= \sum _{k=1}^{\infty } b_k z^{n_k}$ with a growth rate $\alpha _1$ . By [Reference Yang and Xu35, Theorem 2.3], $(b_k)_{k \geq 1}$ has the growth rate $\alpha _1$ with respect to $(n_k)_{k \geq 1}$ . By Proposition 2.7, ${\mathcal R} f_0$ has the growth rate $\alpha _1$ a.s.; in particular, $f_0 \notin ({\mathbb G}_{\leq \alpha _2})_*$ .
On the other hand, the consideration of $H_\alpha $ yields an interesting comparison; indeed, it exhibits a loss of regularity.
Theorem 3.1 Let $\alpha _1, \alpha _2 \in (0,\infty )$ . Then $H_{\alpha _1} \subset (H_{\alpha _2})_*$ if and only if $\alpha _1 < \alpha _2$ .
Proof If $\alpha _1 < \alpha _2$ , then the conclusion follows from Corollary 2.6(a) and [Reference Bennett, Stegenga and Timoney1, Theorem 1.8(a)]. For $\alpha _1> \alpha _2$ , the Hadamard lacunary function $f_0(z) := \sum _{k=1}^{\infty } n_k^{\alpha _1} z^{n_k}$ belongs to $H_{\alpha _1}$ but not to $(H_{\alpha _2})_*$ , by [Reference Yang and Xu35, Theorem 2.3] and Proposition 2.9, respectively. The main case is when $\alpha _1 = \alpha _2 = \alpha $ , for which we construct a function $f_0 \in H_{\alpha }$ with $f_0 \notin (H_{\alpha })_*$ . By [Reference Paley and Zygmund26, Theorem 1], there exists a sequence $(\alpha _n)_{n \geq 1}$ in $\{-1, 1\}$ such that the polynomials $ P_n(z) := \sum _{j=1}^n \alpha _j z^j $ satisfy
Let $ f_0(z) := \sum _{n=1}^{\infty }\alpha _n n^{\alpha -\frac {1}{2}} z^n $ and $P_0(z): = 0$ . For any $n \geq 1$ , one has
which, by [Reference Bennett, Stegenga and Timoney1, Theorem 1.4], implies that $f_0 \in H_{\alpha }$ . Now, for this function, using [Reference Marcus and Pisier21, Theorem 1.2, p. 126], we have
where $(a_k^*)_{k=1}^{2n}$ is the nonincreasing rearrangement of the sequence $\left (k^{\alpha - \frac {1}{2}}\right )_{k=1}^{2n}$ . This, together with Corollary 2.6(b), implies that $f_0 \notin (H_{\alpha })_*$ .
We end this section by constructing examples of analytic functions f such that f has growth rate $\alpha $ while ${\mathcal R} f$ has growth rate $\alpha $ or $\alpha - \frac {1}{2}$ almost surely.
Proposition 3.2 Let $(n_k)_{k \geq 1}$ be a sequence such that $\log k = o(\log n_k)$ . Then, for every function $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k}$ with a growth rate $\alpha $ , its randomization ${\mathcal R} f$ also has the growth rate $\alpha $ almost surely.
Proof By Lemma A and Theorem C, the almost sure growth rate $\alpha _0$ of ${\mathcal R} f$ is at most $\alpha $ . Suppose, for contradiction, that $\alpha _0 < \alpha $ . Then, by Corollary 2.6(b), there exists $\gamma \in (\alpha _0, \alpha )$ such that $ \left (\sum _{j=1}^k |b_j|^2\right )^{\frac {1}{2}} = O(n_k^{\gamma }). $ Then by our hypothesis and the Cauchy–Schwarz inequality, we get $ \sum _{j=1}^k |b_j| = O(n_k^{\gamma '}) $ for every $\gamma ' \in (\gamma , \alpha )$ . Thus, by [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)], the function $g(z): = \sum _{k=1}^{\infty }|b_k| z^{n_k}$ belongs to $H_{\gamma '}$ , which implies that $f \in H_{\gamma '}$ with $\gamma ' < \alpha $ , contradicting the assumption that f has growth rate $\alpha $ . This completes the proof.
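The role of the hypothesis $\log k = o(\log n_k)$ in the Cauchy–Schwarz step is that $\sqrt {k} = n_k^{o(1)}$ , so the extra factor is absorbed into an arbitrarily small increase of the exponent:

```latex
\sum_{j=1}^{k} |b_j|
  \;\le\; \sqrt{k} \left( \sum_{j=1}^{k} |b_j|^2 \right)^{\frac{1}{2}}
  \;=\; n_k^{o(1)}\, O\!\left( n_k^{\gamma} \right)
  \;=\; O\!\left( n_k^{\gamma'} \right)
  \qquad \text{for every } \gamma' \in (\gamma, \alpha).
```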
Proposition 3.3 Let $\alpha \geq \frac {1}{2}$ . For every function $f(z) = \sum _{n=0}^{\infty }a_n z^{n}$ with a growth rate $\alpha $ , where $(a_n)_{n \geq 0}$ is a monotone sequence of real numbers, its randomization ${\mathcal R} f$ has the growth rate $\alpha -\frac {1}{2}$ almost surely.
Proof Without loss of generality, we suppose that $a_n \geq 0$ for all n. As above, we assume, for contradiction, that ${\mathcal R} f$ has a growth rate $\alpha _0$ a.s. with $\alpha _0> \alpha - \frac {1}{2}$ . Then, by Proposition 2.10, $a_n \neq O(n^{\gamma - \frac {1}{2}})$ for every $\gamma \in (\alpha - \frac {1}{2}, \alpha _0)$ . On the other hand, $f \in H_{\gamma + \frac {1}{2}}$ for any $\gamma \in (\alpha - \frac {1}{2}, \alpha _0)$ , which, by [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)], implies that $ \sum _{k=0}^n a_k = O\left (n^{\gamma + \frac {1}{2}}\right ). $ Now, using the monotonicity of $(a_n)_{n \geq 0}$ , we get $a_n = O\left (n^{\gamma - \frac {1}{2}}\right )$ , which is a contradiction.
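The last monotonicity step can be made explicit; in the nonincreasing case, for instance, monotonicity converts the partial-sum bound into a pointwise bound (the nondecreasing case is handled symmetrically using the block $n \le k \le 2n$ ):

```latex
\frac{n}{2}\, a_n
  \;\le\; \sum_{k=\lceil n/2 \rceil}^{n} a_k
  \;\le\; \sum_{k=0}^{n} a_k
  \;=\; O\!\left( n^{\gamma + \frac{1}{2}} \right)
  \quad \Longrightarrow \quad
  a_n = O\!\left( n^{\gamma - \frac{1}{2}} \right).
```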
From the above two propositions, we immediately get the following concrete examples.
Example 3.1 Let $(n_k)_{k \geq 1}$ be a Hadamard lacunary sequence. For every function $f(z) = \sum _{k=1}^{\infty } b_k z^{n_k}$ with a growth rate $\alpha $ , its randomization ${\mathcal R} f$ also has the growth rate $\alpha $ almost surely.
Example 3.2 For every $\alpha \geq \frac {1}{2}$ , the function $f(z) = \frac {1}{(1 - z)^{\alpha }}$ has the growth rate $\alpha $ , but its randomization ${\mathcal R} f$ has the growth rate $\alpha - \frac {1}{2}$ almost surely.
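For Example 3.2, the mechanism behind the loss of half a power can be seen numerically from the Taylor coefficients of $(1-z)^{-\alpha }$ , which satisfy the classical asymptotics $a_n \sim n^{\alpha -1}/\Gamma (\alpha )$ . A small sketch (the function name and the illustrative values $\alpha = 0.75$ , $N = 2000$ are our own choices, not from the text):

```python
import math

def coeffs(alpha, N):
    """Taylor coefficients of (1 - z)^(-alpha):
    a_0 = 1 and a_n = a_{n-1} * (alpha + n - 1) / n."""
    a = [1.0]
    for n in range(1, N + 1):
        a.append(a[-1] * (alpha + n - 1) / n)
    return a

alpha, N = 0.75, 2000
a = coeffs(alpha, N)
# Classical asymptotics: a_n ~ n^(alpha - 1) / Gamma(alpha), hence
# a_n * sqrt(n) ~ n^(alpha - 1/2) / Gamma(alpha), in line with
# Proposition 2.10 and the almost sure growth rate alpha - 1/2 of R f.
approx = N ** (alpha - 1) / math.gamma(alpha)
print(a[N] / approx)  # close to 1 for large N
assert abs(a[N] / approx - 1) < 0.01
```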
Remark 3.3 The function $f_0$ constructed in the proof of Theorem 3.1 satisfies $f_0 \in H_{\alpha }$ and $f_0 \notin (H_{\alpha })_*$ . Thus, the growth rate of $f_0$ is no greater than $\alpha $ and the growth rate of ${\mathcal R} f_0$ is a.s. no less than $\alpha $ . Therefore, by Theorem C, the growth rate of $f_0$ is $\alpha $ and ${\mathcal R} f_0$ has the growth rate $\alpha $ almost surely. From this, we can draw the following conclusions:
-
• The condition $\log k = o(\log n_k)$ is sufficient for an analytic function $f(z) = \sum _{k=1}^{\infty }b_k z^{n_k}$ to preserve its growth rate $\alpha $ under randomization, but it is not necessary.
-
• The monotonicity of the sequence $(a_n)_{n \geq 0} \subset {\mathbb R}$ is essential in Proposition 3.3.
4 Zero sets
This section concerns the second main topic in the present paper, namely, the zero sets of ${\mathcal R} f$ when $f \in ({\mathbb G}_{\alpha })_*$ . The rigidity of $N_{{\mathcal R} f}(r)$ (Theorem D) is proved in Section 4.1. Then, in Section 4.2, we explore to what extent the rigidity fails for $n_{{\mathcal R} f}(r)$ , and, as an application of the estimates we obtain, we introduce four Blaschke-type exponents and show that they are always the same and equal to one when $\alpha>0$ . The bulk of Section 4.3 is devoted to the analysis of an example (Example 4.3) in order to illustrate the sharpness of various estimates in this section.
4.1 Proof of Theorem D
Together with Lemma A and Theorem B, the proof follows from the following two lemmas, which are of independent interest.
Lemma 4.1 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ such that
Then the following estimates hold:
-
(a) $\displaystyle \limsup _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} \leq \alpha \quad \text{a.s.} \quad \text{and} \quad \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \leq \alpha. $
-
(b) $ \displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha \quad \text{a.s.} \quad \text{and} \quad \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha. $
Lemma 4.2 Let $\alpha> 0$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in H({\mathbb D})$ such that
Then the following estimates hold:
-
(a) $ \displaystyle \limsup _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} \geq \alpha \quad \text { a.s. } \ \text { and } \ \ \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \geq \alpha. $
-
(b) $ \displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}} \geq \alpha \quad \text { a.s. } \ \text { and } \ \ \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha. $
Proof of Lemma 4.1
We may assume $|a_0| = 1$ since only minor modifications are needed for the general case. By Corollary 2.6(a), ${\mathcal R} f \in H_{\psi }$ a.s. with
hence $\|{\mathcal R} f\|_{H_{\psi }} < + \infty $ a.s. Jensen’s formula yields that almost surely for every $r \in (0,1)$ ,
In addition, by Jensen’s inequality, for every $r \in (0,1)$ ,
Now an application of Proposition 2.1 yields Part (a).
For Part (b), by (4.1), for each $\lambda \in (0,1)$ and $r \in (0,1)$ , we have
which yields the first half of Part (b) by arguments similar to the above. The case of ${\mathbb E}(n_{{\mathcal R} f}(r))$ follows in an analogous manner.
Here and in what follows, for an analytic function $f(z) = \sum _{n=0}^{\infty }a_nz^n$ , let
Proof of Lemma 4.2
As before, we assume $|a_0| = 1$ . Jensen’s formula yields that
where
is a random Fourier series satisfying the condition $\sum _{n\geq 0} |\widehat {a}_n(r)|^2 = 1$ . By [Reference Nazarov, Nishry and Sodin22, Corollary 1.2] for the Bernoulli case, [Reference Rao and Ren27, Reference Ullrich33, Reference Ullrich34] for the Steinhaus case, and a basic fact for the Gaussian case (see also [Reference Nazarov, Nishry and Sodin23, Section 1.1]), there exists a constant $C> 0$ such that
On the other hand, from the assumption and [Reference Bennett, Stegenga and Timoney1, Theorem 1.10(a)], it follows that the function $g(z): = \sum _{n=0}^{\infty } |a_n|^2 z^{2n}$ does not belong to the space $H_{2\alpha }$ , which implies that $ \limsup _{r \to 1} \frac {\log \sigma _f(r)}{\log \frac {1}{1-r}} \geq \alpha , $ hence $ \limsup _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} \geq \alpha , $ and, by L’Hôpital’s rule, $ \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha. $ Next, we take a sequence $r_n \uparrow 1$ such that
For each $n \in {\mathbb N}$ , by (4.4),
Now, an application of the Borel–Cantelli lemma yields that, for almost every standard random sequence $(X_n)_{n \geq 0}$ , there is a number $n_0 \in {\mathbb N}$ such that, for every $n \geq n_0$ ,
which, together with Jensen’s formula, implies that
Thus, the proof is complete by our choice of $r_n$ , and by L’Hôpital’s rule again.
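The L’Hôpital-type step used above can also be justified directly from Jensen’s counting identity $N_{{\mathcal R} f}(r) = \int _0^r n_{{\mathcal R} f}(t)\,\frac {dt}{t}$ (valid here since ${\mathcal R} f(0) \neq 0$ a.s. under the normalization $|a_0| = 1$ ). A sketch of the contrapositive: if the counting function grew more slowly, so would its integrated version,

```latex
n_{{\mathcal R} f}(t) \le \frac{\alpha - \varepsilon}{1 - t} \quad (t_0 < t < 1)
\;\Longrightarrow\;
N_{{\mathcal R} f}(r) \le (\alpha - \varepsilon + o(1)) \log\frac{1}{1-r},
```

so $\limsup _{r \to 1} N_{{\mathcal R} f}(r)/\log \frac {1}{1-r} \geq \alpha $ forces $\limsup _{r \to 1} (1-r)\, n_{{\mathcal R} f}(r) \geq \alpha $ , and likewise for the expectations.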
4.2 Blaschke-type exponents
As expected, the rigidity phenomenon fails for $n_{{\mathcal R} f}(r)$ , and in this subsection, we first explore the extent to which it fails. The following result follows from Theorem B and Lemmas 4.1 and 4.2.
Corollary 4.3 Let $\alpha \in [0, \infty )$ and $f \in ({\mathbb G}_{\alpha })_*$ . Then the following estimates hold:
-
(a) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} \leq \alpha $ .
-
(b) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}} \geq \alpha $ a.s. and $\displaystyle \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} \geq \alpha $ .
The above estimates turn out to be quite sharp, and this is addressed in Section 4.3. For now, as an application of these estimates, we recall the Blaschke condition, which is perhaps the best known geometric condition for zero sets $(z_n)_{n \geq 1}$ of analytic functions in the unit disk:
By Corollary 4.3, as well as [Reference Nazarov, Nishry and Sodin22, Theorem 1.3], however, this condition always fails for the zero sets of ${\mathcal R} f$ for any $f \in ({\mathbb G}_\alpha )_*$ whenever $\alpha>0$ . A finer look at Blaschke-type conditions is hence needed. For this purpose, we introduce the following four exponents, which are clearly reminiscent of the convergence exponents of zero sets of entire functions (see, e.g., [Reference Levin14, p. 17]).
Definition 4.1 Given a sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its Blaschke exponent and polynomial order are defined as
and, respectively,
where $n(r)$ is the counting function of $(z_n)_{n \geq 1}$ with multiplicity.
For simplicity, we shall write $\lambda (f)$ and $\rho (f)$ instead of $\lambda ((z_n)_{n \geq 1})$ and $\rho ((z_n)_{n \geq 1})$ , respectively, if $(z_n)_{n \geq 1}$ is the zero sequence of $f \in H({\mathbb D}).$
Definition 4.2 Given a random sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its expected Blaschke exponent and expected polynomial order are defined as
and, respectively,
We shall use the notations $\lambda _E({\mathcal R} f)$ and $\rho _E({\mathcal R} f)$ , whose meaning is clear.
Corollary 4.4 For any $f \in ({\mathbb G}_{\alpha })_*$ with $\alpha> 0$ , one has
This corollary follows from Corollary 4.3 and the following elementary properties of the four exponents introduced above.
Lemma 4.5 The following statements hold:
-
(a) For every sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its Blaschke exponent $\lambda $ and polynomial order $\rho $ are equal.
-
(b) For every random sequence $(z_n)_{n \geq 1} \subset {\mathbb D}$ , its expected Blaschke exponent $\lambda _E$ and expected polynomial order $\rho _E$ are equal.
The proof of this lemma is similar to familiar arguments on the convergence exponents of zero sets of entire functions in [Reference Levin13, Reference Levin14]. For the reader’s convenience, we outline two key points below:
-
• Using the argument in [Reference Levin14, Lemma 1, p. 17], we prove that the series $\sum _{n=1}^{\infty }(1 - |z_n|)^{\gamma }$ converges (or, the expectation ${\mathbb E}(\sum _{n=1}^{\infty }(1 - |z_n|)^{\gamma })$ is finite) if and only if the integral $\int _0^1 (1-t)^{\gamma - 1} n(t)dt$ (or, respectively, $\int _0^1 (1-t)^{\gamma - 1} {\mathbb E}(n(t))dt$ ) converges.
-
• Following this and the argument in [Reference Levin14, Lemma 2, p. 18], we get $\lambda ((z_n)_{n \geq 1}) = \rho ((z_n)_{n \geq 1})$ and $\lambda _E((z_n)_{n \geq 1}) = \rho _E((z_n)_{n \geq 1})$ .
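For the first bullet, the key identity comes from writing each summand as an integral and interchanging sum and integral (Tonelli, since all terms are nonnegative):

```latex
\sum_{n=1}^{\infty} (1 - |z_n|)^{\gamma}
  = \sum_{n=1}^{\infty} \gamma \int_{|z_n|}^{1} (1-t)^{\gamma-1}\,dt
  = \gamma \int_{0}^{1} (1-t)^{\gamma-1}\, n(t)\,dt,
```

and taking expectations on both sides gives the random version.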
4.3 Sharpness
In this last subsection, we explore the possibility of replacing the limsup in Theorem D by a limit. This is possible when $\sigma _f(r)$ is of regular growth. Indeed, by taking a sequence $r_n: = 1 - e^{-n^3}$ and repeating the arguments in the proof of Lemma 4.2, one gets the following:
Corollary 4.6 Let $\alpha \in [0, \infty )$ and $f(z) = \sum _{n=0}^{\infty }a_nz^n \in ({\mathbb G}_{\alpha })_*$ such that
Then
Next, we construct an example to show how badly the conclusion of Corollary 4.6 fails when $\sigma _f(r)$ is of irregular growth. Incidentally, this example establishes the sharpness of the upper estimates in Corollary 4.3.
Example 4.3 For any $\alpha> 0$ , there exists a function $f \in ({\mathbb G}_{\alpha })_*$ such that
-
(a) $\displaystyle \limsup _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} = \alpha \quad \text{a.s.} \quad \text{and} \quad \limsup _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} = \alpha; $
-
(b) $\displaystyle \liminf _{r \to 1} \frac {n_{{\mathcal R} f}(r)}{\frac {1}{1-r}\log \frac {1}{1-r}} = 0 \quad \text{a.s.} \quad \text{and} \quad \liminf _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}\log \frac {1}{1-r}} = 0; $
-
(c) $\displaystyle \liminf _{r \to 1} \frac {N_{{\mathcal R} f}(r)}{\log \frac {1}{1-r}} = 0 \quad \text{a.s.} \quad \text{and} \quad \liminf _{r \to 1} \frac {{\mathbb E}(N_{{\mathcal R} f}(r))}{\log \frac {1}{1-r}} = 0. $
Proof We take an increasing sequence of integers $(m_k)_{k \geq 1}$ with $m_1 = 3$ and
and hence
Put
Then, by Proposition 2.7, $f \in ({\mathbb G}_{\alpha })_*$ .
(a) Put $r_k: = 2^{-\alpha 2^{-m_k}}$ . Then, letting $k \to \infty $ , we get
Let $A_k$ be the event that
By Rouché’s theorem, $n_{{\mathcal R} f}(r_k) = n_k$ on $A_k$ . Hence, ${\mathbb E}(n_{{\mathcal R} f}(r_k)) \geq n_k {\mathbb P}(A_k)$ for each $k \in {\mathbb N}$ . We claim that ${\mathbb P}(A_k) \to 1$ as $k \to \infty $ . This, together with (4.7), implies
which, in turn, implies the assertion for ${\mathbb E}(n_{{\mathcal R} f}(r))$ . On the other hand, putting $ A := \limsup _{k \to \infty } A_k, $ we get ${\mathbb P}(A) = 1$ . For each standard random sequence $(X_k)_{k \geq 1}$ in A, there is a subsequence $(k_j)_{j \geq 1}$ of integers such that $(X_k)_{k \geq 1} \in A_{k_j}$ for every $j \in {\mathbb N}$ . Then
which implies the assertion for $n_{{\mathcal R} f}(r)$ .
It remains to prove the claim. For sufficiently large k, by (4.6), we get
which is at most $(k-1)n_{k-1}^{\alpha } + 1$ . It follows that
for sufficiently large k. Next, we assume that $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence since the other two cases are easier. For each $k \geq 1$ and $j \neq k$ , put
Here, $\widetilde {X_k}$ and $\widetilde {X_{k,j}}$ are independent complex Gaussian random variables with zero mean and variances
Letting $A_{k, j}$ be the event that $\left |\widetilde {X_k}\right |> \left |\widetilde {X_{k,j}}\right |$ , we get
Moreover, for each $j \neq k$ ,
which together with (4.8) implies that
(b) For every $\alpha ' < \alpha $ , we put $r^{\prime }_k: = 2^{-\alpha ' 2^{-m_k}}.$ As above, we have
Let $A^{\prime }_k$ be the event that
Again, Rouché’s theorem yields that $n_{{\mathcal R} f}(r^{\prime }_k) = n_k$ on $A^{\prime }_k$ for each $k \in {\mathbb N}$ and for sufficiently large k,
Using arguments similar to those above, we see that ${\mathbb P}(A^{\prime }_k) \to 1$ .
Now, we consider the limit for $n_{{\mathcal R} f}(r)$ . Putting $A_{[\alpha ']} = \limsup _{k \to \infty } A^{\prime }_k$ , we get ${\mathbb P}(A_{[\alpha ']}) = 1$ and
Then, putting $A_{[0]}: = \limsup _{m \to \infty } A_{[1/m]}$ , we get ${\mathbb P}(A_{[0]}) = 1$ and
To treat ${\mathbb E}(n_{{\mathcal R} f}(r))$ , we distinguish two cases. If $(X_k)_{k \geq 1}$ is either a standard Bernoulli or Steinhaus sequence, then the event $A_k'$ is the whole space $\Omega $ , hence, $n_{{\mathcal R} f}(r^{\prime }_k) = n_k$ on $\Omega $ and ${\mathbb E}(n_{{\mathcal R} f}(r^{\prime }_k)) = n_k$ , which implies that
Now we assume that $(X_k)_{k \geq 1}$ is a standard complex Gaussian sequence. In this case, we may use the Kac formula as in [Reference Edelman and Kostlan6, Theorem 8.2] to get
As above, for sufficiently large k,
Thus,
From this and (4.5), it follows that
(c) For sufficiently large k, by (4.9),
Repeating the arguments in the proof of Lemma 4.2 for the sequence $(r_k')_{k \geq 1}$ instead of $(r_n)_{n \geq 1}$ , we get
and
This completes the proof since $\alpha ' <\alpha $ is arbitrary.
Now, to illustrate the sharpness of the lower bounds in Corollary 4.3, we construct an example in the case of a standard complex Gaussian sequence. We do not know whether this sharpness persists for the Bernoulli and Steinhaus cases.
Example 4.4 Let $\alpha> 0$ and $(X_n)_{n \ge 0}$ be a standard complex Gaussian sequence. There exists a function f in $({\mathbb G}_{\alpha })_*$ such that $ \lim _{r \to 1} \frac {{\mathbb E}(n_{{\mathcal R} f}(r))}{\frac {1}{1-r}} = \alpha. $ Indeed, we consider $ f(z): = \sum _{n=0}^{\infty }a_nz^n $ with
Then $ \sum _{k=0}^n a_k^{2} \simeq n^{2\alpha }, $ hence $f \in ({\mathbb G}_{\alpha })_*$ by Theorem B. In this case, $ \sigma ^2_f(r) = \frac {1}{(1-r^2)^{2\alpha }}. $ By the Kac formula as improved by Edelman and Kostlan in [Reference Edelman and Kostlan6, Theorem 8.2], $ {\mathbb E}(n_{{\mathcal R} f}(r)) = r\frac {d}{dr} \log \sigma _f(r) = \frac {2\alpha r^2}{1-r^2},$ from which the desired conclusion follows.
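The closing computation of Example 4.4 can be checked numerically; a minimal sketch with $\sigma _f(r) = (1-r^2)^{-\alpha }$ as stated above, the Kac intensity approximated by a central difference (the function names and the sample values of $\alpha $ and r are our own illustrative choices):

```python
import math

def sigma(r, alpha):
    # sigma_f(r) = (1 - r^2)^(-alpha), as in Example 4.4
    return (1 - r * r) ** (-alpha)

def kac_intensity(r, alpha, h=1e-6):
    # E(n_{Rf}(r)) = r * d/dr log sigma_f(r) (Edelman-Kostlan form of
    # the Kac formula), approximated by a central difference
    return r * (math.log(sigma(r + h, alpha)) - math.log(sigma(r - h, alpha))) / (2 * h)

alpha = 0.6
for r in (0.3, 0.7, 0.9):
    # closed form from the example: 2 alpha r^2 / (1 - r^2)
    exact = 2 * alpha * r**2 / (1 - r**2)
    assert abs(kac_intensity(r, alpha) - exact) < 1e-5
# consequently (1 - r) * E(n_{Rf}(r)) = 2 alpha r^2 / (1 + r) -> alpha as r -> 1
```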
We end this paper with two remarks on the zero sets of ${\mathcal R} f$ when $f \in (H_{\alpha })_*$ and that of f when $f \in H_{\alpha }$ .
Remark 4.5 By Lemma 4.1 and Corollary 2.6, it follows that
for every $f \in (H_{\alpha })_*$ with $\alpha> 0$ . Moreover, by Proposition 2.9, the function f constructed in Example 4.3 belongs to $(H_{\alpha })_*$ .
Remark 4.6 For every function $f \in H_{\alpha }$ , $\left (\sum _{k=0}^n|a_k|^2\right )^{\frac {1}{2}} = O(n^{\alpha })$ by [Reference Bennett, Stegenga and Timoney1, Theorem 1.8(a)], which, together with the arguments in the proof of Lemma 4.1, implies that
For the function f constructed in Example 4.3, we have
This improves results of Shapiro and Shields in [Reference Shapiro and Shields30, Theorems 5 and 6].
Acknowledgement
We thank the referee for a meticulous reading, which greatly improved the presentation of this paper.