
Sharp large deviations and concentration inequalities for the number of descents in a random permutation

Published online by Cambridge University Press:  05 January 2024

Bernard Bercu*
Affiliation:
Université de Bordeaux, Institut de Mathématiques de Bordeaux
Michel Bonnefont*
Affiliation:
Université de Bordeaux, Institut de Mathématiques de Bordeaux
Adrien Richou*
Affiliation:
Université de Bordeaux, Institut de Mathématiques de Bordeaux
*Postal address: Université de Bordeaux, Institut de Mathématiques de Bordeaux, UMR CNRS 5251, 351 Cours de la Libération, 33405 Talence cedex, France.

Abstract

The goal of this paper is to go further in the analysis of the behavior of the number of descents in a random permutation. Via two different approaches relying on a suitable martingale decomposition or on the Irwin–Hall distribution, we prove that the number of descents satisfies a sharp large-deviation principle. A very precise concentration inequality involving the rate function in the large-deviation principle is also provided.

Type
Original Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $\mathcal{S}_n$ be the symmetric group of permutations on the set of integers $\{1,\ldots,n\}$, where $n \geq 1$. A permutation $\pi_n \in \mathcal{S}_n$ is said to have a descent at position $k \in \{1,\ldots,n-1\}$ if $\pi_n(k)>\pi_n(k+1)$. Denote by $D_n=D_n(\pi_n)$ the random variable counting the number of descents of a permutation $\pi_n$ chosen uniformly at random from $\mathcal{S}_n$. We clearly have $D_1=0$ and, for all $n \geq 2$,

(1.1) \begin{equation} D_{n} = \sum_{k=1}^{n-1} \mathbf{1}_{\{\pi_{n}(k)>\pi_{n}(k+1)\}}.\end{equation}
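For illustration (code of ours, not part of the paper), formula (1.1) is immediate to implement, sampling a uniform random permutation with the standard library:

```python
import random

def descents(perm):
    """Number of descents D_n of a permutation given as a list, as in (1.1)."""
    return sum(perm[k] > perm[k + 1] for k in range(len(perm) - 1))

# D_n / n concentrates around 1/2, in line with the law of large numbers (1.2)
random.seed(0)
n = 10_000
pi = list(range(1, n + 1))
random.shuffle(pi)
print(descents(pi) / n)
```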

A host of results are available on the asymptotic behavior of the sequence $(D_n)$. More precisely, we can find in [3] that, for all $n \geq 2$, $\mathbb{E}[D_n]= ({n-1})/{2}$ and $\text{Var}(D_n)=({n+1})/{12}$. In addition, it is possible to make a connection with the generalized Pólya urn with two colors, also known as Friedman’s urn; see [10] and Remark 2.1. In particular, for this construction we have, by [9, Corollary 5.2], the almost sure (a.s.) convergence

(1.2) \begin{equation} \lim_{n \rightarrow \infty}\frac{D_n}{n}=\frac{1}{2} \qquad \text{a.s.}\end{equation}

Following the approach of [17] (see Section 3.2), it is also possible to construct a different sequence $(D_n)$ with the same marginal distribution using a sequence of independent random variables sharing the same uniform distribution on [0, 1]. For this construction, we directly obtain the same almost sure convergence (1.2), as noticed in [12, Section 7.3]. Nevertheless, the distribution of the process $(D_n)$ does not correspond to the one investigated in Section 2.

Four different approaches have been reported in [5] to establish the asymptotic normality

(1.3) \begin{equation} \sqrt{n}\bigg(\frac{D_n}{n} - \frac{1}{2}\bigg) \mathop{\longrightarrow}_{}^{{\mathcal{L}}} \mathcal{N} \bigg(0, \frac{1}{12}\bigg).\end{equation}

We also refer the reader to the recent contribution [11] that relies on the method of moments, as well as to the recent proof in [15] using a rather complicated martingale approach. Furthermore, denote by $L_n$ the number of leaves in a random recursive tree of size n. It is well known [19] that $L_{n+1}=D_n+ 1$. Hence, it has been proven in [4] that the sequence $(D_n/n)$ satisfies a large-deviation principle (LDP) with good rate function given by

(1.4) \begin{equation} I(x)=\sup_{ t \in \mathbb{R}} \bigl\{xt - L(t) \bigr\},\end{equation}

where the asymptotic cumulant-generating function is of the form

(1.5) \begin{equation} L(t)=\log \bigg(\frac{\exp\!(t)-1}{t}\bigg).\end{equation}
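Neither $I(x)$ nor the maximizer $t_x$ in (1.4) has a closed form, but both are easy to evaluate numerically. A minimal sketch of ours (not from the paper), using bisection on $L^\prime$, which increases from $\frac12$ to $1$ on $(0,\infty)$:

```python
import math

def L(t):    # asymptotic cumulant-generating function (1.5)
    return math.log(math.expm1(t) / t)

def Lp(t):   # L'(t) = e^t/(e^t - 1) - 1/t
    return math.exp(t) / math.expm1(t) - 1.0 / t

def Lpp(t):  # L''(t) = 1/t^2 - e^t/(e^t - 1)^2
    return 1.0 / t ** 2 - math.exp(t) / math.expm1(t) ** 2

def t_x(x):
    """Solve L'(t) = x for x in (1/2, 1) by bisection."""
    lo, hi = 1e-12, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Lp(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def I(x):    # the Legendre transform (1.4), evaluated at its maximizer t_x
    t = t_x(x)
    return x * t - L(t)

print(t_x(0.75), I(0.75))
```

The quantities `Lpp(t_x(x))` and `t_x(x)` are the $\sigma_x^2$ and $t_x$ appearing in the sharp results of Section 3.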

The purpose of this paper is to go further in the analysis of the behavior of the number of descents by proving a sharp large-deviation principle (SLDP) for the sequence $(D_n)$ . We shall also establish a sharp concentration inequality involving the rate function I given by (1.4).

To be more precise, we propose two different approaches that lead us to an SLDP and a concentration inequality for the sequence $(D_n)$. The first one relies on a martingale approach, while the second one uses a miraculous link between the distribution of $(D_n)$ and the Irwin–Hall distribution, as pointed out in [17]. On the one hand, the second method is more direct and simpler for establishing our results. On the other hand, the first approach is much more general, and we are strongly convinced that it can be extended to other statistics on random permutations that share the same kind of iterative structure, such as the number of alternating runs [3, 16] or the length of the longest alternating subsequence in a random permutation [14, 18]. Moreover, we have intentionally kept both proof strategies in the manuscript in order to highlight that the martingale approach is as efficient and powerful as the direct method in terms of results.

The paper is organized as follows. Section 2 is devoted to our martingale approach, which allows us to give a direct proof of (1.2) and (1.3) and to propose new standard results for the sequence $(D_n)$ such as a law of iterated logarithm, a quadratic strong law, and a functional central limit theorem. The main results of the paper are given in Section 3. We establish an SLDP for the sequence $(D_n)$ as well as a sharp concentration inequality involving the rate function I. Three keystone lemmas are analyzed in Section 4. All the technical proofs are postponed to Sections 5–8.

2. Our martingale approach

We start by describing precisely the construction of the sequence $(D_n)$ on a single probability space. Let us remark that this construction can be naturally linked to generalized Pólya urns; see Remark 2.1. We consider a sequence $(V_n)$ of independent random variables such that each $V_n$ is uniformly distributed on $\{1,\ldots,n\}$. Then, we set $\pi_1=(1)$ and, for each $n\geq 1$, we recursively define the permutation $\pi_{n+1}$ as

(2.1) \begin{equation} \pi_{n+1}(k) = \left\{ \begin{array} {c@{\quad}c@{\quad}c}\pi_n(k) & \textrm { if }& k<V_{n+1},\\[2pt]n+1 & \textrm { if }& k=V_{n+1},\\[2pt]\pi_n(k-1) & \textrm { if }& k>V_{n+1}.\end{array}\right.\end{equation}

By a direct recursive argument, it is clear that, for each $n\geq 1$, $\pi_n$ is uniformly distributed on $\mathcal{S}_n$. Moreover, as explained in [15], it follows from (1.1) and (2.1) that, for all $n \geq 1$,

\begin{equation} \mathbb{P}\big(D_{n+1} = D_{n}+d\mid {\mathcal{F}_n}\big) =\left \{ \begin{array}{c@{\quad}c@{\quad}c} {\displaystyle \frac{n-D_n}{n+1} } & \text{ if } & d=1, \\ {\displaystyle \frac{D_n+1 }{n+1} } & \text{ if } & d=0, \end{array} \nonumber \right.\end{equation}

with $\mathcal{F}_n=\sigma(D_1, \ldots,D_n)$ . This means that

(2.2) \begin{equation} D_{n+1} = D_n + \xi_{n+1},\end{equation}

where the conditional distribution of $\xi_{n+1}$ given $\mathcal{F}_n$ is the Bernoulli $\mathcal{B}(p_n)$ distribution with parameter $p_n=({n-D_n})/({n+1})$ . Since $\mathbb{E}[\xi_{n+1} \mid \mathcal{F}_n]=p_n$ and $\mathbb{E}[\xi_{n+1}^2 \mid \mathcal{F}_n]=p_n$ , we deduce from (2.2) that

(2.3) \begin{align} \mathbb{E}[D_{n+1} \mid \mathcal{F}_n] & = \mathbb{E}[D_n + \xi_{n+1} \mid \mathcal{F}_n] = D_n + p_n \quad \text{a.s.}, \end{align}
(2.4) \begin{align} \mathbb{E}\big[D_{n+1}^2 \mid \mathcal{F}_n\big] & = \mathbb{E}\big[\big(D_n + \xi_{n+1}\big)^2 \mid \mathcal{F}_n\big]=D_n^2+ 2p_nD_n +p_n \quad \text{a.s.} \end{align}
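The conditional dynamics (2.2) give a cheap way to simulate $D_n$ without generating permutations. A quick Monte Carlo sketch of ours (the seed and sample sizes are arbitrary), recovering $\mathbb{E}[D_n]=(n-1)/2$ and $\text{Var}(D_n)=(n+1)/12$:

```python
import random

def sample_Dn(n, rng):
    """One draw of D_n from the Markov dynamics (2.2): D_{m+1} = D_m + Bernoulli(p_m)."""
    d = 0                                  # D_1 = 0
    for m in range(1, n):
        p = (m - d) / (m + 1)              # p_m = (m - D_m)/(m + 1)
        d += rng.random() < p              # add a conditional Bernoulli(p_m) increment
    return d

rng = random.Random(42)
n, reps = 50, 20_000
xs = [sample_Dn(n, rng) for _ in range(reps)]
mean = sum(xs) / reps
var = sum((x - mean) ** 2 for x in xs) / reps
print(mean, (n - 1) / 2)                   # empirical mean vs. (n-1)/2
print(var, (n + 1) / 12)                   # empirical variance vs. (n+1)/12
```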

Moreover, let $(M_n)$ be the sequence defined for all $n \geq 1$ by

(2.5) \begin{equation} M_{n} = n \bigg( D_n - \frac{n-1}{2} \bigg).\end{equation}

We obtain from (2.3) that

\begin{align*} \mathbb{E}[M_{n+1} \mid \mathcal{F}_n] & = (n+1) \bigg( D_n +p_n - \frac{n}{2} \bigg) = (n+1) \bigg( \frac{n}{n+1}D_n - \frac{n(n-1)}{2(n+1)} \bigg), \\ & = n \bigg( D_n - \frac{n-1}{2} \bigg) = M_n \quad \text{a.s.},\end{align*}

which means that $(M_n)$ is a locally square integrable martingale. We deduce from (2.4) that its predictable quadratic variation is given by

(2.6) \begin{equation} \langle M \rangle_n = \sum_{k=1}^{n-1} \mathbb{E}[(M_{k+1}-M_k)^2\mid\mathcal{F}_{k}] = \sum_{k=1}^{n-1} (k-D_k)(D_k+1) \quad \text{a.s.}\end{equation}

The martingale decomposition (2.5) allows us to recover all the asymptotic results previously established for the sequence $(D_n)$, such as the almost sure convergence (1.2) and the asymptotic normality (1.3), and to improve on these standard results as follows. To the best of our knowledge, the quadratic strong law and the law of iterated logarithm are new.

Proposition 2.1. We have the quadratic strong law:

(2.7) \begin{equation} \lim_{n\rightarrow \infty} \frac{1}{\log n} \sum_{k=1}^n\bigg(\frac{D_k}{k} - \frac{1}{2}\bigg)^2 = \frac{1}{12} \quad {a.s.} \end{equation}

Moreover, we also have the law of iterated logarithm:

(2.8) \begin{equation} \limsup_{n\rightarrow\infty}\bigg(\frac{n}{2\log\log n}\bigg)^{1/2}\bigg(\frac{D_n}{n}-\frac{1}{2}\bigg) = -\liminf_{n\rightarrow\infty}\bigg(\frac{n}{2\log\log n}\bigg)^{1/2}\bigg(\frac{D_n}{n}-\frac{1}{2}\bigg) = \frac{1}{\sqrt{12}\,} \quad {a.s.} \end{equation}

In particular,

(2.9) \begin{equation} \limsup_{n\rightarrow\infty}\bigg(\frac{n}{2\log\log n}\bigg)\bigg(\frac{D_n}{n}-\frac{1}{2}\bigg)^2 = \frac{1}{12} \quad {a.s.} \end{equation}

Denote by $D([0,\infty[)$ the Skorokhod space of right-continuous functions with left-hand limits. The functional central limit theorem extends the asymptotic normality (1.3); see a similar result in [13] using generalized Pólya urns.

Proposition 2.2. We have the distributional convergence in $D([0,\infty[)$

(2.10) \begin{equation} \bigg(\sqrt{n}\bigg(\frac{D_{\lfloor n t \rfloor}}{\lfloor n t \rfloor} - \frac{1}{2}\bigg), t \geq 0\bigg) \Longrightarrow (W_t, t \geq 0), \end{equation}

where $(W_t)$ is a real-valued centered Gaussian process starting at the origin with covariance given, for all $0<s \leq t$, by $\mathbb{E}[W_s W_t]= {s}/{12 t^2}$. In particular, we recover the asymptotic normality (1.3).

The proofs are postponed to Section 8.

Remark 2.1. Relation (2.2) allows us to see the sequence $(D_n)$ as the sequence of the number of white balls in a two-color generalized Pólya urn [10] with the following rule: at each step, one ball is drawn at random and then replaced with an additional ball of the opposite color.

3. Main results

3.1. Sharp large deviations and concentration

Our first result concerns the SLDP for the sequence $(D_n)$, which nicely extends the LDP previously established in [4]. For any positive real number x, write $\{x\}=\lceil x \rceil - x$.

Theorem 3.1. For any x in $\big]\frac12,1\big[$ , we have, on the right side,

(3.1) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x \bigg) = \frac{\exp\!({-}nI(x)-\{nx\}t_x)}{\sigma_x t_x \sqrt{2\pi n}}[1+o(1)], \end{equation}

where the value $t_x$ is the unique solution of $L^\prime(t_x)=x$ and $\sigma_x^2=L^{\prime \prime}(t_x)$ .
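Theorem 3.1 can be illustrated numerically: via the Irwin–Hall identity (3.3) of Section 3.2, the tail $\mathbb{P}(D_n/n \geq x)$ is computable exactly, and the right-hand side of (3.1) already tracks it well for moderate n. A self-contained sketch of ours (not part of the paper):

```python
import math
from fractions import Fraction

def L(t):   return math.log(math.expm1(t) / t)          # (1.5)
def Lp(t):  return math.exp(t) / math.expm1(t) - 1.0 / t
def Lpp(t): return 1.0 / t ** 2 - math.exp(t) / math.expm1(t) ** 2

def t_x(x):
    """Solve L'(t) = x by bisection."""
    lo, hi = 1e-12, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Lp(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def tail_exact(n, x):
    """P(D_n/n >= x) = P(S_n >= ceil(nx)), exactly, via the Irwin-Hall CDF."""
    m = math.ceil(n * x)
    cdf = sum((-1) ** j * math.comb(n, j) * Fraction(m - j) ** n
              for j in range(m + 1)) / Fraction(math.factorial(n))
    return float(1 - cdf)

def sldp(n, x):
    """Right-hand side of (3.1), without the o(1) correction."""
    t = t_x(x)
    I = x * t - L(t)
    frac = math.ceil(n * x) - n * x          # {nx}
    return math.exp(-n * I - frac * t) / (math.sqrt(Lpp(t)) * t * math.sqrt(2 * math.pi * n))

n, x = 100, 0.7
print(tail_exact(n, x), sldp(n, x))          # the two values are already close
```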

Our second result is devoted to an optimal concentration inequality involving the rate function I.

Theorem 3.2. For any $x \in \big]\frac12,1\big[$ and for all $n \geq 1$ , we have the concentration inequality

(3.2) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x \bigg) \leq P(x)\frac{\exp\!({-}n I(x)-\{nx\}t_x)}{\sigma_x t_x \sqrt{2\pi n}}, \end{equation}

where the prefactor can be taken as

\begin{equation*} P(x) = \sqrt{\frac{t_x^2 + \pi^2}{t_x^2}} + \bigg(1+\frac1\pi+\frac{2\sqrt{t_x^2+\pi^2}}{\pi^2 -4}\bigg)\sqrt{\frac{\pi^2(t_x^2+4)}{4}} . \end{equation*}

Remark 3.1. Let us denote by $A_n = A_n(\pi_n)$ the random variable counting the number of ascents of a permutation $\pi_n \in \mathcal{S}_n$. Then, it is clear that $D_n(\pi_n)+A_n(\pi_n)=n-1$. Moreover, by a symmetry argument, $D_n$ and $A_n$ share the same distribution. In particular, $D_n$ has the same distribution as $(n-1) - D_n$. Consequently, for all $x \in \big]\frac12,1\big[$, we have

\[\mathbb{P}\bigg(\frac{D_n+1}{n} \leq 1-x \bigg) = \mathbb{P} \bigg(\frac{D_n}{n} \geq x \bigg), \]

which allows us to immediately extend the previous results to the left side.

Remark 3.2. One can observe from (3.1) or (3.2) that, for all $\varepsilon >0$ ,

\begin{equation*} \sum_{n=1}^\infty \mathbb{P} \bigg( \bigg| \frac{D_n}{n} -\frac{1}{2} \bigg|>\varepsilon \bigg) <+\infty. \end{equation*}

This is the complete convergence of $(D_n/n)$ to $\frac12$, which directly implies the almost sure convergence (1.2) for any construction of the sequence $(D_n)$.

3.2. A more direct approach

An alternative approach to proving the SLDP and concentration inequalities for the sequence $(D_n)$ relies on a famous result from [17], which says that the distribution of $D_n$ is nothing other than that of the integer part of a sum $S_n$ of independent and identically distributed random variables. More precisely, let $(U_n)$ be a sequence of independent random variables sharing the same uniform distribution on [0, 1], and write $S_n= \sum_{k=1}^n U_k$. Then we have, from [17], that, for all $k \in \{0,\ldots, n-1\}$,

(3.3) \begin{equation} \mathbb{P}(D_n=k)=\mathbb{P}(\lfloor S_n \rfloor =k)=\mathbb{P}(k \leq S_n <k+1).\end{equation}

This simply means that $D_n$ is distributed as the integer part of a random variable with the Irwin–Hall distribution. The identity (3.3) is somewhat miraculous, and it is really powerful for carrying out a sharp analysis of the sequence $(D_n)$. Once again, we would like to emphasize that this direct approach is only relevant for the study of $(D_n)$, while our martingale approach is much more general. A direct proof of Theorem 3.1 is provided in Section 6, relying on the identity (3.3). It is also possible to use this direct approach in order to establish a sharp concentration inequality with the same shape as (3.2); see Remark 6.1.
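As a sanity check (ours, not in the paper), (3.3) can be verified in exact rational arithmetic for small n by comparing the Eulerian numbers, which count permutations by descents, with the Irwin–Hall cell probabilities:

```python
import math
from fractions import Fraction

def eulerian_row(n):
    """Eulerian numbers for k = 0..n-1: permutations of {1,...,n} with k descents."""
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

def irwin_hall_cell(n, k):
    """P(k <= S_n < k+1), exactly, by the inclusion-exclusion CDF formula."""
    def F(x):  # Irwin-Hall CDF at the integer x
        return sum((-1) ** j * math.comb(n, j) * Fraction(x - j) ** n
                   for j in range(x + 1)) / Fraction(math.factorial(n))
    return F(k + 1) - F(k)

n = 8
row = eulerian_row(n)
for k in range(n):
    assert Fraction(row[k], math.factorial(n)) == irwin_hall_cell(n, k)  # identity (3.3)
print([Fraction(a, math.factorial(n)) for a in row])
```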

3.3. Further considerations on concentration inequalities

We wish to compare our concentration inequality (3.2) with some classical ones. The first one is given by the well-known Azuma–Hoeffding inequality [2]. It follows from (2.6) that the predictable quadratic variation $\langle M \rangle_n$ of the martingale $(M_n)$ satisfies $\langle M \rangle_n \leq {s_n}/{4}$, where $s_n=\sum_{k=2}^n k^2$. In addition, its total quadratic variation satisfies $[M]_n=\sum_{k=1}^{n-1}(M_{k+1}-M_{k})^2 \leq s_n$. Consequently, we deduce from an improvement of the Azuma–Hoeffding inequality given by [2, (3.20)] that, for any $x \in \big]\frac12,1\big[$ and for all $n \geq 1$,

(3.4) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x \bigg) \leq \exp\bigg({-}\frac{2n^4}{s_n}\bigg(x - \frac{1}{2}\bigg)^2\bigg).\end{equation}

We can observe that (3.2) is much sharper than (3.4) for all values of $x \in \big]\frac12,1\big[$ . Furthermore, by using (3.3), we can also infer a concentration inequality by means of Chernoff’s inequality. Indeed, for any $x \in \big]\frac12,1\big[$ and for all $n \geq 1$ ,

(3.5) \begin{align} \mathbb{P}\bigg(\frac{D_n}{n} \geq x \bigg) = \mathbb{P}\Bigg(\sum_{k=1}^n U_k \geq \lceil n x \rceil\Bigg) & \leq \exp \big( nL(t_x)-t_x \lceil nx \rceil \big) \nonumber \\ & \leq \exp\!({-}n I(x)-\{nx\}t_x), \end{align}

which is also rougher than (3.2).
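To make the comparison concrete, here is a small numerical illustration of ours (not in the paper) of (3.4) versus (3.5) at $x=0.7$ and $n=100$; the Chernoff bound is already orders of magnitude smaller:

```python
import math

def L(t):  return math.log(math.expm1(t) / t)           # (1.5)
def Lp(t): return math.exp(t) / math.expm1(t) - 1.0 / t

def t_x(x):
    """Solve L'(t) = x by bisection."""
    lo, hi = 1e-12, 50.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Lp(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def azuma_bound(n, x):                       # right-hand side of (3.4)
    s_n = sum(k * k for k in range(2, n + 1))
    return math.exp(-2 * n ** 4 / s_n * (x - 0.5) ** 2)

def chernoff_bound(n, x):                    # right-hand side of (3.5)
    t = t_x(x)
    frac = math.ceil(n * x) - n * x          # {nx}
    return math.exp(-n * (x * t - L(t)) - frac * t)

n, x = 100, 0.7
print(azuma_bound(n, x), chernoff_bound(n, x))
```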

4. Three keystone lemmas

Denote by $m_n$ the Laplace transform of $D_n$ defined, for all $t \in \mathbb{R}$ , by

(4.1) \begin{equation} m_n(t)=\mathbb{E}[\!\exp\!(t D_n)].\end{equation}

We can observe that $m_n(t)$ is finite for all $t \in \mathbb{R}$ and all $n \geq 1$ since $D_n$ is finite. Let us introduce the generating function defined, for all $t \in \mathbb{R}$ and for all $z \in \mathbb{C}$, by $F(t,z)=\sum_{n=0}^\infty m_n(t) z^n$, where the initial value is such that, for all $t\in \mathbb{R}$, $m_0(t)=1$. Notice that the radius of convergence, denoted $R^F(t)$, depends on t and is positive since $|m_n(t)| \leq \textrm{e}^{n|t|}$. Moreover, we easily have, for all $|z|<R^F(0)=1$, $F(0,z) = {1}/({1-z})$. Our first lemma is devoted to the calculation of the generating function F; see also [4, p. 865], where a similar expression was given without proof (note that, there, $k_0$ should be replaced by $1-k_0$). Let us also remark that the recursive equation (4.4) was already given in [10, Section 4].

Lemma 4.1. For all $t\in \mathbb{R}$ , $R^F(t) = {t}/({\textrm{e}^t-1})$ . Moreover, for all $t \in \mathbb{R}$ and for all $z \in \mathbb{C}$ such that $|z| < R^F(t)$ ,

(4.2) \begin{equation} F(t,z)= \frac{1-\textrm{e}^{-t}}{1-\exp\!((\textrm{e}^t-1)z-t)}. \end{equation}

Proof. It follows from (2.2) that, for all $t\in \mathbb{R}$ and for all $n \geq 1$ ,

(4.3) \begin{align} m_{n+1} (t) & = \mathbb{E}[\!\exp\!(t D_{n+1})] = \mathbb{E}[\!\exp\!(t D_n)\mathbb{E}[\!\exp\!(t\xi_{n+1})\mid\mathcal{F}_n]], \notag \\ & = \mathbb{E}[\!\exp\!(t D_n)p_n\textrm{e}^t + \exp\!(t D_n)(1-p_n)] , \notag \\ & = m_n(t) + (\textrm{e}^t -1) \mathbb{E}[p_n\exp\!(t D_n)]. \end{align}

However, we already saw that $p_n= ({n-D_n})/({n+1})$ , which implies that

\begin{equation*} \mathbb{E}[p_n\exp\!(t D_n)] = \frac{n}{n+1}m_n(t)-\frac{1}{n+1}m_n^\prime(t). \end{equation*}

Consequently, we obtain from (4.3) that, for all $t \in \mathbb{R}$ and for all $n\geq 1$ ,

(4.4) \begin{equation} m_{n+1}(t)=\bigg(\frac{1+n \textrm{e}^t}{n+1}\bigg)m_n(t) + \bigg(\frac{1-\textrm{e}^t}{n+1}\bigg)m_n^\prime(t). \end{equation}
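As a numerical cross-check of ours (not part of the proof), the recursion (4.4) can be verified against the exact Laplace transform computed from the Eulerian numbers, since $m_n(t)$ is a finite sum of exponentials weighted by the descent distribution:

```python
import math

def eulerian_row(n):
    """Eulerian numbers: #permutations of {1,...,n} with k descents, k = 0..n-1."""
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

def laplace(n, t):
    """Exact m_n(t) = E[exp(t D_n)] and its derivative m_n'(t)."""
    row, fact = eulerian_row(n), math.factorial(n)
    m  = sum(a * math.exp(t * k) for k, a in enumerate(row)) / fact
    mp = sum(a * k * math.exp(t * k) for k, a in enumerate(row)) / fact
    return m, mp

t = 0.8
for n in range(1, 10):
    m, mp = laplace(n, t)
    lhs = laplace(n + 1, t)[0]
    rhs = ((1 + n * math.exp(t)) * m + (1 - math.exp(t)) * mp) / (n + 1)
    assert abs(lhs - rhs) < 1e-9 * lhs       # recursion (4.4)
print("recursion (4.4) holds for n = 1..9 at t = 0.8")
```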

We can observe that (4.4) remains true for $n=0$ . We deduce from (4.4) that, for all $|z|<R^F(t)$ ,

\begin{align*} \frac{\partial F(t,z)}{\partial z} & = \sum_{n=1}^\infty n m_n(t) z^{n-1}=\sum_{n=0}^\infty (n+1) m_{n+1}(t) z^{n} \\ & = \sum_{n=0}^\infty (1+n \textrm{e}^t)m_n(t) z^n + \sum_{n=0}^\infty (1-\textrm{e}^t)m_n^\prime(t)z^n \\ & = F(t,z)+\textrm{e}^t z\frac{\partial F(t,z)}{\partial z} + (1- \textrm{e}^t) \frac{\partial F(t,z)}{\partial t}, \end{align*}

where the last equality comes from the fact that $|m_n^\prime(t)| \leq n m_n(t)$ allows us to apply the dominated convergence theorem in order to differentiate the series in t. Hence, we have shown that the generating function F is the solution of the partial differential equation

(4.5) \begin{equation} (1- \textrm{e}^t z)\frac{\partial F(t,z)}{\partial z} + (\textrm{e}^t-1) \frac{\partial F(t,z)}{\partial t} = F(t,z) \end{equation}

with initial value

(4.6) \begin{equation} F(t,0)=m_0(t)=1. \end{equation}

We now proceed as in [8] in order to solve the partial differential equation (4.5) via the classical method of characteristics; see, e.g., [20]. Following this method, we first associate with the linear first-order partial differential equation (4.5) the ordinary differential system given by

\begin{equation*} \frac{\textrm{d} z}{1-\textrm{e}^t z} = \frac{\textrm{d} t}{\textrm{e}^t-1} = \frac{\textrm{d} w}{w}, \end{equation*}

where w stands for the generating function F. We assume in the following that $t>0$ , inasmuch as the proof for $t<0$ follows exactly the same lines. The equation binding w and t can be easily solved, and we obtain

(4.7) \begin{equation} w = C_1 (1-\textrm{e}^{-t}). \end{equation}

The equation binding z and t leads to the ordinary differential equation

\begin{equation*}\frac{\textrm{d} z}{\textrm{d} t} = -\frac{\textrm{e}^t}{\textrm{e}^t-1} z + \frac{1}{\textrm{e}^t-1}.\end{equation*}

We find by the method of variation of constants that

(4.8) \begin{equation} (\textrm{e}^{t}-1)z-t =C_2. \end{equation}

According to the method of characteristics, the general solution of (4.5) is obtained by coupling (4.7) and (4.8), namely

(4.9) \begin{equation} C_1 = f(C_2), \end{equation}

where f is a function which can be explicitly calculated from the boundary value in (4.6). We deduce from the conjunction of (4.7), (4.8), and (4.9) that, for all $t>0$ and for all $z \in \mathbb{C}$ such that $|z|<R^F(t)$ ,

(4.10) \begin{equation} F(t,z)=(1-\textrm{e}^{-t})f((\textrm{e}^{t}-1)z-t). \end{equation}

It only remains to determine the exact value of the function f by taking into account the initial condition (4.6). We obtain from (4.10) with $z=0$ and replacing $-t$ by t that

(4.11) \begin{equation} f(t) = \frac{1}{1-\textrm{e}^{t}}. \end{equation}

Finally, the explicit solution (4.2) clearly follows from (4.10) and (4.11). Moreover, we can observe that the radius of convergence comes immediately from (4.2), which completes the proof of Lemma 4.1.

The global expression (4.2) of the generating function F allows us to deduce a sharp expansion of the Laplace transform $m_n$ of $D_n$ , as follows.

Lemma 4.2. For any $t\neq 0$ ,

(4.12) \begin{equation} m_n(t)=\bigg(\frac{1-\textrm{e}^{-t}}{t}\bigg)\bigg(\frac{\textrm{e}^{t}-1}{t}\bigg)^n(1+r_n(t)), \end{equation}

where the remainder term $r_n(t)$ goes exponentially fast to zero as

(4.13) \begin{equation} |r_n(t)| \leq |t|\textrm{e} \bigg(1 + \frac{1}{\pi} + \frac{2+n}{\sqrt{t^2+4\pi^2}\,}\bigg) \bigg(1 + \frac{4\pi^2}{t^2}\bigg)^{-n/2}. \end{equation}
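Before the proof, Lemma 4.2 can be illustrated numerically (a check of ours, not in the paper): the exact $m_n(t)$, computed from the descent distribution via the Eulerian numbers, approaches the leading factor of (4.12) at the geometric rate quantified by (4.13).

```python
import math

def eulerian_row(n):
    """Eulerian numbers: #permutations of {1,...,n} with k descents, k = 0..n-1."""
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0)
               for k in range(m)]
    return row

def m_exact(n, t):
    """m_n(t) = E[exp(t D_n)], exactly, from the distribution of D_n."""
    row = eulerian_row(n)
    return sum(a * math.exp(t * k) for k, a in enumerate(row)) / math.factorial(n)

def leading_term(n, t):                      # main factor in (4.12)
    return (1 - math.exp(-t)) / t * (math.expm1(t) / t) ** n

def bound(n, t):                             # right-hand side of (4.13)
    return abs(t) * math.e * (1 + 1 / math.pi + (2 + n) / math.sqrt(t * t + 4 * math.pi ** 2)) \
           * (1 + 4 * math.pi ** 2 / (t * t)) ** (-n / 2)

t = 1.5
for n in range(1, 16):
    r = m_exact(n, t) / leading_term(n, t) - 1.0
    assert abs(r) <= bound(n, t)             # Lemma 4.2
print("remainder bound (4.13) verified for n = 1..15 at t = 1.5")
```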

Proof. Throughout the proof, we assume that $t \neq 0$ . It follows from (4.2) that F is a meromorphic function on $\mathbb{C}$ with simple poles given, for all $\ell \in \mathbb{Z}$ , by

\begin{equation*} z_\ell^F(t) = \frac{t+2\textrm{i} \ell \pi}{\textrm{e}^t-1}. \end{equation*}

By a slight abuse of notation, we still denote by F this meromorphic extension. Hereafter, for the sake of simplicity, we consider the function $\mathcal{F}$ defined, for all $z \in \mathbb{C}$ , by

(4.14) \begin{equation} \mathcal{F}(t,z)= \frac{ 1}{1-\textrm{e}^{-t}} F(t,z)=f( \xi(t,z)), \end{equation}

where the function f was previously defined in (4.11) and the function $\xi$ is given, for all $z \in \mathbb{C}$ , by

(4.15) \begin{equation} \xi(t,z)= (\textrm{e}^{t}-1)z-t. \end{equation}

By the same token, we also introduce the functions $\mathcal{G}$ and $\mathcal{H}$ defined, for all $z \in \mathbb{C}$ , by

(4.16) \begin{equation} \mathcal{G}(t,z)= g( \xi(t,z)), \qquad \mathcal{H}(t,z)= h( \xi(t,z)), \end{equation}

where g and h are given, for all $z \in \mathbb{C}^*$ , by

(4.17) \begin{equation} g(z)= -\frac{1}{z}, \qquad h(z)= \frac{1}{1-\textrm{e}^z} + \frac{1}{z}. \end{equation}

We can immediately observe from (4.17) that $\mathcal{H} = \mathcal{F}-\mathcal{G}$, which means that $\mathcal{H}$ is obtained by subtracting from f its polar part at the origin. Given a function $\mathcal{K}(t,z)$ analytic in z on some set $\{(t,z)\in \mathbb{R}\times \mathbb{C}, |z|\leq R^\mathcal{K}(t) \}$, we denote by $m_n^\mathcal{K}(t)$ the coefficients of its Taylor series at the point (t, 0), i.e. $\mathcal{K}(t,z)= \sum_{n=0}^\infty m_n^\mathcal{K}(t) z^n$. Thanks to this notation, we clearly have $m_n^{\mathcal{F}} (t)= m_n^{\mathcal{G}} (t)+ m_n^{\mathcal{H}} (t)$. Moreover, we deduce from (4.14) that

(4.18) \begin{equation} m_n^{\mathcal{F}} (t)= \frac{ 1}{1-\textrm{e}^{-t}} m_n^F(t). \end{equation}

The first coefficient $m_n^{\mathcal{G}} (t)$ can be explicitly computed by

(4.19) \begin{equation} m_n^{\mathcal{G}} (t)=\frac{1}{t} \bigg(\frac{\textrm{e}^t-1}{t}\bigg)^n. \end{equation}

As a matter of fact, for all $z \in \mathbb{C}$ such that $|z| < R^{\mathcal{G}}(t)=t(\textrm{e}^t-1)^{-1}$ , it follows from (4.15) and (4.16) that

\begin{equation*} \mathcal{G}(t,z)=-\frac{1}{\xi(t,z)}= \frac{1}{t-(\textrm{e}^{t}-1)z} = \frac{1}{t} \sum_{n= 0}^\infty \bigg(\frac{\textrm{e}^t-1}{t}\bigg)^n z^n. \end{equation*}

Consequently, as $m_n(t)=m_n^F(t)$ , we obtain from (4.18) that

\begin{equation*} m_n(t) = (1- \textrm{e}^{-t}) ( m_n^{\mathcal{G}} (t)+ m_n^{\mathcal{H}} (t) )= (1- \textrm{e}^{-t}) m_n^{\mathcal{G}}(t) (1+r_n(t)), \end{equation*}

which leads via (4.19) to

\begin{equation*} m_n(t)= \bigg(\frac{1-\textrm{e}^{-t}}{t} \bigg)\bigg(\frac{\textrm{e}^{t}-1}{t}\bigg)^n(1+r_n(t)), \end{equation*}

where the remainder term $r_n(t)$ is the ratio $r_n(t)=m_n^{\mathcal{H}} (t)/m_n^{\mathcal{G}} (t)$ . From now on, we focus our attention on a sharp upper bound for $m_n^{\mathcal{H}} (t)$ . The function h is meromorphic with simple poles at the points $2\textrm{i}\pi \mathbb{Z}^*$ . Moreover, for a given $t \neq 0$ , z is a pole of $\mathcal{H}$ if and only if $(\textrm{e}^t-1)z - t$ is a pole of h. Hence, the poles of $\mathcal{H}$ are given, for all $\ell \in \mathbb{Z}^*$ , by

\begin{equation*} z_\ell^\mathcal{H}(t)= \frac{t+2\textrm{i} \ell \pi}{\textrm{e}^t-1}. \end{equation*}

In addition, its radius of convergence $R^\mathcal{H} (t)$ is simply the distance from 0 to the nearest of these poles. Consequently, we obtain

\begin{equation*} R^\mathcal{H}(t)= |z_1^\mathcal{H}(t)|= \frac{t}{\textrm{e}^t-1} \sqrt{1 + \frac{4\pi^2}{t^2}}. \end{equation*}

Furthermore, it follows from Cauchy’s inequality that, for any $0<\rho(t)<R^\mathcal{H}(t)$ ,

(4.20) \begin{equation} |m_n^\mathcal{H} (t)| \leq \frac{ \Vert \mathcal{H}(t,\cdot) \Vert_{\infty, \mathcal{C}(0,\rho(t))}} {\rho(t)^n }, \end{equation}

where the norm in the numerator is

\begin{equation*} \Vert \mathcal{H}(t,\cdot)\Vert_{\infty, \mathcal{C}(0,\rho(t))}= \sup\{ |\mathcal{H}(t,z)|, |z|= \rho(t)\}.\end{equation*}

Since $\xi(t,\mathcal{C}(0,\rho(t)))$ coincides with the circle $\mathcal{C}( -t, |\textrm{e}^t- 1| \rho(t))$ , we deduce from the identity $\mathcal{H}(t,z)= h( \xi(t,z))$ that $\Vert \mathcal{H}(t,\cdot) \Vert_{\infty, \mathcal{C}(0,\rho(t))} = \Vert h \Vert_{\infty, \mathcal{C}({-}t, |\textrm{e}^t-1|\rho(t))}$ . Hereafter, we introduce a radial parameter

(4.21) \begin{equation} \rho(t,\alpha)=\frac{t}{\textrm{e}^t-1} \sqrt{1 + \frac{4 \alpha \pi^2}{t^2}}, \end{equation}

where $\alpha$ is a real number in the interval $]{-}t^2/4\pi^2, 1[$ . We also define the distance between the circle $\mathcal{C}({-}t, |\textrm{e}^t-1|\rho(t, \alpha))$ and the set of the poles of h,

\begin{equation*} \delta(t,\alpha)= d ( \mathcal{C}({-}t, |\textrm{e}^t-1|\rho(t, \alpha)) , 2\textrm{i}\pi \mathbb{Z}^*). \end{equation*}

We clearly have from the Pythagorean theorem that $\delta(t,\alpha)= \sqrt{t^2+ 4\pi^2} - \sqrt{t^2+ 4\alpha \pi^2}$ . In addition, we can easily check that

\begin{equation*} \delta(t,\alpha)= \frac{4\pi^2(1-\alpha)}{\sqrt{t^2+ 4\pi^2} + \sqrt{t^2+ 4\alpha \pi^2}\,}, \end{equation*}

which ensures that

(4.22) \begin{equation} \frac{2\pi^2(1-\alpha)}{\sqrt{t^2+ 4\pi^2}\,} < \delta(t,\alpha) < \frac{4\pi^2(1-\alpha)}{\sqrt{t^2+ 4\pi^2}\,}. \end{equation}

It follows from the maximum principle that $\Vert h \Vert_{\infty, \mathcal{C}({-}t, |\textrm{e}^t-1|\rho(t, \alpha))} \leq \Vert h \Vert_{\infty,\partial \mathcal{D} (L,\Lambda, \delta(t,\alpha))}$ where, for $L>0$ and $\Lambda>0$ large enough, $\mathcal{D} (L,\Lambda,\delta(t,\alpha))=\mathcal{B}(L,\Lambda)\cap{\mathcal{A}_h(\delta(t,\alpha))}^{\textrm{c}}$ is the domain given by the intersection of the box $\mathcal{B} (L,\Lambda)=\{z\in\mathbb{C},|\textrm{Re}(z)|<L,|\textrm{Im}(z)|<\Lambda\}$ and the complementary set of

\begin{equation*} \mathcal{A}_h(\delta(t,\alpha))=\{z\in \mathbb{C},d(z,2\textrm{i}\pi\mathbb{Z}^*) \leq \delta(t,\alpha) \text{ with } |\textrm{Im}(z)| \geq \pi\}.\end{equation*}

On the one hand we have, for all $y \in \mathbb{R}$, $|\textrm{e}^{L+\textrm{i}y}-1|\geq \textrm{e}^{L}-1$ and $|L+\textrm{i}y| \geq L$, implying that, for all $y \in \mathbb{R}$,

\begin{equation*}|h(L+\textrm{i}y)| \leq \frac{1}{\textrm{e}^{L}-1} + \frac{1}{L}.\end{equation*}

By the same token, we also have, for all $y \in \mathbb{R}$, $|\textrm{e}^{-L+\textrm{i}y}-1|\geq 1- \textrm{e}^{-L}$ and $|{-}L+\textrm{i}y| \geq L$, leading, for all $y \in \mathbb{R}$, to

\begin{equation*}|h({-}L+\textrm{i}y)| \leq \frac{1}{1- \textrm{e}^{-L}} + \frac{1}{L}.\end{equation*}

On the other hand, we can choose $\Lambda$ of the form $\Lambda=(2 k+1) \pi $ for a value $k\in \mathbb{N}^*$ large enough. Then, for all $x \in \mathbb{R}$ , $\exp\!(x+(2k+1)\textrm{i}\pi)=-\exp\!(x)$ and $|x+(2k+1)\textrm{i}\pi|\geq(2k+1)\pi$ , implying that, for all $x \in \mathbb{R}$ ,

(4.23) \begin{equation} |h(x+(2k+1)\textrm{i}\pi)| \leq 1 + \frac{1}{(2 k+1) \pi}. \end{equation}

By letting L and $\Lambda$ go to infinity, we obtain

\begin{equation*} \Vert h\Vert_{\infty, \mathcal{C}({-}t,(\textrm{e}^t-1)\rho(t,\alpha))} \leq \max\!\big(1,\Vert h\Vert_{\infty,\partial\mathcal{D}(\delta(t,\alpha))}\big), \end{equation*}

where $\mathcal{D} (\delta(t,\alpha))$ is the domain $\mathcal{D} (\delta(t,\alpha))= {\mathcal{A}_h(\delta(t,\alpha))}^{\textrm{c}}$ . We clearly have from (4.17) that, for all $z\in \partial \mathcal{D} (\delta(t,\alpha))$ with $|\textrm{Im}(z)| > \pi$ ,

(4.24) \begin{equation} |h(z)| \leq |f(z)| + \frac{1}{|z|} \leq |f(z)| + \frac{1}{\pi}. \end{equation}

Moreover, it follows from tedious but straightforward calculations that

\begin{equation*} \inf_{z \in \mathbb{C}, |z|= \delta(t,\alpha)}|1-\textrm{e}^z| = 1-\textrm{e}^{-\delta(t,\alpha)}, \end{equation*}

which ensures that

(4.25) \begin{equation} |f(z)| \leq\frac{1}{1-\textrm{e}^{-\delta(t, \alpha)}}. \end{equation}

In addition, we obtain from (4.23) that, for all $z\in \partial \mathcal{D} (\delta(t,\alpha))$ with $|\textrm{Im}(z)|=\pi$ ,

(4.26) \begin{equation} |h(z)| \leq 1+ \frac{1}{\pi}. \end{equation}

Hence, we find from (4.24), (4.25), and (4.26) that

\begin{equation*} \Vert h \Vert_{\infty,\partial \mathcal{D} (\delta(t,\alpha))} \leq \frac{1}{1-\textrm{e}^{-\delta(t, \alpha)}}+ \frac{1}{\pi}. \end{equation*}

We were not able to optimize the previous upper bound explicitly. However, it is not hard to see that

\begin{equation*} \frac{1}{1-\textrm{e}^{-\delta(t, \alpha)}}\leq 1+\frac{1}{\delta(t, \alpha)}, \end{equation*}

which gives us

(4.27) \begin{equation} \Vert h \Vert_{\infty,\partial \mathcal{D} (\delta(t,\alpha))} \leq 1+ \frac{1}{\pi}+\frac{1}{\delta(t, \alpha)}. \end{equation}

Consequently, we deduce from (4.20), (4.21), (4.22), and (4.27) that, for all $t \neq 0$ and for all $n \geq 1$ ,

(4.28) \begin{equation} |m_n^\mathcal{H}(t)| \leq \bigg(\frac{\textrm{e}^t -1 }{t}\bigg)^n \varphi_n(t,\alpha), \end{equation}

where

(4.29) \begin{equation} \varphi_n(t,\alpha)= \bigg(1+\frac{1}{\pi}+\frac{\sqrt{t^2+ 4\pi^2}}{2\pi^2(1-\alpha)}\bigg) \bigg(1+\frac{4 \alpha \pi^2}{t^2} \bigg)^{-n/2}. \end{equation}

For the sake of simplicity, let $\Phi$ be the function defined, for all $\alpha \in\! ]{-}t^2/4\pi^2,1[$ , by

\begin{equation*} \Phi(\alpha)= \bigg(\frac{1}{1-\alpha}\bigg)\bigg(1+\frac{4 \alpha \pi^2}{t^2} \bigg)^{-n/2}. \end{equation*}

We can easily see that $\Phi$ is a convex function reaching its minimum for the value

\begin{equation*} \alpha=1-\bigg(1 + \frac{t^2}{4\pi^2}\bigg)\bigg(1+\frac{n}{2}\bigg)^{-1}. \end{equation*}
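This minimizer can be confirmed numerically; a quick check of ours (not in the paper), at an arbitrary choice of t and n:

```python
import math

t, n = 2.0, 50
c = 4 * math.pi ** 2 / t ** 2

def Phi(a):
    """The function Phi above, defined for -t^2/(4 pi^2) < a < 1."""
    return (1 + c * a) ** (-n / 2) / (1 - a)

# the closed-form minimizer stated in the text
a_star = 1 - (1 + t ** 2 / (4 * math.pi ** 2)) / (1 + n / 2)

# Phi is convex on its domain, so a_star should beat nearby admissible points
for d in (-0.3, -0.05, -0.001, 0.001, 0.02):
    a = a_star + d
    if -1 / c < a < 1:
        assert Phi(a_star) <= Phi(a)
print(a_star, Phi(a_star))
```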

Some numerical experiments show that this explicit value appears to be close to the optimal value minimizing $\varphi_n(t,\alpha)$. By plugging $\alpha$ into (4.29), we obtain from (4.28) that, for all $t \neq 0$ and for all $n \geq 1$,

(4.30) \begin{align} |m_n^\mathcal{H}(t)| & \leq \bigg(\frac{\textrm{e}^t -1 }{t}\bigg)^n \bigg(1 + \frac{1}{\pi} + \frac{2+n}{\sqrt{t^2+4\pi^2}\,}\bigg)\bigg(1+\frac{2}{n} \bigg)^{n/2} \bigg(1+\frac{4 \pi^2}{t^2} \bigg)^{-n/2} \notag \\ & \leq \textrm{e}\bigg(\frac{\textrm{e}^t -1 }{t}\bigg)^n\bigg(1 + \frac{1}{\pi} + \frac{2+n}{\sqrt{t^2+4\pi^2}\,}\bigg) \bigg(1+\frac{4 \pi^2}{t^2} \bigg)^{-n/2}. \end{align}

Finally, (4.13) follows from (4.19) together with (4.30), which completes the proof of Lemma 4.2.

We now focus our attention on an estimate of the Laplace transform $m_n$ of $D_n$ in the complex plane, since $m_n$ clearly extends to an analytic function on $\mathbb{C}$ . More precisely, our goal is to estimate $m_n(t+\textrm{i}v)$ for $t \neq 0$ and for $v \in \mathbb{R}$ such that $|v|< \pi$ . Note that $m_n$ is $2\textrm{i}\pi$ -periodic.

Lemma 4.3. For any $t \neq 0$ and for all $v \in \mathbb{R}$ such that $|v|<\pi$ ,

(4.31) \begin{equation} m_n(t+\textrm{i}v) = \bigg(\frac{1-\textrm{e}^{-(t+\textrm{i}v)}}{t+\textrm{i}v}\bigg) \bigg(\frac{\textrm{e}^{t+\textrm{i}v}-1}{t+\textrm{i}v}\bigg)^n(1+r_n(t+iv)), \end{equation}

where the remainder term $r_n(t+\textrm{i}v)$ is exponentially negligible and satisfies

(4.32) \begin{equation} |r_n(t+\textrm{i}v)| \leq \sqrt{t^2+v^2}\bigg(1+\frac{1}{\pi}+\frac{\sqrt{t^2+4\pi^2}}{\pi(\pi-|v|)}\bigg) \bigg(\frac{t^2+v^2}{t^2+\pi^2}\bigg)^{n/2}. \end{equation}

Moreover, for any $t \neq 0$ and for all $v \in \mathbb{R}$ such that $|v|\leq \pi$ , we also have the alternative upper bound

(4.33) \begin{align} |m_n(t+\textrm{i}v)| & \leq |1-\textrm{e}^{-(t+\textrm{i}v)}|\bigg(\frac{\textrm{e}^t-1}{t}\bigg)^n \nonumber \\ & \quad \times \bigg(\frac{1}{\sqrt{t^2+v^2}\,} \exp\bigg({-}n\frac{t^2L^{\prime\prime}(t)}{t^2+\pi^2}\frac{v^2}{2}\bigg) \nonumber \\ & \qquad\quad + \bigg(1+\frac1\pi+\frac{2\sqrt{t^2+\pi^2}}{\pi^2-4}\bigg) \exp\bigg({-}n\frac{4t^2 L^{\prime\prime}(t)}{\pi^2(t^2+4)} \frac{v^2}{2} \bigg) \bigg), \end{align}

where the second derivative of the asymptotic cumulant-generating function L is the positive function given by

\begin{equation*} L^{\prime\prime}(t) = \frac{(\textrm{e}^t-1)^2 -t^2 \textrm{e}^t}{(t(\textrm{e}^t-1))^2}. \end{equation*}

Proof. We still assume in the following that $t \neq 0$ . We also extend F(t, z) to the complex plane with respect to the first variable, $F(t+\textrm{i}v,z) = \sum_{n= 0}^\infty m_n(t+\textrm{i}v) z^n$ , where the initial value is such that $m_0(t+\textrm{i}v)=1$ . Since $|m_n(t+\textrm{i}v)| \leq m_n(t)$ , the radius of convergence in z of $F(t+\textrm{i}v,\cdot)$ is at least that for $v=0$ . Moreover, the poles of $F(t+\textrm{i}v,\cdot)$ are given, for all $\ell \in \mathbb{Z}$ , by

\begin{equation*} z_\ell^F(t+\textrm{i}v) = \frac{(t+\textrm{i}v)+2\textrm{i}\ell\pi}{\textrm{e}^{(t+\textrm{i}v)}-1}. \end{equation*}

Consequently, for all $v \in \mathbb{R}$ such that $|v|< \pi$ ,

\begin{equation*}R^F (t+\textrm{i}v)= \frac{|t+\textrm{i}v|}{|\textrm{e}^{t+\textrm{i}v}-1|}.\end{equation*}

As in the proof of Lemma 4.2, we can split $F(t+\textrm{i}v,z)$ into two terms,

\begin{equation*} F(t+\textrm{i}v,z) = (1-\textrm{e}^{-(t+\textrm{i}v)})(\mathcal{G}(t+\textrm{i}v,z) + \mathcal{H}(t+\textrm{i}v,z)), \end{equation*}

where we recall from (4.16) that, for all $z \in \mathbb{C}$ and for all $v \in \mathbb{R}$ such that $|v|< \pi$ ,

\begin{equation*} \mathcal{G}(t+\textrm{i}v,z) = g(\xi(t+\textrm{i}v,z)), \qquad \mathcal{H}(t+\textrm{i}v,z) = h(\xi(t+\textrm{i}v,z)), \end{equation*}

where g and h are given, for all $z \in \mathbb{C}^*$ , by

\begin{equation*} g(z)= -\frac{1}{z}, \qquad h(z) = \frac{1}{1-\textrm{e}^z} + \frac{1}{z} \end{equation*}

and the function $\xi$ is such that $\xi(t+\textrm{i}v,z)= (\textrm{e}^{t+\textrm{i}v}-1)z-(t+\textrm{i}v)$ . By holomorphic extension, we deduce from (4.19) that

\begin{equation*} m_n^{\mathcal{G}}(t+\textrm{i}v) = \frac{1}{t+\textrm{i}v}\bigg(\frac{\textrm{e}^{t+\textrm{i}v}-1}{t+\textrm{i}v}\bigg)^n. \end{equation*}

Moreover, the poles of $\mathcal{H}(t+\textrm{i}v,\cdot)$ are given, for all $\ell \in \mathbb{Z}^*$ , by

\begin{equation*} z_\ell^\mathcal{H}(t+\textrm{i}v) = \frac{t + \textrm{i}(v+2 \ell \pi)}{\textrm{e}^{t+\textrm{i}v}-1}. \end{equation*}

Hence, we obtain that, for all $v \in \mathbb{R}$ such that $|v|< \pi$ ,

\begin{equation*} R^\mathcal{H}(t+\textrm{i}v) = \frac{\sqrt{t^2 + (2\pi-|v|)^2}}{|\textrm{e}^{t+\textrm{i}v}-1|} > \frac{\sqrt{t^2 + v^2}}{|\textrm{e}^{t+\textrm{i}v}-1|} = R^F(t+\textrm{i}v). \end{equation*}

It follows once again from Cauchy’s inequality that, for any $0<\rho(t+\textrm{i}v)<R^\mathcal{H}(t+\textrm{i}v)$ ,

(4.34) \begin{equation} |m_n^\mathcal{H}(t+\textrm{i}v)| \leq \frac{\Vert\mathcal{H}(t+\textrm{i}v,\cdot)\Vert_{\infty,\mathcal{C}(0,\rho(t+\textrm{i}v))}}{\rho(t+\textrm{i}v)^n}, \end{equation}

where the norm in the numerator is

\begin{equation*} \Vert\mathcal{H}(t+\textrm{i}v,\cdot)\Vert_{\infty,\mathcal{C}(0,\rho(t+\textrm{i}v))} = \sup\{|\mathcal{H}(t+\textrm{i}v,z)|,|z|=\rho(t+\textrm{i}v)\}. \end{equation*}

Since the image of the circle $\mathcal{C}(0,\rho(t+\textrm{i}v))$ under the map $\xi(t+\textrm{i}v,\cdot)$ coincides with the circle $\mathcal{C}({-}(t+\textrm{i}v),|\textrm{e}^{t+\textrm{i}v}-1|\rho(t+\textrm{i}v))$ , we obtain from $\mathcal{H}(t+\textrm{i}v,z)=h(\xi(t+\textrm{i}v,z))$ that

\begin{equation*} \Vert\mathcal{H}(t+\textrm{i}v,\cdot)\Vert_{\infty,\mathcal{C}(0,\rho(t+\textrm{i}v))} = \Vert h\Vert_{\infty,\mathcal{C}({-}(t+\textrm{i}v),|\textrm{e}^{t+\textrm{i}v}-1|\rho(t+\textrm{i}v))}. \end{equation*}

Hereafter, since $|v|< \pi$ , we can take the radius

(4.35) \begin{equation} \rho(t+\textrm{i}v) = \frac{\sqrt{t^2 + \pi^2}}{|\textrm{e}^{t+\textrm{i}v}-1|}. \end{equation}

Moreover, as in the proof of Lemma 4.2, denote by $\delta(t+\textrm{i}v)$ the distance between the circle $\mathcal{C}({-}(t+\textrm{i}v),|\textrm{e}^{t+\textrm{i}v}-1|\rho(t+\textrm{i}v))$ and the set of poles of h,

\begin{equation*} \delta(t+\textrm{i}v) = d(\mathcal{C}({-}(t+\textrm{i}v),|\textrm{e}^{t+\textrm{i}v}-1|\rho(t+\textrm{i}v)),2\textrm{i}\pi\mathbb{Z}^*). \end{equation*}

It follows from (4.35) and the Pythagorean theorem that

\begin{equation*} \delta(t+\textrm{i}v) = \sqrt{t^2+(2\pi-|v|)^2} - \sqrt{t^2 + \pi^2}. \end{equation*}

We can observe that

\begin{equation*} \delta(t+\textrm{i}v) = \frac{(3\pi-|v|)(\pi-|v|)}{\sqrt{t^2+(2\pi-|v|)^2}+\sqrt{t^2+ \pi^2}\,}, \end{equation*}

which leads to

(4.36) \begin{equation} \frac{\pi(\pi-|v|)}{\sqrt{t^2+4\pi^2}\,} < \delta(t+\textrm{i}v) < \pi-| v |. \end{equation}

Using (4.34) together with (4.27) and (4.36), we obtain

\begin{equation*} |m_n^\mathcal{H}(t+\textrm{i}v)| \leq \bigg(1 + \frac{1}{\pi} + \frac{1}{\delta(t+\textrm{i}v)}\bigg) \frac{1}{\rho(t+\textrm{i}v)^n} \leq \bigg(1 + \frac{1}{\pi} + \frac{\sqrt{t^2+4\pi^2}}{\pi(\pi-|v|)}\bigg)\frac{1}{\rho(t+\textrm{i}v)^n}. \end{equation*}

Hence, we find that $m_n(t+\textrm{i}v) = (1-\textrm{e}^{-(t+\textrm{i}v)})m_n^{\mathcal{G}}(t+\textrm{i}v)(1+r_n(t+\textrm{i}v))$ , where the remainder term $r_n(t+\textrm{i}v)$ is the ratio

\begin{equation*} r_n(t+\textrm{i}v) = \frac{m_n^{\mathcal{H}}(t+\textrm{i}v)}{m_n^{\mathcal{G}}(t+\textrm{i}v)} \end{equation*}

that satisfies

(4.37) \begin{equation} |r_n(t+\textrm{i}v)| \leq \sqrt{t^2+v^2}\bigg(1 + \frac{1}{\pi} + \frac{\sqrt{t^2+4\pi^2}}{\pi(\pi-|v|)}\bigg) \bigg(\frac{t^2+v^2}{t^2+\pi^2}\bigg)^{n/2}. \end{equation}

Hereafter, we go further in the analysis of $m_n(t+\textrm{i}v)$ by providing a different upper bound for $m_n^\mathcal{H}(t+\textrm{i}v)$ . Our motivation is that the factor ${1}/({\pi -|v|})$ in (4.37) becomes very large when $|v|$ is close to $\pi$ . Our strategy is no longer to obtain the best exponent by taking the largest radius, close to the radius of convergence. Instead, we consider a smaller radius in order to stay away from the poles, but not too small so as to retain an exponentially small factor with respect to $m_n^\mathcal{G}(t)$ . Let $\beta$ be the function defined, for all $|v| < \pi$ , by

\begin{equation*} \beta(v) = \frac{2(1- \cos\!(v))}{v^2}. \end{equation*}

It is clear that $\beta$ is an even function, increasing on $[{-}\pi,0]$ and decreasing on $[0,\pi]$ with a maximum value $\beta(0)=1$ , and such that $\beta(\pi)=4/\pi^2$ . We replace the radius previously given by (4.35) by the new radius

\begin{equation*} \rho(t+\textrm{i}v) = \frac{\sqrt{t^2 + \beta(v)v^2}}{|\textrm{e}^{t+\textrm{i}v}-1|}. \end{equation*}

We can observe that we only replaced $\pi^2$ by $\beta(v)v^2=2(1-\cos\!(v))$ . As before, denote by $\delta(t+\textrm{i}v)$ the distance between the circle $\mathcal{C}({-}(t+\textrm{i}v),|\textrm{e}^{t+\textrm{i}v}-1|\rho(t+\textrm{i}v))$ and the set of the poles of h,

\begin{equation*} \delta(t+\textrm{i}v)= \sqrt{t^2 + (2\pi - |v|)^2} - \sqrt{t^2 + \beta(v)v^2}. \end{equation*}

As in the proof of (4.36), we obtain

\begin{equation*} \frac{\pi^2-4}{2\sqrt{t^2+\pi^2}\,} < \delta(t+\textrm{i}v) < 2\pi-\bigg(1+\frac{2}{\pi}\bigg)|v|, \end{equation*}

which ensures that

\begin{align*} |m_n^\mathcal{H}(t+\textrm{i}v)| & \leq \bigg(1 + \frac{1}{\pi} + \frac{1}{\delta(t+\textrm{i}v)}\bigg)\frac{1}{\rho(t+\textrm{i}v)^n}, \\ & \leq \bigg(1 + \frac{1}{\pi} + \frac{2\sqrt{t^2+\pi^2}}{\pi^2-4}\bigg) \bigg(\frac{|\textrm{e}^{t+\textrm{i}v}-1|^2}{t^2 + \beta(v)v^2}\bigg)^{n/2}. \end{align*}

It follows from straightforward calculation that

\begin{align*} \frac{|\textrm{e}^{t+\textrm{i}v}-1|^2}{t^2 + \beta(v)v^2} & = \frac{(\textrm{e}^t-1)^2 + 2\textrm{e}^t(1-\cos\!(v))}{t^2+\beta(v)v^2} = \frac{(\textrm{e}^t-1)^2 + \textrm{e}^t\beta(v)v^2}{t^2 + \beta(v)v^2} \\ & = \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(\frac{t^2}{t^2+\beta(v)v^2} + \frac{t^2\textrm{e}^t\beta(v)v^2}{(\textrm{e}^t-1)^2(t^2+\beta(v)v^2)}\bigg) \\ & = \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(1 - \bigg(\frac{(\textrm{e}^t-1)^2-t^2\textrm{e}^t}{(\textrm{e}^t-1)^2}\bigg) \frac{\beta(v)v^2}{t^2+\beta(v)v^2}\bigg). \end{align*}

Moreover, we also have from (1.5) that, for all $t \neq 0$ ,

(4.38) \begin{equation} L^{\prime\prime}(t) = \frac{(\textrm{e}^t-1)^2 - t^2\textrm{e}^t}{(t(\textrm{e}^t-1))^2}. \end{equation}

In addition, by using that for $|v| \leq \pi$ we have

\begin{equation*}\frac{\beta(v)v^2}{t^2+\beta(v)v^2} \geq \frac{4v^2}{\pi^2(t^2+4)},\end{equation*}

we deduce from the elementary inequality $1-x \leq \exp\!({-}x)$ that

(4.39) \begin{equation} |m_n^\mathcal{H}(t+\textrm{i}v)| \leq \bigg(1+\frac1\pi+\frac{2\sqrt{t^2+\pi^2}}{\pi^2-4}\bigg)\bigg(\frac{\textrm{e}^t-1}{t}\bigg)^n \exp\bigg({-}n\frac{4t^2L^{\prime\prime}(t)}{\pi^2(t^2+4)}\frac{v^2}{2}\bigg). \end{equation}

We also recall that

\begin{equation*} m_n^{\mathcal{G}}(t+\textrm{i}v) = \frac{1}{t+\textrm{i}v}\bigg(\frac{\textrm{e}^{t+\textrm{i}v}-1}{t+\textrm{i}v}\bigg)^n. \end{equation*}

We also have from straightforward calculation that

(4.40) \begin{align} \frac{|\textrm{e}^{t+\textrm{i}v}-1|^2}{|t+\textrm{i}v|^2} & = \frac{(\textrm{e}^t-1)^2+2\textrm{e}^t(1-\cos\!(v))}{t^2+v^2} = \frac{(\textrm{e}^t-1)^2+\textrm{e}^t\beta(v)v^2}{t^2+v^2} \notag \\ & = \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(\frac{t^2}{t^2+v^2} + \frac{t^2\textrm{e}^t\beta(v)v^2}{(\textrm{e}^t-1)^2(t^2+v^2)}\bigg) \end{align}
(4.41) \begin{align} & = \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(1 - \bigg(\frac{(\textrm{e}^t-1)^2-t^2\textrm{e}^t\beta(v)}{(\textrm{e}^t-1)^2}\bigg)\frac{v^2}{t^2+v^2}\bigg) \notag \\ & = \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(1 - \bigg(\frac{(\textrm{e}^t-1)^2-t^2\textrm{e}^t}{(\textrm{e}^t-1)^2}\bigg)\frac{v^2}{t^2+v^2} - \frac{t^2\textrm{e}^t}{(\textrm{e}^t-1)^2}\frac{(1-\beta(v))v^2}{t^2+v^2}\bigg) \notag \\ & \leq \frac{(\textrm{e}^t-1)^2}{t^2}\bigg(1 - \bigg(\frac{(\textrm{e}^t-1)^2-t^2\textrm{e}^t}{(\textrm{e}^t-1)^2}\bigg)\frac{v^2}{t^2+\pi^2}\bigg), \end{align}

since $\beta(v) \leq 1$ . Hence, we obtain from (4.38) that

(4.42) \begin{equation} |m_n^\mathcal{G}(t+\textrm{i}v)| \leq \frac{1}{\sqrt{t^2+ v^2}\,}\bigg(\frac{\textrm{e}^t-1}{t}\bigg)^n \exp\bigg({-}n\frac{t^2L^{\prime\prime}(t)}{t^2+\pi^2}\frac{v^2}{2}\bigg). \end{equation}

Finally, we already saw that $m_n(t+\textrm{i}v) = (1-\textrm{e}^{-(t+\textrm{i}v)})(m_n^{\mathcal{G}}(t+\textrm{i}v) + m_n^{\mathcal{H}}(t+\textrm{i}v))$ . Consequently, (4.39) together with (4.42) clearly lead to (4.33), which completes the proof of Lemma 4.3.

5. Proof of the sharp large-deviation principle

Let us start with an elementary lemma concerning the asymptotic cumulant-generating function L defined by (1.5).

Lemma 5.1. The function $L \,:\, \mathbb{R} \to \mathbb{R}$ is twice differentiable and strictly convex, and its first derivative $L^{\prime}\,:\,\mathbb{R} \to ]0,1[$ is a bijection. In particular, for each $x\in ]0,1[$ , there exists a unique value $t_x\in \mathbb{R}$ such that

(5.1) \begin{equation} I(x)= x t_x - L(t_x), \end{equation}

where I is the Fenchel–Legendre transform of L. The value $t_x$ is also characterized by the relation $L^{\prime}(t_x) =x$ where, for all $t \neq 0$ ,

(5.2) \begin{equation} L^{\prime}(t) = \frac{\textrm{e}^t(t-1)+1}{t(\textrm{e}^t-1)}. \end{equation}

Moreover, for all $x \in \big]\frac12,1\big[$ , $t_x >0$ , while for all $x \in \big]0,\frac12\big[$ , $t_x<0$ . In addition, for all $t \in \mathbb{R}$ , $L^{\prime\prime}(t)>0$ as the second derivative of L is given, for all $t \neq 0$ , by

(5.3) \begin{equation} L^{\prime\prime}(t) = \frac{(\textrm{e}^t-1)^2-t^2\textrm{e}^t}{(t(\textrm{e}^t-1))^2}. \end{equation}

Finally, the function L can be extended to a function $L\,:\,\mathbb{C}\setminus2\textrm{i}\pi\mathbb{Z}^*\to\mathbb{C}$ satisfying, for all $v \in \mathbb{R}$ such that $|v|\leq \pi$ , $\textrm{Re}(L(t+\textrm{i}v)) \leq L(t) - C(t) {v^2}/{2}$ , where

\begin{equation*} C(t)= \frac{t^2}{t^2+\pi^2} L^{\prime\prime}(t). \end{equation*}
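In practice, the quantities $t_x$ and $I(x)$ from Lemma 5.1 are easy to evaluate numerically: since $L^{\prime}$ is increasing, a bisection on (5.2) applies. The following sketch (our own illustrative code; the function names are ours) computes $t_x$ and $I(x)$ for $x=0.75$ :

```python
import math

# Sketch: solve L'(t) = x by bisection (L is strictly convex, so L' is
# increasing), then evaluate the rate function I(x) = x*t_x - L(t_x) as in (5.1).
def L(t):
    return math.log((math.exp(t) - 1.0) / t) if t != 0 else 0.0

def Lprime(t):  # (5.2), extended by continuity at t = 0
    if t == 0:
        return 0.5
    return (math.exp(t) * (t - 1) + 1) / (t * (math.exp(t) - 1))

def t_of_x(x, lo=-50.0, hi=50.0):
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Lprime(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.75
tx = t_of_x(x)          # t_x > 0 since x > 1/2
Ix = x * tx - L(tx)     # rate function (5.1)
print(tx, Ix)
```

As expected from the lemma, the computed $t_x$ is positive for $x>\frac12$ and satisfies $L^{\prime}(t_x)=x$ up to machine precision.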

Proof. We saw in the previous section that the first two derivatives (5.2) and (5.3) of L follow from a straightforward calculation. Let us remark that $\lim_{t \to 0}L^{\prime}(t)=\frac12$ and $\lim_{t \to 0}L^{\prime\prime}(t)=\frac{1}{12}$ , which means that L can be extended as a $C^2$ function on $\mathbb{R}$ . The above computation also gives $\lim_{t\to-\infty}L^{\prime}(t)=0$ and $\lim_{t\to+\infty} L^{\prime}(t)=1$ .

We now focus our attention on the complex extension of L. We deduce from (4.41) and (5.3) together with the elementary inequality $\ln(1-x) \leq -x$ that, for all $t\neq 0$ and $|v|\leq \pi$ ,

\begin{equation*} \textrm{Re} (L(t+\textrm{i}v)) = \ln\bigg(\frac{|\textrm{e}^{t+\textrm{i}v}-1|}{|t+\textrm{i}v|}\bigg) \leq L(t) + \frac{1}{2}\ln\bigg(1 - t^2L^{\prime\prime}(t)\frac{v^2}{t^2+\pi^2}\bigg) \leq L(t) - C(t)\frac{v^2}{2}, \end{equation*}

which completes the proof of Lemma 5.1.

We continue with an elementary lemma which can be seen as a slight extension of the usual Laplace method.

Lemma 5.2. Let us consider two real numbers $a< 0 <b$ , and two functions $f\,:\,[a,b]\rightarrow \mathbb{C}$ and $\varphi\,:\,[a,b] \rightarrow \mathbb{C}$ such that, for all $\lambda$ large enough, $\int_{a}^{b}\textrm{e}^{-\lambda\textrm{Re}\,\varphi(u)}|f(u)|\,\textrm{d} u < +\infty$ . Assume that f is continuous at 0 with $f(0)\neq 0$ , that $\varphi$ is $C^2$ at 0 with $\varphi^{\prime}(0)=0$ and $\varphi^{\prime\prime}(0)$ a positive real number, and that there exists a constant $C>0$ such that, for all $u \in [a,b]$ , $\textrm{Re}\,\varphi(u) \geq \textrm{Re}\,\varphi(0) + Cu^2$ . Then,

\begin{equation*} \lim_{\lambda\rightarrow\infty}\sqrt{\lambda}\textrm{e}^{\lambda\varphi(0)}\int_{a}^{b}\textrm{e}^{-\lambda\varphi(u)}f(u)\,\textrm{d} u = \sqrt{2\pi}\frac{f(0)}{\sqrt{\varphi^{\prime\prime}(0)}\,}. \end{equation*}
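Lemma 5.2 can be illustrated numerically on a toy example of our own choosing (not taken from the argument below): take $\varphi(u)=u^2/2+\textrm{i}u^3$ and $f(u)=1/(1+u^2)$ on $[-1,1]$ , so that $\varphi(0)=0$ , $\varphi^{\prime}(0)=0$ , $\varphi^{\prime\prime}(0)=1$ , $\textrm{Re}\,\varphi(u)\geq u^2/2$ , and the limit should be $\sqrt{2\pi}\,f(0)$ :

```python
import cmath, math

# Toy check of Lemma 5.2 (our own example): a complex phase with a genuinely
# imaginary part, integrated by the midpoint rule on [-1, 1].
def integral(lam, a=-1.0, b=1.0, m=100000):
    h = (b - a) / m
    s = 0.0 + 0.0j
    for j in range(m):
        u = a + (j + 0.5) * h   # midpoint rule
        s += cmath.exp(-lam * (u * u / 2 + 1j * u ** 3)) / (1 + u * u) * h
    return s

lam = 20000.0
approx = math.sqrt(lam) * integral(lam)
target = math.sqrt(2 * math.pi)   # f(0) = 1 and phi''(0) = 1
print(abs(approx - target))
```

For $\lambda=20000$ the renormalized integral already agrees with $\sqrt{2\pi}$ to a few parts in a thousand, and its imaginary part is negligible, as the lemma predicts.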

Proof. First of all, we can assume without loss of generality that $\varphi(0)=0$ . We can observe that, for all $\lambda$ large enough,

(5.4) \begin{equation} \int_{a}^{b}\textrm{e}^{-\lambda\varphi(u)}f(0)\,\textrm{d} u = \frac{f(0)}{\sqrt{\lambda}\,} \int_{a\sqrt{\lambda}}^{b\sqrt{\lambda}}\exp\bigg({-}\lambda\varphi\bigg(\frac{u}{\sqrt{\lambda}\,}\bigg)\bigg)\,\textrm{d} u. \end{equation}

However, it follows from the assumptions on the function $\varphi$ that

\begin{equation*}\lim_{\lambda\to+\infty}\lambda\varphi\bigg(\frac{u}{\sqrt{\lambda}\,}\bigg) = \varphi^{\prime\prime}(0)\frac{u^2}{2},\end{equation*}

together with

\begin{equation*}\bigg|\exp\bigg({-}\lambda\varphi\bigg(\frac{u}{\sqrt{\lambda}\,}\bigg)\bigg)\bigg| \leq \exp\!({-}Cu^2).\end{equation*}

Consequently, according to the dominated convergence theorem, we obtain

(5.5) \begin{equation} \lim_{\lambda\to+\infty}\int_{a\sqrt{\lambda}}^{b\sqrt{\lambda}} \exp\bigg({-}\lambda\varphi\bigg(\frac{u}{\sqrt{\lambda}\,}\bigg)\bigg)\,\textrm{d} u = \int_{-\infty}^{+\infty}\exp\bigg({-}\varphi^{\prime\prime}(0)\frac{u^2}{2}\bigg)\,\textrm{d} u = \frac{\sqrt{2\pi}}{\sqrt{\varphi^{\prime\prime}(0)}\,}. \end{equation}

Furthermore, by using the usual Laplace method, we find that

(5.6) \begin{equation} \bigg|\int_{a}^{b}\textrm{e}^{-\lambda\varphi(u)}(f(u)-f(0))\,\textrm{d} u\bigg| \leq \int_{a}^{b}\textrm{e}^{-\lambda\textrm{Re}\,\varphi(u)}|f(u)-f(0)|\,\textrm{d} u = o(\lambda^{-1/2}). \end{equation}

Finally, (5.4), (5.5), and (5.6) allow us to conclude the proof of Lemma 5.2.

Proof of Theorem 3.1. Our goal is to estimate, for all $x \in \big]\frac12,1\big[$ , the probability

\begin{equation*} \mathbb{P}\bigg(\frac{D_n}{n} \geq x \bigg) = \sum_{k=\lceil nx \rceil}^{n-1}\mathbb{P}(D_n=k). \end{equation*}
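For small n this finite sum can be computed exactly, which is convenient for checking the asymptotics below: either by brute force over $\mathcal{S}_n$ , or through the classical Eulerian-number recurrence $A(n,k)=(k+1)A(n-1,k)+(n-k)A(n-1,k-1)$ with $\mathbb{P}(D_n=k)=A(n,k)/n!$ . A minimal sketch (our own illustrative code):

```python
import math
from itertools import permutations

# Exact tail P(D_n/n >= x) for small n, computed two ways: brute force over
# all permutations, and via the Eulerian-number recurrence.
def descents_count(p):
    return sum(1 for i in range(len(p) - 1) if p[i] > p[i + 1])

def eulerian_row(n):
    # returns A(n,0), ..., A(n,n-1)
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0) for k in range(m)]
    return row

n, x = 8, 0.75
brute = sum(1 for p in permutations(range(n)) if descents_count(p) >= n * x)
row = eulerian_row(n)
tail = sum(row[k] for k in range(math.ceil(n * x), n))
print(brute, tail, tail / math.factorial(n))
```

Both computations return the same count, and dividing by n! gives the exact probability on the left-hand side above.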

To do so, we extend the Laplace transform $m_n$ of $D_n$ , defined in (4.1), into an analytic function on the complex plane. For all $t,v\in \mathbb{R}$ ,

\begin{equation*} m_n(t+\textrm{i}v) = \mathbb{E}\big[\textrm{e}^{(t+\textrm{i}v)D_n}\big] = \sum_{k=0}^{n-1}\textrm{e}^{(t+\textrm{i}v)k}\mathbb{P}(D_n=k). \end{equation*}

Therefore, for all $t,v\in \mathbb{R}$ and for all $k \geq 0$ ,

(5.7) \begin{equation} \mathbb{P}(D_n=k) = \textrm{e}^{-tk}\frac{1}{2\pi}\int_{-\pi}^{\pi}m_n(t+\textrm{i}v)\textrm{e}^{-\textrm{i}kv}\,\textrm{d} v. \end{equation}

We can observe that (5.7) is also true for $k \geq n$ , allowing us to recover that $\mathbb{P}(D_n=k)=0$ . Consequently, since $|m_n(t+\textrm{i}v)|\leq\textrm{e}^{tn}$ , it follows from Fubini’s theorem that, for all $t>0$ ,

(5.8) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) = \frac{1}{2\pi}\int_{-\pi}^{\pi}m_n(t+\textrm{i}v)\sum_{k = \lceil nx \rceil }^{+\infty}\textrm{e}^{-k(t+\textrm{i}v)}\,\textrm{d} v. \end{equation}

In the following we choose $t=t_x$ . In particular, $t_x > 0$ since $x> \frac12$ . Then, we deduce from (5.8) that

(5.9) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) = \frac{1}{2\pi}\int_{-\pi}^{\pi}m_n(t_x+\textrm{i}v) \frac{\exp\!({-}t_x\lceil nx\rceil-\textrm{i}\lceil nx\rceil v)}{1-\textrm{e}^{-(t_x+\textrm{i}v)}}\,\textrm{d} v = I_n, \end{equation}

where the integral $I_n$ can be separated into two parts, $I_n=J_n+K_n$ , with

\begin{align*} J_n & = \frac{1}{2\pi}\int_{|v|<\pi-\varepsilon_n}m_n(t_x+\textrm{i}v) \frac{\exp\!({-}t_x\lceil nx\rceil - \textrm{i}\lceil nx\rceil v)}{1-\textrm{e}^{-(t_x+\textrm{i}v)}}\,\textrm{d} v, \\ K_n & = \frac{1}{2\pi}\int_{\pi-\varepsilon_n<|v|<\pi}m_n(t_x+\textrm{i}v) \frac{\exp\!({-}t_x\lceil nx\rceil - \textrm{i}\lceil nx\rceil v)}{1-\textrm{e}^{-(t_x+\textrm{i}v)}}\,\textrm{d} v, \end{align*}

where $\varepsilon_n= n^{-3/4}$ . On the one hand, we obtain from (4.12) that

\begin{align*} |K_n| & \leq \frac{1}{2\pi}\int_{\pi-\varepsilon_n<|v|<\pi}|m_n(t_x)| \frac{\exp\!({-}t_x\lceil nx\rceil)}{|1-\textrm{e}^{-t_x}|}\,\textrm{d} v, \\ & \leq \frac{2\varepsilon_n}{2\pi t_x}\exp\!({-}t_x(\lceil nx\rceil-nx))\exp\!({-}nI(x))(1+|r_n(t_x)|), \\ & \leq \frac{\varepsilon_n}{\pi t_x}\exp\!({-}nI(x))(1+|r_n(t_x)|) \end{align*}

by using (4.12), (5.1), and the fact that $t_x>0$ . Consequently, (4.13) together with the definition of $\varepsilon_n$ ensure that

(5.10) \begin{equation} K_n =o\bigg(\frac{\exp\!({-}nI(x))}{\sqrt{n}}\bigg). \end{equation}

It only remains to evaluate the integral $J_n$ . We deduce from (4.31) and (5.1) that

(5.11) \begin{equation} J_n = \frac{1}{2\pi}\exp\!({-}t_x\{nx\}-nI(x))\int_{|v|<\pi-\varepsilon_n}\exp\!({-}n\varphi(v))g_n(v)\,\textrm{d} v, \end{equation}

where the functions $\varphi$ and g are given, for all $|v| < \pi$ , by

\begin{equation*} \varphi(v) = -(L(t_x+\textrm{i}v)-L(t_x)-\textrm{i}xv), \qquad g_n(v) = \frac{\exp\!({-}\textrm{i}\{nx\}v)}{t_x+\textrm{i}v}(1 + r_n(t_x+\textrm{i}v)). \end{equation*}

Thanks to Lemma 5.1, we have $\varphi(0)=0$ , $\varphi^{\prime}(0)=0$ , and $\varphi^{\prime\prime}(0)=\sigma_x^2$ , which is a positive real number. In addition, there exists a constant $C>0$ such that $\textrm{Re}\,\varphi(v) \geq Cv^2$ . Therefore, via the extended Laplace method given by Lemma 5.2, we obtain

(5.12) \begin{equation} \lim_{n\rightarrow\infty}\sqrt{n}\int_{-\pi}^{\pi}\exp\!({-}n\varphi(v))\frac{1}{t_x+\textrm{i}v}\,\textrm{d} v = \frac{\sqrt{2\pi}}{\sigma_xt_x}. \end{equation}

It follows from (4.32) that there exist positive constants $a_x, b_x, c_x$ such that, for all $|v|\leq \pi-\varepsilon_n$ ,

\begin{equation*} |r_n(t_x+\textrm{i}v)|\leq\frac{a_x}{\varepsilon_n}\bigg(1-b_x\varepsilon_n\bigg)^{n/2} \leq \frac{a_x}{\varepsilon_n}\exp\bigg({-}\frac{b_xn\varepsilon_n}{2}\bigg) \leq c_x\exp\bigg({-}\frac{b_x}{4}n^{1/4}\bigg), \end{equation*}

which ensures that

\begin{equation*} |\exp\!({-}\textrm{i}\{nx\}v)(1 + r_n(t_x+\textrm{i}v))-1| \leq c_x\exp\bigg({-}\frac{b_x}{4}n^{1/4}\bigg) + |v|. \end{equation*}

Then, we obtain

\begin{equation*} \bigg|\int_{-\pi}^{\pi}\exp\!({-}n\varphi(v))\bigg(g_n(v)\mathbf{1}_{|v| < \pi-\varepsilon_n} - \frac{1}{t_x+\textrm{i}v}\bigg)\,\textrm{d} v\bigg| \leq \Lambda_n, \end{equation*}

where

\begin{equation*} \Lambda_n = \int_{-\pi}^{\pi}\exp\!({-}n\textrm{Re}\,\varphi(v))\frac{1}{|t_x+\textrm{i}v|} \bigg(c_x\exp\bigg({-}\frac{b_x}{4}n^{1/4}\bigg) + 2|v|\bigg)\,\textrm{d} v \end{equation*}

since $\mathbf{1}_{|v|\geq\pi -\varepsilon_n} \leq |v|$ . By the standard Laplace method, $\lim_{n\rightarrow\infty}\sqrt{n}\Lambda_n=0$ . Consequently, we deduce from (5.11) and (5.12) that

(5.13) \begin{equation} \lim_{n\rightarrow\infty}\sqrt{n}\exp\!(t_x\{nx\}+nI(x))J_n = \frac{1}{\sigma_xt_x\sqrt{2\pi}\,}. \end{equation}

Finally, (5.9) together with (5.10) and (5.13) clearly lead to (3.1).

6. An alternative proof

Alternative proof of Theorem 3.1. We already saw from (3.3) that the distribution of $D_n$ is nothing other than that of the integer part of the Irwin–Hall distribution, that is, of the sum $S_n=U_1+\cdots+U_n$ of independent and identically distributed random variables sharing the same uniform distribution on [0, 1]. It follows from a direct calculation that, for any $x\in \big]\frac12, 1\big[$ ,

(6.1) \begin{align} \mathbb{P}\bigg(\frac{D_n}{n}\geq x\bigg) = \mathbb{P}( S_n \geq \lceil nx \rceil) & = \mathbb{E}_n \big[\!\exp\!({-}t_xS_n + nL(t_x))\mathbf{1}_{\{{S_n}/{n} \geq x+\varepsilon_n\}}\big] \nonumber \\ & = \exp\!({-}nI(x))\mathbb{E}_n\big[\textrm{e}^{-nt_x({S_n}/{n}-x)}\mathbf{1}_{\{{S_n}/{n} \geq x+\varepsilon_n\}}\big], \end{align}

where $\mathbb{E}_n$ is the expectation under the new probability $\mathbb{P}_n$ given by

(6.2) \begin{equation} \frac{\textrm{d}\mathbb{P}_n}{\textrm{d}\mathbb{P}}= \exp\!(t_xS_n - nL(t_x))\end{equation}

and $\varepsilon_n= \{nx\}/n$ . Let

\begin{equation*}V_n= \frac{\sqrt{n}}{\sigma_x} \bigg( \frac{S_n}{n}-x \bigg),\end{equation*}

and denote by $f_n$ and $\Phi_n$ the probability density function and the characteristic function of $V_n$ under the new probability $\mathbb{P}_n$ , respectively. Let us remark that, under $\mathbb{P}_n$ , we know that $(V_n)$ converges in distribution to the standard Gaussian measure. Using the Parseval identity, we have

(6.3) \begin{align} \mathbb{E}_n\big[\textrm{e}^{-nt_x({S_n}/{n}-x)}\mathbf{1}_{\{{S_n}/{n} \geq x+\varepsilon_n\}}\big] & = \int_{\mathbb{R}}\textrm{e}^{-\sigma_x t_x\sqrt{n}v}\mathbf{1}_{\{v\geq({\sqrt n\varepsilon_n})/{\sigma_x}\}}f_{n}(v)\,\textrm{d} v \nonumber \\ & = \frac{1}{2\pi}\int_{\mathbb{R}}\frac{\textrm{e}^{-(\sigma_x t_x\sqrt n+\textrm{i}v)({\sqrt n \varepsilon_n}/{\sigma_x})}} {\sigma_x t_x\sqrt n + \textrm{i}v}\Phi_n(v)\,\textrm{d} v \nonumber \\ & = \frac{\textrm{e}^{-t_x\{nx\}}}{2\pi\sigma_x \sqrt{n}\,}\int_{\mathbb{R}}\frac{\textrm{e}^{-\textrm{i}({\{nx\}v})/({\sigma_x\sqrt n})}} {t_x+\textrm{i}{v}/({\sigma_x \sqrt n})}\Phi_n(v)\,\textrm{d} v. \end{align}

Recalling that L is also the logarithm of the Laplace transform of the uniform distribution in [0, 1], we obtain from (6.2) that

\begin{align*} \Phi_n(v) & = \mathbb{E}\bigg[\!\exp\bigg(\frac{\textrm{i}vS_n}{\sigma_x\sqrt n} - \frac{\textrm{i}\sqrt n x v}{\sigma_x} + t_xS_n - nL(t_x)\bigg)\bigg] \\ & = \exp\bigg(n\bigg(L\bigg(t_x+ \frac{\textrm{i}v}{\sigma_x\sqrt n\,}\bigg) - L(t_x) - \frac{\textrm{i} x v}{\sqrt n\sigma_x}\bigg)\bigg).\end{align*}

Let A be a positive constant to be chosen later. We can split the integral in (6.3) into two parts: we call $J_n$ the integral over $[{-}A\sigma_x\sqrt{n}, +A\sigma_x\sqrt{n}]$ and $K_n$ the integral over the complementary set. On the one hand, since (4.41) also holds with $\pi^2$ replaced by $A^2$ when the imaginary part is bounded by A, which is the case here as $|v|/(\sigma_x \sqrt{n}) \leq A$ , and since $L^{\prime\prime}(t_x)=\sigma_x^2$ , we get, for all $v \in \mathbb{R}$ ,

(6.4) \begin{align} \bigg|\frac{\textrm{e}^{-\textrm{i}({\{nx\}v})/({\sigma_x\sqrt n})}}{t_x+\textrm{i}{v}/({\sigma_x\sqrt n})}\Phi_n(v) \mathbf{1}_{\{|v| \leq A \sigma_x\sqrt{n}\}}\bigg| & \leq \frac{1}{t_x}\bigg(1-\frac{t_x^2\sigma_x^2}{t_x^2+A^2}\frac{v^2}{\sigma_x^2 n}\bigg)^{n/2} \nonumber \\ & \leq \frac{1}{t_x}\exp\bigg({-}\frac{t_x^2}{t_x^2+A^2}\frac{v^2}{2}\bigg). \end{align}

Then, we deduce from Lebesgue’s dominated convergence theorem that

(6.5) \begin{equation} \lim_{n \rightarrow + \infty} J_n = \int_{\mathbb{R}}\frac{1}{t_x}\exp\bigg({-}\frac{L^{\prime\prime}(t_x)}{\sigma_x^2}\frac{v^2}{2}\bigg)\,\textrm{d} v = \frac{\sqrt{2\pi}}{t_x}.\end{equation}

On the other hand, concerning $K_n$ , since $|v|$ is now large in the integral, we use (4.40) to get

\begin{equation*}\bigg|\frac{\textrm{e}^{-\textrm{i}({\{nx\}v})/({\sigma_x\sqrt n})}}{t_x+\textrm{i}{v}/({\sigma_x\sqrt n})}\Phi_n(v)\bigg| \leq\frac{1}{t_x}\bigg(\frac{t_x^2+4t_x^2{\textrm{e}^{t_x}}/{(\textrm{e}^{t_x}-1)^2}}{t_x^2+{v^2}/({\sigma_x^2 n})}\bigg)^{n/2},\end{equation*}

leading to

(6.6) \begin{equation} |K_n| \leq \frac{2}{t_x}\bigg(t_x^2+4t_x^2\frac{\textrm{e}^{t_x}}{(\textrm{e}^{t_x}-1)^2}\bigg)^{n/2}\sigma_x\sqrt{n} \int_{A}^{+\infty}\frac{1}{(t_x^2 +v^2)^{n/2}}\,\textrm{d} v.\end{equation}

Moreover, for all $n>2$ ,

(6.7) \begin{equation} \int_{A}^{+\infty}\frac{1}{(t_x^2 +v^2)^{n/2}}\,\textrm{d} v \leq \int_{A}^{+\infty}\frac{v}{A}\frac{1}{(t_x^2 +v^2)^{n/2}}\,\textrm{d} v = \frac{1}{(n-2)A} \frac{1}{(t_x^2+ A^2)^{n/2-1}}.\end{equation}

By taking $A^2 = t_x^2+8t_x^2{\textrm{e}^{t_x}}/{(\textrm{e}^{t_x}-1)^2}$ , we obtain from (6.6) and (6.7) that

(6.8) \begin{equation} \lim_{n \rightarrow + \infty} K_n = 0\end{equation}

exponentially fast. Finally, (6.1) together with (6.3), (6.5), and (6.8) allow us to conclude the alternative proof of Theorem 3.1.
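The Irwin–Hall identification used at the start of this proof can also be checked by simulation. A minimal sketch (our own illustrative code), comparing Monte Carlo frequencies of $\lfloor U_1+\cdots+U_5\rfloor$ with the exact Eulerian probabilities $A(5,k)/5!$ :

```python
import math, random

# Sanity check (ours) of the identity behind (3.3): floor(U_1 + ... + U_n)
# has the same law as D_n, whose exact law is A(n,k)/n!.
random.seed(42)
n, trials = 5, 200000
counts = [0] * n
for _ in range(trials):
    s = sum(random.random() for _ in range(n))
    counts[int(s)] += 1

# exact Eulerian probabilities for n = 5: A(5,k) = 1, 26, 66, 26, 1
exact = [a / math.factorial(n) for a in (1, 26, 66, 26, 1)]
empirical = [c / trials for c in counts]
print(empirical, exact)
```

With 200000 samples the empirical frequencies match the exact probabilities to within a few thousandths.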

Remark 6.1. We can use the previous computations to also get a new concentration inequality. More precisely, by using the upper bound (6.4) to get an upper bound for $J_n$ instead of a limit, we are able to prove that

(6.9) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) \leq Q_n(x)\frac{\exp\!({-}nI(x)-\{nx\}t_x)}{\sigma_xt_x\sqrt{2\pi n}}, \end{equation}

where the prefactor can be taken as

\begin{equation*} Q_n(x) = \sqrt{2+\frac{8\textrm{e}^{t_x}}{(\textrm{e}^{t_x}-1)^2}} + \frac{4\sigma_xt_x}{\sqrt{2\pi}}\sqrt{1+\frac{8\textrm{e}^{t_x}}{(\textrm{e}^{t_x}-1)^2}}\frac{\sqrt{n}}{2^{n/2}(n-2)} . \end{equation*}

We can observe that (6.9) is similar to (3.2). Note that the constants in (3.2) as well as in (6.9) are not sharp. It is in fact possible to improve them by sharper splittings of the integrals.

7. Proof of the concentration inequalities

We now proceed to prove the concentration inequalities. Recalling that $x \in \big]\frac12,1\big[$ , which implies that $t_x>0$ , we obtain from the equality in (5.8) that

\begin{align*} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) & = \frac{1}{2\pi}\int_{-\pi}^{\pi}m_n(t_x+\textrm{i}v)\sum_{k=\lceil nx \rceil}^{+\infty}\textrm{e}^{-k(t_x+\textrm{i}v)}\,\textrm{d} v \\ & = \frac{1}{2\pi}\int_{-\pi}^{\pi}m_n(t_x+\textrm{i}v) \frac{\exp\!({-}t_x\lceil nx\rceil - \textrm{i}\lceil nx\rceil v)}{1-\textrm{e}^{-(t_x+\textrm{i}v)}}\,\textrm{d} v \\ & \leq \frac{\exp\!({-}\lceil n x\rceil t_x)}{2\pi} \int_{-\pi}^{\pi}\frac{|m_n(t_x+\textrm{i}v)|}{|1-\textrm{e}^{-(t_x+\textrm{i}v)}|}\,\textrm{d} v.\end{align*}

Consequently, we deduce from the alternative upper bound in (4.33) that

(7.1) \begin{equation} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) \leq \frac{1}{2\pi}\exp\!({-}n(xt_x-L(t_x))-\{nx\}t_x)(A(x)+B(x)),\end{equation}

where

\begin{align*} A(x) & = \int_{-\pi}^{\pi}\frac{1}{\sqrt{t_x^2+ v^2}\,} \exp\bigg({-}n\frac{t_x^2L^{\prime\prime}(t_x)}{t_x^2+\pi^2}\frac{v^2}{2}\bigg)\,\textrm{d} v, \\ B(x) & = \bigg(1+\frac1\pi+\frac{2\sqrt{t_x^2+\pi^2}}{\pi^2-4}\bigg) \int_{-\pi}^{\pi}\exp\bigg({-}n\frac{4t_x^2L^{\prime\prime}(t_x)}{\pi^2(t_x^2+4)}\frac{v^2}{2}\bigg)\,\textrm{d} v.\end{align*}

Hereafter, we recall from (1.4) that $I(x)=x t_x -L(t_x)$ and we write $\sigma_x^2=L^{\prime\prime}(t_x)$ . It follows from a standard Gaussian computation that

(7.2) \begin{equation} \begin{aligned} A(x) & \leq \frac{2\pi}{\sigma_xt_x\sqrt{2\pi n}}\sqrt{\frac{t_x^2+\pi^2}{t_x^2}}, \\ B(x) & \leq \bigg(1+\frac1\pi+\frac{2\sqrt{t_x^2+\pi^2}}{\pi^2-4}\bigg) \frac{\pi^2\sqrt{t_x^2+4}}{\sigma_xt_x\sqrt{2\pi n}}. \end{aligned}\end{equation}

Finally, we find from (7.1) together with (7.2) that

\begin{equation*} \mathbb{P}\bigg(\frac{D_n}{n} \geq x\bigg) \leq P(x)\frac{\exp\!({-}nI(x)-\{nx\}t_x)}{\sigma_xt_x\sqrt{2\pi n}},\end{equation*}

where

\begin{equation*} P(x) = \sqrt{\frac{t_x^2+\pi^2}{t_x^2}} + \bigg(1+\frac1\pi+\frac{2\sqrt{t_x^2+\pi^2}}{\pi^2-4}\bigg)\sqrt{\frac{\pi^2(t_x^2+4)}{4}} ,\end{equation*}

which is exactly what we wanted to prove.
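As a numerical sanity check (our own illustrative code, with the hypothetical choices $n=40$ and $x=0.7$ , so that $\{nx\}=0$ ), the bound just proved can be compared with the exact probability computed from Eulerian numbers:

```python
import math

# Compare the concentration bound P(x)*exp(-n*I(x))/(sigma_x*t_x*sqrt(2*pi*n))
# with the exact probability P(D_n/n >= x) obtained from Eulerian numbers.
def L(t): return math.log((math.exp(t) - 1) / t)

def Lp(t): return (math.exp(t) * (t - 1) + 1) / (t * (math.exp(t) - 1))

def Lpp(t):
    e = math.exp(t)
    return ((e - 1) ** 2 - t * t * e) / (t * (e - 1)) ** 2

def t_of_x(x, lo=1e-8, hi=60.0):
    # bisection on (5.2), valid since L' is increasing
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if Lp(mid) < x else (lo, mid)
    return 0.5 * (lo + hi)

def eulerian_row(n):
    row = [1]
    for m in range(2, n + 1):
        row = [(k + 1) * (row[k] if k < len(row) else 0)
               + (m - k) * (row[k - 1] if k >= 1 else 0) for k in range(m)]
    return row

n, k0 = 40, 28
x = k0 / n                      # x = 0.7, so {nx} = 0 here
tx = t_of_x(x)
Ix = x * tx - L(tx)
sx = math.sqrt(Lpp(tx))
Px = math.sqrt((tx ** 2 + math.pi ** 2) / tx ** 2) \
     + (1 + 1 / math.pi + 2 * math.sqrt(tx ** 2 + math.pi ** 2) / (math.pi ** 2 - 4)) \
     * math.sqrt(math.pi ** 2 * (tx ** 2 + 4) / 4)
bound = Px * math.exp(-n * Ix) / (sx * tx * math.sqrt(2 * math.pi * n))
exact = sum(eulerian_row(n)[k0:]) / math.factorial(n)
print(exact, bound)
```

For these parameters the bound indeed dominates the exact probability, with a gap of roughly one order of magnitude, consistent with the non-sharp constant P(x).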

8. Proofs of the standard results

We now focus our attention on the more standard results concerning the sequence $(D_n)$ such as the quadratic strong law, the law of iterated logarithm, and the functional central limit theorem.

Proof of Proposition 2.1. First of all, we can observe from (2.2) and (2.5) that the martingale $(M_n)$ can be rewritten in the additive form $M_n=\sum_{k=1}^{n-1} (k+1) (\xi_{k+1}-p_k)$ . It follows from the almost sure convergence (1.2) together with (2.6) and the classical Toeplitz lemma that the predictable quadratic variation $\langle M \rangle_n$ of $(M_n)$ satisfies

(8.1) \begin{equation} \lim_{n \rightarrow \infty}\frac{\langle M \rangle_n}{n^3}=\frac{1}{12} \quad \text{a.s.} \end{equation}

Denote by $f_n$ the explosion coefficient associated with $(M_n)$ ,

\begin{equation*} f_n=\frac{\langle M \rangle_{n+1}-\langle M \rangle_n}{\langle M \rangle_{n+1}}=\frac{(n-D_n)(D_n+1)}{\langle M \rangle_{n+1}}. \end{equation*}

We obtain from (1.2) and (8.1) that $\lim_{n \rightarrow \infty}nf_n=3$ a.s., which implies that $f_n$ converges to zero almost surely as n goes to infinity. In addition, we clearly have for all $n\geq 1$ , $| \xi_{n+1} -p_n | \leq 1$ . Consequently, we deduce from the quadratic strong law for martingales given, e.g., by [Reference Bercu1, Theorem 3] that

\begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{\log\langle M\rangle_n}\sum_{k=1}^nf_k\frac{M_k^2}{\langle M\rangle_k}=1 \quad \text{a.s.}, \end{equation*}

which ensures that

(8.2) \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^n\frac{M_k^2}{k^4} = \frac{1}{12} \quad \text{a.s.} \end{equation}

However, it follows from (2.5) that

(8.3) \begin{equation} \frac{M_n^2}{n^4} = \bigg(\frac{D_n}{n}-\frac{1}{2}\bigg)^2 + \frac{1}{n}\bigg(\frac{D_n}{n} - \frac{1}{2}\bigg) + \frac{1}{4n^2}. \end{equation}

Therefore, we obtain once again from (1.2) together with (8.2) and (8.3) that

\begin{equation*} \lim_{n\rightarrow\infty}\frac{1}{\log n}\sum_{k=1}^n\bigg(\frac{D_k}{k}-\frac{1}{2}\bigg)^2 = \frac{1}{12} \quad \text{a.s.}, \end{equation*}

which is exactly the quadratic strong law (2.7). It only remains to prove the law of iterated logarithm given by (2.8). It immediately follows from the law of iterated logarithm for martingales given, e.g., by [Reference Duflo6, Corollary 6.4.25] that

\begin{equation*} \limsup_{n\rightarrow\infty}\bigg(\frac{1}{2\langle M\rangle_n\log\log\langle M\rangle_n}\bigg)^{1/2}M_n = -\liminf_{n\rightarrow\infty}\bigg(\frac{1}{2\langle M\rangle_n\log\log\langle M\rangle_n}\bigg)^{1/2}M_n = 1 \quad \text{a.s.}, \end{equation*}

which leads via (8.1) to

(8.4) \begin{equation} \limsup_{n\rightarrow\infty}\bigg(\frac{1}{2n^3\log\log n}\bigg)^{1/2}M_n = -\liminf_{n\rightarrow\infty}\bigg(\frac{1}{2n^3\log\log n}\bigg)^{1/2}M_n = \frac{1}{\sqrt{12}\,} \quad \text{a.s.} \end{equation}

Finally, we find from (2.5) and (8.4) that

\begin{equation*} \limsup_{n\rightarrow\infty}\bigg(\frac{n}{2\log\log n}\bigg)^{1/2}\bigg(\frac{D_n}{n}-\frac{1}{2}\bigg) = -\liminf_{n\rightarrow\infty}\bigg(\frac{n}{2\log\log n}\bigg)^{1/2}\bigg(\frac{D_n}{n}-\frac{1}{2}\bigg) = \frac{1}{\sqrt{12}\,} \quad \text{a.s.}, \end{equation*}

which completes the proof of Proposition 2.1.
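The quadratic strong law (2.7) can be observed on a single trajectory. The sketch below, which is illustrative only and not part of the proof, simulates the descent process through the conditional descent probability $p_k=(k-D_k)/(k+1)$ recalled in Section 2, and evaluates the logarithmic average in (2.7), whose almost sure limit is $1/12 \approx 0.0833$; a similar experiment illustrates the law of iterated logarithm (2.8). Convergence is slow since the averaging is at logarithmic scale.

```python
import numpy as np

rng = np.random.default_rng(7)
n_max = 200_000
d = 0          # D_1 = 0
qsl_sum = 0.0  # accumulates (D_k/k - 1/2)^2 for k = 1, ..., n_max
for k in range(1, n_max):
    qsl_sum += (d / k - 0.5) ** 2
    # One step of the descent recursion: the new element creates a descent
    # with conditional probability p_k = (k - D_k)/(k + 1).
    d += rng.random() < (k - d) / (k + 1)
qsl_sum += (d / n_max - 0.5) ** 2
ratio = qsl_sum / np.log(n_max)
print(f"(1/log n) sum (D_k/k - 1/2)^2 ~ {ratio:.4f}  (limit 1/12 ~ 0.0833)")
print(f"D_n/n = {d / n_max:.4f}  (limit 1/2)")
```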

Proof of Proposition 2.2. We now proceed to prove the functional central limit theorem given by the distributional convergence (2.10). On the one hand, it follows from (8.1) that, for all $t \geq 0$ ,

(8.5) \begin{equation} \lim_{n\rightarrow\infty}\frac{1}{n^3}\langle M\rangle_{\lfloor nt\rfloor} = \frac{t^{3}}{12} \quad \text{a.s.} \end{equation}

On the other hand, it is quite straightforward to check that $(M_n)$ satisfies Lindeberg’s condition given, for all $t \geq 0$ and for any $\varepsilon>0$ , by

(8.6) \begin{equation} \frac{1}{n^{3}}\sum_{k=2}^{\lfloor nt\rfloor} \mathbb{E}\big[\Delta M_k^2\mathbf{1}_{\{|\Delta M_k|>\varepsilon\sqrt{n^{3}}\}}\mid\mathcal{F}_{k-1}\big] \overset{\mathcal{P}}{\longrightarrow} 0, \end{equation}

where $\Delta M_n=M_n-M_{n-1}=n(\xi_n - p_{n-1})$ . As a matter of fact, we have, for all $t \geq 0$ and for any $\varepsilon>0$ ,

\begin{equation*} \frac{1}{n^{3}}\sum_{k=2}^{\lfloor nt\rfloor} \mathbb{E}\big[\Delta M_k^2\mathbf{1}_{\{|\Delta M_k|>\varepsilon\sqrt{n^{3}}\}}\mid\mathcal{F}_{k-1}\big] \leq \frac{1}{n^{6}\varepsilon^2}\sum_{k=2}^{\lfloor nt\rfloor}\mathbb{E}\big[\Delta M_k^4\mid\mathcal{F}_{k-1}\big]. \end{equation*}

However, we already saw that, for all $n \geq 2$ , $|\Delta M_n|\leq n$ . Consequently, for all $t \geq 0$ and for any $\varepsilon>0$ ,

\begin{equation*} \frac{1}{n^{3}}\sum_{k=2}^{\lfloor nt\rfloor} \mathbb{E}\big[\Delta M_k^2\mathbf{1}_{\{|\Delta M_k|>\varepsilon\sqrt{n^{3}}\}}\mid\mathcal{F}_{k-1}\big] \leq \frac{1}{n^{6}\varepsilon^2}\sum_{k=2}^{\lfloor nt\rfloor}k^4 \leq \frac{t^5}{n\varepsilon^2}, \end{equation*}

which immediately implies (8.6). Therefore, we deduce from (8.5) and (8.6) together with the functional central limit theorem for martingales given, e.g., in [Reference Durrett and Resnick7, Theorem 2.5] that

(8.7) \begin{equation} \bigg(\frac{M_{\lfloor nt\rfloor}}{\sqrt{n^{3}}},t\geq 0\bigg) \Longrightarrow (B_t,t \geq 0), \end{equation}

where $(B_t, t \geq 0)$ is a real-valued centered Gaussian process starting at the origin with covariance given, for all $0<s \leq t$ , by $\mathbb{E}[B_s B_t]= s^{3}/12$ . Finally, (2.5) and (8.7) lead to (2.10) where $W_t=B_t/t^2$ , which is exactly what we wanted to prove.
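The normalization in (8.7) can also be checked by simulation. The sketch below, again an illustration and not part of the proof, runs many independent copies of the martingale $(M_n)$ built from the increments $\Delta M_n = n(\xi_n - p_{n-1})$ with $p_k=(k-D_k)/(k+1)$, and compares the empirical variance of $M_{\lfloor nt\rfloor}/\sqrt{n^3}$ at $t=1$ and $t=1/2$ with the limiting values $t^3/12$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_paths = 1_000, 4_000
d = np.zeros(n_paths)   # D_1 = 0 for every path
m = np.zeros(n_paths)   # M_1 = 0 for every path
m_half = None           # snapshot of M at time n/2
for k in range(1, n):
    p = (k - d) / (k + 1)          # conditional descent probability p_k
    xi = rng.random(n_paths) < p   # xi_{k+1}
    m += (k + 1) * (xi - p)        # increment (k+1)(xi_{k+1} - p_k) of (M_n)
    d += xi
    if k + 1 == n // 2:
        m_half = m.copy()          # M_{n/2}
v1 = np.var(m / n**1.5)            # estimate of Var(B_1) = 1/12
v_half = np.var(m_half / n**1.5)   # estimate of Var(B_{1/2}) = (1/2)^3/12
print(f"Var(M_n / n^(3/2))     ~ {v1:.4f}     (limit 1/12 ~ 0.0833)")
print(f"Var(M_(n/2) / n^(3/2)) ~ {v_half:.4f}   (limit 1/96 ~ 0.0104)")
```

Both empirical variances match the covariance structure $\mathbb{E}[B_sB_t]=s^3/12$, $0<s\leq t$, of the limiting Gaussian process.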

Acknowledgements

The authors would like to thank the two anonymous reviewers and the associate editor for their careful reading and helpful comments which helped to improve the paper substantially.

Funding information

There are no funding bodies to thank relating to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Bercu, B. (2004). On the convergence of moments in the almost sure central limit theorem for martingales with statistical applications. Stoch. Process. Appl. 111, 157–173.
Bercu, B., Delyon, B. and Rio, E. (2015). Concentration Inequalities for Sums and Martingales. Springer, New York.
Bóna, M. (2012). Combinatorics of Permutations, 2nd edn. CRC Press, Boca Raton, FL.
Bryc, W., Minda, D. and Sethuraman, S. (2009). Large deviations for the leaves in some random trees. Adv. Appl. Prob. 41, 845–873.
Chatterjee, S. and Diaconis, P. (2017). A central limit theorem for a new statistic on permutations. Indian J. Pure Appl. Math. 48, 561–573.
Duflo, M. (1997). Random Iterative Models (Appl. Math. (New York) 34). Springer, Berlin.
Durrett, R. and Resnick, S. I. (1978). Functional limit theorems for dependent variables. Ann. Prob. 6, 829–846.
Flajolet, P., Gabarró, J. and Pekari, H. (2005). Analytic urns. Ann. Prob. 33, 1200–1233.
Freedman, D. A. (1965). Bernard Friedman’s urn. Ann. Math. Statist. 36, 956–970.
Friedman, B. (1949). A simple urn model. Commun. Pure Appl. Math. 2, 59–70.
Garet, O. (2021). A central limit theorem for the number of descents and some urn models. Markov Process. Relat. Fields 27, 789–801.
Gnedin, A. and Olshanski, G. (2006). The boundary of the Eulerian number triangle. Mosc. Math. J. 6, 461–475, 587.
Gouet, R. (1993). Martingale functional central limit theorems for a generalized Pólya urn. Ann. Prob. 21, 1624–1639.
Houdré, C. and Restrepo, R. (2010). A probabilistic approach to the asymptotics of the length of the longest alternating subsequence. Electron. J. Combinatorics 17, 168.
Özdemir, A. (2022). Martingales and descent statistics. Adv. Appl. Math. 140, 102395.
Stanley, R. P. (2008). Longest alternating subsequences of permutations. Michigan Math. J. 57, 675–687.
Tanny, S. (1973). A probabilistic interpretation of Eulerian numbers. Duke Math. J. 40, 717–722.
Widom, H. (2006). On the limiting distribution for the length of the longest alternating sequence in a random permutation. Electron. J. Combinatorics 13, 25.
Zhang, Y. (2015). On the number of leaves in a random recursive tree. Braz. J. Prob. Statist. 29, 897–908.
Zwillinger, D. (1989). Handbook of Differential Equations. Academic Press, Boston, MA.