
Log-concavity and relative log-concave ordering of compound distributions

Published online by Cambridge University Press:  29 January 2024

Wanwan Xia*
Affiliation:
School of Physical and Mathematical Sciences, Nanjing Tech University, Nanjing, Jiangsu, China
Wenhua Lv
Affiliation:
School of Mathematical Sciences, Chuzhou University, Chuzhou, Anhui, China
Corresponding author: Wanwan Xia; Emails: 201910006533@njtech.edu.cn; whl@chzu.edu.cn

Abstract

In this paper, we compare the entropy of an original distribution and that of its corresponding compound distribution. Several results are established based on the convex order and the relative log-concave order. Necessary and sufficient conditions for a compound distribution to be log-concave are also discussed, including the compound geometric, compound negative binomial, and compound binomial distributions.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press.

1. Introduction

The entropy H(X) of a random variable X measures the uncertainty of X. In this paper, we only consider discrete random variables. Let X be a discrete random variable with probability mass function (pmf) $\{x_1,\ldots,x_n;p_1,\ldots,p_n\}$, that is,

\begin{equation*} p_i=\mathbb{P}(X=x_i), ~~~i=1,\ldots,n, \end{equation*}

with $p_i\ge0$, $i=1,\ldots,n$, and $\sum_{i=1}^np_i=1$. Here, n may be finite or infinite. The Shannon entropy of X is defined by [Reference Cover and Thomas1]:

\begin{equation*} H(X)=-\sum^n_{i=1} p_i \log p_i. \end{equation*}

The comparisons between distributions with respect to Shannon entropy are regarded as a measure of variability or dispersion. In insurance risk theory, similar comparisons are often established for compound distributions. The random variables corresponding to compound distributions can be written as $S=\sum^M_{i=1} X_i$, and they are extensively used in applied settings. For example, in [Reference Panjer3], S models the total claim amount, M the number of claims, and the Xi the individual claim sizes.
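To make these quantities concrete, the following Python sketch (our illustration; all function names are ours) computes the Shannon entropy of a pmf and builds the pmf of a compound sum by brute force, as the mixture $\sum_k h_k f^{(k)}$ of convolution powers of f.

```python
import math

def shannon_entropy(pmf):
    """H(X) = -sum_i p_i log p_i (natural log); zero-probability terms contribute 0."""
    return -sum(p * math.log(p) for p in pmf if p > 0)

def convolve(a, b):
    """Convolution of two pmfs on {0, 1, 2, ...}, given as lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def compound_pmf(h, f):
    """Pmf of S = X_1 + ... + X_M as the mixture sum_k h_k f^(k),
    where h is the (finite or truncated) pmf of M and f the pmf of the X_i."""
    out = {}
    fk = [1.0]                       # f^(0): point mass at 0
    for hk in h:
        for n, prob in enumerate(fk):
            out[n] = out.get(n, 0.0) + hk * prob
        fk = convolve(fk, f)         # next convolution power
    return [out.get(n, 0.0) for n in range(max(out) + 1)]

f = [1/3, 1/3, 1/3]                  # claim-size pmf
h = [1/4, 1/2, 1/4]                  # claim-count pmf, M ~ B(2, 1/2)
s = compound_pmf(h, f)               # total-claim pmf
print(shannon_entropy(f), shannon_entropy(s))
```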

Our results are closely related to those of [Reference Yu8], who established entropy comparison results for compound distributions of random variables taking nonnegative integer values, based on convex ordering and log-concavity. We recall the following definitions. First, denote $\mathbb{N} =\{0,1, 2, \ldots\}$.

Definition 1.1. A sequence $\{h_n,\,n\in\mathbb{N}\}$ is said to be log-concave (LC) if $h_n\ge0$ for $n\in\mathbb{N}$, and

\begin{equation*} h^2_n \ge h_{n+1} h_{n-1},\quad n\ge 1. \end{equation*}

A log-concave sequence $\{h_n, n\in\mathbb{N}\}$ has no internal zeros, i.e., there do not exist $i \lt j \lt k$ such that $h_ih_k\ne 0$ and $h_j=0$. A random variable X taking values in $\mathbb{N}$ is said to be log-concave if its pmf $\{f_n, n\in\mathbb{N}\}$ is log-concave.

Definition 1.2. For an integer $n\ge 2$, a positive sequence $\{h_i,\,0\leq i\leq n\}$ is called ultra-log-concave of order n (ULC(n)) if

\begin{equation*} \frac{h_{i+1}^2}{\binom{n}{i+1}^2}\ge\frac{h_{i}}{\binom{n}{i}}\frac{h_{i+2}}{\binom{n}{i+2}},\quad 0\le i\le n-2. \end{equation*}

A random variable X taking values in $\mathbb{N}$ is said to be ULC(n), if its pmf $\{f_i,\,0\le i\le n\}$ is ULC(n). Equivalently, X is ULC(n) if the sequence $\{f_i/\binom{n}{i},\ 0\le i\le n\}$ is log-concave.

Definition 1.3. A random variable X taking values in $\mathbb{N}$ is said to be ultra-log-concave (ULC), if the support of X is an interval on $\mathbb{N}$, and its pmf $\{\,f_i,\ i\in\mathbb{N}\}$ satisfies:

\begin{equation*} (i+1)f_{i+1}^2\geq(i+2)f_{i}f_{i+2},\quad i\ge 0. \end{equation*}

Equivalently, X is ULC if the sequence $\{i!f_i,\ i\in\mathbb{N}\}$ is log-concave.
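Definitions 1.1–1.3 are straightforward to test numerically on finite supports. A minimal sketch (ours; the tolerance `tol` guards against floating-point noise):

```python
import math

def is_log_concave(h, tol=1e-12):
    """Definition 1.1: h_n^2 >= h_{n-1} h_{n+1} for interior n, and no internal zeros."""
    pos = [i for i, v in enumerate(h) if v > 0]
    if pos and pos[-1] - pos[0] + 1 != len(pos):
        return False                       # internal zero
    return all(h[n] ** 2 >= h[n - 1] * h[n + 1] - tol for n in range(1, len(h) - 1))

def is_ulc(h, tol=1e-12):
    """Definition 1.3 (equivalent form): {i! h_i} is log-concave."""
    return is_log_concave([math.factorial(i) * v for i, v in enumerate(h)], tol)

def is_ulc_n(h, n, tol=1e-12):
    """Definition 1.2 (equivalent form): {h_i / C(n, i)} is log-concave."""
    return is_log_concave([v / math.comb(n, i) for i, v in enumerate(h)], tol)

u = [1/3, 1/3, 1/3]                        # uniform pmf on {0, 1, 2}
print(is_log_concave(u))                   # True:  u is LC
print(is_ulc(u), is_ulc_n(u, 2))           # False False: u is neither ULC nor ULC(2)
```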

In fact, both ULC(n) and ULC can be defined in terms of the relative log-concave order ([Reference Whitt5]).

Definition 1.4. Let f and g be two pmfs on $\mathbb{N}$. Then f is said to be log-concave relative to g, written as $f\leq_{\rm lc}g$, if

  1. (1) the supports of f and g are both intervals on $\mathbb{N}$;

  2. (2) ${\rm supp}(\,f)\subseteq {\rm supp}(g)$;

  3. (3) $f_i/g_i$ is log-concave in $i\in {\rm supp}(\,f)$.

From the above definitions, $X\in{\rm ULC}(n)$ is equivalent to $X\leq_{\rm lc} B(n,p)$ for any $p\in (0,1)$, and $X\in{\rm ULC}$ is equivalent to $X\le_{\rm lc} {\rm Poi}(\lambda)$ for any λ > 0. We also have the inclusion relationships ${\rm ULC}(1)\subseteq{\rm ULC}(2)\subseteq\cdots\subseteq{\rm ULC}\subseteq{\rm LC}$.
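The relative log-concave order can be checked the same way on finite supports; a small sketch (ours), exercised on the uniform pmf against the $b(2,1/2)$ pmf:

```python
def leq_lc(f, g, tol=1e-12):
    """Definition 1.4: interval supports, supp(f) contained in supp(g),
    and f_i / g_i log-concave in i on supp(f)."""
    sf = [i for i, v in enumerate(f) if v > 0]
    sg = [i for i, v in enumerate(g) if v > 0]
    interval = lambda s: s[-1] - s[0] + 1 == len(s)
    if not (interval(sf) and interval(sg) and set(sf) <= set(sg)):
        return False
    r = [f[i] / g[i] for i in sf]
    return all(r[n] ** 2 >= r[n - 1] * r[n + 1] - tol for n in range(1, len(r) - 1))

u, b = [1/3, 1/3, 1/3], [1/4, 1/2, 1/4]
print(leq_lc(u, b))   # False: the uniform pmf is not ULC(2)
print(leq_lc(b, u))   # True:  b(2, 1/2) <=_lc uniform
```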

Definition 1.5. For random variables X and Y on $\mathbb{N}$, X is smaller than Y in the convex order, written as $X\leq_{\rm cx}Y$, if $\mathbb{E}[\psi(X)]\leq\mathbb{E}[\psi(Y)]$ for all convex functions ψ on $\mathbb{N}$, provided the expectations exist.

The convex order compares the spread or variability of two distributions. In fact, if $X\leq_{\rm cx}Y$ and both X and Y have finite means, then $\mathbb{E} [X] =\mathbb{E} [Y]$ and ${\rm Var}(X)\le {\rm Var}(Y)$. Further properties of the convex order can be found in [Reference Shaked and Shanthikumar4]. Since entropy ordering also compares the variability of distributions, it is reasonable to expect some connection with convex ordering. For compound distributions, [Reference Yu7]–[Reference Yu9] drew the following conclusions.

Theorem 1.1. Suppose X and Y are two absolutely continuous or nonnegative integer-valued random variables.

  1. (1) [Reference Yu8],[Reference Yu9] If $X\le_{{\rm lc}}Y$ and $\mathbb{E} [X] =\mathbb{E} [Y]$, then $X\le_{{\rm cx}}Y$.

  2. (2) [Reference Yu7],[Reference Yu8] If $X\le_{{\rm cx}}Y$, and Y has a log-concave pdf or pmf, then $H(X)\le H(Y)$.

Theorem 1.1(2) provides a new method for proving entropy inequalities: compared with a direct proof, it is often easier to establish the convex order and to verify the log-concavity condition. Many conclusions in this paper rely on Theorem 1.1.

The contributions and the outline of this paper are as follows.

  1. (i) In Section 2, based on Theorem 1.1, we give a direct proof of Lemma 3(2) in [Reference Yu6]. Our proof is different: [Reference Yu6] constructed a Markov chain whose limiting distribution is a Binomial distribution.

  2. (ii) It is interesting to compare the original distribution and its corresponding compound distribution in the sense of the log-concave order. [Reference Yu8] considered the Poisson and Binomial distributions. A similar result is established in Section 3 for the Negative Binomial distribution.

  3. (iii) [Reference Yu8] obtained the necessary and sufficient conditions for a compound Poisson distribution to be log-concave. In Section 4, we establish necessary and sufficient conditions under which a compound Negative Binomial distribution or a compound Binomial distribution is log-concave.

  4. (iv) The preservation of convex order under compound operation was investigated by [Reference Yu8]. In Section 5, we consider whether the log-concave order is preserved under compound operation.

2. The entropy of ULC distribution

Let $\mathscr{F}$ be the set of all pmfs of nonnegative integer-valued random variables. Consider two operators Sp and Tn defined on $\mathscr{F}$ [Reference Yu6]:

  1. (1) For all $p\in (0,1)$, Sp maps pmf $f=\{\,f_i, i\in\mathbb{N}\}$ to another pmf $g=\{g_i, i\in\mathbb{N}\}$, where

    \begin{equation*} g_i=p f_{i-1} + (1-p) f_i, \quad i\ge 0, \end{equation*}

    and define $f_i=0$ for all i < 0. If f is the pmf of X and $Z\sim B(1,p)$ is independent of X, then $S_p f$ is the pmf of X + Z.

  2. (2) For all n > 1, Tn maps pmf $g=\{g_i, i=0,\ldots, n\}$ to another pmf $\,f=\{f_i, i=0, \ldots, n-1\}$, where

    (2.1)\begin{equation} f_i= \frac {n-i}{n} g_i + \frac {i+1}{n} g_{i+1},\quad i=0,\ldots, n-1. \end{equation}

    Let g be the pmf of Y. Given Y, consider the hypergeometric distribution with parameters $(n, Y, n-1)$: suppose a box contains n balls, of which Y are white; draw n − 1 balls at random without replacement, and let X be the number of white balls drawn. Then $T_n g$ is the pmf of X.

The operators Sp and Tn satisfy the following properties [Reference Yu6]; a numerical check of property (1) is given after the list:

  1. (1) If $b_{n,p}=\{b(i,n,p), i=0,\ldots, n\}$ is the pmf of $B(n,p)$, then

    \begin{equation*} S_p b_{n,p}=b_{n+1,p},\quad T_{n+1} b_{n+1,p}=b_{n,p},\quad n\ge 1. \end{equation*}

    So, $T_{n+1}\circ S_p b_{n,p} =b_{n,p}$.

  2. (2) ${\rm ULC}(n)$ denotes the class of all pmfs that are ultra-log-concave of order n. Moreover, $S_p \,f\in {\rm ULC}(n+1)$ and $T_n\, f\in {\rm ULC}(n-1)$ for all $f\in {\rm ULC}(n)$.
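The promised check of property (1) (our sketch; the operators are implemented directly from their definitions):

```python
from math import comb

def S_p(f, p):
    """S_p f: pmf of X + Z with Z ~ B(1, p) independent of X."""
    g = [0.0] * (len(f) + 1)
    for i, fi in enumerate(f):
        g[i] += (1 - p) * fi
        g[i + 1] += p * fi
    return g

def T(g):
    """T_n g with n = len(g) - 1: the hypergeometric step (2.1)."""
    n = len(g) - 1
    return [((n - i) * g[i] + (i + 1) * g[i + 1]) / n for i in range(n)]

n, p = 4, 0.3
b = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]   # b(n, p)
assert all(abs(x - y) < 1e-12 for x, y in zip(T(S_p(b, p)), b))        # T_{n+1} S_p b = b
```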

[Reference Yu6] proved that if X has pmf $f\in {\rm ULC}(n)$ with $\mathbb{E} [X] =np$, $p\in (0,1)$, if $Z\sim B(1,p)$ is independent of X, and if Y has a hypergeometric distribution with parameters $(n+1, X+Z, n)$, then $H(X)\le H(Y)$. The proof uses the properties of the operators Sp and Tn: it constructs a Markov chain whose limiting distribution is $B(n, p)$ and shows that the entropy never decreases along the iterations of this Markov chain. Here, we give another proof of the non-decreasing property of the corresponding entropy, based on Theorem 1.1.

Proposition 2.1. Suppose that X has a pmf $f\in {\rm ULC}(n)$, $\mathbb{E} [X]=np$, $p\in (0,1)$, and $Z\sim B(1,p)$, where Z is independent of X. Let Y have a hypergeometric distribution with parameters $(n+1, X+Z, n)$. Then $X\le_{\rm cx} Y$.

Proof. Define the difference operator $\Delta h_i=h_i-h_{i-1}$ for any sequence $\{h_i\}$, and let $g=\{g_i, i=0,\ldots, n\}$ denote the pmf of Y. By (2.1), we have

(2.2)\begin{equation} g_i =\frac {n+1-i}{n+1} (\,p f_{i-1} + q \,f_i) + \frac {i+1}{n+1} (\,p f_i + q f_{i+1}),\quad q=1-p. \end{equation}

Hence,

\begin{equation*} f_i-g_i =\frac {1}{n+1} \Delta \big [ p(n-i) f_i -q(i+1) f_{i+1}\big ] =\Delta (u_i h_i), \end{equation*}

where

\begin{equation*} u_i=p-q \frac {(i+1) f_{i+1}}{(n-i) f_i},\quad h_i= \frac {(n-i)f_i}{n+1},\quad 0\le i \lt n, \end{equation*}

and $u_i h_i=0$, $i=-1, n$. Therefore, for any convex function sequence $\psi=\{\psi_i\}$, we have

\begin{equation*} \mathbb{E} [\psi(Y)]-\mathbb{E} [\psi(X)] = \sum^n_{i=0} \psi_i (g_i-f_i)= - \sum^n_{i=0}\psi_i \Delta (u_i h_i)=\sum^{n-1}_{i=0} (\psi_{i+1}-\psi_i) u_i h_i. \end{equation*}

Since $h_i\ge 0$ and $\sum^n_{i=0} h_i=nq/(n+1)$, h can be normalized to a pmf. On the other hand, $f\in {\rm ULC}(n)$ means that ui is increasing in $i\in \{0, \ldots, n-1\}$; the convexity of ψ means that $\psi_{i+1}-\psi_i$ is increasing in i; and $\sum^{n-1}_{i=0} u_i h_i=0$. Hence, by Chebyshev's sum inequality, $\sum^{n-1}_{i=0} (\psi_{i+1}-\psi_i) u_i h_i\ge 0$, that is, $\mathbb{E} [\psi(Y)]\ge \mathbb{E} [\psi(X)]$.

Under the assumptions of Proposition 2.1, $Y\in {\rm ULC}(n)$, so the corresponding pmf is log-concave. Hence, by Theorem 1.1(2), $X\le_{\rm cx} Y$ leads to $H(X)\le H(Y)$. A numerical check of Proposition 2.1 follows.
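In the sketch below (ours; the test pmf is arbitrary), the pmf of Y is computed from (2.2), and the convex order is verified through equal means plus the stop-loss inequalities $\mathbb{E}[(X-c)_+]\le\mathbb{E}[(Y-c)_+]$ at integer thresholds, which characterize $\le_{\rm cx}$ for integer-valued random variables with finite support.

```python
def step_pmf(f, p):
    """Pmf of Y in Proposition 2.1, computed from (2.2) with n = len(f) - 1."""
    n, q = len(f) - 1, 1 - p
    fx = lambda i: f[i] if 0 <= i <= n else 0.0
    return [(n + 1 - i) / (n + 1) * (p * fx(i - 1) + q * fx(i))
            + (i + 1) / (n + 1) * (p * fx(i) + q * fx(i + 1))
            for i in range(n + 1)]

def leq_cx(f, g, tol=1e-9):
    """X <=_cx Y on finite supports: equal means and all stop-loss bounds."""
    mean = lambda h: sum(i * v for i, v in enumerate(h))
    if abs(mean(f) - mean(g)) > tol:
        return False
    stoploss = lambda h, c: sum(max(i - c, 0) * v for i, v in enumerate(h))
    return all(stoploss(f, c) <= stoploss(g, c) + tol
               for c in range(max(len(f), len(g))))

f = [0.3, 0.5, 0.2]                      # a ULC(2) pmf with mean 0.9 = 2 * 0.45
print(leq_cx(f, step_pmf(f, 0.45)))      # True, as Proposition 2.1 predicts
```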

3. Log-concave ordering between the original distribution and its corresponding compound distribution

Suppose that S is a nonnegative integer-valued random variable, defined by:

\begin{equation*} S=\sum^M_{i=1} X_i, \end{equation*}

where $\{X_n, n\ge 1\}$ is a sequence of independent and identically distributed (iid) nonnegative integer-valued random variables, and M is a counting random variable independent of all Xi’s. Let f and h be the pmfs of Xi and M, respectively. The distribution of S is called a compound distribution, with its pmf denoted by $c_h(\,f)$. If $M\sim{\rm Poi}(\lambda)$, $B(n,p)$ or ${\rm NB}(\alpha, p)$, then the distribution of S is called a compound Poisson, compound Binomial, or compound Negative Binomial distribution, respectively. For α = 1, ${\rm NB}(1,p)$ reduces to the geometric distribution ${\rm Geo}(p)$.

In the following, the pmfs of Poi$(\lambda)$, ${\rm B}(n,p)$, ${\rm NB}(\alpha, p)$ and ${\rm Geo}(p)$ distributions are denoted by poi$(\lambda)$, ${\rm b}(n,p)$, ${\rm nb}(\alpha, p)$ and ${\rm geo}(p)$, respectively. Similarly, the pmfs of the corresponding compound distributions are denoted by $c_{{\rm poi}(\lambda)}\ (\,f)$, $c_{{\rm b}(n,p)}\ (\,f)$, $c_{{\rm nb}(\alpha,p)}\ (\,f)$ and $c_{{\rm geo}(p)}\ (\,f)$. Denote by µf the mean of a distribution with pmf f. Then the means of the distributions with pmf $c_{{\rm poi}(\lambda)}\ (\,f)$, $c_{{\rm b}(n,p)}\ (\,f)$, $c_{{\rm nb}(\alpha,p)}\ (\,f)$ are $\lambda\mu_f$, $np\mu_f$ and $\alpha (1-p)\mu_f/p$, respectively.

Proposition 3.1. ([Reference Yu8])

Let f be a pmf on $\mathbb{N}$.

  1. (1) If the compound Poisson distribution $c_{{\rm poi}(\lambda)}\ (\,f)$ is non-degenerate and log-concave, and $\lambda, \lambda^\ast \gt 0$, then ${\rm poi}(\lambda^\ast) \le_{\rm lc} c_{{\rm poi}(\lambda)}\ (\,f)$.

  2. (2) If the compound Binomial distribution $c_{{\rm b}(n,p)}\ (\,f)$ is non-degenerate and log-concave, and $p,p^\ast\in (0,1)$, then ${\rm b}(n,p^\ast) \le_{\rm lc} c_{{\rm b}(n,p)}(\,f)$.

By Theorem 1.1 and Proposition 3.1, we have

(3.1)\begin{equation} \lambda^\ast = \lambda \mu_f \Longrightarrow H({\rm poi}(\lambda^\ast))\leq H(c_{{\rm poi}(\lambda)}(\,f)), \end{equation}
(3.2)\begin{equation} p^\ast = p \mu_f \Longrightarrow H({\rm b}(n,p^\ast))\leq H(c_{{\rm b}(n,p)}(\,f)). \end{equation}

Here, $\lambda^\ast=\lambda\mu_f$ ensures that the two pmfs ${\rm poi}(\lambda^\ast)$ and $c_{{\rm poi}(\lambda)}(\,f)$ have the same mean. Similarly, $p^\ast=p\mu_f$ ensures that the two pmfs ${\rm b}(n, p^\ast)$ and $c_{{\rm b}(n, p)}(\,f)$ have the same mean. Suppose that $M\sim {\rm B}(n,p)$, $M^\ast\sim {\rm B}(n,p^\ast)$, and $\{X_i, i\geq 1\}$ are iid random variables with a common pmf f. Assume that all random variables considered here and below are independent of each other. The explanation of (3.2) is as follows; the explanation of (3.1) can be given similarly. We consider the following two cases:

  1. (i) Suppose $\mu_f\ge 1$. Then $p^\ast\ge p$ since $p^\ast=p\mu_f$ in (3.2). Let $\{I_i, i\geq 1\}$ be a sequence of iid Bernoulli random variables with $\mathbb{P}(I_i=1)= p/p^\ast\in(0,1]$. Since $\sum^{M^\ast}_{i=1} I_i$ has the same distribution as M, it follows that the pmf of $\sum_{i=1}^{M^\ast}I_iX_i$ is $c_{{\rm b}(n,p^\ast)}(\tilde{\,f})=c_{{\rm b}(n,p)}(\,f)$, where $\tilde{f}$ is the pmf of $I_iX_i$, given by:

    \begin{equation*} \tilde{f}=\frac{p}{p^\ast}f + \left (1-\frac{p}{p^\ast}\right )\delta_0, \end{equation*}

    where $\delta_0$ is the pmf of the degenerate random variable Z = 0. Notice that $\mu_{\tilde{f}}=p\mu_f/p^\ast=1$, and the uncertainty of $\sum_{i=1}^{M^\ast}I_iX_i$ is obviously stronger than that of $M^\ast$. Thus, (3.2) holds.

  2. (ii) Suppose $\mu_f\in(0,1)$. In this case, $p^\ast \lt p$. Let $\{I_i, i\geq 1\}$ be a sequence of iid Bernoulli random variables with $\mathbb{P}(I_i=1)=p^\ast/p\in(0,1]$. Then $\sum_{i=1}^{M}I_i\sim {\rm B}(n,p^\ast)$, and the pmf of $\sum_{i=1}^{M}X_i$ is $c_{{\rm b}(n,p)}(\,f)$. Note that $\mathbb{E} [I_i] = \mathbb{E} [X_i]= p^\ast/p$ and $I_i\le_{{\rm cx}}X_i$ for each i. Thus, the uncertainty of $\sum_{i=1}^{M}X_i$ is obviously stronger than that of $\sum_{i=1}^{M}I_i$, which gives (3.2).

To obtain a similar result for the compound Negative Binomial distribution, we need a recursive expression for the corresponding pmf.

Lemma 3.2. Denote the pmf of $c_{{\rm nb}(\alpha,p)}(\,f)$ as g. Then

(3.3)\begin{equation} (n+1) g_{n+1}= \frac{q}{1-qf_0} \sum^n_{j=0} [(\alpha-1) j + n+\alpha] f_{j+1} g_{n-j},\quad n\ge 0. \end{equation}

Proof. In [Reference Panjer3], it is assumed that $f_0=0$. We consider a more general situation, $f_0\ge 0$. The pmf of ${\rm NB}(\alpha,p)$ is denoted by $\{p_n, n\ge 0\}$, that is

\begin{equation*} p_n =\binom{\alpha+n-1}{n} p^\alpha q^n, \quad n\ge 0,\ q=1-p, \end{equation*}

so

\begin{equation*} \frac {p_n}{p_{n-1}}= a+\frac {b}{n}, \quad n\ge 1, \end{equation*}

where a = q, $b=(\alpha-1) q$. Hence, for all $n\ge 0$,

(3.4)\begin{eqnarray} \sum^{n+1}_{j=0} \left (a+\frac {bj}{n+1}\right ) f_{j} g_{n+1-j}=q f_0 g_{n+1}+ \frac {q}{n+1} \sum^n_{j=0} [(\alpha-1)j +n+\alpha] f_{j+1} g_{n-j}. \end{eqnarray}

Denote by $f^{(k)}=\{f^{(k)}_i, i\in\mathbb{N}\}$ the pmf of $\sum^k_{i=1} X_i$, where $X_1, \ldots, X_k$ are iid random variables with a common pmf f. On the other hand, by using $\sum^{n+1}_{j=0} j f_j f^{(k)}_{n+1-j}=\frac {n+1}{k+1} f_{n+1}^{(k+1)}$, we have

(3.5)\begin{align} \sum^{n+1}_{j=0} \left (a+\frac {bj}{n+1}\right ) f_{j} g_{n+1-j} &= \sum^{n+1}_{j=0} \left (a+\frac {bj}{n+1}\right ) f_j \sum^\infty_{k=0} p_k f^{(k)}_{n+1-j} \nonumber \\ &= \sum^\infty_{k=0} p_k \sum^{n+1}_{j=0} \left (a+\frac {bj}{n+1}\right ) f_j f^{(k)}_{n+1-j} \nonumber \\ &= \sum^\infty_{k=0} p_k \left (a f_{n+1}^{(k+1)}+ \frac {b}{n+1} \sum^{n+1}_{j=0} j f_j f^{(k)}_{n+1-j}\right ) \nonumber \\ &= \sum^\infty_{k=0} p_k \left (a+\frac {b}{k+1}\right ) f_{n+1}^{(k+1)} \nonumber \\ &= \sum^\infty_{k=1} p_k f_{n+1}^{(k)} \nonumber \\ &= p_0 f^{(0)}_{n+1}+ \sum^\infty_{k=1} p_k f_{n+1}^{(k)}\qquad [f^{(0)}_\ell=0,\ \ell\ge 1] \nonumber \\ &= g_{n+1}. \end{align}

By (3.4) and (3.5), we conclude (3.3).
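The recursion (3.3) is convenient computationally. The sketch below (ours; parameter values are arbitrary) implements it and checks it against the brute-force mixture $\sum_k p_k f^{(k)}$, truncated at a large k:

```python
def compound_nb(alpha, p, f, nmax):
    """c_{nb(alpha,p)}(f) on {0,...,nmax} via the recursion (3.3);
    g_0 = sum_k p_k f_0^k = (p / (1 - q f_0))^alpha."""
    q = 1 - p
    eta = q / (1 - q * f[0])
    fx = lambda j: f[j] if j < len(f) else 0.0
    g = [(p / (1 - q * f[0])) ** alpha]
    for n in range(nmax):
        s = sum(((alpha - 1) * j + n + alpha) * fx(j + 1) * g[n - j]
                for j in range(n + 1))
        g.append(eta * s / (n + 1))
    return g

def convolve(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

alpha, p, f = 0.5, 0.6, [0.2, 0.5, 0.3]
g = compound_nb(alpha, p, f, 8)

# Brute force: NB weights p_k = C(alpha+k-1, k) p^alpha q^k, mixed over f^(k).
pk, fk, brute = [p ** alpha], [1.0], [0.0] * 9
for k in range(1, 60):
    pk.append(pk[-1] * (1 - p) * (alpha + k - 1) / k)
for w in pk:
    for n in range(min(9, len(fk))):
        brute[n] += w * fk[n]
    fk = convolve(fk, f)
assert all(abs(x - y) < 1e-9 for x, y in zip(g, brute))
```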

Proposition 3.3. Let f be a pmf defined on $\mathbb{N}$. If $\alpha^\ast\ge \alpha\in (0,1]$, $p, p^\ast\in (0,1)$, and if the compound Negative Binomial distribution $c_{{\rm nb}(\alpha,p)}(\,f)$ is non-degenerate and log-concave, then ${\rm nb}(\alpha^\ast,p^\ast) \le_{\rm lc} c_{{\rm nb}(\alpha,p)}(\,f)$.

Proof. Denote $g=c_{{\rm nb}(\alpha,p)}(\,f)$. Since g is non-degenerate and log-concave, we have $g_n \gt 0$ for $n\in \mathbb{N}$ and

(3.6)\begin{equation} \frac {g_{n-j}}{g_{n-j-1}} \ge \frac {g_n}{g_{n-1}},\quad 0 \lt j \lt n. \end{equation}

Since $\alpha\in (0,1]$ and $\alpha^\ast\ge \alpha$, it follows that:

(3.7)\begin{equation} \frac{(\alpha-1)j+n+\alpha}{(\alpha-1)j+n-1+\alpha}\geq\frac{n+\alpha}{n+\alpha-1}\ge \frac{n+\alpha^\ast}{n+\alpha^\ast-1},\quad j\ge 1. \end{equation}

In view of (3.6) and (3.7), we have

\begin{eqnarray*} (n+1) g_{n+1} & \ge & \frac {q}{1-q f_0} \sum^n_{j=0} [(\alpha-1) j +n+\alpha] f_{j+1} g_{n-j-1} \cdot \frac {g_n}{g_{n-1}}\\ & \ge & \frac {g_n}{g_{n-1}}\cdot \frac {q}{1-q f_0} \sum^{n-1}_{j=0} [(\alpha-1) j +n+\alpha] f_{j+1} g_{n-j-1} \\ & \ge & \frac {g_n}{g_{n-1}}\cdot \frac {q}{1-q f_0} \sum^{n-1}_{j=0} [(\alpha-1) j +n-1+\alpha] \frac {\alpha^\ast+n}{\alpha^\ast +n-1} f_{j+1} g_{n-j-1} \\ & \ge & \frac {g_n (\alpha^\ast +n)}{g_{n-1}(\alpha^\ast +n-1)} \cdot \frac {q}{1-q f_0} \sum^{n-1}_{j=0} [(\alpha-1) j +(n-1)+\alpha] f_{j+1} g_{n-j-1}\\ & = & \frac {g_n (\alpha^\ast+n)}{g_{n-1}(\alpha^\ast +n-1)} \cdot n g_n,\quad n\ge 1. \end{eqnarray*}

The above inequality can be simplified to:

\begin{equation*} \left [\frac {g_n}{\binom{\alpha^\ast+n-1}{n}}\right ]^2 \le \frac {g_{n-1}}{\binom{\alpha^\ast+n-2}{n-1}} \cdot \frac {g_{n+1}}{\binom{\alpha^\ast +n}{n+1}},\quad n\ge 1, \end{equation*}

that is, $g_n/\binom{\alpha^\ast +n-1}{n}$ is log-convex in n. Since the geometric factor $(1-p^\ast)^n$ in ${\rm nb}(\alpha^\ast,p^\ast)$ is log-linear and does not affect relative log-concavity, we conclude ${\rm nb}(\alpha^\ast,p^\ast) \le_{\rm lc} g$ for any $p^\ast\in(0,1)$.

Corollary 3.4. Suppose that f is a pmf defined on $\mathbb{N}$ with the mean µ > 0. For $\alpha\in(0,1]$, if there exists $\alpha^\ast \gt \alpha$ and $p^\ast\in(0,1)$ such that:

\begin{equation*} \frac {\alpha^\ast(1-p^\ast)}{p^\ast} = \frac {\alpha \mu (1-p)}{p}, \end{equation*}

and $c_{{\rm nb}(\alpha,p)}(\,f)$ is non-degenerate and log-concave, then $H\left ({\rm nb}(\alpha^\ast,p^\ast)\right ) \le H\left ( c_{{\rm nb}(\alpha,p)}(\,f)\right )$.

Proof. By Proposition 3.3, ${\rm nb}(\alpha^\ast,p^\ast)\le_{\rm lc} c_{{\rm nb}(\alpha,p)}(\,f)$. It is easy to verify that the means of ${\rm nb}(\alpha^\ast,p^\ast)$ and $c_{{\rm nb}(\alpha,p)}(\,f)$ are equal. Thus, the desired result follows directly from Theorem 1.1: part (1) gives the convex order, and part (2) gives the entropy comparison, since $c_{{\rm nb}(\alpha,p)}(\,f)$ is log-concave.

Proposition 3.5. Suppose that α > 0 and $p\in(0,1)$. If $c_{{\rm nb}(\alpha,p)}(\,f)$ is non-degenerate and log-concave, then ${\rm poi}(\lambda)\leq_{{\rm lc}}c_{{\rm nb}(\alpha,p)}(\,f)$ for any λ > 0. In particular, when $\lambda^\ast= \alpha\mu(1-p)/p$, where µ denotes the mean of f, we have $H({\rm poi}(\lambda^\ast))\leq H(c_{{\rm nb}(\alpha,p)}(\,f))$.

Proof. The notation is the same as in the proof of Proposition 3.3. Obviously, $g_n \gt 0$ for $n\ge 0$, and by the log-concavity of g, we have

\begin{equation*} \frac{g_{n-j}}{g_{n+1-j}}\leq\frac{g_n}{g_{n+1}},\quad j=0,\ldots, n. \end{equation*}

Hence, for $n\ge 0$,

\begin{align*} (n+1) g_{n+1} & \le \frac {q}{1-q f_0} \sum^n_{j=0} [(\alpha-1) j +n+\alpha] f_{j+1} g_{n+1-j} \cdot \frac {g_n}{g_{n+1}}\\ & \le \frac {g_n}{g_{n+1}}\cdot \frac {q}{1-q f_0} \sum^{n+1}_{j=0} [(\alpha-1) j +n+1+\alpha] f_{j+1} g_{n+1-j} \\ & = \frac {g_n}{g_{n+1}} \cdot (n+2) g_{n+2}, \end{align*}

that is, $n!g_n$ is log-convex in $n\in\mathbb{N}$. Since the factor $\lambda^n$ in ${\rm poi}(\lambda)$ is log-linear, it follows that ${\rm poi}(\lambda)\leq_{{\rm lc}}c_{{\rm nb}(\alpha,p)}(\,f)$ for any λ > 0. The rest follows directly from Theorem 1.1.

4. Log-concavity of a compound distribution

[Reference Yu8] proved that if f is log-concave, then $c_{{\rm poi}(\lambda)}(\,f)$ is log-concave if and only if $\lambda f_1^2\ge 2 f_2$. In order to show that Propositions 3.3 and 3.5 and Corollary 3.4 are meaningful, we need to investigate the log-concavity of the compound Negative Binomial distribution. First, we study the necessary and sufficient condition for a compound Geometric distribution to be log-concave.

Proposition 4.1. Suppose f is a pmf defined on $\mathbb{N}$ such that $f_1 \gt 0$. For $p\in(0,1)$, we have $c_{{\rm geo}(p)}(\,f)\in {\rm LC}$ if and only if $f_k=0$ for all $k\ge 2$.

Proof. Denote the pmf of ${\rm Geo}(p)$ by $\{p_n, n\ge 0\}$, and let g be the pmf of $c_{{\rm geo}(p)}(\,f)$. By (3.3) with α = 1, we have:

(4.1)\begin{equation} g_{n+1}=\eta\sum_{j=0}^nf_{j+1}g_{n-j}, \quad n\ge 0, \end{equation}

where $\eta= q/(1-qf_0) \gt 0$.

First of all, observe that

  1. (1) $g_0 = p_0+\sum_{n=1}^\infty p_nf_0^n = p+\sum_{n=1}^\infty pq^nf_0^n= p/(1-qf_0) = p\eta/q \gt 0$;

  2. (2) $f_1\neq 0 \Longrightarrow g_n \gt 0$ for $n\ge 0$.

$(\Longrightarrow)$ We prove that $f_n=0$ for all $n\ge 2$ by induction. For n = 2, by (4.1), we have $g_1=\eta f_1 g_0$ and $g_2=\eta[f_1g_1+f_2g_0]$. Substituting these into $g_1^2\ge g_0 g_2$, we obtain $f_2g_0=0$, that is, $f_2=0$ and $g_1^2 = g_0g_2$.

Now assume that $f_n=0$ for $n=2,\cdots,k$ and $g_{k-1}^2=g_{k-2}g_{k}$. Notice that

\begin{equation*} g_{k-1}=\eta f_1g_{k-2}, ~g_k=\eta f_1g_{k-1}, ~g_{k+1}=\eta[f_1g_k+f_{k+1}g_0]. \end{equation*}

Substituting these into $g_k^2\geq g_{k-1}g_{k+1}$, we obtain

\begin{equation*} f_1^2g_{k-1}^2\geq f_1^2g_kg_{k-2}+f_1f_{k+1}g_0g_{k-2}. \end{equation*}

So, $f_1f_{k+1}g_0g_{k-2}=0$. Thus, $f_{k+1}=0$ and in the meantime, $g_k^2= g_{k-1}g_{k+1}$. The necessity is proved by induction.

$(\Longleftarrow)$ Suppose that $f_k=0$ for all $k\ge 2$. By (4.1), we have

(4.2)\begin{equation} g_{n+1}=\eta f_1g_n=(\eta f_1)^{n+1}g_0,\quad n\ge 0. \end{equation}

It is obvious that $g_n^2=g_{n-1}g_{n+1}$ for $n\ge 1$ and, hence, $g\in {\rm LC}$.

Remark 4.2.

  1. (1) By (4.2), we have

    \begin{equation*} c_{{\rm geo}(p)}(\,f)\in {\rm LC}\Longrightarrow c_{{\rm geo}(p)}(\,f)={\rm geo}\left(\frac{p}{1-qf_0}\right). \end{equation*}
  2. (2) Assume that $f_n=0$ for $n\ge 2$ with $f_1 \gt 0$. Then $g:=c_{{\rm nb}(\alpha,p)}(\,f) \in {\rm LC}$ if and only if $\alpha\ge 1$. In fact, if $f_n=0$ for $n\ge 2$, then (3.3) can be simplified to:

    \begin{equation*} g_{n+1}=\frac{n+\alpha}{n+1}\eta f_1g_n,\quad n\ge 0. \end{equation*}

    It is easy to verify that

    \begin{equation*} g_{n+1}^2\geq g_ng_{n+2} \Longleftrightarrow \frac{\alpha-1}{n+1}\geq\frac{\alpha-1}{n+2}, \end{equation*}

    that is, $\alpha\ge 1$. A numerical check of both parts of this remark follows.
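A minimal sketch (ours; parameter values are arbitrary) confirming both parts of Remark 4.2 with the simplified recursion:

```python
def compound_nb_01(alpha, p, f0, f1, nmax):
    """c_{nb(alpha,p)}(f) for f supported on {0, 1}:
    g_{n+1} = (n + alpha)/(n + 1) * eta * f1 * g_n (Remark 4.2(2))."""
    q = 1 - p
    eta = q / (1 - q * f0)
    g = [(p / (1 - q * f0)) ** alpha]
    for n in range(nmax):
        g.append((n + alpha) / (n + 1) * eta * f1 * g[n])
    return g

p, f0, f1 = 0.4, 0.3, 0.7

# Remark 4.2(1): alpha = 1 reproduces geo(p*) with p* = p / (1 - q f0).
pstar = p / (1 - (1 - p) * f0)
g = compound_nb_01(1.0, p, f0, f1, 30)
assert all(abs(gn - pstar * (1 - pstar) ** n) < 1e-12 for n, gn in enumerate(g))

# Remark 4.2(2): the result is log-concave iff alpha >= 1.
for alpha in (0.5, 1.0, 2.0):
    g = compound_nb_01(alpha, p, f0, f1, 30)
    print(alpha, all(g[n] ** 2 >= g[n - 1] * g[n + 1] - 1e-15
                     for n in range(1, len(g) - 1)))   # False, True, True
```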

Proposition 4.3. Suppose that $f\in {\rm LC}$, and denote $q=1-p$ and $\eta=q/(1-qf_0)$. Then $g:=c_{{\rm nb}(\alpha,p)}(\,f) \in {\rm LC}$ if and only if

(4.3)\begin{equation} (\alpha-1)\eta f_1^2\geq 2f_2. \end{equation}

The proof of Proposition 4.3 is postponed to Appendix A. A numerical illustration of condition (4.3) is given below.
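Pending the proof, condition (4.3) can be probed numerically with the recursion (3.3) (our sketch; with $p=0.5$ and $f=(0,0.8,0.2)$, the threshold in (4.3) is $\alpha=2.25$):

```python
def compound_nb(alpha, p, f, nmax):
    """c_{nb(alpha,p)}(f) via the recursion (3.3)."""
    q = 1 - p
    eta = q / (1 - q * f[0])
    fx = lambda j: f[j] if j < len(f) else 0.0
    g = [(p / (1 - q * f[0])) ** alpha]
    for n in range(nmax):
        g.append(eta / (n + 1) * sum(((alpha - 1) * j + n + alpha) * fx(j + 1) * g[n - j]
                                     for j in range(n + 1)))
    return g

p, f = 0.5, [0.0, 0.8, 0.2]                   # f is log-concave
eta = (1 - p) / (1 - (1 - p) * f[0])
for alpha in (1.5, 3.0):                      # one below, one above the threshold
    cond = (alpha - 1) * eta * f[1] ** 2 >= 2 * f[2]        # condition (4.3)
    g = compound_nb(alpha, p, f, 40)
    lc = all(g[n] ** 2 >= g[n - 1] * g[n + 1] - 1e-15 for n in range(1, len(g) - 1))
    print(alpha, cond, lc)    # cond and lc agree, as Proposition 4.3 asserts
```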

Remark 4.4. Suppose that $f\in {\rm LC}$, $0 \lt \alpha\leq\alpha^\ast$ and $p\geq p^\ast \gt 0$. If $g:=c_{{\rm nb}(\alpha,p)}(\,f) \in {\rm LC}$, then $c_{{\rm nb}(\alpha^\ast,p^\ast)}(\,f) \in {\rm LC}$.

Next, we discuss when a compound Binomial distribution is log-concave. To this end, we first give a recursive expression for its pmf.

Lemma 4.5. Denote $g:= c_{{\rm b}(n,p)}(\,f)$. Then the pmf $\{g_k, k\in\mathbb{N}\}$ has recursive expression as follows:

(4.4)\begin{equation} (k+1)g_{k+1}=\delta\sum_{j=0}^k[(n+1)j+n-k]f_{j+1}g_{k-j},\quad k\ge 0, \end{equation}

where $\delta = p/(pf_0+q)$ and $q=1-p$.

Proof. The proof of (4.4) is similar to that of (3.3), since

\begin{equation*} \frac{p_k}{p_{k-1}}=-\frac{p}{q}+\frac{(n+1)p}{qk}=a+\frac{b}{k},\quad \forall\ k\ge 1. \end{equation*}

Here, the counterpart of (3.5) still holds,

(4.5)\begin{equation} \sum_{j=0}^{k+1}\left(a+\frac{bj}{k+1}\right)f_jg_{k+1-j} = g_{k+1}, ~k\geq 0, \end{equation}

and (3.4) is replaced by the following formula

(4.6)\begin{align} & \sum_{j=0}^{k+1}\left(a+\frac{bj}{k+1}\right)f_jg_{k+1-j} \nonumber\\ & \qquad= -\frac{p}{q}f_0g_{k+1}+\frac{p}{q(k+1)}\sum_{j=0}^k[(n+1)j+n-k]f_{j+1}g_{k-j},\ \ k\ge 0. \end{align}

Thus, (4.4) follows from (4.5) and (4.6).

Proposition 4.6. Denote $g:=c_{{\rm b}(n,p)}(\,f)$, and let $f\in{\rm LC}$. If $g\in{\rm LC}$, then

(4.7)\begin{equation} (n+1)\delta f_1^2\geq 2f_2, \end{equation}

where $\delta=p/(pf_0+q)$ and $q=1-p$. In particular, for n = 1, (4.7) is also sufficient for $c_{{\rm b}(1,p)}(\,f)\in{\rm LC}$. But for $n\geq 2$, (4.7) is not sufficient for $g\in{\rm LC}$.

Proof. (1)  By Lemma 4.5, we have $g_1 = n\delta f_1g_0$ and $g_2= [(n-1)\delta f_1g_1+2n\delta f_2g_0]/2$. In view of $g_1^2\geq g_0g_2$, we obtain (4.7).

(2)  For n = 1, suppose that (4.7) holds, that is, $\delta f_1^2\geq f_2$. From the stochastic representation of the random variable corresponding to g, we have $g_0=q+pf_0$ and $g_k=pf_k$ for $k\ge 1$. Since $f\in{\rm LC}$ gives $g_k^2\ge g_{k-1}g_{k+1}$ for $k\ge 2$, to prove $g\in{\rm LC}$ we only need $g_1^2\geq g_0g_2$, which is exactly (4.7).

(3)  We give a counterexample for n = 2. Since $B(2,p)$ is the sum of two independent $B(1,p)$ random variables, $c_{{\rm b}(2,p)}(\,f) = c_{{\rm b}(1,p)}(\,f)\ast c_{{\rm b}(1,p)}(\,f)$. Denote $h = c_{{\rm b}(1,p)}(\,f)$; by part (2), we have $h_0=q+pf_0, ~h_k=pf_k, ~k\geq 1$. In particular, take $p=20/23$ and

\begin{equation*} f = \left(\frac{1}{20}, \frac{1}{5}, \frac{3}{10}, \frac{9}{20}, 0, 0, \cdots\right). \end{equation*}

Then

\begin{equation*} h = \left(\frac{4}{23}, \frac{4}{23}, \frac{6}{23}, \frac{9}{23}, 0, 0, \cdots\right). \end{equation*}

Hence,

\begin{equation*} g = h\ast h = \left(g_0, g_1, g_2, \frac{120}{23^2}, \frac{108}{23^2}, \frac{108}{23^2}, g_6, \cdots\right). \end{equation*}

Obviously, $g_4^2 \lt g_3g_5$, which means that g is not log-concave. On the other hand, $\delta = p/(pf_0+q)=5$, $f\in{\rm LC}$, and $3\delta f_1^2= 3/5 =2f_2$, that is, (4.7) holds. Therefore, (4.7) is not sufficient for $g\in{\rm LC}$. This counterexample is replayed in exact arithmetic below.
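The counterexample involves only rational arithmetic and can be replayed exactly (our sketch, using Python's `fractions`):

```python
from fractions import Fraction as Fr

p = Fr(20, 23)
f = [Fr(1, 20), Fr(1, 5), Fr(3, 10), Fr(9, 20)]
h = [1 - p + p * f[0]] + [p * fk for fk in f[1:]]        # h = c_{b(1,p)}(f)
g = [sum(h[i] * h[k - i] for i in range(len(h)) if 0 <= k - i < len(h))
     for k in range(2 * len(h) - 1)]                     # g = h * h = c_{b(2,p)}(f)

print([str(x) for x in h])                # ['4/23', '4/23', '6/23', '9/23']
print(str(g[3]), str(g[4]), str(g[5]))    # 120/529 108/529 108/529
print(g[4] ** 2 < g[3] * g[5])            # True: g is not log-concave
delta = p / (p * f[0] + 1 - p)
print(str(delta), 3 * delta * f[1] ** 2 == 2 * f[2])   # 5 True: yet (4.7) holds
```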

5. Relative log-concavity

Lemma 2 in [Reference Yu8] states that, for pmfs $f, g, f^\ast$ and $g^\ast$ defined on $\mathbb{Z}_+$, if $f\le_{\rm cx} f^\ast$ and $g\le_{\rm cx}g^\ast$, then $c_g(\,f)\le_{\rm cx}c_{g^\ast}(\,f^\ast)$. On the other hand, $g\le_{\rm lc}{\rm poi}(\lambda)$ for any $g\in{\rm ULC}$ and λ > 0. Combined with Theorem 1.1, this easily yields the following proposition.

Proposition 5.1.

  1. (1) [Reference Yu8] If $g\in{\rm ULC}$ and $\mu_g = \lambda$, then $c_g(\,f)\leq_{\rm cx}c_{{\rm poi}(\lambda)}(\,f)$.

  2. (2) [Reference Yu8],[Reference Johnson, Kontoyiannis and Madiman2] If $g\in{\rm ULC}, ~\mu_g = \lambda \gt 0$ and $c_{{\rm poi}(\lambda)}(\,f)\in{\rm LC}$, then $H(c_g(\,f))\leq H(c_{{\rm poi}(\lambda)}(\,f))$.

Notice that $g\in{\rm ULC}(n)\Longleftrightarrow g\le_{\rm lc} b(n,p)$ for all $p\in(0,1)$, and that $g\in{\rm LC}\Longleftrightarrow g\le_{\rm lc} {\rm geo}(\lambda)$ for all $\lambda\in(0,1)$. The following two propositions are easily established by Theorem 1.1.

Proposition 5.2.

  1. (1) If $g\in{\rm ULC}(n)$ and $\mu_g = np$, then $c_g(\,f)\leq_{\rm cx}c_{{\rm b}(n,p)}(\,f)$.

  2. (2) If $g\in{\rm ULC}(n)$, $\mu_g=np$ and $c_{{\rm b}(n,p)}(\,f)\in{\rm LC}$, then $H(c_g(\,f))\leq H(c_{{\rm b}(n,p)}(\,f))$.

Proposition 5.3.

  1. (1) If $g\in{\rm LC}$ and $\mu_g = (1-\lambda)/\lambda$, then $c_g(\,f)\leq_{\rm cx}c_{{\rm geo}(\lambda)}(\,f)$.

  2. (2) If $g\in{\rm LC}$, $\mu_g = (1-\lambda)/\lambda$ and $c_{{\rm geo}(\lambda)}(\,f)\in{\rm LC}$, then $H(c_g(\,f))\leq H(c_{{\rm geo}(\lambda)}(\,f))$.

Naturally, the following questions arise:

Question

  1. (1) If $g\in{\rm ULC}$ and $\mu_g = \lambda$, is it true that $c_g(\,f)\leq_{\rm lc}c_{{\rm poi}(\lambda)}(\,f)$?

  2. (2) If $g\in{\rm ULC}(n)$ and $\mu_g = np$, is it true that $c_g(\,f)\leq_{\rm lc}c_{{\rm b}(n,p)}(\,f)$?

  3. (3) If $g\in{\rm LC}$ and $\mu_g = (1-\lambda)/\lambda$, is it true that $c_g(\,f)\leq_{\rm lc}c_{{\rm geo}(\lambda)}(\,f)$?

The following three counterexamples show that the answers to Questions (1)–(3) are all negative.

Example 5.1. $g\in{\rm ULC}$, $\mu_g =\lambda$ and $f\in{\rm LC} \not\!\Longrightarrow c_g(\,f)\le_{\rm lc}c_{{\rm poi}(\lambda)}(\,f)$.

Suppose that $g={\rm b}(2,\lambda/2)$ with $0 \lt \lambda \lt 2$, that f satisfies $f_j=0$ for $j\ge 3$, and denote $s=c_g(\,f)$ and $ t=c_{{\rm poi}(\lambda)}(\,f)$. Then

\begin{align*} s_0 & = \left ( 1-\frac{\lambda}{2}\right )^2+\lambda\left(1-\frac{\lambda}{2}\right)f_0+\frac{\lambda^2}{4}f_0^2,\qquad s_1 = \lambda\left (1-\frac{\lambda}{2}\right )f_1+\frac{\lambda^2}{2}f_0f_1,\\ s_2 & = \lambda\left (1-\frac{\lambda}{2}\right )f_2+\frac{\lambda^2}{2}f_0f_2+\frac{\lambda^2}{4}f_1^2,\qquad s_3 = \frac{\lambda^2}{2}f_1f_2, \qquad s_4=\frac{\lambda^2}{4}f_2^2, \end{align*}

and

\begin{align*} t_0 & = e^{\lambda(\,f_0-1)}, \qquad t_1=\lambda f_1e^{\lambda(\,f_0-1)},\qquad t_2 =\frac{\lambda}{2}(\lambda f_1^2+2f_2)e^{\lambda(\,f_0-1)},\\ t_3 & = \lambda^2f_1 \left (\frac{1}{6}\lambda f_1^2+f_2\right ) e^{\lambda(\,f_0-1)},\qquad t_4=\frac{\lambda^2}{4} \left (\frac{1}{6}\lambda^2 f_1^4+2\lambda f_1^2f_2+2f_2^2\right )e^{\lambda(\,f_0-1)}. \end{align*}

In particular, we take λ = 1 and $f=(\,f_0, f_1, f_2)=(1/3, 1/3, 1/3)$, that is, f is the discrete uniform distribution on $\{0, 1, 2\}$. Obviously, $f\in{\rm LC}$. By calculation,

\begin{equation*} \left(\frac{s_0}{t_0}, \frac{s_1}{t_1}, \frac{s_2}{t_2}, \frac{s_3}{t_3}, \frac{s_4}{t_4}\right) = \left(\frac{4}{9}, \frac{2}{3}, \frac{9}{14}, \frac{9}{19}, \frac{54}{145}\right)e^{2/3}. \end{equation*}

It can be verified that

\begin{equation*} \left(\frac{s_1}{t_1}\right)^2-\frac{s_0}{t_0}\frac{s_2}{t_2} \gt 0, \qquad \left(\frac{s_3}{t_3}\right)^2-\frac{s_2}{t_2}\frac{s_4}{t_4} = -0.015 \lt 0. \end{equation*}

Hence, $s_k/t_k$ is not log-concave, so $s\nleq_{\rm lc}t$. This computation is verified in exact arithmetic below.
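These ratios can be reproduced in exact arithmetic (our sketch); the common factor $e^{\lambda(f_0-1)}$ is dropped from each tk, since it cancels in the sign checks:

```python
from fractions import Fraction as Fr

lam = Fr(1)
f0 = f1 = f2 = Fr(1, 3)
s = [(1 - lam / 2) ** 2 + lam * (1 - lam / 2) * f0 + lam ** 2 / 4 * f0 ** 2,
     lam * (1 - lam / 2) * f1 + lam ** 2 / 2 * f0 * f1,
     lam * (1 - lam / 2) * f2 + lam ** 2 / 2 * f0 * f2 + lam ** 2 / 4 * f1 ** 2,
     lam ** 2 / 2 * f1 * f2,
     lam ** 2 / 4 * f2 ** 2]
t = [Fr(1),                                  # e^{lam(f0 - 1)} factored out of each t_k
     lam * f1,
     lam / 2 * (lam * f1 ** 2 + 2 * f2),
     lam ** 2 * f1 * (lam * f1 ** 2 / 6 + f2),
     lam ** 2 / 4 * (lam ** 2 * f1 ** 4 / 6 + 2 * lam * f1 ** 2 * f2 + 2 * f2 ** 2)]
r = [si / ti for si, ti in zip(s, t)]
print([str(x) for x in r])                   # ['4/9', '2/3', '9/14', '9/19', '54/145']
print(r[1] ** 2 - r[0] * r[2] > 0)           # True
print(float(r[3] ** 2 - r[2] * r[4]))        # about -0.015 < 0
```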

Example 5.2. $g\in{\rm LC}$, $\mu_g = (1-\lambda)/\lambda$ and $f\in{\rm LC} \not\!\Longrightarrow c_g(\,f)\leq_{\rm lc}c_{{\rm geo}(\lambda)}(\,f)$.

Suppose that f and g satisfy $f_j = g_j = 0$ for $j\ge 3$, and denote $s=c_g(\,f)$ and $t=c_{{\rm geo}(\lambda)}(\,f)$. Then

\begin{align*} s_0 & = g_0+g_1f_0+g_2f_0^2, \qquad s_1=g_1f_1+2g_2f_0f_1, \\ s_2 & = g_1f_2+g_2(2f_0f_2+f_1^2), \qquad s_3=2g_2f_1f_2, \qquad s_4=g_2f_2^2, \end{align*}

and

\begin{align*} t_1 = \eta f_1t_0, \quad t_2 = \eta(\,f_1t_1+f_2t_0), \quad t_3 = \eta(\,f_1t_2+f_2t_1),\quad t_4 = \eta(\,f_1t_3+f_2t_2), \end{align*}

with $\eta = (1-\lambda)/[1-(1-\lambda)f_0]$. If we take $f=(\,f_0, f_1, f_2)=(1/3, 1/3, 1/3)\in{\rm LC}$, then

\begin{equation*} g = (g_0,g_1,g_2)=\left (\frac{7\lambda-3}{4\lambda}, \frac{1-\lambda}{2\lambda}, \frac{1-\lambda}{4\lambda}\right). \end{equation*}

For $\lambda= 40/77$, g is log-concave, that is, $g\le_{\rm lc}{\rm Geo}(\lambda)$. It is easy to verify that

\begin{equation*} \left(\frac{s_3}{t_3}\right)^2-\frac{s_2}{t_2}\frac{s_4}{t_4} = -0.082 \lt 0, \end{equation*}

hence, $s_k/t_k$ is not log-concave, so $s\nleq_{\rm lc}t$. A numerical check follows.
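The exact computation can be scripted (our sketch); here $t_0=\lambda/[1-(1-\lambda)f_0]$ is obtained as $g_0$ in the proof of Proposition 4.1:

```python
from fractions import Fraction as Fr

lam, f = Fr(40, 77), [Fr(1, 3)] * 3
g = [(7 * lam - 3) / (4 * lam), (1 - lam) / (2 * lam), (1 - lam) / (4 * lam)]
s = [g[0] + g[1] * f[0] + g[2] * f[0] ** 2,
     g[1] * f[1] + 2 * g[2] * f[0] * f[1],
     g[1] * f[2] + g[2] * (2 * f[0] * f[2] + f[1] ** 2),
     2 * g[2] * f[1] * f[2],
     g[2] * f[2] ** 2]
eta = (1 - lam) / (1 - (1 - lam) * f[0])
t = [lam / (1 - (1 - lam) * f[0])]           # t_0, as g_0 in Proposition 4.1
for k in range(4):                           # t_{k+1} = eta (f_1 t_k + f_2 t_{k-1})
    t.append(eta * (f[1] * t[k] + (f[2] * t[k - 1] if k >= 1 else 0)))
r = [si / ti for si, ti in zip(s, t)]
print(float(r[3] ** 2 - r[2] * r[4]))        # about -0.082 < 0, as stated
```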

Example 5.3. $g\in{\rm ULC}(n), ~\mu_g = np, ~p\in(0,1)$ and $f\in{\rm LC} \not\!\Longrightarrow c_g(\,f)\le_{\rm lc} c_{{\rm b}(n,p)}(\,f)$.

Suppose that $f=(\,f_0, f_1, f_2)=(1/3, 1/3, 1/3)\in {\rm LC}$ and $g = (g_0,g_1,g_2)= (1-3p/2, p, p/2)$. If $p\in(1/2,2/3)$, then $g\in {\rm ULC}(2)$. Denote $s=c_g(\,f)$ and $t=c_{{\rm b}(2,p)}(\,f)$. Then sj and tj can be calculated as in Example 5.2. Choose $p=7/12$. Then

\begin{equation*} \left(\frac{s_3}{t_3}\right)^2-\frac{s_2}{t_2}\frac{s_4}{t_4} = -0.1728 \lt 0, \end{equation*}

hence, $s_k/t_k$ is not log-concave, so $s\nleq_{\rm lc}t$. The exact computation is sketched below.
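The exact computation (our sketch), using $t = c_{{\rm b}(2,p)}(\,f) = h\ast h$ with $h=c_{{\rm b}(1,p)}(\,f)$ from the proof of Proposition 4.6:

```python
from fractions import Fraction as Fr

def conv(a, b):
    out = [Fr(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

p, f = Fr(7, 12), [Fr(1, 3)] * 3
g = [1 - 3 * p / 2, p, p / 2]                # in ULC(2), since p is in (1/2, 2/3)
ff = conv(f, f)
s = [g[0] + g[1] * f[0] + g[2] * ff[0],      # s = c_g(f) = g0 d0 + g1 f + g2 f*f
     g[1] * f[1] + g[2] * ff[1],
     g[1] * f[2] + g[2] * ff[2],
     g[2] * ff[3],
     g[2] * ff[4]]
h = [1 - p + p * f[0], p * f[1], p * f[2]]   # h = c_{b(1,p)}(f)
t = conv(h, h)                               # t = c_{b(2,p)}(f)
r = [si / ti for si, ti in zip(s, t)]
print(float(r[3] ** 2 - r[2] * r[4]))        # about -0.1728 < 0, as stated
```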

Acknowledgements

The authors are grateful to the Associate Editor and two anonymous referees for their comprehensive reviews of an earlier version of this paper.

Funding statement

W. Xia was supported by the Natural Science Foundation of Jiangsu Higher Education Institutions of China (No. 20KJB110014). W. Lv was supported by the Scientific Research Project of Chuzhou University (No. zrjz2021019) and by the Scientific Research Program for Universities in Anhui Province (No. 2023AH051573).

Appendix A. Proof of Proposition 4.3

Proof. For convenience, write P = g for the pmf, and set $P_i=0$ for i < 0. Denote $r_j=\eta f_{j+1}$, and rewrite (3.3) as:

(A.1)\begin{equation} (n+1)P_{n+1}=\sum_{j=0}^n[(\alpha-1)j+n+\alpha]r_jP_{n-j}, \quad \forall~n\geq 0. \end{equation}

$(\Longrightarrow)$ By (A.1), we have $P_1=\alpha r_0P_0$ and $P_2=[(1+\alpha)r_0P_1+2\alpha r_1P_0]/2$. Since $P\in{\rm LC}$, it follows that $P_1^2\geq P_0P_2$, and thus,

\begin{equation*} P_1\cdot\alpha r_0P_0\geq \frac{1}{2}[(1+\alpha)r_0P_1+2\alpha r_1P_0]P_0. \end{equation*}

Simplifying the above inequality yields (4.3).

$(\Longleftarrow)$ Suppose (4.3) holds. First, notice the following equations:

\begin{align*} \alpha P_nP_{n+1} = &\sum_{j=0}^n [(\alpha-1)j+n+\alpha](n+\alpha)r_jP_{n-j}P_n \\ & -\sum_{j=0}^{n-1}[(\alpha-1)j+n-1+\alpha](n+1+\alpha)r_jP_{n-1-j}P_{n+1},\\ (n+1)^2P_{n+1}^2 = & \sum_{k=0}^n\sum_{l=0}^n[(\alpha-1)k+n+\alpha][(\alpha-1)l+n+\alpha]r_kr_lP_{n-k}P_{n-l},\\ n(n+2)P_{n+1}^2 = & (n+1)^2P_{n+1}^2- P_{n+1}^2,\\ n(n+2)P_nP_{n+2} = & \sum_{k=0}^{n-1}\sum_{l=0}^{n+1}[(\alpha-1)k+n-1+\alpha][(\alpha-1)l+n+1+\alpha] \times r_kr_lP_{n-1-k}P_{n+1-l}. \end{align*}

The first equation follows from the fact that the right hand side equals $(n+\alpha)P_n\cdot(n+1)P_{n+1}-(n+1+\alpha)P_{n+1}\cdot nP_n=\alpha P_nP_{n+1}$. Define

(A.2)\begin{equation} J_n^{(1)}=(\alpha r_0P_n-P_{n+1})P_{n+1}. \end{equation}

Then,

\begin{align*} &n(n+2)[P_{n+1}^2-P_nP_{n+2}] \\ = & ~(\alpha r_0P_nP_{n+1}-P_{n+1}^2)+(n+1)^2P_{n+1}^2-n(n+2)P_nP_{n+2}-\alpha r_0P_nP_{n+1}\\ = & ~J_n^{(1)}+\sum_{k=0}^n\sum_{l=0}^n[(\alpha-1)k+n+\alpha][(\alpha-1)l+n+\alpha]r_kr_lP_{n-k}P_{n-l}\\ & ~-\sum_{k=0}^{n-1}\sum_{l=0}^{n+1}[(\alpha-1)k+n-1+\alpha][(\alpha-1)l+n+1+\alpha]r_kr_lP_{n-1-k}P_{n+1-l}\\ & \quad -r_0\sum_{j=0}^n [(\alpha-1)j+n+\alpha](n+\alpha)r_jP_{n-j}P_n \\ & \qquad +r_0\sum_{j=0}^{n-1}[(\alpha-1)j+n-1+\alpha](n+1+\alpha)r_jP_{n-1-j}P_{n+1}\\ = & ~J_n^{(1)}+\sum_{k=0}^n\sum_{l=1}^n[(\alpha-1)k+n+\alpha][(\alpha-1)l+n+\alpha]r_kr_lP_{n-k}P_{n-l}\\ & ~-\sum_{k=0}^{n-1}\sum_{l=1}^{n+1}[(\alpha-1)k+n-1+\alpha][(\alpha-1)l+n+1+\alpha]r_kr_lP_{n-1-k}P_{n+1-l}\\ = & ~J_n^{(1)}+\sum_{k=0}^n\sum_{l=0}^n[(\alpha-1)k+n+\alpha][(\alpha-1)(l+1)+n+\alpha]r_kr_{l+1}P_{n-k}P_{n-1-l}\\ & ~-\sum_{k=0}^{n}\sum_{l=0}^{n}[(\alpha-1)k+n-1+\alpha][(\alpha-1)(l+1)+n+1+\alpha]r_kr_{l+1}P_{n-1-k}P_{n-l}\\ = & ~J_n^{(1)}+\sum_{k=0}^n\sum_{l=0}^n[\alpha(k+1)+n-k][\alpha(l+2)+n-l-1]r_kr_{l+1}P_{n-k}P_{n-1-l}\\ & ~-\sum_{k=0}^{n}\sum_{l=0}^{n}[\alpha(k+1)+n-1-k][\alpha(l+2)+n-l]r_kr_{l+1}P_{n-1-k}P_{n-l}\\ = & ~J_n^{(1)}+J_n^{(2)}+J_n^{(3)}+J_n^{(4)}, \end{align*}

where $J_n^{(2)}, J_n^{(3)}$ and $J_n^{(4)}$ are defined by

\begin{align*} J_n^{(2)} & = \alpha^2\sum_{k=0}^n\sum_{l=0}^n(k+1)(l+2)r_kr_{l+1}[P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l}]\\ & = \alpha^2\sum_{k\geq l}[P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l}][(k+1)(l+2)r_kr_{l+1}-(l+1)(k+2)r_lr_{k+1}],\\ J_n^{(3)} & = \sum_{k=0}^n\sum_{l=0}^n[(n-k)(n-l-1)P_{n-k}P_{n-1-l}-(n-1-k)(n-l)P_{n-1-k}P_{n-l}]r_kr_{l+1}\\ & = \sum_{k\geq l}[(n-k)(n-l-1)P_{n-k}P_{n-1-l}-(n-1-k)(n-l)P_{n-1-k}P_{n-l}](r_kr_{l+1}-r_lr_{k+1}),\\ J_n^{(4)} & = \alpha\sum_{k=0}^n\sum_{l=0}^nr_kr_{l+1}[(k+1)(n-l-1)+(n-k)(l+2)]P_{n-k}P_{n-1-l}\\ & \quad - \alpha\sum_{k=0}^n\sum_{l=0}^nr_kr_{l+1}[(k+1)(n-l)+(n-k-1)(l+2)]P_{n-1-k}P_{n-l}. \end{align*}

Define function $h(k,l)=(k+1)(n-l)+(n-k-1)(l+2)$, which satisfies $h(k,l)=h(l,k)$. Therefore, $J_n^{(4)}$ can be simplified to:

(A.3)\begin{align} J_n^{(4)} & = \alpha\sum_{k=0}^n\sum_{l=0}^nr_kr_{l+1}[(h(k,l)+l-k+1)P_{n-k}P_{n-1-l}-h(k,l)P_{n-1-k}P_{n-l}]\nonumber\\ & = \alpha\sum_{k=0}^n\sum_{l=0}^nr_kr_{l+1}h(k,l)(P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l})\nonumber\\ & \quad +\alpha\sum_{k=0}^n\sum_{l=0}^nr_kr_{l+1}(l-k+1)P_{n-k}P_{n-1-l}\nonumber\\ & = \alpha\sum_{k\geq l}h(k,l)(P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l})(r_kr_{l+1}-r_lr_{k+1})\nonumber\\ & \quad +\alpha\sum_{k=0}^n\sum_{l=1}^{n+1}r_kr_{l}(l-k)P_{n-k}P_{n-l}\nonumber\\ & = \alpha\sum_{k\geq l}h(k,l)(P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l})(r_kr_{l+1}-r_lr_{k+1})+\alpha\sum_{k=0}^nr_kr_{0}kP_{n-k}P_{n}\nonumber\\ & \geq \alpha\sum_{k\geq l}h(k,l)(P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l})(r_kr_{l+1}-r_lr_{k+1}). \end{align}

Then, we prove the log-concavity of $\{P_n, n\geq 0\}$ by induction. For k = 1, it is the same as (4.3). Now assume $P_k^2\geq P_{k-1}P_{k+1}$ for all $k\le n$. To prove $P_{n+1}^2\geq P_{n}P_{n+2}$, it suffices to prove:

\begin{equation*} J_n^{(1)}+J_n^{(2)}+J_n^{(3)}+J_n^{(4)}\ge 0. \end{equation*}

In fact, $J_n^{(\nu)}\ge 0$ for all $\nu\in\{1,2,3,4\}$. Details are as follows.

  • ν = 1: By the induction hypothesis, we have $\alpha r_0=P_1/P_0\geq P_2/P_1\ge \cdots \ge P_{n+1}/P_n$ and, hence, $J_n^{(1)}\geq 0$ by (A.2).

  • ν = 2: $f\in{\rm LC}$ implies that $\{(i+1)r_i, i\geq 0\}$ is also log-concave. Hence,

    \begin{equation*} (k+1)(l+2)r_kr_{l+1}\geq (l+1)(k+2)r_lr_{k+1},\quad k\geq l. \end{equation*}

    Furthermore, $P_{n-k}P_{n-1-l}\geq P_{n-1-k}P_{n-l}$ for $k\ge l$, which holds by the induction hypothesis. So, $J_n^{(2)}\geq 0$.

  • ν = 3: The induction hypothesis means that $P_k$ is log-concave in $k\in\{0,1,\ldots,n+1\}$, implying that $kP_k$ is also log-concave in $k\in\{0, 1, \ldots, n+1\}$. Therefore,

    \begin{equation*} (n-k)(n-l-1)P_{n-k}P_{n-1-l}\ge (n-1-k)(n-l)P_{n-1-k}P_{n-l},\quad k\ge l. \end{equation*}

    Obviously, $r_kr_{l+1}\ge r_lr_{k+1}$ for $k\ge l$. Thus, $J_n^{(3)}\ge 0$.

  • ν = 4: By the definition of $h(k,l)$, we have

    \begin{equation*} h(k,l)\ge 0,\ \forall~k\leq n-1;\qquad h(n,n-1)=0;\qquad h(n,l) \gt 0, \ \forall~l\le n-2. \end{equation*}

    Applying (A.3), we have

    \begin{align*} J_n^{(4)} \ge & \alpha\sum_{l\le k\le n-1} h(k,l) (P_{n-k}P_{n-1-l}-P_{n-1-k}P_{n-l})(r_kr_{l+1}-r_lr_{k+1})\\ & +\alpha\sum_{l=0}^n h(n,l)P_0P_{n-1-l}(r_nr_{l+1}-r_lr_{n+1})\ge 0. \end{align*}

Based on the above discussion, the log-concavity of $\{P_n, n\geq 0\}$ is proved by induction.

References

Cover, T.M. & Thomas, J.A. (2006). Elements of information theory, 2nd Edn. New York: Wiley-Interscience.
Johnson, O., Kontoyiannis, I. & Madiman, M. (2008). On the entropy and log-concavity of compound Poisson measures. arXiv:0805.4112.
Panjer, H.H. (1981). Recursive evaluation of a family of compound distributions. Astin Bulletin 12(1): 22–26.
Shaked, M. & Shanthikumar, J.G. (2007). Stochastic orders. New York: Springer.
Whitt, W. (1985). Uniform conditional variability ordering of probability distributions. Journal of Applied Probability 22(3): 619–633.
Yu, Y. (2008a). On the maximum entropy properties of the Binomial distribution. IEEE Transactions on Information Theory 54(7): 3351–3353.
Yu, Y. (2008b). On an inequality of Karlin and Rinott concerning weighted sums of i.i.d. random variables. Advances in Applied Probability 40(4): 1223–1226.
Yu, Y. (2009). On the entropy of compound distributions on nonnegative integers. IEEE Transactions on Information Theory 55(8): 3645–3650.
Yu, Y. (2010). Relative log-concavity and a pair of triangle inequalities. Bernoulli 16(2): 459–470.