1. Introduction
Let
$X_i, i \geq 1,$ be a sequence of independent and identically distributed random variables with a density or probability mass function f(x) for which
$\log(f(x)) $ is a concave function. Let
$S_n = \sum_{i=1}^n X_i, n \geq 1.$ For given constants
$s_n, n \geq 1$, let
$N = \min\{n \geq 1 : S_n \geq s_n\}$
be the first time the random walk reaches or exceeds its boundary.
In Section 2 we present bounds on
$P(N \gt n)$. In Section 3, we specialize to the case
$s_k = a + b \sqrt{k},\, k \geq 1,$ where a is non-negative and b is positive, and present bounds on
$E[N].$
The exploration of random walk behavior with threshold boundaries has a rich history in probability theory. Blackwell and Freedman [1] presented foundational insights into exit times for sums of independent random variables. They considered a simple coin-tossing model where
$X_i, i \geq 1,$ take values ±1 with probability
$\frac{1}{2}$ each. Let
$\tau(N, c)$ be the least
$n \geq N$ with
$\left|S_n\right| \gt c n^{\frac{1}{2}}$, where c is a constant. Their work demonstrated that
$E[\tau(1, 1)]$ is infinite but when
$0 \lt c \lt 1$,
$E[\tau(N, c)]$ is finite for all N.
Building on these concepts, Breiman [2] investigated the asymptotic distribution of first exit times for random walks with a square root boundary, particularly examining both discrete sums of i.i.d. random variables and continuous processes like Brownian motion. Breiman’s work established an approximation for the probability
$P(N \gt n)$ as
$n \to \infty$, and highlighted that while invariance principles apply for certain distributions, they may not extend to more general cases. This extension provides a framework for understanding the impact of varying boundary functions on exit time distributions.
In more recent work, Hansen [4] examined random walks reflected at general boundaries, focusing on conditions under which the global maximum remains finite almost surely. Specifically, Hansen considered random walks with light-tailed, negatively biased increments, showing that the tail of the distribution for the maximum decays exponentially.
To the best of our knowledge, this is the first paper that examines the bounds of
$P(N \gt n)$ and particularly
$E[N]$ for log-concave random walks.
2. Bounds on $P(N \gt n)$

Proposition 2.1. For all $n \geq 1,$
$P(N \gt n) \;\geq\; P(S_1 \lt s_1)\prod_{k=2}^n P(S_k \lt s_k \mid S_{k-1} \lt s_{k-1}).$
To establish Proposition 2.1, we present some lemmas. The first one is Efron’s theorem [3].
Lemma 2.2. Efron’s Theorem
If
$X_1, \ldots, X_r$ are independent log-concave random variables, then
$(X_1, \ldots, X_r) \,|\, \sum_{t=1}^r X_t = x$ is stochastically increasing in
$x.$ That is, for any component-wise increasing function
$g(x_1, \ldots, x_r),$
$E\left[g(X_1, \ldots, X_r) \,\middle|\, \sum_{t=1}^r X_t = x\right]$ is increasing in $x.$
Our next lemma states that $S_k$ conditional on
$S_1 \lt s_1, \ldots, S_{k} \lt s_k $ is likelihood ratio smaller than $S_k$ conditional on
$S_k \lt s_k.$

Lemma 2.3. $S_k \mid (S_1 \lt s_1, \ldots, S_{k} \lt s_k)$ is likelihood ratio smaller than $S_k \mid (S_k \lt s_k).$
Proof. We need to show that the ratio of the conditional density of $S_k$ given
$S_1 \lt s_1, \ldots, S_{k} \lt s_k$ to the conditional density of $S_k$ given
$ S_{k} \lt s_k$ is decreasing. Now, for
$t \leq s_k,$
$\frac{f_{S_k \mid S_1 \lt s_1, \ldots, S_k \lt s_k}(t)}{f_{S_k \mid S_k \lt s_k}(t)} \;=\; \frac{P(S_1 \lt s_1, \ldots, S_k \lt s_k \mid S_k = t)\, f_{S_k}(t)\,/\,P(S_1 \lt s_1, \ldots, S_k \lt s_k)}{f_{S_k}(t)\,/\,P(S_k \lt s_k)} \;=\; \frac{P(S_k \lt s_k)}{P(S_1 \lt s_1, \ldots, S_k \lt s_k)}\; P(S_1 \lt s_1, \ldots, S_k \lt s_k \mid S_k = t).$
Hence we need to show that
$ P(S_1 \lt s_1, \ldots, S_{k} \lt s_k|S_k = t) $ is a decreasing function of t. However, this follows from Efron’s theorem because
$ g(x_1, \ldots, x_k) = 1- I\{x_1 \lt s_1, x_1 + x_2 \lt s_2, \ldots, x_1+ \ldots + x_k \lt s_k\} $ is an increasing function of
$(x_1, \ldots, x_k)$.

Lemma 2.4. $S_k \mid (S_1 \lt s_1, \ldots, S_{k-1} \lt s_{k-1})$ is stochastically smaller than $S_k \mid (S_{k-1} \lt s_{k-1}).$
Proof. Because being likelihood ratio smaller implies being stochastically smaller, it follows from Lemma 2.3 that
$S_{k-1} | (S_1 \lt s_1, \ldots, S_{k-1} \lt s_{k-1} ) $ is stochastically smaller than
$S_{k-1} | S_{k-1} \lt s_{k-1}$. Now, if
$X \leq_{st} Y$ and Z is independent of both X and Y, then
$X +Z \leq_{st} Y + Z.$ The result thus follows because
$S_k = S_{k-1} + X_k.$
Proof of Proposition 2.1.
Proposition 2.1 follows from Lemma 2.4 upon using that
$P(N \gt n) \;=\; P(S_1 \lt s_1, \ldots, S_n \lt s_n) \;=\; P(S_1 \lt s_1) \prod_{k=2}^n P(S_k \lt s_k \mid S_1 \lt s_1, \ldots, S_{k-1} \lt s_{k-1}) \;\geq\; P(S_1 \lt s_1) \prod_{k=2}^n P(S_k \lt s_k \mid S_{k-1} \lt s_{k-1}).$
Proposition 2.1 yields the following lower bound on
$E[N]$.

Corollary 2.5. $E[N] \;\geq\; 1 + \sum_{n=1}^{\infty} P(S_1 \lt s_1)\prod_{k=2}^n P(S_k \lt s_k \mid S_{k-1} \lt s_{k-1}).$
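The bound in Corollary 2.5 is straightforward to evaluate numerically once the one-step conditional probabilities are available. The following is a minimal Python sketch (illustrative only; the function name and the truncation of the infinite sum are ours) that turns supplied values of $P(S_1 \lt s_1)$ and $P(S_k \lt s_k \mid S_{k-1} \lt s_{k-1})$ into the Proposition 2.1 lower bounds on $P(N \gt n)$ and a truncated lower bound on $E[N]$; ways to compute the one-step probabilities themselves are given in Examples 2.8 and 2.9.

```python
import numpy as np

def prop21_lower_bounds(q):
    """q[0] = P(S_1 < s_1) and q[k-1] = P(S_k < s_k | S_{k-1} < s_{k-1}), k >= 2.

    Returns the Proposition 2.1 lower bounds on P(N > n), n = 1, ..., len(q),
    and the resulting truncated Corollary 2.5 lower bound on E[N]."""
    q = np.asarray(q, dtype=float)
    p_lower = np.cumprod(q)             # lower bound on P(N > n), n = 1, 2, ...
    en_lower = 1.0 + p_lower.sum()      # E[N] = sum_{n >= 0} P(N > n)
    return p_lower, en_lower

# illustrative call with made-up one-step conditional probabilities
print(prop21_lower_bounds([0.90, 0.75, 0.60, 0.45]))
```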
Remark. The log concave condition is essential for establishing Proposition 2.1. For a counterexample, suppose that
$p_0 = \epsilon,\; p_2 = .2, \; p_5 = .8 - \epsilon,$ where
$p_j = P(X_i = j)$ and ϵ is a small positive number. Then

whereas

The conditional expectation inequality (see [6]) can be used to obtain an upper bound on
$P(N \gt n).$
Lemma 2.7. The Conditional Expectation Inequality
For events
$B_1, \ldots, B_n,$
$P\left(\bigcup_{i=1}^n B_i\right) \;\geq\; \sum_{i=1}^n \frac{P(B_i)^2}{\sum_{j=1}^n P(B_i B_j)}.$
With
$B_i = A_i^c = \{S_i \geq s_i\}, $ the inequality yields that
$P(N \gt n) \;=\; 1 - P\left(\bigcup_{i=1}^n \{S_i \geq s_i\}\right) \;\leq\; 1 - \sum_{i=1}^n \frac{P(S_i \geq s_i)^2}{\sum_{j=1}^n P(S_i \geq s_i,\, S_j \geq s_j)}.$
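The following illustrative Python sketch (the function name and array layout are ours) evaluates this upper bound from the marginal probabilities $P(S_i \geq s_i)$ and the pairwise probabilities $P(S_i \geq s_i, S_j \geq s_j)$; these are exactly the quantities computed in Examples 2.8 and 2.9.

```python
import numpy as np

def cei_upper_bound(marginals, pairwise):
    """Conditional expectation inequality upper bound on P(N > n).

    marginals[i-1]     = P(S_i >= s_i),               i = 1, ..., n
    pairwise[i-1, j-1] = P(S_i >= s_i, S_j >= s_j),   with the diagonal
                         equal to the marginals."""
    m = np.asarray(marginals, dtype=float)
    pw = np.asarray(pairwise, dtype=float)
    union_lower = np.sum(m ** 2 / pw.sum(axis=1))   # lower bound on P(some S_i >= s_i)
    return 1.0 - union_lower                        # P(N > n) = 1 - P(union)
```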
Whereas for many log-concave distributions it is difficult to compute
$P(S_k \lt s_k | S_{k-1} \lt s_{k-1})$, this is easily accomplished in important special cases such as the normal, exponential, binomial, and Poisson. Example 2.8 considers the normal case and Example 2.9 the exponential case.
Example 2.8. Suppose the $X_i$ are normal random variables with mean µ and variance 1. Let Z be a standard normal whose distribution function is Φ; let U be uniform on
$(0, 1);$ and let
$c_{n-1} = \frac{s_{n-1} - (n-1)\mu}{\sqrt{n-1}}.$ Because
$S_{n-1}$ is normal with mean
$(n-1) \mu$ and variance
$n-1,$ it follows that $S_{n-1}$, conditional on the event $S_{n-1} \lt s_{n-1},$ is distributed as
$(n-1)\mu + \sqrt{n-1}\; \Phi^{-1}\left(U \Phi(c_{n-1})\right).$
Hence
$P(S_n \lt s_n \mid S_{n-1} \lt s_{n-1}) \;=\; E\left[\Phi\left( s_n - n \mu - \sqrt{n-1}\, \Phi^{-1}\left(U \Phi(c_{n-1})\right) \right)\right] \;=\; \int_0^1 g(x)\, dx,$
where
$\;g(x) = \Phi \left( s_n - n \mu - \sqrt{n-1} \, \Phi^{-1}\left(x \Phi(c_{n-1})\right) \right).$ Because Φ and $\Phi^{-1}$ are both increasing functions, it follows that g(x) is a decreasing function of x. Since
$\;\int_0^1 g(x) dx = \sum_{i=1}^r \int_{(i-1)/r}^{i/r} g(x) dx ,$ this shows that, for any
$r,$
$\frac{1}{r} \sum_{i=1}^r g\!\left(\frac{i}{r}\right) \;\leq\; P(S_n \lt s_n \mid S_{n-1} \lt s_{n-1}) \;\leq\; \frac{1}{r} \sum_{i=1}^r g\!\left(\frac{i-1}{r}\right).$
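A minimal numerical sketch of these Riemann-sum bounds, assuming $X_i \sim N(\mu, 1)$ and using the standard normal distribution from scipy, is given below; the function name and the choice of r are illustrative.

```python
import numpy as np
from scipy.stats import norm

def normal_cond_prob_bounds(n, s, mu, r=1000):
    """Riemann-sum bounds on P(S_n < s_n | S_{n-1} < s_{n-1}) for N(mu, 1) steps.

    s[k-1] holds s_k.  Since g is decreasing, the right endpoints i/r give a
    lower bound and the left endpoints (i-1)/r an upper bound."""
    c = (s[n - 2] - (n - 1) * mu) / np.sqrt(n - 1)          # c_{n-1}
    def g(x):
        return norm.cdf(s[n - 1] - n * mu
                        - np.sqrt(n - 1) * norm.ppf(x * norm.cdf(c)))
    grid = np.arange(1, r + 1) / r
    return g(grid).mean(), g(grid - 1.0 / r).mean()         # (lower, upper)

# illustrative call with s_k = 2 + 2*sqrt(k) and mu = 1
s = [2 + 2 * np.sqrt(k) for k in range(1, 21)]
print(normal_cond_prob_bounds(5, s, mu=1.0))
```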
To utilize the conditional expectation inequality we need to compute
$P(S_i \gt s_i, S_j \gt s_j), i \neq j.$ To do so, suppose that i < j, and let
$c_i = \frac{s_{i} - i\mu}{\sqrt{i}}$. Then, with
$\bar{\Phi} = 1 - \Phi,$ arguing as before yields that $S_i$, conditional on the event $S_i \gt s_i,$ is distributed as
$i \mu + \sqrt{i}\; \Phi^{-1}\left( \Phi(c_i) + \bar{\Phi}(c_i)\, U \right).$
Using that
$S_j |S_i$ is normal with mean
$S_i + (j-i) \mu$ and variance
$j-i,$ yields that
$P(S_i \gt s_i,\, S_j \gt s_j) \;=\; \bar{\Phi}(c_i) \int_0^1 h_{i,j}(x)\, dx,$
where
$h_{i,j}(x) = \bar{\Phi} \left( \frac{s_j - j \mu - \sqrt{i}\; \Phi^{-1} \left(\Phi(c_i ) + \bar{\Phi}(c_i) x \right)
}{\sqrt{j-i}} \right) .$ Because
$h_{i,j}(x)$ is an increasing function of x, this gives
$\frac{\bar{\Phi}(c_i)}{r} \sum_{m=1}^r h_{i,j}\!\left(\frac{m-1}{r}\right) \;\leq\; P(S_i \gt s_i,\, S_j \gt s_j) \;\leq\; \frac{\bar{\Phi}(c_i)}{r} \sum_{m=1}^r h_{i,j}\!\left(\frac{m}{r}\right).$
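The joint tail probabilities can be bounded in the same way; the sketch below continues the one above (same assumptions, again illustrative), with the left endpoints now giving the lower bound because $h_{i,j}$ is increasing.

```python
import numpy as np
from scipy.stats import norm

def normal_joint_tail_bounds(i, j, s, mu, r=1000):
    """Riemann-sum bounds on P(S_i > s_i, S_j > s_j), i < j, for N(mu, 1) steps."""
    ci = (s[i - 1] - i * mu) / np.sqrt(i)
    def h(x):
        u = np.clip(norm.cdf(ci) + norm.sf(ci) * x, 0.0, 1.0)   # guard against rounding
        z = (s[j - 1] - j * mu - np.sqrt(i) * norm.ppf(u)) / np.sqrt(j - i)
        return norm.sf(z)                                        # \bar{\Phi}
    grid = np.arange(1, r + 1) / r
    return norm.sf(ci) * h(grid - 1.0 / r).mean(), norm.sf(ci) * h(grid).mean()
```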
Example 2.9. Suppose the $X_i$ are exponential random variables with rate
$\lambda.$ Let N(t) be the number of events by time t of the Poisson process that has $X_i$ as its $i$th interarrival time,
$i \geq 1.$ Now, if
$s_{n-1} \leq s_n$ then
$P(S_{n-1} \lt s_{n-1},\, S_n \lt s_n) \;=\; P(N(s_{n-1}) \geq n) + P(N(s_{n-1}) = n-1)\, P\big(N(s_n) - N(s_{n-1}) \geq 1\big),$
giving that
$P(S_n \lt s_n \mid S_{n-1} \lt s_{n-1}) \;=\; \frac{P(N(s_{n-1}) \geq n) + P(N(s_{n-1}) = n-1)\left(1 - e^{-\lambda(s_n - s_{n-1})}\right)}{P(N(s_{n-1}) \geq n-1)}.$
If
$s_{n-1} \geq s_n,$ then
$\; P(S_n \lt s_n, S_{n-1} \lt s_{n-1}) = P(N(s_n) \geq n ) , $ giving that
$P(S_n \lt s_n \mid S_{n-1} \lt s_{n-1}) \;=\; \frac{P(N(s_n) \geq n)}{P(N(s_{n-1}) \geq n-1)}.$
To compute
$P(S_i \gt s_i, S_j \gt s_j) = P(N(s_i) \lt i, N(s_j ) \lt j)$ suppose that
$i \lt j.$ If
$s_j \gt s_i,$ conditioning on
$N(s_i)$ yields
$P(S_i \gt s_i,\, S_j \gt s_j) \;=\; \sum_{k=0}^{i-1} P(N(s_i) = k)\, P\big(N(s_j) - N(s_i) \leq j - k - 1\big) \;=\; \sum_{k=0}^{i-1} e^{-\lambda s_i} \frac{(\lambda s_i)^k}{k!}\, P\big(N(s_j - s_i) \leq j - k - 1\big).$
If
$s_i \gt s_j,$ then $S_i \gt s_i$ implies $S_j \gt s_j$, so
$P(S_i \gt s_i,\, S_j \gt s_j) \;=\; P(S_i \gt s_i) \;=\; P(N(s_i) \lt i).$
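For the exponential case, the preceding expressions reduce to Poisson probabilities that can be evaluated directly. The following is an illustrative sketch (our function names, using scipy.stats.poisson, with lam standing for $\lambda$) of the conditional and joint probabilities just derived.

```python
import numpy as np
from scipy.stats import poisson

def exp_cond_prob(n, s, lam):
    """P(S_n < s_n | S_{n-1} < s_{n-1}) for Exp(lam) steps, via N(t) ~ Poisson(lam*t)."""
    a, b = s[n - 2], s[n - 1]                    # s_{n-1}, s_n
    denom = poisson.sf(n - 2, lam * a)           # P(N(s_{n-1}) >= n - 1)
    if a <= b:
        num = (poisson.sf(n - 1, lam * a)        # P(N(s_{n-1}) >= n)
               + poisson.pmf(n - 1, lam * a) * (1 - np.exp(-lam * (b - a))))
    else:
        num = poisson.sf(n - 1, lam * b)         # P(N(s_n) >= n)
    return num / denom

def exp_joint_tail(i, j, s, lam):
    """P(S_i > s_i, S_j > s_j) = P(N(s_i) < i, N(s_j) < j) for i < j."""
    si, sj = s[i - 1], s[j - 1]
    if sj > si:
        k = np.arange(i)                         # possible values of N(s_i)
        return float(np.sum(poisson.pmf(k, lam * si)
                            * poisson.cdf(j - k - 1, lam * (sj - si))))
    return poisson.cdf(i - 1, lam * si)          # s_i >= s_j: reduces to N(s_i) < i
```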
In Tables 1–2 and Figure 1, we present two sets of numerical results for the bounds on
$P(N \gt n)$. One assumes that $X_i$ follows a normal distribution with mean 1 and variance 1, denoted
$X_i \sim N(1,1)$. The other assumes that $X_i$ follows an exponential distribution with rate parameter 1, denoted
$X_i \sim \mathrm{Exp}(1)$. The boundary is
$s_k = 2 + 2 \sqrt{k},\, k \geq 1$. For values of n ranging from 2 to 20, we calculate the lower and upper bounds for
$P(N \gt n)$ using the analytical methods described above, alongside Monte Carlo simulation estimates.
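The Monte Carlo estimates can be produced with a short simulation; the sketch below (illustrative only, not the code used for the tables) estimates $P(N \gt n)$ for the normal case with $s_k = 2 + 2\sqrt{k}$.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_prob_N_greater(n, mu=1.0, a=2.0, b=2.0, reps=100_000):
    """Monte Carlo estimate of P(N > n) for X_i ~ N(mu, 1) and s_k = a + b*sqrt(k)."""
    s = a + b * np.sqrt(np.arange(1, n + 1))
    walks = np.cumsum(rng.normal(mu, 1.0, size=(reps, n)), axis=1)
    return np.mean(np.all(walks < s, axis=1))    # N > n iff S_k < s_k for all k <= n

print(mc_prob_N_greater(10))
```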

Figure 1. Probability bounds and Monte Carlo estimates for
$P(N \gt n)$ with $s_k = 2 + 2\sqrt{k}$. Left:
$X_i \sim N(1,1)$. Right:
$X_i \sim \mathrm{Exp}(1)$.
Table 1. Probability bounds and Monte Carlo estimates for
$P(N \gt n)$ with
$X_i \sim N(1,1)$ and
$s_k = 2 + 2 \sqrt{k}$ for n = 2 to 20.

Table 2. Probability bounds and Monte Carlo estimates for
$P(N \gt n)$ with
$X_i \sim \mathrm{Exp}(1)$ and
$s_k = 2 + 2 \sqrt{k}$ for n = 2 to 20.

Whereas the bounds on
$P(N \gt n)$ also yield bounds on
$E[N]$, additional bounds for a square root barrier are given in the next section.
3. Additional bounds on
$E[N] $ for a square root barrier
Suppose that
$s_k = a + b \sqrt{k}, k \geq 1,$ where
$a \geq 0, b \gt 0.$ Also, suppose the log-concave random variables $X_i$ have a positive mean $\mu$. Now, conditional on N and
$S_{N-1},$ the random variable $S_N$ is distributed as
$a+b \sqrt{N} $ plus the amount by which
$X,$ a random variable having density f, exceeds the positive value
$a+b \sqrt{N} - S_{N-1},$ given that it does exceed that value. But a log-concave random variable X conditioned to be positive has an increasing failure rate (see Shaked and Shanthikumar [5]), implying that
$S_N - (a + b \sqrt{N})$ is stochastically smaller than
$X|X \gt 0.$ As this is true no matter what the values of N and
$S_{N-1}$, it follows that
$E[S_N] \;\leq\; a + b\, E[\sqrt{N}] + E[X \mid X \gt 0].$
Using Wald’s equation and Jensen’s inequality, the preceding implies that
$\mu E[N] \;\leq\; a + b \sqrt{E[N]} + E[X \mid X \gt 0].$
With
$d = a + E[X|X \gt 0],$ the preceding can be written as
$\mu E[N] - d \;\leq\; b \sqrt{E[N]}.$
If
$ d \leq \mu E[N] ,$ which can be checked using Corollary 2.5, the preceding yields that
$\left(\mu E[N] - d\right)^2 \;\leq\; b^2 E[N], \qquad \text{that is,} \qquad \mu^2 (E[N])^2 - (2 d \mu + b^2) E[N] + d^2 \;\leq\; 0.$
Because the function
$g(x) = \mu^2 x^2 - (2d \mu+ b^2) x + d^2$ is convex with
$g(0) \gt 0, \lim_{x \rightarrow \infty} g(x) = \infty$, it follows that
$g(x) \lt 0$ in the region between the two roots of
$g(x) = 0.$ Thus,
$E[N]$ lies between these two roots.
Remarks. 1. If f is the normal density with mean µ > 0 and variance
$1,$ then
$ E[X|X \gt 0] = \mu + \frac{e^{- \mu^2/2} } {\sqrt{2 \pi} \Phi(\mu)},$ where Φ is the standard normal distribution function.
2. Whereas the condition
$d \leq \mu E[N]$ involves the unknown
$E[N],$ it can often be verified by showing that
$d \leq \mu \, \text{LB}$, where LB is the lower bound for
$E[N]$ given by Corollary 2.5. (Of course, it is possible that
$d \leq \mu E[N]$ but
$d \gt \mu \, \text{LB}$).
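For the normal case, the interval for $E[N]$ implied by the roots of $g(x) = 0$ is easy to compute; the sketch below (illustrative only, with d obtained from the formula in Remark 1) returns both roots, the larger of which is the upper bound UB2 used in Table 3.

```python
import numpy as np
from scipy.stats import norm

def en_root_interval(mu, a, b):
    """Roots of g(x) = mu^2 x^2 - (2 d mu + b^2) x + d^2 for X_i ~ N(mu, 1)
    and s_k = a + b*sqrt(k).  E[N] lies between the roots when d <= mu*E[N]."""
    d = a + mu + np.exp(-mu ** 2 / 2) / (np.sqrt(2 * np.pi) * norm.cdf(mu))  # Remark 1
    mid = 2 * d * mu + b ** 2
    disc = np.sqrt(mid ** 2 - 4 * mu ** 2 * d ** 2)
    return (mid - disc) / (2 * mu ** 2), (mid + disc) / (2 * mu ** 2)

print(en_root_interval(mu=1.0, a=2.0, b=2.0))
```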
In Table 3, we give numerical results for the lower and upper bounds on
$E[N]$ and compare them with the Monte Carlo estimate of
$E[N]$. In this case,
$X_i \sim N(1,1)$ and
$s_k = a + b \sqrt{k}, k \geq 1$.
Table 3. Comparison of Simulated
$E[N]$ with Lower and Upper Bounds for Different Values of a and b.

Let
$\widehat{E[N]}$ denote the Monte Carlo estimate of
$E[N]$, and let LB denote the lower bound in Corollary 2.5. Since the smaller root of
$ \mu^2 x^2 - (2d \mu+ b^2) x + d^2 = 0$ does not yield a good result, it is excluded. UB1 is the upper bound obtained from the conditional expectation inequality, and UB2 is the larger root. The results are shown in Table 3.
Remark. From the numerical results across all cases shown in Table 3, UB2 is consistently smaller than UB1.
Funding statement
This work was supported, in whole or in part, by the National Science Foundation under contract/grant CMMI2132759.
Conflict of interest statement
The authors declare they have no conflict of interest.