1 Introduction
Let
$\boldsymbol {W}=\{\boldsymbol {W}_t:t\geq 0 \}$
denote Brownian motion in
$\mathbb {R}^n$
starting at the origin
$\boldsymbol {0}$
. That is,
$$\boldsymbol{W}_t=\left(W_t^{(1)},W_t^{(2)},\dots,W_t^{(n)}\right),\qquad t\geq 0,$$
where each coordinate
$\{W_t^{(j)}:t\geq 0\}$
,
$j=1,\dots , n$
, is standard Brownian motion in
$\mathbb {R}$
, independent of the other coordinates. For a set
$\mathcal {A}\subset \mathbb {R}^n$
, let
$\operatorname {\mathrm {conv}} \mathcal {A}$
denote the convex hull of
$\mathcal {A}$
. In other words,
$\operatorname {\mathrm {conv}} \mathcal {A}$
is the smallest convex subset of
$\mathbb {R}^n$
that contains
$\mathcal {A}$
. Define
$\mathcal {H}_t$
to be the convex hull of the path of
$\boldsymbol {W}$
run up to time t, namely,
$$\mathcal{H}_t:=\operatorname{conv}\{\boldsymbol{W}_s:0\leq s\leq t\}.$$
This article is concerned with estimating the expected values of several geometric functionals of
$\mathcal {H}_t$
and their inverse processes. The particular functionals of
$\mathcal {H}_t$
that we study are volume, surface area, diameter, and circumradius. For a convex body
$\mathcal {K}\subset \mathbb {R}^n$
(that is, a nonempty, convex, and compact subset), let
$V(\mathcal {K})$
denote its n-dimensional Lebesgue measure,
$S(\mathcal {K})$
the (
$n-1$
)-dimensional Hausdorff measure of its topological boundary,
$D(\mathcal {K})$
its diameter, and
$R(\mathcal {K})$
its circumradius. Since the path
${\{\boldsymbol {W}_s:0\leq s\leq t \}\subset \mathbb {R}^n}$
is almost surely compact, it follows that
$\mathcal {H}_t$
is almost surely a convex body. Hence, we can define the processes
$V_t$
,
$S_t$
,
$D_t$
, and
$R_t$
$$X_t:=X(\mathcal{H}_t),\qquad t\geq 0,\tag{1.1}$$

where
$X\in \{V,S,D,R\}$
.
As these four processes are almost surely nondecreasing functions of time, we can also study their right-continuous inverse processes. The inverse process tells us how long we must wait for the functional to exceed a given value. More precisely, we have the definition
$$\Theta_y^X:=\inf\{t\geq 0:X_t>y\},\qquad y\geq 0,\tag{1.2}$$
where
$X\in \{V,S,D,R\}$
. As remarked upon in [Reference Cygan, Panzo and Šebek1], the functionals that we consider can be used to quantify the size of
$\mathcal {H}_t$
, while their inverse processes provide some information on its speed of growth. The scaling properties of Brownian motion, Lebesgue measure, and Euclidean distance imply that we can limit our study to the expected values of these functionals and their inverse processes at a fixed positive time without any loss of generality (see Proposition 3.1).
The study of the convex hull of Brownian motion has a long history going back to Lévy in the 1940s (see [Reference Lévy14]). Let us summarize some more recent results that involve the functionals we are interested in. Most impressive are the explicit formulas for the expected values of
$V_1$
and
$S_1$
that hold in all dimensions. These expressions were derived by Eldan in [Reference Eldan3] and are given by
$$\mathbb{E}[V_1]=\left(\frac{\pi}{2}\right)^{n/2}\frac{1}{\Gamma\left(\frac{n}{2}+1\right)^{2}}\qquad\text{and}\qquad\mathbb{E}[S_1]=\frac{2(2\pi)^{(n-1)/2}}{(n-1)!}.\tag{1.3}$$
The formula for
$\mathbb {E}[V_1]$
with
$n=2$
had appeared previously in [Reference El Bachir2], while for
$\mathbb {E}[S_1]$
, the formula had been derived for
$n=2$
in [Reference Letac and Takács13] and for
$n=3$
in [Reference Kampf, Last and Molchanov11]. The formulas (1.3) for all dimensions were subsequently recovered in [Reference Kabluchko and Zaporozhets10] by another method which realizes the n-dimensional Brownian convex hull as a random projection of an infinite-dimensional limiting object (see also [Reference Kabluchko and Marynych8, Reference Kabluchko, Marynych and Raschel9]). Analogous exact formulas, albeit less explicit than (1.3), have been derived for the convex hulls of multidimensional random walks [Reference Vysotsky and Zaporozhets20] and Lévy processes [Reference Molchanov and Wespi16].
In contrast to the work of Eldan, the articles [Reference McRedmond and Xu15], [Reference Jovalekić7], and [Reference Cygan, Panzo and Šebek1] deal exclusively with the planar case
$n=2$
and are limited to estimates instead of exact formulas like (1.3). However, these articles treat different geometric functionals of
$\mathcal {H}_1$
which are seemingly not amenable to the methods of [Reference Eldan3] and [Reference Kabluchko and Zaporozhets10]. More specifically, [Reference McRedmond and Xu15] and [Reference Jovalekić7] derive bounds for the expected diameter of
$\mathcal {H}_1$
when
$n=2$
, while [Reference Cygan, Panzo and Šebek1] does the same for the expected circumradius and inradius. Moreover, [Reference Cygan, Panzo and Šebek1] initiates the study of the inverse processes (1.2) of all five geometric functionals (volume, surface area, diameter, circumradius, and inradius) by computing two-sided bounds on their expected values when
$n=2$
. The article [Reference Cygan, Panzo and Šebek1] also complements its bounds with estimates from extensive Monte Carlo simulations.
The main contributions of the present article are to extend most of the bounds derived in [Reference McRedmond and Xu15], [Reference Jovalekić7], and [Reference Cygan, Panzo and Šebek1] from the plane to higher dimensions. All of our bounds capture the correct order of asymptotic growth or decay in the dimension n in the sense that the upper and lower bounds are asymptotically equivalent up to a constant factor as
$n\to \infty $
We were unable to obtain bounds with matching orders of asymptotic growth or decay for the expected values of the inradius and its inverse process, so we leave those cases for future investigation.
2 Main results
Our first two theorems concern the expected values of the inverse processes of the volume and surface area functionals that were defined in (1.1) and (1.2). These extend the corresponding bounds of [Reference Cygan, Panzo and Šebek1] from the plane to higher dimensions and also complement Eldan’s exact formulas (1.3) for the expected values of the functionals themselves. These theorems are proved in Section 4.1. We remark that the proof of the upper bound in Theorem 2.1 required significantly more work than any of the other theorems in this article.
Theorem 2.1 For any dimension
$n\geq 1$
, the inverse volume process satisfies
$$\frac{2}{\pi}\Gamma\left(\frac{n}{2}+1\right)^{4/n}\leq\mathbb{E}\left[\Theta_1^V\right]\leq n\sqrt[n]{n!}.$$
Remark 2.2 When
$n=2$
, the upper bound from [Reference Cygan, Panzo and Šebek1] is better than that of Theorem 2.1, since the former uses the minimum of two independent inverse range processes in the first stage instead of the exit time of a two-dimensional ball. The latter method is more favorable in higher dimensions, however, not only for its computational simplicity, but because for fixed range and radius, the expected value of the minimum of n independent inverse range processes decays like
$1/\log n$
, while that of the exit time of an n-dimensional ball decays like
$1/n$
(see also Remarks 2.7 and 2.10).
Theorem 2.3 For any dimension
$n\geq 2$
, the inverse surface area process satisfies
$$\frac{1}{2\pi}\left(\frac{(n-1)!}{2}\right)^{2/(n-1)}\leq\mathbb{E}\left[\Theta_1^S\right]\leq\frac{n\sqrt[n]{n!}}{\left(n\kappa_n^{1/n}\right)^{2/(n-1)}}.$$
The asymptotic behavior of these bounds is straightforward to deduce from Stirling’s approximation and can be summarized in the following corollary. This result verifies our claim that the upper and lower bounds for the expected values of both inverse processes have the same asymptotic order as
$n\to \infty $
, namely,
$n^2$
.
Corollary 2.4 For either
$X=V$
or
$X=S$
, we have
$$\frac{n^2}{2\pi e^2}\left(1+o(1)\right)\leq\mathbb{E}\left[\Theta_1^X\right]\leq\frac{n^2}{e}\left(1+o(1)\right)\qquad\text{as }n\to\infty.$$
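The rate at which these asymptotic constants emerge can be inspected numerically. The sketch below is an illustrative addition: it evaluates the bounds displayed in Theorem 2.1 scaled by $n^2$, where the limiting constants $1/(2\pi e^2)$ and $1/e$ come from Stirling's approximation, and the sampled dimensions are arbitrary.

```python
# Numerical look at the bounds of Theorem 2.1 scaled by n^2;
# lgamma avoids overflow for large n.
import math

for n in (10, 100, 1_000, 10_000):
    lower = (2 / math.pi) * math.exp((4 / n) * math.lgamma(n / 2 + 1))
    upper = n * math.exp(math.lgamma(n + 1) / n)  # n * (n!)^(1/n)
    print(n, lower / n**2, upper / n**2)

print(1 / (2 * math.pi * math.e**2), 1 / math.e)  # ~0.02154 and ~0.36788
```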
Our next two theorems involve the diameter functional and its inverse process. For these results, it is plain to see that the upper and lower bounds have matching asymptotic orders as
$n\to \infty $
. The proofs are given in Section 4.2.
Theorem 2.5 For any dimension
$n\geq 1$
, the diameter process satisfies
$$\sqrt{n}\leq\mathbb{E}\left[D_1\right]\leq 2\sqrt{n\log 2}.$$
Theorem 2.6 For any dimension
$n\geq 1$
, the inverse diameter process satisfies
$$\frac{1}{4n\log 2}\leq\mathbb{E}\left[\Theta_1^D\right]\leq\frac{1}{n}.$$
Remark 2.7 When
$n=2$
, the lower bounds from [Reference Jovalekić7, Reference McRedmond and Xu15] and the upper bound from [Reference Cygan, Panzo and Šebek1] are better than those of Theorems 2.5 and 2.6, respectively, for the same reason described in Remark 2.2. However, just like in that case, the methods of the present article become more effective in high dimensions.
Our last two theorems deal with the circumradius functional and its inverse process. For these results, it is also clear that the upper and lower bounds have the same asymptotic order as
$n\to \infty $
. The proofs are given in Section 4.3.
Theorem 2.8 For any dimension
$n\geq 1$
, the circumradius process satisfies
$$\frac{\sqrt{n}}{2}\leq\mathbb{E}\left[R_1\right]\leq\sqrt{n\log 2}.$$
Theorem 2.9 For any dimension
$n\geq 1$
, the inverse circumradius process satisfies
$$\frac{1}{n\log 2}\leq\mathbb{E}\left[\Theta_1^R\right]\leq\frac{4}{n}.$$
Remark 2.10 When
$n=2$
, the lower bound of Theorem 2.8 and the upper bound of Theorem 2.9 are worse than those of [Reference Cygan, Panzo and Šebek1]. This is also due to the reason described in Remark 2.2. Similarly to that case, the methods of the present article become more effective in high dimensions.
Remark 2.11 When
$n=1$
,
$V_t$
,
$D_t$
, and
$2R_t$
are nothing but the range of
$\boldsymbol {W}$
. Hence, their inverse processes can be expressed in terms of the inverse range. The distributions of the range and its inverse are known explicitly (see [Reference Feller4, Reference Imhof6]). In particular, the mean of the range and inverse range at time
$1$
is
$\sqrt {8/\pi }\approx 1.5958$
and
$1/2$
, respectively. For comparison, setting
$n=1$
in Theorems 2.1, 2.5, and 2.6 produces the bounds
$1\leq \mathbb {E}\left [D_1\right ]\leq 1.6652$
for the mean diameter and
$0.3926\leq \mathbb {E}\left [\Theta _1^V\right ]\leq 1$
and
$0.3606\leq \mathbb {E}\left [\Theta _1^D\right ]\leq 1$
for the mean of the inverse volume and inverse diameter, respectively.
Remark 2.12 Unlike [Reference Cygan, Panzo and Šebek1], we do not have Monte Carlo estimates for the expected values that might indicate which, if any, of the above bounds are asymptotically sharp. Simulation becomes impractical in high dimensions because the runtimes of the standard convex hull algorithms increase exponentially in the dimension.
3 Preliminaries
Following [Reference Schneider19], we denote the volume and surface area of the unit ball in
$\mathbb {R}^n$
by
$\kappa _n$
and
$\omega _n$
, respectively. These quantities are given by the well-known formulas
$$\kappa_n=\frac{\pi^{n/2}}{\Gamma\left(\frac{n}{2}+1\right)}\qquad\text{and}\qquad\omega_n=n\kappa_n=\frac{2\pi^{n/2}}{\Gamma\left(\frac{n}{2}\right)}.\tag{3.1}$$
Another well-known formula that is essential to our results is that of the mean exit time of n-dimensional Brownian motion from a ball of radius
$r>0$
in
$\mathbb {R}^n$
when it starts from the center of the ball. Let
$\tau _r$
denote this exit time. Then we have
$$\mathbb{E}[\tau_r]=\frac{r^2}{n}.\tag{3.2}$$
This formula can be deduced from a routine martingale or PDE argument (see Problem 4.2.25 of [Reference Karatzas and Shreve12]).
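As a purely illustrative aside, (3.2) is also easy to confirm by simulation. The following sketch uses a crude Euler scheme with arbitrarily chosen parameters; the discretization slightly overshoots the true exit time.

```python
# Monte Carlo check of E[tau_r] = r^2/n for Brownian motion started
# at the center of a ball of radius r in R^n.
import numpy as np

rng = np.random.default_rng(1)
n, r, dt, samples = 3, 2.0, 1e-3, 1_000

exit_times = []
for _ in range(samples):
    position, t = np.zeros(n), 0.0
    while np.linalg.norm(position) < r:
        position += rng.normal(0.0, np.sqrt(dt), n)
        t += dt
    exit_times.append(t)

print(np.mean(exit_times), r**2 / n)  # both ~ 1.3333
```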
As mentioned in the introduction, the familiar scaling properties of Brownian motion, Lebesgue measure, and Euclidean distance lead to convenient distributional identities that allow us to focus our study on the expected values of the geometric functionals of
$\mathcal {H}_t$
and their inverse processes at time
$t=1$
without any loss of generality. These distributional identities are the n-dimensional version of [Reference Cygan, Panzo and Šebek1, Proposition 4.1] and are listed in the following proposition. We stress that these distributional identities hold for fixed times and not as processes.
Proposition 3.1 Let
$n\geq 2$
and consider the V, S, D, and R functionals of the convex hull of n-dimensional Brownian motion along with their inverse processes that were defined in (1.2). Then, for all
$t\geq 0$
and
$y\geq 0$
, we have the following equalities in distribution:
-
(i)
$\displaystyle V_t\stackrel {d}{=}t^{n/2}V_1,~~\Theta _y^V\stackrel {d}{=}y^{2/n}\Theta _1^V,~~\Theta _1^V\stackrel {d}{=}V_1^{-2/n}$ ;
-
(ii)
$\displaystyle S_t\stackrel {d}{=}t^{(n-1)/2}S_1,~~\Theta _y^S\stackrel {d}{=}y^{2/(n-1)}\Theta _1^S,~~\Theta _1^S\stackrel {d}{=}S_1^{-2/(n-1)}$ ;
-
(iii)
$\displaystyle D_t\stackrel {d}{=}\sqrt {t} D_1,~~\Theta _y^D\stackrel {d}{=}y^2\Theta _1^D,~~\Theta _1^D\stackrel {d}{=}D_1^{-2}$ ;
-
(iv)
$\displaystyle R_t\stackrel {d}{=}\sqrt {t} R_1,~~\Theta _y^R\stackrel {d}{=}y^2\Theta _1^R,~~\Theta _1^R\stackrel {d}{=}R_1^{-2}$ .
Proof of Proposition 3.1
We follow the proof of Proposition 4.1 in [Reference Cygan, Panzo and Šebek1]. The scaling property of Brownian motion implies that for any
$t\geq 0$
and
$\lambda>0$
, we have
$$\{\boldsymbol{W}_{\lambda t}:t\geq 0\}\stackrel{d}{=}\{\sqrt{\lambda}\,\boldsymbol{W}_t:t\geq 0\}.\tag{3.3}$$
The distributional identities of Proposition 3.1 all follow from (3.3). As the proofs are similar, we give the details only for part (i) and leave the rest to the reader.
The first distributional identity of part (i) is trivial when
$t=0$
, so there is no loss of generality in assuming that
$t>0$
. For
$u\geq 0$
, we can use (3.3) to write
$$V_{tu}=V\left(\mathcal{H}_{tu}\right)\stackrel{d}{=}V\left(\sqrt{t}\,\mathcal{H}_u\right)=t^{n/2}V_u.\tag{3.4}$$
Taking
$u=1$
proves the identity.
Similarly to the first identity, we can also assume that
$y>0$
when proving the second identity of part (i). Now, for any
$t\geq 0$
, we can use (3.4) to write
$$\mathbb{P}\left(\Theta_y^V\geq t\right)=\mathbb{P}\left(V_t\leq y\right)=\mathbb{P}\left(V_{ty^{-2/n}}\leq 1\right)=\mathbb{P}\left(\Theta_1^V\geq ty^{-2/n}\right)=\mathbb{P}\left(y^{2/n}\Theta_1^V\geq t\right).$$
This shows that
$\Theta _y^V$
and
$y^{2/n}\Theta _1^V$
have the same distribution.
The proof of the last identity of part (i) can be deduced similarly via
$$\mathbb{P}\left(\Theta_1^V\geq t\right)=\mathbb{P}\left(V_t\leq 1\right)=\mathbb{P}\left(t^{n/2}V_1\leq 1\right)=\mathbb{P}\left(V_1^{-2/n}\geq t\right).$$
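To make the scaling concrete, here is a small numerical experiment, included as an illustrative addition with arbitrarily chosen parameters: it compares the empirical distributions of $V_t$ and $t^{n/2}V_1$ for planar Brownian motion with a two-sample Kolmogorov–Smirnov test.

```python
# Empirical check of V_t =(d)= t^{n/2} V_1 from part (i) of
# Proposition 3.1, here with n = 2 and t = 0.25.
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
n, steps, samples, t = 2, 4_000, 500, 0.25

def hull_volume(time_horizon):
    dt = time_horizon / steps
    increments = rng.normal(0.0, np.sqrt(dt), (steps, n))
    path = np.vstack([np.zeros((1, n)), np.cumsum(increments, axis=0)])
    return ConvexHull(path).volume

v_t = np.array([hull_volume(t) for _ in range(samples)])
v_1 = np.array([hull_volume(1.0) for _ in range(samples)])
print(ks_2samp(v_t, t ** (n / 2) * v_1))  # a large p-value is consistent
```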
4 Proofs of the main results
4.1 Inverse processes of volume and surface area
The proof of the upper bound in Theorem 2.1 employs an n-stage construction which stops and restarts
$\boldsymbol {W}$
as it exits a sequence of hypercylinders of decreasing spherical dimension, that is, a sequence of Cartesian products where the first factor is an (
$n-j$
)-dimensional Euclidean ball and the second factor is
$\mathbb {R}^j$
, for
$j=0,\dots , n-1$
. This procedure allows us to embed an n-simplex
$\mathcal {S}$
with prescribed volume within the convex hull of
$\boldsymbol {W}$
at a certain stopping time
$T_n$
. This is essentially an n-dimensional version of the idea used to prove Proposition 1.6 of [Reference Cygan, Panzo and Šebek1]. The construction is parameterized by a sequence of n positive numbers
$r_1,r_2,\dots ,r_n$
. These parameters are the radii of the spherical parts of the aforementioned hypercylinders. Upon completion of the procedure at time
$T_n$
, we necessarily have
$V(\mathcal {H}_{T_n})\geq V(\mathcal {S})$
, whence we deduce the upper bound
$$\mathbb{E}\left[\Theta_{V(\mathcal{S})}^V\right]\leq\mathbb{E}[T_n].\tag{4.1}$$
After computing
$V(\mathcal {S})$
and
$\mathbb {E}[T_n]$
explicitly in terms of the parameters, we use the forthcoming Lemma 4.2 to optimize the right-hand side of (4.1) under the constraint
$V(\mathcal {S})\geq 1$
in order to obtain the best possible bound using this method.
As alluded to above, the upper bound of Theorem 2.1 requires solving a convex optimization problem whose solution we split into the following two lemmas. Lemma 4.1 verifies that certain functions are indeed convex and Lemma 4.2 solves the optimization problem. In fact, the proof of Lemma 4.1 can be easily modified to show that the functions are actually strictly convex, although this is not needed to prove Theorem 2.1. We refer to [Reference Rockafellar18] for the requisite convex analysis theory.
Lemma 4.1 Let
$n\in \mathbb {N}$
, and define the functions
$f_n$
and
$g_n$
by
$$f_n(\boldsymbol{x}):=\sum_{j=1}^n\frac{x_j^2}{n-j+1}\qquad\text{and}\qquad g_n(\boldsymbol{x}):=\prod_{j=1}^n\frac{1}{x_j},\qquad\boldsymbol{x}=(x_1,\dots,x_n)\in\mathbb{R}_{>0}^n.$$
Then
$f_n$
and
$g_n$
are both convex functions on the positive orthant
$\mathbb {R}_{>0}^n$
.
Proof of Lemma 4.1
We establish convexity by showing that the Hessian matrices of
$f_n$
and
$g_n$
are both positive semi-definite on the positive orthant
$\mathbb {R}_{>0}^n$
(see [Reference Rockafellar18, Theorem 4.5]). This is a trivial matter for
$f_n$
, since its Hessian matrix is a constant diagonal matrix with positive diagonal entries. The case of
$g_n$
requires a bit more work but is also straightforward. Indeed, routine calculations show that the entries of the Hessian matrix
$\boldsymbol {H}$
of
$g_n$
are given by
$$H_{jk}=\begin{cases}\dfrac{2g_n(\boldsymbol{x})}{x_j^2},& j=k,\\[2mm]\dfrac{g_n(\boldsymbol{x})}{x_jx_k},& j\neq k.\end{cases}$$
Now, if
$\boldsymbol {z}=(z_1,\dots , z_n)$
is any vector in
$\mathbb {R}^n$
and
$(x_1,\dots ,x_n)\in \mathbb {R}_{>0}^n$
, we can write
$$\boldsymbol{z}^{\top}\boldsymbol{H}\boldsymbol{z}=g_n(\boldsymbol{x})\left[\left(\sum_{j=1}^n\frac{z_j}{x_j}\right)^{\!2}+\sum_{j=1}^n\frac{z_j^2}{x_j^2}\right]\geq 0.$$
Lemma 4.2 For any
$n\in \mathbb {N}$
, we have
$$\inf\left\{f_n(\boldsymbol{x}):\boldsymbol{x}\in\mathbb{R}_{>0}^n,\ n!\,g_n(\boldsymbol{x})\leq 1\right\}=n\sqrt[n]{n!}.\tag{4.2}$$
Proof of Lemma 4.2
Following the notation and results of Lemma 4.1, in order to prove Lemma 4.2, we need to minimize the convex objective function
$f_n$
over the convex domain
$\mathbb {R}_{>0}^n$
, subject to the convex constraint
$n!\,g_n-1\leq 0$
. In the nomenclature of [Reference Rockafellar18], this is an ordinary convex program and we appeal to Theorem 28.3 of that reference for a solution. Toward this end, we claim that the infimum on the left-hand side of (4.2) is attained at
$$\overline{\boldsymbol{x}}=\left(\overline{x}_1,\dots,\overline{x}_n\right)\qquad\text{with}\qquad\overline{x}_j=\sqrt[2n]{n!}\,\sqrt{n-j+1},\quad j=1,\dots,n,$$
and that the Kuhn–Tucker coefficient corresponding to the constraint is
$\lambda =2\sqrt [n]{n!}$
. We verify this by checking the three conditions of [Reference Rockafellar18, Theorem 28.3].
Condition (a): Inequality constraints
It is clear that
$\lambda \geq 0$
. Moreover,
$n!\,g_n(\boldsymbol {\overline {x}})-1\leq 0$
and
$\lambda (n!\,g_n(\boldsymbol {\overline {x}})-1)= 0$
both follow from
$$n!\,g_n(\overline{\boldsymbol{x}})=n!\prod_{j=1}^n\frac{1}{\sqrt[2n]{n!}\,\sqrt{n-j+1}}=\frac{n!}{\sqrt{n!}\,\sqrt{n!}}=1.$$
Condition (b): Equality constraints
This condition is vacuously satisfied since there are no equality constraints.
Condition (c): Lagrangian
Since
$f_n$
and
$g_n$
are both differentiable on
$\mathbb {R}_{>0}^n$
, Condition (c) becomes a statement about the gradient of the Lagrangian instead of its subdifferential. In particular, routine calculations show that
$$\nabla f_n(\overline{\boldsymbol{x}})=\left(\frac{2\overline{x}_j}{n-j+1}\right)_{j=1}^n=\left(\frac{2\sqrt[2n]{n!}}{\sqrt{n-j+1}}\right)_{j=1}^n\tag{4.3}$$
and
$$\lambda\,n!\,\nabla g_n(\overline{\boldsymbol{x}})=2\sqrt[n]{n!}\left(-\frac{n!\,g_n(\overline{\boldsymbol{x}})}{\overline{x}_j}\right)_{j=1}^n=\left(-\frac{2\sqrt[2n]{n!}}{\sqrt{n-j+1}}\right)_{j=1}^n.\tag{4.4}$$
Adding (4.3) and (4.4) demonstrates that
$$\nabla f_n(\overline{\boldsymbol{x}})+\lambda\,n!\,\nabla g_n(\overline{\boldsymbol{x}})=\boldsymbol{0}.$$
This checks Condition (c) and verifies our claim.
Finally, we prove Lemma 4.2 by evaluating the objective function at
$\boldsymbol {\overline {x}}$
, namely,
$$f_n(\overline{\boldsymbol{x}})=\sum_{j=1}^n\frac{\overline{x}_j^{\,2}}{n-j+1}=\sum_{j=1}^n\frac{\sqrt[n]{n!}\,(n-j+1)}{n-j+1}=n\sqrt[n]{n!}.$$
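As a sanity check on Lemma 4.2, the constrained problem can be solved numerically. The sketch below is an illustrative addition; the dimension, solver, and starting point are arbitrary choices, and it recovers the value $n\sqrt[n]{n!}$.

```python
# Numerical confirmation of Lemma 4.2: minimize f_n subject to
# n! * g_n <= 1 (equivalently x_1 * ... * x_n >= n!) and compare
# with n * (n!)^(1/n) ~ 13.0265 when n = 5.
import math
import numpy as np
from scipy.optimize import minimize

n = 5
f = lambda x: sum(x[j] ** 2 / (n - j) for j in range(n))  # f_n, 0-indexed
constraint = {"type": "ineq",
              "fun": lambda x: np.prod(x) - math.factorial(n)}

result = minimize(f, x0=np.full(n, math.factorial(n) ** (1 / n)),
                  method="SLSQP", constraints=[constraint],
                  bounds=[(1e-6, None)] * n)
print(result.fun, n * math.factorial(n) ** (1 / n))
```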
With Lemma 4.2 in hand, we can begin to prove Theorem 2.1. The proof of Theorem 2.3 is much simpler and appears at the end of this section.
Proof of Theorem 2.1
The lower bound is a straightforward consequence of part (i) of Proposition 3.1 together with Jensen’s inequality and Eldan’s formula (1.3) for the expected value of the volume. In particular, these results allow us to write
$$\mathbb{E}\left[\Theta_1^V\right]=\mathbb{E}\left[V_1^{-2/n}\right]\geq\left(\mathbb{E}[V_1]\right)^{-2/n}=\left(\left(\frac{\pi}{2}\right)^{n/2}\frac{1}{\Gamma\left(\frac{n}{2}+1\right)^{2}}\right)^{-2/n}=\frac{2}{\pi}\Gamma\left(\frac{n}{2}+1\right)^{4/n}.$$
Proving the upper bound requires the n-stage procedure that was described at the beginning of Section 4.1. Starting with
$T_0=0$
,
$\boldsymbol {x}_0=\boldsymbol {0}$
, and
$\mathcal {X}_0=\mathbb {R}^n$
, we recursively define
$T_j$
,
$\boldsymbol {x}_j$
, and
$\mathcal {X}_j$
for
$j=1,\dots ,n$
by
$$T_j:=\inf\left\{t>T_{j-1}:\left\|\operatorname{proj}\left(\boldsymbol{W}_t,\mathcal{X}_{j-1}\right)\right\|\geq r_j\right\},\qquad\boldsymbol{x}_j:=\boldsymbol{W}_{T_j},\qquad\mathcal{X}_j:=\left(\operatorname{span}\{\boldsymbol{x}_0,\dots,\boldsymbol{x}_j\}\right)^{\perp}.$$
In words,
$T_j$
is the first time after
$T_{j-1}$
that the orthogonal projection of
$\boldsymbol {W}$
onto the linear subspace
$\mathcal {X}_{j-1}$
exits the centered open ball of radius
$r_j$
, the point
$\boldsymbol {x}_j$
is the position of
$\boldsymbol {W}$
at time
$T_j$
, and
$\mathcal {X}_j$
is the orthogonal complement in
$\mathbb {R}^n$
of the linear span of the vectors
$\boldsymbol {x}_0,\dots ,\boldsymbol {x}_j$
.
With this construction, it is clear that
$\{\boldsymbol {x}_0,\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n\}\subset \mathcal {H}_{T_n}$
. Moreover,
$$\left\|\operatorname{proj}\left(\boldsymbol{x}_j,\mathcal{X}_{j-1}\right)\right\|=r_j>0,\qquad j=1,\dots,n,$$
implies that
$\{\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n\}$
is a linearly independent set of vectors. In particular, it follows that
$\boldsymbol {x}_0,\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n$
are affinely independent, and therefore constitute the vertices of some n-simplex
$\mathcal {S}\subset \mathcal {H}_{T_n}$
. Since the n-parallelotope spanned by the vectors
$\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n$
can be partitioned into
$n!$
copies of
$\mathcal {S}$
, we deduce that
$$V(\mathcal{S})=\frac{1}{n!}\sqrt{\det(\boldsymbol{G})},\tag{4.5}$$
where
$\boldsymbol {G}$
is the Gram matrix of the vectors
$\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n$
. Letting
$\boldsymbol {M}$
denote the
$n\times n$
matrix with columns
$\boldsymbol {x}_1,\dots ,\boldsymbol {x}_n$
, we can write
$$\det(\boldsymbol{G})=\det\left(\boldsymbol{M}^{\top}\boldsymbol{M}\right)=\det(\boldsymbol{M})^2.\tag{4.6}$$
The rotational invariance of
$\boldsymbol {W}$
allows us to reorient the coordinate axes in a convenient way without affecting the distribution of the stopping times
$T_1,\dots ,T_n$
. This will considerably simplify the computation of
$\det (\boldsymbol {G})$
. Equivalently, we can apply an orthogonal transformation to each
$\boldsymbol {x}_j$
that changes the jth coordinate to
$r_j$
and the last
$n-j$
coordinates to
$0$
, while fixing the first
$j-1$
coordinates. Hence, without loss of generality, we can take
$\boldsymbol {x}_j=(x_{1j},\dots , x_{nj})$
with
$x_{kj}=r_j$
if
$k=j$
and
$x_{kj}=0$
if
$k>j$
. See Figure 1 for an illustration of each stage of the construction of
$\mathcal {S}$
and reorientation when
$n=3$
. In particular, this makes
$\boldsymbol {M}$
an upper triangular matrix with diagonal entries
$r_1,\dots ,r_n$
. Therefore,
$\det (\boldsymbol {M})=r_1\cdots r_n$
, and we can conclude from (4.5) and (4.6) that
$$V(\mathcal{S})=\frac{r_1r_2\cdots r_n}{n!}.\tag{4.7}$$

Figure 1 Constructing the
$3$
-simplex
$\mathcal {S}$
in
$\mathbb {R}^3$
, with the Brownian path omitted for clarity. The three rows of figures correspond to the three stages. The left and right columns correspond, respectively, to before and after reorientation.
To compute the right-hand side of (4.1), we note that by construction,
$$\mathbb{E}[T_n]=\sum_{j=1}^n\mathbb{E}\left[T_j-T_{j-1}\right].$$
Moreover, since
$\operatorname {\mathrm {proj}}(\boldsymbol {W},\mathcal {X}_{j-1})$
is Brownian motion in
$n-j+1$
dimensions, it follows for all
$1\leq j\leq n$
that
$T_j-T_{j-1}$
is the first exit time from an (
$n-j+1$
)-dimensional open ball of radius
$r_j$
by (
$n-j+1$
)-dimensional Brownian motion starting at the center of the ball. We can now use (3.2) to write
$$\mathbb{E}\left[T_j-T_{j-1}\right]=\frac{r_j^2}{n-j+1}\qquad\text{and hence}\qquad\mathbb{E}[T_n]=\sum_{j=1}^n\frac{r_j^2}{n-j+1}.$$
It follows from part (i) of Proposition 3.1 that
$\mathbb {E}[\Theta _y^V]$
is an increasing function of the volume y. Hence, (4.1) and (4.7) together imply that we have
$$\mathbb{E}\left[\Theta_1^V\right]\leq\mathbb{E}\left[\Theta_{V(\mathcal{S})}^V\right]\leq\mathbb{E}[T_n]=\sum_{j=1}^n\frac{r_j^2}{n-j+1}\tag{4.8}$$
whenever the radii satisfy
$r_1\cdots r_n\geq n!$
. Now we can conclude from (4.8) and Lemma 4.2 that
$$\mathbb{E}\left[\Theta_1^V\right]\leq\inf\left\{f_n(\boldsymbol{r}):\boldsymbol{r}\in\mathbb{R}_{>0}^n,\ n!\,g_n(\boldsymbol{r})\leq 1\right\}=n\sqrt[n]{n!}.$$
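The determinant identity (4.5)–(4.7) at the heart of this proof is easy to verify numerically. The sketch below is an illustrative addition with arbitrary radii and dimension: it builds an upper triangular $\boldsymbol{M}$ with diagonal $r_1,\dots,r_n$, applies a random rotation to mimic the reorientation step, and compares the qhull volume of the resulting simplex with $r_1\cdots r_n/n!$.

```python
# Illustration of (4.5)-(4.7): the simplex spanned by the origin and
# the columns of a rotated upper triangular M has volume
# |det M| / n! = r_1 * ... * r_n / n!.
import math
import numpy as np
from scipy.spatial import ConvexHull
from scipy.stats import ortho_group

rng = np.random.default_rng(3)
n, r = 3, np.array([1.5, 2.0, 0.5])

M = np.triu(rng.normal(size=(n, n)))      # columns play the role of x_j
np.fill_diagonal(M, r)                    # diagonal entries r_1, ..., r_n
Q = ortho_group.rvs(n, random_state=rng)  # random reorientation

vertices = np.vstack([np.zeros(n), (Q @ M).T])
print(ConvexHull(vertices).volume)        # qhull volume of the simplex
print(np.prod(r) / math.factorial(n))     # r_1 * r_2 * r_3 / 3! = 0.25
```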
Proof of Theorem 2.3
The lower bound is a straightforward consequence of part (ii) of Proposition 3.1 together with Jensen’s inequality and Eldan’s formula (1.3) for the expected value of the surface area. In particular, these results allow us to write
$$\mathbb{E}\left[\Theta_1^S\right]=\mathbb{E}\left[S_1^{-2/(n-1)}\right]\geq\left(\mathbb{E}[S_1]\right)^{-2/(n-1)}=\left(\frac{2(2\pi)^{(n-1)/2}}{(n-1)!}\right)^{-2/(n-1)}=\frac{1}{2\pi}\left(\frac{(n-1)!}{2}\right)^{2/(n-1)}.$$
To prove the upper bound, first note that by the isoperimetric inequality, the surface area of the convex hull at time
$\Theta _1^V$
must be at least that of a ball in
$\mathbb {R}^n$
with volume
$1$
. From the scaling property of Lebesgue measure, we can deduce that a ball in
$\mathbb {R}^n$
with volume
$1$
has surface area
$\omega _n\kappa _n^{(1-n)/n}$
. Thus, part (ii) of Proposition 3.1 implies
$$\mathbb{E}\left[\Theta_1^S\right]=\left(\omega_n\kappa_n^{(1-n)/n}\right)^{-2/(n-1)}\mathbb{E}\left[\Theta_{\omega_n\kappa_n^{(1-n)/n}}^S\right]\leq\left(\omega_n\kappa_n^{(1-n)/n}\right)^{-2/(n-1)}\mathbb{E}\left[\Theta_1^V\right].\tag{4.9}$$
Now combining (4.9) with (3.1) and the upper bound from Theorem 2.1 leads to
$$\mathbb{E}\left[\Theta_1^S\right]\leq\left(n\kappa_n^{1/n}\right)^{-2/(n-1)}n\sqrt[n]{n!}=\frac{n\sqrt[n]{n!}}{\left(n\kappa_n^{1/n}\right)^{2/(n-1)}}.$$
4.2 Diameter and its inverse process
Proof of Theorem 2.5
We start by proving the lower bound. Note that
$$D_1\geq\sup_{0\leq t\leq 1}\left\|\boldsymbol{W}_t\right\|\stackrel{d}{=}\frac{1}{\sqrt{\tau_1}},\tag{4.10}$$
where we used
$\tau _1$
to denote the exit time of the unit ball in
$\mathbb {R}^n$
by n-dimensional Brownian motion starting from the center. The equality in distribution in (4.10) is similar to those from Proposition 3.1 and follows from Brownian scaling (see also [Reference Pitman and Yor17, Equation (11)]). Taking the expected value of (4.10) while applying Jensen’s inequality on the right-hand side and using (3.2) leads to
$$\mathbb{E}[D_1]\geq\mathbb{E}\left[\tau_1^{-1/2}\right]\geq\left(\mathbb{E}[\tau_1]\right)^{-1/2}=\sqrt{n}.$$
To prove the upper bound, we consider the hyperrectangle
$\mathcal {R}$
circumscribed around
$\mathcal {H}_1$
that has each edge parallel to a coordinate axis and use the diagonal of
$\mathcal {R}$
to bound
$D_1$
from above. This idea was first used by McRedmond and Xu in the planar case (see [Reference McRedmond and Xu15, Proposition 5]). More precisely, we have
$$D_1\leq D(\mathcal{R})=\left(\sum_{i=1}^n\left(\sup_{0\leq t\leq 1}W_t^{(i)}-\inf_{0\leq t\leq 1}W_t^{(i)}\right)^{2}\right)^{1/2}.$$
The quantity
$\sup _{0\leq t\leq 1} W_t^{(i)}-\inf _{0\leq t\leq 1} W_t^{(i)}$
is nothing but the range of the ith coordinate of
$\boldsymbol {W}$
. In particular, its second moment was computed by Feller in [Reference Feller4] and was shown to be
$4\log 2$
. Hence, it follows from Jensen’s inequality that
$$\mathbb{E}[D_1]\leq\left(\sum_{i=1}^n\mathbb{E}\left[\left(\sup_{0\leq t\leq 1}W_t^{(i)}-\inf_{0\leq t\leq 1}W_t^{(i)}\right)^{2}\right]\right)^{1/2}=\sqrt{4n\log 2}=2\sqrt{n\log 2}.\tag{4.11}$$
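Feller's constant $4\log 2\approx 2.7726$ is also easy to confirm by simulation. The following sketch is an illustrative addition with arbitrary discretization, which biases the estimate slightly downward.

```python
# Monte Carlo check of E[(sup W - inf W)^2] = 4*log(2) for standard
# one-dimensional Brownian motion on [0, 1].
import numpy as np

rng = np.random.default_rng(4)
steps, samples = 10_000, 1_000
increments = rng.normal(0.0, np.sqrt(1.0 / steps), (samples, steps))
paths = np.cumsum(increments, axis=1)
ranges = paths.max(axis=1) - paths.min(axis=1)
print(np.mean(ranges**2), 4 * np.log(2))
```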
Proof of Theorem 2.6
For the lower bound, we can use part (iii) of Proposition 3.1 along with Jensen’s inequality and the upper bound from Theorem 2.5 to write
$$\mathbb{E}\left[\Theta_1^D\right]=\mathbb{E}\left[D_1^{-2}\right]\geq\left(\mathbb{E}[D_1]\right)^{-2}\geq\frac{1}{4n\log 2}.$$
For the upper bound, note that as soon as
$\boldsymbol {W}$
exits a ball of radius
$1$
, its convex hull contains a line segment of length at least
$1$
. Hence, the diameter of the convex hull at this time is at least
$1$
. Thus,
$\Theta _1^D \leq \tau _1$
, and from (3.2) we get
$$\mathbb{E}\left[\Theta_1^D\right]\leq\mathbb{E}[\tau_1]=\frac{1}{n}.$$
4.3 Circumradius and its inverse process
Before proving Theorem 2.8, we need a lemma that equates the diameter and twice the circumradius of a centrally symmetric compact set in
$\mathbb {R}^n$
. A set
$\mathcal {C}\subset \mathbb {R}^n$
, not necessarily convex, is centrally symmetric with respect to the point
$\boldsymbol {p}\in \mathbb {R}^n$
if
$$\mathcal{C}=\left\{2\boldsymbol{p}-\boldsymbol{x}:\boldsymbol{x}\in\mathcal{C}\right\}.\tag{4.12}$$
The fact that the following lemma is not true for general compact
$\mathcal {C}$
is part of the substance of Jung’s theorem (see [Reference Gruber5, Theorem 3.3]).
Lemma 4.3 Let
$\mathcal {C}\subset \mathbb {R}^n$
be a centrally symmetric compact set. Then
$$D(\mathcal{C})=2R(\mathcal{C}).\tag{4.13}$$
In particular, (4.13) holds for any hyperrectangle.
Proof of Lemma 4.3
Without loss of generality, we can assume that
$\mathcal {C}$
is centrally symmetric with respect to the origin. By compactness, there exist points
$\boldsymbol {x},\boldsymbol {y}\in \mathcal {C}$
with
$\|\boldsymbol {x}-\boldsymbol {y}\|=D(\mathcal {C})$
. Hence,
$D(\mathcal {C})\leq 2R(\mathcal {C})$
, for otherwise,
$\boldsymbol {x}$
and
$\boldsymbol {y}$
could not both fit inside the circumball. To prove that the other inequality holds, define
$\rho :=\sup \{\|\boldsymbol {x}\|:\boldsymbol {x}\in \mathcal {C}\}$
. Certainly
$R(\mathcal {C})\leq \rho $
, since
$\mathcal {C}$
is contained in the closed ball of radius
$\rho $
centered at the origin. By compactness, we know there exists some
$\boldsymbol {x}\in \mathcal {C}$
with
$\|\boldsymbol {x}\|=\rho $
. By central symmetry,
$-\boldsymbol {x}\in \mathcal {C}$
, so we have
$D(\mathcal {C})\geq 2\rho $
. Putting these two inequalities together gives
$D(\mathcal {C})\geq 2R(\mathcal {C})$
, which proves the first claim.
We prove the last claim by establishing the central symmetry of any hyperrectangle
$\mathcal {R}\subset \mathbb {R}^n$
. Without loss of generality, we can assume that each edge of
$\mathcal {R}$
is parallel to a coordinate axis and that
$\mathcal {R}$
is centered at the origin. More precisely,
$$\mathcal{R}=[-w_1,w_1]\times\cdots\times[-w_n,w_n],\tag{4.14}$$
where
$w_1,\dots ,w_n\geq 0$
are the coordinate half-widths of
$\mathcal {R}$
. It is clear from (4.14) that
$\boldsymbol {x}\in \mathcal {R}$
if and only if
$-\boldsymbol {x}\in \mathcal {R}$
. Hence,
$\mathcal {R}$
satisfies (4.12) with
$\boldsymbol {p}=\boldsymbol {0}$
.
Proof of Theorem 2.8
In light of Theorem 2.5, the lower bound follows immediately from the inequality
$D(\mathcal {C})\leq 2R(\mathcal {C})$
that holds for any compact set
$\mathcal {C}\subset \mathbb {R}^n$
; refer to the proof of Lemma 4.3 for an explanation of this inequality.
For the upper bound, we consider the hyperrectangle
$\mathcal {R}$
circumscribing
$\mathcal {H}_1$
that has each edge parallel to a coordinate axis. Similarly to the proof of Theorem 2.5, we can write
$$2R_1\leq 2R(\mathcal{R})=D(\mathcal{R})=\left(\sum_{i=1}^n\left(\sup_{0\leq t\leq 1}W_t^{(i)}-\inf_{0\leq t\leq 1}W_t^{(i)}\right)^{2}\right)^{1/2},\tag{4.15}$$
where the first equality follows from Lemma 4.3. Taking the expected value of (4.15), while using Jensen’s inequality on the right-hand side along with Feller’s second moment calculation from (4.11), results in
$$\mathbb{E}[R_1]\leq\frac{1}{2}\,\mathbb{E}\left[D(\mathcal{R})\right]\leq\frac{1}{2}\left(\sum_{i=1}^n 4\log 2\right)^{1/2}=\sqrt{n\log 2}.$$
Proof of Theorem 2.9
For the lower bound, we can use part (iv) of Proposition 3.1 along with Jensen’s inequality and the upper bound from Theorem 2.8 to write
$$\mathbb{E}\left[\Theta_1^R\right]=\mathbb{E}\left[R_1^{-2}\right]\geq\left(\mathbb{E}[R_1]\right)^{-2}\geq\frac{1}{n\log 2}.$$
For the upper bound, note that the trivial inequality
$D(\mathcal {H}_t)\leq 2R(\mathcal {H}_t)$
implies that the circumradius of the convex hull will attain
$1$
no later than when its diameter attains
$2$
. Hence, we can use Proposition 3.1 and Theorem 2.6 to write
$$\mathbb{E}\left[\Theta_1^R\right]\leq\mathbb{E}\left[\Theta_2^D\right]=4\,\mathbb{E}\left[\Theta_1^D\right]\leq\frac{4}{n}.$$
Acknowledgments
The authors would like to thank two anonymous reviewers for their helpful comments and suggestions which have led to an improved manuscript.