
Large and moderate deviations in Poisson navigations

Published online by Cambridge University Press:  10 September 2025

Partha Pratim Ghosh*
Affiliation:
Technische Universität Braunschweig
Benedikt Jahnel*
Affiliation:
Technische Universität Braunschweig & Weierstrass Institute Berlin
Sanjoy Kumar Jhawar*
Affiliation:
INRIA Paris & Telecom Paris
*Postal address: Technische Universität Braunschweig, Universitätsplatz 2, 38106 Braunschweig, Germany.
**Postal address: INRIA Paris, 48 Rue Barrault, 75013 Paris, France. Email: sanjoy-kumar.jhawar@inria.fr

Abstract

We derive large- and moderate-deviation results in random networks given as planar directed navigations on homogeneous Poisson point processes. In this non-Markovian routing scheme, starting from the origin, at each consecutive step a Poisson point is joined by an edge to its nearest Poisson point to the right within a cone. We establish precise exponential rates of decay for the probability that the vertical displacement of the random path is unexpectedly large. The proofs rest on controlling the dependencies of the individual steps and the randomness in the horizontal displacement as well as renewal-process arguments.

Information

Type
Original Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

We consider decentralized traffic-flow or drainage networks, in which nodes transmit data individually to dedicated neighboring nodes according to local rules, or, respectively, liquid flows towards a certain direction. For this, directed navigations on homogeneous Poisson point processes in the Euclidean plane can serve as an underlying network model [Reference Baccelli and Bordenave3, Reference Bordenave5]. The traffic generated at each node is forwarded iteratively to the nearest node, for example, to the right in the horizontal direction. In the analysis of such models, it has been observed in [Reference Hirsch, Jahnel, Keeler and Patterson10] that the spatial traffic-flow density at a given location in the domain, i.e. the spatial average of accumulated traffic in a microscopic area around that point, follows a law of large numbers in the high-density limit. Here, the asymptotic traffic density can be captured as an integral in terms of the spatial intensity of the point process together with the nodes' rate of traffic generation. The bulk contribution in this limit comes from the traffic generated within a thin horizontal strip around the given point, due to the sub-ballisticity of the trajectory of the navigation. This sub-ballistic behavior is a version of the straightness condition studied in [Reference Howard and Newman11].

Going beyond sub-ballisticity and the law of large numbers, questions about the deviation behavior of paths in the traffic network arise. In a stationary setting, is it rare for a path originating from the origin to deviate far from the horizontal axis? For this, we denote by ${\mathcal Y }_t$ the vertical position of the path, measured as a function of the horizontal position t. If the answer to the question is no, then such paths may well miss the microscopic region around a given location, and the traffic accumulated along them may not contribute to the aggregated traffic at the location of interest. This in turn would lead to deviations in the throughput at the target location, an indication of unexpected network performance. If, on the other hand, the answer is yes, then the paths do not deviate too much and contribute to the bulk of the throughput. But how rare is the deviation event? In other words, on what scale does the probability of such an event live, and what are the rates? To answer these sorts of questions, in this paper we investigate the large- and moderate-deviation behavior of the deviation of these paths away from the horizontal axis in a Poisson navigation model where each consecutive successor is defined as the closest neighbor within a cone to the right of the current point in the Euclidean plane.

Large- and moderate-deviation principles have constituted a cornerstone in the analysis of random geometric models, and of stochastic processes in general, over the last four to five decades; this is well documented, for example, in [Reference Dembo and Zeitouni7, Reference Hollander8, Reference Varadhan17]. These principles capture rare events by quantifying their asymptotic unlikeliness: the small probability of an event is expressed in terms of an exponential rate and a rate function. In the context of telecommunication networks, large-deviation principles can be used to point out how badly the network is performing and what the most prominent reasons for this behavior are. The large-deviation principle has been studied for many different telecommunication network models, mainly identifying indicators of poor service; see, for example, [Reference Jahnel and König12, Chapter 6] and references therein. Moderate deviations, on the other hand, are large deviations at a slower scale. The moderate-deviation principle for a sequence of independent and identically distributed (i.i.d.) centered random variables has been studied, for example, in [Reference Eichelsbacher and Löwe9], as a much cleaner and shorter version of [Reference Arcones1], and, for sequences of Banach-space-valued random variables, in the classic work of Ledoux [Reference Ledoux13]. In the case of moderate deviations, the rate function turns out to be of Gaussian type irrespective of the model under consideration, whereas in large deviations the rate function depends on the characteristics of the random variables.

The model we study in this paper is the underlying setup for the traffic-flow network mentioned above. It is known as a navigation on Poisson point processes in Euclidean space; the directed and radial navigations in particular were introduced in [Reference Bordenave5]. However, we introduce an additional parameter $0<\theta\le \pi/2$ that controls the angle of the cone in which the navigation searches for the next neighbor. By definition, if the navigation starts at multiple points, this type of navigation gives rise to a tree structure. In applications to telecommunication networks, the root of the spanning trees can be seen as a network head where all the information is gathered along the edges of the tree and processed. Hence, it is worthwhile to know the local and global structural properties of the tree. In the work of Baccelli and Bordenave [Reference Baccelli and Bordenave3], which only deals with the case $\theta=\pi/2$ , the quantities of interest were local tree functionals around a vertex, for example the degree, properties of the path in the tree from a vertex to the root, such as its total length, and properties of the global tree structure and its shape. In somewhat more geometric and analytical work, the convergence of the tree to a Brownian web under the appropriate scaling limit has been studied in [Reference Coupier, Saha, Sarkar and Tran6, Reference Roy, Saha and Sarkar15] for the Poisson setting in general dimensions and, more recently, for a discrete setting on perturbed planar lattices [Reference Roy, Saha and Sarkar16]; however, all of this is only in the case $\theta=\pi/2$ . In this paper, in particular, we improve the polynomial-decay properties in [Reference Baccelli and Bordenave3, Lemma 4.11] to stretched-exponential decay of a moderate-deviation type, and we go beyond the associated central limit theorem [Reference Baccelli and Bordenave3, Theorem 4.7]. Let us also mention the work of Bonichon and Marckert [Reference Bonichon and Marckert4], where navigation characteristics are investigated in a high-density regime for not necessarily homogeneous Poisson point processes.

When $\pi/4<\theta \le \pi/2$ , the navigation possesses a challenging dependence structure along its path, since each step carries an extra piece of information from the previous steps via nontrivial regions that are known to be void of points of the underlying Poisson point process. We call this piece of information the history set and identify the steps where the history set is well behaved, giving rise to a renewal structure that is essential for our analysis. However, the tail properties of the step variables of the renewal process are challenging to analyze, and this makes up the bulk of the technical part of the paper. For this, we control the exponential decay of the inter-stopping time gaps via bounds on the dynamics of the width of the history set, using Markov-chain comparison ideas from [Reference Coupier, Saha, Sarkar and Tran6, Proposition 3.1]. However, this control is not sufficiently detailed to establish the large deviations in the whole parameter regime. Indeed, the model becomes substantially simpler for $\theta\in (0, \pi/4]$ , as the steps become i.i.d., and we can then provide the full large- and moderate-deviation analysis with rather explicit rate functions.

Let us mention that our main results should still hold in higher dimensions with appropriately adjusted definitions, for example, of higher-dimensional angles. The associated proofs should not pose any additional substantial difficulties. In order to keep the exposition more accessible, we focus here only on the planar case. Similarly, the qualitative picture should remain intact if the directed navigation defined below, which is based on a nearest-neighbor relation, is replaced by a navigation based on connecting, for example, to the kth nearest neighbor, $k\ge 1$ . However, at least for $\theta\in (\pi/4,\pi/2]$ , many of the underlying definitions and arguments would need to be changed, but a corresponding renewal structure should still be present. Another interesting direction for future research could be to assign individual angles to every vertex via i.i.d. marks and study the resulting navigation process.

The organization of the paper is as follows. In Section 2 we describe the model in detail and state the main results on the large- and moderate-deviation principles as well as on the scaling property of the rate functions with respect to the intensity of the Poisson point process. In Section 3 we first uncover the hidden independence structure in the model and state the key supporting results that enable us to prove the main result on moderate deviations for $\theta<\pi/2$ . In Section 4 we discuss the large-deviation principle in the dependent case, i.e. for $\pi/4<\theta<\pi/2$ , and prove it under a key assumption on the tail behavior of the renewal-step variables. Section 5 contains the proofs of all the supporting results stated in Section 3. Section 6 contains the separate argument for the moderate-deviation principle in the case $\theta=\pi/2$ , and in Section 7 we prove all statements regarding large deviations. Finally, in Appendix A we elaborate on a connection between moderate and large deviations.

2. Setting and Main Results

Let ${\mathcal P }_{\lambda}$ denote the homogeneous Poisson point process with intensity $\lambda>0$ on $\mathbb{R}^2$ and consider an additional point o at the origin such that ${\mathcal P }_{\lambda}\cup \{o\}$ is the Poisson point process under its Palm distribution $\mathbb{P}$ . We are interested in the large and moderate deviations of a dependent sequence of waypoints, starting at o, that are defined iteratively by choosing a successor point, which is the closest Poisson point towards the right within a cone. More precisely, for any $v\in \mathbb{R}^2$ , let $(r_v, \varphi_v)\in \mathbb{R}_+\times[-\pi,\pi)$ denote its polar coordinates, with the first unit vector $e_1$ corresponding to the polar coordinates (1,0), and consider

\[{\mathcal C }_{\theta}\;:\!=\;\{v=(r_v, \varphi_v)\colon r_v>0, |\varphi_v|\leq \theta\},\]

the cone with angle $2\theta$ centered at the origin. For $0\le \theta\le \pi/2$ , the cone faces towards the right. For any $v\in \mathbb{R}^2$ , we write ${\mathcal C }_{\theta}(v)\;:\!=\;v+{\mathcal C }_{\theta}$ and ${\mathcal C }^o_{\theta}(v)$ for the interior of ${\mathcal C }_{\theta}(v)$ . Then, we consider the following family of navigations based on the usual Euclidean metric $|\cdot|$ in $\mathbb{R}^2$ .

Definition 1. (Navigations.) Let $V_0=o$ . Then, we call ${\mathcal V }\;:\!=\;\{V_n\}_{n\geq 0}\subset {\mathcal P }_\lambda$ , iteratively defined as

\[ V_{i+1}\;:\!=\;\textrm{argmin}\{|v-V_i|\colon v\in {\mathcal P }_\lambda\cap {\mathcal C }_{\theta}(V_i)\},\]

a directed $\theta$ navigation. Here $V_{i+1}$ is called the successor of $V_i\in {\mathcal P }_\lambda$ and $U_i\;:\!=\;V_i-V_{i-1}$ the ith progress of the navigation.

Note that, under $\mathbb{P}$ , the argmin is uniquely defined almost surely (a.s.). We are interested in a continuous-time process based on the navigation ${\mathcal V }$ . For this, consider $\bar{\mathcal V }\;:\!=\;\bigcup_{k\ge 0}[V_k,V_{k+1}]$ , the interpolated trajectory for the navigation, where $[V_k,V_{k+1}]\subset\mathbb{R}^2$ should be understood as the one-dimensional line segment that connects $V_k$ and $V_{k+1}$ . Then, we can see the interpolated trajectory $\bar{\mathcal V }$ as a piecewise affine and continuous path, parametrized with respect to time t as $\{(t, {\mathcal Y }_t)\}_{t\geq 0}$ , where the parameter t denotes the progress along the x axis and ${\mathcal Y }_t\in\mathbb{R}$ is the corresponding y coordinate or vertical displacement at time t. In particular, for $t=\pi_1(V_k)$ , where $\pi_1$ denotes the projection to the first Cartesian coordinate, we have ${\mathcal Y }_t=\pi_2(V_k)$ , for every $k\ge 0$ , where $\pi_2$ denotes the projection to the second Cartesian coordinate; see Figure 1 for an illustration.

Figure 1. Simulated sample path of $\bar{\mathcal V }$ for $\theta=\arctan(5)$ .
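To make Definition 1 concrete, a sample path as in Figure 1 can be generated by a short simulation. The following sketch is our illustration rather than code accompanying the paper; the finite sampling window, the name simulate_navigation, and all parameter values are our own choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_navigation(lam=1.0, theta=np.arctan(5), t_max=50.0, pad=10.0):
    """Run the directed theta-navigation of Definition 1 from the origin o
    on a Poisson point process sampled in a finite window."""
    # The window must be large enough that truncation does not affect the
    # path up to horizontal position t_max.
    wx, wy = t_max + pad, t_max + pad
    n = rng.poisson(lam * wx * 2 * wy)
    pts = np.column_stack([rng.uniform(0, wx, n), rng.uniform(-wy, wy, n)])
    path = [np.zeros(2)]  # V_0 = o
    while path[-1][0] < t_max:
        d = pts - path[-1]
        r = np.hypot(d[:, 0], d[:, 1])
        phi = np.arctan2(d[:, 1], d[:, 0])
        in_cone = (r > 0) & (np.abs(phi) <= theta)  # points in C_theta(V_i)
        if not in_cone.any():
            break  # window exhausted
        idx = np.flatnonzero(in_cone)[np.argmin(r[in_cone])]
        path.append(pts[idx])  # the successor V_{i+1}
    return np.array(path)

path = simulate_navigation()
# Vertical displacement Y_t at t = 40, read off the piecewise affine
# interpolated trajectory; the x-coordinates are strictly increasing since
# theta < pi/2 forces positive horizontal progress in every step.
print(np.interp(40.0, path[:, 0], path[:, 1]))
```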

The following standard notation is used throughout the paper. For any set $\Gamma$ , we denote its interior by $\Gamma^o$ , its closure by $\overline{\Gamma}$ , and its complement by $\Gamma^c$ . Our first main result is the moderate-deviation principle for ${\mathcal Y }=\{{\mathcal Y }_t\}_{t\ge 0}$ .

Theorem 1. For any $0<\lambda$ , $0<\theta \leq \pi/2$ and $0<\varepsilon<1/2$ , $\{t^{-1/2-\varepsilon} {\mathcal Y }_t\}_{t\geq 0}$ obeys the moderate-deviation principle with rate $t^{2\varepsilon}$ and rate function $I_{\lambda,\theta}(x)\;:\!=\;\rho(\lambda,\theta) x^2$ , where $\rho(\lambda,\theta)>0$ , meaning that, for any Borel set $\Gamma\subseteq \mathbb{R}$ ,

\begin{align*} -\inf_{x\in \Gamma^{o}} I_{\lambda,\theta}(x) &\leq \liminf_{t\to\infty} t^{-2\varepsilon} \log \mathbb{P}(t^{-1/2-\varepsilon} {\mathcal Y }_t \in \Gamma)\\[5pt] &\leq \limsup_{t\to\infty} t^{-2\varepsilon} \log \mathbb{P}(t^{-1/2-\varepsilon} {\mathcal Y }_t \in \Gamma) \\[5pt] &\leq -\inf_{x\in \overline\Gamma} I_{\lambda,\theta}(x). \end{align*}

Additionally, $\rho(\lambda,\theta)$ satisfies the scaling relation $\rho(\lambda,\theta)= \sqrt{\lambda}\rho(1,\theta)$ .

The constant $\rho(\lambda,\theta)$ can in general be expressed in a semi-explicit way; see (6) in Section 3. In the case $0<\theta\le \pi/4$ , it is given by

(1) \begin{align}\rho(\lambda,\theta)\;:\!=\;\frac{ \int_0^\infty\textrm{d} r\; r^2\exp\!(\!-\!\lambda\theta r^2)\int_{-\theta}^{\theta}\textrm{d} \varphi\; \cos\varphi}{2\int_0^\infty\textrm{d} r\; r^3\exp\!(\!-\!\lambda\theta r^2)\int_{-\theta}^{\theta}\textrm{d} \varphi\; \sin^2\varphi}=\frac{\sqrt{\pi\lambda\theta}\sin\theta }{2\theta-\sin(2\theta)}.\end{align}
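For completeness, the second equality in (1) follows from the elementary integrals

\begin{align*}\int_0^\infty\textrm{d} r\; r^2{\operatorname e }^{-\lambda\theta r^2}=\frac{\sqrt{\pi}}{4(\lambda\theta)^{3/2}},\qquad \int_0^\infty\textrm{d} r\; r^3{\operatorname e }^{-\lambda\theta r^2}=\frac{1}{2(\lambda\theta)^{2}},\end{align*}

together with $\int_{-\theta}^{\theta}\textrm{d} \varphi\; \cos\varphi=2\sin\theta$ and $\int_{-\theta}^{\theta}\textrm{d} \varphi\; \sin^2\varphi=\theta-\tfrac{1}{2}\sin(2\theta)$ , so that the quotient indeed reduces to $\sqrt{\pi\lambda\theta}\sin\theta/(2\theta-\sin(2\theta))$ .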

Let us mention that Theorem 1 can readily be used to establish also the strong law of large numbers $t^{-1}{\mathcal Y }_t\to 0$ a.s. In fact, Theorem 1 implies that even $t^{-1/2-\varepsilon}{\mathcal Y }_t\to 0$ a.s. for all $0<\varepsilon<1/2$ .

For all sufficiently small angles, we show that ${\mathcal Y }$ also satisfies the large-deviation principle. The rate function is given in terms of the multivariate rate function of i.i.d. progress variables defined in terms of the usual scalar product $\langle \cdot,\cdot\rangle$ in $\mathbb{R}^2$ .

Lemma 1. Consider i.i.d. copies $\{\tilde U_i\}_{i\ge 1}$ of the progress variable $U_1\in \mathbb{R}^2$ in the directed $\theta$ navigation. Then, $\{n^{-1}\sum_{i=1}^n\tilde U_i\}_{n\ge 1}$ satisfies the large-deviation principle with rate n and rate function

$$\mathcal J_{\lambda,\theta}(u)\;:\!=\;\sup\{\langle \gamma, u \rangle -J_{\lambda,\theta}(\gamma)\colon \gamma\in \mathbb{R}^2\},$$

where, for all $\gamma\in \mathbb{R}^2$ ,

\begin{align*}J_{\lambda,\theta}(\gamma)\;:\!=\;\log\bigg(\lambda \int_0^\infty \textrm{d} r\; r\exp\!(\!-\!\lambda\theta r^2)\int_{-\theta}^{\theta}\textrm{d} \varphi\; \exp\!(\gamma_1 r\cos\varphi+\gamma_2 r\sin\varphi)\bigg)<\infty.\end{align*}

Furthermore, $u\mapsto \mathcal J_{\lambda,\theta}(u)$ is continuous on $\{u\in \mathbb{R}^2\colon \mathcal J_{\lambda,\theta}(u)<\infty\}={\mathcal C }^o_{\theta}$ .

We are now in the position to state our second main theorem.

Theorem 2. For any $0<\lambda$ and $0<\theta\le \pi/4$ , $\{t^{-1} {\mathcal Y }_t\}_{t\geq 0}$ obeys the large-deviation principle with rate t and rate function $\mathcal I_{\lambda,\theta}(x)\;:\!=\;\inf\{\beta \mathcal J_{\lambda,\theta}(1/\beta,x/\beta)\colon \beta>0\}$ , meaning that, for any Borel set $\Gamma\subseteq \mathbb{R}$ ,

\begin{align*} -\inf_{x\in \Gamma^{o}} \mathcal I_{\lambda,\theta}(x) &\leq \liminf_{t\to\infty} t^{-1} \log \mathbb{P}(t^{-1} {\mathcal Y }_t \in \Gamma)\\[5pt] &\leq \limsup_{t\to\infty} t^{-1} \log \mathbb{P}(t^{-1} {\mathcal Y }_t \in \Gamma) \\[5pt] &\leq -\inf_{x\in \overline\Gamma} \mathcal I_{\lambda,\theta}(x).\end{align*}

Additionally, $\mathcal I_{\lambda,\theta}$ satisfies the scaling relation $\mathcal I_{\lambda,\theta} = \sqrt{\lambda} \mathcal I_{1,\theta}$ .

We believe that ${\mathcal Y }$ also satisfies the large-deviation principle for $\pi/4<\theta<\pi/2$ , but with a substantially more involved rate function $\mathcal I'_{\lambda,\theta}$ (see Theorem 3). As we highlight in the following sections, the challenge comes from the fact that, for $\pi/4<\theta<\pi/2$ , the individual steps are not i.i.d. any more. The underlying renewal structure, mentioned in the introduction and explained in detail in the next section, which is also crucial for the proof of the moderate-deviation principle, has individual renewal steps with exponential tails. One renewal step therefore makes a negligible contribution on the moderate-deviation scale, but is nonnegligible on the large-deviation scale. Due to the challenging dependence structure within the renewal steps, we are only able to give the large-deviation principle in Theorem 3 below assuming control on the tail behavior of the interarrival steps, see Assumption 1, and postpone the details to Section 4.

Before we enter a more detailed discussion about the proofs, let us elaborate on the connection between Theorems 1 and 2.

Remark 1. There is a heuristic that suggests that $\rho(\lambda,\theta)$ coincides with the second derivative of the large-deviation rate function at zero, that is, $\rho(\lambda,\theta)=\ddot{\mathcal{I}}_{\lambda,\theta}(0)/2$ , which stems from the following intuition. With $h=xt^{\varepsilon-1/2}$ small, we roughly have, for large t,

\begin{align*}t^{-2\varepsilon}\log\mathbb{P}(t^{-1/2-\varepsilon}{\mathcal Y }_t\approx x)&=t^{-2\varepsilon}\log\mathbb{P}(t^{-1}{\mathcal Y }_t\approx xt^{\varepsilon-1/2})\\[5pt] &\approx -t^{1-2\varepsilon}\mathcal I_{\lambda,\theta}(xt^{\varepsilon-1/2})\\[5pt] &= -x^2h^{-2}\mathcal I_{\lambda,\theta}(h)\\[5pt] &\approx -x^2\ddot{\mathcal I}_{\lambda,\theta}(0)/2,\end{align*}

where we used the fact that $\mathcal I_{\lambda,\theta}(0)=\dot{\mathcal I}_{\lambda,\theta}(0)=0$ by the strong law of large numbers mentioned above. However, as we will investigate in detail in Appendix A, this intuition fails already in the independent case where $0<\theta\le \pi/4$ . Roughly, this comes from the fact that, on the large-deviation level, there is a non-negligible interplay between the progress in the vertical and horizontal direction, which manifests itself also in $\ddot{\mathcal I}_{\lambda,\theta}(0)$ . However, this is not the case on the level of moderate deviations. Here, deviations in the horizontal direction come at an exponential cost of rate t, and hence, play no role at rate $t^{2\varepsilon}$ , with $0<\varepsilon<1/2$ .

3. Strategy of Proof

The proof of Theorem 2 is a consequence of Lemma 1 and the stepwise independence in the case $\theta\le \pi/4$ . We present the arguments in Section 7. Let us now focus on the moderate deviations. In general, the setting of Theorem 1 is more challenging since, for $\theta>\pi/4$ , we do not have independence in every step. However, for $\theta<\pi/2$ , independence can be recovered, and we focus on this case in what follows. In the case $\theta=\pi/2$ , a step is almost surely never independent of the previous steps, and we deal with this case separately in Section 6. More precisely, for $\theta<\pi/2$ , the navigation will occasionally make a step that is independent of the past. Conceiving the in-between steps as one segment, we re-enter a regime of independent segments for which we derive the moderate-deviation result. However, the distribution of the segments is hard to trace and has exponential tails. This is the main reason why we restrict ourselves to moderate deviations and only establish the large deviations under some assumptions on the segment distribution in Section 4.

To make this precise, we iteratively define a sequence of history sets. We set $H_0=\emptyset$ and define

\[H_1\;:\!=\; {\mathcal C }_{\theta}(U_1) \cap B(o, |U_1|), \]

where we recall that our navigation starts at the origin o. Here, B(x, r) denotes the open ball with radius r centered at $x\in \mathbb{R}^2$ . In words, the history set at step 1 is given by the region that lies both in the (potential) future of the first waypoint $V_1=U_1$ and in the void space that is responsible for finding the first waypoint. Note that $H_1=\emptyset$ whenever $0\le \theta\le \pi/4$ , since then any two directions in ${\mathcal C }_{\theta}$ enclose an angle of at most $2\theta\le\pi/2$ , so that every point of ${\mathcal C }_{\theta}(U_1)$ is at least as far from the origin as $U_1$ and hence lies outside the open ball $B(o, |U_1|)$ . For all larger n, we define

\[H_n\;:\!=\;{\mathcal C }_{\theta}(V_{n-1}+U_n)\cap \{H_{n-1}\cup B(V_{n-1}, |U_n|) \},\]

the region in the future of the nth waypoint that intersects the joint history. One way to look at the history set $H_n$ is as the region that, for the next step, is already known to be empty of Poisson points, since it has been searched in the previous steps.

Based on our history sets, for $0<\theta< \pi/2$ , we define $\tau^\theta_0=0$ and

\[{\tau}^{\theta}_1\;:\!=\;\inf\{n>0\colon H_n =\emptyset\},\]

to be the first step where there is no history. For $k>1$ ,

\[{\tau}^{\theta}_k\;:\!=\;\inf\{n>{\tau}^\theta_{k-1}\colon H_n =\emptyset\}\]

is the kth step where the history set is empty. Note that the inter-stopping time gaps $\{{\tau}^{\theta}_k- {\tau}^{\theta}_{k-1}\}_{k\geq 1}$ are i.i.d. random variables due to the total independence of the underlying Poisson point process. In particular, $\tau^\theta_k=k$ a.s. whenever $0< \theta\le\pi/4$ . We will later verify that ${\tau}^{\theta}_1$ is almost surely finite and has exponential tails. As anticipated, we now build segments

$$U'_i\;:\!=\;\sum_{j=\tau^\theta_{i-1}+1}^{\tau^\theta_i}U_j$$

and note that the sequence $\{U'_i\}_{i\ge 1}$ is i.i.d., again due to the total independence of the underlying Poisson point process. Furthermore, let us denote the number of steps before hitting time t by

$$K_t\;:\!=\;\sup\Big\{n>0\colon\sum_{i=1}^n X_i< t\Big\},$$

where $U_i\;=\!:\;(X_i,Y_i)$ in Cartesian coordinates. Based on our hitting times, we now define

(2) \begin{equation}K'_t\;:\!=\;\sup\{n>0\colon {\tau}^\theta_{n}\le K_t\}\end{equation}

as the index of the largest stopping time before $K_t$ . In particular, writing

\[{\mathcal Y }'_t\;:\!=\;\sum_{i=1}^{K'_t} Y'_i,\]

where $U_i'\;=\!:\;(X_i',Y_i')$ in Cartesian coordinates, we have

(3) \begin{align} \sum_{i=1}^{K_t} Y_i= {\mathcal Y }'_t +\sum_{i=\tau_{K'_t}^\theta+1}^{K_t} Y_i. \end{align}
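In computational terms, (2) and (3) reduce to cumulative sums of the horizontal increments: since $\sum_{i=1}^{n} X'_i=\sum_{i=1}^{\tau^\theta_n}X_i$ , the index $K'_t$ is the largest n whose cumulative horizontal segment progress stays strictly below t. The following minimal sketch is our illustration, with synthetic i.i.d. stand-ins for the hard-to-sample segment law:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic i.i.d. stand-ins for the segment increments (X'_i, Y'_i); the
# true segment law induced by the navigation is not sampled here.
n = 10_000
x_seg = rng.exponential(scale=1.0, size=n)      # stand-in for X'_i > 0
y_seg = rng.normal(loc=0.0, scale=1.0, size=n)  # stand-in for Y'_i

t = 500.0
partial = np.cumsum(x_seg)
# K'_t of (2): the number of complete segments whose cumulative horizontal
# progress stays strictly below t.
k_prime = int(np.searchsorted(partial, t, side="left"))
y_prime_t = y_seg[:k_prime].sum()               # Y'_t, the first sum in (3)
print(k_prime, y_prime_t)
```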

Let us start by noting that $U_1'$ possesses some exponential moments.

Lemma 2. For some $\gamma=(\gamma_1,\gamma_2)$ with $\gamma_1,\gamma_2>0$ , we have $\mathbb{E}[\!\exp\!(\langle \gamma, U'_1\rangle)]<\infty$ .

We present the proof of this lemma, along with the proofs of all other statements in this section, later in Section 5. In Section 4, roughly speaking, we make the assumption that $U_1'$ obeys a large-deviation principle, which is the critical input in order to establish the large-deviation principle for ${\mathcal Y }_t$ with $\pi/4<\theta<\pi/2$ .

For the moderate deviations, the difference between ${\mathcal Y }_t$ and ${\mathcal Y }_t'$ is irrelevant in the following sense.

Proposition 1. For any $0<\lambda$ , $0<\theta < \pi/2$ , and $0<\varepsilon<1/2$ , $\{t^{-1/2-\varepsilon} {\mathcal Y }_t\}_{t\geq 0}$ and $\{t^{-1/2-\varepsilon} {\mathcal Y }'_t\}_{t\geq 0}$ are exponentially equivalent, i.e. for any $\delta>0$ ,

(4) \begin{equation}\limsup_{t\uparrow\infty} t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }'_t|\geq \delta t^{1/2+\varepsilon})= -\infty.\end{equation}

Now, we want to deal with the randomness in the number of steps $K'_t$ in the definition of ${\mathcal Y }'_t$ . For this, we define, for $w>0$ , the process

\[{\mathcal Y }^w_t \;:\!=\;\sum_{i=1}^{{\lfloor tw \rfloor}} Y'_i\]

and establish in the following statement that it obeys the moderate-deviation principle.

Proposition 2. For any $0<\lambda$ , $0<\theta< \pi/2$ , $0<w$ , and $0<\varepsilon<1/2$ , $\{t^{-1/2-\varepsilon} {\mathcal Y }^w_t\}_{t\geq 0}$ obeys the moderate-deviation principle with rate function $I_{\lambda,\theta}^w(x)\;:\!=\;x^2/(2w\mathbb{E}[Y_1'^2])$ .

Now let $\kappa\;:\!=\;\mathbb{E}[X'_1]^{-1}$ denote the inverse expected horizontal progress. By Lemma 2, we have $\kappa>0$ and, by Lemma 8 below, also $\kappa<\infty$ . The following result now presents the final ingredient for the proof of Theorem 1 in the case $0<\theta< \pi/2$ .

Proposition 3. For any $0<\lambda$ , $0<\theta< \pi/2$ , and $0<\varepsilon<1/2$ , $\{t^{-1/2-\varepsilon} {\mathcal Y }^\kappa_t\}_{t\geq 0}$ and $\{t^{-1/2-\varepsilon} {\mathcal Y }'_t\}_{t\geq 0}$ are exponentially equivalent, i.e. for any $\delta>0$ ,

(5) \begin{equation}\limsup_{t\uparrow\infty}t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }^\kappa_t-{\mathcal Y }'_t|\geq \delta t^{1/2+\varepsilon})= -\infty.\end{equation}

Proof of Theorem 1, case $0<\theta< \pi/2$ . By Propositions 1 and 3, we have the exponential equivalence of ${\mathcal Y }_t$ and ${\mathcal Y }_t^\kappa$ and, by Proposition 2, we have that ${\mathcal Y }_t^\kappa$ obeys the moderate-deviation principle with rate function

(6) \begin{align}I_{\lambda,\theta}(x)\;:\!=\;x^2\frac{\mathbb{E}[X_1']}{2\mathbb{E}[{Y_1'}^2]},\end{align}

as desired. For the scale invariance, if we denote by $U'_{\lambda,1}=(X'_{\lambda,1}, Y'_{\lambda,1})$ the progress of the first independent segment in the navigation based on a Poisson point process with intensity $\lambda>0$ , then, by scaling both the coordinates by $\sqrt{\lambda}$ , we find that $(\sqrt{\lambda}X'_{\lambda,1}, \sqrt{\lambda}Y'_{\lambda,1})$ and $(X'_{1,1}, Y'_{1,1})$ have the same distribution. Therefore, we obtain

$$2\rho(\lambda,\theta)=\frac{\mathbb{E}[X'_{\lambda,1}]}{\mathbb{E}[{Y'_{\lambda,1}}^2]} =\frac{\sqrt{\lambda}\mathbb{E}[X'_{1,1}]}{\mathbb{E}[{Y'_{1,1}}^2]}=2\sqrt{\lambda}\rho(1,\theta),$$

which completes the proof.

Note that, when $0<\theta\le\pi/4$ , we have $U_1'=U_1$ and the expression (1) for $\rho$ follows from a simple computation; see, for example, the proof of Lemma 1 in Section 7. The case $\theta=\pi/2$ requires a refined construction of a renewal process. We present the details in Section 6.

4. Large Deviations in the Dependent Case

In this section we study the large-deviation behavior of the model in the dependent case, where $\pi/4<\theta<\pi/2$ . Let us start with the large deviations for the empirical average.

Lemma 3. For any $0<\lambda$ and $\pi/4<\theta<\pi/2$ , $\{n^{-1}\sum_{i=1}^{n}U'_i\}_{n\ge 1}$ satisfies the large-deviation principle with rate n and rate function

$$\mathcal J'_{\lambda,\theta}(u)\;:\!=\;\sup\{\langle \gamma, u \rangle -J'_{\lambda,\theta}(\gamma)\colon \gamma\in \mathbb{R}^2\},$$

where, for all $\gamma\in \mathbb{R}^2$ , $J'_{\lambda,\theta}(\gamma)\;:\!=\;\log\mathbb{E}[\!\exp\!(\langle \gamma, U'_1\rangle)]$ .

We present the proof of this lemma and the following statements at the end of the paper in Section 7. One would now like to establish a large-deviation principle for ${\mathcal Y }_t'$ similar to the one provided in Theorem 2. However, the situation is more complicated. The key reason is that the upper tails of $|U_1|$ are of exponential order $O(\!-\!t^2)$ (this is crucially used in step 2 of the proof of Theorem 2), whereas the upper tails of $|U'_1|$ are only of exponential order $O(\!-\!t)$ , as can be seen from the following statement.

Lemma 4. Let $\pi/4<\theta<\pi/2$ . Then, for all $n\geq 0$ , $\mathbb{P}(\tau^\theta_1>n)\geq ((4\theta-\pi)/(4\theta))^n$ .

We present the proof further below. As a consequence, for example, for $X'_1$ , we have, for all n and $s>0$ ,

\begin{align*}\mathbb{P}(X'_1>t)&\ge \mathbb{P}\Bigg(\sum_{i=1}^{{\tau}^\theta_1}X_i>t, {\tau}^\theta_1>n\Bigg)\\[5pt] &\ge \mathbb{P}({\tau}^\theta_1>n)-\mathbb{P}\Bigg(\sum_{i=1}^{n}\underline R_i \cos\theta\le t\Bigg)\\[5pt] &\ge {\operatorname e }^{an}-{\operatorname e }^{st+n\log\mathbb{E}[\!\exp\!(\!-\!s\underline R \cos\theta)]},\end{align*}

where $a\;:\!=\;\log((4\theta-\pi)/(4\theta))$ and $\underline{R}$ and $\{\underline{R}_i\}_{i \geq 1}$ are as in Lemma 5. Hence, the tail of $X'_1$ decays at most exponentially fast, i.e. $\mathbb{P}(X'_1>t)\ge {\operatorname e }^{-ct}$ for some $c>0$ and all sufficiently large t, provided that, for n coupled to t as $n=bt$ with $b>0$ , we have $s/b+\log\mathbb{E}[\!\exp\!(\!-\!s\underline R \cos\theta)]<a$ . But this is the case by first picking s large and then picking b large. We can proceed similarly for $Y'_1$ .

Hence, it is reasonable to assume that we have the following tail behavior within a segment.

Assumption 1. Let $0<\lambda$ and $\pi/4<\theta<\pi/2$ . Then, for all $a> 0, b\in \mathbb{R}$ , we have

$$\lim_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t\rfloor}X_i< at, \sum_{i=1}^{\lfloor t\rfloor+1}X_i\ge at, \sum_{i=1}^{\lfloor t\rfloor}Y_i\ge bt, {\tau}^\theta_1> t\Bigg)=- \mathcal H_{\lambda,\theta}(a,b)\in ({-}\infty,0).$$

Note that the rate function must then obey the usual scaling relation $\mathcal H_{\lambda,\theta}=\sqrt{\lambda}\mathcal H_{1,\theta}$ . With this in mind, we can prove the large-deviation principle for ${\mathcal Y }_t$ .

Theorem 3. Let Assumption 1 hold. Then, for any $0<\lambda$ and $\pi/4<\theta<\pi/2$ , $\{t^{-1} {\mathcal Y }_t\}_{t\geq 0}$ obeys the large-deviation principle with rate t and rate function

$$\mathcal I'_{\lambda,\theta}(x)\;:\!=\;\inf_{b\in\mathbb{R},\, c\in(0,1)}\big\{\inf\{\beta\mathcal J'_{\lambda,\theta}(c/\beta,b/\beta)\colon \beta>0\}+\inf\{d \mathcal H_{\lambda,\theta}((1-c)/d,(x-b)/d)\colon d>0\}\big\}.$$

Moreover, $\mathcal I'_{\lambda,\theta}(x)$ satisfies the scaling relation $ \mathcal I'_{\lambda,\theta}(x) = \sqrt{\lambda} \mathcal I'_{1,\theta}(x)$ .

Let us try to explain the rate function in words. In the dependent case, the rate function is given by an optimization between the cost of the unlikely vertical displacement (up to level b, represented by the term involving $\mathcal J'_{\lambda,\theta}$ ), which is cheaper to achieve if the navigation performs fewer steps (up to horizontal level $c<1$ ), and the then necessary cost produced by an unlikely large last horizontal step (of length of order $(1-c)t$ ) that covers the remaining vertical displacement, represented by the term involving $\mathcal H_{\lambda,\theta}$ . The proof is presented in Section 7.

5. Proofs for the Moderate Deviations in Case $\theta<\pi/2$

5.1. Exponential tails of the stopping times

Let us start by establishing exponential-decay properties of the inter-stopping time gaps.

Proposition 4. Let $0<\lambda$ and $0<\theta< \pi/2$ . Then, for all $n,k\in\mathbb{N}$ , there exist constants $c,C>0$ depending only on $\theta$ such that

(7) \begin{equation}\mathbb{P}({\tau}^{\theta}_k-{\tau}^{\theta}_{k-1}\ge n)\leq C {\operatorname e }^{-c n}.\end{equation}

In particular, for all $n,k\in \mathbb{N}$ , $\mathbb{P}({\tau}^{\theta}_k\ge n)\leq kC {\operatorname e }^{-c n/k}$ .

First note that it suffices to prove that $\mathbb{P}({\tau}^{\theta}_1\ge n)\leq C{\operatorname e }^{-cn}$ since the remaining statements then follow from the fact that $\{{\tau}^{\theta}_k-{\tau}^{\theta}_{k-1}\}_{k\geq1}$ are i.i.d. We write $\tau\;:\!=\;\tau^\theta_1$ for the rest of the proof. It is worth noting that ${\tau}$ is invariant under scaling both the coordinates by $\sqrt{\lambda}$ , and therefore, $\mathbb{P}({\tau}\ge n)$ does not depend on $\lambda$ . Note also that we can focus on the case $\pi/4<\theta<\pi/2$ since, for $\theta\le \pi/4$ , we have almost surely that $\tau=1$ as mentioned before; see the first paragraph in the proof of Theorem 2 in Section 7.

Our arguments rest on ideas developed in [Reference Coupier, Saha, Sarkar and Tran6], more precisely the proof of [Reference Coupier, Saha, Sarkar and Tran6, Proposition 3.1], where however a slightly different situation is analyzed. Let us define, for all $n\geq 0$ ,

\[L_{n}\;:\!=\;\sup\bigg\{x-\sum_{i=1}^n X_i \colon (x,y)\in H_{n}\bigg\}\vee 0,\]

the width of the history set, and write $U_n=(X_n,Y_n)=(R_n,\Phi_n)$ for the progress variables, first in Cartesian and alternatively in polar coordinates. In particular, $L_0=0$ and $\mathbb{P}({\tau}\ge n)=\mathbb{P}(H_m\neq\emptyset\text{ for all }0<m< n)$ .

Since $H_n \subseteq H_{n-1}\cup B(V_{n-1}, R_n)$ , we have

\begin{align*}L_n &\leq \sup\bigg\{ x-\sum_{i=1}^n X_i \colon (x,y)\in H_{n-1}\cup B(V_{n-1}, R_n)\bigg\}\vee 0\\[5pt] &= \sup\bigg\{x-\sum_{i=1}^n X_i \colon (x,y)\in H_{n-1}\bigg\} \vee \sup\bigg\{ x-\sum_{i=1}^n X_i \colon (x,y)\in B(V_{n-1}, R_n)\bigg\} \vee 0 \\[5pt] &= (L_{n-1}-X_n) \vee (R_n-X_n) \vee 0\\[5pt] &= \max\{ (L_{n-1}- X_n)_+, R_n-X_n \},\end{align*}

where $x\vee y \;:\!=\; \max\{x,y\}$ and $x_+\;:\!=\;\max\{x,0\}$ . Note that, if $\Phi_n\in [-\pi/2+\theta, \pi/2-\theta]\;=\!:\; T_\theta$ then the nth step escapes its immediate history, i.e. $H_n={\mathcal C }_{\theta}(V_n)\cap H_{n-1}$ . In that case $H_n \subseteq H_{n-1}$ and, thus, we have

\[L_n \leq (L_{n-1}- X_n)_+.\]

Therefore, for all $n\geq 1$ , we have

$L_{n} \leq (L_{n-1}-X_{n})_+ \mathbb{1}{\{\Phi_n\in T_\theta\}} + \max\{ (L_{n-1}-X_{n})_+, (R_{n}- X_{n}) \}\mathbb{1}{\{\Phi_n\notin T_\theta\}}.$

The key ingredient for the proof is the following (random) monotone coupling of the progress variables with respect to an i.i.d. sequence, which is also used on multiple occasions throughout the proof section. Denoting $x\wedge y\;:\!=\; \min\{x,y\}$ , we have the following statement.

Lemma 5. For any $0<\theta< \pi/2$ , there exists a sequence of i.i.d. quadruplets of random variables $\{(\underline{R}_n, \underline{\Phi}_n, \overline{R}_n, \overline{\Phi}_n)\}_{n\geq 1}$ , defined on an extended probability space, such that, for all $n\geq 1$ ,

(i) $\underline{R}_n \leq R_n \leq \overline{R}_n$ a.s.,

(ii) if $\underline{\Phi}_n \in T_\theta$ then $\Phi_n=\underline{\Phi}_n=\overline{\Phi}_n$ a.s., and

(iii) $(\underline{R}_n, \underline{\Phi}_n, \overline{R}_n, \overline{\Phi}_n) \stackrel{\textrm{d}}{=} (\underline{R}, \underline{\Phi}, \overline{R}, \overline{\Phi})$ , where $(\underline{R}, \underline{\Phi}) \;:\!=\; \textrm{argmin} \{|v|\colon v \in {\mathcal C }_{\theta} \cap {\mathcal P }_{\lambda} \}$ and $(\overline{R}, \overline{\Phi}) \;:\!=\; \textrm{argmin} \{|v|\colon v \in {\mathcal C }_{\theta\wedge(\pi/2-\theta)} \cap {\mathcal P }_{\lambda} \}$ . In particular, $\underline{\Phi}\sim\textrm{Uniform}(\!-\!\theta,\theta)$ , $\overline{\Phi}\sim\textrm{Uniform} (\!-\!\theta\wedge(\pi/2-\theta), \theta\wedge(\pi/2-\theta))$ , $\mathbb{P}(\underline{R}>t)= {\operatorname e }^{-\lambda \theta t^2}$ , and $\mathbb{P}(\overline{R}>t)= {\operatorname e }^{-\lambda (\theta\wedge(\pi/2-\theta)) t^2}$ .

We present the proof of this lemma later in this section. Now, observe that, almost surely,

\begin{align*} L_n &\leq (L_{n-1}-R_n\cos\theta)_+\mathbb{1}{\{\Phi_n\in T_\theta\}} + \max\{ (L_{n-1}-R_n\cos\theta)_+, R_n(1-\cos\theta) \} \mathbb{1}{\{\Phi_n\notin T_\theta\}}\\[5pt] & \leq (L_{n-1}-\underline{R}_n\cos\theta)_+\mathbb{1}{\{\Phi_n\in T_\theta\}} + \max\{ L_{n-1}, \overline{R}_n \} \mathbb{1}{\{\Phi_n\notin T_\theta\}}\\[5pt] & = (L_{n-1}-\underline{R}_n\cos\theta)_+\mathbb{1}{\{ \Phi_n\in T_\theta, \underline{\Phi}_n\in T_\theta\}}+(L_{n-1}-\underline{R}_n\cos\theta)_+\mathbb{1}{\{ \Phi_n\in T_\theta, \underline{\Phi}_n\notin T_\theta\}}\\[5pt] &\hskip10pt + \max\{ L_{n-1}, \overline{R}_n \} \mathbb{1}{\{\Phi_n\notin T_\theta, \underline{\Phi}_n\in T_\theta\}}+ \max\{ L_{n-1}, \overline{R}_n \} \mathbb{1}{\{\Phi_n\notin T_\theta, \underline{\Phi}_n\notin T_\theta\}}\\[5pt] & \leq (L_{n-1}-\underline{R}_n\cos\theta)_+\mathbb{1}{\{\underline{\Phi}_n\in T_\theta\}} + \max\{ L_{n-1}, \overline{R}_n \} \mathbb{1}{\{\underline{\Phi}_n\notin T_\theta\}}.\end{align*}

Note that, by Lemma 5(ii), we have $\{\underline{\Phi}_n \in T_\theta\} \subseteq \{\Phi_n \in T_\theta\}$ , which, together with the fact that $L_{n-1} - \underline{R}_n \cos\theta \leq L_{n-1} \leq \max\{L_{n-1}, \overline{R}_n\}$ , implies the final inequality. We now make a comparison with a Markov chain $\{M_n\}_{n\ge 0}$ on $\mathbb{N}_0$ , defined as $M_0\;:\!=\;0$ and

\begin{align*}M_n\;:\!=\;(M_{n-1}-\lfloor \underline{R}_n\cos\theta \rfloor)_+\mathbb{1}{\{\underline{\Phi}_n\in T_\theta\}} + \max\{ M_{n-1}, \lceil \overline{R}_n \rceil \} \mathbb{1}{\{\underline{\Phi}_n\notin T_\theta\}},\quad n>0.\end{align*}
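Before turning to the tail bound, note that the chain $\{M_n\}_{n\ge 0}$ can be simulated by drawing $(\underline{R}_n, \underline{\Phi}_n, \overline{R}_n)$ directly from the coupling construction of Lemma 5. The following sketch is our illustration, not code from the paper; the parameter values are picked only so that the (rather crude) dominating chain returns to 0 within a reasonable number of steps, and the experiment empirically exhibits the light tail of the hitting time asserted in Lemma 6 below.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_step(lam, theta):
    """One joint draw of (R_under, Phi_under, R_over) as in Lemma 5(iii):
    nearest point of a Poisson process in the wide cone C_theta and nearest
    point in the narrow cone C_{pi/2 - theta} (case pi/4 < theta < pi/2)."""
    narrow = np.pi / 2 - theta
    r0 = 6.0 / np.sqrt(lam * narrow)  # narrow cone empty w.p. ~ exp(-36)
    while True:
        n = rng.poisson(lam * theta * r0**2)   # points in the sector of radius r0
        r = r0 * np.sqrt(rng.uniform(size=n))  # radial density proportional to r
        phi = rng.uniform(-theta, theta, size=n)
        in_narrow = np.abs(phi) <= narrow
        if in_narrow.any():
            i = np.argmin(r)
            return r[i], phi[i], r[in_narrow].min()
        r0 *= 2.0                              # enlarge the window and resample

def tau_M(lam=0.2, theta=0.3 * np.pi):
    """Hitting time of 0 of the dominating chain M_n started at M_0 = 0."""
    narrow = np.pi / 2 - theta
    m, n = 0, 0
    while True:
        n += 1
        r_u, phi_u, r_o = sample_step(lam, theta)
        if np.abs(phi_u) <= narrow:            # Phi_under in T_theta
            m = max(m - int(np.floor(r_u * np.cos(theta))), 0)
        else:
            m = max(m, int(np.ceil(r_o)))
        if m == 0:
            return n

taus = np.array([tau_M() for _ in range(1000)])
print(taus.mean(), np.quantile(taus, [0.5, 0.99]))  # light, exponential-type tail
```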

The following result establishes that the stopping time ${\tau}^M\;:\!=\;\inf\{n>0\colon M_n=0\}$ of the Markov chain to hit 0 has an exponential tail.

Lemma 6. For any $n>0$ , $\mathbb{P}({\tau}^M\ge n)\le C{\operatorname e }^{- c n}$ for some $ C, c>0$ .

Proof of Proposition 4. The reason for constructing the Markov chain is to dominate the sequence of widths $\{L_n\}_{n\geq 0}$ . More precisely, by construction, $M_n\geq L_n$ for all $n\geq 0$ and, in particular, when $M_n=0$ , we have $L_n=0$ . Therefore, almost surely, ${\tau}^M \geq {\tau}$ , and the claim follows from Lemma 6.

Proof of Lemma 6. Note that the Markov chain $\{M_n\}_{n\ge 0}$ is irreducible since, for any $m> 0$ ,

(8) \begin{align}\begin{split} \mathbb{P}(M_{1}=m \mid M_0=0 )& \geq \mathbb{P}( \lceil \overline{R}_1 \rceil =m \text{ and } \underline{\Phi}_1\notin T_\theta )>0\quad \text{ and}\\[5pt] \mathbb{P}(M_{1}=0 \mid M_0=m )& \geq \mathbb{P}( \lfloor \underline{R}_1\cos\theta \rfloor >m \text{ and } \underline{\Phi}_1\in T_\theta )>0. \end{split}\end{align}

Now, we demonstrate the recurrence of $\{M_n\}_{n\ge 0}$ by establishing that 0 is a recurrent state. From Lemma 5, we know that, for all $h\geq 2$ ,

\[\mathbb{P}( \lceil \overline{R}_1 \rceil >h)\leq \mathbb{P}( \overline{R}_1 >h/2) = \exp\!(\!-\!\lambda(\pi/2-\theta)h^2/4).\]

Let us define the sequence $\{a_n\}_{n\geq 1}$ as

\[a_n\;:\!=\;\Bigg\lceil \bigg( \frac{8}{\lambda(\pi/2-\theta)}\log n \bigg)^{1/2} \Bigg\rceil.\]

Then, for all sufficiently large n, we have $\mathbb{P}(\lceil \overline{R}_1 \rceil > a_n) \leq n^{-2}$ . This implies that, for all sufficiently large n,

\begin{align*} \mathbb{P}\Big(\max_{i=1}^n \lceil \overline{R}_i \rceil\leq a_n\Big)\geq (1-n^{-2})^n\end{align*}

and, therefore,

(9) \begin{align} \lim_{n\uparrow\infty} \mathbb{P}\Big(\max_{i=1}^n \lceil \overline{R}_i \rceil\leq a_n\Big) =1. \end{align}

Now, let $G_n\;:\!=\; \mathbb{1}{\{\underline{\Phi}_n\in T_\theta \text{ and } \underline{R}_n\cos\theta \geq 1\}} $ and $q\;:\!=\; \mathbb{P}(G_1=1)>0$ . From the definition of $M_n$ , it follows that, whenever $G_n=1$ , we have $M_n\leq (M_{n-1}-1)_+$ . We define $\mathcal{A}_n$ to be the event that the finite sequence $\{G_i\}_{i=1}^n$ has a run of 1s of length at least $a_n $ . Since $\{G_i\}_{i=1}^n$ are i.i.d., we have

\[\mathbb{P}({\mathcal{A}^c_n}) \leq ( 1-q^{ a_n } )^{\left\lfloor n/ a_n \right\rfloor} .\]

Furthermore, since, for all sufficiently large n,

\[ a_n \leq \frac{1}{2(\!-\!\log q)} \log n,\]

we have, for all sufficiently large n,

\[\mathbb{P}(\mathcal{A}^c_n) \leq (1-n^{-1/2})^{\left\lfloor 2(\!-\!\log q)n/\log n \right\rfloor},\]

which tends to 0 as $n\to\infty$ . Therefore, we obtain

(10) \begin{align}\lim_{n\uparrow\infty}\mathbb{P}(\mathcal{A}_n)=1. \end{align}

Now, by construction, we have

\begin{align*} &\mathbb{P}(M_{i}=0\text{ for some $1\leq i\leq n$} \mid M_0=0 ) \\[5pt] &\qquad\geq \mathbb{P}\Big(\max_{i=1}^n M_{i}\leq a_n \text{ and $\{G_{i}\}_{i=1}^n$ has $a_n$ consecutive 1s} \,\,\Big|\,\, M_0=0 \Big) \\[5pt] &\qquad\geq \mathbb{P}\Big(\max_{i=1}^n \lceil \overline{R}_i \rceil\leq a_n \text{ and $\{G_{i}\}_{i=1}^n$ has $a_n$ consecutive 1s} \,\,\Big| \,\, M_0=0 \Big) \\[5pt] &\qquad= \mathbb{P}\Big(\max_{i=1}^n \lceil \overline{R}_i \rceil\leq a_n \text{ and $\{G_{i}\}_{i=1}^n$ has $a_n$ consecutive 1s} \Big).\end{align*}

But this tends to 1 as $n\to\infty$ by (9) and (10). As a result, we have established that 0 is a recurrent state and, therefore, in view of (8), the Markov chain $\{M_n\}_{n\ge 0}$ is recurrent.

The rest of the proof is similar to the proof of [Reference Asmussen2, Proposition 5.5], but we include some details for the convenience of the reader. Observe that

(11) \begin{align}\begin{split}&\mathbb{E}[{\operatorname e }^{M_1}\mid M_0=k]\\[5pt] &\qquad=\mathbb{E}\big[\!\exp\big((k-\lfloor \underline{R}_1\cos\theta \rfloor)_+\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} + \max\{ k, \lceil \overline{R}_1 \rceil \} \mathbb{1}{\{\underline{\Phi}_1\notin T_\theta\}}\big)\big]\\[5pt] &\qquad={\operatorname e }^k\mathbb{E}\big[\!\exp\big(\!-\!\min\{k,\lfloor \underline{R}_1\cos\theta \rfloor\}\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} + \max\{ 0, \lceil \overline{R}_1 \rceil -k \} \mathbb{1}{\{\underline{\Phi}_1\notin T_\theta\}}\big) \big] \end{split}\end{align}

and note that

\[\!\exp\big(\!-\!\min\{k,\lfloor \underline{R}_1\cos\theta \rfloor\}\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} + \max\{ 0, \lceil \overline{R}_1 \rceil -k \} \mathbb{1}{\{\underline{\Phi}_1\notin T_\theta\}}\big)\le \exp\!( \lceil \overline{R}_1 \rceil ),\]

which has finite expectation. Therefore, for all $k\geq 0$ ,

(12) \begin{equation}\mathbb{E}[\!\exp\!(M_1)\mid M_0=k] <\infty.\end{equation}

Moreover, by the dominated-convergence theorem, we get

\begin{align*}&\lim_{k\uparrow\infty} \mathbb{E}[\!\exp\!(\!-\!\min\{k,\lfloor \underline{R}_1\cos\theta \rfloor\}\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} + \max\{ 0, \lceil \overline{R}_1 \rceil -k \} \mathbb{1}{\{\underline{\Phi}_1\notin T_\theta\}}) ]\\[5pt] &\qquad=\mathbb{E}[\!\exp\!(\!-\!\lfloor \underline{R}_1\cos\theta \rfloor\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} ) ]\\[5pt] &\qquad <1\end{align*}

and, thus, there exists $r>1$ and $k_0\geq 0$ such that, for all $k>k_0$ ,

\[\mathbb{E}[\!\exp\!(\!-\!\min\{k,\lfloor \underline{R}_1\cos\theta \rfloor\}\mathbb{1}{\{\underline{\Phi}_1\in T_\theta\}} + \max\{ 0, \lceil \overline{R}_1 \rceil -k \} \mathbb{1}{\{\underline{\Phi}_1\notin T_\theta\}})] <r^{-1}.\]

But this, together with (11), implies that, for all $k>k_0$ ,

(13) \begin{equation}\mathbb{E}[\!\exp\!(M_1)\mid M_0=k] < r^{-1}\exp\!(k).\end{equation}

Let $\sigma\;:\!=\; \inf\{n>0\colon M_n\le k_0\}$ denote the first time that $M_n$ is below $k_0$ and fix any $l>k_0$ . In view of (13), note that, for $M_0=l$ , the process $\{Y_n\}_{n\geq 0}$ defined as $Y_n\;:\!=\;r^{n\wedge \sigma}\exp\!(M_{n\wedge \sigma})$ is a nonnegative supermartingale. Therefore, by the recurrence of the Markov chain $\{M_n\}_{n\ge 0}$ , almost surely,

\[Y_n\to Y_{\infty}\;:\!=\; r^{ \sigma}\exp\!(M_{ \sigma})\geq r^{\sigma}.\]

This, together with Fatou’s lemma, implies that

\[\mathbb{E}[r^{\sigma} \mid M_0=l] \leq \mathbb{E}[Y_{\infty} \mid M_0=l] \leq \liminf_{n\uparrow\infty} \mathbb{E}[Y_{n} \mid M_0=l] \leq \mathbb{E}[Y_{0} \mid M_0=l]=\exp\!(l).\]

Since $l>k_0$ is arbitrary, we have, for all $l>k_0$ ,

\[\mathbb{E}[r^{\sigma}|M_0=l] \leq \exp\!(l).\]

Therefore, for any $z\leq k_0$ ,

\begin{align*} \mathbb{E}[r^{\sigma} \mid M_0=z] &\leq r+r\sum_{l>k_0} \mathbb{P}(M_1=l \mid M_0=z) \mathbb{E}[r^{\sigma} | M_0=l]\\ & \leq r +r\sum_{l>k_0} \mathbb{P}(M_1=l \mid M_0=z) \exp\!(l) \\ &\leq r +r \mathbb{E}[\!\exp\!(M_1) \mid M_0=z] \\ &<\infty.\end{align*}

Note that, for any $z\leq k_0$ and $l>k_0$ ,

\[\mathbb{P}(M_1=l \mid M_0=z)= \mathbb{P}(\underline{\Phi}_1\notin T_\theta, \lceil \overline{R}_1\rceil =l) = \mathbb{P}(M_1=l \mid M_0\leq k_0),\]

which does not depend on z. Therefore, if we define $\{\sigma_i\}_{i\geq 0}$ as $\sigma_0\;:\!=\;0$ and, for all $i\ge 1$ ,

\[\sigma_{i}\;:\!=\; \inf\{n> \sigma_{i-1}\colon M_{n}\le k_0\} ,\]

then $\{\sigma_{i}-\sigma_{i-1}\}_{i\ge 1}$ are i.i.d. copies of $\sigma$ . Moreover, for all $z\leq k_0$ ,

\[c_1\;:\!=\;\mathbb{E}[r^{\sigma} \mid M_0\le k_0]=\mathbb{E}[r^{\sigma} \mid M_0=z]<\infty.\]

We choose $c_2>0$ such that ${c_1}^{c_2}< r$ . Then, by Markov’s inequality,

\[ \mathbb{P}(\sigma_{\lfloor c_2n\rfloor} \geq n \mid M_0=0) \leq r^{-n}\mathbb{E}[r^{\sigma_{\lfloor c_2n\rfloor}} \mid M_0=0] = r^{-n}\mathbb{E}[r^{\sigma_1} \mid M_0=0]^{\lfloor c_2n\rfloor}\leq (r^{-1}{c_1}^{c_2})^n ,\]

and thus,

\begin{align*} &\mathbb{P}({\tau}^M>n \mid M_0=0\big) \\ &\qquad\le \mathbb{P}({\tau}^M>n, \sigma_{\lfloor c_2n\rfloor} <n \mid M_0=0) + \mathbb{P}(\sigma_{\lfloor c_2n\rfloor} \geq n \mid M_0=0) \\ &\qquad \le \mathbb{P}\Bigg(\bigcap_{i=1}^{\lfloor c_2n\rfloor} \{ \underline{\Phi}_{\sigma_i+1}\notin T_\theta \text{ or } \lfloor \underline{R}_{\sigma_i+1}\cos\theta \rfloor <k_0 \} \Bigg) + \mathbb{P}(\sigma_{\lfloor c_2n\rfloor} \geq n\mid M_0=0) \\ &\qquad \le \mathbb{P}( \underline{\Phi}_{1}\notin T_\theta \text{ or } \lfloor \underline{R}_{1}\cos\theta \rfloor <k_0 )^{\lfloor c_2n\rfloor} + (r^{-1}{c_1}^{c_2})^n \\ &\qquad\le C_1 {\operatorname e }^{-cn}\end{align*}

for some $C_1,c>0$ . This proves the result.

Now, the only thing that remains to be proven is Lemma 5. For this, let us first verify the following statement.

Lemma 7. For any $n\geq 0 $ and $\pi/4\le \theta<\pi/2$ , we have ${\mathcal C }_{\pi/2-\theta}(V_{n}) \cap H_{n} = \emptyset$ .

Proof of Lemma 7. From the definition of $H_n$ , we see that, for any $n\geq0$ ,

\[ H_n\subseteq \bigcup_{m=0}^{n-1} B(V_m, R_{m+1}). \]

Now, since, for any $0\le m \le n-1$ , $V_n\in {\mathcal C }_{\theta}(V_{m})$ , it follows that $R_{m+1}\le |V_n-V_m|$ . Therefore,

\[H_n\subseteq \bigcup_{m=0}^{n-1} B(V_m, |V_n-V_m|).\]

Hence, it suffices to show that, for any $0\le m \le n-1$ , ${\mathcal C }_{\pi/2-\theta}(V_{n}) \cap B(V_m, |V_n-V_m|)=\emptyset$ . For this, note that the lines

\begin{align*} L_{n}(\pi/2-\theta) &\;:\!=\;V_n+\{(r, \varphi)\colon r>0 , \varphi=\pi/2-\theta\} \quad\text{ and } \\[5pt] L_{n}(\!-\!\pi/2+\theta)&\;:\!=\;V_n+\{(r, \varphi)\colon r>0 , \varphi=-\pi/2+\theta\}\end{align*}

are the two boundary lines of ${\mathcal C }_{\pi/2-\theta}(V_{n})$ . Hence, it is in turn enough to prove that, for any $0\le m \le n-1$ , $(L_{n}(\pi/2-\theta) \cup L_{n}(\!-\!\pi/2+\theta)) \cap B(V_m, |V_n-V_m|)=\emptyset$ .

We now fix $m<n$ and write $V_m$ and $V_n$ in Cartesian coordinates as $V_m=(V_{m,1},V_{m,2} )$ and $V_n=(V_{n,1},V_{n,2} )$ . If $V_{m,2}=V_{n,2}$ then $|V_n-V_m|= V_{n,1}-V_{m,1}$ and since, for any $x=(x_1,x_2)\in L_{n}(\pi/2-\theta) \cup L_{n}(\!-\!\pi/2+\theta)$ , we have $x_1> V_{n,1}> V_{m,1}$ , it follows that

\[|x-V_m|\geq x_1- V_{m,1}> V_{n,1}- V_{m,1} =|V_n-V_m|.\]

Therefore, $(L_{n}(\pi/2-\theta) \cup L_{n}(\!-\!\pi/2+\theta)) \cap B(V_m, |V_n-V_m|)=\emptyset$ .

Now, suppose that $V_{m,2}\neq V_{n,2}$ . Without loss of generality, we assume that $V_{m,2}< V_{n,2}$ . Then, for any $x=(x_1,x_2)\in L_{n}(\pi/2-\theta)$ , we have $x_1> V_{n,1}> V_{m,1}$ and $x_2> V_{n,2} > V_{m,2}$ , which implies that

\[|x-V_m| = \sqrt{(x_1- V_{m,1})^2+(x_2- V_{m,2})^2} > \sqrt{(V_{n,1}- V_{m,1})^2+(V_{n,2}- V_{m,2})^2} =|V_n-V_m|.\]

Therefore, $L_{n}(\pi/2-\theta) \cap B(V_m, |V_n-V_m|)=\emptyset$ . Now, we draw two horizontal line segments $\overline{V_m A}$ and $\overline{V_n B}$ passing through $V_m$ and $V_n$ , respectively, such that $\overline{V_m A}$ intersects $L_{n}(\!-\!\pi/2+\theta)$ at point C; see Figure 2 for an illustration. From the definition of $L_{n}(\!-\!\pi/2+\theta)$ , we know that $\angle B V_n C = \pi/2-\theta$ . Since $\overline{V_m A}$ and $\overline{V_n B}$ are parallel, we also have $\angle V_n C V_m = \pi/2-\theta$ . Furthermore, as $V_n\in {\mathcal C }_{\theta}(V_{m})$ , it follows that $\angle V_n V_m C \leq \theta$ . Now, focusing on the triangle $\triangle V_n V_m C$ , we observe the relationship

\[\angle V_n C V_m + \angle V_n V_m C +\angle V_m V_n C = \pi,\]

which implies that $\angle V_m V_n C \geq \pi/2$ , and this, in turn, implies that $\cos\angle V_m V_n C\leq 0$ . Therefore, for any $x\in L_{n}(\!-\!\pi/2+\theta)$ , by the law of cosines,

\[|x-V_m| = \sqrt{|V_n-V_m|^2+ |x-V_n|^2 - 2\cos\angle V_m V_n C\cdot|V_n-V_m|\cdot |x-V_n| } > |V_n-V_m|,\]

which means that $L_{n}(\!-\!\pi/2+\theta) \cap B(V_m, |V_n-V_m|)=\emptyset$ . This proves that, for any $0\le m \le n-1$ , ${\mathcal C }_{\pi/2-\theta}(V_{n}) \cap B(V_m, |V_n-V_m|)=\emptyset$ , thereby completing the proof.

Figure 2. Illustration for the proof of Lemma 7.

Proof of Lemma 5. For $0<\theta\le \pi/4$ , the random variables $\{(R_n,\Phi_n)\}_{n\ge 1}$ are i.i.d. Therefore, Lemma 5 holds trivially by taking $(\underline R_n, \underline\Phi_n)=(\overline R_n, \overline\Phi_n)=(R_n,\Phi_n)$ for all $n\ge1$ . So, we now assume that $\pi/4<\theta<\pi/2$ . We define a sequence of sets $\{D_n\}_{n\geq 1}$ as

\[D_n\;:\!=\; {\mathcal C }_{\theta}(V_{n-1}) \cap \overline{B(V_{n-1}, R_n)} \cap H^c_{n-1}.\]

Essentially, $D_n$ is the previously unexplored region that is searched at the nth step in order to find $V_n$ . Clearly, $\{D_n\}_{n\geq 1}$ are pairwise disjoint. Let $\{{\mathcal P }_{\lambda}^{(n)}\}_{n\geq 1}$ be i.i.d. copies of the Poisson point process ${\mathcal P }_{\lambda}$ . We define $\{{\mathcal Q }_{\lambda}^{(n)}\}_{n\geq 1}$ as

\[{\mathcal Q }_{\lambda}^{(n)}\;:\!=\; ({\mathcal P }_{\lambda}\cap D_n)\cup( {\mathcal P }_{\lambda}^{(n)}\cap D^c_n).\]

Since the $D_n$ are disjoint, $\{{\mathcal Q }_{\lambda}^{(n)}\}_{n\geq 1}$ is a sequence of i.i.d. Poisson point processes with intensity $\lambda$ . We write

\[\underline{W}_n \;:\!=\; \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\theta}(V_{n-1}) \cap {\mathcal Q }_{\lambda}^{(n)} \},\]

and define $(\underline{R}_n,\underline{\Phi}_n)$ to be the polar coordinates of $\underline{W}_n-V_{n-1}$ . Similarly, we write

\[\overline{W}_n \;:\!=\; \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\pi/2-\theta}(V_{n-1}) \cap {\mathcal Q }_{\lambda}^{(n)} \},\]

and define $(\overline{R}_n,\overline{\Phi}_n)$ to be the polar coordinates of $\overline{W}_n-V_{n-1}$ . From the definition, it follows that $(\underline{R}_n,\underline{\Phi}_n,\overline{R}_n,\overline{\Phi}_n)$ and $ (\underline{R},\underline{\Phi}, \overline{R},\overline{\Phi})$ are equal in distribution. Furthermore, note that, since $\{{\mathcal Q }_{\lambda}^{(n)}\}_{n\geq 1}$ are i.i.d., $\{(\underline{R}_n, \underline{\Phi}_n, \overline{R}_n, \overline{\Phi}_n)\}_{n\geq 1}$ are also i.i.d. We know that $(R_n,\Phi_n)$ are the polar coordinates of $V_n-V_{n-1}$ , where

\begin{align*}V_n &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\theta}(V_{n-1}) \cap {\mathcal P }_{\lambda} \}\\[5pt] &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\theta}(V_{n-1})\cap \overline{B(V_{n-1}, R_n)} \cap H^c_{n-1} \cap {\mathcal P }_{\lambda} \}\\[5pt] &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in D_n \cap {\mathcal P }_{\lambda} \}\\[5pt] &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in D_n \cap {\mathcal Q }_{\lambda}^{(n)} \}\\[5pt] &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\theta}(V_{n-1})\cap \overline{B(V_{n-1}, R_n)} \cap H^c_{n-1} \cap {\mathcal Q }_{\lambda}^{(n)} \}\\[5pt] &= \textrm{argmin} \{|v-V_{n-1}|\colon v \in {\mathcal C }_{\theta}(V_{n-1}) \cap H^c_{n-1} \cap {\mathcal Q }_{\lambda}^{(n)} \}.\end{align*}

Now, from Lemma 7, we obtain ${\mathcal C }_{\pi/2-\theta}(V_{n-1})\subseteq {\mathcal C }_{\theta}(V_{n-1}) \cap H^c_{n-1}\subseteq {\mathcal C }_{\theta}(V_{n-1})$ , which implies that $\overline{R}_n \geq R_n \geq \underline{R}_n$ . Additionally, if $\underline{\Phi}_n \in T_\theta$ then we have $(\underline{R}_n,\underline{\Phi}_n)=(R_n,\Phi_n)=(\overline{R}_n,\overline{\Phi}_n)$ . This concludes the proof.

5.2. Proof of Lemma 2 and Proposition 2

Proof of Lemma 2. Using Proposition 4, Lemma 5, and the Cauchy–Schwarz inequality, we get

\begin{align*}\mathbb{E}[{\operatorname e }^{\langle\gamma, U'_1\rangle}]\le \sum_{n\ge 1}\mathbb{E}[{\operatorname e }^{2\langle\gamma, \sum_{j=1}^n U_j\rangle}]^{1/2}\mathbb{P}({\tau}_1^{\theta}\ge n)^{1/2}\le\sum_{n\ge 1}\mathbb{E}[{\operatorname e }^{2(\gamma_1+\gamma_2)\overline{R}}]^{n/2}\sqrt{C}{\operatorname e }^{-cn/2}.\end{align*}

Since $\overline{R}$ has all exponential moments finite, for all sufficiently small $\gamma_1,\gamma_2>0$ , the above sum is also finite.

Proof of Proposition 2. By symmetry and the exponential Markov inequality, for sufficiently small $s>0$ ,

\begin{align*}\mathbb{P}(|Y'_1|\ge n^{1/2+\varepsilon})\le 2{\operatorname e }^{-sn^{1/2+\varepsilon}}\mathbb{E}[{\operatorname e }^{s Y_1'}]<\infty,\end{align*}

where we also used Lemma 2, and thus,

\begin{align*}\limsup_{n\uparrow\infty}n^{-2\varepsilon}\log\mathbb{P}(|Y'_1|\ge n^{1/2+\varepsilon})\le -s\limsup_{n\uparrow\infty}n^{1/2-\varepsilon}=-\infty.\end{align*}

Hence, $\{{\mathcal Y }_{\lfloor t\rfloor}^w\}_{t\ge 0}$ satisfies the conditions of [Reference Eichelsbacher and Löwe9, Theorem 2.2] and, thus, obeys the moderate-deviation principle with rate function $x^2/(2w\mathbb{E}[Y_1'^2])$ . But $\{{\mathcal Y }_{\lfloor t\rfloor}^w\}_{t\ge 0}$ and $\{{\mathcal Y }_{t}^w\}_{t\ge 0}$ are exponentially equivalent since, for all $\delta>0$ ,

\begin{align*} \limsup_{t\uparrow\infty} t^{-2\varepsilon}&\log \mathbb{P}(|{\mathcal Y }_{\lfloor t\rfloor}^w-{\mathcal Y }_{t}^w|\geq \delta t^{1/2+\varepsilon})\le \limsup_{t\uparrow\infty} t^{-2\varepsilon}\log \mathbb{P}\Bigg(\sum_{i=1}^{\lceil w+1\rceil}|Y_i'|\geq \delta t^{1/2+\varepsilon}\Bigg),\end{align*}

as $\lfloor tw\rfloor-\lfloor \lfloor t\rfloor w\rfloor\le \lceil w+1\rceil$ . Hence, again using symmetry, independence, and the exponential Markov inequality, for sufficiently small $s>0$ ,

\begin{align*}\mathbb{P}\Bigg(\sum_{i=1}^{\lceil w+1\rceil}|Y_i'|\geq \delta t^{1/2+\varepsilon}\Bigg)\le {\operatorname e }^{-s\delta t^{1/2+\varepsilon}}\mathbb{E}[{\operatorname e }^{s |Y_1'|}]^{\lceil w+1\rceil}\le {\operatorname e }^{-s\delta t^{1/2+\varepsilon}}2^{\lceil w+1\rceil}\mathbb{E}[{\operatorname e }^{s Y_1'}]^{\lceil w+1\rceil}<\infty,\end{align*}

and thus,

\begin{align*}\limsup_{t\uparrow\infty}t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }_{\lfloor t\rfloor}^w-{\mathcal Y }_{t}^w|\geq \delta t^{1/2+\varepsilon})\le -s\delta\limsup_{t\uparrow\infty}t^{1/2-\varepsilon}=-\infty,\end{align*}

as desired.

5.3. Proof of Proposition 1

Proof of Proposition 1. First note that we have ${\mathcal Y }_t=\sum_{i=1}^{K_t}Y_i+Y_{K_t+1}(t-\sum_{i=1}^{K_t}X_i)/X_{K_t+1}$ with $(t-\sum_{i=1}^{K_t}X_i)/X_{K_t+1}\le 1$ , and hence, by Lemma 5,

\begin{align*}\mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t'|\geq \delta t^{1/2+\varepsilon})&\le\mathbb{P}\Bigg(\sum_{i=\tau^\theta_{K'_t}+1}^{K_t+1}|Y_i|\geq \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &\le\mathbb{P}\Bigg(\sum_{i=\tau^\theta_{K'_t}+1}^{\tau^\theta_{K'_t+1}}|Y_i|\geq \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &\le\sum_{j=0}^{\lfloor t^2\rfloor}\mathbb{P}\Bigg(\sum_{i=\tau^\theta_{j}+1}^{\tau^\theta_{j+1}}|Y_i|\geq \delta t^{1/2+\varepsilon}, K'_t=j \Bigg) + \mathbb{P}(K'_t \ge t^2)\\[5pt] &\le t^2 \mathbb{P}\Bigg(\sum_{i=1}^{\tau^\theta_1}|Y_i|\geq \delta t^{1/2+\varepsilon}\Bigg) + \mathbb{P}(K'_t \ge t^2)\\[5pt] &\le t^2 \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t^{\varepsilon'}\rfloor}\overline{R}_i\geq \delta t^{1/2+\varepsilon}\Bigg) + t^2 \mathbb{P}(\tau^\theta_1 > \lfloor t^{\varepsilon'}\rfloor) + \mathbb{P}\Bigg(\sum_{i=1}^{ \lceil t^2\rceil}X'_i \le t \Bigg)\end{align*}

for some $\varepsilon'\in(2\varepsilon, 1/2+\varepsilon)$ . This, together with Proposition 4 and the exponential Markov inequality, implies that

\begin{align*} \mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t'|\geq \delta t^{1/2+\varepsilon})\le t^2{\operatorname e }^{-\delta t^{1/2+\varepsilon} }\mathbb{E}[{\operatorname e }^{\overline{R}}]^{\lfloor t^{\varepsilon'}\rfloor}+ t^2 C{\operatorname e }^{-c\lfloor t^{\varepsilon'}\rfloor} + {\operatorname e }^{t}\mathbb{E}[{\operatorname e }^{-X'_1}]^{\lceil t^2\rceil},\end{align*}

and therefore,

\begin{align*}&\limsup_{t\uparrow\infty} t^{-2\varepsilon}\log \mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t'|\geq \delta t^{1/2+\varepsilon})\\[5pt] &\qquad\le -\min\Big\{ \liminf_{t\uparrow\infty} t^{-2\varepsilon}(\delta t^{1/2+\varepsilon}- \lfloor t^{\varepsilon'}\rfloor \log \mathbb{E}[{\operatorname e }^{\overline{R}}] ),\liminf_{t\uparrow\infty} c\lfloor t^{\varepsilon'}\rfloor t^{-2\varepsilon},\\[5pt] &\hskip61pt -\limsup_{t\uparrow\infty}t^{-2\varepsilon}(t+\lceil t^2\rceil\log\mathbb{E}[{\operatorname e }^{-X'_1}] ) \Big\}\\[5pt] &\qquad=-\infty,\end{align*}

as desired.

5.4. Proof of Proposition 3

Before we prove Proposition 3, let us collect concentration properties of the horizontal displacement.

Lemma 8. We have $\kappa\;:\!=\; \mathbb{E}[X'_1]^{-1} \in(0,\infty)$ and, for all $\varepsilon>0$ , there exist constants $c_1, c_2>0$ such that, for all $t>0$ ,

(14) \begin{align}\mathbb{P}(K'_t/t \notin B_{\varepsilon}( \kappa))\leq c_1{\operatorname e }^{-c_2t}.\end{align}

Proof of Lemma 8. First, note that

\begin{align*} \kappa^{-1}&=\mathbb{E}[X_1'] \\[5pt] &\ge \mathbb{E}[X_1{\mathbb{1}}\{\tau^\theta_1=1\}] \\[5pt] &=\lambda\int_0^\infty\textrm{d} r\; r^2{\operatorname e }^{-\lambda\theta r^2}\int_{-\theta}^{\theta}\textrm{d} \varphi\; \cos\varphi{\mathbb{1}}\Bigg\{ \theta-\frac{\pi}{2}\le \varphi\le \frac{\pi}{2}- \theta\Bigg\} \\[5pt] &>0,\end{align*}

since independence after one step is achieved precisely if the angle is in the interval $[\theta-\pi/2,\pi/2- \theta]$ . Using Lemma 5 and Proposition 4 together with the Cauchy–Schwarz inequality, we get

\begin{align*}\kappa^{-1} &=\mathbb{E}[X_1']\\[5pt] &=\mathbb{E}\left[\sum_{i=1}^{\tau}X_i\right]\\[5pt] &\leq \mathbb{E}\left[\sum_{i=1}^{\tau}\overline{R}_i\right]\\[5pt] &\leq \sum_{i=1}^{\infty}\mathbb{E}[\overline{R}_i\mathbb{1}{\{\tau\ge i\}}]\\[5pt] &\le \sum_{i=1}^{\infty}(\mathbb{E}[\overline{R}_i^2]\mathbb{P}(\tau\ge i))^{1/2}\\[5pt] &\le \sum_{i=1}^{\infty} (\mathbb{E}[\overline{R}^2]C)^{1/2} {\operatorname e }^{-ci/2}\\[5pt] &<\infty.\end{align*}

For the second part of the statement, note that we can bound

(15) \begin{align}\mathbb{P}(K'_t/t \notin B_{\varepsilon}( \kappa))&\le\mathbb{P}(K'_t\le t(\kappa-\varepsilon))+\mathbb{P}(K'_t\ge t(\kappa+\varepsilon))\nonumber\\[5pt] &\le \mathbb{P}\Bigg(\sum_{i=1}^{\lceil t(\kappa-\varepsilon)\rceil}X'_i \ge t\Bigg)+\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t(\kappa+\varepsilon)\rfloor}X'_i < t\Bigg), \end{align}

where we used the fact that $K'_t$ , as defined in (2), can also be represented as

\begin{equation*}K'_t=\sup\bigg\{n>0\colon\sum_{i=1}^{n}X'_i < t \bigg\}.\end{equation*}

Now, by the exponential Markov inequality, for any $s>0$ ,

\begin{align*}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lceil t(\kappa-\varepsilon)\rceil}X'_i \ge t\Bigg)\le -s(1-(\kappa-\varepsilon)s^{-1}\log \mathbb{E}[{\operatorname e }^{s X_1'}]).\end{align*}

Using Lemma 5 and Proposition 4 together with the Cauchy–Schwarz inequality, we obtain

\begin{align*} \mathbb{E}[{\operatorname e }^{s X_1'}] &\leq \mathbb{E}[{\operatorname e }^{\sum_{i=1}^{\tau} s\overline R_i}] \\[5pt] &=\sum_{n=1}^{\infty} \mathbb{E}[{\operatorname e }^{\sum_{i=1}^{n} s\overline R_i}\mathbb{1}{\{\tau=n\}}] \\[5pt] &\le \sum_{n=1}^{\infty} (\mathbb{E}[{\operatorname e }^{\sum_{i=1}^{n} 2s\overline R_i}]\mathbb{P}(\tau=n))^{1/2} \\[5pt] &\le \sum_{n=1}^{\infty} (\mathbb{E}[{\operatorname e }^{ 2s\overline R}])^{n/2} \sqrt{C}{\operatorname e }^{-cn/2},\end{align*}

which is finite for some $s_0>0$ and all $0<s\leq s_0$, as a consequence of Lemma 5. Hence, we can use dominated convergence to conclude that $\lim_{s\downarrow 0} \mathbb{E} [ {\operatorname e }^{sX'_1}]=1$. Then, we have

\begin{align*} \lim_{s\downarrow 0} s^{-1}\log \mathbb{E}[{\operatorname e }^{s X_1'}] &= \lim_{s\downarrow 0} s^{-1}\log(1+ ( \mathbb{E}[{\operatorname e }^{s X_1'}]-1)) \\[5pt] &= \lim_{s\downarrow 0} s^{-1}( \mathbb{E}[{\operatorname e }^{s X_1'}]-1) \\[5pt] &= \mathbb{E}[ \lim_{s\downarrow 0} s^{-1}( {\operatorname e }^{s X_1'}-1)] \\[5pt] &= \mathbb{E}[X'_1] \\[5pt] &= \kappa^{-1}.\end{align*}

Here, the third equality also follows from dominated convergence using the bound $s^{-1}( {\operatorname e }^{s X_1'}-1)<s_0^{-1}( {\operatorname e }^{s_0 X_1'}-1)$ for all $0<s\le s_0$ , where $\mathbb{E}[s_0^{-1}( {\operatorname e }^{s_0 X_1'}-1)]<\infty$ . Therefore, there exists $c'_1>0$ such that

(16) \begin{align} \limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lceil t(\kappa-\varepsilon)\rceil}X'_i \ge t\Bigg)<-c'_1. \end{align}

Again, by using the exponential Markov inequality on the other term, we obtain, for all $s>0$ ,

\begin{align*}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t(\kappa+\varepsilon)\rfloor}-X'_i > -t\Bigg)\le s(1+(\kappa+\varepsilon)s^{-1}\log \mathbb{E}[{\operatorname e }^{-s X_1'}]).\end{align*}

Since, by the dominated-convergence theorem, $\lim_{s\downarrow 0} \mathbb{E} [ e^{-sX'_1}]=1$ , we have

\begin{align*} \lim_{s\downarrow 0} s^{-1}\log \mathbb{E}[{\operatorname e }^{-s X_1'}] &= \lim_{s\downarrow 0} s^{-1}\log(1- (1- \mathbb{E}[{\operatorname e }^{-s X_1'}])) \\[2pt] &= \lim_{s\downarrow 0} -s^{-1}(1- \mathbb{E}[{\operatorname e }^{-s X_1'}]) \\[2pt] &= \mathbb{E}[ \lim_{s\downarrow 0} -s^{-1}(1- {\operatorname e }^{-s X_1'})] \\[2pt] &= \mathbb{E}[-X'_1] \\[2pt] &= -\kappa^{-1}.\end{align*}

Here, the third equality follows from the dominated-convergence theorem, using the bound $s^{-1}\big(1- {\operatorname e }^{-s X'_1}\big)<X'_1$ , where $X'_1$ has finite expectation. Therefore, there exists $c'_2>0$ such that

(17) \begin{align} \limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t(\kappa+\varepsilon)\rfloor}-X'_i > -t\Bigg)<-c'_2. \end{align}

Now, combining (15), (16), and (17) proves the lemma.
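The renewal structure behind Lemma 8 is easy to explore in simulation. The following minimal Python sketch is purely illustrative: since the law of the regeneration increments $X'_i$ is not explicit, i.i.d. Rayleigh steps with tail ${\operatorname e }^{-\lambda\theta r^2}$ serve here only as a hypothetical stand-in, and the parameter values are arbitrary; the point is the concentration of $K'_t/t$ around $\kappa=\mathbb{E}[X'_1]^{-1}$ as in (14).

```python
import numpy as np

rng = np.random.default_rng(1)
lam, theta = 1.0, np.pi / 3   # illustrative intensity and cone half-angle

def sample_step(size):
    # Inverse-transform sampling of the stand-in law P(X > r) = exp(-lam*theta*r^2).
    return np.sqrt(-np.log(rng.random(size)) / (lam * theta))

def K_prime(t):
    # K'_t = sup{ n > 0 : sum_{i=1}^n X'_i < t }, counted incrementally.
    s, n = 0.0, 0
    while True:
        s += sample_step(1)[0]
        if s >= t:
            return n
        n += 1

t = 2000.0
kappa = 1.0 / (0.5 * np.sqrt(np.pi / (lam * theta)))  # 1 / E[X] for the Rayleigh stand-in
ratios = np.array([K_prime(t) / t for _ in range(200)])
print(ratios.mean(), ratios.std(), kappa)  # K'_t / t concentrates around kappa
```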

Proof of Proposition 3. Let us fix $\varepsilon'>0$ and note that

(18) \begin{equation}\begin{split}\mathbb{P}(|{\mathcal Y }^\kappa_t-{\mathcal Y }'_t|\ge \delta t^{1/2+\varepsilon})&\le\mathbb{P}\Bigg(\Bigg\{\Bigg\vert\sum_{i=\lfloor t\kappa\rfloor+1}^{K'_t}Y_i'\Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg\}\cap \Bigg\{\kappa\le \frac{K'_t}{t}\le\kappa+\varepsilon'\Bigg\}\Bigg)\\[2pt] &\hskip10pt+\mathbb{P}\Bigg(\Bigg\{\Bigg\vert\sum_{i=K'_t+1}^{\lfloor t\kappa\rfloor}Y_i'\Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg\}\cap \Bigg\{\kappa-\varepsilon'\le \frac{K'_t}{t}\le \kappa\Bigg\}\Bigg)\\[2pt] &\hskip10pt+\mathbb{P}\Bigg(\frac{K'_t}{t}\not\in B_{\varepsilon'}(\kappa)\Bigg),\end{split}\end{equation}

where, by Lemma 8, $\limsup_{t\uparrow\infty}t^{-2\varepsilon}\log\mathbb{P}(K'_t/t\not\in B_{\varepsilon'}(\kappa))=-\infty$ and, hence, the third summand plays no role, by logarithmic equivalence. For the first summand in (18), since the $Y_i'$ are i.i.d., we have the bound

(19) \begin{align}\sum_{m=\lfloor t\kappa \rfloor+1}^{\lfloor t(\kappa+\varepsilon') \rfloor}\mathbb{P}\Bigg(\Bigg\{\Bigg\vert\sum_{i=\lfloor t\kappa\rfloor+1}^{K'_t}Y_i'\Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg\}\cap \{ K'_t=m\}\Bigg)\le \sum_{m=1}^{r_1(t,\varepsilon')}\mathbb{P}\Bigg(\Bigg\vert\sum_{i=1}^{m}Y_i' \Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg),\end{align}

where the number of summands in the above sum is $r_1(t, \varepsilon')\;:\!=\;\lfloor t(\kappa+\varepsilon')\rfloor-\lfloor t\kappa\rfloor$ . Note that the corresponding upper bound is valid also for the second summand in (18) with $r_1(t,\varepsilon')$ replaced by $r_2(t, \varepsilon')\;:\!=\;\lfloor t\kappa\rfloor-\lfloor t(\kappa-\varepsilon')\rfloor$ . Furthermore, since $\limsup_{t\uparrow \infty}t^{-2\varepsilon}\log r(t,\varepsilon')=0$ , where $r(t,\varepsilon')=r_1(t,\varepsilon')\vee r_2(t,\varepsilon')$ , we have

\begin{align*}\limsup_{t\uparrow \infty}t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }^\kappa_t-{\mathcal Y }'_t|\ge \delta t^{1/2+\varepsilon})&\le \limsup_{t\uparrow \infty}t^{-2\varepsilon}\sup_{0\le \alpha \le \varepsilon'}\log\mathbb{P}\Bigg(\Bigg\vert\sum_{i=1}^{\lfloor t\alpha\rfloor}Y_i'\Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg)\\[2pt] &\le \limsup_{t\uparrow \infty}t^{-2\varepsilon}\log\mathbb{P}\Bigg(\Bigg\vert\sum_{i=1}^{\lfloor t\varepsilon'\rfloor}Y_i'\Bigg\vert\ge \delta t^{1/2+\varepsilon}\Bigg),\end{align*}

where we used the fact that, by symmetry and independence,

\begin{align*}\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t\alpha\rfloor}Y_i'\ge \delta t^{1/2+\varepsilon}\Bigg)&=2 \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t\alpha\rfloor}Y_i'\ge \delta t^{1/2+\varepsilon}, \sum_{i=\lfloor t\alpha\rfloor+1}^{\lfloor t\varepsilon'\rfloor}Y_i'\ge 0\Bigg)\le 2 \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t\varepsilon'\rfloor}Y_i'\ge \delta t^{1/2+\varepsilon}\Bigg),\end{align*}

and similarly for $\mathbb{P}(\sum_{i=1}^{\lfloor t\alpha\rfloor}Y_i'\le - \delta t^{1/2+\varepsilon})$. Hence, using the moderate-deviation principle, Proposition 2, we have

\begin{align*}\limsup_{t\uparrow \infty}t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }^\kappa_t-{\mathcal Y }'_t|\ge \delta t^{1/2+\varepsilon})&\le -I^{\varepsilon'}_{\lambda,\theta}(\delta),\end{align*}

which tends to $-\infty$ as $\varepsilon'$ tends to 0, as desired.

6. Proofs for the Moderate Deviations in Case $\theta=\pi/2$

As we can see, for $ \theta = \pi/2 $ , the history set will never vanish, so we never observe independence between consecutive steps. In this case, we define the renewal steps as in [Reference Coupier, Saha, Sarkar and Tran6, Section 4]. For a constant $ \varkappa \geq 6 $ and $ u \in \mathbb{R}^2 $ , we define $ u^{\rightarrow} \;:\!=\; u + (\varkappa,0) $ . We set $\varsigma_0 = 0$ and then iteratively define

\begin{align*} \varsigma_j\;:\!=\;\inf \left\{n>\varsigma_{j-1}\colon \begin{array}{l} \displaystyle L_n\leq \varkappa, \sum_{i=\varsigma_{j-1}+1}^n X_i \geq \varkappa+1 , \\[.5cm] \displaystyle\# \left( {\mathcal C }_{\pi/2}(V_n)\cap B(V_n, \varkappa+1)\cap {\mathcal P }_{\lambda}\right) =1 \text{ and} \\[.5cm] \displaystyle\# \left( {\mathcal C }_{\pi/2}(V_n^{\rightarrow})\cap B(V_n^{\rightarrow}, 1)\cap {\mathcal P }_{\lambda}\right) =1 \end{array} \right\}.\end{align*}

We call the $\varsigma_j$ th step the jth renewal step. At a renewal step, the width of the history set is at most $\varkappa$ , and the condition $\sum_{i=\varsigma_{j-1}+1}^{\varsigma_j} X_i \geq \varkappa+1$ ensures that the history sets at different renewal steps are disjoint. There is only one Poisson point in the set ${\mathcal C }_{\pi/2}(V_{\varsigma_j}) \cap B(V_{\varsigma_j}, \varkappa+1)$ , which is actually included in the subset ${\mathcal C }_{\pi/2}(V_{\varsigma_j}^{\rightarrow}) \cap B(V_{\varsigma_j}^{\rightarrow}, 1)$ . Therefore, this Poisson point is the next point in our exploration, i.e. it is $V_{\varsigma_j+1}$ . Moreover, if we had started our exploration from $V_{\varsigma_j}^{\rightarrow}$ then $V_{\varsigma_j+1}$ would still have been the next exploration point. As noted in [Reference Coupier, Saha, Sarkar and Tran6], conditional on being at the jth renewal step, this Poisson point $V_{\varsigma_j+1}$ is uniformly distributed on ${\mathcal C }_{\pi/2}(V_{\varsigma_j}^{\rightarrow}) \cap B(V_{\varsigma_j}^{\rightarrow}, 1)$ . Thus, for $j \geq 1$ , the paths starting from $V_{\varsigma_j}^{\rightarrow}$ and ending at $V_{\varsigma_{j+1}}$ are i.i.d. copies of the path starting at o until the first renewal step, with the following initial conditions.

(a) A single point is uniformly distributed in $ {\mathcal C }_{\pi/2}(o) \cap B(o, 1) $ .

(b) The set $ \left({\mathcal C }_{\pi/2}((\!-\!\varkappa,0)) \cap B((\!-\!\varkappa,0), \varkappa+1) \right) \setminus \left( {\mathcal C }_{\pi/2}(o) \cap B(o, 1) \right) $ contains no points.

(c) An independent Poisson point process is placed on $\left({\mathcal C }_{\pi/2}((\!-\!\varkappa,0)) \cap B((\!-\!\varkappa,0), \varkappa+1) \right)^c$ .

Let $\mathfrak Z=(\mathfrak Z_1,\mathfrak Z_2)$ be the position of the path at the first renewal step. Then, writing

\[\mathfrak U'_{j+1} \;:\!=\; V_{\varsigma_{j+1}} - V_{\varsigma_j},\]

the sequence $\{ \mathfrak U'_{j+1} \}_{j \geq 1}$ consists of i.i.d. copies of $\mathfrak Z + (\varkappa, 0)$ , as stated in [Reference Coupier, Saha, Sarkar and Tran6, Proposition 4.5]. Furthermore, as in [Reference Coupier, Saha, Sarkar and Tran6, Proposition 4.2], there exist constants $\mathfrak{c}, \mathfrak{C} > 0$ such that, for any $j \geq 0$ and any $n \geq 1$ ,

(20) \begin{align} \mathbb{P}(\varsigma_{j+1} - \varsigma_j > n) \leq \mathfrak{C} {\operatorname e }^{-\mathfrak{c} n}. \end{align}

As earlier, we now define

(21) \begin{equation}\mathfrak K'_t\;:\!=\;\sup\{n>0\colon \varsigma_{n}\le K_t\}\end{equation}

as the index of the last renewal time not exceeding $K_t$. In particular, writing

\[{\mathcal Y }''_t\;:\!=\;\sum_{i=2}^{\mathfrak K'_t} \mathfrak Y'_i,\]

where $\mathfrak U_i'\;=\!:\;(\mathfrak X_i',\mathfrak Y_i')$ in Cartesian coordinates, we have, for $\mathfrak K'_t\geq 1$ ,

(22) \begin{align} \sum_{i=1}^{K_t} Y_i= \sum_{i=1}^{\mathfrak K'_t} \mathfrak Y'_i+\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t} Y_i= \mathfrak Y'_1 +{\mathcal Y }''_t +\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t} Y_i. \end{align}

Note that, since $\mathfrak{Y}_1'$ does not have the same distribution as the other $\mathfrak{Y}'_i$ s, we have taken the sum from $i = 2$ in the definition of ${\mathcal Y }''_t$ to ensure that ${\mathcal Y }''_t$ is a sum of i.i.d. random variables. As earlier, for the moderate deviations, the difference between ${\mathcal Y }_t$ and ${\mathcal Y }_t''$ is irrelevant in the following sense.

Proposition 5. For any $\lambda>0$ and $0<\varepsilon<1/2$ , $\{t^{-1/2-\varepsilon} {\mathcal Y }_t\}_{t\geq 0}$ and $\{t^{-1/2-\varepsilon} {\mathcal Y }''_t\}_{t\geq 0}$ are exponentially equivalent, i.e. for any $\delta>0$ ,

\begin{equation*}\limsup_{t\uparrow\infty} t^{-2\varepsilon}\log\mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }''_t|\geq \delta t^{1/2+\varepsilon})= -\infty.\end{equation*}

Note that we cannot use Lemma 5 to obtain a bound for the $Y_i$ s in the case $\theta = \pi/2$ . Therefore, we need the following result to prove Proposition 5.

Lemma 9. For any $0<\theta\leq \pi/2$ , there exist two sequences of nonnegative random variables $\{\tilde{X}_i\}_{i\geq 1}$ and $\{\tilde{Y}_i\}_{i\geq 1}$ , defined on the same probability space as $\{(X_i,Y_i)\}_{i\ge 1}$ , such that

  1. (i) $\{\tilde{X}_i\}_{i\geq 1}$ are i.i.d. with $\mathbb{P}(\tilde{X}_1\geq t)= {\operatorname e }^{-\lambda\theta t^2}$ for all $t\geq 0$ , and $\sum_{i=1}^n X_i\le \sum_{i=1}^n \tilde X_i$ for all $n\geq 1$ a.s.;

  2. (ii) $\{\tilde{Y}_i\}_{i\geq 1}$ are i.i.d. with $\mathbb{P}(\tilde{Y}_1\geq t) = {\operatorname e }^{-\lambda\theta t^2/2}$ for all $t\geq 0$ , and $\sum_{i=1}^n Y_i\le \sum_{i=1}^n \tilde Y_i$ for all $n\geq 1$ a.s.

Proof of Lemma 9. We treat the domination in the vertical and horizontal direction separately. Let us start with the horizontal direction.

Part (i): Existence of $\{\tilde{X}_i\}_{i\geq 1}$. Let $\{U_i\}_{i\geq1}$ be the sequence of progress random variables. Recall that we write $U_i=(X_i,Y_i)$ in Cartesian coordinates and $U_i=(R_i,\Phi_i)$ in polar coordinates. Recall further that $V_0=o$ and $V_k=(\sum_{i=1}^k X_i, \sum_{i=1}^k Y_i )$ for all $k\geq1$ . Now, we set $\tilde{V}_0\;:\!=\;V_0$ and define the sequences $\{\tilde{V}_k\}_{k\geq 1}$ , $\{\tilde{W}_k\}_{k\geq 1}$ , and $\{\tilde{X}_k\}_{k\geq 1}$ using the following recursive equations:

\begin{align*} \tilde{W}_k &\;:\!=\; \arg\min \big\{\big|v- \tilde{V}_{k-1}\big| : v\in {\mathcal P }\cap C_{\theta}(\tilde{V}_{k-1})\big\},\\[5pt] \tilde{X}_k &\;:\!=\; |\tilde{W}_k - \tilde{V}_{k-1}| \text{ and}\\[5pt] \tilde{V}_k &\;:\!=\; \Bigg(\sum_{i=1}^k \tilde{X}_i, \sum_{i=1}^k Y_i \Bigg)\end{align*}

(see Figure 3 for an illustration). Namely, the sequence of new points $\{\tilde V_k\}_{k\ge 1}$ is constructed in such a way that it dominates the original navigation along the x axis, by considering $\tilde X_k$ to be the maximum progress made along the x axis and the first coordinate of $\tilde V_k$ to be the total maximum progress made by the original navigation along the x axis.

Figure 3. A realization of part (i): existence of $\{\tilde{X}_i\}_{i\geq 1}$.

Note that, by construction, for all $k\geq 1$ and for all $1\leq i \leq k-1$ ,

\[C_{\theta}(\tilde{V}_{k}) \cap B(\tilde{V}_{i}, \tilde{X}_{i+1}) =\emptyset.\]

As a consequence, by the independence of the underlying Poisson point process in disjoint regions, $\{\tilde{X}_k\}_{k\geq 1}$ are i.i.d. Also, by construction,

\[\mathbb{P}(\tilde{X}_1\geq t)= {\operatorname e }^{-\lambda\theta t^2}.\]

Now, we show the domination by induction. First, note that $X_1\leq R_1 = \tilde{X}_1$ . Then, by assuming that $\sum_{i=1}^{k} X_i\le \sum_{i=1}^{k} \tilde X_i$ , we note that

\[C_{\theta}({V}_{k}) \supseteq C_{\theta}(\tilde{V}_{k}) \ni \tilde{W}_{k+1}.\]

Therefore,

\begin{align*} X_{k+1} \leq R_{k+1} \leq |\tilde{W}_{k+1} - {V}_{k} | & \leq|\tilde{W}_{k+1} - \tilde{V}_{k}| + |\tilde{V}_{k} - {V}_{k} |= \tilde{X}_{k+1}+ \sum_{i=1}^{k} \tilde{X}_i - \sum_{i=1}^{k} X_i,\end{align*}

which implies that $\sum_{i=1}^{k+1} X_i\le \sum_{i=1}^{k+1} \tilde X_i$ . This, by induction, proves Lemma 9(i).
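Since the coupling in part (i) is fully constructive, it can be checked in a small simulation. The following Python sketch is a minimal illustration under assumptions that are not part of the lemma: the Poisson process is sampled on a finite box (so boundary effects are ignored), and the intensity, cone angle, box size, and step budget are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, theta = 1.0, np.pi / 3
Lx, Ly = 150.0, 300.0
n = rng.poisson(lam * Lx * Ly)
pts = np.column_stack([rng.uniform(0.0, Lx, n), rng.uniform(-Ly / 2, Ly / 2, n)])

def next_in_cone(v):
    # Nearest point of the sample in the cone C_theta(v) opening in the +x direction.
    d = pts - v
    r = np.hypot(d[:, 0], d[:, 1])
    phi = np.arctan2(d[:, 1], d[:, 0])
    ok = np.flatnonzero((r > 1e-12) & (np.abs(phi) <= theta))
    return pts[ok[np.argmin(r[ok])]] if ok.size else None

# Original navigation: record the Cartesian increments (X_k, Y_k).
v, steps = np.zeros(2), []
for _ in range(200):
    w = next_in_cone(v)
    if w is None or w[0] > Lx - 10.0:  # stop well before the sampled window ends
        break
    steps.append(w - v)
    v = w
X = np.array([s[0] for s in steps]); Y = np.array([s[1] for s in steps])

# Dominating sequence of part (i): tilde V_k = (sum_i tilde X_i, sum_i Y_i).
vt, Xt = np.zeros(2), []
for _ in range(len(X)):
    w = next_in_cone(vt)
    if w is None:
        break
    Xt.append(np.linalg.norm(w - vt))
    vt = np.array([np.sum(Xt), np.sum(Y[:len(Xt)])])
Xt = np.array(Xt)

# Partial-sum domination, exactly as in the induction above.
assert np.all(np.cumsum(X[:len(Xt)]) <= np.cumsum(Xt) + 1e-9)
```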

Part (ii): Existence of $\{\tilde{Y}_i\}_{i\geq 1}$. Given an angle $\theta>0$ , we define

\[C_{\theta}^u\;:\!=\;\{y=(r_y, \varphi_y): r_y>0, 0\leq \varphi_y\leq \theta\}\]

and, for any $v\in \mathbb{R}^2$ , we write $C_{\theta}^u(v)\;:\!=\;v+C_{\theta}^u$ . Then, similar to part (i), we set $\hat{V}_0\;:\!=\;V_0$ and define sequences $\{\hat{V}_k\}_{k\geq 1}$ , $\{\hat{W}_k\}_{k\geq 1}$ , and $\{\tilde{Y}_k\}_{k\geq 1}$ using the recursive equations

\begin{gather*} \hat{W}_k \;:\!=\; \arg\min \big\{\big|v- \hat{V}_{k-1}\big| : v\in {\mathcal P }\cap C_{\theta}^u(\hat{V}_{k-1})\big\}, \\[5pt] \tilde{Y}_k \;:\!=\; |\hat{W}_k - \hat{V}_{k-1}|,\text{ and } \\[5pt] \hat{V}_k \;:\!=\; \Bigg(\sum_{i=1}^k X_i - \sum_{i=1}^k Y_i \cot\theta + \sum_{i=1}^k \tilde{Y}_i \cos\theta, \sum_{i=1}^k \tilde{Y}_i \sin\theta \Bigg);\end{gather*}

see Figure 4 for an illustration. Similar to above, $\{\sum_{i=1}^k\tilde{Y}_i \sin\theta\}_{k\geq 1}$, the sequence of second coordinates of $\hat V_k$, ensures that the navigation stays dominated along the y axis, as $\tilde Y_k\sin \theta$ is the maximum progress that can be made along the y axis starting from $\hat V_{k-1}$. The first coordinates of $\hat V_k$, in turn, are chosen in such a way that the domination holds, as described in (23). Note that, by construction, for all $k\geq 1$ and for all $1\leq i \leq k-1$ ,

\[C_{\theta}^u(\hat{V}_{k}) \cap B(\hat{V}_{i}, \tilde{Y}_{i+1}) =\emptyset.\]

Hence, by the independence of the Poisson point process in disjoint regions, $\{\tilde{Y}_k\}_{k\geq 1}$ are i.i.d. Also, by construction,

\[\mathbb{P}(\tilde{Y}_1\geq t)= {\operatorname e }^{-\lambda\theta t^2/2}.\]

Now, we show the domination by induction. First, note that $Y_1\leq R_1\sin\theta \leq \tilde{Y}_1 \sin\theta$ . Then, by assuming that $\sum_{i=1}^{k} Y_i\le \sum_{i=1}^{k} \tilde Y_i \sin\theta$ , we note that

\[C_{\theta}({V}_{k}) \supseteq C_{\theta}^u(\hat{V}_{k}) \ni \hat{W}_{k+1}.\]

Therefore,

(23) \begin{align} & \frac{Y_{k+1}}{\sin\theta} \leq R_{k+1} \leq |\hat{W}_{k+1} - {V}_{k} | \leq |\hat{W}_{k+1} - \hat{V}_{k} | + |\hat{V}_{k} - {V}_{k}|\nonumber\\&= \tilde{Y}_{k+1}+ \frac{1}{\sin\theta}\Bigg( \sum_{i=1}^{k} \tilde Y_i \sin\theta - \sum_{i=1}^{k} Y_i\Bigg), \end{align}

which implies that $\sum_{i=1}^{k+1} Y_i\le \sum_{i=1}^{k+1} \tilde Y_i \sin\theta$ . This, by induction, proves Lemma 9(ii), thus completing the proof.

Figure 4. A realization of part (ii): existence of $\{\tilde{Y}_i\}_{i\geq 1}$.

Proof of Proposition 5. We can proceed as in the proof of Proposition 1. Note that, by (22),

\begin{align*} |{\mathcal Y }_t-{\mathcal Y }''_t| &\leq \Bigg|\sum_{i=1}^{\varsigma_1}Y_i\Bigg| + \max\Bigg\{\Bigg|\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t}Y_i\Bigg|, \Bigg|\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t+1}Y_i\Bigg| \Bigg\} \end{align*}

and, therefore, by symmetry,

\begin{align*} \mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t''|\geq \delta t^{1/2+\varepsilon})&\le\mathbb{P}\Bigg(\Bigg|\sum_{i=1}^{\varsigma_1}Y_i\Bigg|\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)+ \mathbb{P}\Bigg(\Bigg|\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t}Y_i\Bigg|\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &\hskip10pt + \mathbb{P}\Bigg(\Bigg|\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t+1}Y_i\Bigg|\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &=2\mathbb{P}\Bigg(\sum_{i=1}^{\varsigma_1}Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)+ 2\mathbb{P}\Bigg(\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t}Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &\hskip10pt + 2\mathbb{P}\Bigg(\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{K_t+1}Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg),\end{align*}

which, together with Lemma 9, yields

\begin{align*}&\mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t''|\geq \delta t^{1/2+\varepsilon})\\[5pt] &\qquad\le 2\mathbb{P}\Bigg(\sum_{i=1}^{\varsigma_1}\tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)+ 4 \mathbb{P}\Bigg(\sum_{i=\varsigma_{\mathfrak K'_t}+1}^{\varsigma_{\mathfrak K'_t+1}} \tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)\\[5pt] &\qquad\le2\mathbb{P}\Bigg(\sum_{i=1}^{\varsigma_1}\tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)+ 4\sum_{j=0}^{\lfloor t^2\rfloor}\mathbb{P}\Bigg(\sum_{i=\varsigma_{j}+1}^{\varsigma_{j+1}}\tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}, \mathfrak K'_t=j \Bigg)+ 4\mathbb{P}(\mathfrak K'_t > \lfloor t^2\rfloor)\\[5pt] &\qquad\le 6\mathbb{P}\Bigg(\sum_{i=1}^{\varsigma_1}\tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg)+ 4\lfloor t^2\rfloor \mathbb{P}\Bigg(\sum_{i=\varsigma_1+1}^{\varsigma_2}\tilde Y_i\geq\frac{1}{2} \delta t^{1/2+\varepsilon}\Bigg) +4 \mathbb{P}(\mathfrak K'_t > \lfloor t^2\rfloor)\\[5pt] &\qquad\le (6+4\lfloor t^2\rfloor)\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t^{\varepsilon'}\rfloor}\tilde Y_i \geq \frac{1}{2}\delta t^{1/2+\varepsilon}\Bigg)+6\mathbb{P}(\varsigma_1 > \lfloor t^{\varepsilon'}\rfloor )+4\lfloor t^2\rfloor\mathbb{P}(\varsigma_2-\varsigma_1 > \lfloor t^{\varepsilon'}\rfloor )\\[5pt] &\hskip29pt+ 4\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t^2\rfloor}\mathfrak X'_i < t \Bigg)\end{align*}

for some $\varepsilon'\in(2\varepsilon, \tfrac12+\varepsilon)$ . This, together with (20) and the exponential Markov inequality, implies that

\begin{align*}\mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t''|\geq \delta t^{1/2+\varepsilon})&\le (6+4\lfloor t^2\rfloor) {\operatorname e }^{-\frac{1}{2}\delta t^{1/2+\varepsilon}}\mathbb{E}[{\operatorname e }^{\tilde Y_1}]^{\lfloor t^{\varepsilon'}\rfloor}\\[5pt] &\hskip10pt+ (6+4\lfloor t^2\rfloor) \mathfrak{C}{\operatorname e }^{-\mathfrak{c}\lfloor t^{\varepsilon'}\rfloor} + 4{\operatorname e }^{t}\mathbb{E}[{\operatorname e }^{-\mathfrak{X}'_1}]^{\lfloor t^2\rfloor},\end{align*}

and therefore,

\begin{align*}&\limsup_{t\uparrow\infty} t^{-2\varepsilon}\log \mathbb{P}(|{\mathcal Y }_t-{\mathcal Y }_t''|\geq \delta t^{1/2+\varepsilon})\\[5pt] &\qquad\le -\min\Big\{\liminf_{t\uparrow\infty} t^{-2\varepsilon}\big(\tfrac{1}{2}\delta t^{1/2+\varepsilon}- \lfloor t^{\varepsilon'}\rfloor \log \mathbb{E}[{\operatorname e }^{\tilde Y_1}] \big),\liminf_{t\uparrow\infty} \mathfrak{c}\lfloor t^{\varepsilon'}\rfloor t^{-2\varepsilon} ,\\[5pt] &\hskip62pt -\limsup_{t\uparrow\infty}t^{-2\varepsilon}\big(t+\lfloor t^2\rfloor\log\mathbb{E}[{\operatorname e }^{-\mathfrak{X}'_1}] \big) \Big\}\\[5pt] &\qquad=-\infty,\end{align*}

as desired.

Proof of Theorem 1, case $\theta= \pi/2$. Using Proposition 5, it suffices to consider ${\mathcal Y }''_t$. An argument similar to that in the proofs of Propositions 2 and 3 then gives that, for any $\lambda>0$ and $0 < \varepsilon < 1/2$, the process $\{t^{-1/2-\varepsilon} {\mathcal Y }''_t\}_{t\geq 0}$ obeys the moderate-deviation principle with rate function

\[I_{\lambda,\pi/2}(x) \;:\!=\; x^2 \frac{\mathbb{E}[\mathfrak Z_1]}{2\mathbb{E}[\mathfrak Z_2^2]},\]

and hence, setting

\[\rho(\lambda,\pi/2)\;:\!=\; \frac{\mathbb{E}[\mathfrak Z_1]}{2\mathbb{E}[\mathfrak Z_2^2]}\]

gives us the required form. From the definition of moderate deviations, it follows that, by scaling both the coordinates by $\sqrt{\lambda}$ , we have

\[I_{\lambda,\pi/2}(x) = \lambda^{\varepsilon} I_{1,\pi/2}(x\lambda^{1/4-\varepsilon/2}),\]

and hence,

\[\rho(\lambda,\pi/2)=\sqrt{\lambda}\rho(1,\pi/2).\]

This completes the proof of the theorem.

7. Proofs for the Large Deviations

7.1. The independent case $0<\theta\le \pi/4$

Proof of Lemma 1. The statement follows from an application of the multivariate Cramér theorem for empirical means of sequences of i.i.d. random variables that possess exponential moments; see [Reference Dembo and Zeitouni7, Corollary 6.1.6]. In order to compute the exponential moments, we use polar coordinates, i.e. consider $U_1=(R,\Phi)\in \mathbb{R}_+\times [-\pi,\pi)$ . As already used earlier, the radius follows a Rayleigh distribution, i.e.

$$\mathbb{P}(R>r)=\exp\!(\!-\!\lambda \theta r^2),$$

and we note that, due to the isotropy of the model, $\Phi$ is uniformly distributed in $[-\theta, \theta]$ . Hence,

(24) \begin{equation}\begin{split}\mathbb{E}[\!\exp\!(\langle \gamma, U_1\rangle)]&=\int_0^\infty\textrm{d} r\;\frac{1}{2\theta}\int_{-\theta}^{\theta}\textrm{d} \varphi\; \exp\!(\gamma_1 r\cos\varphi+\gamma_2 r\sin\varphi-\lambda\theta r^2)\lambda 2\theta r\\[5pt] &=\lambda \int_0^\infty\textrm{d} r\; r\exp\!(\!-\!\lambda\theta r^2)\int_{-\theta}^{\theta}\textrm{d} \varphi\;\exp\!(\gamma_1 r\cos\varphi+\gamma_2 r\sin\varphi),\end{split}\end{equation}

as desired. In particular,

\begin{align*}{\operatorname e }^{J_{\lambda, \theta}(\gamma)}\le 2\pi \lambda \int_0^\infty\textrm{d} r\; r\exp\!(\!-\!\lambda\theta r^2)\exp\!(r(|\gamma_1|+|\gamma_2|))<\infty.\end{align*}

Since $J_{\lambda, \theta}$ is strictly convex and differentiable, by [Reference Rockafellar14, Theorem 1], its Legendre transform $\mathcal J_{\lambda, \theta}$ is strictly convex and differentiable on $\{u\in \mathbb{R}^2\colon \mathcal J_{\lambda,\theta}(u)<\infty\}$ .
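For concreteness, both $J_{\lambda, \theta}$ and its Legendre transform $\mathcal J_{\lambda, \theta}$ can be evaluated numerically from the integral representation (24). The following Python sketch, with purely illustrative parameters, computes the logarithmic moment-generating function by quadrature and the Legendre transform by convex minimization; for $u$ outside the cone, the maximization diverges, in line with the discussion below.

```python
import numpy as np
from scipy import integrate, optimize

lam, theta = 1.0, np.pi / 4   # illustrative parameters

def log_mgf(g):
    # J_{lam,theta}(gamma), evaluated by quadrature of (24).
    g1, g2 = g
    f = lambda phi, r: lam * r * np.exp(g1 * r * np.cos(phi)
                                        + g2 * r * np.sin(phi)
                                        - lam * theta * r ** 2)
    val, _ = integrate.dblquad(f, 0.0, np.inf, -theta, theta)
    return np.log(val)

def legendre(u):
    # Legendre transform sup_gamma { <gamma, u> - J(gamma) }; since J is convex,
    # this is a smooth concave maximization, finite only for u in the open cone.
    res = optimize.minimize(lambda g: log_mgf(g) - np.dot(g, u), x0=[0.0, 0.0])
    return -res.fun

print(log_mgf([0.0, 0.0]))    # = 0, since the integrand at gamma = 0 is a density
print(legendre([1.0, 0.2]))   # finite for u = (1, 0.2) inside the cone
```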

Further note that, writing $u=(r_u, \varphi_u)$ in polar coordinates,

\begin{align*} {\operatorname e }^{-\mathcal J_{\lambda,\theta}(u)}=\inf_{q\geq 0, \,\alpha\in[-\pi,\pi]} \int_{0}^{\infty}\textrm{d} r\;\int_{-\theta}^{\theta}\textrm{d} \varphi\;\lambda r \exp\!(\!-\!\lambda\theta r^2 + q[r\cos(\varphi-\alpha) - r_u\cos(\varphi_u-\alpha) ] ),\end{align*}

and, if $\varphi_u\notin (\!-\!\theta, \theta)$ , then we can always choose $\alpha_0\in[-\pi,\pi] $ such that, for all $\varphi \in [-\theta,\theta]$ , we have $r\cos(\varphi-\alpha_0) - r_u\cos(\varphi_u-\alpha_0)\leq 0$ . One such choice of $\alpha_0$ , for example, is given by

\begin{align*} \alpha_0=\left\{ \begin{array}{l@{\quad}l} {\pi}/{2}+\theta & \text{ if } \varphi_u\in[\theta, \pi], \\[5pt] -{\pi}/{2}-\theta & \text{ if } \varphi_u\in[-\pi,-\theta]. \end{array} \right.\end{align*}

Then, since

\[\int_{0}^{\infty}\textrm{d} r\;\int_{-\theta}^{\theta}\textrm{d} \varphi\; \lambda r \exp\!(\!-\!\lambda\theta r^2 ) <\infty,\]

by the dominated-convergence theorem, we obtain

\begin{align*}{\operatorname e }^{-\mathcal J_{\lambda,\theta}(u)}&\leq\lim_{q\rightarrow \infty}\int_{0}^{\infty}\textrm{d} r\;\int_{-\theta}^{\theta}\textrm{d}\varphi\; \lambda r \exp\!(\!-\!\lambda\theta r^2 + q[r\cos(\varphi-\alpha_0) - r_u\cos(\varphi_u-\alpha_0) ] )\\[5pt] & = \int_{0}^{\infty}\textrm{d} r\; \int_{-\theta}^{\theta} \textrm{d}\varphi\;\lim_{q\rightarrow \infty} \lambda r \exp\!(\!-\!\lambda\theta r^2 + q[r\cos(\varphi-\alpha_0) - r_u\cos(\varphi_u-\alpha_0) ] )\\[5pt] &= 0.\end{align*}

Therefore, we get $\mathcal J_{\lambda,\theta}(u)= \infty$ whenever $ \varphi_u\notin (\!-\!\theta, \theta)$ . A similar calculation also gives us that $\mathcal J_{\lambda,\theta}(u)= \infty$ for $r_u=0$ . This implies that $\mathcal J_{\lambda,\theta}(u)= \infty$ for $u\notin {\mathcal C }^o_{\theta}$ .

Now, we show that $\mathcal J_{\lambda,\theta}(u)< \infty$ for $u\in{\mathcal C }^o_{\theta}$ . In this case, $r_u>0$ and $\varphi_u\in(\!-\!\theta,\theta)$ . For this, we pick $0<\eta<\pi/16$ small enough such that $[\varphi_u-4\eta, \varphi_u+4\eta]\subseteq(\!-\!\theta,\theta)$ and define two sets $A_{\alpha}$ and $B_{\alpha}$ as

\begin{align*} A_{\alpha}\;:\!=\;\left\{ \begin{array}{l@{\quad}l} (r_u/(\sin\eta), \infty) & \text{ if } |\alpha-\varphi_u|\le \pi/2+ \eta, \\[5pt] (0, r_u \sin\eta) & \text{ if } |\alpha-\varphi_u|> \pi/2+ \eta, \end{array} \right.\end{align*}

and

\begin{align*} B_{\alpha}\;:\!=\;\left\{ \begin{array}{l@{\quad}l} [\varphi_u+2 \eta, \varphi_u+ 3 \eta] & \text{ if } \alpha\in[\varphi_u, \varphi_u+\pi], \\[5pt] [\varphi_u-3 \eta, \varphi_u- 2 \eta] & \text{ if } \alpha\in[\varphi_u-\pi, \varphi_u). \end{array} \right.\end{align*}

Here, we consider the range of $\alpha$ as $[\varphi_u-\pi, \varphi_u+\pi]$ instead of $[-\pi,\pi]$ for simplicity. Now, for any $\alpha\in[\varphi_u-\pi, \varphi_u+\pi]$ , $r\in A_{\alpha}$ , and $\varphi\in B_{\alpha}$ , we have $r\cos(\varphi-\alpha) - r_u\cos(\varphi_u-\alpha)\ge 0$ . Therefore,

\begin{align*} {\operatorname e }^{-\mathcal J_{\lambda,\theta}(u)}&\ge\inf_{\alpha\in[\varphi_u-\pi, \varphi_u+\pi]} \int_{A_{\alpha} }\textrm{d} r\;\int_{B_{\alpha}}\textrm{d} \varphi\;\lambda r \exp\!(\!-\!\lambda\theta r^2 ) \\[5pt] &=\eta\lambda\min\bigg\{\int_{0}^{r_u \sin\eta} \textrm{d} r\;r \exp\!(\!-\!\lambda\theta r^2 ) , \int_{r_u/(\sin\eta)}^{\infty} \textrm{d} r\;r \exp\!(\!-\!\lambda\theta r^2 ) \bigg\} \\[5pt] &>0,\end{align*}

which implies that, for all $u\in{\mathcal C }^o_{\theta}$ , we have $\mathcal J_{\lambda,\theta}(u)< \infty$ . This completes the proof.

In order to establish certain bounds, it will be convenient to also have a large-deviation result for i.i.d. sequences of progress variables $\overline U_i=(X_i,|Y_i|)$ at our disposal. For this, let $\overline{J}_{\lambda,\theta}$ denote the logarithmic moment-generating function of $\overline U_1$ .

Lemma 10. The sequence of i.i.d. copies $\{\overline U_i\}_{i\ge 1}$ of the progress variable $\overline U_1\in \mathbb{R}\times \mathbb{R}_+$ satisfies the large-deviation principle with rate n and rate function

$$\overline{ \mathcal J}_{\lambda, \theta}(u)\;:\!=\;\sup\{\langle \gamma, u \rangle -\overline{J}_{\lambda,\theta}(\gamma)\colon \gamma\in \mathbb{R}^2\},$$

where $\overline{J}_{\lambda,\theta}(\gamma)<\infty$ for all $\gamma\in \mathbb{R}^2$ .

Proof of Lemma 10. Using the same arguments as in the proof of Lemma 1, we have

\begin{align*}\mathbb{E}[\!\exp\!(\langle \gamma, \overline U_1\rangle)]&=2\lambda \int_0^\infty\textrm{d} r\;r\exp\!(\!-\!\lambda\theta r^2)\int_0^{\theta}\textrm{d} \varphi\;\exp\!(\gamma_1 r\cos\varphi+\gamma_2 r\sin\varphi)<\infty,\end{align*}

which, together with Cramér’s theorem, gives the result.

Proof of Theorem 2. The key observation is that the sequence of progress variables $\{U_i\}_{i\ge1}$ is i.i.d. for $\theta\le\pi/4$ . Indeed, note that, for all $i\ge 1$ , $(C_\theta(V_i)\cap B(V_i,|U_{i+1}|))\cap C_\theta(V_{i+1})=\emptyset$ . Hence, in every step, the navigation discovers a previously undiscovered region in space and the corresponding Poisson point clouds are stochastically independent. However, even though the progress steps are i.i.d., the statement is not a direct application of, e.g. Cramér’s theorem, since $\{{\mathcal Y }_t\}_{t\ge 0}$ only tracks the vertical displacement, while there is also a random horizontal displacement. Let us write $U_i=(X_i,Y_i)$ , where $X_i$ is the first and $Y_i$ the second Cartesian coordinate of $U_i$ and recall that $K_t\;:\!=\;\sup\{n>0\colon\sum_{i=1}^n X_i< t\}$ denotes the number of steps the navigation takes before reaching t along the x axis.

Step 1: Lower bound for upper tail. Let us start with the lower bound for the upper tail. It suffices to consider $a>0$ . Then, for all $1>b>0, c>0$ , and $\alpha>\beta>0$ ,

\begin{align*}\mathbb{P}({\mathcal Y }_t&> at)\ge \mathbb{P}({\mathcal Y }_t> at,\lfloor \beta t\rfloor\le K_t< \lfloor \alpha t\rfloor)\\[5pt] &\ge \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor} Y_i> at+\sum_{j=\lfloor \beta t\rfloor+1}^{\lfloor \alpha t\rfloor}|Y_j|,bt< \sum_{i=1}^{\lfloor \beta t\rfloor} X_i<t, \sum_{j=\lfloor \beta t\rfloor+1}^{\lfloor \alpha t\rfloor} X_j>(1-b)t\Bigg)\\[5pt] &\ge \mathbb{P}\Bigg(\sum_{j=\lfloor \beta t\rfloor+1}^{\lfloor \alpha t\rfloor}|Y_j|<ct,\sum_{j=\lfloor \beta t\rfloor+1}^{\lfloor \alpha t\rfloor} X_j>(1-b)t\Bigg)\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor} Y_i> (a+c)t,bt<\sum_{i=1}^{\lfloor \beta t\rfloor} X_i< t\Bigg) \end{align*}

by independence. Consequently, for $\delta=\alpha-\beta$ ,

\begin{align*}\liminf_{t\uparrow\infty}t^{-1}\log\mathbb{P}({\mathcal Y }_t> at)&\ge -\delta\inf\{\overline{\mathcal J}_{\lambda,\theta}(x,y)\colon x> (1-b)/\delta, y<c/\delta \}\\[5pt] &\quad-\beta\inf\{\mathcal J_{\lambda,\theta}(x,y)\colon b/\beta<x< 1/\beta, y>(a+c)/\beta\}.\end{align*}

Sending b to 1 and fixing $\delta=\delta(c)$ such that $c/\delta> \mathbb{E}[|Y_1|]$ , the first summand vanishes and we have

\begin{align*}&\liminf_{t\uparrow\infty}t^{-1}\log\mathbb{P}({\mathcal Y }_t> at)\ge -\beta\mathcal J_{\lambda,\theta}(1/\beta,(a+c)/\beta).\end{align*}

Sending c to 0, we arrive at

\begin{align*}&\liminf_{t\uparrow\infty}t^{-1}\log\mathbb{P}({\mathcal Y }_t> at)\ge -\inf\{\beta \mathcal J_{\lambda,\theta}(1/\beta,a/\beta)\colon \beta>0\}.\end{align*}

This expression reflects that the process has to find the optimal compromise between making the right number of steps, so that the displacement along the y axis is not too unlikely, and hitting the time t along the x axis.

Step 2: Upper bound for upper tail. For the upper bound, we can proceed similarly. For all $\alpha>0$ , we can bound

\begin{align*}\mathbb{P}({\mathcal Y }_t\ge at)&\le \mathbb{P}\Bigg(\sum_{i=1}^{K_t}Y_i\ge at-|Y_{K_t+1}|, K_t\le \lfloor \alpha t\rfloor\Bigg)+\mathbb{P}(K_t>\lfloor \alpha t\rfloor)\\[5pt] &=\sum_{m=0}^{\lfloor \alpha t\rfloor} \mathbb{P}\Bigg(\sum_{i=1}^{m}Y_i\ge at-|Y_{m+1}|, K_t=m\Bigg)+\mathbb{P}(K_t>\lfloor \alpha t\rfloor)\\[5pt] &\le\sum_{m=0}^{\lfloor \alpha t\rfloor} \mathbb{P}\Bigg(\sum_{i=1}^{m}Y_i\ge t(a-\varepsilon), t(1-\varepsilon)\le \sum_{i=1}^m X_i <t\Bigg)+\mathbb{P}(\mathfrak B_\varepsilon^c(t))+\mathbb{P}(K_t>\lfloor \alpha t\rfloor),\end{align*}

where $\mathfrak B_\varepsilon(t)=\{X_{K_t+1}\le \varepsilon t, |Y_{K_t+1}|\le \varepsilon t\}$ with $\varepsilon>0$ . Note that

\begin{align*}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}(\mathfrak B^c_\varepsilon(t))=-\infty,\end{align*}

since the upper tails of $|U_i|$ satisfy $\mathbb{P}(|U_i|>t)={\operatorname e }^{-\lambda\theta t^2}$ and hence decay superexponentially. Furthermore,

\begin{align*}\limsup_{\alpha\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}(K_t>\lfloor \alpha t\rfloor)=\limsup_{\alpha\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1}\log \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \alpha t\rfloor}X_i< t\Bigg)=-\infty,\end{align*}

and hence, the error terms play no role on the exponential scale with rate t. Now, let $(t_n)_{n\ge 0}$ be a subsequence such that

\begin{align*}\limsup_{t\uparrow\infty} t^{-1}&\log \sum_{m=0}^{\lfloor \alpha t\rfloor} \mathbb{P}\Bigg(\sum_{i=1}^{m}Y_i\ge t(a-\varepsilon), t(1-\varepsilon)\le \sum_{i=1}^m X_i <t\Bigg)\\[5pt] &=\lim_{n\uparrow\infty} t_n^{-1}\log \sum_{m=0}^{\lfloor \alpha t_n\rfloor} \mathbb{P}\Bigg(\sum_{i=1}^{m}Y_i\ge t_n(a-\varepsilon), t_n(1-\varepsilon)\le \sum_{i=1}^m X_i <t_n\Bigg)\end{align*}

and define

$$\beta_{n,\varepsilon}\;:\!=\;{\textrm{argmax}}\Bigg\{\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t_n\rfloor}Y_i\ge t_n(a-\varepsilon), t_n(1-\varepsilon)\le \sum_{i=1}^{\lfloor \beta t_n\rfloor} X_i <t_n\Bigg)\colon 0\le \beta\le \alpha\Bigg\},$$

where we simply take $\beta_{n,\varepsilon}$ to be the smallest solution in case of ambiguity. Then, for a suitable further sub-sequence $(t_{n_k})_{k\ge 0}$ , we have

\begin{align*}&\lim_{n\uparrow\infty} t_n^{-1}\log \sum_{m=0}^{\lfloor \alpha t_n\rfloor} \mathbb{P}\Bigg(\sum_{i=1}^{m}Y_i\ge t_n(a-\varepsilon), t_n(1-\varepsilon)\le \sum_{i=1}^m X_i <t_n\Bigg)\\[5pt] &\qquad\le \limsup_{n\uparrow\infty}t_n^{-1}\log \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor\beta_{n,\varepsilon} t_n\rfloor}Y_i\ge t_n(a-\varepsilon), t_n(1-\varepsilon)\le \sum_{i=1}^{\lfloor \beta_{n,\varepsilon} t_n\rfloor} X_i <t_n\Bigg)\\[5pt] &\qquad= \lim_{k\uparrow\infty}t_{n_k}^{-1}\log \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor\beta_{{n_k},\varepsilon} t_{n_k}\rfloor}Y_i\ge t_{n_k}(a-\varepsilon), t_{n_k}(1-\varepsilon)\le \sum_{i=1}^{\lfloor \beta_{{n_k},\varepsilon} t_{n_k}\rfloor} X_i <t_{n_k}\Bigg),\end{align*}

and we note that $(\beta_{{n_k},\varepsilon})_{k\ge 0}$ must contain a convergent sub-sequence $(\beta_{{n_{k_l}},\varepsilon})_{l\ge 0}$ with limit $0\le \beta^*_\varepsilon\le \alpha$ . Hence,

\begin{align*}\lim_{l\uparrow\infty}t_{n_{k_l}}^{-1}&\log \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor\beta_{{n_{k_l}},\varepsilon} t_{n_{k_l}}\rfloor}Y_i\ge t_{n_{k_l}}(a-\varepsilon), t_{n_{k_l}}(1-\varepsilon)\le \sum_{i=1}^{\lfloor \beta_{{n_{k_l}},\varepsilon} t_{n_{k_l}}\rfloor} X_i <t_{n_{k_l}}\Bigg)\\[5pt] &\le -\inf\{\beta\mathcal J_{\lambda,\theta}\big(1/\beta,a/\beta\big)\colon \beta>0\},\end{align*}

since $\limsup_{t\uparrow\infty}t^{-1}\log \lfloor \alpha t\rfloor=0$ , and where we used the continuity of the rate function.

Step 3: General sets. Note that, by symmetry, $\mathbb{P}({\mathcal Y }_t\ge at)=\mathbb{P}({\mathcal Y }_t\le -at)$ and, hence, for $O\subset\mathbb{R}$ open, we find that, for all $z\in O$, there exists $\varepsilon_z>0$ such that $\bar B_{\varepsilon_z}(z)\subset O$. In particular, for all $z\in O$ ,

\begin{align*}&\mathbb{P}(t^{-1}{\mathcal Y }_t\in O)\ge\mathbb{P}(z-\varepsilon_z< t^{-1}{\mathcal Y }_t< z+\varepsilon_z)=\mathbb{P}(t^{-1}{\mathcal Y }_t> -z-\varepsilon_z)-\mathbb{P}(t^{-1}{\mathcal Y }_t\ge -z+\varepsilon_z),\end{align*}

where the rate to zero is dominated by the first summand. Taking an infimum over $z\in O$ gives the desired result. The upper bound can be proved similarly.

Step 4: Scaling. If we denote by ${\mathcal Y }_{\lambda,t}$ the vertical displacement at time t in the navigation based on a Poisson point process with intensity $\lambda>0$ , then, by scaling both the coordinates by $\sqrt{\lambda}$ , we find that ${\mathcal Y }_{\lambda,t}$ and ${\mathcal Y }_{1,\sqrt{\lambda}t}/\sqrt{\lambda}$ are equal in distribution. Therefore, for any $x>0$ ,

\begin{align*} \mathcal I_{\lambda,\theta}(x)&= -\lim_{t\uparrow\infty}t^{-1}\log\mathbb{P}( t^{-1} {\mathcal Y }_{\lambda,t} >x )\\[5pt] &= - \lim_{t\uparrow\infty} \sqrt{\lambda}(\sqrt{\lambda}t)^{-1}\log\mathbb{P}( (\sqrt{\lambda}t)^{-1} {\mathcal Y }_{1,\sqrt{\lambda}t} >x ) \\[5pt] &= \sqrt{\lambda} \mathcal I_{1,\theta}(x).\end{align*}

By a similar calculation, we also find that, for any $x<0$ , $\mathcal I_{\lambda,\theta}(x)= \sqrt{\lambda} \mathcal I_{1,\theta}(x)$ . This proves the theorem.

7.2. The dependent case $\pi/4<\theta< \pi/2$

Proof of Lemma 3. The proof follows again from an application of the multivariate Cramér theorem for empirical means of sequences of i.i.d. random variables; see [Reference Dembo and Zeitouni7, Corollary 6.1.6]. We need to show existence of $\gamma=(\gamma_1,\gamma_2)\in \mathbb{R}^2\setminus\{0\}$ , such that $J'_{\lambda,\theta}(\gamma)<\infty$ , but this is an immediate consequence of Lemma 2.

Proof of Lemma 4. Observe that if, for all $1\leq m\leq n$ , $\Phi_m\in(\pi/2-\theta,\theta]$ then, for all $1\leq m\leq n$ , $H_m\neq\emptyset$ , which then ensures that ${\tau}^\theta_1>n$ . Therefore,

\[ \mathbb{P}({\tau}^\theta_1>n)\geq \mathbb{P}( \Phi_m\in(\pi/2-\theta,\theta] \text{ for all }1\leq m\leq n). \]

Note that, conditioned on $V_{m-1}$ , $R_m$ and $H_{m-1}$ , $\Phi_m$ is uniformly distributed on the set

\[\{\varphi\in[-\theta,\theta]\colon V_{m-1}+(R_m,\varphi)\notin H_{m-1}\}.\]

Hence, with $\ell$ denoting the Lebesgue measure,

\[\mathbb{P}(\Phi_m\in(\pi/2-\theta,\theta] | V_{m-1},R_m, H_{m-1}) = \frac{\ell (\{\varphi\in(\pi/2-\theta,\theta]\colon V_{m-1}+(R_m,\varphi)\notin H_{m-1}\})}{\ell (\{\varphi\in[-\theta,\theta]\colon V_{m-1}+(R_m,\varphi)\notin H_{m-1}\})}.\]

Also, note that if, for all $1\leq i< m$ , $\Phi_i>0$ then ${\mathcal C }_{\theta}^u(V_{m-1})\cap H_{m-1} =\emptyset$ . Therefore, in this case,

\begin{align*} \frac{\ell (\{\varphi\in(\pi/2-\theta,\theta]\colon V_{m-1}+(R_m,\varphi)\notin H_{m-1}\})}{\ell (\{\varphi\in[-\theta,\theta]\colon V_{m-1}+(R_m,\varphi)\notin H_{m-1}\})} \geq \frac{\ell ((\pi/2-\theta,\theta])}{\ell ([-\theta,\theta])} &= \frac{2\theta-\pi/2}{2\theta} = \frac{4\theta-\pi}{4\theta}.\end{align*}

As a result, we find that

\[\mathbb{P}(\Phi_m\in(\pi/2-\theta,\theta] | \Phi_i\in(\pi/2-\theta,\theta] \text{ for all }1\leq i< m) \geq (4\theta-\pi)/(4\theta),\]

and thus,

\begin{align*} \mathbb{P}({\tau}^\theta_1>n)&\geq \mathbb{P}( \Phi_m\in(\pi/2-\theta,\theta] \text{ for all }1\leq m\leq n)\\[5pt] &= \prod_{m=1}^n \mathbb{P}(\Phi_m\in(\pi/2-\theta,\theta] | \Phi_i\in({\pi}/{2}-\theta,\theta] \text{ for all }1\leq i< m)\\[5pt] & \geq ((4\theta-\pi)/(4\theta))^n,\end{align*}

which completes the proof.

Proof of Theorem 3. We proceed via several steps. First, note that, for all $\delta>0$ ,

\begin{align*}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\big(|{\mathcal Y }_t-\hat{\mathcal Y }'_t|\ge \delta t\big)=-\infty,\end{align*}

where $\hat{\mathcal Y }'_t\;:\!=\;\sum_{i=1}^{K_t}Y_i$ , since

\begin{align*} \mathbb{P}\big(|{\mathcal Y }_t-\hat{\mathcal Y }'_t|\ge \delta t\big) &\le \mathbb{P}(|Y_{K_t+1}|>\delta t) \\[5pt] &\leq \mathbb{P}(K_t\geq \lfloor t \rfloor^2) + \mathbb{P}\left(\max_{i=1}^{\lfloor t \rfloor^2} |Y_i|>\delta t\right) \\[5pt] &\leq \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t \rfloor^2} X_i < t\Bigg) + \mathbb{P}\left(\max_{i=1}^{\lfloor t \rfloor^2} |Y_i|>\delta t\right) \\[5pt] &\leq \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor t \rfloor^2} \underline R_i \cos \theta < t\Bigg) + \mathbb{P}\left(\max_{i=1}^{\lfloor t \rfloor^2} \overline R_i>\delta t\right) \\[5pt] &\leq {\operatorname e }^t \mathbb{E}[ {\operatorname e }^{-\underline R \cos \theta}]^{\lfloor t \rfloor^2} + t ^2 {\operatorname e }^{-\lambda (\theta\wedge(\pi/2-\theta)) \delta^2 t^2},\end{align*}

and hence, for the large deviations, we can focus on $\hat{\mathcal Y }'_t$ . For this, as in (3), we can further split $\hat{\mathcal Y }'_t$ into

$$\hat{\mathcal Y }'_t=\sum_{i=1}^{K'_t}Y'_i+\sum_{j=\tau^\theta_{K'_t}+1}^{ K_t}Y_j\;=\!:\;{\mathcal Y }_t'+\hat Y_t.$$

The first summand is a sum of i.i.d. segments, but the second summand is a sum of dependent random variables. The challenge comes from the fact that all the involved random variables $Y'_i, \hat Y_t$ have exponential tails and, therefore, contribute on the large-deviation scale t.

Step 1: Upper bound for upper tail. For all $\alpha>0$ , we can bound

\begin{align*}\mathbb{P}(\hat{\mathcal Y }'_t \ge at)&\le \mathbb{P}\Bigg(\sum_{i=1}^{K'_t}Y'_i+\hat Y_t\ge at, K'_t\le \lfloor \alpha t\rfloor\Bigg)+\mathbb{P}(K'_t>\lfloor \alpha t\rfloor)\\[5pt] &\le \sup\nolimits_{0<\beta\le \alpha}\alpha t\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i+\hat Y_t\ge at, K'_t=\lfloor \beta t\rfloor\Bigg)+\mathbb{P}(K'_t>\lfloor \alpha t\rfloor),\end{align*}

where the second summand plays no role on the exponential scale with rate t since

\begin{align*}\limsup_{\alpha\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}(K'_t>\lfloor \alpha t\rfloor)=\limsup_{\alpha\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1} \log\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \alpha t\rfloor}X'_i< t\Bigg)=-\infty.\end{align*}

Similarly, for all $0<\beta\le \alpha$ ,

\begin{align*}\limsup_{\gamma\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i>\gamma t\Bigg)=-\infty\end{align*}

and

\begin{align*}\limsup_{\delta\uparrow\infty}\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}({\tau}^\theta_1>\delta t)=-\infty,\end{align*}

and hence, it suffices to further bound as

\begin{align*}&\mathbb{P}\Bigg( \sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i+\hat Y_t\ge at, K'_t=\lfloor \beta t\rfloor\Bigg)\\[5pt] &\qquad\le\sup\nolimits_{|b|<\gamma,\ c<1}\gamma t^2\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i{\ \hat =\ } bt, \sum_{i=1}^{\lfloor \beta t\rfloor}X'_i{\ \hat =\ } ct\Bigg)\\[5pt] &\hskip29pt\times\sup\nolimits_{d\le \delta}\delta t\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor d t\rfloor}Y_i\ge (a-b)t, \sum_{i=1}^{\lfloor d t\rfloor}X_i< (1-c)t, \sum_{i=1}^{\lfloor d t\rfloor+1}X_i\ge (1-c)t, {\tau}^\theta_1>d t\Bigg)\\[5pt] &\hskip29pt+\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i>\gamma t\Bigg)+ \mathbb{P}({\tau}^\theta_1>\delta t),\end{align*}

where we write $x{\ \hat =\ } y$ if and only if $\lfloor x\rfloor=\lfloor y\rfloor$ , and where we used independence. Now, we can combine Assumption 1 and Lemma 3 to obtain

\begin{align*}&\limsup_{t\uparrow\infty}t^{-1}\log\mathbb{P}(\hat{\mathcal Y }'_t\ge at)\\[5pt] &\quad\le -\!\!\!\inf_{b\in\mathbb{R},\ c\in(0,1)}\!\big\{\inf\{\beta\mathcal J'_{\lambda,\theta}(c/\beta,b/\beta)\colon \beta>0\}+\inf\{d \mathcal H_{\lambda,\theta}((1-c)/d,(a-b)/d)\colon d>0\}\big\},\end{align*}

as desired.

Step 2: Lower bound for upper tail. Next, we consider the lower bound for the upper tail with $a>0$ . Then, for $b\in \mathbb{R}, 0<c<1$ and $ \beta,d>0$ ,

\begin{align*}\mathbb{P}(\hat{\mathcal Y }'_t> at)&\ge \mathbb{P}\Bigg(\sum_{i=1}^{K'_t}Y'_i>bt,K'_t= \lfloor \beta t\rfloor,\hat Y_t> (a-b)t\Bigg)\\[5pt] &\ge \mathbb{P}\Bigg(\sum_{i=1}^{\lfloor \beta t\rfloor}Y'_i>bt,\sum_{i=1}^{\lfloor \beta t\rfloor}X'_i{\ \hat =\ } ct\Bigg)\\[5pt] &\hskip10pt\times\mathbb{P}\Bigg(\sum_{i=1}^{\lfloor d t\rfloor}Y_i>(a-b)t,\sum_{i=1}^{\lfloor d t\rfloor}X_i< (1-c)t, \sum_{i=1}^{\lfloor d t\rfloor+1}X_i\ge (1-c)t, {\tau}^\theta_{1}>\lfloor d t\rfloor\Bigg), \end{align*}

where we used independence. Consequently,

\begin{align*}\liminf_{t\uparrow\infty}t^{-1}\log\mathbb{P}(\hat{\mathcal Y }'_t> at)\ge -\beta\mathcal J'_{\lambda,\theta}(c/\beta,b/\beta)-d \mathcal H_{\lambda,\theta}((1-c)/d,(a-b)/d).\end{align*}

Optimizing first with respect to $\beta$ and d in the individual summands and then with respect to b, c in the joint expression, we arrive at the desired lower bound that matches the upper bound.

Step 3: General sets. Using symmetry and the previous steps, we can follow the exact same arguments as in the independent case, step 4 in the proof of Theorem 2, to arrive at the large-deviation principle.

Step 4: Scaling. A scaling argument similar to that in the proof of Theorem 2 implies that $\mathcal I'_{\lambda,\theta}(x)= \sqrt{\lambda} \mathcal I'_{1,\theta}(x)$ for all $x\in \mathbb{R}$ .

Appendix A. Details for Remark 1

Let us finally present some more details for Remark 1. Let $0<\theta\le \pi/4$ and note that $\gamma\mapsto J_{\lambda,\theta}(\gamma)$ is strictly convex and twice differentiable. Hence, fixing x, with $|x|<\tan\theta$ , and writing $\zeta=1/\beta$ , for every $\zeta>0$ , there exists a unique $\gamma(\zeta)$ such that

$$\mathcal J(\zeta)\;:\!=\;\mathcal J_{\lambda,\theta}(\zeta,\zeta x)=\langle \gamma(\zeta), (\zeta,\zeta x) \rangle -J_{\lambda,\theta}(\gamma(\zeta)),$$

where $\gamma(\zeta)$ is twice differentiable, due to the implicit-function theorem, and satisfies

(A.1) \begin{align}\zeta=\frac{\mathbb{E}[\!\exp\!(\langle \gamma(\zeta), U\rangle)X]}{\mathbb{E}[\!\exp\!(\langle \gamma(\zeta), U\rangle)]}\quad\text{ and }\quad\zeta x=\frac{\mathbb{E}[\!\exp\!(\langle \gamma(\zeta), U\rangle)Y]}{\mathbb{E}[\!\exp\!(\langle \gamma(\zeta), U\rangle)]},\end{align}

where we suppressed the dependence on $\lambda$ and $\theta$ in the notation and let $U=(X,Y)$ represent the random variable for the first step. In particular, the minimizer $\zeta=\zeta(x)$ in the definition of $\mathcal I_{\lambda,\theta}(x)$ satisfies

\begin{align*}0&= -\frac{1}{\zeta^2}\mathcal J(\zeta)+\frac{1}{\zeta}\dot{\mathcal J}(\zeta)=-\frac{1}{\zeta^2}\mathcal J(\zeta)+\frac{1}{\zeta}(\gamma_1(\zeta)+x\gamma_2(\zeta)),\end{align*}

where we used (A.1). In other words, $\mathcal J(\zeta)=\zeta\gamma_1(\zeta)+\zeta x\gamma_2(\zeta)=\langle \gamma(\zeta), (\zeta,\zeta x) \rangle$ , which implies that $\mathbb{E}[\!\exp\!(\langle \gamma(\zeta(x)), U\rangle)]=1$ and, moreover,

$$\mathcal I_{\lambda,\theta}(x)=\varrho_1(x)+x\varrho_2(x),$$

where we abbreviated $\varrho_1(x)\;:\!=\;\gamma_1(\zeta(x))$ and $\varrho_2(x)\;:\!=\;\gamma_2(\zeta(x))$ . This in particular provides us with a representation of the large-deviation rate function in terms of the unique solution to (A.1). Hence,
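This representation also suggests a numerical scheme: solve $\mathbb{E}[\!\exp\!(\langle \gamma, U\rangle)]=1$ together with the ratio of the two conditions in (A.1) for $\gamma=\varrho(x)$, and read off $\mathcal I_{\lambda,\theta}(x)=\varrho_1(x)+x\varrho_2(x)$. The following Python sketch implements this with quadrature-based moments; the parameter values and the evaluation point are illustrative only.

```python
import numpy as np
from scipy import integrate, optimize

lam, theta = 1.0, np.pi / 4   # illustrative parameters

def moment(g1, g2, weight=lambda r, phi: 1.0):
    # E[ exp(<gamma, U>) * weight(R, Phi) ] for the first step U = (R, Phi),
    # with joint density lam * r * exp(-lam*theta*r^2) on r > 0, |phi| <= theta.
    f = lambda phi, r: (lam * r * np.exp(g1 * r * np.cos(phi) + g2 * r * np.sin(phi)
                                         - lam * theta * r ** 2) * weight(r, phi))
    val, _ = integrate.dblquad(f, 0.0, np.inf, -theta, theta)
    return val

def rate(x):
    # Solve E[e^{<g,U>}] = 1 and E[e^{<g,U>} Y] = x E[e^{<g,U>} X] for g = rho(x);
    # then I(x) = g1 + x * g2 by the representation above.
    def eqs(g):
        m = moment(*g)
        mx = moment(*g, weight=lambda r, phi: r * np.cos(phi))
        my = moment(*g, weight=lambda r, phi: r * np.sin(phi))
        return [m - 1.0, my - x * mx]
    g1, g2 = optimize.fsolve(eqs, x0=[0.0, 0.0])
    return g1 + x * g2

print(rate(0.2))   # large-deviation rate at vertical slope x = 0.2 < tan(theta)
```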

\begin{align*} \ddot{\mathcal I}_{\lambda,\theta}(0)&=\ddot{\varrho_1}(0)+2\dot{\varrho_2}(0).\end{align*}

Now, in order to analyze this, note that, by symmetry, $\mathcal I_{\lambda,\theta}(0)=0$ implies that $\varrho_1(0)=\varrho_2(0)=0$ and, hence, using the left-hand side of (A.1), $\zeta(0)=\mathbb{E}[X]$ . Also, note that $\dot{\mathcal I}_{\lambda,\theta}(0)=0$ implies that $\dot{\varrho}_1(0)=0$ . Considering the first derivative on the left-hand side of (A.1), we have

\begin{align*}\dot{\zeta}(x)=\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X\langle \dot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]}-\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X]\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \dot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]^2},\end{align*}

which implies that $\dot{\zeta}(0)=\mathbb{E}[X\langle \dot{\varrho}(0), U\rangle]-\mathbb{E}[X]\mathbb{E}[\langle \dot{\varrho}(0), U\rangle]=\dot{\varrho_1}(0)\mathbb V[X]=0$ . By symmetry, it is immediate that, for the right-hand side of (A.1),

\begin{align*}&\dot{\zeta}(x)x+\zeta(x)\\[5pt] &\qquad=\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)Y\langle \dot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]}-\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)Y]\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \dot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]^2},\end{align*}

and hence, $\mathbb{E}[X]=\zeta(0)=\mathbb{E}[Y\langle \dot{\varrho}(0), U\rangle]-\mathbb{E}[Y]\mathbb{E}[\langle \dot{\varrho}(0), U\rangle]=\dot{\varrho_2}(0)\mathbb V[Y]$, that is, $\dot{\varrho_2}(0)=\mathbb{E}[X]/\mathbb{E}[Y^2]$, since $\mathbb{E}[Y]=0$ by symmetry and thus $\mathbb V[Y]=\mathbb{E}[Y^2]$. Furthermore, considering the left-hand side of (A.1), we have

\begin{align*}\ddot{\zeta}(x)=&\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X\langle \dot{\varrho}(x), U\rangle^2]+\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X\langle \ddot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]}\\[5pt] &-2\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X\langle \dot{\varrho}(x), U\rangle]\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \dot{\varrho}(x), U\rangle]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]^2}\\[5pt] & -\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X]\big(\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \dot{\varrho}(x), U\rangle^2]+\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \ddot{\varrho}(x), U\rangle]\big)}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]^2}\\[5pt] & +2\frac{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)X]\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)\langle \dot{\varrho}(x), U\rangle]^2\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]}{\mathbb{E}[\!\exp\!(\langle \varrho(x), U\rangle)]^4},\end{align*}

which gives a representation of $\ddot{\varrho_1}(0)$ in terms of mixed moments,

\begin{align*}\ddot{\zeta}(0)&=\mathbb{E}[X\langle \dot{\varrho}(0), U\rangle^2]+\mathbb{E}[X\langle \ddot{\varrho}(0), U\rangle]-2\mathbb{E}[X\langle \dot{\varrho}(0), U\rangle]\mathbb{E}[\langle \dot{\varrho}(0), U\rangle]\\[5pt] &\quad -\mathbb{E}[X]\big(\mathbb{E}[\langle \dot{\varrho}(0), U\rangle^2]+\mathbb{E}[\langle \ddot{\varrho}(0), U\rangle]\big)+2\mathbb{E}[X]\mathbb{E}[\langle \dot{\varrho}(0), U\rangle]^2\\[5pt] &=\dot{\varrho_2}(0)^2\mathbb{E}[XY^2]+\ddot{\varrho_1}(0)\mathbb{E}[X^2]-\mathbb{E}[X]\big(\dot{\varrho_2}(0)^2\mathbb{E}[Y^2]+\ddot{\varrho_1}(0)\mathbb{E}[X]\big)\\[5pt] &=\dot{\varrho_2}(0)^2(\mathbb{E}[XY^2]-\mathbb{E}[X]\mathbb{E}[Y^2])+\ddot{\varrho_1}(0)\mathbb V[X].\end{align*}

Putting things together, we get the representation

\begin{align*} \ddot{\mathcal I}_{\lambda,\theta}(0)=\bigg(\ddot{\zeta}(0)-\frac{(\mathbb{E}[XY^2]- \mathbb{E}[X]\mathbb{E}[Y^2])\mathbb{E}[X]^2}{\mathbb{E}[Y^2]^2}\bigg)/ \mathbb V[X]+2\mathbb{E}[X]/\mathbb{E}[Y^2].\end{align*}

In order to see that indeed $\rho_{\lambda,\theta}\neq \ddot{\mathcal I}_{\lambda,\theta}(0)/2$ , first note that $\ddot{\zeta}(0)\le 0$ since, for a deviation event in the vertical direction, it is not beneficial to increase the number of horizontal steps. This would increase the averaging effect and would make it harder to achieve the vertical deviation. Hence,

\begin{align*} \frac{\ddot{\mathcal I}_{\lambda,\theta}(0)}{2}&\le \frac{\mathbb{E}[X]}{\mathbb{E}[Y^2]}-\frac{(\mathbb{E}[XY^2]-\mathbb{E}[X]\mathbb{E}[Y^2])\mathbb{E}[X]^2}{2\mathbb{E}[Y^2]^2\mathbb V[X]}\\[5pt] &= \rho_{\lambda,\theta}\bigg(2-\frac{(\mathbb{E}[XY^2]-\mathbb{E}[X]\mathbb{E}[Y^2])\mathbb{E}[X]}{\mathbb{E}[Y^2]\mathbb V[X]}\bigg),\end{align*}

and it suffices to show that

\begin{align*} \textrm{Cov}(X,Y^2)\mathbb{E}[X]>\mathbb{E}[Y^2]\mathbb V[X].\end{align*}

However, using the moment-generating function (24), we can compute the left- and right-hand sides numerically, for example for $\theta=\pi/4$ and $\lambda=2$, to see that the inequality is indeed satisfied.
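To make this final check transparent, recall that $X=R\cos\Phi$ and $Y=R\sin\Phi$, where $R$ is Rayleigh distributed with $\mathbb{P}(R>r)={\operatorname e }^{-\lambda\theta r^2}$ and $\Phi$ is uniform on $[-\theta,\theta]$, independent of $R$, so that all required moments factorize. The following minimal Python sketch evaluates both sides in closed form for $\theta=\pi/4$ and $\lambda=2$; it prints approximately $0.0176$ for the left-hand side and $0.0134$ for the right-hand side, confirming the strict inequality.

```python
import numpy as np

lam, theta = 2.0, np.pi / 4                # parameters of the numerical check
sigma = np.sqrt(1.0 / (2 * lam * theta))   # Rayleigh scale: P(R > r) = exp(-lam*theta*r^2)

# Moments of R (Rayleigh) and of Phi (uniform on [-theta, theta]).
ER, ER2, ER3 = sigma * np.sqrt(np.pi / 2), 2 * sigma ** 2, 3 * sigma ** 3 * np.sqrt(np.pi / 2)
Ecos = np.sin(theta) / theta
Ecos2 = 0.5 + np.sin(2 * theta) / (4 * theta)
Esin2 = 0.5 - np.sin(2 * theta) / (4 * theta)
Ecos_sin2 = np.sin(theta) ** 3 / (3 * theta)   # E[cos(Phi) sin^2(Phi)]

EX = ER * Ecos                # E[X],   X = R cos(Phi)
EY2 = ER2 * Esin2             # E[Y^2], Y = R sin(Phi)
EXY2 = ER3 * Ecos_sin2        # E[X Y^2]
VX = ER2 * Ecos2 - EX ** 2    # V[X]

lhs = (EXY2 - EX * EY2) * EX  # Cov(X, Y^2) E[X]
rhs = EY2 * VX                # E[Y^2] V[X]
print(lhs, rhs, lhs > rhs)    # approx. 0.0176 > 0.0134
```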

Acknowledgements

The authors would like to thank Christian Hirsch, Kumarjit Saha, Wolfgang König, and Alexander Zass for many fruitful discussions about the topic and for mentioning important references. The authors also thank the anonymous referees for their careful reading and detailed comments, which have helped to improve the paper.

Funding information

This work is supported by the Leibniz Association within the Leibniz Junior Research Group on Probabilistic Methods for Dynamic Communication Networks as part of the Leibniz Competition (grant no. J105/2020). The third author would also like to acknowledge support from the grant ERC NEMO (grant no. 788851) of the research group DYOGENE at INRIA Paris.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this paper.

References

Arcones, M. A. (2003). Large deviations of empirical processes. In High Dimensional Probability, III (Sandjberg, 2002), Vol. 55 of Progress in Probability. Birkhäuser, Basel, pp. 205–223.
Asmussen, S. (2003). Applied Probability and Queues, 2nd edn., Vol. 51 of Applications of Mathematics (New York), Stochastic Modelling and Applied Probability. Springer-Verlag, New York.
Baccelli, F. and Bordenave, C. (2007). The radial spanning tree of a Poisson point process. Ann. Appl. Probab. 17(1), 305–359.
Bonichon, N. and Marckert, J. F. (2011). Asymptotics of geometrical navigation on a random set of points in the plane. Adv. Appl. Probab. 43(4), 899–942.
Bordenave, C. (2008). Navigation on a Poisson point process. Ann. Appl. Probab. 18(2), 708–746.
Coupier, D., Saha, K., Sarkar, A. and Tran, V. C. (2021). The 2d-directed spanning forest converges to the Brownian web. Ann. Probab. 49(1), 435–484.
Dembo, A. and Zeitouni, O. (2010). Large Deviations Techniques and Applications, Vol. 38 of Stochastic Modelling and Applied Probability. Springer-Verlag, Berlin. Corrected reprint of the second (1998) edition.
Hollander, F. den (2000). Large Deviations, Vol. 14 of Fields Institute Monographs. American Mathematical Society, Providence, RI.
Eichelsbacher, P. and Löwe, M. (2003). Moderate deviations for i.i.d. random variables. ESAIM Probab. Stat. 7, 209–218.
Hirsch, C., Jahnel, B., Keeler, P. and Patterson, R. I. A. (2017). Traffic flow densities in large transport networks. Adv. Appl. Probab. 49(4), 1091–1115.
Howard, C. D. and Newman, C. M. (1997). Euclidean models of first-passage percolation. Probab. Theory Related Fields 108(2), 153–170.
Jahnel, B. and König, W. (2020). Probabilistic Methods in Telecommunications. Compact Textbooks in Mathematics. Birkhäuser, Cham.
Ledoux, M. (1992). Sur les déviations modérées des sommes de variables aléatoires vectorielles indépendantes de même loi. Ann. Inst. H. Poincaré Probab. Statist. 28(2), 267–280.
Rockafellar, R. T. (1967). Conjugates and Legendre transforms of convex functions. Canad. J. Math. 19, 200–205.
Roy, R., Saha, K. and Sarkar, A. (2016). Random directed forest and the Brownian web. Ann. Inst. Henri Poincaré Probab. Stat. 52(3), 1106–1143.
Roy, R., Saha, K. and Sarkar, A. (2023). Scaling limit of a drainage network model on perturbed lattice. arXiv:2302.09489.
Varadhan, S. R. S. (1984). Large Deviations and Applications, Vol. 46 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA.