
A subgeometric convergence formula for finite-level M/G/1-type Markov chains: via a block-decomposition-friendly solution to the Poisson equation of the deviation matrix

Published online by Cambridge University Press:  01 December 2023

Hiroyuki Masuyama*
Affiliation:
Tokyo Metropolitan University
Yosuke Katsumata*
Affiliation:
Kyoto University
Tatsuaki Kimura*
Affiliation:
Osaka University
*Postal address: Graduate School of Management, Tokyo Metropolitan University, Tokyo 192–0397, Japan. Email address: masuyama@tmu.ac.jp
**Postal address: Department of Systems Science, Graduate School of Informatics, Kyoto University, Kyoto 606–8501, Japan. Email address: katsumata@sys.i.kyoto-u.ac.jp
***Postal address: Department of Information and Communications Technology, Graduate School of Engineering, Osaka University, Suita 565–0871, Japan. Email address: kimura@comm.eng.osaka-u.ac.jp

Abstract

The purpose of this study is to present a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain when taking its infinite-level limit, where the upper boundary level goes to infinity. This study is carried out using the fundamental deviation matrix, which is a block-decomposition-friendly solution to the Poisson equation of the deviation matrix. The fundamental deviation matrix provides a difference formula for the respective stationary distributions of the finite-level chain and the corresponding infinite-level chain. The difference formula plays a crucial role in the derivation of the main result of this paper, which is used, for example, to derive an asymptotic formula for the loss probability in the MAP/GI/1/N queue.

Type
Original Article
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Finite-level M/G/1-type Markov chains are used for the stationary analysis of finite semi-Markovian queues (see e.g. [Reference Baiocchi3, Reference Baiocchi and Bléfari-Melazzi4, Reference Cuyt6, Reference Herrmann12]). However, it is not easy to derive a simple analytical expression for the stationary distribution of finite-level M/G/1-type Markov chains, except in some special cases, such as when the matrix generating functions of the level increments are rational [Reference Akar, Oğuz and Sohraby1, Reference Kim and Kim15].

There are related studies on the asymptotics of finite-level quasi-birth-and-death processes (QBDs) and finite-level GI/M/1-type Markov chains in the infinite-level limit, where the upper boundary level goes to infinity. As is well known, the finite-level QBD is a special case of the finite-level GI/M/1-type Markov chain as well as the finite-level M/G/1-type Markov chain. Miyazawa et al. [Reference Miyazawa, Sakuma and Yamaguchi35] presented an asymptotic formula for the stationary probability of a finite-level QBD being in the upper boundary level. The asymptotic formula is used to study the asymptotic behavior of the loss probability in the MAP/MSP/c/ $K+c$ queue. Kim and Kim [Reference Kim and Kim16] extended the asymptotic formula in [Reference Miyazawa, Sakuma and Yamaguchi35] to the finite-level GI/M/1-type Markov chain.

Some researchers have studied the infinite-level limit of finite-level M/G/1-type Markov chains, focusing on the asymptotics of the loss probability in finite M/G/1-type queues, such as queues with Markovian arrival processes. Ishizaki and Takine [Reference Ishizaki and Takine13] established a direct relationship between the respective stationary distributions of a special finite-level M/G/1-type Markov chain and its infinite-level-limit chain (the corresponding infinite-level M/G/1-type Markov chain). Using this direct relationship, the authors obtained the loss probability in a finite M/G/1-type queue with geometrically distributed off-periods. Baiocchi [Reference Baiocchi3] derived a geometric asymptotic formula for the loss probability in the MAP/G/1/K queue through the asymptotic analysis of a finite-level M/G/1-type Markov chain with light-tailed level increments. Liu and Zhao [Reference Liu and Zhao22] presented power-law asymptotic formulas for the loss probability in an M/G/1/N queue with vacations, where the embedded queue length process is a special finite-level M/G/1-type Markov chain with a single background state.

As we can see above, there are no previous studies on the exact convergence rate of the stationary distribution of a finite-level M/G/1-type Markov chain when taking its infinite-level limit; however, we can find other studies [Reference Masuyama28, Reference Masuyama29, Reference Masuyama31, Reference Masuyama32] with related but different perspectives. They are concerned with the upper bound for the error of the last-column-block-augmented (LCBA) truncation approximation of the stationary distribution in block-structured Markov chains, including infinite-level M/G/1-type Markov chains. Note that the finite-level M/G/1-type Markov chain can be viewed as the LCBA truncation approximation to an infinite-level M/G/1-type Markov chain. Thus, using the results of these previous studies, we can obtain upper bounds for the difference between the respective stationary distributions of the finite-level and (corresponding) infinite-level M/G/1-type Markov chains.

The purpose of this study is to present a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain in the infinite-level limit. Subgeometric convergence is much slower than geometric (exponential) convergence, and an example of it is polynomial convergence. The subgeometric convergence formula can be used, for example, to derive a subexponential asymptotic formula for the loss probability in the MAP/GI/1/N queues with and without vacations and their batch arrival models (i.e., fed by the batch Markovian arrival process (BMAP) [Reference Lucantoni23]). To save space, this paper presents the application only to the standard MAP/GI/1/N queue (without vacations).

The key to this study is a block-decomposition-friendly solution to the Poisson equation of the deviation matrix (see e.g. [Reference Coolen-Schrijner and van Doorn5]). We refer to the block-decomposition-friendly solution as the fundamental deviation matrix. The fundamental deviation matrix yields a difference formula for the respective stationary distributions of the finite-level M/G/1-type Markov chain and its infinite-level-limit chain. Furthermore, using the difference formula, we derive a subgeometric convergence formula for the stationary distribution of the finite-level M/G/1-type Markov chain as its upper boundary level goes to infinity. The subgeometric convergence formula requires a technical but not very restrictive condition (Assumption 5.10) for the subexponentiality of the integrated tail distribution of nonnegative level increments in steady state of the infinite-level-limit chain.

The rest of this paper consists of five sections. Section 2 describes infinite-level and finite-level M/G/1-type Markov chains. Section 3 discusses the second moment condition on the level increments of the infinite-level M/G/1-type Markov chain, which leads to the finiteness of the mean of the stationary distribution and to the finiteness of the mean first passage time to level zero. Section 4 introduces the fundamental deviation matrix of the infinite-level M/G/1-type Markov chain, and presents a closed block-decomposition form of the fundamental deviation matrix. Using the results of the previous sections, Section 5 proves the main theorem on the subgeometric convergence of the stationary distribution of the finite-level M/G/1-type Markov chain when we take its infinite-level limit. In addition, Section 5 provides an application of the main theorem to the subexponential asymptotics of the loss probability in the MAP/GI/1/N queue. Finally, Section 6 contains concluding remarks.

2. Model description

This section consists of three subsections. Section 2.1 provides basic definitions and notation. Sections 2.2 and 2.3 describe infinite-level and finite-level M/G/1-type Markov chains, respectively.

2.1. Basic definitions and notation

We begin by introducing symbols and notation for numbers. Let

\begin{align*}\mathbb{Z} &=\{0, \pm1, \pm2,\ldots\}, &\!\!\!\!\mathbb{Z}_+ &= \{0,1,2,\dots\}, &\!\!\!\!\mathbb{N} &= \{1,2,3,\dots\},\\\mathbb{Z}_{\ge k} &= \{n \in \mathbb{Z}\,:\, n \ge k\}, & \!\!\!\!k &\in \mathbb{Z},\\\mathbb{Z}_{[k,\ell]} &= \{n \in \mathbb{Z}\,:\, k \le n \le \ell\}, & \!\!\!\! k,\ell &\in \mathbb{Z},\, k \le \ell,\end{align*}

and let $\mathbb{M}_0$ and $\mathbb{M}_1$ denote

\[\mathbb{M}_0 = \mathbb{Z}_{[1,M_0]} = \{1, 2, \ldots,M_0\},\qquad \mathbb{M}_1 = \mathbb{Z}_{[1,M_1]} = \{1, 2, \ldots, M_1\},\]

respectively, where $M_0, M_1 \in \mathbb{N}$ . For $x,y \in \mathbb{R}\,:\!=\,({-}\infty,\infty)$ , let $x \wedge y = \min(x,y)$ and let $\delta_{k,\ell}$ , $k,\ell \in \mathbb{R}$ , denote the Kronecker delta; that is, $\delta_{k,k} = 1$ and $\delta_{k,\ell} = 0$ for $k \neq \ell$ . Furthermore, let $\mathbb{1}({\cdot})$ denote the indicator function that takes the value of one if the statement within the parentheses is true; otherwise it takes the value of zero.

Next we describe our notation for vectors and matrices. All matrices are denoted by bold uppercase letters, and in particular, ${\boldsymbol{O}}$ and ${\boldsymbol{I}}$ denote the zero matrix and the identity matrix, respectively, with appropriate sizes (i.e., with appropriate numbers of rows and columns). In general, all row vectors are denoted by bold Greek lowercase letters, except for ${\boldsymbol{g}}$ (this exception follows from the convention in the matrix analytic methods pioneered by Neuts [Reference Neuts36]); and all column vectors are denoted by bold English lowercase letters. In particular, ${\boldsymbol{e}}$ denotes the column vector of 1’s of appropriate size.

In addition, we introduce more definitions for vectors and matrices. For any matrix (or vector), the absolute value operator $|{\cdot}|$ works on it elementwise, and $({\cdot})_{i,j}$ (resp. $({\cdot})_{i}$ ) denotes the (i, j)th (resp. ith) element of the matrix (resp. vector) within the parentheses. We then denote by $\|{\cdot}\|$ the total variation norm for vectors. Thus, $\|{\boldsymbol{s}} \| = \sum_{j} |({\boldsymbol{s}})_j|$ for any vector ${\boldsymbol{s}}$ . Furthermore, for any matrix (or vector) function ${\boldsymbol{Z}}({\cdot})$ and scalar function $f({\cdot})$ on $\mathbb{R}$ , we use the notation ${\boldsymbol{Z}}(x) = \overline{\boldsymbol{\mathcal{O}}}(f(x)) $ and ${\boldsymbol{Z}}(x) = \underline{\boldsymbol{\mathcal{O}}}(f(x))$ :

\begin{align*}{\boldsymbol{Z}}(x) = \overline{\boldsymbol{\mathcal{O}}}(f(x))&\Longleftrightarrow\limsup_{x\to\infty} { \sup_i \sum_{j}|({\boldsymbol{Z}}(x))_{i,j}| \over f(x)}< \infty,\\{\boldsymbol{Z}}(x) = \underline{\boldsymbol{\mathcal{O}}}(f(x))&\Longleftrightarrow\lim_{x\to\infty}{ \sup_i \sum_{j}|({\boldsymbol{Z}}(x))_{i,j}| \over f(x)} = 0.\end{align*}

The symbols $\overline{\boldsymbol{\mathcal{O}}}({\cdot})$ and $\underline{\boldsymbol{\mathcal{O}}}({\cdot})$ are applied to vector functions; when applied to scalar functions, they are replaced by $O({\cdot})$ and $o({\cdot})$ (according to the standard notation). Finally, for any nonnegative matrix ${\boldsymbol{S}} \ge {\boldsymbol{O}}$ (including nonnegative vectors), we write ${\boldsymbol{S}} < \infty$ if every element of ${\boldsymbol{S}}$ is finite.

2.2. The infinite-level M/G/1-type Markov chain

This subsection introduces the infinite-level M/G/1-type Markov chain and the preliminary results concerning the chain which are used later in our analysis. First, we define the infinite-level M/G/1-type Markov chain along with the basic assumption for the existence and uniqueness of its stationary distribution. Next, we introduce the R- and G-matrices, which are helpful for our analysis in this study; for example, they allow us to describe Ramaswami’s recursion for the stationary distribution [Reference Ramaswami38].

We define the infinite-level M/G/1-type Markov chain as follows. Let $\{(X_n, J_n);\, n \in \mathbb{Z}_+\}$ denote a discrete-time Markov chain on state space $\mathbb{S}\,:\!=\, \bigcup_{k=0}^{\infty} \mathbb{L}_k$ , where $\mathbb{L}_k = \{k\} \times \mathbb{M}_{k \wedge 1}$ for $k \in \mathbb{Z}_+$ . The subset $\mathbb{L}_k$ of state space $\mathbb{S}$ is referred to as level k. Let ${\boldsymbol{P}}$ denote the transition probability matrix of the Markov chain $\{(X_n, J_n)\}$ , and assume that ${\boldsymbol{P}}$ is a stochastic matrix such that

(2.1) \begin{align}
{\boldsymbol{P}}
=
\begin{pmatrix}
{\boldsymbol{B}}(0) & {\boldsymbol{B}}(1) & {\boldsymbol{B}}(2) & {\boldsymbol{B}}(3) & \cdots \\
{\boldsymbol{B}}({-}1) & {\boldsymbol{A}}(0) & {\boldsymbol{A}}(1) & {\boldsymbol{A}}(2) & \cdots \\
{\boldsymbol{O}} & {\boldsymbol{A}}({-}1) & {\boldsymbol{A}}(0) & {\boldsymbol{A}}(1) & \cdots \\
{\boldsymbol{O}} & {\boldsymbol{O}} & {\boldsymbol{A}}({-}1) & {\boldsymbol{A}}(0) & \cdots \\
\vdots & \vdots & \vdots & \vdots & \ddots
\end{pmatrix},
\end{align}
where the block rows and block columns are indexed by the levels $\mathbb{L}_0, \mathbb{L}_1, \mathbb{L}_2, \ldots$ in order.

Therefore, the component block matrices ${\boldsymbol{A}}(k)$ and ${\boldsymbol{B}}(k)$ satisfy the following:

(2.2) \begin{align}\sum_{k=-1}^{\infty}{\boldsymbol{A}}(k){\boldsymbol{e}} = {\boldsymbol{e}},\qquad \sum_{k=0}^{\infty}{\boldsymbol{B}}(k){\boldsymbol{e}} = {\boldsymbol{e}},\end{align}
(2.3) \begin{align}{\boldsymbol{B}}({-}1){\boldsymbol{e}} = {\boldsymbol{A}}({-}1){\boldsymbol{e}}.\qquad\qquad\qquad\end{align}

The Markov chain $\{(X_n, J_n)\}$ is referred to as an M/G/1-type Markov chain (see [Reference Neuts36]). For convenience, we call this Markov chain the infinite-level M/G/1-type Markov chain or infinite-level chain for short, in order to distinguish it from its finite-level version (described later).

Throughout the paper (unless otherwise noted), we proceed under Assumption 2.1, the standard mean drift condition for the existence and uniqueness of the stationary distribution (see [Reference Asmussen2, Chapter XI, Proposition 3.1] and [Reference Zhao, Li and Braun40, Theorem 16]), as our basic assumption.

Assumption 2.1. (Standard mean drift condition.) Let

(2.4) \begin{eqnarray}{\boldsymbol{A}} = \sum_{k=-1}^{\infty}{\boldsymbol{A}}(k),\qquad \overline{{\boldsymbol{m}}}_{A} = \sum_{k=-1}^{\infty} k {\boldsymbol{A}}(k){\boldsymbol{e}},\end{eqnarray}

where ${\boldsymbol{A}}$ is a stochastic matrix owing to (2.2). Suppose that the following conditions are satisfied: (i) the stochastic matrices ${\boldsymbol{A}}$ and ${\boldsymbol{P}}$ (given in (2.1)) are irreducible; (ii) $\overline{{\boldsymbol{m}}}_{B}\,:\!=\,\sum_{k=1}^{\infty} k{\boldsymbol{B}}(k){\boldsymbol{e}} < \infty$ ; and (iii) $\sigma \,:\!=\,\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A} < 0$ , where $\boldsymbol{\varpi}$ denotes the unique stationary distribution vector of ${\boldsymbol{A}}$ .

Remark 2.2. The mean drift conditions [Reference Asmussen2, Chapter XI, Proposition 3.1] and [Reference Zhao, Li and Braun40, Theorem 16] are the same and are established for the positive recurrence (or equivalently, for the existence and uniqueness of the stationary distribution) of the irreducible GI/G/1-type Markov chain, which is a generalization of the M/G/1-type Markov chain considered in this paper. Both imply that if the condition (iii) of Assumption 2.1 is weakened so that $\sigma =\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A} \le 0$ , then the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and recurrent. We note that a stronger mean drift condition (than Assumption 2.1) for the M/G/1-type Markov chain is given in [Reference Neuts36, Theorem 3.2.1], where the G-matrix is assumed to be irreducible, and this assumption is more restrictive than the irreducibility of ${\boldsymbol{A}}$ .
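As a small numerical illustration of the drift condition, the quantities in (2.4) can be computed directly once the blocks ${\boldsymbol{A}}(k)$ are specified. The following Python sketch uses hypothetical blocks ${\boldsymbol{A}}({-}1)$, ${\boldsymbol{A}}(0)$, ${\boldsymbol{A}}(1)$ (with $M_1 = 2$), chosen only so that ${\boldsymbol{A}}$ is an irreducible stochastic matrix.

import numpy as np

# Hypothetical blocks of the level-homogeneous part (M_1 = 2).
A_blocks = {-1: np.array([[0.4, 0.1], [0.2, 0.3]]),
             0: np.array([[0.1, 0.2], [0.1, 0.2]]),
             1: np.array([[0.1, 0.1], [0.1, 0.1]])}

A = sum(A_blocks.values())                                          # A in (2.4)
m_A = sum(k * blk for k, blk in A_blocks.items()) @ np.ones(2)      # \overline{m}_A in (2.4)

# Stationary vector varpi of A: solve varpi (I - A) = 0 with varpi e = 1.
M = np.vstack([(np.eye(2) - A).T, np.ones(2)])
varpi = np.linalg.lstsq(M, np.r_[0.0, 0.0, 1.0], rcond=None)[0]

sigma = varpi @ m_A
print("sigma =", sigma, "(condition (iii) of Assumption 2.1 requires sigma < 0)")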

Assumption 2.1 ensures that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and positive recurrent, and therefore this chain has a unique stationary distribution vector, denoted by $\boldsymbol{\pi} = (\pi(k,i))_{(k,i)\in\mathbb{S}}$ . For later convenience, $\boldsymbol{\pi}$ is partitioned level-wise: $\boldsymbol{\pi} = (\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),{\dots})$ , where $\boldsymbol{\pi}(k) = (\pi(k,i))_{i \in \mathbb{M}_{k \wedge 1}}$ for $k \in \mathbb{Z}_+$ .

We conclude this subsection by defining the G- and R-matrices and describing Ramaswami’s recursion for the stationary distribution vector $\boldsymbol{\pi}$ . Let ${\boldsymbol{G}}\,:\!=\,(G_{i,j})_{i,j\in\mathbb{M}_1}$ denote an $M_1 \times M_1$ matrix such that

\[G_{i,j}=\mathbb{P} \big(J_{\tau_k} = j \mid (X_0, J_0) = (k+1, i) \in \mathbb{L}_{\ge 2}\big),\]

where $\tau_k = \inf\{n \in \mathbb{N}\,:\, X_n = k\}$ and $\mathbb{L}_{\ge k}= \bigcup_{\ell=k}^{\infty} \mathbb{L}_{\ell}$ for $k \in \mathbb{Z}_+$ . The conditions (i) and (iii) of Assumption 2.1 ensure that ${\boldsymbol{G}}$ is a stochastic matrix with a single closed communicating class [Reference Kimura, Daikoku, Masuyama and Takahashi17, Proposition 2.1] and thus a unique stationary distribution vector, denoted by ${\boldsymbol{g}}$ . In addition, using the G-matrix ${\boldsymbol{G}}$ , we define ${\boldsymbol{R}}_0(k)$ and ${\boldsymbol{R}}(k)$ , $k \in \mathbb{N}$ , as

(2.5) \begin{align}{\boldsymbol{R}}_0(k) =\sum_{m=k}^{\infty}{\boldsymbol{B}}(m){\boldsymbol{G}}^{m-k} ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1}, \quad k \in \mathbb{N},\end{align}
(2.6) \begin{align}{\boldsymbol{R}}(k)&=\sum_{m=k}^{\infty}{\boldsymbol{A}}(m){\boldsymbol{G}}^{m-k} ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1}, \quad k \in \mathbb{N},\end{align}

respectively, where

(2.7) \begin{equation}\boldsymbol{\Phi}(0)= \sum_{m=0}^{\infty} {\boldsymbol{A}}(m){\boldsymbol{G}}^{m}.\end{equation}

We then describe Ramaswami’s recursion [Reference Ramaswami38]:

(2.8) \begin{equation}\boldsymbol{\pi}(k) = \boldsymbol{\pi}(0){\boldsymbol{R}}_0(k)+ \sum_{\ell=1}^{k-1}\boldsymbol{\pi}(\ell){\boldsymbol{R}}(k-\ell),\quad k \in \mathbb{N}.\end{equation}

This is used later to prove Theorem 3.4.
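The recursion (2.8) is straightforward to evaluate numerically once ${\boldsymbol{G}}$ is available: ${\boldsymbol{G}}$ is the minimal nonnegative solution of ${\boldsymbol{G}} = \sum_{m=-1}^{\infty}{\boldsymbol{A}}(m){\boldsymbol{G}}^{m+1}$ (cf. (3.19) below with $z=1$) and can be approximated by successive substitution. The following Python sketch is only an illustration: the function names are ours, the blocks ${\boldsymbol{A}}(k)$ and ${\boldsymbol{B}}(k)$ are assumed to be supplied as dictionaries of numpy arrays truncated at some maximal index, and $\boldsymbol{\pi}(0)$ is assumed to have been obtained separately from the boundary equations.

import numpy as np

def compute_G(A, n_iter=2000):
    # Successive substitution G <- sum_{m >= -1} A(m) G^{m+1} (cf. (3.19) with z = 1).
    # A: dict {m: (M1 x M1) array}, truncated at its largest key.
    M1 = A[-1].shape[0]
    G = np.zeros((M1, M1))
    for _ in range(n_iter):
        G = sum(A[m] @ np.linalg.matrix_power(G, m + 1) for m in A)
    return G

def ramaswami(A, B, pi0, n_levels):
    # Return {k: pi(k)} for k = 0, ..., n_levels via Ramaswami's recursion (2.8).
    # B must contain blocks B(m) at least up to m = n_levels.
    G = compute_G(A)
    M1 = G.shape[0]
    M0 = B[0].shape[0]
    Phi0 = sum(A[m] @ np.linalg.matrix_power(G, m) for m in A if m >= 0)           # (2.7)
    inv = np.linalg.inv(np.eye(M1) - Phi0)
    R0 = {k: sum((B[m] @ np.linalg.matrix_power(G, m - k) for m in B if m >= k),
                 np.zeros((M0, M1))) @ inv for k in range(1, n_levels + 1)}         # (2.5)
    R = {k: sum((A[m] @ np.linalg.matrix_power(G, m - k) for m in A if m >= k),
                np.zeros((M1, M1))) @ inv for k in range(1, n_levels + 1)}          # (2.6)
    pi = {0: np.asarray(pi0, dtype=float)}
    for k in range(1, n_levels + 1):
        pi[k] = pi[0] @ R0[k] + sum(pi[l] @ R[k - l] for l in range(1, k))
    return pi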

2.3. The finite-level M/G/1-type Markov chain

This subsection introduces the finite-level M/G/1-type Markov chain and the fundamental results on its stationary distribution. We begin with the definition of the finite-level M/G/1-type Markov chain and then present a proposition on the uniqueness of its stationary distribution. We close by providing the definitions and convention associated with the finite-level M/G/1-type Markov chain and its stationary distribution. Throughout the paper, the symbol N is assumed to be an arbitrary positive integer, unless otherwise noted.

We provide the definition of the finite-level M/G/1-type Markov chain. For any $N \in \mathbb{N}$ , let $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big);\, n\in\mathbb{Z}_+\Big\}$ denote a discrete-time Markov chain on state space $\mathbb{L}_{\le N}\,:\!=\, \bigcup_{\ell=0}^N \mathbb{L}_{\ell}$ with transition probability matrix ${\boldsymbol{P}}^{(N)}$ given by

(2.9) \begin{align}
{\boldsymbol{P}}^{(N)}
=
\begin{pmatrix}
{\boldsymbol{B}}(0) & {\boldsymbol{B}}(1) & \cdots & {\boldsymbol{B}}(N-1) & \overline{{\boldsymbol{B}}}(N-1) \\
{\boldsymbol{B}}({-}1) & {\boldsymbol{A}}(0) & \cdots & {\boldsymbol{A}}(N-2) & \overline{{\boldsymbol{A}}}(N-2) \\
{\boldsymbol{O}} & {\boldsymbol{A}}({-}1) & \cdots & {\boldsymbol{A}}(N-3) & \overline{{\boldsymbol{A}}}(N-3) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
{\boldsymbol{O}} & {\boldsymbol{O}} & \cdots & {\boldsymbol{A}}(0) & \overline{{\boldsymbol{A}}}(0) \\
{\boldsymbol{O}} & {\boldsymbol{O}} & \cdots & {\boldsymbol{A}}({-}1) & \overline{{\boldsymbol{A}}}({-}1)
\end{pmatrix},
\end{align}

where

(2.10a) \begin{align}\overline{{\boldsymbol{A}}}(k) = \sum_{\ell=k+1}^{\infty} {\boldsymbol{A}}(\ell), \quad k \in \mathbb{Z}_{\ge -2},\end{align}
(2.10b) \begin{align}\overline{{\boldsymbol{B}}}(k) = \sum_{\ell=k+1}^{\infty} {\boldsymbol{B}}(\ell), \quad k \in \mathbb{Z}_+.\end{align}

The Markov chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ is referred to as the finite-level M/G/1-type Markov chain or finite-level chain for short. Without loss of generality, we assume that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ is defined on the same probability space as that of the infinite-level chain $\{(X_n,J_n)\}$ .

Remark 2.3. The stochastic matrix ${\boldsymbol{P}}^{(N)}$ can be considered the last-column-block-augmented (LCBA) truncation of ${\boldsymbol{P}}$ (see [Reference Masuyama28, Reference Masuyama29, Reference Masuyama31, Reference Masuyama32]).
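As an illustration of Remark 2.3 and of the block pattern in (2.9)–(2.10), ${\boldsymbol{P}}^{(N)}$ can be assembled directly from the blocks of ${\boldsymbol{P}}$ by augmenting the last block column. The following Python sketch is hypothetical in its helper name and in truncating the infinite sums in (2.10) at the largest stored block index; the block shapes follow (2.1).

import numpy as np

def lcba_truncation(A, B, N):
    # Assemble P^(N) in (2.9) for N >= 1 (the LCBA truncation of P at level N).
    # A: dict of (M1 x M1) blocks A(m), m >= -1; B: dict with B(-1) of size (M1 x M0),
    # B(0) of size (M0 x M0), and B(m), m >= 1, of size (M0 x M1).
    M1 = A[-1].shape[0]
    M0 = B[0].shape[0]
    A_bar = lambda k: sum((A[l] for l in A if l >= k + 1), np.zeros((M1, M1)))  # (2.10a)
    B_bar = lambda k: sum((B[l] for l in B if l >= k + 1), np.zeros((M0, M1)))  # (2.10b)
    def blk(k, l):                          # block of P^(N) from level k to level l
        if k == 0:
            return B_bar(N - 1) if l == N else B.get(l, np.zeros((M0, M1)))
        if l == 0:
            return B[-1] if k == 1 else np.zeros((M1, M0))
        if l == N:
            return A_bar(N - k - 1)
        return A.get(l - k, np.zeros((M1, M1)))
    return np.vstack([np.hstack([blk(k, l) for l in range(N + 1)])
                      for k in range(N + 1)])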

Proposition 2.4 presents the basic results on the recurrence of $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ and the uniqueness of its stationary distribution.

Proposition 2.4. If ${\boldsymbol{P}}$ is irreducible, then the following statements hold:

  1. (i) For all sufficiently large $N \in \mathbb{N}$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ can reach any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ .

  2. (ii) If ${\boldsymbol{P}}$ is recurrent, then, for each $N \in \mathbb{N}$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ reaches level zero $\mathbb{L}_0$ from any state in its state space $\mathbb{L}_{\le N}$ with probability one.

  3. (iii) If ${\boldsymbol{P}}$ is positive recurrent, then the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has a unique stationary distribution vector for all sufficiently large $N \in \mathbb{N}$ .

Proof. See Appendix A.

Remark 2.5. According to Remark 2.2, ${\boldsymbol{P}}$ is recurrent under the conditions (i) and (ii) of Assumption 2.1 together with $\sigma \le 0$ , and ${\boldsymbol{P}}$ is positive recurrent under Assumption 2.1.

For later use, we introduce some definitions associated with the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ . Let $\boldsymbol{\pi}^{(N)}\,:\!=\,\big(\pi^{(N)}(k,i)\big)_{(k,i)\in\mathbb{L}_{\le N}}$ denote the stationary distribution vector, which is uniquely determined for all sufficiently large $N \in \mathbb{N}$ (see Proposition 2.4). By definition,

(2.11) \begin{equation}\boldsymbol{\pi}^{(N)}{\boldsymbol{P}}^{(N)} = \boldsymbol{\pi}^{(N)},\quad \boldsymbol{\pi}^{(N)}{\boldsymbol{e}} = 1,\quad \boldsymbol{\pi}^{(N)} \ge {\textbf{0}}.\end{equation}

For later use, $\boldsymbol{\pi}^{(N)}$ is partitioned as

\[\boldsymbol{\pi}^{(N)} = \big(\boldsymbol{\pi}^{(N)}(0),\boldsymbol{\pi}^{(N)}(1),\dots,\boldsymbol{\pi}^{(N)}(N)\big),\]

where $\boldsymbol{\pi}^{(N)}(k) = \big(\pi^{(N)}(k,i)\big)_{i \in \mathbb{M}_{k \wedge 1}}$ for $k \in \mathbb{Z}_{[0,N]}$ .

Finally, we introduce our convention for performing calculations (such as addition and multiplication) over finite- and infinite-dimensional matrices (including vectors). As mentioned in the introduction, the main purpose of this paper is to study the convergence of $\big\{\boldsymbol{\pi}^{(N)};\, N \in\mathbb{N}\big\}$ . Thus, we consider the situation where the probability vectors $\boldsymbol{\pi}^{(N)}$ , $N \in\mathbb{N}$ , of different finite dimensions converge to a certain probability vector of infinite dimension (which is expected to be equal to $\boldsymbol{\pi}$ ). To facilitate this study, the following convention is introduced: a finite-dimensional matrix (or vector) is extended (if necessary) to an infinite-dimensional matrix by appending zeros to it, keeping its original elements in their original positions. Following this convention, for example, $\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}$ and ${\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}$ are well-defined.
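Under this convention, quantities such as the total variation distance $\|\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}\|$ are computed by zero-padding the shorter vector. A minimal Python sketch (the numbers are hypothetical, and $\boldsymbol{\pi}$ is itself truncated at a finite length for purely numerical purposes):

import numpy as np

def tv_distance(pi_N, pi, dim):
    # Zero-pad both vectors to length 'dim' and return ||pi_N - pi|| = sum_j |(pi_N - pi)_j|.
    pad = lambda v: np.pad(np.asarray(v, dtype=float), (0, dim - len(v)))
    return np.abs(pad(pi_N) - pad(pi)).sum()

# Hypothetical example: pi^(N) on 3 states, pi (numerically truncated) on 5 states.
print(tv_distance([0.5, 0.3, 0.2], [0.45, 0.25, 0.15, 0.1, 0.05], 5))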

3. Second-order moment condition on level increments in the infinite-level chain

This section introduces a condition on the level increments of the infinite-level chain (Assumption 3.1 below). For convenience, we call it the second-order moment condition. As shown later, the second-order moment condition ensures that the mean of the stationary distribution is finite and that the mean first passage time to level zero is finite; these facts lead to the proof of the convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ .

Assumption 3.1. (Second-order moment condition on level increments.)

\[\sum_{k=1}^{\infty} k^2 {\boldsymbol{A}}(k) < \infty,\quad \sum_{k=1}^{\infty} k^2 {\boldsymbol{B}}(k) < \infty.\]

Remark 3.2. Assumption 3.1 implies that

(3.1) \begin{align}{\boldsymbol{A}}(k) = \underline{\boldsymbol{\mathcal{O}}}\big(k^{-2}\big),\quad {\boldsymbol{B}}(k) = \underline{\boldsymbol{\mathcal{O}}}\big(k^{-2}\big).\end{align}

3.1. Finiteness of the mean of the stationary distribution

In this subsection, we first establish a certain Foster–Lyapunov drift condition (called the drift condition for short). Using this drift condition, we show that the second-order moment condition is equivalent to $\sum_{k=1}^{\infty}k\boldsymbol{\pi}(k){\boldsymbol{e}} < \infty$ , and also show that the second-order moment condition implies $\sup_{N \in \mathbb{N}}\sum_{k=1}^N k \boldsymbol{\pi}^{(N)}(k) {\boldsymbol{e}} < \infty$ .

Consider the Poisson equation

(3.2) \begin{equation}({\boldsymbol{I}} - {\boldsymbol{A}}){\boldsymbol{x}}= -\sigma{\boldsymbol{e}} + \overline{{\boldsymbol{m}}}_{A},\end{equation}

to establish the desired drift condition under Assumption 3.1. Let ${\boldsymbol{a}}$ denote

(3.3) \begin{equation}{\boldsymbol{a}} = ({\boldsymbol{I}} - {\boldsymbol{A}} + {\boldsymbol{e}}\boldsymbol{\varpi})^{-1}\overline{{\boldsymbol{m}}}_{A}+ c {\boldsymbol{e}},\end{equation}

where $c\in \mathbb{R}$ is an arbitrary constant. It then follows from ${\boldsymbol{A}}{\boldsymbol{e}}={\boldsymbol{e}}$ , $\sigma=\boldsymbol{\varpi}\overline{{\boldsymbol{m}}}_{A}$ , and $\boldsymbol{\varpi}({\boldsymbol{I}} - {\boldsymbol{A}} + {\boldsymbol{e}}\boldsymbol{\varpi})^{-1}=\boldsymbol{\varpi}$ that

(3.4) \begin{align}({\boldsymbol{I}} - {\boldsymbol{A}}){\boldsymbol{a}}= -\sigma{\boldsymbol{e}} + \overline{{\boldsymbol{m}}}_{A}.\end{align}

Therefore, ${\boldsymbol{a}}$ is a solution to the Poisson equation (3.2) (see e.g. [Reference Dendievel, Latouche and Liu7, Reference Makowski and Shwartz25]) and is unique up to constant multiples (see [Reference Glynn and Meyn10, Proposition 1.1]). For later use, fix $c > 0$ sufficiently large so that ${\boldsymbol{a}} \ge {\textbf{0}}$ , and then let ${\boldsymbol{v}}\,:\!=\,(v(k,i))_{(k,i)\in\mathbb{S}}$ and ${\boldsymbol{f}}\,:\!=\,(f(k,i))_{(k,i)\in\mathbb{S}}$ denote nonnegative column vectors such that

(3.5) \begin{align}{\boldsymbol{v}}(k)\,:\!=\, (v(k,i))_{i\in\mathbb{M}_{k\wedge1}}&=\left\{\begin{array}{l@{\quad}l}{\textbf{0}}, & k =0,\\[5pt]\displaystyle{1 \over -\sigma} \left(k^2{\boldsymbol{e}} + 2k{\boldsymbol{a}} \right), & k \in\mathbb{N},\end{array}\right.\end{align}
(3.6) \begin{align}\boldsymbol{f}(k)\,:\!=\, (f(k,i))_{i\in\mathbb{M}_{k\wedge1}} =\left\{\begin{array}{l@{\quad}l}\boldsymbol{e}, & k=0,\\[5pt](k+1)\boldsymbol{e}, & k \in\mathbb{N},\end{array}\right.\qquad\qquad\end{align}

respectively, where the size $M_0$ of ${\boldsymbol{v}}(0)$ and ${\boldsymbol{f}}(0)$ is, in general, different from the size $M_1$ of ${\boldsymbol{v}}(k)$ and ${\boldsymbol{f}}(k)$ , $k \in \mathbb{N}$ . Furthermore, let $\textbf{1}_{\mathbb{C}}\,:\!=\,(1_{\mathbb{C}}(k,i))_{(k,i)\in\mathbb{S}}$ , $\mathbb{C} \subseteq \mathbb{S}$ , denote a column vector such that

\[1_{\mathbb{C}}(k,i) =\left\{\begin{array}{l@{\quad}l}1, & (k,i) \in \mathbb{C},\\[5pt]0, & (k,i) \not\in \mathbb{C}.\end{array}\right.\]

When $\mathbb{C}$ consists of a single state $(\ell,j) \in \mathbb{S}$ , that is, $\mathbb{C} = \{(\ell,j)\}$ , we write $\textbf{1}_{(\ell,j)}$ for $\textbf{1}_{\{(\ell,j)\}}$ .
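The solution (3.3) and the verification (3.4) are straightforward to check numerically. A minimal Python sketch, assuming ${\boldsymbol{A}}$, $\boldsymbol{\varpi}$, and $\overline{{\boldsymbol{m}}}_{A}$ are supplied as numpy arrays satisfying Assumption 2.1 (the function names are ours):

import numpy as np

def poisson_solution(A, varpi, m_A, c=0.0):
    # a = (I - A + e varpi)^{-1} m_A + c e, cf. (3.3).
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A + np.outer(np.ones(n), varpi), m_A) + c

def check_poisson(A, varpi, m_A):
    # Verify (3.4): (I - A) a = -sigma e + m_A, where sigma = varpi m_A.
    a = poisson_solution(A, varpi, m_A)
    sigma = varpi @ m_A
    n = A.shape[0]
    return np.allclose((np.eye(n) - A) @ a, -sigma * np.ones(n) + m_A)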

The following lemma presents our desired drift condition.

Lemma 3.3. If Assumptions 2.1 and 3.1 hold, then there exist some $b \in (0,\infty)$ and $K \in \mathbb{N}$ such that

(3.7) \begin{align}{\boldsymbol{P}}^{(N)}{\boldsymbol{v}}\le{\boldsymbol{P}}{\boldsymbol{v}}\le {\boldsymbol{v}} - {\boldsymbol{f}} + b \textbf{1}_{\mathbb{L}_{\le K}}\quad for\ all\ N\ \in \mathbb{N},\end{align}

where $\mathbb{L}_{\le k} = \bigcup_{\ell=0}^k \mathbb{L}_{\ell}$ for $k \in \mathbb{Z}_+$ .

Proof. To prove this lemma, it suffices to show that

(3.8a) \begin{align}\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell) & < \infty \qquad\qquad\quad\, \text{for all}\ k \in\ \mathbb{Z}_+,\quad\qquad\qquad\qquad\end{align}
(3.8b) \begin{align}\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell)&\le {\boldsymbol{v}}(k) - {\boldsymbol{f}}(k) \quad \quad \text{for all sufficiently large}\ k \in\ \mathbb{Z}_+,\end{align}

where ${\boldsymbol{P}}(k;\,\ell)$ , $k,\ell \in \mathbb{Z}_+$ , denotes a submatrix of ${\boldsymbol{P}}$ that contains the transition probabilities from level k to level $\ell$ . Indeed, (3.8) implies that there exist some $b \in (0,\infty)$ and $K \in \mathbb{N}$ such that

\begin{equation*}{\boldsymbol{P}}{\boldsymbol{v}}\le {\boldsymbol{v}} - {\boldsymbol{f}} + b \textbf{1}_{\mathbb{L}_{\le K}}.\end{equation*}

Furthermore, ${\boldsymbol{P}}^{(N)}{\boldsymbol{v}} \le {\boldsymbol{P}}{\boldsymbol{v}}$ for $N \in \mathbb{N}$ , which follows from (2.1) and (2.9) together with the fact that ${\boldsymbol{v}}(1) \le {\boldsymbol{v}}(2) \le {\boldsymbol{v}}(3) \le \cdots$ owing to (3.5).

First, we prove (3.8a). It follows from (2.1), (2.4), and (3.5) that, for all $k \in \mathbb{Z}_{\ge 2}$ ,

(3.9) \begin{align}&\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell)\nonumber\\&\quad =\sum_{\ell=-1}^{\infty} {\boldsymbol{A}}(\ell){\boldsymbol{v}}(k+\ell)={1 \over -\sigma}\left[\sum_{\ell=-1}^{\infty} (k+\ell)^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} (k+\ell) {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right]\nonumber\\&\quad ={1 \over -\sigma}\left[ k^2 {\boldsymbol{e}}+ 2 k \left(\overline{{\boldsymbol{m}}}_{A} + {\boldsymbol{A}}{\boldsymbol{a}}\right)\right]+ {1 \over -\sigma}\left[\sum_{\ell=-1}^{\infty} \ell^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} \ell {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right]\nonumber\\&\quad ={1 \over -\sigma}\left[k^2 {\boldsymbol{e}} + 2k ( {\boldsymbol{a}} + \sigma{\boldsymbol{e}} )\right]+{1 \over -\sigma}\left[\sum_{\ell=-1}^{\infty} \ell^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} \ell {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right],\end{align}

where the last equality holds because $\overline{{\boldsymbol{m}}}_{A} + {\boldsymbol{A}}{\boldsymbol{a}} ={\boldsymbol{a}} + \sigma{\boldsymbol{e}}$ owing to (3.4). It also follows from (3.9) and Assumption 3.1 that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell)$ is finite for each $k \in \mathbb{Z}_{\ge 2}$ . Similarly, we can confirm that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(0;\,\ell){\boldsymbol{v}}(\ell)$ and $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(1;\,\ell){\boldsymbol{v}}(\ell)$ are finite.

Next, we prove (3.8b). Using (3.5) and (3.6), we rewrite (3.9) as

(3.10) \begin{align}&\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}(\ell)\nonumber\\&\,= {\boldsymbol{v}}(k) - 2k{\boldsymbol{e}}+ {1 \over -\sigma}\left[\sum_{\ell=-1}^{\infty} \ell^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} \ell {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right]\nonumber\\&\,= {\boldsymbol{v}}(k) - {\boldsymbol{f}}(k) - (k-1){\boldsymbol{e}}+{1 \over -\sigma}\left[ \sum_{\ell=-1}^{\infty} \ell^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} \ell {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right],\,\, k \in \mathbb{Z}_{\ge 2}.\end{align}

Clearly, there exists some $K \in \mathbb{N}$ such that

\[- (k-1){\boldsymbol{e}}+{1 \over -\sigma}\left[ \sum_{\ell=-1}^{\infty} \ell^2 {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ 2\sum_{\ell=-1}^{\infty} \ell {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right]\le {\textbf{0}}\quad {for\ all\ k\ \in\ \mathbb{Z}_{\ge K+1}}.\]

Combining this and (3.10) results in (3.8b). The proof is complete.

Lemma 3.3 leads to the results on the finite means of the stationary distribution vectors $\boldsymbol{\pi}$ and $\boldsymbol{\pi}^{(N)}$ of the infinite-level and finite-level chains.

Theorem 3.4. Under Assumption 2.1, the following are true:

  1. (i) Assumption 3.1 holds if and only if

    (3.11) \begin{eqnarray}\sum_{k=1}^{\infty} k \boldsymbol{\pi}(k) {\boldsymbol{e}} &<& \infty.\end{eqnarray}
  2. (ii) If Assumption 3.1 holds, then

    (3.12) \begin{eqnarray}\sup_{N \in \mathbb{N}}\sum_{k=1}^N k \boldsymbol{\pi}^{(N)}(k) {\boldsymbol{e}} &<& \infty.\end{eqnarray}

Proof. The ‘only if’ part of the statement (i), that is, ‘Assumption 3.1 implies (3.11)’, can be proved in a similar way to the proof of the statement (ii). Thus in what follows, we prove the statement (ii) and the ‘if’ part of the statement (i).

We prove the statement (ii). Suppose that Assumption 3.1 holds (in addition to Assumption 2.1) and thus Lemma 3.3 holds. Pre-multiplying both sides of (3.7) by $\boldsymbol{\pi}^{(N)}$ and using (2.11), we obtain

\begin{align*}\boldsymbol{\pi}^{(N)}{\boldsymbol{v}}&= \boldsymbol{\pi}^{(N)} {\boldsymbol{P}}^{(N)}{\boldsymbol{v}}\le \boldsymbol{\pi}^{(N)} {\boldsymbol{v}} - \boldsymbol{\pi}^{(N)}{\boldsymbol{f}} + b,\end{align*}

and thus $\boldsymbol{\pi}^{(N)}{\boldsymbol{f}} \le b$ for all $N \in \mathbb{N}$ . Combining this inequality and (3.6) yields (3.12). The statement (ii) has been proved.

We prove the ‘if’ part of the statement (i). To this end, suppose that (3.11) holds. It then follows from (2.8) that

\begin{align*}\infty> \sum_{k=1}^{\infty} k\boldsymbol{\pi}(k){\boldsymbol{e}}&=\boldsymbol{\pi}(0) \sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}}+ \sum_{k=1}^{\infty} k\sum_{\ell=1}^{k-1} \boldsymbol{\pi}(\ell) {\boldsymbol{R}}(k-\ell){\boldsymbol{e}}\nonumber\\&=\boldsymbol{\pi}(0) \sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}}+\sum_{\ell=1}^{\infty} \boldsymbol{\pi}(\ell)\sum_{k=\ell+1}^{\infty} k {\boldsymbol{R}}(k-\ell){\boldsymbol{e}}\nonumber\\&\ge\boldsymbol{\pi}(0) \sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}}+\sum_{\ell=1}^{\infty} \boldsymbol{\pi}(\ell)\sum_{k=\ell+1}^{\infty} (k-\ell) {\boldsymbol{R}}(k-\ell){\boldsymbol{e}},\end{align*}

which yields $\sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}} < \infty$ and $\sum_{k=1}^{\infty} k{\boldsymbol{R}}(k){\boldsymbol{e}}< \infty$ . It also follows from (2.6), ${\boldsymbol{G}}{\boldsymbol{e}}={\boldsymbol{e}}$ , and $({\boldsymbol{I}} - \boldsymbol{\Phi}(0))^{-1}{\boldsymbol{e}} \ge {\boldsymbol{e}}$ that

\begin{align*}{\boldsymbol{R}}(k){\boldsymbol{e}}\ge \sum_{\ell=k}^{\infty} {\boldsymbol{A}}(\ell) {\boldsymbol{G}}^{\ell-k}{\boldsymbol{e}}= \sum_{\ell=k}^{\infty} {\boldsymbol{A}}(\ell){\boldsymbol{e}},\quad k \in \mathbb{N},\end{align*}

which leads to

\begin{eqnarray*}\infty > \sum_{k=1}^{\infty} k{\boldsymbol{R}}(k){\boldsymbol{e}}&\ge&\sum_{k=1}^{\infty} k\sum_{\ell=k}^{\infty}{\boldsymbol{A}}(\ell){\boldsymbol{e}}={1 \over 2}\sum_{\ell=1}^{\infty} \ell(\ell+1){\boldsymbol{A}}(\ell){\boldsymbol{e}},\end{eqnarray*}

and thus $\sum_{\ell=1}^{\infty} \ell^2{\boldsymbol{A}}(\ell){\boldsymbol{e}} < \infty$ . Similarly, we can prove that $\sum_{\ell=1}^{\infty} \ell^2{\boldsymbol{B}}(\ell){\boldsymbol{e}} < \infty$ by combining (2.5) and $\sum_{k=1}^{\infty} k{\boldsymbol{R}}_0(k){\boldsymbol{e}} < \infty$ . Consequently, the ‘if’ part of the statement (i) has been proved.

3.2. Finiteness of the mean first passage time to level zero

This subsection presents basic results on the mean first passage time to level zero of the infinite-level chain. First, we introduce some definitions needed to derive these results. We then show that the mean first passage time to level zero is asymptotically linear in the initial level. We also show that the second moment condition (Assumption 3.1) is equivalent to the finiteness of the mean first passage time to level zero in steady state. These results are used in the following sections (some of them might be in the literature, but the authors have not been able to find them and therefore provide them, together with their proofs, for the convenience of the reader).

We present some definitions related to the mean first passage time to level zero. Let ${\boldsymbol{u}}(k)\,:\!=\,(u(k,i))_{i\in\mathbb{M}_{k\wedge1}}$ , $k\in\mathbb{Z}_+$ , denote a column vector such that

(3.13) \begin{equation}u(k,i) = \mathbb{E}_{(k,i)}[\tau_0] \ge 1,\quad (k,i) \in \mathbb{S},\end{equation}

where $\tau_k = \inf\{n \in \mathbb{N}\,:\, X_n = k\}$ and $\mathbb{E}_{(k,i)}[{\cdot}] = \mathbb{E}[{\cdot}\mid (X_0, J_0)=(k,i)]$ for $k \in \mathbb{Z}_+$ and $i \in \mathbb{M}_{k \wedge 1}$ . Let $\widehat{{\boldsymbol{G}}}_1(z)$ , $z \in [0,1]$ , denote an $M_1 \times M_0$ matrix such that

(3.14) \begin{align}\big(\widehat{{\boldsymbol{G}}}_1(z)\big)_{i,j} = \mathbb{E}_{(1,i)} \!\left [z^{\tau_0}\mathbb{1}\big(J_{\tau_0} = j\big) \right],\quad i \in \mathbb{M}_1,\, j \in \mathbb{M}_0.\end{align}

Furthermore, let $\widehat{{\boldsymbol{G}}}(z)$ , $z \in [0,1]$ , denote an $M_1 \times M_1$ matrix such that

\begin{alignat*}{2}\big(\widehat{{\boldsymbol{G}}}(z)\big)_{i,j} = \mathbb{E}_{(2,i)}\!\left [z^{\tau_1}\mathbb{1}\big(J_{\tau_1} = j\big)\right],\quad i,j \in \mathbb{M}_1.\end{alignat*}

Note that the infinite-level M/G/1-type Markov chain has level homogeneity of transitions except for level zero. Therefore, for all $k\in\mathbb{N}$ ,

(3.15) \begin{align}\big(\widehat{{\boldsymbol{G}}}(z)\big)_{i,j} = \mathbb{E}_{(k+1,i)}\!\left [z^{\tau_k}\mathbb{1}\big(J_{\tau_k} = j\big)\right],\quad i,j \in \mathbb{M}_1,\end{align}

and $\widehat{{\boldsymbol{G}}}(1)$ is equal to the G-matrix ${\boldsymbol{G}}$ . In addition, it follows from (3.13)–(3.15) that

(3.16) \begin{eqnarray}{\boldsymbol{u}}(k) = \left. {\textrm{d} \over \textrm{d} z} \big[ \widehat{{\boldsymbol{G}}}(z) \big]^{k-1}\widehat{{\boldsymbol{G}}}_1(z) \right|_{z=1} {\boldsymbol{e}},\quad k \in \mathbb{N}.\end{eqnarray}

The following lemma shows that the mean first passage time to level zero from level k has its dominant term linear in k.

Lemma 3.5. Suppose that Assumption 2.1 holds. We then have

(3.17a) (3.17b) \begin{align*}
\left\{
\begin{array}{l}
{\boldsymbol{u}}(0) = {\boldsymbol{e}} + \displaystyle\sum_{m=1}^{\infty} {\boldsymbol{B}}(m)({\boldsymbol{I}} - {\boldsymbol{G}}^m)({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}} + \displaystyle\sum_{m=1}^{\infty} \dfrac{m{\boldsymbol{B}}(m)}{-\sigma}{\boldsymbol{e}},\\[10pt]
{\boldsymbol{u}}(k) = \big({\boldsymbol{I}} - {\boldsymbol{G}}^k\big)({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}} + \dfrac{k}{-\sigma}{\boldsymbol{e}},\quad k\in\mathbb{N},
\end{array}
\right.
\end{align*}

and thus

(3.18) \begin{align}\lim_{k\to\infty}{{\boldsymbol{u}}(k) \over k}= {1 \over -\sigma}{\boldsymbol{e}}.\end{align}

Proof. First, we prove (3.17b), which leads directly to the proof of (3.18). The matrix generating function $\widehat{{\boldsymbol{G}}}(z)$ is the minimal nonnegative solution to the matrix equation $\widehat{\mathscr{G}}(z)= z\sum_{m=-1}^{\infty}{\boldsymbol{A}}(m) \big\{ \widehat{\mathscr{G}}(z) \big\}^{m+1}$ (see [Reference Neuts36, Theorems 2.2.1 and 2.2.2]), and thus

(3.19) \begin{equation}\widehat{{\boldsymbol{G}}}(z)= z \sum_{m=-1}^{\infty}{\boldsymbol{A}}(m) \big[\widehat{{\boldsymbol{G}}}(z) \big]^{m+1},\end{equation}

which is equivalent to [Reference Neuts36, Equation (2.2.9)] with $s=0$ . Equation (3.19) is rewritten as

(3.20) \begin{eqnarray}\widehat{{\boldsymbol{G}}}(z) = \left[{\boldsymbol{I}} - z\sum_{m=0}^{\infty}{\boldsymbol{A}}(m) \big\{ \widehat{{\boldsymbol{G}}}(z) \big\}^m\right]^{-1} z{\boldsymbol{A}}({-}1).\end{eqnarray}

Similarly, we have

(3.21) \begin{eqnarray}\widehat{{\boldsymbol{G}}}_1(z) = \left[{\boldsymbol{I}} - z\sum_{m=0}^{\infty}{\boldsymbol{A}}(m) \big\{ \widehat{{\boldsymbol{G}}}(z) \big\}^m\right]^{-1} z{\boldsymbol{B}}({-}1),\end{eqnarray}

which is equivalent to [Reference Neuts36, Equation (2.4.3)] with $s=0$ . Combining (3.20), (3.21), and (2.3) yields

(3.22) \begin{equation}\widehat{{\boldsymbol{G}}}_1(z){\boldsymbol{e}}= \widehat{{\boldsymbol{G}}}(z){\boldsymbol{e}}.\end{equation}

Substituting (3.22) into (3.16) and using $\widehat{{\boldsymbol{G}}}(1){\boldsymbol{e}}={\boldsymbol{G}}{\boldsymbol{e}}={\boldsymbol{e}}$ , we obtain

(3.23) \begin{align}{\boldsymbol{u}}(k)&= \left. {\textrm{d} \over \textrm{d} z} \big[ \widehat{{\boldsymbol{G}}}(z) \big]^k \right|_{z=1} {\boldsymbol{e}}= \sum_{n=0}^{k-1} {\boldsymbol{G}}^n \left. {\textrm{d} \over \textrm{d} z} \widehat{{\boldsymbol{G}}}(z) \right|_{z=1} {\boldsymbol{e}} ,\quad k \in \mathbb{N}.\end{align}

Note here (see [Reference Neuts36, Equations (3.1.3), (3.1.12), and (3.1.14)]) that

(3.24) \begin{align}\left. {\textrm{d} \over \textrm{d} z} \widehat{{\boldsymbol{G}}}(z) \right|_{z=1} {\boldsymbol{e}} & = ({\boldsymbol{I}} - {\boldsymbol{G}} + {\boldsymbol{e}}{\boldsymbol{g}})({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}}\nonumber\\& = ({\boldsymbol{I}} - {\boldsymbol{G}} )({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}}+ {1 \over -\sigma}{\boldsymbol{e}},\end{align}

where the second equality is due to the fact that ${\boldsymbol{g}}({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}})^{-1}=\boldsymbol{\varpi}/({-}\sigma)$ (see [Reference Neuts36, Equation (3.1.15)]). Inserting (3.24) into (3.23) results in

\begin{align*}{\boldsymbol{u}}(k) & = \sum_{n=0}^{k-1} {\boldsymbol{G}}^n({\boldsymbol{I}} - {\boldsymbol{G}})({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}}+ {k \over -\sigma}{\boldsymbol{e}}\nonumber\\& = \big({\boldsymbol{I}} - {\boldsymbol{G}}^k\big)({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1}{\boldsymbol{e}}+ {k \over -\sigma}{\boldsymbol{e}},\quad k \in \mathbb{N},\end{align*}

which shows that (3.17b) holds.

Next, we prove (3.17a). Let $\widehat{{\boldsymbol{K}}}(z)$ , $z \in [0,1]$ , denote an $M_0 \times M_0$ matrix such that

\begin{alignat*}{2}\big(\widehat{{\boldsymbol{K}}}(z)\big)_{i,j}&= \mathbb{E}_{(0,i)} \!\left [z^{\tau_0}\mathbb{1}\big(J_{\tau_0} = j\big) \right],\quad i,j\in\mathbb{M}_0.\end{alignat*}

From (3.13), we then have

(3.25) \begin{eqnarray}{\boldsymbol{u}}(0) = \left. {\textrm{d} \over \textrm{d} z} \widehat{{\boldsymbol{K}}}(z) \right|_{z=1} {\boldsymbol{e}}.\end{eqnarray}

We also have

(3.26) \begin{eqnarray}\widehat{{\boldsymbol{K}}}(z) = z{\boldsymbol{B}}(0)+ z\sum_{m=1}^{\infty}{\boldsymbol{B}}(m) \big[ \widehat{{\boldsymbol{G}}}(z) \big]^{m-1} \widehat{{\boldsymbol{G}}}_1(z),\end{eqnarray}

which is equivalent to [Reference Neuts36, Equation (2.4.8)] with $s=0$ . Combining (3.26) and (3.22) yields

(3.27) \begin{align}\widehat{{\boldsymbol{K}}}(z){\boldsymbol{e}}&= z\sum_{m=0}^{\infty}{\boldsymbol{B}}(m) \big[ \widehat{{\boldsymbol{G}}}(z) \big]^m {\boldsymbol{e}}.\end{align}

Substituting (3.27) into (3.25) and using (3.23) and $\sum_{m=0}^{\infty}{\boldsymbol{B}}(m){\boldsymbol{e}} = {\boldsymbol{e}}$ , we obtain

(3.28) \begin{align}{\boldsymbol{u}}(0)&= \sum_{m=0}^{\infty}{\boldsymbol{B}}(m){\boldsymbol{e}}+ \sum_{m=1}^{\infty}{\boldsymbol{B}}(m)\left. {\textrm{d} \over \textrm{d} z} \big[ \widehat{{\boldsymbol{G}}}(z) \big]^m \right|_{z=1} {\boldsymbol{e}}\nonumber\\&= {\boldsymbol{e}} + \sum_{m=1}^{\infty}{\boldsymbol{B}}(m) {\boldsymbol{u}}(m).\end{align}

Finally, inserting (3.17b) into (3.28) leads to (3.17a). The proof is complete.
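Formula (3.17b) is directly computable once ${\boldsymbol{G}}$, ${\boldsymbol{g}}$, ${\boldsymbol{A}}$, $\overline{{\boldsymbol{m}}}_{A}$, and $\sigma$ are available, and it can be used to observe the linear growth (3.18). A minimal Python sketch (the function name is ours; the inputs are assumed to satisfy Assumption 2.1, so that ${\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}}$ is nonsingular):

import numpy as np

def mean_passage_to_zero(G, g, A, m_A, sigma, k):
    # u(k), k >= 1, via (3.17b): u(k) = (I - G^k)(I - A - m_A g)^{-1} e + (k / -sigma) e.
    n = A.shape[0]
    e = np.ones(n)
    base = np.linalg.solve(np.eye(n) - A - np.outer(m_A, g), e)
    return (np.eye(n) - np.linalg.matrix_power(G, k)) @ base + (k / -sigma) * e

# Illustration of (3.18): u(k) / k approaches e / (-sigma) as k grows, e.g.
# for k in (1, 10, 100, 1000): print(k, mean_passage_to_zero(G, g, A, m_A, sigma, k) / k)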

Lemma 3.5, together with Theorem 3.4, yields Theorem 3.6 below, which ensures that the second moment condition (Assumption 3.1) is equivalent to the finiteness of the mean first passage time to level zero in steady state.

Theorem 3.6. Suppose that Assumption 2.1 is satisfied. Assumption 3.1 holds if and only if

(3.29) \begin{equation}\sum_{k=0}^{\infty} \boldsymbol{\pi}(k) {\boldsymbol{u}}(k) < \infty.\end{equation}

Proof. The constant $-\sigma$ in (3.18) is positive and finite. Indeed, it follows from (2.4) and Assumption 2.1 that

\begin{align*}0< -\sigma&= - \boldsymbol{\varpi} \sum_{k=-1}^{\infty} k{\boldsymbol{A}}(k){\boldsymbol{e}}\le \boldsymbol{\varpi} {\boldsymbol{A}}({-}1){\boldsymbol{e}}\le \boldsymbol{\varpi} {\boldsymbol{A}}{\boldsymbol{e}} = \boldsymbol{\varpi}{\boldsymbol{e}} = 1.\end{align*}

Therefore, (3.17) implies that (3.29) is equivalent to (3.11). Equation (3.11) is equivalent to Assumption 3.1 owing to Theorem 3.4(i). The proof is complete.

4. Fundamental deviation matrix of the infinite-level chain

This section has two subsections. Section 4.1 defines the fundamental deviation matrix of the infinite-level chain as a block-decomposition-friendly solution to the Poisson equation

(4.1) \begin{align}({\boldsymbol{I}} - {\boldsymbol{P}}) {\boldsymbol{X}} &= {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi},\end{align}

which is also solved by the deviation matrix

(4.2) \begin{align}{\boldsymbol{D}}&\,:\!=\, \sum_{k=0}^{\infty} \big( {\boldsymbol{P}}^k - {\boldsymbol{e}}\boldsymbol{\pi} \big),\end{align}

if it exists (see [Reference Coolen-Schrijner and van Doorn5] and [Reference Dendievel, Latouche and Liu7, Lemma 2.7]). The deviation matrix ${\boldsymbol{D}}$ satisfies the constraint

(4.3) \begin{align}\boldsymbol{\pi}{\boldsymbol{X}} &={\textbf{0}},\end{align}

while the fundamental deviation matrix satisfies another constraint (shown later). Furthermore, provided that the infinite-level chain is irreducible and positive recurrent, the fundamental deviation matrix always exists, whereas the deviation matrix ${\boldsymbol{D}}$ does not necessarily exist. In this sense, the fundamental deviation matrix is more fundamental than the deviation matrix ${\boldsymbol{D}}$ . Section 4.2 presents a closed block-decomposition form of the fundamental deviation matrix, which contributes to the analysis in the next section.

4.1. A block-decomposition-friendly solution to the Poisson equation

In this subsection, after some preparations, we define the fundamental deviation matrix. We then show that the fundamental deviation matrix is a solution to the Poisson equation (4.1). The fundamental deviation matrix is parameterized by a finite base set and is suitable for block decomposition when the base set is chosen appropriately according to the block structure of ${\boldsymbol{P}}$ . We also establish upper bounds for the fundamental deviation matrix and for any solution to the Poisson equation. Finally, we show a relationship between the fundamental deviation matrix and the deviation matrix.

We make some preparations to describe the fundamental deviation matrix. Let $\mathbb{A}$ denote an arbitrary finite subset of $\mathbb{S}$ , let $\mathbb{B} = \mathbb{S} \setminus \mathbb{A}$ , and partition ${\boldsymbol{P}}$ as

\begin{align*}
{\boldsymbol{P}}
=
\begin{pmatrix}
{\boldsymbol{P}}_{\mathbb{A}} & {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}} \\
{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}} & {\boldsymbol{P}}_{\mathbb{B}}
\end{pmatrix},
\end{align*}

where the first block row and block column are indexed by the states in $\mathbb{A}$ and the second by the states in $\mathbb{B}$ .

Let $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ denote

\begin{align*}\widetilde{{\boldsymbol{P}}}_{\mathbb{A}} = {\boldsymbol{P}}_{\mathbb{A}} + {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}},\end{align*}

which is considered to be the transition probability matrix of a Markov chain obtained by observing the infinite-level chain $\{(X_n,J_n)\}$ when it is in $\mathbb{A}$ (see [Reference Zhao39, Definition 1 and Theorem 2]). Since ${\boldsymbol{P}}$ is an irreducible and positive recurrent stochastic matrix, so is $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ . Therefore, $\widetilde{{\boldsymbol{P}}}_{\mathbb{A}}$ has a unique stationary distribution vector, denoted by $\widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \,:\!=\,(\widetilde{\pi}_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{A}}$ , which is given by

\begin{align*}\widetilde{\pi}_{\mathbb{A}}(k,i)= {\pi(k,i) \over \sum_{(\ell,j) \in \mathbb{A}}\pi(\ell,j)},\quad (k,i) \in \mathbb{A}.\end{align*}

Finally, let $T(\mathbb{A}) = \inf\{n \in \mathbb{N}\,:\, (X_n,J_n) \in \mathbb{A}\}$ and $u_{\mathbb{A}}(k,i) = \mathbb{E}_{(k,i)}[T(\mathbb{A})]$ for $(k,i) \in \mathbb{S}$ .

We define the fundamental deviation matrix of the infinite-level chain. Fix $\boldsymbol{\alpha}\,:\!=\,(\alpha_1,\alpha_2) \in \mathbb{S}$ arbitrarily, and let ${\boldsymbol{H}}_{\mathbb{A}}\,:\!=\,(H_{\mathbb{A}}(k,i;\,\ell,j))_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ denote a matrix such that

(4.4) \begin{align}H_{\mathbb{A}}(k,i;\,\ell,j)&=\mathbb{E}_{(k,i)}\!\!\left[\sum_{n=0}^{\tau_{\boldsymbol{\alpha}}-1} \overline{1}_{(\ell,j)}(X_n, J_n)\right]\nonumber\\&\qquad {} - \sum_{(k^{\prime},i^{\prime}) \in \mathbb{A}}\widetilde{\pi}_{\mathbb{A}}(k^{\prime},i^{\prime}) \mathbb{E}_{(k^{\prime},i^{\prime})}\!\!\left[\sum_{n=0}^{\tau_{\boldsymbol{\alpha}}-1} \overline{1}_{(\ell,j)}(X_n, J_n)\right],\end{align}

where

\begin{align*}\overline{1}_{(\ell,j)}(k,i)= 1_{(\ell,j)}(k,i) - \pi(\ell,j),\quad (k,i;\,\ell,j) \in \mathbb{S}^2.\end{align*}

Note here that $\{(X_n,J_n)\}$ is irreducible and positive recurrent. It then follows that

\begin{align*}\left|\mathbb{E}_{(k,i)}\!\!\left[\sum_{n=0}^{\tau_{\boldsymbol{\alpha}}-1} \overline{1}_{(\ell,j)}(X_n, J_n)\right]\right|&\le2\mathbb{E}_{(k,i)}[ \tau_{\boldsymbol{\alpha}}] < \infty\quad \text{for all}\ (k,i)\ \in\ \mathbb{S}\ \text{and}\ \boldsymbol{\alpha} \in \mathbb{S}.\end{align*}

Note also that $\mathbb{A}$ is a finite subset of $\mathbb{S}$ . Therefore, the matrix ${\boldsymbol{H}}_{\mathbb{A}}$ is well-defined. We refer to this ${\boldsymbol{H}}_{\mathbb{A}}$ as the fundamental deviation matrix with finite base set $\mathbb{A}$ or as the fundamental deviation matrix for short.

The following theorem shows that the fundamental deviation matrix ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1) and has a closed block-decomposition form.

Theorem 4.1. Suppose that Assumption 2.1 holds, and fix $\boldsymbol{\alpha}=(\alpha_1,\alpha_2) \in \mathbb{S}$ arbitrarily. For any finite $\mathbb{A} \subset \mathbb{S}$ , ${\boldsymbol{H}}_{\mathbb{A}}$ is independent of $\boldsymbol{\alpha}$ and is the unique solution to the Poisson equation (4.1) with the constraint

(4.5) \begin{align}\big( \widetilde{\boldsymbol{\pi}}_{\mathbb{A}},\,{\textbf{0}} \big){\boldsymbol{X}} &={\textbf{0}}.\end{align}

Furthermore, partition ${\boldsymbol{H}}_{\mathbb{A}}$ as

(4.6) \begin{align}
{\boldsymbol{H}}_{\mathbb{A}}
=
\begin{pmatrix}
{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) \\
{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})
\end{pmatrix},
\end{align}
where ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ and ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})$ consist of the rows of ${\boldsymbol{H}}_{\mathbb{A}}$ indexed by $\mathbb{A}$ and $\mathbb{B}$ , respectively.

We then have

(4.7a) \begin{align}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})&= \big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\big)^{-1}\left[({\boldsymbol{I}},\, {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1})- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A})\boldsymbol{\pi}\right],\end{align}
(4.7b) \begin{align}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})&=\big({\boldsymbol{O}},\, ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{B})\boldsymbol{\pi}+ ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}),\end{align}

respectively, where ${\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A}) = (u_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{A}}$ and ${\boldsymbol{u}}_{\mathbb{A}}(\mathbb{B}) = (u_{\mathbb{A}}(k,i))_{(k,i) \in \mathbb{B}}$ .

Proof. See Appendix B.

Remark 4.2. Equation (4.7a) is a closed form for ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ , and thus substituting (4.7a) into (4.7b) yields a closed form for ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})$ . Furthermore, if $\mathbb{A} = \mathbb{L}_0$ , then these block-decomposition forms are expressed with probabilistically interpretable matrices (see Corollary 4.7).

Remark 4.3. Theorem 4.1 extends immediately to the general class of Markov chains with a countable state space, since the proof of this theorem does not depend on the M/G/1-type structure. Related results can be found in [Reference Liu, Liu and Zhao19, Theorem 3.2].
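Since Theorem 4.1 does not rely on the M/G/1-type structure (Remark 4.3), its formulas can be checked on any small irreducible, positive recurrent chain. The following Python sketch uses a hypothetical four-state chain with base set $\mathbb{A} = \{0,1\}$, builds ${\boldsymbol{H}}_{\mathbb{A}}$ via (4.7), and verifies the Poisson equation (4.1) and the constraint (4.5).

import numpy as np

# Hypothetical 4-state irreducible, positive recurrent chain; base set AA = {0, 1}.
P = np.array([[0.2, 0.3, 0.5, 0.0],
              [0.1, 0.2, 0.3, 0.4],
              [0.4, 0.1, 0.2, 0.3],
              [0.3, 0.3, 0.2, 0.2]])
AA, BB = [0, 1], [2, 3]
PA, PAB = P[np.ix_(AA, AA)], P[np.ix_(AA, BB)]
PBA, PB = P[np.ix_(BB, AA)], P[np.ix_(BB, BB)]

# Stationary vector of P and its restriction to AA, renormalized.
pi = np.linalg.lstsq(np.vstack([(np.eye(4) - P).T, np.ones(4)]),
                     np.r_[np.zeros(4), 1.0], rcond=None)[0]
pi_A = pi[AA] / pi[AA].sum()

NB = np.linalg.inv(np.eye(2) - PB)            # (I - P_B)^{-1}
P_cens = PA + PAB @ NB @ PBA                  # censored chain \tilde{P}_A on AA
uB = NB @ np.ones(2)                          # u_A restricted to BB
uA = np.ones(2) + PAB @ uB                    # u_A restricted to AA

# (4.7a) and (4.7b); columns are ordered as (AA, BB), matching pi.
HA_A = np.linalg.inv(np.eye(2) - P_cens + np.outer(np.ones(2), pi_A)) @ \
       (np.hstack([np.eye(2), PAB @ NB]) - np.outer(uA, pi))
HA_B = np.hstack([np.zeros((2, 2)), NB]) - np.outer(uB, pi) + NB @ PBA @ HA_A
H = np.vstack([HA_A, HA_B])

# Poisson equation (4.1) and constraint (4.5).
print(np.allclose((np.eye(4) - P) @ H, np.eye(4) - np.outer(np.ones(4), pi)))  # True
print(np.allclose(np.concatenate([pi_A, np.zeros(2)]) @ H, 0))                 # True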

The following lemma gives upper bounds for $|{\boldsymbol{H}}_{\mathbb{A}}|{\boldsymbol{e}}$ and for any solution to the Poisson equation (4.1), via a drift condition different from the one presented in Lemma 3.3.

Lemma 4.4. Suppose that Assumption 2.1 is satisfied. Let ${\boldsymbol{v}}^{\prime}\,:\!=\,(v^{\prime}(k,i))_{(k,i) \in \mathbb{S}}$ denote a column vector such that

(4.8) \begin{equation}{\boldsymbol{v}}^{\prime}(k)\,:\!=\, (v^{\prime}(k,i))_{i\in\mathbb{M}_{k\wedge1}}=\left\{\begin{array}{l@{\quad}l}\displaystyle{1 \over -\sigma}{\boldsymbol{e}}, & k=0,\\\rule{0mm}{6mm}\displaystyle{1 \over -\sigma}(k{\boldsymbol{e}} + {\boldsymbol{a}}), & k \in\mathbb{N},\end{array}\right.\end{equation}

where ${\boldsymbol{a}} \ge {\textbf{0}}$ is given in (3.3). The following statements then hold:

  1. (i) There exist some $b^{\prime} \in (0,\infty)$ and $K^{\prime} \in \mathbb{N}$ such that

    (4.9) \begin{align}{\boldsymbol{P}}^{(N)}{\boldsymbol{v}}^{\prime} \le {\boldsymbol{P}}{\boldsymbol{v}}^{\prime} &\le {\boldsymbol{v}}^{\prime} - {\boldsymbol{e}} + b^{\prime} \textbf{1}_{\mathbb{L}_{\le K^{\prime}}}& \quad &{for\ all\ N\ \in\ \mathbb{N}}.\end{align}
  2. (ii) Any solution ${\boldsymbol{X}}$ to the Poisson equation (4.1) satisfies $|{\boldsymbol{X}}| \le C_0{\boldsymbol{v}}^{\prime}{\boldsymbol{e}}^{\top}$ for some $C_0 > 0$ . Furthermore, $\boldsymbol{\pi}|{\boldsymbol{X}}| < \infty$ under Assumption 3.1.

  3. (iii) For each finite $\mathbb{A} \subset \mathbb{S}$ , there exists some $C_{\mathbb{A}} > 0$ such that $|{\boldsymbol{H}}_{\mathbb{A}}|{\boldsymbol{e}} \le C_{\mathbb{A}}{\boldsymbol{v}}^{\prime}$ .

Proof. See Appendix C.

The following proposition shows that the deviation matrix ${\boldsymbol{D}}$ does not necessarily exist even if ${\boldsymbol{H}}_{\mathbb{A}}$ does, and that ${\boldsymbol{D}}$ (if it exists) is expressed by ${\boldsymbol{H}}_{\mathbb{A}}$ .

Proposition 4.5. Suppose that Assumption 2.1 is satisfied and ${\boldsymbol{P}}$ is aperiodic.

  1. (i) Assumption 3.1 holds if and only if ${\boldsymbol{D}}$ exists.

  2. (ii) If ${\boldsymbol{D}}$ exists, then it is the unique solution to the Poisson equation (4.1) with the constraint (4.3) such that $\boldsymbol{\pi} |{\boldsymbol{X}}| < \infty$ , and furthermore,

    (4.10) \begin{align}{\boldsymbol{D}} = ({\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}){\boldsymbol{H}}_{\mathbb{A}}\quad {for\ any\ finite\ \mathbb{A}\ \subset\ \mathbb{S}}.\end{align}

Proof. See Appendix D.

Remark 4.6. A similar and general version of Proposition 4.5 is presented in [Reference Liu, Liu and Zhao19, Corollary 3.1].

We note that the deviation matrix has been used for perturbation analysis of block-structured Markov chains (see [Reference Dendievel, Latouche and Liu7, Reference Liu, Wang and Xie20, Reference Liu, Wang and Zhao21]). However, the deviation matrix requires the aperiodicity of the transition probability matrix, and such aperiodicity is not necessary for the analysis in this paper. Therefore, we use the fundamental deviation matrix ${\boldsymbol{H}}_{\mathbb{A}}$ instead of the deviation matrix ${\boldsymbol{D}}$ .
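Relation (4.10) can likewise be checked numerically: for the four-state chain in the sketch after Remark 4.3 (which is aperiodic), the series (4.2) converges and may be truncated. A minimal Python sketch (the function name and the truncation point are ours):

import numpy as np

def deviation_matrix(P, pi, n_terms=5000):
    # Truncated series D ~= sum_{k=0}^{n_terms-1} (P^k - e pi), cf. (4.2);
    # valid here because the finite chain is irreducible, positive recurrent, and aperiodic.
    n = P.shape[0]
    E = np.outer(np.ones(n), pi)
    D, Pk = np.zeros((n, n)), np.eye(n)
    for _ in range(n_terms):
        D += Pk - E
        Pk = Pk @ P
    return D

# With P, pi, and H from the previous sketch, (4.10) reads D = (I - e pi) H:
# np.allclose(deviation_matrix(P, pi), (np.eye(4) - np.outer(np.ones(4), pi)) @ H)  # True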

4.2. Closed block-decomposition form of the fundamental deviation matrix

Corollary 4.7 (which follows from Theorem 4.1) shows that the fundamental deviation matrix with the base set $\mathbb{L}_0$ , i.e., ${\boldsymbol{H}}_{\mathbb{L}_0}=(H_{\mathbb{L}_0}(k,i;\,\ell,j))_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ , can be expressed in a block-decomposition form with probabilistically interpretable matrices. For simplicity, we omit the subscript ‘ $\mathbb{L}_0$ ’ of ${\boldsymbol{H}}_{\mathbb{L}_0}$ (and thus its element $H_{\mathbb{L}_0}(k,i;\,\ell,j)$ ), since we do not consider any base set other than $\mathbb{L}_0$ in the following.

Corollary 4.7. Suppose that Assumption 2.1 holds. Let

\begin{align*}{\boldsymbol{H}}(k;\,\ell) =(H(k,i;\,\ell,j))_{(k,i;\,\ell,j) \in \mathbb{M}_{k\wedge1} \times \mathbb{M}_{\ell\wedge1}},\quad k,\ell \in \mathbb{Z}_+,\end{align*}

which is the $(k,\ell)$ -block of ${\boldsymbol{H}}\,:\!=\,{\boldsymbol{H}}_{\mathbb{L}_0}$ . We then have, for $\ell \in \mathbb{Z}_+$ ,

(4.11a) \begin{align}{\boldsymbol{H}}(0;\,\ell)&=({\boldsymbol{I}} - {\boldsymbol{K}} + {\boldsymbol{e}}\boldsymbol{\kappa})^{-1}({\boldsymbol{I}} - {\boldsymbol{u}}(0) \boldsymbol{\pi}(0)){\boldsymbol{F}}_+(0;\,\ell),\end{align}
(4.11b) \begin{align}{\boldsymbol{H}}(k;\,\ell)&= (1 - \delta_{0,\ell}) {\boldsymbol{F}}_{+}(k;\,\ell) - {\boldsymbol{u}}(k) \boldsymbol{\pi}(\ell) + {\boldsymbol{G}}^{k-1} ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1} {\boldsymbol{B}}({-}1){\boldsymbol{H}}(0;\,\ell),\quad k \in \mathbb{N},\end{align}

where $\boldsymbol{\kappa}$ denotes the unique stationary distribution vector of ${\boldsymbol{K}}\,:\!=\,\widehat{{\boldsymbol{K}}}(1)$ , and where ${\boldsymbol{F}}_+(k;\,\ell)$ , $k,\ell\in\mathbb{Z}_+$ , denotes an $\mathbb{M}_{k \wedge 1} \times \mathbb{M}_{\ell \wedge 1}$ matrix such that

(4.12) \begin{equation}({\boldsymbol{F}}_+(k;\,\ell))_{i,j}=\mathbb{E}_{(k,i)}\!\!\left[\sum_{n=0}^{\tau_0-1} 1_{(\ell,j)}(X_n, J_n)\right],\quad k,\ell\in\mathbb{Z}_+,\end{equation}

and thus, for all $k,\ell \in \mathbb{Z}_+$ such that $k \wedge \ell = 0$ ,

(4.13a) (4.13b) (4.13c) \begin{align*}
{\boldsymbol{F}}_+(k;\,\ell)=
\left\{
\begin{array}{l@{\quad}l}
{\boldsymbol{I}}, & k = 0,\ \ell = 0,\\[4pt]
\displaystyle\sum_{m=1}^{\infty} {\boldsymbol{B}}(m){\boldsymbol{F}}_{+}(m;\,\ell), & k=0,\ \ell \in \mathbb{N},\\[16pt]
{\boldsymbol{O}}, & k \in \mathbb{N},\ \ell=0.
\end{array}
\right.
\end{align*}

Proof. We first prove (4.11a). By definition, ${\boldsymbol{K}} = \widetilde{{\boldsymbol{P}}}_{\mathbb{L}_0}$ , and thus $\boldsymbol{\kappa}$ is the stationary probability vector of $\widetilde{{\boldsymbol{P}}}_{\mathbb{L}_0}$ . Therefore, letting $\mathbb{A} = \mathbb{L}_0$ in (4.7a), we have

\begin{align*}&({\boldsymbol{H}}(0;\,0),{\boldsymbol{H}}(0;\,1),{\boldsymbol{H}}(0;\,2),{\dots})\nonumber\\&\quad = ({\boldsymbol{I}} - {\boldsymbol{K}} + {\boldsymbol{e}}\boldsymbol{\kappa})^{-1}\left[({\boldsymbol{I}}, {\boldsymbol{F}}_+(0;\,1), {\boldsymbol{F}}_+(0;\,2),{\dots}) - {\boldsymbol{u}}(0) (\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),\boldsymbol{\pi}(2),{\dots})\right].\end{align*}

Combining this equation with (4.13a) leads to

(4.14) \begin{align}{\boldsymbol{H}}(0;\,\ell)&= ({\boldsymbol{I}} - {\boldsymbol{K}} + {\boldsymbol{e}}\boldsymbol{\kappa})^{-1}[{\boldsymbol{F}}_+(0;\,\ell) - {\boldsymbol{u}}(0)\boldsymbol{\pi}(\ell)],\qquad \ell \in \mathbb{Z}_+.\end{align}

It also follows from (4.12) and [Reference Meyn and Tweedie34, Theorem 10.0.1] that

(4.15) \begin{align}\boldsymbol{\pi}(\ell)= \boldsymbol{\pi}(0){\boldsymbol{F}}_+(0;\,\ell),\qquad \ell \in \mathbb{Z}_+.\end{align}

Substituting (4.15) into (4.14) yields (4.11a).

Next, we prove (4.11b). Letting $\mathbb{A} = \mathbb{L}_0$ in (4.7b) yields

\begin{align*}&({\boldsymbol{H}}(k;\,0),{\boldsymbol{H}}(k;\,1),{\boldsymbol{H}}(k;\,2),{\dots})\nonumber\\&\quad = ({\boldsymbol{O}}, {\boldsymbol{F}}_+(k;\,1), {\boldsymbol{F}}_+(k;\,2),{\dots})- {\boldsymbol{u}}(k) (\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),\boldsymbol{\pi}(2),{\dots})\nonumber\\&\qquad + {\boldsymbol{F}}_+(k;\,1){\boldsymbol{B}}({-}1)({\boldsymbol{H}}(0;\,0),{\boldsymbol{H}}(0;\,1),{\boldsymbol{H}}(0;\,2),{\dots}),\quad k \in \mathbb{N}.\end{align*}

This equation together with (4.13c) leads to

(4.16) \begin{align}{\boldsymbol{H}}(k;\,\ell) &= {\boldsymbol{F}}_+(k;\,\ell) - {\boldsymbol{u}}(k)\boldsymbol{\pi}(\ell)+ {\boldsymbol{F}}_+(k;\,1){\boldsymbol{B}}({-}1){\boldsymbol{H}}(0;\,\ell),\quad k \in \mathbb{N},\,\ell \in \mathbb{Z}_+.\end{align}

Note here (see [Reference Zhao, Li and Braun40, Theorem 9]) that

(4.17) \begin{equation}{\boldsymbol{F}}_{+}(k;\,\ell)= {\boldsymbol{G}}^{k-\ell}{\boldsymbol{F}}_{+}(\ell;\,\ell),\quad k \in \mathbb{Z}_{\ge \ell},\,\ell \in\mathbb{N},\end{equation}

and thus

(4.18) \begin{equation}{\boldsymbol{F}}_{+}(k;\,1)= {\boldsymbol{G}}^{k-1}{\boldsymbol{F}}_{+}(1;\,1)= {\boldsymbol{G}}^{k-1}({\boldsymbol{I}} - \boldsymbol{\Phi}(0))^{-1},\quad k \in \mathbb{N},\end{equation}

where $\boldsymbol{\Phi}(0)$ is given in (2.7). Note also that ${\boldsymbol{F}}_{+}(k;\,0) = {\boldsymbol{O}}$ for $k\in\mathbb{N}$ , which is shown in (4.13c). To emphasize this exceptional case, we write

(4.19) \begin{align}{\boldsymbol{F}}_{+}(k;\,\ell) = (1 - \delta_{0,\ell}){\boldsymbol{F}}_{+}(k;\,\ell),\quad k \in \mathbb{N},\,\ell\in\mathbb{Z}_+.\end{align}

Substituting (4.18) and (4.19) into (4.16) results in (4.11b). The proof is complete.
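
The key ingredient (4.15) is the classical regenerative identity relating a stationary distribution to expected visit counts before a return time. The following sketch checks its simplest analogue, with a single reference state playing the role of level zero and a hypothetical transition matrix; the visit counts are obtained from a taboo transition matrix, mirroring the definition (4.12) of ${\boldsymbol{F}}_+$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=1, keepdims=True)          # hypothetical irreducible transition matrix

w, v = np.linalg.eig(P.T)                  # stationary distribution pi
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi /= pi.sum()

# Expected number of visits to each state before returning to the reference state a = 0,
# starting from a (the single-state analogue of F_+(0; l) in (4.12)).
a = 0
B = np.arange(1, n)                        # all states except the reference state
PB = P[np.ix_(B, B)]                       # taboo transition matrix
visits_B = P[a, B] @ np.linalg.inv(np.eye(n - 1) - PB)
visits = np.concatenate(([1.0], visits_B)) # the reference state itself is visited once (at time 0)

# Analogue of (4.15): pi(j) = pi(a) * E_a[number of visits to j before tau_a].
print(np.allclose(pi, pi[a] * visits))     # True
```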

Remark 4.8. Let $\widetilde{{\boldsymbol{H}}}(k;\,\ell)=\big(\widetilde{H}(k,i;\,\ell,j)\big)_{(i,j) \in \mathbb{M}_{k\wedge1} \times \mathbb{M}_{\ell\wedge1}}$ for $k,\ell \in \mathbb{Z}_+$ . Letting $\mathbb{A} = \mathbb{L}_0$ in (B.6) and following the proof of Corollary 4.7, we obtain equations similar to (4.11a) and (4.11b): for $\ell \in \mathbb{Z}_+$ ,

\begin{alignat*}{2}({\boldsymbol{I}} - {\boldsymbol{K}})\widetilde{{\boldsymbol{H}}}(0;\,\ell)&=({\boldsymbol{I}} - {\boldsymbol{u}}(0) \boldsymbol{\pi}(0)){\boldsymbol{F}}_+(0;\,\ell),\\\widetilde{{\boldsymbol{H}}}(k;\,\ell)&= (1 - \delta_{0,\ell}) {\boldsymbol{F}}_{+}(k;\,\ell) - {\boldsymbol{u}}(k) \boldsymbol{\pi}(\ell)\nonumber\\& \qquad {} + {\boldsymbol{G}}^{k-1} ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1} {\boldsymbol{B}}({-}1)\widetilde{{\boldsymbol{H}}}(0;\,\ell),& \quad k \in \mathbb{N}.\end{alignat*}

5. Subgeometric convergence in the infinite-level limit

The main purpose of this section is to prove the subgeometric convergence of the level-wise difference $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ . This section consists of three subsections. Section 5.1 presents a difference formula for $\boldsymbol{\pi}^{(N)}$ and $\boldsymbol{\pi}$ , and then Section 5.2 shows the uniform convergence of $\|\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}\|$ under Assumption 3.1. Based on these results, Section 5.3 presents a subgeometric convergence formula for $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ under an additional condition (Assumption 5.10). The subgeometric convergence formula is presented in Theorem 5.12, which is the main theorem of this paper. Section 5.4 applies the main theorem to derive a subexponential asymptotic formula for the loss probability in the MAP/GI/1/N queue.

5.1. A difference formula for the finite- and infinite-level stationary distributions

This subsection presents a difference formula for $\boldsymbol{\pi}^{(N)}$ and $\boldsymbol{\pi}$ with the fundamental deviation matrix ${\boldsymbol{H}}$ . The following two matrices are needed to describe the difference formula:

(5.1) \begin{align}\overline{\overline{{\boldsymbol{A}}}}(k)&= \sum_{\ell=k+1}^{\infty} \overline{{\boldsymbol{A}}}(\ell),\quad \overline{\overline{{\boldsymbol{B}}}}(k) = \sum_{\ell=k+1}^{\infty} \overline{{\boldsymbol{B}}}(\ell),\quad k \in \mathbb{Z}_{\ge -1},\end{align}

where $\overline{{\boldsymbol{A}}}(k)$ and $\overline{{\boldsymbol{B}}}(k)$ are given in (2.10a) and (2.10b), respectively.

Lemma 5.1. (Difference formula.) Suppose that Assumption 2.1 holds; then

(5.2) \begin{align}\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}= \boldsymbol{\pi}^{(N)} \big( {\boldsymbol{P}}^{(N)} - {\boldsymbol{P}} \big) {\boldsymbol{H}},\quad N \in \mathbb{N}.\end{align}

For $k \in \mathbb{Z}_{[0,N]}$ , we also have

(5.3) \begin{align}&\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\nonumber\\&\quad =\boldsymbol{\pi}^{(N)}(0)\left[\sum_{n=N+1}^{\infty} {\boldsymbol{B}}(n) {\boldsymbol{S}}^{(N)}(n;\,k) + {1 \over -\sigma} \overline{\overline{{\boldsymbol{B}}}}(N-1) {\boldsymbol{e}}\boldsymbol{\pi}(k)\right]\notag\\&\quad \quad + \sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\left[\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell) {\boldsymbol{S}}^{(N)}(n;\,k)+ {1 \over -\sigma} \overline{\overline{{\boldsymbol{A}}}}(N-\ell-1) {\boldsymbol{e}}\boldsymbol{\pi}(k)\right],\end{align}

where

(5.4) \begin{align}{\boldsymbol{S}}^{(N)}(n;\,k)&=\big(1 - \delta_{0,k}\big)\big({\boldsymbol{G}}^{N-k}-{\boldsymbol{G}}^{n-k}\big) {\boldsymbol{F}}_{+}(k;\,k)\nonumber\\& \quad + \big({\boldsymbol{G}}^{N-1}-{\boldsymbol{G}}^{n-1}\big)({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1} {\boldsymbol{B}}({-}1){\boldsymbol{H}}(0;\,k)\nonumber\\& \quad + \big({\boldsymbol{G}}^{N}-{\boldsymbol{G}}^{n}\big)\left( {\boldsymbol{I}}-{\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} \right)^{-1} {\boldsymbol{e}} \boldsymbol{\pi}(k),\quad n \in \mathbb{Z}_{\ge N+1},\,k \in \mathbb{Z}_{[0,N]}.\end{align}

Proof. Equation (5.2) is proved as follows. Theorem 4.1 implies that ${\boldsymbol{H}} = {\boldsymbol{H}}_{\mathbb{L}_0}$ is a solution to the Poisson equation (4.1) and thus $({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$ . Combining this equation with $\boldsymbol{\pi}^{(N)} =\boldsymbol{\pi}^{(N)}{\boldsymbol{P}}^{(N)}$ and $\boldsymbol{\pi}{\boldsymbol{e}}=1$ yields

\begin{eqnarray*}\boldsymbol{\pi}^{(N)} \big( {\boldsymbol{P}}^{(N)} - {\boldsymbol{P}} \big) {\boldsymbol{H}} = \boldsymbol{\pi}^{(N)} ( {\boldsymbol{I}} - {\boldsymbol{P}} ) {\boldsymbol{H}}= \boldsymbol{\pi}^{(N)} ( {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi} )= \boldsymbol{\pi}^{(N)} - \boldsymbol{\pi},\end{eqnarray*}

which shows that (5.2) holds.

We start the proof of (5.3) with the level-wise expression for the difference formula (5.2). From (2.1) and (2.9), the only nonzero blocks of ${\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}$ lie in the rows of levels $\mathbb{Z}_{[0,N]}$ and the columns of levels $\mathbb{Z}_{\ge N}$; more precisely, for $\ell \in \mathbb{Z}_{[0,N]}$ and $n \in \mathbb{Z}_+$, the $(\ell,n)$-block of ${\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}$ is given by

\begin{align*}\big({\boldsymbol{P}}^{(N)} - {\boldsymbol{P}}\big)(\ell;\,n)=\left\{\begin{array}{l@{\quad}l}\overline{{\boldsymbol{B}}}(N), & \ell = 0,\ n = N,\\[4pt]-{\boldsymbol{B}}(n), & \ell = 0,\ n \in \mathbb{Z}_{\ge N+1},\\[4pt]\overline{{\boldsymbol{A}}}(N-\ell), & \ell \in \mathbb{Z}_{[1,N]},\ n = N,\\[4pt]-{\boldsymbol{A}}(n-\ell), & \ell \in \mathbb{Z}_{[1,N]},\ n \in \mathbb{Z}_{\ge N+1},\\[4pt]{\boldsymbol{O}}, & \text{otherwise}.\end{array}\right.\end{align*}

With this equation, we decompose (5.2) into level-wise expressions: for $k \in \mathbb{Z}_{[0,N]}$ ,

(5.5) \begin{align}\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k) & = \left[\boldsymbol{\pi}^{(N)}(0) \overline{{\boldsymbol{B}}}(N)+ \sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\overline{{\boldsymbol{A}}}(N-\ell)\right]{\boldsymbol{H}}(N;\,k)\nonumber\\& \quad -\sum_{n=N+1}^{\infty}\left[\boldsymbol{\pi}^{(N)}(0){\boldsymbol{B}}(n) + \sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell) {\boldsymbol{A}}(n-\ell)\right] {\boldsymbol{H}}(n;\,k)\nonumber\\&= \boldsymbol{\pi}^{(N)}(0)\sum_{n=N+1}^{\infty} {\boldsymbol{B}}(n)\left[ {\boldsymbol{H}}(N;\,k) - {\boldsymbol{H}}(n;\,k) \right]\nonumber\\& \quad +\sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell)\left[ {\boldsymbol{H}}(N;\,k)-{\boldsymbol{H}}(n;\,k) \right].\qquad \end{align}

To proceed further, we rewrite the term ${\boldsymbol{H}}(N;\,k) - {\boldsymbol{H}}(n;\,k)$ in (5.5) by using ${\boldsymbol{G}}$ and related matrices (including vectors) introduced in Section 3. Combining (4.11b) and (4.17), we obtain, for $n \in \mathbb{Z}_{\ge N+1}$ and $k \in \mathbb{Z}_{[0,N]}$ ,

(5.6) \begin{align}{\boldsymbol{H}}(N;\,k)-{\boldsymbol{H}}(n;\,k)&=\big(1 - \delta_{0,k}\big)\big({\boldsymbol{G}}^{N-k}-{\boldsymbol{G}}^{n-k}\big) {\boldsymbol{F}}_{+}(k;\,k)\nonumber\\&\quad + \big({\boldsymbol{G}}^{N-1}-{\boldsymbol{G}}^{n-1}\big) ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1} {\boldsymbol{B}}({-}1){\boldsymbol{H}}(0;\,k)\nonumber\\&\quad + \left[ {\boldsymbol{u}}(n)-{\boldsymbol{u}}(N) \right] \boldsymbol{\pi}(k).\end{align}

Using (3.17b), we rewrite ${\boldsymbol{u}}(n)-{\boldsymbol{u}}(N)$ in (5.6) as

(5.7) \begin{align}{\boldsymbol{u}}(n)-{\boldsymbol{u}}(N)&= \big({\boldsymbol{G}}^{N}-{\boldsymbol{G}}^{n}\big)({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1} {\boldsymbol{e}}+ {1 \over -\sigma}(n-N){\boldsymbol{e}},\,\,\, n \in \mathbb{Z}_{\ge N+1}.\end{align}

Applying (5.7) to (5.6), we obtain, for $n \in \mathbb{Z}_{\ge N+1}$ and $k \in \mathbb{Z}_{[0,N]}$ ,

(5.8) \begin{align}{\boldsymbol{H}}(N;\,k)-{\boldsymbol{H}}(n;\,k)& = \big(1 - \delta_{0,k}\big)\big({\boldsymbol{G}}^{N-k}-{\boldsymbol{G}}^{n-k}\big) {\boldsymbol{F}}_{+}(k;\,k)\nonumber\\& \quad + \big({\boldsymbol{G}}^{N-1}-{\boldsymbol{G}}^{n-1}\big) ({\boldsymbol{I}}-\boldsymbol{\Phi}(0))^{-1} {\boldsymbol{B}}({-}1){\boldsymbol{H}}(0;\,k)\nonumber\\& \quad + \big({\boldsymbol{G}}^{N}-{\boldsymbol{G}}^{n}\big)({\boldsymbol{I}} - {\boldsymbol{A}} - \overline{{\boldsymbol{m}}}_{A}{\boldsymbol{g}} )^{-1} {\boldsymbol{e}}\boldsymbol{\pi}(k)\nonumber\\& \quad + {1 \over -\sigma} (n-N) {\boldsymbol{e}}\boldsymbol{\pi}(k)\nonumber\\& = {\boldsymbol{S}}^{(N)}(n;\,k) + {1 \over -\sigma} (n-N) {\boldsymbol{e}}\boldsymbol{\pi}(k),\end{align}

where the second equality follows from the fact that the first three terms on the right-hand side are expressed by ${\boldsymbol{S}}^{(N)}(n;\,k)$ defined in (5.4).

We are ready to prove (5.3). Substituting (5.8) into (5.5) results in the following: for $k \in \mathbb{Z}_{[0,N]}$ ,

(5.9) \begin{align}&{\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)} \nonumber\\& = \boldsymbol{\pi}^{(N)}(0)\sum_{n=N+1}^{\infty} {\boldsymbol{B}}(n)\left\{ {\boldsymbol{S}}^{(N)}(n;\,k) + {1 \over -\sigma} (n-N) {\boldsymbol{e}}\boldsymbol{\pi}(k)\right\}\nonumber\\& \quad +\sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell)\left\{ {\boldsymbol{S}}^{(N)}(n;\,k) + {1 \over -\sigma} (n-N) {\boldsymbol{e}}\boldsymbol{\pi}(k)\right\}.\qquad \end{align}

Note here that, for $\ell\in\mathbb{Z}_{[1,N]}$ ,

(5.10) \begin{align}\sum_{n=N+1}^\infty (n-N) {\boldsymbol{A}}(n-\ell)&=\sum_{m=1}^{\infty} m {\boldsymbol{A}}(m + N - \ell)=\sum_{m=1}^{\infty} \sum_{k=1}^m {\boldsymbol{A}}(m + N - \ell)\nonumber\\&= \sum_{k=1}^{\infty} \sum_{m=k}^{\infty} {\boldsymbol{A}}(m + N - \ell)= \sum_{k=1}^{\infty} \overline{{\boldsymbol{A}}}(k + N - \ell - 1)\nonumber\\&\quad = \overline{\overline{{{\boldsymbol{A}}}}}(N - \ell - 1),\end{align}

and similarly,

(5.11) \begin{align}\sum_{n=N+1}^{\infty} (n-N){\boldsymbol{B}}(n) &= \overline{\overline{{\boldsymbol{B}}}}(N-1).\end{align}

Combining (5.9)–(5.11) yields (5.3). The proof is complete.

Remark 5.2. In the same way as for (5.2), we can derive a similar difference formula for any solution ${\boldsymbol{X}}$ to the Poisson equation (4.1):

(5.12) \begin{equation}\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}= \boldsymbol{\pi}^{(N)} \big( {\boldsymbol{P}}^{(N)} - {\boldsymbol{P}} \big){\boldsymbol{X}},\qquad N \in \mathbb{N}.\end{equation}

Note that (5.12) holds for ${\boldsymbol{X}}=\widetilde{{\boldsymbol{H}}}$ defined in (B.1) and for ${\boldsymbol{X}}={\boldsymbol{D}}$ defined in (4.2) (if it exists); the latter case is presented in [Reference Heidergott11, Equation (3)].
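
The algebra behind (5.2) and (5.12) does not depend on the M/G/1-type structure, so it can be checked directly on a generic finite state space: if two stochastic matrices stand in for ${\boldsymbol{P}}$ and ${\boldsymbol{P}}^{(N)}$ and ${\boldsymbol{X}}$ is any solution of the Poisson equation for the first of them, then the difference of the two stationary distributions satisfies the same identity. The matrices below are hypothetical examples.

```python
import numpy as np

def random_stochastic(rng, n):
    P = rng.random((n, n)) + 0.1
    return P / P.sum(axis=1, keepdims=True)

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

rng = np.random.default_rng(2)
n = 4
P = random_stochastic(rng, n)     # plays the role of the infinite-level chain
Q = random_stochastic(rng, n)     # plays the role of the finite-level (perturbed) chain
pi_P, pi_Q = stationary(P), stationary(Q)

# One particular solution X of (I - P) X = I - e pi_P (here, the deviation matrix).
e_pi = np.outer(np.ones(n), pi_P)
X = np.linalg.inv(np.eye(n) - P + e_pi) - e_pi

# Finite-state analogue of the difference formulas (5.2) and (5.12).
print(np.allclose(pi_Q - pi_P, pi_Q @ (Q - P) @ X))   # True
```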

5.2. Uniform convergence under the second-order moment condition

This subsection presents two theorems, Theorems 5.3 and 5.4. Theorem 5.3 shows that $\big\{\boldsymbol{\pi}^{(N)}\big\}$ converges uniformly to $\boldsymbol{\pi}$ at a rate of $o\big(N^{-1}\big)$ under the second-order moment condition (Assumption 3.1). Theorem 5.4 shows that the tail probabilities of $\boldsymbol{\pi}^{(N)}=\big(\boldsymbol{\pi}^{(N)}(0),\boldsymbol{\pi}^{(N)}(1),\dots,\boldsymbol{\pi}^{(N)}(N)\big)$ are bounded from above by $\big(1+o\big(N^{-1}\big)\big)$ times the corresponding tail probabilities of $\boldsymbol{\pi}=(\boldsymbol{\pi}(0),\boldsymbol{\pi}(1),{\dots})$ , so that the relative gap between them is $o\big(N^{-1}\big)$ .

Theorem 5.3. If Assumptions 2.1 and 3.1 hold, then $\| \boldsymbol{\pi}^{(N)} - \boldsymbol{\pi} \| = o\big(N^{-1}\big)$ .

Proof. It suffices to show that

(5.13) \begin{align}\sum_{k=0}^N \big|\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\big|{\boldsymbol{e}}= o\big(N^{-1}\big),\end{align}

because

\begin{align*}\big\| \boldsymbol{\pi}^{(N)} - \boldsymbol{\pi} \big\|&= \sum_{k=0}^N \big|\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\big|{\boldsymbol{e}}+ \sum_{k=N+1}^{\infty} \boldsymbol{\pi}(k){\boldsymbol{e}}\nonumber\\&= \sum_{k=0}^N \big|\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\big|{\boldsymbol{e}} + o\big(N^{-1}\big),\end{align*}

where the second equality follows from $\boldsymbol{\pi}(k) =\underline{\boldsymbol{\mathcal{O}}}(k^{-2})$ owing to Theorem 3.4(i).

In what follows, we prove (5.13). Equation (4.8) shows that ${\boldsymbol{v}}^{\prime}(n)> {\boldsymbol{v}}^{\prime}(N)$ for $n \in\mathbb{Z}_{\ge N+1}$ , and thus Lemma 4.4(iii) yields

(5.14) \begin{align}\sum_{k=0}^N \big[ |{\boldsymbol{H}}(N;\,k)|{\boldsymbol{e}} + |{\boldsymbol{H}}(n;\,k)|{\boldsymbol{e}} \big]\le 2C{\boldsymbol{v}}^{\prime}(n),\quad n \in\mathbb{Z}_{\ge N+1},\end{align}

where $C > 0$ is some constant. Applying (5.14) to (5.5) results in

(5.15) \begin{align}&\sum_{k=0}^N \big|\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\big|{\boldsymbol{e}}\nonumber\\&\,\le2C\left[\boldsymbol{\pi}^{(N)}(0)\sum_{n=N+1}^{\infty} {\boldsymbol{B}}(n) {\boldsymbol{v}}^{\prime}(n)+\sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell) {\boldsymbol{v}}^{\prime}(n)\right].\end{align}

Furthermore, (4.8) implies that there exists some $C^{\prime} > 0$ such that

(5.16) \begin{align}{\boldsymbol{v}}^{\prime}(n) \le {C^{\prime} \over 2C} {\boldsymbol{e}}\qquad \text{for all}\ n\ \in\ \mathbb{N}.\end{align}

Combining (5.15) with (5.16), (3.1), and $\boldsymbol{\pi}^{(N)}(\ell) = \underline{\boldsymbol{\mathcal{O}}}\big(\ell^{-2}\big)$ (owing to Theorem 3.4(ii)), we obtain

\begin{align*}&\sum_{k=0}^N \big|\boldsymbol{\pi}^{(N)}(k)-\boldsymbol{\pi}(k)\big|{\boldsymbol{e}}\nonumber\\&\quad \le C^{\prime}\left[\sum_{n=N+1}^{\infty} o\big(n^{-3}\big) n+\sum_{\ell=1}^N o\big(\ell^{-2}\big)\sum_{n=N+1}^{\infty} o\big((n-\ell)^{-3}\big) n\right]\nonumber\\&\quad =C^{\prime}\left[\sum_{n=N+1}^{\infty} o(n^{-2})+\sum_{\ell=1}^N o\big(\ell^{-2}\big)\sum_{n=N+1}^{\infty} o\big((n-\ell)^{-3}\big) [(n-\ell) + \ell]\right]\nonumber\\&\quad =C^{\prime}\left[o\big(N^{-1}\big)+\sum_{\ell=1}^N o\big(\ell^{-2}\big)\left\{ o\big(\big(N-\ell-1\big)^{-1}\big) + \ell o\big((N-\ell+1)^{-2}\big) \right\}\right]\nonumber\\&\quad =C^{\prime}\left[o\big(N^{-1}\big)+\sum_{\ell=1}^N\left\{o\big(\ell^{-2}\big) o\big(\big(N-\ell-1\big)^{-1}\big) + o\big(\ell^{-1}\big) o\big(\big(N-\ell+1\big)^{-2}\big)\right\}\right]\nonumber\\&\quad =C^{\prime} o\big(N^{-1}\big),\end{align*}

which shows that (5.13) holds. The proof is complete.

Theorem 5.4. Suppose that Assumptions 2.1 and 3.1 are satisfied. For $k \in \mathbb{Z}_+$ , let $\overline{\boldsymbol{\pi}}(k)\,:\!=\,(\overline{\pi}(k,i))_{i \in \mathbb{M}_1}$ and $\overline{\boldsymbol{\pi}}^{(N)}(k)\,:\!=\,\big(\overline{\pi}^{(N)}(k,i)\big)_{i \in \mathbb{M}_1}$ denote

\begin{align*}\overline{\boldsymbol{\pi}}(k) =\sum_{\ell=k+1}^{\infty}\boldsymbol{\pi}(\ell), \quad \overline{\boldsymbol{\pi}}^{(N)}(k)=\sum_{\ell=k+1}^{\infty}\boldsymbol{\pi}^{(N)}(\ell),\end{align*}

respectively, where $\boldsymbol{\pi}^{(N)}(k) = {\textbf{0}}$ for $k \in \mathbb{Z}_{\ge N+1}$ . We then have

\begin{equation*}\big(\overline{\boldsymbol{\pi}}^{(N)}(0), \overline{\boldsymbol{\pi}}^{(N)}(1),\overline{\boldsymbol{\pi}}^{(N)}(2), \dots\big)\le \big(1+ o\big(N^{-1}\big)\big) (\overline{\boldsymbol{\pi}}(0), \overline{\boldsymbol{\pi}}(1), \overline{\boldsymbol{\pi}}(2), {\dots}).\end{equation*}

Proof. See Appendix E.

5.3. Subgeometric convergence

In this subsection, we first provide preliminaries on heavy-tailed distributions and subgeometric functions, and then present the main theorem and its corollaries on the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ .

5.3.1. Preliminaries: heavy-tailed distributions and subgeometric functions

We introduce the class of heavy-tailed distributions on the domain $\mathbb{Z}_+$ together with two of its subclasses. To this end, let $F\,:\,\mathbb{Z}_+\to [0,1]$ denote a probability distribution (function), and let $\overline{F} = 1 - F$ , which is the complementary distribution (function) of F. Furthermore, let $F^{*2}$ denote the twofold convolution of F with itself, i.e., $F^{*2}(k) = \sum_{\ell=0}^k F(k-\ell)\,[F(\ell) - F(\ell-1)]$ for $k \in \mathbb{Z}_+$ , with the convention $F({-}1)\,:\!=\,0$ .

Definition 5.5. (Heavy-tailed, long-tailed, and subexponential distributions.)

  1. (i) A distribution F is said to be heavy-tailed if and only if

    \begin{equation*}\limsup_{k \to \infty} e^{\theta k} \overline{F}(k) = \infty\quad \text{for any}\ \theta > 0.\end{equation*}
    The class of heavy-tailed distributions is denoted by $\mathcal{H}$ .
  2. (ii) A distribution F is said to be long-tailed if and only if

    \begin{equation*}\lim_{k \to \infty}{\overline{F}(k+\ell) \over \overline{F}(k)} = 1\quad \text{for any fixed}\ \ell \in \mathbb{N}.\end{equation*}
    The class of long-tailed distributions is denoted by $\mathcal{L}$ .
  3. (iii) A distribution F is said to be subexponential if and only if

    \begin{equation*}\lim_{k \to \infty}{1 - F^{*2}(k) \over \overline{F}(k)} = 2.\end{equation*}
    The class of subexponential distributions is denoted by $\mathcal{S}$ .

Remark 5.6. The following inclusion relation holds for the above three classes: $\mathcal{S} \subsetneq \mathcal{L} \subsetneq \mathcal{H}$ (see [Reference Foss, Korshunov and Zachary9, Lemmas 2.17 and 3.2; Section 3.7]).
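
Definition 5.5 can also be probed numerically. The sketch below uses a Zipf-like distribution on $\mathbb{Z}_+$ (probability mass proportional to $(k+1)^{-3}$, truncated at a large point for computation), checks that $e^{\theta k}\overline{F}(k)$ blows up for a fixed $\theta>0$, and checks that $(1 - F^{*2}(k))/\overline{F}(k)$ is close to 2; all numerical choices are illustrative only.

```python
import numpy as np

K = 20000
k = np.arange(K + 1)
p = (k + 1.0) ** (-3)
p /= p.sum()                                   # pmf of a Zipf-like law, truncated at K

def tail(q):
    """tail(q)[m] = sum_{j > m} q[j], computed by a reverse cumulative sum."""
    t = q[::-1].cumsum()[::-1]
    return np.concatenate((t[1:], [0.0]))

F_bar = tail(p)
p2 = np.convolve(p, p)                         # pmf of the twofold convolution F^{*2}
F2_bar = tail(p2)

# Definition 5.5(iii): (1 - F^{*2}(m)) / F_bar(m) -> 2 for subexponential F.
for m in (50, 200, 1000, 2000):
    print(m, F2_bar[m] / F_bar[m])             # ratios close to 2

# Definition 5.5(i): e^{theta m} F_bar(m) -> infinity for every theta > 0.
m = np.array([200, 2000, 5000])
print(np.exp(0.01 * m) * F_bar[m])             # rapidly growing values
```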

Next we introduce the class of subgeometric functions.

Definition 5.7. (Subgeometric functions.) A function $r\,:\,\mathbb{Z}_+\to\mathbb{R}_+\,:\!=\,[0,\infty)$ is said to be a subgeometric function if and only if $\log r(k) = o(k)$ as $k \to \infty$ . The class of subgeometric functions is denoted by $\varTheta$ .

Proposition 5.8. If $F \in \mathcal{L}$ , then $\overline{F} \in \varTheta$ .

Proof. Assuming $F \in \mathcal{L}$ , we prove that

(5.17) \begin{align}\log \overline{F}(k) = o(k)\quad \text{as}\ k\to\infty.\end{align}

By definition,

\begin{align*}\lim_{k\to\infty} \big[\log \overline{F}(k+1) - \log \overline{F}(k)\big]= \lim_{k\to\infty} \log {\overline{F}(k+1) \over \overline{F}(k) }= \log 1= 0.\end{align*}

Therefore, for any $\varepsilon > 0$ , there exists some $k_0 \in \mathbb{N}$ such that $\big|\log \overline{F}(k+1) - \log \overline{F}(k)\big| < \varepsilon$ for all $k \in \mathbb{Z}_{\ge k_0}$ , which yields

\begin{align*}{\log \overline{F}(k_0) - \varepsilon n \over k_0+n}<{\log \overline{F}(k_0+n) \over k_0+n}< {\log \overline{F}(k_0) + \varepsilon n \over k_0+n}\quad \text{for all}\ n\ \in \mathbb{N}.\end{align*}

Letting $n\to\infty$ and then $\varepsilon \downarrow 0$ in the above inequality, we obtain

\begin{align*}\lim_{n\to\infty}{\log \overline{F}(k_0+n) \over k_0+n} = 0,\end{align*}

which implies that (5.17) holds. The proof is complete.
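
Proposition 5.8 is easy to visualize: for a Pareto-type tail, $\log\overline{F}(k)/k$ tends to 0, whereas for a geometric tail it stays at a negative constant. The tails below are hypothetical closed forms used only for illustration.

```python
import numpy as np

k = np.array([10.0, 100.0, 1000.0, 10000.0, 100000.0])

pareto_tail = k ** (-2.0)        # F_bar(k) ~ k^{-2}: long-tailed, hence subgeometric (class Theta)
geom_tail = 0.9 ** k             # F_bar(k) = 0.9^k: light-tailed

print(np.log(pareto_tail) / k)   # -> 0, so log F_bar(k) = o(k)
print(np.log(geom_tail) / k)     # constant log(0.9) < 0, so not o(k)
```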

Remark 5.9. In relation to the class $\varTheta$ , there is a class $\varLambda$ of subgeometric rate functions introduced in [Reference Nummelin and Tuominen37]. A function $\psi\,:\,\mathbb{Z}_+\to\mathbb{R}_+$ belongs to the class $\varLambda$ if and only if

\[0 <\liminf_{k\to\infty}{\psi(k) \over \psi_0(k)} \le\limsup_{k\to\infty}{\psi(k) \over \psi_0(k)} < \infty,\]

for some $\psi_0\,:\,\mathbb{Z}_+\to \mathbb{R}_+$ such that $\psi_0(1) \ge 2$ and $\psi_0$ is nondecreasing with $\log \psi_0(k)/k \searrow 0$ as $k \to \infty$ . By definition, $\varLambda \subset \varTheta$ .

5.3.2. The main theorem and its corollaries

We now make an additional assumption on $\{{\boldsymbol{A}}(k)\}$ and $\{{\boldsymbol{B}}(k)\}$ to study the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ .

Assumption 5.10. There exists a distribution $F \in \mathcal{S}$ such that

(5.18a) \begin{align}\lim_{k\to\infty} { \overline{\overline{{{\boldsymbol{A}}}}}(k){\boldsymbol{e}} \over \overline{F}(k)}&= {\boldsymbol{c}}_{A},\end{align}
(5.18b) \begin{align}\lim_{k\to\infty} {\overline{\overline{{{\boldsymbol{B}}}}}(k){\boldsymbol{e}} \over \overline{F}(k)}&= {\boldsymbol{c}}_{B},\end{align}

where either ${\boldsymbol{c}}_{A}\neq{\textbf{0}}$ or ${\boldsymbol{c}}_{B}\neq{\textbf{0}}$ .

Assumption 5.10 can be interpreted as a condition on the distribution of level increments in steady state. We define $\Delta_+ = \max(X_1-X_0,0)$ and call it the nonnegative level increment. We then define $\varGamma(k)$ , $k \in \mathbb{Z}_+$ , as

(5.19) \begin{align}\varGamma(k)&= \sum_{(\ell,i) \in \mathbb{S}} \pi(\ell,i)\mathbb{P}(\Delta_+ \le k \mid (X_0,J_0)=(\ell,i))\nonumber\\&= \sum_{(\ell,i) \in \mathbb{S}} \pi(\ell,i)\mathbb{P}(X_1-X_0 \le k \mid (X_0,J_0)=(\ell,i))\nonumber\\&= \boldsymbol{\pi}(0) \sum_{n=0}^k {\boldsymbol{B}}(n){\boldsymbol{e}}+ \overline{\boldsymbol{\pi}}(0) \sum_{n=-1}^k {\boldsymbol{A}}(n){\boldsymbol{e}}.\end{align}

We call $\varGamma$ the stationary nonnegative level-increment (SNL) distribution. Assumption 2.1 ensures that the SNL distribution $\varGamma$ has a finite positive mean $\mu_{\pi}\,:\!=\,\boldsymbol{\pi}(0) \overline{{\boldsymbol{m}}}_{B} + \overline{\boldsymbol{\pi}}(0) \overline{{\boldsymbol{m}}}_{A}^+ > 0$ , where

\begin{align*}\overline{{\boldsymbol{m}}}_{A}^+= \sum_{k=1}^{\infty}k{\boldsymbol{A}}(k){\boldsymbol{e}}= \overline{{\boldsymbol{m}}}_{A} + {\boldsymbol{A}}({-}1){\boldsymbol{e}}\ge {\textbf{0}},\neq{\textbf{0}}.\end{align*}

We also define $\overline{\varGamma}_I(k) = 1 - \varGamma_I(k)$ for $k \in \mathbb{Z}_+$ , where $\varGamma_I$ denotes the integrated tail distribution (the equilibrium distribution) of the SNL distribution $\varGamma$ , that is,

(5.20) \begin{align}\varGamma_I(k)= {1 \over \mu_{\pi} } \sum_{\ell=0}^k (1 - \varGamma(\ell)),\quad k \in \mathbb{Z}_+.\end{align}

It follows from (5.20), (5.19), and Assumption 5.10 that

(5.21) \begin{align}\lim_{k\to\infty}{\overline{\varGamma}_I(k) \over \overline{F}(k)}= {\boldsymbol{\pi}(0) {\boldsymbol{c}}_{B} + \overline{\boldsymbol{\pi}}(0) {\boldsymbol{c}}_{A}\over\boldsymbol{\pi}(0) \overline{{\boldsymbol{m}}}_{B} + \overline{\boldsymbol{\pi}}(0) \overline{{\boldsymbol{m}}}_{A}^+} \in (0,\infty).\end{align}

Since $F \in \mathcal{S}$ , we have $\varGamma_I \in \mathcal{S} \subset \mathcal{L}$ (see [Reference Foss, Korshunov and Zachary9, Corollary 3.13]) and thus $\overline{\varGamma}_I \in \varTheta$ by Proposition 5.8.
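
A minimal numerical sketch of (5.19) and (5.20), assuming a single background state (so that ${\boldsymbol{A}}(k)$ and ${\boldsymbol{B}}(k)$ are scalars) and hypothetical increment distributions with power-law tails; the values of $\boldsymbol{\pi}(0)$ and $\overline{\boldsymbol{\pi}}(0)$ are placeholders rather than the true stationary quantities, and all series are truncated at a finite point.

```python
import numpy as np

K = 2000                                        # truncation point for the increment laws

# Hypothetical scalar increment distributions {A(k)} (k >= -1) and {B(n)} (n >= 0).
A = np.zeros(K + 2)                             # index j corresponds to increment k = j - 1
A[0] = 0.55                                     # A(-1)
A[1:] = (np.arange(K + 1) + 1.0) ** (-3.0)
A[1:] *= 0.45 / A[1:].sum()                     # normalize so that sum_k A(k) = 1
B = (np.arange(K + 1) + 1.0) ** (-3.0)
B /= B.sum()                                    # sum_{n>=0} B(n) = 1

pi0, pibar0 = 0.3, 0.7                          # placeholder pi(0) and pi_bar(0); total mass 1

# SNL distribution (5.19): Gamma(m) = pi(0) sum_{n=0}^m B(n) + pi_bar(0) sum_{n=-1}^m A(n).
Gamma = pi0 * np.cumsum(B) + pibar0 * np.cumsum(A)[1:]

# Mean of Gamma and its integrated tail distribution (5.20).
mu_pi = np.sum(1.0 - Gamma)                     # mu_pi = sum_{m>=0} (1 - Gamma(m)), truncated
Gamma_I = np.cumsum(1.0 - Gamma) / mu_pi
Gamma_I_bar = 1.0 - Gamma_I

print(mu_pi)
print(Gamma_I_bar[[10, 100, 1000]])             # the integrated tail inherits the power-law decay
```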

Assumption 5.10 yields a subexponential asymptotic formula for the stationary distribution $\boldsymbol{\pi}$ of the infinite-level chain, as shown in the following proposition. This proposition contributes to the proof of Theorem 5.12 below (see (F.9) in Appendix F.1).

Proposition 5.11. ([Reference Masuyama30, Theorem 3.1].) If Assumptions 2.1 and 5.10 hold, then

(5.22) \begin{equation}\lim_{k\to\infty} {\overline{\boldsymbol{\pi}}(k) \over \overline{F}(k)}= {\boldsymbol{\pi}(0) {\boldsymbol{c}}_{B} + \overline{\boldsymbol{\pi}}(0) {\boldsymbol{c}}_{A}\over-\sigma} \boldsymbol{\varpi}.\end{equation}

We have arrived at the main theorem of this paper.

Theorem 5.12. If Assumptions 2.1, 3.1, and 5.10 hold, then

(5.23) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\overline{F}(N)}&={\boldsymbol{\pi}(0){\boldsymbol{c}}_{B} + \overline{\boldsymbol{\pi}}(0){\boldsymbol{c}}_{A}\over -\sigma} \boldsymbol{\pi}(k), &\quad k &\in \mathbb{Z}_+,\end{align}

and thus, as $N\to\infty$ , $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ converges to zero at the rate of the subgeometric function $\overline{F}$ (see Proposition 5.8).

Proof. See Appendix F.

There are two other versions of the subgeometric convergence formula (5.23).

Corollary 5.13. If all the conditions of Theorem 5.12 are satisfied, then

(5.24) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\overline{\varGamma}_I(N)}&={\boldsymbol{\pi}(0) \overline{{\boldsymbol{m}}}_{B} + \overline{\boldsymbol{\pi}}(0) \overline{{\boldsymbol{m}}}_{A}^+ \over -\sigma}\boldsymbol{\pi}(k), \quad k \in \mathbb{Z}_+,\end{align}
(5.25) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\overline{\boldsymbol{\pi}}(N){\boldsymbol{e}}}&= \boldsymbol{\pi}(k), \qquad\qquad\qquad\quad\qquad k \in \mathbb{Z}_+.\end{align}

Proof. The formulas (5.24) and (5.25) follow from the combination of Theorem 5.12 with (5.21) and (5.22), respectively.

Combining Theorem 5.12 with Corollary 5.13 also yields three subgeometric convergence formulas for the relative difference $[\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)] / \boldsymbol{\pi}(k){\boldsymbol{e}}$ , as stated in the following corollary.

Corollary 5.14. If all the conditions of Theorem 5.12 are satisfied, then

\begin{alignat*}{2}\lim_{N\to\infty}{1 \over\overline{F}(N)}{\left\|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\boldsymbol{\pi}(k){\boldsymbol{e}}\right\|}&={\boldsymbol{\pi}(0){\boldsymbol{c}}_{B} + \overline{\boldsymbol{\pi}}(0){\boldsymbol{c}}_{A}\over -\sigma}, &\quad k &\in \mathbb{Z}_+,\\\lim_{N\to\infty}{1 \over\overline{\varGamma}_I(N)}{\left\|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\boldsymbol{\pi}(k){\boldsymbol{e}}\right\|}&={\boldsymbol{\pi}(0) \overline{{\boldsymbol{m}}}_{B} + \overline{\boldsymbol{\pi}}(0) \overline{{\boldsymbol{m}}}_{A}^+ \over -\sigma}, &\quad k &\in \mathbb{Z}_+,\\\lim_{N\to\infty}{1 \over \overline{\boldsymbol{\pi}}(N){\boldsymbol{e}}}{\left\|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\boldsymbol{\pi}(k){\boldsymbol{e}}\right\|}&= 1, &\quad k &\in \mathbb{Z}_+.\end{alignat*}
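
The constants appearing in (5.23)–(5.25) and in Corollary 5.14 are linked through (5.21) and (5.22), and this link can be checked with a few lines of arithmetic. All numbers below are hypothetical scalar placeholders (single background state, so that $\boldsymbol{\varpi}{\boldsymbol{e}}=1$ holds trivially); they are only plugged into the stated relations.

```python
# Hypothetical scalar placeholders for the primitives of Theorem 5.12 and Corollary 5.13.
pi0, pibar0 = 0.3, 0.7        # pi(0) and pi_bar(0)
cA, cB = 1.2, 0.8             # limits c_A, c_B in Assumption 5.10
mAplus, mB = 1.5, 1.1         # means m_A^+ and m_B
sigma = -0.4                  # mean drift sigma (negative)
mu_pi = pi0 * mB + pibar0 * mAplus

F_bar_N = 1e-4                # F_bar(N) for some large N (placeholder value)
pi_k = 0.02                   # pi(k) e for the level k of interest (placeholder value)

# Asymptotic equivalents implied by (5.21) and (5.22).
Gamma_I_bar_N = (pi0 * cB + pibar0 * cA) / mu_pi * F_bar_N        # tail of the integrated SNL law
pi_bar_N = (pi0 * cB + pibar0 * cA) / (-sigma) * F_bar_N          # pi_bar(N) e, using varpi e = 1

# The three predictions of pi^{(N)}(k) - pi(k) from (5.23), (5.24), and (5.25).
pred_523 = (pi0 * cB + pibar0 * cA) / (-sigma) * pi_k * F_bar_N
pred_524 = (pi0 * mB + pibar0 * mAplus) / (-sigma) * pi_k * Gamma_I_bar_N
pred_525 = pi_k * pi_bar_N

print(pred_523, pred_524, pred_525)   # the three values coincide
```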

Finally, we consider the case where ${\boldsymbol{A}}$ is reducible. We can conjecture that the convergence formula (5.23) holds if ${\boldsymbol{A}}$ is not irreducible but has a single communication class. However, the irreducibility of ${\boldsymbol{A}}$ is assumed in most of the existing results used to prove Theorem 5.12. Therefore, in order to confirm that this conjecture is true, we would have to remove the irreducibility assumption on ${\boldsymbol{A}}$ from these existing results. We can also apply Theorem 5.12 to the case where ${\boldsymbol{A}}$ has more than one communication class but no transient states; i.e., ${\boldsymbol{A}}(k)$ , $k \in \mathbb{Z}_{\ge -1}$ , is rearranged to be of the block-diagonal form

\begin{align*}{\boldsymbol{A}}(k)=\left(\begin{array}{cccc}{\boldsymbol{A}}^{(1)}(k) & {\boldsymbol{O}} & \cdots & {\boldsymbol{O}}\\{\boldsymbol{O}} & {\boldsymbol{A}}^{(2)}(k) & \cdots & {\boldsymbol{O}}\\\vdots & \vdots & \ddots & \vdots\\{\boldsymbol{O}} & {\boldsymbol{O}} & \cdots & {\boldsymbol{A}}^{(s)}(k)\end{array}\right),\quad k \in \mathbb{Z}_{\ge -1},\end{align*}

where s is some positive integer. In this case, by observing the finite-level chain $\Big\{\Big(X_n^{(N)},J_n^{(N)}\Big)\Big\}$ only when it is in $\mathbb{L}_0 \cup \big(\mathbb{Z}_{[1,N]} \times \mathbb{M}_1^{(i)}\big)$ ( $i=1,2,\dots,s$ ), we have s (censored) finite-level chains covered by Theorem 5.12 and thus obtain such a result as (5.23) for each chain (under appropriate technical assumptions). If we integrate the results obtained in this way, we may obtain a convergence formula for the original finite-level chain $\Big\{\Big(X_n^{(N)},J_n^{(N)}\Big)\Big\}$ , but this work would be somewhat tedious.

5.4. Application to the MAP/GI/1/N queue

This subsection considers the application of the main theorem (Theorem 5.12) to the loss probability in the MAP/GI/1/N queue (see e.g. [Reference Baiocchi3]). First, we introduce the Markovian arrival process (MAP) [Reference Lucantoni, Meier-Hellstern and Neuts24] (‘ ${\boldsymbol{D}}_0$ ’ and ‘ ${\boldsymbol{D}}_1$ ’ are used to express two matrices that characterize the MAP according to the convention of the representative paper [Reference Lucantoni23] on MAPs and batch MAPs; the two matrices, of course, have no relation to the deviation matrix ${\boldsymbol{D}}$ ). Next, we describe the MAP/GI/1/N queue and then establish a finite-level M/G/1-type Markov chain of the queue length process in the MAP/GI/1/N queue. Finally, we derive a subexponential asymptotic formula for the loss probability of the MAP/GI/1/N queue.

We introduce the MAP [Reference Lucantoni, Meier-Hellstern and Neuts24]. The MAP has its background (Markov) chain, denoted by $\{\varphi(t); t \ge 0\}$ , which is defined on a finite state space $\mathbb{M} \,:\!=\, \{1,2,\dots,M\} \subset \mathbb{N}$ and is characterized by the irreducible infinitesimal generator ${\boldsymbol{Q}}$ . The irreducible infinitesimal generator ${\boldsymbol{Q}}$ is expressed as the sum of two matrices ${\boldsymbol{D}}_0$ and ${\boldsymbol{D}}_1$ , i.e., ${\boldsymbol{Q}} = {\boldsymbol{D}}_0 + {\boldsymbol{D}}_1$ , where ${\boldsymbol{D}}_1$ denotes a nonnegative matrix and ${\boldsymbol{D}}_0$ denotes a diagonally dominant matrix with negative diagonal elements and nonnegative off-diagonal elements. Note that ${\boldsymbol{D}}_1$ governs the transition of the background chain $\{\varphi(t)\}$ with an arrival, while ${\boldsymbol{D}}_0$ governs its transition without an arrival. Thus, the MAP is characterized by the pair of matrices $({\boldsymbol{D}}_0, {\boldsymbol{D}}_1$ ). For later use, let $\boldsymbol{\varphi}$ denote the stationary distribution vector of the irreducible background chain $\{\varphi(t)\}$ with infinitesimal generator ${\boldsymbol{Q}}$ , i.e., $\boldsymbol{\varphi}{\boldsymbol{Q}} = {\textbf{0}}$ , $\boldsymbol{\varphi}{\boldsymbol{e}}=1$ , and $\boldsymbol{\varphi} > {\textbf{0}}$ . Furthermore, let $\lambda = \boldsymbol{\varphi}{\boldsymbol{D}}_1{\boldsymbol{e}}$ , which is the arrival rate of MAP $({\boldsymbol{D}}_0,{\boldsymbol{D}}_1)$ . To avoid trivial cases, assume that ${\boldsymbol{D}}_1 \ge {\boldsymbol{O}},\neq {\boldsymbol{O}}$ and thus $\lambda > 0$ .

We describe the MAP/GI/1/N queue and related notions. Consider a queueing system with a single server and a buffer of capacity $N \in \mathbb{N}$ . Customers arrive at the system according to MAP $({\boldsymbol{D}}_0,{\boldsymbol{D}}_1)$ . Arriving customers are served on a first-come, first-served (FCFS) basis. If the waiting room is not full when a customer arrives, the customer is accepted into the system; otherwise, the customer is rejected. In addition, if the server is idle, the accepted customer’s service begins immediately; otherwise, the customer waits for his or her service turn. The service times of accepted customers are assumed to be independent and identically distributed according to a general distribution function G(t) ( $t \ge 0$ ) with finite mean $\mu > 0$ . The traffic intensity of our queue is denoted by $\rho \,:\!=\, \lambda \mu \in (0,\infty)$ .

We establish a finite-level M/G/1-type Markov chain from the queue length process in the MAP/GI/1/N queue. For $n \in \mathbb{N}$ , let $L_n^{(N)} \in \mathbb{Z}_{[0,N]}$ and $\varphi_n \in \mathbb{M}$ denote the queue length and the background state, respectively, immediately after the nth departure of a customer. It then follows (see e.g. [Reference Baiocchi3]) that the discrete-time bivariate process $\{(L_n^{(N)},\varphi_n);\,n \in \mathbb{Z}_+\}$ is a finite-level M/G/1-type Markov chain on state space $\mathbb{Z}_{[0,N]} \times \mathbb{M}$ with transition probability matrix ${\boldsymbol{T}}^{(N)}$ (see [Reference Baiocchi3, Equation (4) and p. 871]) given by

\begin{align*}{\boldsymbol{T}}^{(N)}\,:\!=\,\left(\begin{array}{c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c}{\boldsymbol{E}}_0(0) &{\boldsymbol{E}}_0(1) &{\boldsymbol{E}}_0(2) &\cdots &{\boldsymbol{E}}_0(N-2) &{\boldsymbol{E}}_0(N-1) &\overline{{\boldsymbol{E}}}_0(N-1)\\{\boldsymbol{E}}(0) &{\boldsymbol{E}}(1) &{\boldsymbol{E}}(2) &\cdots &{\boldsymbol{E}}(N-2) &{\boldsymbol{E}}(N-1) &\overline{{\boldsymbol{E}}}(N-1)\\{\boldsymbol{O}} &{\boldsymbol{E}}(0) &{\boldsymbol{E}}(1) &\cdots &{\boldsymbol{E}}(N-3) &{\boldsymbol{E}}(N-2) &\overline{{\boldsymbol{E}}}(N-2)\\\vdots &\vdots &\vdots &\ddots &\vdots &\vdots &\vdots \\{\boldsymbol{O}} &{\boldsymbol{O}} &{\boldsymbol{O}} &\cdots &{\boldsymbol{E}}(1) &{\boldsymbol{E}}(2) &\overline{{\boldsymbol{E}}}(2)\\{\boldsymbol{O}} &{\boldsymbol{O}} &{\boldsymbol{O}} &\cdots &{\boldsymbol{E}}(0) &{\boldsymbol{E}}(1) &\overline{{\boldsymbol{E}}}(1)\\{\boldsymbol{O}} &{\boldsymbol{O}} &{\boldsymbol{O}} &\cdots &{\boldsymbol{O}} &{\boldsymbol{E}}(0) &\overline{{\boldsymbol{E}}}(0)\\\end{array}\right),\end{align*}

where, for $k \in \mathbb{Z}_+$ ,

(5.26) \begin{align}\overline{{\boldsymbol{E}}}(k) &= \sum_{\ell=k+1}^{\infty} {\boldsymbol{E}}(\ell), \quad \overline{{\boldsymbol{E}}}_0(k) = \sum_{\ell=k+1}^{\infty} {\boldsymbol{E}}_0(\ell),\nonumber\\{\boldsymbol{E}}_0(k) &= ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1{\boldsymbol{E}}(k),\end{align}

and the sequence $\{{\boldsymbol{E}}(k);\,k \in \mathbb{Z}_+\}$ satisfies

\begin{align*}\sum_{k=0}^{\infty} z^k {\boldsymbol{E}}(k)= \int_0^{\infty}\exp \left\{ ({\boldsymbol{D}}_0 + z{\boldsymbol{D}}_1) t\right\} dG(t).\end{align*}

Note here (see [Reference Masuyama, Liu and Takine33, Section 3]) that $\sum_{k=0}^{\infty}{\boldsymbol{E}}(k)$ is an irreducible and stochastic matrix satisfying the conditions

(5.27) \begin{align}\boldsymbol{\varphi}\sum_{k=0}^{\infty}{\boldsymbol{E}}(k) &= \boldsymbol{\varphi},\nonumber\\\sum_{k=1}^{\infty}{\boldsymbol{E}}(k) &>{\boldsymbol{O}},\qquad\qquad\qquad\end{align}
(5.28) \begin{align}({\boldsymbol{E}}(0))_{i,i} &>0 \quad \text{for all}\ i\ \in \mathbb{M}.\end{align}

Equations (5.27) and (5.28) imply that the finite-level chain $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ can reach any state in its state space from any state in $\mathbb{Z}_{[1,N]} \times \mathbb{M}$ . Note also that $({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1 \ge {\boldsymbol{O}}$ and $ ({\boldsymbol{D}}_0 + {\boldsymbol{D}}_1){\boldsymbol{e}} ={\boldsymbol{Q}}{\boldsymbol{e}} ={\textbf{0}} $ . Therefore, $({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1$ is a stochastic matrix. Combining this fact, (5.26), and (5.27), we obtain

(5.29) \begin{align}\sum_{k=1}^{\infty}{\boldsymbol{E}}_0(k)= ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1 \sum_{k=1}^{\infty} {\boldsymbol{E}}(k)> {\boldsymbol{O}}.\end{align}

Furthermore, (5.27)–(5.29) imply that $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ is irreducible and thus has a unique stationary distribution vector, denoted by

\begin{align*}\boldsymbol{\phi}^{(N)} \,:\!=\, \big(\boldsymbol{\phi}^{(N)}(0),\boldsymbol{\phi}^{(N)}(1),\dots,\boldsymbol{\phi}^{(N)}(N)\big),\end{align*}

where $\boldsymbol{\phi}^{(N)}(k)$ , $k \in \mathbb{Z}_{[0,N]}$ , is a $1 \times M$ vector with the following interpretation:

\begin{align*}\big(\boldsymbol{\phi}^{(N)}(k)\big)_i=\lim_{n\to\infty}{1 \over n+1}\sum_{m=0}^n\mathbb{1}\big(L_m^{(N)} = k, \varphi_m = i\big),\quad (k,i) \in \mathbb{Z}_{[0,N]} \times \mathbb{M}.\end{align*}
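
The matrices ${\boldsymbol{E}}(k)$ can be computed numerically from the matrix generating function above. The sketch below assumes a deterministic service time $\tau$ (a special case of $G$ chosen only so that the integral degenerates to a single matrix exponential) and hypothetical MAP parameters; it counts arrivals as levels of an auxiliary continuous-time chain and reads ${\boldsymbol{E}}(k)$ off the top block row of a matrix exponential.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical MAP(D0, D1): the rows of D0 + D1 sum to zero.
D0 = np.array([[-3.0, 1.0],
               [0.5, -2.0]])
D1 = np.array([[1.5, 0.5],
               [0.5, 1.0]])
M, K, tau = 2, 30, 1.0                         # phases, arrival-count truncation, service time

# Auxiliary generator whose level records the number of arrivals (capped at K):
# D0 on the diagonal blocks, D1 on the superdiagonal blocks.
Q_big = np.zeros((M * (K + 1), M * (K + 1)))
for n in range(K + 1):
    Q_big[n * M:(n + 1) * M, n * M:(n + 1) * M] = D0
    if n < K:
        Q_big[n * M:(n + 1) * M, (n + 1) * M:(n + 2) * M] = D1

P_big = expm(Q_big * tau)
E = [P_big[0:M, n * M:(n + 1) * M] for n in range(K + 1)]     # E(0), ..., E(K)
E0 = [np.linalg.solve(-D0, D1) @ Ek for Ek in E]              # E_0(k) from (5.26)

print(sum(E).sum(axis=1), sum(E0).sum(axis=1))                # row sums close to 1
```

For a non-degenerate service-time distribution, the same top block row would be integrated against $dG(t)$, e.g. by numerical quadrature over a grid of service times.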

We present a formula for the loss probability, denoted by $P_{\textrm{loss}}^{(N)}$ , in the MAP/GI/1/N queue. The loss probability $P_{\textrm{loss}}^{(N)}$ is equal to the fraction of customers who are rejected because there is no room in the buffer upon their arrival. There is a well-known formula for the loss probability $P_{\textrm{loss}}^{(N)}$ in the MAP/GI/1/N queue (see e.g. [Reference Baiocchi3, Equation (9)]):

(5.30) \begin{align}P_{\textrm{loss}}^{(N)}&= 1 -{1\over\rho \big[1 + \boldsymbol{\phi}^{(N)}(0) ({-}{\boldsymbol{D}}_0)^{-1}\mu^{-1} {\boldsymbol{e}} \big]}\nonumber\\&= 1 -{1 \over \rho + \lambda \boldsymbol{\phi}^{(N)}(0) ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}} }.\end{align}

The asymptotic behavior of the loss probability $P_{\textrm{loss}}^{(N)}$ will be expressed in terms of the stationary distribution (if it exists) of the infinite-level limit of the finite-level chain $\Big\{\Big(L_n^{(N)},\varphi_n\Big)\Big\}$ . Let ${\boldsymbol{T}}$ denote

\begin{equation*}{\boldsymbol{T}}=\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}{\boldsymbol{E}}_0(0) & {\boldsymbol{E}}_0(1) & {\boldsymbol{E}}_0(2) & {\boldsymbol{E}}_0(3) & \cdots\\[3pt]{\boldsymbol{E}}(0) & {\boldsymbol{E}}(1) & {\boldsymbol{E}}(2) & {\boldsymbol{E}}(3) & \cdots\\[3pt]{\boldsymbol{O}} & {\boldsymbol{E}}(0) & {\boldsymbol{E}}(1) & {\boldsymbol{E}}(2) & \cdots\\[3pt]{\boldsymbol{O}} & {\boldsymbol{O}} & {\boldsymbol{E}}(0) & {\boldsymbol{E}}(1) & \cdots\\[3pt]\vdots & \vdots & \vdots & \vdots & \ddots\end{array}\right),\end{equation*}

which is the transition probability matrix of the embedded Markov chain of the queue length process at the departure points in the MAP/GI/1 queue, i.e., the infinite-buffer version of the MAP/GI/1/N queue. As with ${\boldsymbol{T}}^{(N)}$ , the transition probability matrix ${\boldsymbol{T}}$ is irreducible by (5.27)–(5.29). To proceed further, assume that

\begin{align*} 0 < \rho < 1,\end{align*}

which ensures that ${\boldsymbol{T}}$ has a unique stationary distribution vector, denoted by

\begin{align*}\boldsymbol{\phi} \,:\!=\, (\boldsymbol{\phi}(0),\boldsymbol{\phi}(1),{\dots}),\end{align*}

where $\boldsymbol{\phi}(k)$ is a $1 \times M$ vector for all $k \in \mathbb{Z}_+$ .

We now introduce the assumption for deriving a subexponential asymptotic formula for the loss probability $P_{\textrm{loss}}^{(N)}$ .

Assumption 5.15. Let $G_I$ denote the integrated tail distribution of the service time distribution G, i.e.,

\begin{align*}G_I(t) = {1 \over \mu}\int_0^t (1 - G(u)) du,\quad t \ge 0.\end{align*}

The integrated tail distribution $G_I$ is second-order long-tailed (see [Reference Masuyama27, Definition 1.1]), i.e.,

(5.31) \begin{align}\lim_{t \to \infty} {\overline{G}_I(t + \sqrt{t}) \over \overline{G}_I(t)} = 1,\end{align}

where $\overline{G}_I(t) = 1 - G_I(t)$ for $t \ge 0$ .

Remark 5.16. According to [Reference Jelenković, Momčilović and Zwart14, Definition 1], the distribution $G_I$ is said to be square-root-insensitive if and only if (5.31) holds.

Under Assumption 5.15, we show two lemmas (Lemmas 5.17 and 5.18) and then a single theorem (Theorem 5.19). The theorem presents a subexponential asymptotic formula for the loss probability $P_{\textrm{loss}}^{(N)}$ .

Lemma 5.17. Suppose that Assumption 5.15 holds and let

\begin{align*}\overline{\boldsymbol{\phi}}(k) = \sum_{\ell=k+1}^{\infty} \boldsymbol{\phi}(\ell), \quad \overline{\overline{{{\boldsymbol{E}}}}}(k) = \sum_{\ell=k+1}^{\infty} \overline{{\boldsymbol{E}}}(\ell), \quad \overline{\overline{{{\boldsymbol{E}}}}}_0(k) = \sum_{\ell=k+1}^{\infty} \overline{{\boldsymbol{E}}}_0(\ell),\quad k \in \mathbb{Z}_+.\end{align*}

If $G_I \in \mathcal{S}$ , then

(5.32) \begin{align}\lim_{N \to \infty} {\overline{\overline{{{\boldsymbol{E}}}}}_0(N){\boldsymbol{e}} \over \overline{G}_I(N/\lambda)}&= \rho {\boldsymbol{e}},\end{align}
(5.33) \begin{align}\lim_{N \to \infty} {\overline{\overline{{{\boldsymbol{E}}}}}(N){\boldsymbol{e}} \over \overline{G}_I(N/\lambda)}&= \rho {\boldsymbol{e}},\end{align}

and

(5.34) \begin{align}\lim_{N \to \infty} {\overline{\boldsymbol{\phi}}(N) \over \overline{G}_I(N/\lambda)}&= {\rho \over 1 - \rho}\boldsymbol{\varphi}.\end{align}

Proof. Equation (5.33) follows from [Reference Masuyama30, Theorem 4.2(iii)]. It also follows from (5.33), (5.26), and $({-}{\boldsymbol{D}}_0)^{-1} {\boldsymbol{D}}_1{\boldsymbol{e}} = {\boldsymbol{e}}$ that

\begin{align*}\lim_{N \to \infty} {\overline{\overline{{{\boldsymbol{E}}}}}_0(N){\boldsymbol{e}} \over \overline{G}_I(N/\lambda)}&= \rho ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{D}}_1{\boldsymbol{e}} = \rho{\boldsymbol{e}},\end{align*}

which shows that (5.32) holds. Combining (5.32), (5.33), and [Reference Masuyama30, Theorem 3.1] yields

\begin{align*}\lim_{N \to \infty} {\overline{\boldsymbol{\phi}}(N) \over \overline{G}_I(N/\lambda)}&= {\rho \left[ \boldsymbol{\phi}(0){\boldsymbol{e}} +\overline{\boldsymbol{\phi}}(0){\boldsymbol{e}} \right]\over 1 - \rho}\boldsymbol{\varphi}= {\rho \over 1 - \rho}\boldsymbol{\varphi},\end{align*}

which shows that (5.34) holds.

Lemma 5.18. Suppose that Assumption 5.15 holds. If $G_I \in \mathcal{S}$ , then

(5.35) \begin{align}\lim_{N \to \infty}{\boldsymbol{\phi}^{(N)}(k) - \boldsymbol{\phi}(k) \over \overline{G}_I(N/\lambda)}&= {\rho \over 1 - \rho} \boldsymbol{\phi}(k),\quad k \in \mathbb{Z}_+.\end{align}

Proof. This lemma follows from Lemma 5.17 and Theorem 5.12.

Theorem 5.19. Suppose that Assumption 5.15 holds. If $G_I \in \mathcal{S}$ , then

(5.36) \begin{align}P_{\textrm{loss}}^{(N)}\stackrel{N}{\sim} \rho \overline{G}_I(N/\lambda)\stackrel{N}{\sim} (1 - \rho) \overline{\boldsymbol{\phi}}(N){\boldsymbol{e}},\end{align}

where $f(x) \stackrel{x}{\sim} g(x)$ means that $\displaystyle\lim_{x \to \infty}f(x)/g(x) = 1$ .

Proof. It follows from (5.30) and [Reference Lucantoni23, Equation (54)] that

(5.37) \begin{align}\lambda \boldsymbol{\phi}^{(N)}(0)({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}}&= {1 \over 1 - P_{\textrm{loss}}^{(N)}} - \rho,\nonumber\\\lambda \boldsymbol{\phi}(0)({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}}&= 1 - \rho,\end{align}

and thus

(5.38) \begin{align}\lambda \big[ \boldsymbol{\phi}^{(N)}(0) - \boldsymbol{\phi}(0) \big] ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}}&= {1 \over 1 - P_{\textrm{loss}}^{(N)}} - 1= {P_{\textrm{loss}}^{(N)} \over 1 - P_{\textrm{loss}}^{(N)}}.\end{align}

Applying (5.35) to (5.38) and using (5.37), we obtain

\begin{align*}\lim_{N \to \infty}{1 \over \overline{G}_I(N/\lambda)}{P_{\textrm{loss}}^{(N)} \over 1 - P_{\textrm{loss}}^{(N)}}&= \lambda \lim_{N \to \infty}{\boldsymbol{\phi}^{(N)}(0) - \boldsymbol{\phi}(0) \over \overline{G}_I(N/\lambda)} ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}}\nonumber\\&= {\rho \over 1 - \rho} \lambda \boldsymbol{\phi}(0) ({-}{\boldsymbol{D}}_0)^{-1}{\boldsymbol{e}}= \rho,\end{align*}

which implies that $P_{\textrm{loss}}^{(N)} \stackrel{N}{\sim} \rho \overline{G}_I(N/\lambda)$ . Combining this and (5.34) results in (5.36).
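
The approximation in (5.36) is straightforward to evaluate once $\overline{G}_I$ is available; the following sketch does so for a hypothetical Pareto-type integrated tail and placeholder traffic parameters.

```python
# Evaluating the subexponential approximation P_loss^{(N)} ~ rho * G_I_bar(N / lambda).
lam, mu = 0.8, 1.0                  # arrival rate and mean service time (placeholders)
rho = lam * mu
a, b = 1.5, 2.0                     # hypothetical Pareto-type integrated tail G_I_bar

def G_I_bar(t):
    return (1.0 + t / b) ** (-a)    # regularly varying, hence subexponential and
                                    # square-root-insensitive (cf. Assumption 5.15)

for N in (10, 100, 1000, 10000):
    print(N, rho * G_I_bar(N / lam))
```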

To the best of our knowledge, Theorem 5.19 is a new result on the asymptotics of the loss probability in the MAP/GI/1/N queue and its variants. Baiocchi [Reference Baiocchi3] studied the loss probability in the same queue as ours, but he derived a geometric asymptotic formula, not a subexponential one. Theorem 5.19 can also be extended to batch-arrival and service-vacation models. For example, we could derive a subexponential asymptotic formula for the loss probability in a BMAP/GI/1/N queue with vacations, and such a result would generalize the asymptotic formulas presented in Liu and Zhao [Reference Liu and Zhao22].

6. Concluding remarks

Theorem 5.12, the main theorem of this paper, presents the subgeometric convergence formula for the difference between $\boldsymbol{\pi}^{(N)}(k)$ and $\boldsymbol{\pi}(k)$ , which are the respective stationary probabilities of level k in the finite-level and infinite-level M/G/1-type Markov chains. Theorem 5.12, together with Corollaries 5.13 and 5.14, provides the following three insights into the subgeometric convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ :

  1. (i) The convergence of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ to $\boldsymbol{\pi}$ is subgeometric if the integrated tail distribution $\varGamma_I$ of the SNL distribution $\varGamma$ is subexponential.

  2. (ii) The subgeometric convergence rate of $\big\{\boldsymbol{\pi}^{(N)}\big\}$ is equal to the decay rate of $\overline{\varGamma}_I(N)$ and $\overline{\boldsymbol{\pi}}(N)$ .

  3. (iii) The decay rate of $\|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\| / (\boldsymbol{\pi}(k){\boldsymbol{e}})$ is independent of the level value k.

Two challenging future problems remain after this study. The first problem is to derive geometric convergence formulas for $\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)$ . A typical such formula would be of the following form:

\[\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)\over\gamma^N}=\boldsymbol{\zeta}(k), \quad k \in \mathbb{Z}_+,\]

for some $\gamma \in (0,1)$ and nonnegative vector $\boldsymbol{\zeta}(k) \neq {\textbf{0}}$ . Once this formula is obtained, we can derive a geometric asymptotic formula for the loss probability in M/G/1-type queues. The second problem is to study the convergence rate of the total variation distance $\|\boldsymbol{\pi}^{(N)} - \boldsymbol{\pi}\|=\sum_{k=0}^{N}|\boldsymbol{\pi}^{(N)}(k) - \boldsymbol{\pi}(k)|{\boldsymbol{e}} + \overline{\boldsymbol{\pi}}(N){\boldsymbol{e}}$ in the cases of geometric and subgeometric convergence. This second problem would be more challenging than the first, since it is in general not justified to interchange the two operators ‘ $\lim_{N\to\infty}$ ’ and ‘ $\sum_{k=0}^{\infty}$ ’.

Appendix A. Proof of Proposition 2.4

We first present a key lemma (Lemma A.1) for the proof of Proposition 2.4, and then prove this proposition.

A.1. Key lemma for the proof

Lemma A.1. Let $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big);\,n\in\mathbb{Z}_+\Big\}$ denote a stochastic process such that

(A.1a)–(A.1c) \begin{align*}\breve{X}_n^{(N)}=\left\{\begin{array}{l@{\quad}l@{\quad}l}X_0 \wedge N, & n=0, & \text{(A.1a)}\\[8pt]\Big(\breve{X}_{n-1}^{(N)} + X_n - X_{n-1}\Big) \wedge N, &n \in \mathbb{N}, \,\breve{X}_{n-1}^{(N)} \in \mathbb{Z}_{[0,N-1]}, & \text{(A.1b)}\\[8pt]\breve{X}_{n-1}^{(N)} + \big(X_n - X_{n-1}\big) \wedge 0, &n \in \mathbb{N},\,\breve{X}_{n-1}^{(N)} =N, & \text{(A.1c)}\end{array}\right.\end{align*}

and

(A.2) \begin{align}\breve{J}_n^{(N)} &= J_n, \qquad n \in \mathbb{Z}_+.\end{align}

For later convenience, we refer to the value of $\big\{\breve{X}_n^{(N)}\big\}$ as the level. Furthermore, let $\breve{\tau}_{0}^{(N)} = \inf\big\{n \in \mathbb{N}\,:\, \breve{X}_n^{(N)} = 0\big\}$ ; then the following are true.

  1. (i) $\breve{X}_n^{(N)} \le X_n$ for all $n=0,1,\dots,\breve{\tau}_0^{(N)}$ , and thus $\breve{\tau}_{0}^{(N)} \le \tau_0$ .

  2. (ii) The stopped process $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big);\, 0 \le n \le \breve{\tau}_{0}^{(N)}\Big\}$ is driven by the finite-level M/G/1-type transition probability matrix ${\boldsymbol{P}}^{(N)}$ until it stops at time $\breve{\tau}_{0}^{(N)}$ (i.e., reaches level zero).

  3. (iii) If $\Big(\breve{X}_0^{(N)},\breve{J}_0^{(N)}\Big) = \big(X_0^{(N)},J_0^{(N)}\big)$ , then $\breve{\tau}_{0}^{(N)}=\tau_{0}^{(N)}$ in distribution.

Proof. Since the statement (i) is obvious from (A.1) and (A.2), we prove the statements (ii) and (iii). Equations (A.1a) and (A.1b) (together with (A.2)) ensure that the level increment of $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ is equal to that of $\{(X_n, J_n)\}$ as long as $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ is below level N and has not yet reached level zero. Furthermore, (A.1c) (together with (A.2)) ensures that $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ stays at level N while $\{(X_n, J_n)\}$ evolves with the transition matrices ${\boldsymbol{A}}(0), {\boldsymbol{A}}(1), {\boldsymbol{A}}(2),\dots$ ; and it ensures that $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ moves down to level $N-1$ when $\{(X_n, J_n)\}$ moves down by one level with ${\boldsymbol{A}}({-}1)$ . These facts imply that the behavior of $\Big\{\Big(\breve{X}_n^{(N)},\breve{J}_n^{(N)}\Big)\Big\}$ follows the transition law expressed by (2.9). Therefore, the statement (ii) holds (a similar argument is given in [Reference Latouche and Ramaswami18, Theorem 8.3.1]). In addition, it follows from (A.1), (A.2), and ${\boldsymbol{B}}({-}1){\boldsymbol{e}} = {\boldsymbol{A}}({-}1){\boldsymbol{e}}$ that

\begin{align*}\mathbb{P}\Big(\breve{X}_{n+1}^{(N)} = 0 \mid \breve{X}_n^{(N)}=1,\breve{J}_n^{(N)} =i\Big)&=\mathbb{P}(X_{n+1} = X_n - 1 \mid X_n \ge 1,J_n =i)\nonumber\\&= ({\boldsymbol{A}}({-}1){\boldsymbol{e}})_i = ({\boldsymbol{B}}({-}1){\boldsymbol{e}})_i\nonumber\\&= \mathbb{P}\Big(X_{n+1}^{(N)} = 0 \mid X_n^{(N)}=1,J_n^{(N)} =i\Big).\end{align*}

Combining this result with the statement (ii) implies the statement (iii). The proof is complete.
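
The coupling behind Lemma A.1 can be simulated directly. The sketch below ignores the background phase (i.e., it takes a single phase, so (A.2) is trivial), draws hypothetical level increments bounded below by $-1$, and checks along each sampled path that the truncated level process never exceeds the original one before it first hits level zero.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 5                                              # upper boundary level
incr_vals = np.array([-1, 0, 1, 2, 3])             # hypothetical level increments (>= -1)
incr_prob = np.array([0.55, 0.20, 0.10, 0.10, 0.05])

def one_path(x0, n_steps):
    X, Xb = [x0], [min(x0, N)]                     # rule (A.1a) for the truncated process
    for _ in range(n_steps):
        d = rng.choice(incr_vals, p=incr_prob)     # common increment of the original chain
        X.append(X[-1] + d)                        # original (infinite-level) level process
        if Xb[-1] < N:
            Xb.append(min(Xb[-1] + d, N))          # rule (A.1b)
        else:
            Xb.append(Xb[-1] + min(d, 0))          # rule (A.1c): leave N only when d = -1
        if Xb[-1] == 0:                            # stop at the return time of the truncated chain
            break
    return np.array(X), np.array(Xb)

ok = True
for _ in range(1000):
    X, Xb = one_path(x0=7, n_steps=200)
    ok &= bool(np.all(Xb <= X))                    # Lemma A.1(i) along the sampled path
print(ok)                                          # True
```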

A.2. Body of the proof

We begin with the proof of the statement (i). Since the infinite-level chain $\{(X_n, J_n)\}$ is irreducible, it reaches any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ . Let $N_{i,j}$ denote the maximum level in an arbitrarily chosen path from state (0, i) to state (0, j). Since $\mathbb{L}_0$ is finite, we have $N_0\,:\!=\,\max_{i,j \in \mathbb{M}_0} N_{i,j} < \infty$ . Furthermore, the similarity between the structures of ${\boldsymbol{P}}^{(N)}$ and ${\boldsymbol{P}}$ implies that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ behaves stochastically the same as the infinite-level chain $\{(X_n, J_n)\}$ unless the former reaches its upper boundary level N. Therefore, for any $N > N_0$ , the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ reaches any state $(0,j) \in \mathbb{L}_0$ from any state $(0,i) \in \mathbb{L}_0$ .

Next, we prove the statement (ii). Suppose that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and recurrent. It then follows that the infinite-level chain $\{(X_n, J_n)\}$ reaches level zero with probability one from any state, and so does the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ owing to Lemma A.1(i).

Finally, we prove the statement (iii). Suppose that the infinite-level chain $\{(X_n, J_n)\}$ is irreducible and positive recurrent. There then exists a stochastic matrix $\boldsymbol{\varPi}^{(N)}\,:\!=\,\big(\varPi^{(N)}(k,i;\,\ell,j)\big)_{(k,i;\,\ell,j) \in \mathbb{L}_{\le N}^2}$ (see e.g. [Reference Doob8, Chapter V, Theorem 2.1]) such that

\begin{align*}\boldsymbol{\varPi}^{(N)} {\boldsymbol{P}}^{(N)}= {\boldsymbol{P}}^{(N)} \boldsymbol{\varPi}^{(N)} =\boldsymbol{\varPi}^{(N)}.\end{align*}

Therefore, the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has at least one stationary distribution vector, and each of them is a linear combination of the row vectors of $\boldsymbol{\varPi}^{(N)}$ (see e.g. [Reference Doob8, Chapter V, Theorem 2.4]). Furthermore, the statements (i) and (ii) imply that the finite-level chain $\Big\{\Big(X_n^{(N)}, J_n^{(N)}\Big)\Big\}$ has only one closed communication class and thus its stationary distribution vector is unique.

Appendix B. Proof of Theorem 4.1

First, we prove that ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1) with (4.5). To facilitate our discussion, we define $\widetilde{{\boldsymbol{H}}}\,:\!=\,\big(\widetilde{H}(k,i;\,\ell,j)\big)_{(k,i;\,\ell,j) \in \mathbb{S}^2}$ as a matrix such that

(B.1) \begin{align}\widetilde{H}(k,i;\,\ell,j)&=\mathbb{E}_{(k,i)}\!\!\left[\sum_{n=0}^{\tau_{\boldsymbol{\alpha}}-1} \overline{1}_{(\ell,j)}(X_n, J_n)\right],\end{align}

which is equal to the first term of $H_{\mathbb{A}}(k,i;\,\ell,j)$ in (4.4). We then have

\begin{align*}H_{\mathbb{A}}(k,i;\,\ell,j)&=\widetilde{H}(k,i;\,\ell,j) - \sum_{(k^{\prime},i^{\prime}) \in \mathbb{A}}\widetilde{\pi}_{\mathbb{A}}(k^{\prime},i^{\prime})\widetilde{H}(k^{\prime},i^{\prime};\,\ell,j),\end{align*}

and thus

(B.2) \begin{align}{\boldsymbol{H}}_{\mathbb{A}}&= \widetilde{{\boldsymbol{H}}}- {\boldsymbol{e}} \cdot (\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}, {\textbf{0}}) \widetilde{{\boldsymbol{H}}}= \widetilde{{\boldsymbol{H}}}- {\boldsymbol{e}} \cdot \widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A}),\end{align}

where $\widetilde{{\boldsymbol{H}}}(\mathbb{A})$ is the submatrix of $\widetilde{{\boldsymbol{H}}}$ consisting of its rows indexed by $\mathbb{A}$ , according to the row partition

(B.3) \begin{align*}\widetilde{{\boldsymbol{H}}}=\left(\begin{array}{c}\widetilde{{\boldsymbol{H}}}(\mathbb{A})\\[4pt]\widetilde{{\boldsymbol{H}}}(\mathbb{B})\end{array}\right).\end{align*}

Solving the matrix equation (B.2) with respect to $\widetilde{{\boldsymbol{H}}}$ and decomposing its rows with the sets $\mathbb{A}$ and $\mathbb{B}$ , we have

(B.4a) \begin{align}\widetilde{{\boldsymbol{H}}}(\mathbb{A})&= {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A}),\end{align}
(B.4b) \begin{align}\widetilde{{\boldsymbol{H}}}(\mathbb{B})&= {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B}) + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A}),\end{align}

where the sizes of the vectors ${\boldsymbol{e}}$ in (B.4a) and (B.4b) are equal to the cardinalities of $\mathbb{A}$ and $\mathbb{B}$ , respectively. Note (see [Reference Dendievel, Latouche and Liu7, Lemma 2.1]) that $\widetilde{{\boldsymbol{H}}}$ is a solution to the Poisson equation (4.1) (that is, $({\boldsymbol{I}} - {\boldsymbol{P}})\widetilde{{\boldsymbol{H}}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$ ) such that $\widetilde{H}(\boldsymbol{\alpha};\,\ell,j)=0$ for all $(\ell,j) \in \mathbb{S}$ . Using this fact and (B.2), we have

\begin{align*}({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}}_{\mathbb{A}}&= ({\boldsymbol{I}} - {\boldsymbol{P}}) \widetilde{{\boldsymbol{H}}}- ({\boldsymbol{I}} - {\boldsymbol{P}}) {\boldsymbol{e}} \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \widetilde{{\boldsymbol{H}}}(\mathbb{A})\nonumber\\&= ({\boldsymbol{I}} - {\boldsymbol{P}}) \widetilde{{\boldsymbol{H}}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi},\end{align*}

which shows that ${\boldsymbol{H}}_{\mathbb{A}}$ is a solution to the Poisson equation (4.1). From (B.2), (B.3), and $\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}{\boldsymbol{e}}=1$ , we also have

(B.5) \begin{align}\big( \widetilde{\boldsymbol{\pi}}_{\mathbb{A}},\,{\textbf{0}} \big){\boldsymbol{H}}_{\mathbb{A}}&= \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \widetilde{{\boldsymbol{H}}}(\mathbb{A})- \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} {\boldsymbol{e}}\cdot \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \widetilde{{\boldsymbol{H}}}(\mathbb{A})\nonumber\\&= \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \widetilde{{\boldsymbol{H}}}(\mathbb{A})- \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \widetilde{{\boldsymbol{H}}}(\mathbb{A})\nonumber\\&= {\textbf{0}},\end{align}

and thus ${\boldsymbol{H}}_{\mathbb{A}}$ satisfies the constraint (4.5).

Next, we prove (4.7), which implies that ${\boldsymbol{H}}_{\mathbb{A}}$ is independent of $\boldsymbol{\alpha}$ . Since the matrix $\widetilde{{\boldsymbol{H}}}$ , defined in (B.1), is a solution to the Poisson equation (4.1), it follows from [Reference Dendievel, Latouche and Liu7, Theorem 2.5] that

(B.6a) \begin{align}\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}} _{\mathbb{A}}\big)\widetilde{{\boldsymbol{H}}}(\mathbb{A}) &=\big({\boldsymbol{I}},\, {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A})\boldsymbol{\pi},\qquad\qquad\qquad\quad\qquad\end{align}
(B.6b) \begin{align}\qquad \widetilde{{\boldsymbol{H}}}(\mathbb{B}) &=\big({\boldsymbol{O}},\, ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{B}}(\mathbb{A})\boldsymbol{\pi}+ ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A}).\end{align}

Substituting (B.4a) into (B.6a) yields

(B.7) \begin{align}\Big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}} _{\mathbb{A}}\Big){\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) &=\big({\boldsymbol{I}},\, {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A})\boldsymbol{\pi}.\end{align}

Furthermore, (B.5) and (4.6) yield

(B.8) \begin{align}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) &= {\textbf{0}}.\end{align}

Note here that ${\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}$ is nonsingular and

(B.9) \begin{align}\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\big)^{-1}\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}\big)= {\boldsymbol{I}} - {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}.\end{align}

Therefore, pre-multiplying both sides of (B.7) by $\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\big)^{-1}$ and using (B.8) and (B.9), we obtain

\begin{align*}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) &=\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\big)^{-1}\left[\big({\boldsymbol{I}},\, {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A})\boldsymbol{\pi}\right],\end{align*}

which shows that (4.7a) holds. In addition, applying (B.4a) and (B.4b) to $\widetilde{{\boldsymbol{H}}}(\mathbb{A})$ and $\widetilde{{\boldsymbol{H}}}(\mathbb{B})$, respectively, in (B.6b) and using $({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{e}}={\boldsymbol{e}}$ (which holds because ${\boldsymbol{P}}$ is stochastic, so that ${\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{e}} = ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}}){\boldsymbol{e}}$), we obtain

\begin{align*}&{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B}) + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A})\nonumber\\&\quad =\big({\boldsymbol{O}},\, ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{B}}(\mathbb{A})\boldsymbol{\pi}+ ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}\left[{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}) + {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A})\right]\nonumber\\&\quad =\big({\boldsymbol{O}},\, ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{B}}(\mathbb{A})\boldsymbol{\pi}+ ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})+ {\boldsymbol{e}}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}\widetilde{{\boldsymbol{H}}}(\mathbb{A}),\end{align*}

and thus

\begin{align*}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})=\big({\boldsymbol{O}},\, ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{B}}(\mathbb{A})\boldsymbol{\pi}+ ({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}{\boldsymbol{P}}_{\mathbb{B},\mathbb{A}}{\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A}),\end{align*}

which shows that (4.7b) holds.

To complete the proof, it suffices to show that ${\boldsymbol{H}}_{\mathbb{A}}$ is the unique solution to the Poisson equation (4.1) subject to the constraint (4.5). By (B.7) and (B.8), ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ is a solution to the system of equations

(B.10a) \begin{align}\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}\big) {\boldsymbol{X}}_{\mathbb{A}}(\mathbb{A})&=\big({\boldsymbol{I}},\, {\boldsymbol{P}}_{\mathbb{A},\mathbb{B}}({\boldsymbol{I}} - {\boldsymbol{P}}_{\mathbb{B}})^{-1}\big)- {\boldsymbol{u}}_{\mathbb{A}}(\mathbb{A})\boldsymbol{\pi},\end{align}
(B.10b) \begin{align}\widetilde{\boldsymbol{\pi}}_{\mathbb{A}}{\boldsymbol{X}}_{\mathbb{A}}(\mathbb{A}) &= {\textbf{0}},\qquad\qquad\qquad\qquad\qquad\end{align}

where ${\boldsymbol{X}}_{\mathbb{A}}(\mathbb{A}) $ is an unknown matrix of the same size as ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ . Let ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ denote an arbitrary solution to (B.10). Since ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ and ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ satisfy (B.10a), we have $\big({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}\big){\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) = $ $({\boldsymbol{I}} - \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}) {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ , which leads to

\begin{align*}{\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) - {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})&= \widetilde{{\boldsymbol{P}}}_{\mathbb{A}} \big[{\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) - {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})\big].\end{align*}

This equation (together with [Reference Doob8, Chapter V, Theorem 2.4]) yields

\begin{align*}{\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) - {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})&= \lim_{n\to \infty} {1 \over n} \sum_{k=1}^n \widetilde{{\boldsymbol{P}}}_{\mathbb{A}}^k\cdot \big[{\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) - {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})\big]\nonumber\\&= {\boldsymbol{e}} \widetilde{\boldsymbol{\pi}}_{\mathbb{A}} \big[{\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A}) - {\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})\big]= {\boldsymbol{O}},\end{align*}

where the last equality holds because ${\boldsymbol{H}}^{\prime}_{\mathbb{A}}(\mathbb{A})$ and ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ satisfy (B.10b). Therefore, ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{A})$ is the unique solution to the system of equations (B.10), and thus (4.7b) uniquely determines ${\boldsymbol{H}}_{\mathbb{A}}(\mathbb{B})$ . Consequently, ${\boldsymbol{H}}_{\mathbb{A}}$ is the unique solution to the Poisson equation (4.1) with (4.5). The proof is complete.
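The above construction can be mimicked numerically on a small chain. The following Python sketch (a hypothetical five-state ergodic chain; neither the matrix nor the centering vector comes from the paper) solves the Poisson equation $({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}} = {\boldsymbol{I}} - {\boldsymbol{e}}\boldsymbol{\pi}$ subject to a centering constraint $\boldsymbol{\mu}{\boldsymbol{H}} = {\textbf{0}}$, with $\boldsymbol{\mu}$ supported on a 'boundary' subset of states in the spirit of (4.5), by exploiting the nonsingularity of ${\boldsymbol{I}} - {\boldsymbol{P}} + {\boldsymbol{e}}\boldsymbol{\mu}$ (the finite-chain analogue of (B.9)); the two checks at the end confirm that the computed matrix satisfies both the Poisson equation and the constraint.

import numpy as np

# Hypothetical 5-state ergodic chain; the matrix is random and illustrative only.
rng = np.random.default_rng(1)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

# Stationary distribution pi: normalized left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

n = P.shape[0]
I, e = np.eye(n), np.ones(n)

# Centering vector mu supported on the "boundary" states {0, 1} (hypothetical),
# playing the role of (pi_tilde_A, 0) in the constraint (4.5).
mu = np.array([0.6, 0.4, 0.0, 0.0, 0.0])

# I - P + e*mu is nonsingular (cf. (B.9)), and
# H = (I - P + e*mu)^{-1} (I - e*pi) is the solution with mu*H = 0.
H = np.linalg.solve(I - P + np.outer(e, mu), I - np.outer(e, pi))

print(np.allclose((I - P) @ H, I - np.outer(e, pi)))  # Poisson equation (4.1)
print(np.allclose(mu @ H, 0.0))                       # centering constraint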

Appendix C. Proof of Lemma 4.4

We prove the statement (i). It follows from (2.1), (2.9), and (4.8) that ${\boldsymbol{P}}^{(N)}{\boldsymbol{v}}^{\prime} \le {\boldsymbol{P}}{\boldsymbol{v}}^{\prime}$ for $N \in \mathbb{N}$ and that, for $k \in \mathbb{Z}_{\ge 2}$ ,

\begin{align*}\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(k;\,\ell){\boldsymbol{v}}^{\prime}(\ell)& = {1 \over -\sigma}\left(\sum_{\ell=-1}^{\infty} (\ell+k) {\boldsymbol{A}}(\ell){\boldsymbol{e}}+ \sum_{\ell=-1}^{\infty} {\boldsymbol{A}}(\ell){\boldsymbol{a}}\right)\nonumber\\&= {1 \over -\sigma}\left[k{\boldsymbol{e}}+ \left({\boldsymbol{A}}{\boldsymbol{a}} + \overline{{\boldsymbol{m}}}_{A}\right)\right]\nonumber\\&= {1 \over -\sigma} (k{\boldsymbol{e}} + {\boldsymbol{a}})+ {1 \over -\sigma}\left({\boldsymbol{A}}{\boldsymbol{a}} + \overline{{\boldsymbol{m}}}_{A} - {\boldsymbol{a}}\right)\nonumber\\&= {1 \over -\sigma} (k{\boldsymbol{e}} + {\boldsymbol{a}}) + {\sigma \over -\sigma}{\boldsymbol{e}}\nonumber\\&= {\boldsymbol{v}}^{\prime}(k) - {\boldsymbol{e}},\end{align*}

where the second-to-last equality is due to (3.4). In a similar way, we can show that $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(0;\,\ell){\boldsymbol{v}}^{\prime}(\ell) < \infty$ and $\sum_{\ell=0}^{\infty}{\boldsymbol{P}}(1;\,\ell){\boldsymbol{v}}^{\prime}(\ell) < \infty$ . Therefore, (4.9) holds for some $b^{\prime} \in (0,\infty)$ and $K^{\prime} \in \mathbb{N}$ .
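For intuition, the drift identity just derived can be checked numerically in the scalar case of a single background state, where ${\boldsymbol{v}}^{\prime}(k)$ of (4.8) reduces to the scalar $(k+a)/({-}\sigma)$ for a constant $a$ (which then cancels from the identity). The Python sketch below uses a hypothetical increment distribution with negative mean drift and verifies $\sum_{\ell} P(k;\,\ell)v^{\prime}(\ell) = v^{\prime}(k) - 1$ for a level $k$ away from the boundary.

import numpy as np

# Scalar sketch (single background state); the increment distribution is hypothetical.
incr = np.array([-1, 0, 1, 2])           # level increments (at least -1)
prob = np.array([0.5, 0.3, 0.15, 0.05])  # their probabilities

sigma = float(incr @ prob)               # mean drift; negative here
a = 1.0                                  # the vector a reduces to a constant
v = lambda k: (k + a) / (-sigma)         # scalar form of v'(k) in (4.8)

k = 7                                    # any level k >= 2, away from the boundary
lhs = sum(p * v(k + m) for m, p in zip(incr, prob))
print(lhs, v(k) - 1.0)                   # the two values coincide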

Next, we prove the statement (ii). Let ${\boldsymbol{x}}_{(\ell,j)}$ , $(\ell,j) \in \mathbb{S}$ , denote the $(\ell,j)$ th column of an arbitrary solution ${\boldsymbol{X}}$ to the Poisson equation (4.1). By definition, ${\boldsymbol{x}}_{(\ell,j)}$ is a solution to a Poisson equation

\begin{align*}({\boldsymbol{I}} - {\boldsymbol{P}}) {\boldsymbol{x}}_{(\ell,j)}= \textbf{1}_{(\ell,j)} - {\boldsymbol{e}}\pi(\ell,j).\end{align*}

Therefore, it follows from the statement (i), which has already been proved, and [Reference Glynn and Meyn10, Theorem 2.3] that there exists some $C_0 > 0$ such that $|{\boldsymbol{x}}_{(\ell,j)}| \le C_0 {\boldsymbol{v}}^{\prime}$ for any $(\ell,j) \in \mathbb{S}$, and thus $|{\boldsymbol{X}}| \le C_0 {\boldsymbol{v}}^{\prime}{\boldsymbol{e}}^{\top}$. It also follows from (4.8) and Theorem 3.4 that if Assumption 3.1 holds then $\boldsymbol{\pi}{\boldsymbol{v}}^{\prime} < \infty$, and thus $\boldsymbol{\pi}|{\boldsymbol{X}}| \le C_0 (\boldsymbol{\pi}{\boldsymbol{v}}^{\prime}){\boldsymbol{e}}^{\top}< \infty$. Consequently, the statement (ii) holds.

Finally, we prove the statement (iii). Let ${\boldsymbol{w}}\,:\!=\,(w(k,i))_{(k,i) \in \mathbb{S}}$ denote an arbitrary column vector such that $|{\boldsymbol{w}}| \le {\boldsymbol{e}}$. We then have

(C.1) \begin{align}\sum_{(\ell,j) \in \mathbb{S}}\left| H_{\mathbb{A}}(k,i;\,\ell,j) \right|& \le\sup_{|{\boldsymbol{w}}| \le {\boldsymbol{e}}}\sum_{(\ell,j) \in \mathbb{S}} H_{\mathbb{A}}(k,i;\,\ell,j) w(\ell,j)\quad \text{for each}\ (k,i) \in \mathbb{S}.\end{align}

According to Theorem 4.1, ${\boldsymbol{H}}_{\mathbb{A}}$ satisfies the Poisson equation (4.1) and thus ${\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}$ satisfies

\begin{align*}({\boldsymbol{I}} - {\boldsymbol{P}}){\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}= {\boldsymbol{w}} - (\boldsymbol{\pi}{\boldsymbol{w}}){\boldsymbol{e}}.\end{align*}

Therefore, we obtain an upper bound for $|{\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}|$ by using [Reference Glynn and Meyn10, Theorem 2.1], the statement (i) of Lemma 4.4, and ${\boldsymbol{e}} \le ({-}\sigma){\boldsymbol{v}}^{\prime}$ (by (4.8)):

\begin{align*}|{\boldsymbol{H}}_{\mathbb{A}}{\boldsymbol{w}}| \le C_{\mathbb{A}}{\boldsymbol{v}}^{\prime}\quad \text{for some}\ C_{\mathbb{A}} > 0\ \text{independent of}\ {\boldsymbol{w}}.\end{align*}

Combining this bound and (C.1) results in

\begin{align*}|{\boldsymbol{H}}_{\mathbb{A}}|{\boldsymbol{e}} \le C_{\mathbb{A}} {\boldsymbol{v}}^{\prime},\end{align*}

which shows that the statement (iii) holds. The proof is complete.

Appendix D. Proof of Proposition 4.5

We begin the proof of this proposition with the following lemma (its proof is given at the end of this section).

Lemma D.1. Under Assumption 2.1, Assumption 3.1 holds if and only if

(D.1) \begin{align}\sum_{(k,i)\in\mathbb{S}} \pi(k,i)\mathbb{E}_{(k,i)}[\tau_{(0,j)}] &< \infty\quad {for\ some\ (and\ then\ every)\ j \in \mathbb{M}_0,}\end{align}

where $\tau_{(0,j)} = \inf\{n\in\mathbb{N}\,:\, (X_n,J_n) = (0,j)\}$ for $j \in \mathbb{M}_0$ .

First, we prove the statement (i). Assumption 2.1 and the aperiodicity of ${\boldsymbol{P}}$ ensure that the Markov chain $\{(X_n, J_n)\}$ is ergodic. Thus, the deviation matrix ${\boldsymbol{D}}$ exists if and only if (D.1) holds (see [Reference Coolen-Schrijner and van Doorn5, Theorem 4.1] and [Reference Dendievel, Latouche and Liu7, Lemma 2.6]). The combination of this fact and Lemma D.1 results in the statement (i).

Next, we prove the statement (ii), assuming that ${\boldsymbol{D}}$ exists, or equivalently that Assumption 3.1 holds (this equivalence is due to the statement (i) of the present proposition). The first half of the statement (ii) follows from Lemma D.1 and [Reference Dendievel, Latouche and Liu7, Lemma 2.6]. It thus remains to prove (4.10). Recall that ${\boldsymbol{D}}$ and ${\boldsymbol{H}}_{\mathbb{A}}$ are solutions to the Poisson equation (4.1). Therefore, the combination of Lemma 4.4(ii) and Assumption 3.1 implies that $\boldsymbol{\pi} |{\boldsymbol{D}}| < \infty$ and $\boldsymbol{\pi} |{\boldsymbol{H}}_{\mathbb{A}}| < \infty$ . It follows from this result and [Reference Glynn and Meyn10, Proposition 1.1] that there exists some row vector $\boldsymbol{\beta}$ such that ${\boldsymbol{D}} = {\boldsymbol{H}}_{\mathbb{A}} + {\boldsymbol{e}}\boldsymbol{\beta}$ . Since $\boldsymbol{\pi}{\boldsymbol{D}}={\textbf{0}}$ , we have $\boldsymbol{\beta} = - \boldsymbol{\pi}{\boldsymbol{H}}_{\mathbb{A}}$ , which leads to (4.10).
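The relation ${\boldsymbol{D}} = {\boldsymbol{H}}_{\mathbb{A}} + {\boldsymbol{e}}\boldsymbol{\beta}$ with $\boldsymbol{\beta} = -\boldsymbol{\pi}{\boldsymbol{H}}_{\mathbb{A}}$ obtained above can be illustrated numerically on a finite chain, where the deviation matrix has the well-known closed form $({\boldsymbol{I}} - {\boldsymbol{P}} + {\boldsymbol{e}}\boldsymbol{\pi})^{-1} - {\boldsymbol{e}}\boldsymbol{\pi}$. The following Python sketch (a hypothetical four-state chain with an arbitrarily chosen centering vector) constructs another solution ${\boldsymbol{H}}$ of the Poisson equation and checks that recentering it by $\boldsymbol{\pi}$ recovers ${\boldsymbol{D}}$.

import numpy as np

# Hypothetical 4-state ergodic chain; illustrative only.
P = np.array([[0.20, 0.50, 0.20, 0.10],
              [0.30, 0.10, 0.40, 0.20],
              [0.25, 0.25, 0.25, 0.25],
              [0.10, 0.20, 0.30, 0.40]])
n = P.shape[0]
I, e = np.eye(n), np.ones(n)

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

# Deviation matrix of a finite ergodic chain: the Poisson-equation solution with pi D = 0.
D = np.linalg.inv(I - P + np.outer(e, pi)) - np.outer(e, pi)

# Another solution H, centered by a different (hypothetical) vector mu, as in Appendix B.
mu = np.array([1.0, 0.0, 0.0, 0.0])
H = np.linalg.solve(I - P + np.outer(e, mu), I - np.outer(e, pi))

print(np.allclose(D, H - np.outer(e, pi @ H)))  # D = H - e (pi H), cf. (4.10)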

We conclude this section by proving Lemma D.1.

Proof of Lemma D.1. Theorem 3.6 shows that Assumption 3.1 is equivalent to (3.29). Therefore, it suffices to prove the equivalence of (3.29) and (D.1). For all $j \in \mathbb{M}_0$ and $(k,i) \in \mathbb{S}$ , we have

(D.2) \begin{align}\mathbb{E}_{(k,i)}[\tau_0]&\le\mathbb{E}_{(k,i)}\big[\tau_{(0,j)}\big]\nonumber\\&= \mathbb{E}_{(k,i)}[\tau_0] + \mathbb{E}_{(k,i)}\!\left[\mathbb{1}(J_{\tau_0} \neq j) \cdot \mathbb{E}\big[ \tau_{(0,j)} - \tau_0 \mid J_{\tau_0} \neq j\big]\right]\nonumber\\&= \mathbb{E}_{(k,i)}[\tau_0]+ \sum_{s \in \mathbb{M}_0 \setminus \{j\}}\mathbb{P}\big(J_{\tau_0} = s \mid (X_0,J_0)=(k,i)\big) \cdot \mathbb{E}_{(0,s)}[ \tau_{(0,j)}],\end{align}

where the last equality is due to the strong Markov property. Since $\{(X_n, J_n)\}$ is ergodic, there exists some constant $C>0$ such that

\[\sup_{s,j \in \mathbb{M}_0}\mathbb{E}_{(0,s)}[ \tau_{(0,j)}] < C.\]

Combining this bound and (D.2) yields

\begin{equation*}\mathbb{E}_{(k,i)}[\tau_0]\le\mathbb{E}_{(k,i)}[\tau_{(0,j)}]\le \mathbb{E}_{(k,i)}[\tau_0] + C\quad \text{for all}\ j\ \in \mathbb{M}_0\ \text{and}\ (k,i) \in \mathbb{S},\end{equation*}

which implies the equivalence of (3.29) and (D.1).
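A small numerical illustration of the quantity appearing in (D.1): the Python sketch below takes a hypothetical four-state ergodic chain (the target state standing in for $(0,j)$), computes the expected hitting times $\mathbb{E}_x[\tau_y]$ by first-step analysis, and evaluates $\sum_x \pi(x)\mathbb{E}_x[\tau_y]$. For a finite chain this sum is of course finite; in the infinite-state setting of the lemma, its finiteness is exactly the content of (D.1).

import numpy as np

# Hypothetical 4-state ergodic chain; the target y plays the role of (0,j).
P = np.array([[0.1, 0.4, 0.3, 0.2],
              [0.3, 0.2, 0.3, 0.2],
              [0.2, 0.3, 0.1, 0.4],
              [0.4, 0.1, 0.3, 0.2]])
n = P.shape[0]

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

y = 0
others = [x for x in range(n) if x != y]

# First-step analysis: h(x) = E_x[tau_y] solves (I - Q) h = 1 on the other states,
# where Q is P restricted to those states.
Q = P[np.ix_(others, others)]
h = np.zeros(n)
h[others] = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))

# From y itself, tau_y is the first *return* time; by Kac's formula it equals 1/pi(y).
h[y] = 1.0 + P[y, others] @ h[others]

print(pi @ h)              # the sum in (D.1), finite for this toy chain
print(h[y], 1.0 / pi[y])   # Kac's formula check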

Appendix E. Proof of Theorem 5.4

To prove this theorem, it suffices to show that

(E.1) \begin{align}\overline{\boldsymbol{\pi}}^{(N)}(k)\le \big(1 + o\big(N^{-1}\big)\big) \overline{\boldsymbol{\pi}}(k),\quad k \in \mathbb{Z}_{[0,N]},\end{align}

where the term $1 + o\big(N^{-1}\big)$ is independent of $k$. Let $\tau_0^{(N)} = \inf\big\{n \in \mathbb{N}\,:\,X_n^{(N)} = 0\big\}$. It then follows from [Reference Meyn and Tweedie34, Theorem 10.0.1] that, for $\ell \in \mathbb{Z}_{[0,N-1]}$ and $i \in \mathbb{M}_1$,

(E.2) \begin{align}\pi(\ell,i) &=\sum_{j \in \mathbb{M}_0}\pi(0,j)\mathbb{E}_{(0,j)}\!\!\left[\sum_{n=1}^{\tau_0}1_{(\ell,i)}(X_n,J_n)\right],\end{align}
(E.3) \begin{align}\pi^{(N)}(\ell,i)&= \sum_{j \in \mathbb{M}_0} \pi^{(N)}(0,j)\mathbb{E}_{(0,j)}^{(N)}\!\!\left[\sum_{n=1}^{\tau_0^{(N)}}1_{(\ell,i)}\big(X_n^{(N)},J_n^{(N)}\big)\right],\end{align}

where we use the simplified notation $\mathbb{E}_{(k,i)}^{(N)}[{\cdot}] \,:\!=\, \mathbb{E}\big[{\cdot}\mid \big(X_0^{(N)}, J_0^{(N)}\big)=(k,i)\big]$ , in addition to the notation $\mathbb{E}_{(k,i)}[{\cdot}]= \mathbb{E}[{\cdot}\mid (X_0, J_0)=(k,i)]$ introduced in Section 3.2. Note here that, for $k \in \mathbb{Z}_{[0,N-1]}$ and $j\in\mathbb{M}_0$ ,

(E.4) \begin{align}&\sum_{\ell=k+1}^N\mathbb{E}_{(0,j)}^{(N)}\!\!\left[\sum_{n=1}^{\tau_0^{(N)}}1_{(\ell,i)}\big(X_n^{(N)},J_n^{(N)}\big)\right]\le\sum_{\ell=k+1}^{\infty}\mathbb{E}_{(0,j)}\!\!\left[\sum_{n=1}^{\tau_0}1_{(\ell,i)}(X_n,J_n)\right],\end{align}

which is an immediate consequence of Lemma A.1. Equations (E.4) and (E.3) yield, for $k \in \mathbb{Z}_{[0,N]}$ and $i \in \mathbb{M}_1$ ,

(E.5) \begin{align}\overline{\pi}^{(N)}(k,i)&=\sum_{j \in \mathbb{M}_0}\pi^{(N)}(0,j)\sum_{\ell=k+1}^N\mathbb{E}_{(0,j)}^{(N)}\!\!\left[\sum_{n=1}^{\tau_0^{(N)}}1_{(\ell,i)}\big(X_n^{(N)},J_n^{(N)}\big)\right]\nonumber\\&\le\sum_{j \in \mathbb{M}_0}\pi^{(N)}(0,j)\sum_{\ell=k+1}^{\infty}\mathbb{E}_{(0,j)}\!\!\left[\sum_{n=1}^{\tau_0}1_{(\ell,i)}(X_n,J_n)\right].\end{align}

Theorem 5.3 also implies that $\boldsymbol{\pi}^{(N)}(0) = \big(1 + o\big(N^{-1}\big)\big)\boldsymbol{\pi}(0)$ . Applying this to (E.5) and using (E.2), we obtain

\begin{align*}\overline{\pi}^{(N)}(k,i)&\le \big(1 + o\big(N^{-1}\big)\big)\sum_{j \in \mathbb{M}_0}\pi(0,j) \sum_{\ell=k+1}^{\infty}\mathbb{E}_{(0,j)}\!\!\left[\sum_{n=1}^{\tau_0}1_{(\ell,i)}(X_n,J_n)\right]\nonumber\\&= \big(1 + o\big(N^{-1}\big)\big) \sum_{\ell=k+1}^{\infty}\sum_{j \in \mathbb{M}_0}\pi(0,j)\mathbb{E}_{(0,j)}\!\!\left[\sum_{n=1}^{\tau_0}1_{(\ell,i)}(X_n,J_n)\right]\nonumber\\&= \big(1 + o\big(N^{-1}\big)\big) \sum_{\ell=k+1}^{\infty} \pi(\ell,i)\nonumber\\&= \big(1 + o\big(N^{-1}\big)\big) \overline{\pi}(k,i),\quad k \in \mathbb{Z}_{[0,N]},\,i\in\mathbb{M}_1,\end{align*}

where the term $\big(1 + o\big(N^{-1}\big)\big)$ is independent of $(k, i)$. Consequently, (E.1) holds.
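The representations (E.2) and (E.3) are instances of the regenerative (cycle) formula for stationary distributions, which is easy to verify numerically. The Python sketch below uses a hypothetical five-state ergodic chain with the states $\{0,1\}$ standing in for level 0: it computes the expected number of visits to each remaining state during a cycle that starts in the regeneration set, weights these by the stationary probabilities of the set, and recovers the stationary probabilities of the remaining states.

import numpy as np

# Hypothetical 5-state ergodic chain; states {0, 1} play the role of level 0.
rng = np.random.default_rng(0)
P = rng.random((5, 5))
P /= P.sum(axis=1, keepdims=True)

evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
pi /= pi.sum()

level0 = [0, 1]
rest = [2, 3, 4]

# Expected visits to x (outside level 0) during steps 1,...,tau_0, starting from j in
# level 0: row j of P_{0,rest} (I - P_{rest,rest})^{-1}  (taboo probabilities).
V = P[np.ix_(level0, rest)] @ np.linalg.inv(np.eye(len(rest)) - P[np.ix_(rest, rest)])

print(pi[level0] @ V)   # weighted visit counts, as on the right of (E.2)
print(pi[rest])         # matches the stationary probabilities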

Appendix F. Proof of Theorem 5.12

Theorem 5.12 is an immediate consequence of applying Lemmas F.1 and F.2 to (5.3). The two lemmas are proved in Sections F.1 and F.2, respectively.

Lemma F.1. If Assumptions 2.1, 3.1, and 5.10 hold, then

(F.1a) \begin{align}\lim_{N\to\infty}{1 \over \overline{F}(N)}\left[\boldsymbol{\pi}^{(N)}(0)\sum_{n=N+1}^{\infty}{\boldsymbol{B}}(n){\boldsymbol{S}}^{(N)}(n;\,k)\right]&={\textbf{0}}, \quad k \in \mathbb{Z}_+,\end{align}
(F.1b) \begin{align}\lim_{N\to\infty}{1\over\overline{F}(N)}\left[\sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell)\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell) {\boldsymbol{S}}^{(N)}(n;\,k)\right]&={\textbf{0}}, \quad k \in \mathbb{Z}_+,\end{align}

where ${\boldsymbol{S}}^{(N)}(n;\,k)$ is given in (5.4).

Lemma F.2. If Assumptions 2.1, 3.1, and 5.10 hold, then

(F.2a) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\overline{\overline{{{\boldsymbol{B}}}}}(N-1){\boldsymbol{e}}\over\overline{F}(N)}&= \boldsymbol{\pi}(0){\boldsymbol{c}}_{B},\end{align}
(F.2b) \begin{align}\lim_{N\to\infty}\sum_{\ell=1}^N{\boldsymbol{\pi}^{(N)}(\ell)\overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}\over\overline{F}(N)}& = \overline{\boldsymbol{\pi}}(0){\boldsymbol{c}}_{A}.\end{align}

F.1. Proof of Lemma F.1

We confirm that (F.1a) and (F.1b) hold if

(F.3a) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\overline{{\boldsymbol{B}}}(N){\boldsymbol{e}}\over\overline{F}(N)} =0,\end{align}
(F.3b) \begin{align}\lim_{N\to\infty}\sum_{\ell=1}^N{ \boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(N-\ell){\boldsymbol{e}}\over\overline{F}(N)} =0.\end{align}

Since ${\boldsymbol{G}}$ is stochastic, ${\boldsymbol{G}}^N = \overline{\boldsymbol{\mathcal{O}}}(1)$ . It thus follows from (5.4) that, for each $k \in \mathbb{Z}_+$ , there exists some finite $C_k > 0$ such that $|{\boldsymbol{S}}^{(N)}(n;\,k) | \le C_k {\boldsymbol{e}}{\boldsymbol{e}}^{\top}$ for all $n \ge N+1$ and $N \ge \max(k,1)$ . The bound for $|{\boldsymbol{S}}^{(N)}(n;\,k)|$ yields

(F.4a) \begin{align}\sum_{n=N+1}^{\infty}{\boldsymbol{B}}(n) \big|{\boldsymbol{S}}^{(N)}(n;\,k)\big|&\le C_k \overline{{\boldsymbol{B}}}(N) {\boldsymbol{e}}{\boldsymbol{e}}^{\top},&\quad k & \in \mathbb{Z}_+,\,N \ge \max(k,1),\end{align}
(F.4b) \begin{align}\sum_{n=N+1}^{\infty} {\boldsymbol{A}}(n-\ell) \big|{\boldsymbol{S}}^{(N)}(n;\,k)\big|&\le C_k \overline{{\boldsymbol{A}}}(N-\ell) {\boldsymbol{e}}{\boldsymbol{e}}^{\top},&\quad\ k & \in \mathbb{Z}_+,\,N \ge \max(k,1), \ell \in \mathbb{Z}_{[1,N]} .\end{align}

Combining (F.3a) and (F.4a) results in (F.1a), and combining (F.3b) and (F.4b) results in (F.1b).

First, we prove (F.3a). Using (5.1), (5.18b), $F \in \mathcal{S} \subset \mathcal{L}$ , and Theorem 5.3, we obtain

(F.5) \begin{align}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\overline{{\boldsymbol{B}}}(N){\boldsymbol{e}}\over\overline{F}(N)}&=\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\left[ \overline{\overline{{{\boldsymbol{B}}}}}(N-1){\boldsymbol{e}} - \overline{\overline{{{\boldsymbol{B}}}}}(N){\boldsymbol{e}} \right]\over\overline{F}(N)}\nonumber\\&=\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\overline{\overline{{{\boldsymbol{B}}}}}(N-1){\boldsymbol{e}}\over\overline{F}(N-1)}{\overline{F}(N-1) \over \overline{F}(N)}-\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(0)\overline{\overline{{{\boldsymbol{B}}}}}(N){\boldsymbol{e}}\over\overline{F}(N)}\nonumber\\&=\boldsymbol{\pi}(0){\boldsymbol{c}}_{B} -\boldsymbol{\pi}(0){\boldsymbol{c}}_{B}= 0,\end{align}

which shows that (F.3a) holds.
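The step $\overline{F}(N-1)/\overline{F}(N) \to 1$ used in (F.5) relies only on the long-tailedness of $F$ (recall $F \in \mathcal{S} \subset \mathcal{L}$). The short Python sketch below, with purely illustrative tails, contrasts a Pareto-type tail, for which this ratio tends to 1, with a geometric tail, for which it stays bounded away from 1.

import numpy as np

# Long-tailed versus light-tailed: behaviour of F_bar(N-1) / F_bar(N).
N = np.array([10.0, 100.0, 1000.0, 10000.0])

pareto_tail = lambda n: (1.0 + n) ** (-2.5)   # regularly varying, hence subexponential
geom_tail = lambda n: 0.8 ** n                # light-tailed (geometric)

print(pareto_tail(N - 1) / pareto_tail(N))    # tends to 1 as N grows
print(geom_tail(N - 1) / geom_tail(N))        # constant 1.25, bounded away from 1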

Next, we prove (F.3b). Let $\Pi$ and $\overline{A}$ denote independent integer-valued random variables, where $\Pi$ takes values in $\mathbb{Z}_{[1,N]}$. We then have

(F.6) \begin{align}\mathbb{P}(\Pi + \overline{A} \ge N) = \mathbb{P}(\Pi > 0)\mathbb{P}(\overline{A} > N-1)+ \sum_{\ell=0}^{N-1} \mathbb{P}(\Pi > N-\ell-1)\mathbb{P}(\overline{A} = \ell),\end{align}

for $N \in \mathbb{N}$ . By analogy with this equation, the following hold:

(F.7) \begin{align}\sum_{k=N}^{\infty} \sum_{\ell=1}^k\boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}&=\overline{\boldsymbol{\pi}}^{(N)}(0) \overline{\overline{{{\boldsymbol{A}}}}}(N-1){\boldsymbol{e}}+ \sum_{\ell=0}^{N-1} \overline{\boldsymbol{\pi}}^{(N)}(N - \ell - 1) \overline{{\boldsymbol{A}}}(\ell){\boldsymbol{e}},\end{align}
(F.8) \begin{align}\sum_{k=N}^{\infty} \sum_{\ell=1}^k\boldsymbol{\pi}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}&=\overline{\boldsymbol{\pi}}(0) \overline{\overline{{{\boldsymbol{A}}}}}(N-1){\boldsymbol{e}}+ \sum_{\ell=0}^{N-1} \overline{\boldsymbol{\pi}}(N - \ell - 1) \overline{{\boldsymbol{A}}}(\ell){\boldsymbol{e}}.\end{align}

Applying Theorem 5.4 to (F.7) and using (F.8), we obtain

(F.9) \begin{align}&\sum_{k=N}^{\infty}\sum_{\ell=1}^k \boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}\nonumber\\&\le \big(1 + o\big(N^{-1}\big)\big)\left[\overline{\boldsymbol{\pi}}(0) \overline{\overline{{{\boldsymbol{A}}}}}(N-1){\boldsymbol{e}}+ \sum_{\ell=0}^{N-1} \overline{\boldsymbol{\pi}}(N - \ell - 1) \overline{{\boldsymbol{A}}}(\ell){\boldsymbol{e}}\right]\nonumber\\&=\big(1 + o\big(N^{-1}\big)\big)\sum_{k=N}^{\infty}\sum_{\ell=1}^k \boldsymbol{\pi}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}.\end{align}

Furthermore, it follows from (5.18a), Proposition 5.11, and [Reference Masuyama26, Proposition A.3] that

(F.10) \begin{align}\lim_{N\to\infty}\sum_{k=N}^{\infty}\sum_{\ell=1}^k{\boldsymbol{\pi}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}\over\overline{F}(N)}= {\boldsymbol{\pi}(0) {\boldsymbol{c}}_{B} + \overline{\boldsymbol{\pi}}(0) {\boldsymbol{c}}_{A}\over-\sigma} \boldsymbol{\varpi}\overline{\overline{{{\boldsymbol{A}}}}}({-}1){\boldsymbol{e}}+ \overline{\boldsymbol{\pi}}(0){\boldsymbol{c}}_A.\end{align}

Combining (F.10) and (F.9) yields

(F.11) \begin{align}\limsup_{N\to\infty}\sum_{k=N}^{\infty}\sum_{\ell=1}^k{\boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}\over\overline{F}(N)}&< \infty,\end{align}

and thus

\begin{align*}\lim_{N\to\infty}\sum_{\ell=1}^N{\boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(N-\ell){\boldsymbol{e}}\over\overline{F}(N)} = 0,\end{align*}

which shows that (F.3b) holds. The proof is complete.

F.2. Proof of Lemma F.2

We prove only the second limit, (F.2b), because the first one, (F.2a), has already been established in the derivation of (F.5) in the proof of (F.3a). The following equation holds for the independent integer-valued random variables $\Pi$ and $\overline{A}$ of (F.6):

\begin{align*}\mathbb{P}(\Pi + \overline{A} \ge N)= \sum_{\ell=1}^N \mathbb{P}(\Pi = \ell)\mathbb{P}(\overline{A} > N - \ell - 1),\quad N \in \mathbb{N}.\end{align*}

Thus, we have

(F.12) \begin{align}\sum_{k=N}^{\infty} \sum_{\ell=1}^k\boldsymbol{\pi}^{(N)}(\ell) \overline{{\boldsymbol{A}}}(k-\ell){\boldsymbol{e}}&= \sum_{\ell=1}^N \boldsymbol{\pi}^{(N)}(\ell) \overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}.\end{align}

Combining (F.12) and (F.11) yields

\begin{align*}\limsup_{N\to\infty}\sum_{\ell=1}^N{\boldsymbol{\pi}^{(N)}(\ell)\overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}\over\overline{F}(N)}< \infty.\end{align*}

Therefore, using (5.18a), $F \in \mathcal{S} \subset \mathcal{L}$ , Theorem 5.3, and the dominated convergence theorem, we obtain

\begin{align*}&\lim_{N\to\infty}\sum_{\ell=1}^N{\boldsymbol{\pi}^{(N)}(\ell)\overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}\over\overline{F}(N)}\nonumber\\&\quad =\sum_{\ell=1}^{\infty}\lim_{N\to\infty}{\boldsymbol{\pi}^{(N)}(\ell)\overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}\over\overline{F}(N)}\nonumber\\&\quad =\sum_{\ell=1}^{\infty}\lim_{N\to\infty}\boldsymbol{\pi}^{(N)}(\ell){\overline{\overline{{{\boldsymbol{A}}}}}(N-\ell-1){\boldsymbol{e}}\over\overline{F}(N -\ell-1)}{\overline{F}(N -\ell-1)\over\overline{F}(N)}\nonumber\\&\quad =\sum_{\ell=1}^{\infty}\boldsymbol{\pi}(\ell){\boldsymbol{c}}_{A}= \overline{\boldsymbol{\pi}}(0) {\boldsymbol{c}}_{A},\end{align*}

which shows that (F.2b) holds. The proof is complete.
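Both elementary decompositions used in this appendix, (F.6) and the identity displayed before (F.12), can be checked by brute force. The Python sketch below does so for hypothetical distributions of $\Pi$ (supported on $\{1,\dots,N\}$) and $\overline{A}$ (supported on a finite range, for simplicity).

import numpy as np

# Hypothetical distributions: Pi on {1,...,N}, A_bar on {0,...,M}.
rng = np.random.default_rng(2)
N, M = 6, 9
p_Pi = rng.random(N)
p_Pi /= p_Pi.sum()                            # P(Pi = l), l = 1,...,N
p_A = rng.random(M + 1)
p_A /= p_A.sum()                              # P(A_bar = m), m = 0,...,M

tail_Pi = lambda x: p_Pi[max(x, 0):].sum()    # P(Pi > x)
tail_A = lambda x: p_A[max(x + 1, 0):].sum()  # P(A_bar > x)

# Left-hand side P(Pi + A_bar >= N), by exhaustive summation.
lhs = sum(p_Pi[l - 1] * p_A[m]
          for l in range(1, N + 1) for m in range(M + 1) if l + m >= N)

# Right-hand side of (F.6).
rhs_F6 = tail_Pi(0) * tail_A(N - 1) + sum(tail_Pi(N - l - 1) * p_A[l] for l in range(N))

# Right-hand side of the identity displayed before (F.12).
rhs_F12 = sum(p_Pi[l - 1] * tail_A(N - l - 1) for l in range(1, N + 1))

print(lhs, rhs_F6, rhs_F12)                   # all three values coincide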

Acknowledgements

The research of H. Masuyama was supported in part by JSPS KAKENHI Grant No. JP21K11770.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

[1] Akar, N., Oğuz, N. C. and Sohraby, K. (1998). Matrix-geometric solutions of M/G/1-type Markov chains: a unifying generalized state-space approach. IEEE J. Sel. Areas Commun. 16, 626–639.
[2] Asmussen, S. (2003). Applied Probability and Queues, 2nd edn. Springer, New York.
[3] Baiocchi, A. (1994). Analysis of the loss probability of the MAP/G/1/K queue. Part I: asymptotic theory. Stoch. Models 10, 867–893.
[4] Baiocchi, A. and Bléfari-Melazzi, N. (1993). Steady-state analysis of the MMPP/G/1/K queue. IEEE Trans. Commun. 41, 531–534.
[5] Coolen-Schrijner, P. and van Doorn, E. A. (2002). The deviation matrix of a continuous-time Markov chain. Prob. Eng. Inf. Sci. 16, 351–366.
[6] Cuyt, A. et al. (2003). Computing packet loss probabilities in multiplexer models using rational approximation. IEEE Trans. Comput. 52, 633–644.
[7] Dendievel, S., Latouche, G. and Liu, Y. (2013). Poisson’s equation for discrete-time quasi-birth-and-death processes. Performance Evaluation 70, 564–577.
[8] Doob, J. L. (1953). Stochastic Processes. John Wiley, New York.
[9] Foss, S., Korshunov, D. and Zachary, S. (2013). An Introduction to Heavy-Tailed and Subexponential Distributions, 2nd edn. Springer, New York.
[10] Glynn, P. W. and Meyn, S. P. (1996). A Liapounov bound for solutions of the Poisson equation. Ann. Prob. 24, 916–931.
[11] Heidergott, B. (2008). Perturbation analysis of Markov chains. In Proc. 9th International Workshop on Discrete Event Systems, Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 99–104.
[12] Herrmann, C. (2001). The complete analysis of the discrete time finite DBMAP/G/1/N queue. Performance Evaluation 43, 95–121.
[13] Ishizaki, F. and Takine, T. (1999). Loss probability in a finite discrete-time queue in terms of the steady state distribution of an infinite queue. Queueing Systems 31, 317–326.
[14] Jelenković, P. R., Momčilović, P. and Zwart, B. (2004). Reduced load equivalence under subexponentiality. Queueing Systems 46, 97–112.
[15] Kim, B. and Kim, J. (2016). Explicit solution for the stationary distribution of a discrete-time finite buffer queue. J. Indust. Manag. Optimization 12, 1121–1133.
[16] Kim, J. and Kim, B. (2007). Asymptotic analysis for loss probability of queues with finite GI/M/1 type structure. Queueing Systems 57, 47–55.
[17] Kimura, T., Daikoku, K., Masuyama, H. and Takahashi, Y. (2010). Light-tailed asymptotics of stationary tail probability vectors of Markov chains of M/G/1 type. Stoch. Models 26, 505–548.
[18] Latouche, G. and Ramaswami, V. (1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. Society for Industrial and Applied Mathematics, Philadelphia.
[19] Liu, J., Liu, Y. and Zhao, Y. Q. (2023). Matrix-analytic methods for solving Poisson’s equation with applications to Markov chains of GI/G/1-type. SIAM J. Matrix Anal. Appl. 44, 1122–1145.
[20] Liu, Y., Wang, P. and Xie, Y. (2014). Deviation matrix and asymptotic variance for GI/M/1-type Markov chains. Frontiers Math. China 9, 863–880.
[21] Liu, Y., Wang, P. and Zhao, Y. Q. (2018). The variance constant for continuous-time level dependent quasi-birth-and-death processes. Stoch. Models 34, 25–44.
[22] Liu, Y. and Zhao, Y. Q. (2013). Asymptotic behavior of the loss probability for an M/G/1/N queue with vacations. Appl. Math. Modelling 37, 1768–1780.
[23] Lucantoni, D. M. (1991). New results on the single server queue with a batch Markovian arrival process. Stoch. Models 7, 1–46.
[24] Lucantoni, D. M., Meier-Hellstern, K. S. and Neuts, M. F. (1990). A single-server queue with server vacations and a class of non-renewal arrival processes. Adv. Appl. Prob. 22, 676–705.
[25] Makowski, A. M. and Shwartz, A. (1992). Stochastic approximations and adaptive control of a discrete-time single-server network with random routing. SIAM J. Control Optimization 30, 1476–1506.
[26] Masuyama, H. (2011). Subexponential asymptotics of the stationary distributions of M/G/1-type Markov chains. Europ. J. Operat. Res. 213, 509–516.
[27] Masuyama, H. (2013). Tail asymptotics for cumulative processes sampled at heavy-tailed random times with applications to queueing models in Markovian environments. J. Operat. Res. Soc. Japan 56, 257–308.
[28] Masuyama, H. (2015). Error bounds for augmented truncations of discrete-time block-monotone Markov chains under geometric drift conditions. Adv. Appl. Prob. 47, 83–105.
[29] Masuyama, H. (2016). Error bounds for augmented truncations of discrete-time block-monotone Markov chains under subgeometric drift conditions. SIAM J. Matrix Anal. Appl. 37, 877–910.
[30] Masuyama, H. (2016). A sufficient condition for the subexponential asymptotics of GI/G/1-type Markov chains with queueing applications. Ann. Operat. Res. 247, 65–95.
[31] Masuyama, H. (2017). Continuous-time block-monotone Markov chains and their block-augmented truncations. Linear Algebra Appl. 514, 105–150.
[32] Masuyama, H. (2017). Error bounds for last-column-block-augmented truncations of block-structured Markov chains. J. Operat. Res. Soc. Japan 60, 271–320. Latest revision available at https://arxiv.org/abs/1601.03489.
[33] Masuyama, H., Liu, B. and Takine, T. (2009). Subexponential asymptotics of the BMAP/GI/1 queue. J. Operat. Res. Soc. Japan 52, 377–401.
[34] Meyn, S. P. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press.
[35] Miyazawa, M., Sakuma, Y. and Yamaguchi, S. (2007). Asymptotic behaviors of the loss probability for a finite buffer queue with QBD structure. Stoch. Models 23, 79–95.
[36] Neuts, M. F. (1989). Structured Stochastic Matrices of M/G/1 Type and Their Applications. Marcel Dekker, New York.
[37] Nummelin, E. and Tuominen, P. (1983). The rate of convergence in Orey’s theorem for Harris recurrent Markov chains with applications to renewal theory. Stoch. Process. Appl. 15, 295–311.
[38] Ramaswami, V. (1988). A stable recursion for the steady state vector in Markov chains of M/G/1 type. Stoch. Models 4, 183–188.
[39] Zhao, Y. Q. (2000). Censoring technique in studying block-structured Markov chains. In Advances in Algorithmic Methods for Stochastic Models, eds G. Latouche and P. Taylor, Notable Publications, Neshanic Station, NJ, pp. 417–433.
[40] Zhao, Y. Q., Li, W. and Braun, W. J. (2003). Censoring, factorizations, and spectral analysis for transition matrices with block-repeating entries. Methodology Comput. Appl. Prob. 5, 35–58.