
Heavy-traffic queue length behavior in a switch under Markovian arrivals

Published online by Cambridge University Press:  01 March 2024

Shancong Mou*
Affiliation:
Georgia Institute of Technology
Siva Theja Maguluri*
Affiliation:
Georgia Institute of Technology
*Postal address: 755 Ferst Dr NW, Atlanta, GA 30318.

Abstract

This paper studies the input-queued switch operating under the MaxWeight algorithm when the arrivals are according to a Markovian process. We exactly characterize the heavy-traffic scaled mean sum queue length in the heavy-traffic limit, and show that it is within a factor of less than 2 from a universal lower bound. Moreover, we obtain lower and upper bounds that are applicable in all traffic regimes and become tight in the heavy-traffic regime.

We obtain these results by generalizing the drift method recently developed for the case of independent and identically distributed arrivals to the case of Markovian arrivals. We illustrate this generalization by first obtaining the heavy-traffic mean queue length and its distribution in a single-server queue under Markovian arrivals and then applying it to the case of an input-queued switch.

The key idea is to exploit the geometric mixing of finite-state Markov chains, and to work with a time horizon that is chosen so that the error due to mixing depends on the heavy-traffic parameter.

Type: Original Article

Copyright: © The Author(s), 2024. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The big data and machine learning revolution has been powered by large-scale data centers. With the growing size of data centers, it has become important to study the design and operation of efficient networks that facilitate the exchange of data [29]. One goal in the design of a data center network is to create a network that has full bisection bandwidth [Reference Alizadeh1, Reference Perry25] or a network that is logically equivalent to an input-queued switch. An operational challenge in such a data center is to schedule packets in order to maximize throughput and minimize delay.

In this paper, we study the problem of scheduling packets in an input-queued switch. In addition to serving as a model for data center networks, input-queued switches are important because they form the building blocks of any data network. An input-queued switch can be modeled as a matrix of queues operating in discrete time. Packets arrive into each of the queues according to a stochastic process. Each packet needs exactly one time slot of service. The key constraint is that at each time, exactly one packet can be served from each row or column. Thus the set of allowed schedules at each time forms a permutation matrix.

A popular algorithm for scheduling in an input-queued switch is the MaxWeight algorithm [Reference Tassiulas and Ephremides32], which was first proposed in the context of wireless networks. In this algorithm, at each time, the permutation matrix with the maximum weight is chosen, using queue lengths as weights. While MaxWeight is known to maximize throughput in an input-queued switch [Reference McKeown, Mekkittikul, Anantharam and Walrand23], understanding the delay or queue-length performance is much more challenging, and so one uses asymptotic analysis. In this paper, we consider the heavy-traffic regime, where the total arrival rate in each row and column approaches the maximum possible value of one. The mean queue length in heavy traffic under the MaxWeight algorithm was characterized in [Reference Maguluri, Burle and Srikant21, Reference Maguluri and Srikant22], where it was also shown that MaxWeight has the optimal scaling.
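To make the scheduling rule concrete, here is a minimal sketch of MaxWeight for an $N \times N$ switch (our own illustration, not code from the paper): each allowed schedule is a permutation $\sigma$, and the algorithm picks the one maximizing $\sum_i Q_{i\sigma(i)}$. The brute-force search over all $N!$ permutations is only for illustration; a practical implementation would use a maximum-weight bipartite matching algorithm.

```python
from itertools import permutations

def max_weight_schedule(Q):
    """Return a permutation sigma (sigma[i] = output served by input i)
    maximizing sum_i Q[i][sigma[i]], using queue lengths as weights.

    Brute force over all N! permutations -- fine for small N only; a real
    switch would use a maximum-weight bipartite matching algorithm.
    """
    n = len(Q)
    return max(permutations(range(n)),
               key=lambda sigma: sum(Q[i][sigma[i]] for i in range(n)))

# Example: 3x3 switch with the current queue lengths as weights.
Q = [[3, 0, 1],
     [0, 2, 5],
     [4, 1, 0]]
sigma = max_weight_schedule(Q)
weight = sum(Q[i][sigma[i]] for i in range(3))  # maximum achievable weight
```

Note that the chosen schedule may serve an empty queue (here $Q_{2\sigma(2)}$ can be zero); MaxWeight only guarantees that the total weight is maximal.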

The key limitation of the work in [Reference Maguluri, Burle and Srikant21, Reference Maguluri and Srikant22] is that it assumes that the arrivals are independent and identically distributed (i.i.d.). However, it is known that real data centers experience short bursts of high traffic [Reference Benson, Akella and Maltz3]. The focus of this paper, therefore, is to consider arrivals that are modulated by a Markov chain, which can model a rich class of arrival patterns. For instance, in a continuous-time setting, it is known that a Markovian arrival process approximates any marked point process to an arbitrary degree of accuracy [Reference Asmussen and Koole2].

1.1. Main contributions

The main contribution of this paper is the exact characterization of the mean sum of queue lengths in heavy traffic in an input-queued switch operating under the MaxWeight algorithm. By Little’s law, such a result immediately implies a result on the mean delay. The key difference relative to the result in [Reference Maguluri and Srikant22] under i.i.d. traffic is that the mean queue lengths depend not only on the instantaneous variance in steady state, but also on the entire autocovariance function of the arrival process. In addition to the heavy-traffic results, we obtain upper and lower bounds on the mean queue lengths under any traffic, and show that these match in the heavy-traffic regime. We also obtain a universal lower bound that holds under any algorithm, and show that the MaxWeight queue length is within a factor of at most two of this bound in the heavy-traffic limit.

An input-queued switch does not satisfy the so-called complete resource pooling (CRP) condition, and to the best of our knowledge, this is the first result on heavy-traffic characterization of a non-CRP system. Methods based on fluid and diffusion limits [Reference Gamarnik and Zeevi8, Reference Harrison10, Reference Harrison11, Reference Harrison and López12, Reference Stolyar31, Reference Williams35] allow for natural generalization beyond i.i.d. arrivals, but except in special cases, there are no known results on non-CRP systems using these methods. The drift method was introduced in [Reference Maguluri, Burle and Srikant21, Reference Maguluri and Srikant22] to analyze non-CRP systems such as the switch, but was crucially limited to i.i.d. arrivals. A key methodological contribution of this paper is to extend the drift method to Markovian arrivals, and thus obtain the first result on a non-CRP system under a non-i.i.d. arrival process.

The main ingredient in the drift method is to consider the one-step drift, i.e., the expected change in the value, of a test function. Under i.i.d. arrivals, the future arrivals are independent of the current queue length, and this property plays a crucial role in bounding the one-step drift. The key challenge in the study of Markovian arrivals is that such an independence property does not hold, because both the current queue length and future arrivals are correlated to the past arrivals. In order to overcome this challenge, we recursively expand the current queue length to express it in terms of the queue length m slots ago, as well as the arrivals and service over the last m time slots. If m is large enough, the old queue length is approximately independent of the current arrivals, because of the fast (geometric) mixing of the Markov chain that modulates the arrivals. However, this recursive expansion also introduces certain error terms, which are small only if m is small. We carefully choose m as a function of the heavy-traffic parameter $\epsilon$ , to optimally trade off between these two competing phenomena; i.e., our choice of m ensures approximate independence between the old queue length and the current arrivals while also ensuring that the resultant error terms go to zero in the heavy-traffic regime.

In order to illustrate our generalization of the drift method, in Section 3 we present a study of a single-server queue operating in discrete time under Markovian arrivals. We first present the Markovian generalization of Kingman’s bound [Reference Kingman17] on the mean queue length. In the case of a single-server queue, in addition to the mean queue length, it is possible to obtain the complete distribution of the queue length in heavy traffic. We show that, similarly to the i.i.d. case, it is an exponential distribution, albeit with a modified mean that depends on the autocovariance function. We show this result using the transform method [Reference Hurtado-Lange and Maguluri13], a generalization of the drift method based on exponential test functions.

1.2. Related literature

Heavy-traffic analysis of queueing systems has been carried out in the literature using fluid and diffusion limits [Reference Gamarnik and Zeevi8, Reference Harrison10, Reference Harrison11, Reference Harrison and López12, Reference Stolyar31, Reference Williams35]. Systems with Markovian arrivals can naturally be studied using such an approach, and the results on single-server queues that we present in Section 3 are known in that setting. However, most of these results are applicable only when the systems satisfy a condition called complete resource pooling (CRP). Under the CRP condition, the system behaves as a single-server queue. This is usually proved formally via a state-space collapse result, which shows that in the heavy-traffic limit, the multidimensional queue size vector stays close to a one-dimensional subspace. Except in some special cases, there is no literature using the diffusion limit approach to study systems that do not satisfy the CRP condition. The switch system considered in this paper does not satisfy the CRP condition, and it exhibits a multidimensional state-space collapse; that is, the queue size vector stays close to a multidimensional subspace.

An alternate method for heavy-traffic analysis based on drift arguments was introduced in [Reference Eryilmaz and Srikant6] to study CRP systems. This drift method was generalized to study the switch system when the CRP condition is not met, in [Reference Maguluri, Burle and Srikant21, Reference Maguluri and Srikant22]. The literature on the drift method so far is limited to an i.i.d. arrival process.

The switch under non-i.i.d. traffic was studied in [Reference Neely24] and [Reference Sharifnassab, Tsitsiklis and Golestani28]. A loose upper bound on queue lengths was obtained in [Reference Neely24] and a state-space collapse result was established in [Reference Sharifnassab, Tsitsiklis and Golestani28]. To the best of our knowledge, this is the first work that exactly characterizes the heavy-traffic queue lengths for a system without CRP, under a non-i.i.d. arrival process.

An input-queued switch is one of the simplest systems that does not satisfy CRP, and so has served as a guidepost for the study of general queueing systems [Reference Shah, Tsitsiklis and Zhong27]. The drift method that was developed in [Reference Maguluri and Srikant22] was used to study a variety of stochastic processing networks in a flurry of follow-up works, all of them under i.i.d. traffic. An input-queued switch with prioritized customers was studied in [Reference Lu, Maguluri, Squillante and Suk19, Reference Lu, Maguluri, Squillante and Suk20], a switch under novel low-complexity scheduling algorithms was studied in [Reference Jhunjhunwala and Maguluri16], and an optical switch that incurs queueing delay was studied in [Reference Wang, Maguluri and Javidi33]. The so-called generalized switch model was studied in [Reference Hurtado Lange and Maguluri14], and it subsumes a rich class of stochastic networks, including wireless networks, cloud computing, data center networks, production systems, and mobile base stations. These methods have also been used to study load balancing, in [Reference Hurtado-Lange, Varma and Maguluri15, Reference Zhou, Tan and Shroff36], and the bandwidth sharing network, in [Reference Wang, Maguluri, Srikant and Ying34].

1.3. Notation

Throughout the paper, we denote a random variable by an uppercase letter, for example, X; we denote the realization of a random variable X by the corresponding lowercase letter x. If they are vectors, we use the corresponding boldface letter, for example, $\boldsymbol{X}$ and $\boldsymbol{x}$ . We write $X \sim \pi$ if X follows the distribution $\pi$ , and $X \stackrel{d}{=} Y$ if X follows the same distribution as Y. We use $\mathcal{I}({\cdot})$ to denote the indicator function. We denote by $E_{X \sim \pi}[\cdot]$ and $Var_{X \sim \pi}[\cdot]$ the expectation and variance calculated with respect to the distribution $X \sim \pi$ . Finally, we denote by $\mathbb{N}_+$ the set of positive integers, by $\mathbb{N}$ the set of non-negative integers, and by $\mathbb{R}_+$ the set of positive real numbers.

The rest of the paper is organized as follows. In Section 2 we present a few preliminary results that are essential for the proofs. In Section 3 we introduce the single-server queue model, and present heavy-traffic results for it, in order to illustrate the drift method for Markovian arrivals. In Section 4 we introduce the switch model and present the main result of the paper, on the heavy-traffic behavior of the switch under Markovian arrivals. These results are obtained by using the ideas developed in Section 3 in conjunction with the broad outline from [Reference Maguluri, Burle and Srikant21]. In particular, we first present a state-space collapse result, then use it to characterize the heavy-traffic mean sum queue length. Section 5 concludes the paper.

2. Preliminaries

In this section we present a few preliminary results on Markov chains that will be used throughout the paper.

2.1. Moment bounds from Lyapunov-type drift conditions

The results in this paper are based on studying the drift of functions of Markov chains. A lemma from [Reference Hajek9] is usually used to obtain moment bounds based on conditions on the one-step drift. However, in this paper we will instead work with the m-step drift, and so we need the following generalization of the lemma from [Reference Hajek9], which is proved in Appendix B.1.

Lemma 1. Let $\left\{\boldsymbol{Q}^t ,\boldsymbol{X}^t \right\}_{t \geq 0}$ be an irreducible, positive recurrent, and aperiodic Markov chain over a countable state space $ (\mathcal{Q},\mathcal{X} )$ . Suppose $Z\,:\, \mathcal{X}\to \mathbb{R}_+$ is a non-negative Lyapunov function. We define the m-step drift of Z at $ (\boldsymbol{q},\boldsymbol{x} )$ as

(1) \begin{equation} \Delta^m Z (\boldsymbol{q}, \boldsymbol{x} ) \triangleq \left[Z \left(\boldsymbol{Q}^{t+m},\boldsymbol{X}^{t+m} \right)-Z \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) \right]\mathcal{I} \left(\left(\boldsymbol{Q}^{t},\boldsymbol{X}^{t} \right) = (\boldsymbol{q},\boldsymbol{x} ) \right), \end{equation}

where $\mathcal{I} (\cdot )$ is the indicator function. Thus, $\Delta^m Z (\boldsymbol{q},\boldsymbol{x} )$ is a random variable that measures the amount of change in the value of Z in m steps, starting from state $ \left(\boldsymbol{q},\boldsymbol{x}\right)$ . This drift is assumed to satisfy the following conditions, for any m:

  1. There exist an $\eta(m) > 0$ and a $\kappa(m) >0$ such that for any $t = 1, 2, \ldots$ and for all $ (\boldsymbol{q},\boldsymbol{x}) \in (\mathcal{Q},\mathcal{X} )$ with $Z (\boldsymbol{q},\boldsymbol{x} ) \geq \kappa(m)$ ,

    \begin{equation*} E \left[\Delta^m Z (\boldsymbol{q}, \boldsymbol{x} )\mid \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \leq -\eta(m). \end{equation*}
  2. There exists a $D(m) < \infty$ such that for all $ \left(\boldsymbol{q},\boldsymbol{x} \right) \in (\mathcal{Q},\mathcal{X} )$ ,

    \begin{equation*} P \big(\lvert{ \Delta^m Z (\boldsymbol{q}, \boldsymbol{x} )}\rvert \leq D(m) \big) =1. \end{equation*}
  3. For any $t_0 \in [0,m]$ , there exists a $\hat{D}(t_0) < \infty$ such that for all $ \left(\boldsymbol{q},\boldsymbol{x} \right) \in (\mathcal{Q},\mathcal{X} )$ ,

    \begin{equation*} P \big(\lvert{ \Delta^{t_0} Z (\boldsymbol{q}, \boldsymbol{x} )}\rvert \leq \hat{D}(t_0)\big)=1. \end{equation*}

Then there exist a $\theta^*(m)>0$ and a $C^*(m) < \infty$ such that

\begin{equation*} \limsup_{t \to \infty} E \big[e^{\theta^*(m)Z \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right)} \big] \leq C^*(m). \end{equation*}

Since the Markov chain is positive recurrent, $Z \left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)$ converges in distribution to a random variable $\overline{Z}$ for which

\begin{equation*} E \big[e^{\theta^*(m)\overline{Z} } \big] \leq C^*(m), \end{equation*}

which implies that all moments of $\overline{Z}$ exist and are finite.

Remark 1. If the one-step drift satisfies $\lvert{ \Delta^1 Z (\boldsymbol{q}, \boldsymbol{x} )}\rvert \leq D^{\prime}$ for all states, then Conditions 2 and 3 are satisfied with $D(m) = mD^{\prime}$ and $\hat{D}(t_0) = t_0 D^{\prime}$ .

2.2. Geometric mixing of finite-state Markov chains

The following two lemmas on geometric mixing of finite-state Markov chains will be exploited to obtain the results in the paper.

Lemma 2. Let $\{X^t\}_{t\geq 0}$ be an irreducible, positive recurrent, and aperiodic Markov chain on a finite state space $\Omega$ . Let $\pi$ denote its stationary distribution. Let $f({\cdot})$ be a non-negative real-valued function, i.e., $f\,:\,\Omega \to \mathbb{R}_+$ . Let $\lambda$ be the stationary mean of $f(X^t)$ , i.e.,

\begin{equation*} \lambda = E_{X\sim\pi}\left[f(X)\right]. \end{equation*}

Then there exist constants $\alpha \in (0,1 )$ and $C > 0$ such that, for any $m \in \mathbb{N}_+$ , for any initial distribution $X^0 \sim \pi^0$ , we have

\begin{equation*} \big\vert E_{ X^0 \sim \pi^0 } [ (f(X^m)-\lambda ) ] \big\vert \leq 2LC \alpha^m, \end{equation*}

where $ L = \max_{x \in \Omega} f(x) $ .

The lemma is proved in Appendix B.2.
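As an illustration (ours, not from the paper), the geometric decay in Lemma 2 can be checked numerically for a two-state chain, where the mixing rate $\alpha$ is the second eigenvalue of the transition matrix; here $\alpha = 1 - 0.1 - 0.2 = 0.7$, and the bias $\lvert E[f(X^m)] - \lambda\rvert$ shrinks by exactly that factor per step.

```python
def step_distribution(pi0, P, m):
    """Distribution of X^m given X^0 ~ pi0, by repeated vector-matrix multiplication."""
    dist = pi0[:]
    for _ in range(m):
        dist = [sum(dist[i] * P[i][j] for i in range(len(P)))
                for j in range(len(P))]
    return dist

# Two-state chain; second eigenvalue alpha = 1 - 0.1 - 0.2 = 0.7.
P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2/3, 1/3]          # stationary distribution (check: pi P = pi)
f = [0.0, 1.0]           # "arrival" function f(x) = x
lam = sum(pi[i] * f[i] for i in range(2))   # stationary mean = 1/3

# Bias of E[f(X^m)] from the worst-case deterministic start X^0 = 0.
bias = []
for m in range(1, 11):
    dist = step_distribution([1.0, 0.0], P, m)
    bias.append(abs(sum(dist[i] * f[i] for i in range(2)) - lam))
# Successive ratios bias[m+1]/bias[m] equal the second eigenvalue 0.7.
```

For this chain the bound of Lemma 2 holds with $L = 1$ and, in fact, with equality up to a constant: the bias is exactly $\tfrac{1}{3}(0.7)^m$.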

Lemma 3. Let $\{X^t\}_{t\geq 0}$ be an irreducible, positive recurrent, and aperiodic Markov chain with finite state space $\Omega$ and stationary distribution $\pi$ . Let $f({\cdot})$ be a non-negative real-valued function bounded above by $A_{max}$ , i.e., $f\,:\,\Omega \to \mathbb{R}_+$ with $f(x)\leq A_{max}$ for all $x\in \Omega$ . Let $\lambda$ be the stationary mean of $f(X^t)$ , i.e.,

\begin{equation*} \lambda = E_{\pi}\left[f(X)\right]. \end{equation*}

Then

\begin{align*} &\lim_{m \to \infty} \frac{Var_{ X^0 \sim \pi^0 } \left(\sum_{t=1}^m f(X^t) \right)}{m} \\ &= \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\frac{m-t}{m}\gamma(t) = \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma(t),\end{align*}

where

\[ \gamma(t) = E_{X^0 \sim \pi}\big[ \big(f\big(X^t\big)-\lambda \big) \big(f\big(X^0\big)-\lambda \big)\big] \]

is the autocovariance function.

The lemma is proved in Appendix B.3.
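Lemma 3 can be checked numerically (our sketch, not from the paper). For a two-state chain with second eigenvalue $\alpha$, the stationary autocovariance satisfies the standard identity $\gamma(t) = \gamma(0)\alpha^t$, so the asymptotic variance has the closed form $\gamma(0)(1+\alpha)/(1-\alpha)$; the code below computes $\gamma(t)$ exactly from $t$-step transition probabilities and compares the partial sums of the lemma with that closed form.

```python
def autocov(P, pi, f, t):
    """gamma(t) = E_pi[(f(X^t) - lam)(f(X^0) - lam)], computed exactly
    from the t-step transition probabilities of the chain."""
    n = len(P)
    lam = sum(pi[i] * f[i] for i in range(n))
    # t-step transition matrix by repeated multiplication
    Pt = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for _ in range(t):
        Pt = [[sum(Pt[i][k] * P[k][j] for k in range(n)) for j in range(n)]
              for i in range(n)]
    return sum(pi[i] * (f[i] - lam) * Pt[i][j] * (f[j] - lam)
               for i in range(n) for j in range(n))

P = [[0.9, 0.1],
     [0.2, 0.8]]
pi = [2/3, 1/3]
f = [0.0, 1.0]
alpha = 0.7                    # second eigenvalue of P

gamma0 = autocov(P, pi, f, 0)  # = pi_0 * pi_1 = 2/9 for this f
# gamma(t) = gamma(0) * alpha**t here, so gamma(0) + 2*sum_t gamma(t)
# sums to the closed form below; the truncated sum converges to it.
sigma2_closed = gamma0 * (1 + alpha) / (1 - alpha)
sigma2_sum = gamma0 + 2 * sum(autocov(P, pi, f, t) for t in range(1, 200))
```

The truncation at 200 terms is harmless because the geometric tail $\sum_{t>200}\gamma(0)\alpha^t$ is astronomically small.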

Remark 2. There are several known equivalent formulations of the asymptotic variance. For instance, it can be shown (see Theorem 21.2.5 in [Reference Douc, Moulines, Priouret and Soulier5]) that under appropriate assumptions,

(2) \begin{align}\lim_{m \to \infty} \frac{Var_{ X^0 \sim \pi^0 } \left(\sum_{t=1}^m f(X^t) \right)}{m} =2 E_{X \sim \pi}\big[ (f(X)-\lambda )\hat{f}(X)\big]-E_{X \sim \pi}\big[ (f(X)-\lambda )^2\big] ,\end{align}

where $\hat{f}({\cdot})$ is a solution of the Poisson equation of the Markov chain, $\hat{f}(x) = f(x) + E\big[\hat{f}(X^1)|X^0=x\big] - \lambda$ . In this paper, we will state all the results in terms of the asymptotic variance in Lemma 3; our results can easily be reformulated in terms of equivalent expressions such as (2).

Remark 3. The assumption of a finite state space is used primarily for establishing the positive-recurrence result for the single-server queue and the switch system in the heavy-traffic regime. All other results still hold for general state space. Another way of stating the heavy-traffic results in this paper would be to allow for a general state space and assume positive recurrence of the system (and then to prove positive recurrence in the case of a finite state space). However, we feel that such a presentation would be confusing, and so we present only the finite-state-space case in the following sections.

3. Single-server queue

In this section, we consider a single-server queue operating in discrete time. Under i.i.d. arrivals, the queue length in such a system is equivalent to the waiting time in a G/G/1 queue. Drift arguments were used to study the heavy-traffic mean queue length in [Reference Eryilmaz and Srikant6, Reference Kingman17, Reference Srikant and Ying30], and the transform method was used to study the heavy-traffic stationary distribution in [Reference Hurtado-Lange and Maguluri13]. In this section, we consider the discrete-time single-server queue under Markovian arrivals, and extend both the drift method and the transform method. The key ingredients are to consider the dynamic m-step drift and exploit the geometric mixing of irreducible, positive recurrent, aperiodic finite-state-space Markov chains.

3.1. Mathematical model

Consider a single-server queue operating in discrete time. Let $Q^t$ be the number of customers in the system at the beginning of time slot t. Arrivals occur according to an underlying Markov chain $\{X^t\}_{t\geq 0}$ , where the number of arrivals in time slot t is given by $A^t = f (X^t )$ for some non-negative integer-valued function $f({\cdot})$ . The potential service $S^t$ is assumed to be i.i.d. with mean $\mu$ and variance $\sigma_s^2$ .

Assume that $\{X^t\}_{t\geq 0}$ is irreducible, positive recurrent, and aperiodic on a finite state space $\Omega$ . Thus $\{X^t\}_{t\geq 0}$ converges to its stationary distribution $\pi$ at a geometric rate. Further assume that $A^t$ and $S^t$ are bounded above by $A_{max}$ and $S_{max}$ , respectively.

In each time slot, we assume that the service occurs after arrivals, and the system evolves as follows: for each $t = 1,\ 2,\ldots$ ,

(3) \begin{align} Q^{t+1}& = \max\left\{Q^t + A^t - S^t,0\right\}=Q^t + A^t - S^t+ U^t, \end{align}

where $U^t$ denotes the unused service and is defined by $U^t = Q^{t+1} -\left(Q^t + A^t - S^t \right) $ . From the definition of the unused service, we have

(4) \begin{align} Q^{t+1}U^t=0.\end{align}
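The dynamics (3)–(4) can be simulated directly. The following sketch (our illustration; the Bernoulli(0.5) service and the particular two-state modulating chain are assumed parameters, not from the paper) records the unused service $U^t$ at every step and lets one confirm the key identity $Q^{t+1}U^t = 0$: service can go unused only when it empties the queue.

```python
import random

def simulate_queue(T, seed=0):
    """Simulate Q^{t+1} = max(Q^t + A^t - S^t, 0) with Markov-modulated
    Bernoulli arrivals (A^t = X^t) and i.i.d. Bernoulli(0.5) service,
    recording the unused service U^t = Q^{t+1} - (Q^t + A^t - S^t)."""
    rng = random.Random(seed)
    # two-state modulating chain: P(next state = 1 | current state)
    p_next_one = {0: 0.1, 1: 0.8}   # stationary arrival rate = 1/3 < 0.5
    x, q = 0, 0
    history = []
    for _ in range(T):
        a = x                            # A^t = f(X^t) = X^t
        s = 1 if rng.random() < 0.5 else 0
        q_next = max(q + a - s, 0)
        u = q_next - (q + a - s)         # unused service, equation (3)
        history.append((q, a, s, u, q_next))
        q = q_next
        x = 1 if rng.random() < p_next_one[x] else 0
    return history

hist = simulate_queue(10_000)
```

Every recorded step satisfies $Q^{t+1}U^t = 0$: whenever $U^t > 0$ we must have $Q^{t+1} = 0$, exactly as in (4).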

In order to study the heavy-traffic behavior of the single-server queue, we will consider a sequence of arrival processes such that the arrival rate approaches the service rate $\mu$ . To this end, let $\{X^t\}^{ (\epsilon )}_{t\geq 0}$ be a set of irreducible, positive recurrent, and aperiodic underlying Markov chains indexed by the heavy-traffic parameter $\epsilon \in (0,\mu )$ . Assume that the arrival process is such that the two-dimensional Markov chain $ \big\{ \big((Q^t)^{ (\epsilon )},\ (X^t)^{ (\epsilon )} \big)\big\}_{t\geq 0 } $ is irreducible and aperiodic. For any fixed $\epsilon$ , let $\overline{X}^{ (\epsilon )}$ be the steady-state variable to which $(X^t)^{ (\epsilon )}$ converges in distribution with $E\big[\overline{A}^{(\epsilon)} \big] = E\big[f\big(\overline{X}^{ (\epsilon )}\big)\big] = \lambda^{(\epsilon)} = \mu -\epsilon$ .

Let $\gamma^{(\epsilon)}(t)$ denote the autocovariance function of the arrival process $(A^t)^{(\epsilon)} = f\left((X^t)^{(\epsilon)}\right)$ started in steady state, i.e., with $X^0 \stackrel{d}{=}\overline{X}^{ (\epsilon )}$ and $\overline{A}^{(\epsilon)}= f\big(\overline{X}^{ (\epsilon )}\big)$ :

\[ \gamma^{(\epsilon)}(t) = E \big[ \big((A^t)^{(\epsilon)}-\lambda^{(\epsilon)} \big) \big(\big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big) \big]. \]

Let

\[ \big(\sigma_a^{(\epsilon)}\big)^2 = \lim_{m \to \infty} \frac{Var \left(\sum_{t=1}^m (A^t)^{(\epsilon)} \right)}{m}=\gamma^{(\epsilon)}(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma^{(\epsilon)}(t), \]

where the equality follows from Lemma 3. We will call $\big(\sigma_a^{(\epsilon)}\big)^2$ the effective variance. Assume that for every $t \in \mathbb{N}$ ,

(5) \begin{align} \lim_{\epsilon \to 0}\gamma^{(\epsilon)}(t) = \gamma(t).\end{align}

Define

\begin{align*} \sigma_a^2 = \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma(t). \end{align*}

Then we have the following claim, which establishes an interchange of limit, and so we call $\sigma_a^2 $ the limiting effective variance.

Claim 1. $\lim_{\epsilon \to 0} \big(\sigma_a^{(\epsilon)}\big)^2 = \sigma_a^2.$

The claim is proved in Appendix C.
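As a concrete instance of Claim 1 (our worked example, not from the paper): take a family of two-state modulating chains with $A^t = X^t \in \{0,1\}$, stationary arrival rate $\lambda^{(\epsilon)} = \mu - \epsilon$, and second eigenvalue $\alpha$ held fixed in $\epsilon$. Then $\gamma^{(\epsilon)}(t) = \lambda^{(\epsilon)}\big(1-\lambda^{(\epsilon)}\big)\alpha^t$ by the standard two-state identity, the geometric series sums in closed form, and the interchange of limits in Claim 1 is immediate.

```python
def effective_variance(eps, mu=0.5, alpha=0.7):
    """sigma_a^2 for a two-state modulating chain with stationary arrival
    rate lam = mu - eps, second eigenvalue alpha, and A^t = X^t in {0, 1}.
    Here gamma(t) = lam*(1-lam)*alpha**t, so the series sums exactly."""
    lam = mu - eps
    gamma0 = lam * (1 - lam)
    return gamma0 * (1 + alpha) / (1 - alpha)

# sigma_a^{(eps)}^2 converges monotonically to the limiting effective
# variance mu*(1-mu)*(1+alpha)/(1-alpha) as eps -> 0.
vals = [effective_variance(e) for e in (0.2, 0.1, 0.05, 0.01, 0.001)]
limit = effective_variance(0.0)   # = 0.25 * 17/3 = 17/12 for these parameters
```

The parameters $\mu = 0.5$ and $\alpha = 0.7$ are arbitrary choices for the illustration.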

3.2. Heavy-traffic limit of the mean queue length

We now present the generalization of Kingman’s heavy-traffic bound [Reference Kingman17] in a single-server queue under Markovian arrivals. In other words, we characterize the steady-state mean queue length in a single-server queue in the heavy-traffic regime.

Theorem 1. Let $(A^t)^{ (\epsilon )}$ be a family of arrival processes determined by the corresponding Markov chains $\left\{(X^t)^{(\epsilon)}\right\}_{t\geq 0}$ as described before, with steady-state arrival rates $\lambda^{ (\epsilon )} = \mu -\epsilon$ . Let $\alpha^{ (\epsilon )}$ and $C^{ (\epsilon )} $ be the corresponding geometric mixing parameters from Lemma 2. Assume $\sup_\epsilon \alpha^{ (\epsilon )} \leq \alpha <1$ and $\sup_\epsilon C^{ (\epsilon )} \leq C < \infty$ . Then we have the following:

  1. For each $\epsilon \in (0,\mu )$ , the two-dimensional Markov chain

    \begin{align*}\Big\{ \Big((Q^t)^{ (\epsilon )},\ (X^t)^{ (\epsilon )} \Big)\Big\}_{t\geq 0 } \end{align*}
    is positive recurrent.
  2. Let $\overline{Q}^{ (\epsilon )}$ be a steady-state random variable to which the queue length process $\left\{Q^t\right\}^{ (\epsilon )}_{t \geq 1}$ converges in distribution. We have

    \begin{equation*} \lim_{\epsilon \to 0} E \left[\epsilon \overline{Q}^{ (\epsilon )} \right]=\frac{\sigma_a^2+\sigma_s^2}{2}. \end{equation*}

Proof of Theorem 1. To prove the first part of the theorem, we use the m-step Foster–Lyapunov theorem (Proposition 2.2.4 in [Reference Fayolle, Malyshev and Menshikov7]). We consider the quadratic Lyapunov function $\left(Q^{(\epsilon)}\right)^2$ and show that its m-step drift is negative outside a finite set. Consider a fixed $\epsilon \in (0,\mu)$ . For ease of exposition, we suppress the superscript $ ({\cdot})^{(\epsilon )}$ .

Claim 2. For any $\epsilon \in (0,\mu)$ and $m \in \mathbb{N}_+$ ,

\begin{align*} \begin{aligned} & E \left[\left(Q^{t+m}\right)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ & \leq E \left [2Q^t\left(\sum_{i=1}^{m} (A^{t+m-i}-\lambda )\right)-2m\epsilon Q^t+m K_0 (m )\mid \left(Q^t,X^t \right)= (q,x ) \right],\\ \end{aligned}\end{align*}

where

\begin{align*} K_0 (m ) = 2m \big(A_{max}+S_{max} \big)^2+ \big(A_{max}+S_{max} \big)^2.\end{align*}

The proof of Claim 2 is in Appendix D.1.

We now consider the first term on the right-hand side. The main challenge in bounding this term is the correlation between queue length and arrivals at the same time slot, because of the Markovian nature of the arrival process. We use the geometric mixing of the underlying Markov chain as stated in Lemma 2 to get

(6) \begin{align} &E \left[2Q^t \left(A^{t+m}-\lambda \right )\big| \left(Q^t,X^t \right)= (q,x ) \right]\nonumber\\[5pt] &\leq \big\lvert{E \left[2Q^t \left(A^{t+m}-\lambda \right )\big| \left(Q^t,X^t \right)= (q,x ) \right]}\big\rvert \leq 4A_{max}C\alpha^m q .\end{align}

Thus,

\begin{align*} & E \left [2Q^t\left(\sum_{i=1}^{m} \big(A^{t+m-i}-\lambda \big)\right)-2m\epsilon Q^t\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ &\leq 2q \left[ \left(2A_{max}C\frac{1-\alpha^m}{1-\alpha} \right)-m\epsilon \right].\end{align*}

Define

\begin{equation*}m (\epsilon )= \min\left\{m \in \mathbb{N}_+\,:\, 2A_{max}C\frac{1-\alpha^m}{1-\alpha}<\frac{1}{2}m\epsilon \right\}.\end{equation*}

Such an $m (\epsilon )$ exists because $2A_{max}C\frac{1-\alpha^m}{1-\alpha}$ is bounded above by $\frac{2A_{max}C}{1-\alpha}$ uniformly in m, while $\frac{1}{2}m\epsilon \to \infty$ as $m \to \infty$ . Let $K_1(\epsilon) = \frac{\epsilon m(\epsilon)}{2}$ . Then we have

\begin{align*} & E \left[\left(Q^{t+m(\epsilon)}\right)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right] \leq -2K_1(\epsilon)q +m(\epsilon) K_0 (m(\epsilon) ).\end{align*}

Let $\mathcal{B} = \left\{q\,:\, q \leq \frac{m(\epsilon) K_0 (m(\epsilon) )}{K_1({\epsilon})}\right\}$ , which is a finite set. Then we have

\begin{align*} & E \left[\left(Q^{t+m(\epsilon)}\right)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right] \\ &\leq -m(\epsilon) K_0 (m(\epsilon) )\mathcal{I} (q \in \mathcal{B}^c ) + m(\epsilon) K_0 (m(\epsilon) )\mathcal{I}(q \in \mathcal{B}).\end{align*}

The positive recurrence of the two-dimensional Markov chain $\big\{ \big(Q^t,\ X^t \big),\ t\geq 0\big\}$ then follows from the m-step Foster–Lyapunov theorem.

Therefore, using the irreducibility and aperiodicity of the two-dimensional Markov chain $\big\{ \big(Q^t,\ X^t \big),\ t\geq 0\big\}$ , we know that a unique stationary distribution exists. To prove the second part of the theorem, we will set the m-step drift of the quadratic test function to zero under the stationary distribution. In order to do this, we must first make sure that the stationary expectation of the quadratic function is finite, which is given by the following claim.

Claim 3. For any $\epsilon >0$ in steady state,

\begin{align*} E \Big[ \Big(\overline{Q}^{ (\epsilon )} \Big)^2 \Big] < \infty.\end{align*}

The proof of Claim 3 is in Appendix D.2.

In the rest of the proof, we consider the system in its steady state, so for every time t, $(Q^t)^{(\epsilon)} \stackrel{d}{=} \overline{Q}^{(\epsilon)}$ (equivalently, this can be thought of as initializing the system in its steady state). For ease of exposition, we again drop the superscript $({\cdot})^{(\epsilon)}$ and just use $Q^t$ . Then $A^t$ denotes the arrivals in steady state, and the queue length at time $t+m$ is

\begin{equation*}Q^{t+m}=Q^t+\sum_{l=0}^{m-1} A^{t+l}-\sum_{l=0}^{m-1} S^{t+l}+\sum_{l=0}^{m-1} U^{t+l},\end{equation*}

which has the same distribution as $Q^t$ for all $m \in \mathbb{N}_+$ . We can set the one-step drift equal to zero:

(7) \begin{align} 0=E \Big[\big(Q^{t+1} \big)^2-\left(Q^{t} \right)^2 \Big]=E \Big[\!\left(Q^t+A^t-S^t+U^t \right)^2-\left(Q^{t} \right)^2 \Big], \end{align}

where the last equation follows from (3). Expanding (7) and applying (4), we have

(8) \begin{align} 2\epsilon E \left[Q^t \right]=E \left[2Q^t (A^t-\lambda ) \right]-E \big[\left(U^t\right)^2 \big] + E \big[ (A^t-\lambda )^2 \big]+\sigma_s^2+ \epsilon^2. \end{align}

Equation (8) can be obtained using standard arguments. The heavy-traffic limit of the scaled queue length $\epsilon E \left[Q^t \right]$ is typically obtained by bounding the right-hand side and then letting $\epsilon \to 0$ . The main challenge here is bounding the $E \left[2Q^t (A^t-\lambda ) \right]$ term, since the queue length and the arrivals are correlated because the arrival process is Markovian. We bound this term by recursively expanding $Q^t$ using (3), and using the Markov chain mixing result from Lemma 2. We then obtain the following claim, the proof of which can be found in Appendix D.3.

Claim 4. For any $\epsilon \in (0,\mu)$ and $m \in \mathbb{N}_+$ , we have

\begin{align*} 2\left(1-2A_{max}C\frac{\alpha^m}{\epsilon}\right) E\left[\epsilon Q^t\right] &\leq \gamma(0) + 2\sum_{i=1}^{m}\gamma(i) + \sigma_s^2 + 2m \big(A_{max}+\lambda \big) \epsilon + \epsilon^2,\\ 2\left(1+2A_{max}C\frac{\alpha^m}{\epsilon}\right) E\left[\epsilon Q^t\right] &\geq \gamma(0) + 2\sum_{i=1}^{m}\gamma(i) + \sigma_s^2- 2m \big(A_{max}+\lambda \big)\epsilon\\& \quad - S_{max}\epsilon + \epsilon^2. \end{align*}

Note that the claim is true for all $\epsilon$ and m. Restoring the superscript ${ (\cdot )}^{ (\epsilon )}$ , for a given $\epsilon$ we now set $m = \left\lfloor \frac{1}{\sqrt{\epsilon}} \right\rfloor$ and take the limit as $\epsilon \to 0$ to get

(9) \begin{align} \lim_{\epsilon \to 0} E \big[\epsilon\overline{ Q}^{(\epsilon)}\big]=\frac{\lim_{\epsilon \to 0} \gamma^{(\epsilon)}(0) + \lim_{m(\epsilon) \to \infty} 2\sum_{t=1}^{m(\epsilon)}\gamma^{(\epsilon)}(t)+\sigma_s^2}{2}. \end{align}

Discussion. The key challenge in the proof is in handling the $E \left[Q^t (A^t-\lambda ) \right]$ term. When the arrivals are i.i.d., the expectation can simply be written as a product of expectations. Under Markovian arrivals, this cannot be done because of the correlation between the queue length and the arrivals at the same time slot. A multistep drift approach is usually used [Reference Rajagopalan, Shah and Shin26] to overcome this type of difficulty while establishing positive recurrence. We adopt the same approach in the first part of the proof to establish positive recurrence. However, it is unclear whether such a multistep drift argument is refined enough to provide an exact expression for the mean heavy-traffic queue length. So, in the second part of the proof, we simply use the one-step drift, and use a different approach to bound the term $E \left[Q^t (A^t-\lambda ) \right]$ . Essentially, we recursively apply (3) m times to expand $Q^t$ in terms of $Q^{t-m}$ and eventually get a term of the form $E \left[2Q^{t-m} (A^t-\lambda ) \right]$ . Now, for large enough m, $Q^{t-m}$ and $A^t$ are approximately independent, thanks to the fast mixing of the Markov chain underlying the arrival process, and so we can approximate the expectation by the product of expectations. This recursive expansion yields several other terms of the form $E \left[ (A^{t-i}-\lambda ) (A^t-\lambda ) \right]$ , which give us the autocovariance of the arrival process in Claim 4, and eventually in Theorem 1.

Note that for the argument to work, m needs to be chosen carefully. In particular, it should be large enough to ensure independence between the queue length and the arrival; on the other hand, it should be small enough to ensure that the error terms introduced by the recursive expansion are negligible in the heavy-traffic regime. It turns out that any m that is between $\Omega\left( \ln \frac{1}{\epsilon} \right)$ and $O(\frac{1}{\epsilon})$ works, and we choose $m = \left\lfloor \frac{1}{\sqrt{\epsilon}} \right\rfloor$ arbitrarily within the range.
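The role of the truncation level m can be illustrated numerically. The following sketch computes the autocovariance $\gamma(t)$ of a hypothetical two-state Markov-modulated arrival process (the transition matrix, arrival map f, and all numerical values below are our own illustrative choices, not taken from this paper) and checks that summing $\gamma(t)$ up to $m = \lfloor 1/\sqrt{\epsilon} \rfloor$ already captures the limiting sum, thanks to geometric mixing.

```python
# Autocovariance of a hypothetical Markov-modulated arrival process:
# X^t is a two-state chain and A^t = f(X^t).

def mat_mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[0.9, 0.1],   # transition matrix of the underlying chain (illustrative)
     [0.2, 0.8]]
f = [0.0, 3.0]     # arrivals generated in each state (illustrative)
pi = [2 / 3, 1 / 3]  # stationary distribution: pi P = pi

lam = sum(pi[i] * f[i] for i in range(2))   # mean arrival rate

def gamma(t):
    """gamma(t) = E[(A^t - lam)(A^0 - lam)], starting from stationarity."""
    Pt = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        Pt = mat_mul(Pt, P)
    return sum(pi[i] * (f[i] - lam) * Pt[i][j] * (f[j] - lam)
               for i in range(2) for j in range(2))

# For a two-state chain, gamma(t) = gamma(0) * rho^t, where rho is the
# second eigenvalue of P; here rho = 1 - 0.1 - 0.2 = 0.7.
rho = 0.7

# Truncating at m = floor(1/sqrt(eps)) already captures the limit
# sum_{t>=1} gamma(t) = gamma(0) * rho / (1 - rho) once eps is small.
eps = 1e-4
m = int(1 / eps ** 0.5)                     # m = 100
truncated = sum(gamma(t) for t in range(1, m + 1))
limit = gamma(0) * rho / (1 - rho)
```

Since $\gamma(t)$ decays geometrically, the truncation error at $m = \lfloor 1/\sqrt{\epsilon}\rfloor$ is of order $\rho^{1/\sqrt{\epsilon}}$, which vanishes faster than any power of $\epsilon$.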

The following claim, which is proved in Appendix D.4, is useful for evaluating

\begin{align*} \lim_{m(\epsilon) \to \infty}\sum_{t=1}^{m(\epsilon)}\gamma^{(\epsilon)}(t). \end{align*}

Claim 5. For any $\epsilon \in (0,\mu)$ and $m(\epsilon) = \left\lfloor \frac{1}{\sqrt{\epsilon}} \right\rfloor$ , we have

(10) \begin{equation} \begin{aligned} \lim_{m(\epsilon) \to \infty}\sum_{t=1}^{m(\epsilon)}\gamma^{(\epsilon)}(t) = \lim_{M \to \infty} \sum_{t=1}^{M} \gamma(t). \end{aligned} \end{equation}

Finally, combining (9) and Claim 5, we have

\begin{equation*} \begin{aligned} \lim_{\epsilon \to 0} E \left[\epsilon \overline{Q}^{(\epsilon)}\right] &=\frac{ 2\lim_{M \to \infty} \sum_{t=1}^{M} \gamma(t)+\lim_{\epsilon \to 0} \gamma^{(\epsilon)}(0) +\sigma_s^2}{2} = \frac{\sigma_a^2+\sigma_s^2}{2}. \end{aligned} \end{equation*}

Note that the key idea in the proof is to consider the m-step drift, where m is chosen to be a function of the heavy-traffic parameter $\epsilon$ . In addition, mixing time bounds on the underlying Markov chain are exploited.

Moreover, we derive the heavy-traffic limiting distribution of the scaled queue length, which is given in Appendix A.

4. Input-queued switch

In this section, we will study an input-queued switch operating under Markovian arrivals, and present an exact characterization of the mean sum of the queue lengths in heavy traffic. We first introduce the mathematical model of a switch under Markovian arrivals, and we present the MaxWeight algorithm. We then establish throughput optimality, state-space collapse, and asymptotically tight upper and lower bounds under the MaxWeight scheduling algorithm, which are proved using the m-step Lyapunov-type drift argument developed in the previous section. For completeness, the universal lower bound with Markovian arrivals under any feasible scheduling algorithm is also presented here.

Note on notation. We adopt the notation and definitions in [Reference Maguluri and Srikant22]. We restrict our discussion to Euclidean space $\mathbb{R}^{N^2}$ . For ease of exposition and understanding, we express an element $\boldsymbol{x}$ of $\mathbb{R}^{N^2}$ in two equivalent ways: first, as an $N^2$ -dimensional vector, which is the standard representation in Euclidean space; and second, as an $N\times N$ matrix, with the (i, j) element denoted by $\boldsymbol{x}_{ij}$ . For any two vectors $\boldsymbol{x}$ and $\boldsymbol{y}$ , the inner product is defined by

\begin{equation*} \left\langle \boldsymbol{x}, \boldsymbol{y} \right \rangle \triangleq \sum_{i=1}^N\sum_{j=1}^N x_{ij}y_{ij}.\end{equation*}

We say that $\boldsymbol{X}=\boldsymbol{Y}$ if $X_{ij}=Y_{ij}$ for all $i,j \in\{1,2,\ldots, N\}$ . The vector consisting of all ones is denoted by ${\textbf{1}}$ . Define $\boldsymbol{e}^{(i)}$ as the matrix with ones in the ith row and zeros everywhere else; similarly, $\tilde{\boldsymbol{e}}^{(j)}$ is the matrix with ones in the jth column and zeros everywhere else. That is,

(11) \begin{equation} \begin{aligned} &\boldsymbol{e}^{(i)}_{i,j}=1\ \forall j, &\boldsymbol{e}^{(i)}_{i^{\prime},j}=0\ \forall i^{\prime}\neq i,\ \forall j;\\ &\tilde{\boldsymbol{e}}^{(j)}_{i,j}=1\ \forall i, &\tilde{\boldsymbol{e}}^{(j)}_{i,j^{\prime}}=0\ \forall j^{\prime}\neq j,\ \forall i.\\ \end{aligned}\end{equation}

4.1. Mathematical model of a switch

An $ N \times N $ crossbar switch, also known as an input-queued switch, has N input ports and N output ports with a separate queue for each input–output pair. Such a system can be modeled as an $N\times N$ matrix of queues, where the (i, j)th queue corresponds to packets entering input port i and intended for output port j. The four key elements in the mathematical model of a switch under Markovian arrivals are as follows:

  • $\boldsymbol{A}^t \in \mathbb{R}^{N^2}$ : the vector of all arrivals at time slot t, of which the (i, j)th element corresponds to the number of packets arriving at the ith input port and to be delivered to the jth output port.

  • $\boldsymbol{S}^t \in \mathbb{R}^{N^2}$ : the vector of all services at time slot t. In each time slot, at most one queue in each row and in each column can be served, and each service is at most 1.

  • $\boldsymbol{U}^t \in \mathbb{R}^{N^2}$ : the vector of unused services at time slot t.

  • $\boldsymbol{Q}^t \in \mathbb{R}^{N^2}$ : the vector of all queue lengths at time slot t.

The arrival process $A_{ij}^t$ is determined by an underlying Markov chain $\left\{\boldsymbol{X}^t\right\}_{t\geq 0 }$ , with $A_{ij}^t= {{}f_{ij}} \big(X_{ij}^t \big)$ . The multidimensional Markov chain $\left\{\boldsymbol{X}^t\right\}_{t\geq 0 }$ is assumed to be irreducible, positive recurrent, and aperiodic on a finite state space $\Omega$ . Thus $\left\{\boldsymbol{X}^t\right\}_{t\geq 0}$ converges to its stationary distribution $\boldsymbol{\pi}$ with geometric rate. Let $\boldsymbol{f}=({{}f_{ij}})$ and $\boldsymbol{\lambda} = E \big[\overline{\boldsymbol{A}} \big] = E \big[\boldsymbol{f} \big(\overline{\boldsymbol{X}} \big) \big]$ , where $\overline{\boldsymbol{A}}$ and $\overline{\boldsymbol{X}}$ are the steady-state variables to which $\boldsymbol{A}^t$ and $\boldsymbol{X}^t$ converge in distribution.

The set of feasible schedules $\mathcal{S}$ can be written as

\begin{equation*} \mathcal{S} = \left\{\boldsymbol{s} \in\{0,1\}^{N^2}\,:\, \sum_{i=1}^N s_{ij}\leq 1, \sum_{j=1}^N s_{ij}\leq 1\ \forall i,j\in\{1,2,\ldots, N\} \right\}.\end{equation*}
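Membership in $\mathcal{S}$ is easy to verify programmatically; the following is a small sketch of our own (the function name is illustrative), checking that a 0–1 schedule serves at most one queue per row and per column.

```python
# Feasibility check for a schedule s: s must be a 0-1 matrix with
# row sums and column sums at most 1 (the definition of the set S).
def is_feasible(s):
    N = len(s)
    ok_bin = all(s[i][j] in (0, 1) for i in range(N) for j in range(N))
    ok_rows = all(sum(s[i][j] for j in range(N)) <= 1 for i in range(N))
    ok_cols = all(sum(s[i][j] for i in range(N)) <= 1 for j in range(N))
    return ok_bin and ok_rows and ok_cols
```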

The system operates as follows.

At the beginning of every time slot, the service $\boldsymbol{S}^t$ is determined based on the queue length $\boldsymbol{Q}^t$ , according to the scheduling algorithm. Then the arrivals $\boldsymbol{A}^t$ occur according to the underlying Markov chain. Finally the packets are served; this may result in an unused service if there are no packets in a scheduled queue. The queue length evolves as follows:

\begin{equation*} \begin{aligned} Q_{ij}^{t+1} & = \big[Q_{ij}^t+A_{ij}^t-S_{ij}^t\big]^+=Q_{ij}^t+A_{ij}^t-S_{ij}^t +U_{ij}^t, \end{aligned}\end{equation*}

or

\begin{equation*} \boldsymbol{Q}^{t+1} = \boldsymbol{Q}^{t}+ \boldsymbol{A}^{t}-\boldsymbol{S}^{t}+\boldsymbol{U}^t,\end{equation*}

where $[x]^+ = \max\!(0,x)$ . Assume that the Markov chain $\left\{\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)\right\}_{t\geq 0 }$ is irreducible.

For the unused service $\boldsymbol{U}^t$ , we have the following natural constraints on $U_{ij}^t$ :

\begin{gather*} U_{ij}^t \leq S_{ij}^t,\\ \sum_{i}U_{ij}^t \in \{0,1\}\ \forall j \in \{1,2,\ldots,N\},\\ \sum_{j}U_{ij}^t \in \{0,1\}\ \forall i \in \{1,2,\ldots,N\}.\end{gather*}

Moreover, for all $i,j \in\{1,2,\ldots,N\}$ we have

\begin{equation*} U_{ij}^t A_{ij}^t =0,\ \ \ U_{ij}^t {{}Q_{ij}^t} =0,\ \ \ U_{ij}^t Q_{ij}^{t+1} =0.\end{equation*}

The first two equations can be derived as follows. If $U^t_{ij}=0$ , then $U^t_{ij}A^t_{ij}=0$ and $U^t_{ij}Q^t_{ij}=0$ . Conversely, if $U^t_{ij}\neq0$ , we know that the service is at most 1, so $U^t_{ij}=1$ . Considering the system operation, the only possible case is when the queue length at the current time slot is zero $\big(Q_{ij}^t = 0\big)$ and there are no arrivals during the current time slot $\big(A_{ij}^t = 0\big)$ . Consequently, we have $U^t_{ij}A^t_{ij}=0$ and $U^t_{ij}Q^t_{ij}=0$ .
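The queue update and the three identities above can be verified mechanically on a small example. In the sketch below, the queue lengths, arrivals, and schedule are arbitrary illustrative values; the schedule is fixed by hand rather than produced by a scheduling algorithm.

```python
# One time slot of the switch dynamics Q^{t+1} = [Q^t + A^t - S^t]^+,
# with the unused service U^t recovered from the truncation.
N = 2
Q = [[0, 2], [1, 0]]   # queue lengths at time t (illustrative)
A = [[0, 1], [0, 1]]   # arrivals during slot t (illustrative)
S = [[1, 0], [0, 1]]   # a perfect matching, fixed by hand

U = [[0] * N for _ in range(N)]
Qn = [[0] * N for _ in range(N)]
for i in range(N):
    for j in range(N):
        served = Q[i][j] + A[i][j] - S[i][j]
        Qn[i][j] = max(served, 0)
        U[i][j] = Qn[i][j] - served   # 1 only if the queue and arrivals were both 0
        # the identities U*A = U*Q^t = U*Q^{t+1} = 0 hold pointwise
        assert U[i][j] * A[i][j] == 0
        assert U[i][j] * Q[i][j] == 0
        assert U[i][j] * Qn[i][j] == 0
```

Queue (1, 1) here is scheduled while empty and receives no arrivals, so it generates the single unused service $U_{11}^t = 1$... more precisely, the (0, 0) entry in zero-based indexing.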

4.2. MaxWeight algorithm

The MaxWeight algorithm is a well-studied scheduling algorithm for switches [Reference Srikant and Ying30]. In each time slot, the MaxWeight algorithm picks a service vector $S^t$ such that the weighted summation of service is maximized, where the weight vector is the current queue length vector, i.e.,

\begin{equation*} \boldsymbol{S}^t = \arg \max_{\boldsymbol{s}\in \mathcal{S}} \sum_{ij}Q_{ij}(t)s_{ij} = \arg \max_{\boldsymbol{s}\in \mathcal{S}}\big\langle \boldsymbol{Q}^t, \boldsymbol{s}\big\rangle.\end{equation*}

In the MaxWeight algorithm, ties are broken uniformly at random. The set of maximal feasible schedules or perfect matchings, $\mathcal{S}^*$ , is defined as follows:

\begin{equation*} \mathcal{S}^* = \left\{\boldsymbol{s} \in\{0,1\}^{N^2}\,:\, \sum_{i=1}^N s_{ij}= 1, \sum_{j=1}^N s_{ij} = 1\ \forall i,j\in\{1,2,\ldots, N\} \right\}.\end{equation*}

Without loss of generality, we assume that the MaxWeight algorithm always picks a perfect matching, i.e.,

\begin{equation*} \boldsymbol{S}^t \in \mathcal{S}^*,\ \forall t>0.\end{equation*}

Note that this is without loss of generality: under the MaxWeight algorithm, a maximum-weight schedule can always be extended to a perfect matching of the same weight. The extended schedule $\boldsymbol{s}^*$ can differ from the original one, i.e., $s_{ij} = 0$ but $s^*_{ij}=1$ , only at a queue (i, j) whose length is zero. In this case, we can pretend that $s_{ij} = 1$ and $u_{ij} = 1$ .
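For small N, the MaxWeight schedule can be computed by brute force over all N! perfect matchings. The sketch below is our own illustration (it ignores random tie-breaking, and a practical implementation would instead use a maximum-weight matching or assignment algorithm):

```python
# MaxWeight schedule by brute force over perfect matchings, represented
# as permutations: perm[i] is the output port matched to input port i.
from itertools import permutations

def maxweight(Q):
    N = len(Q)
    best_perm, best_w = None, -1
    for perm in permutations(range(N)):
        w = sum(Q[i][perm[i]] for i in range(N))   # weight <Q, s>
        if w > best_w:
            best_perm, best_w = perm, w
    S = [[1 if j == best_perm[i] else 0 for j in range(N)] for i in range(N)]
    return S, best_w

Q = [[3, 0, 1],
     [2, 5, 0],
     [0, 1, 4]]
S, w = maxweight(Q)   # the diagonal matching has the largest weight 3+5+4
```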

Once a Markovian scheduling algorithm is fixed, the switch can be modeled by the Markov chain $\left\{\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)\right\}_{t\geq 0 }$ . We assume that the arrival process is such that under the MaxWeight algorithm, this Markov chain is irreducible and aperiodic. A switch is said to be stable under a scheduling algorithm if the Markov chain $\left\{\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)\right\}_{t\geq 0 }$ is positive recurrent. The capacity region is defined as the set of arrival rates $\boldsymbol{\lambda}$ under which the switch is stabilizable by some algorithm.

For a switch under i.i.d. arrivals, it is known [Reference Srikant and Ying30] that the capacity region $\mathcal{C}$ is the convex hull of the feasible schedule set, i.e., $\mathcal{C} = \text{Conv}(\mathcal{S})$ . It can also be written as

\begin{equation*} \begin{aligned} \mathcal{C} & = \left\{\boldsymbol{\lambda} \in \mathbb{R}^{N^2}_+\,:\, \sum_{i=1}^N \lambda_{ij}\leq 1,\sum_{j=1}^N \lambda_{ij} \leq 1\ \forall i,j \in \{1,2,\ldots,N\} \right\}\\ & = \left\{\boldsymbol{\lambda} \in \mathbb{R}^{N^2}_+\,:\, \big\langle \boldsymbol{\lambda}, \boldsymbol{e}^{(i)} \big\rangle\leq1, \big\langle \boldsymbol{\lambda}, \boldsymbol{\tilde{e}}^{(j)} \big\rangle \leq1\ \forall i,j \in \{1,2,\ldots,N\} \right\}. \end{aligned}\end{equation*}

If a scheduling algorithm stabilizes the switch under any arrival rate in the capacity region, then it is said to be throughput optimal. Moreover, it is known that the MaxWeight algorithm is throughput optimal [Reference Srikant and Ying30]. We will establish similar results under Markovian arrivals in Section 4.3. Before that, we present some geometric observations about the capacity region from [Reference Maguluri and Srikant22].

4.2.1. Some geometric observations

The capacity region $\mathcal{C}$ is a convex polytope with dimension $N^2$ . Let $\mathcal{F}$ denote the face of $\mathcal{C}$ on which all input and output ports are saturated, defined by

\begin{equation*} \begin{aligned} \mathcal{F} & = \left\{\boldsymbol{\lambda} \in \mathbb{R}^{N^2}_+\,:\, \sum_{i=1}^N \lambda_{ij}= 1,\sum_{j=1}^N \lambda_{ij}= 1\ \forall i,j \in \{1,2,\ldots,N\} \right\}\\ & = \left\{\boldsymbol{\lambda} \in \mathbb{R}^{N^2}_+\,:\, \big\langle \boldsymbol{\lambda}, \boldsymbol{e}^{(i)} \big\rangle = 1, \big\langle \boldsymbol{\lambda}, \boldsymbol{\tilde{e}}^{(j)} \big\rangle = 1\ \forall i,j \in \{1,2,\ldots,N\} \right\}. \end{aligned}\end{equation*}

Note that $\mathcal{F} = \text{Conv}(\mathcal{S^*})$ . Let $\mathcal{K}$ denote the normal cone of the face $\mathcal{F}$ , which can be written explicitly as

\begin{equation*} \mathcal{K} = \left\{ \boldsymbol{x} \in \mathbb{R}^{N^2}\,:\, \boldsymbol{x} = \sum_{i=1} ^N w_i\boldsymbol{e}^{(i)}+\sum_{j=1} ^N \tilde{w}_j\tilde{\boldsymbol{e}}^{(j)},\ w_i, \tilde{w}_j \in \mathbb{R}_+ \forall i,j\right\}. \end{equation*}

It can be verified that this is indeed the normal cone, and so, for all $\boldsymbol{x},\boldsymbol{y} \in \mathcal{F}$ and all $\boldsymbol{z} \in \mathcal{K}$ , we have

(12) \begin{equation} \left\langle \boldsymbol{x}-\boldsymbol{y},\boldsymbol{z}\right\rangle =0. \end{equation}

The polar cone $\mathcal{K}^o$ of the cone $\mathcal{K}$ is defined as

\begin{equation*} \mathcal{K}^o = \Big\{ \boldsymbol{x} \in \mathbb{R}^{N^2}\,:\, \left\langle\boldsymbol{x},\boldsymbol{y}\right\rangle\leq 0 \ \forall \boldsymbol{y}\in \mathcal{K} \Big\}.\end{equation*}

Let $\mathcal{L}$ denote the subspace spanned by the cone $\mathcal{K}$ :

\begin{equation*} \mathcal{L} = \left\{ \boldsymbol{x} \in \mathbb{R}^{N^2}\,:\, \boldsymbol{x} = \sum_{i=1} ^N w_i\boldsymbol{e}^{(i)}+\sum_{j=1} ^N \tilde{w}_j\tilde{\boldsymbol{e}}^{(j)},\ \text{where} \ w_i,\ \tilde{w}_j \in \mathbb{R},\ \forall i,j\right\}. \end{equation*}

For any $ \boldsymbol{x} \in \mathbb{R}^{N^2}$ and any convex set $\mathcal{W}$ , the projection of $\boldsymbol{x}$ onto the set $\mathcal{W}$ is denoted by $\boldsymbol{x}_{{\parallel} \mathcal{W}}$ and defined as

\begin{equation*} \boldsymbol{x}_{{\parallel} \mathcal{W}} = \arg\min_{\boldsymbol{y} \in \mathcal{W}}\lvert\lvert{\boldsymbol{y} - \boldsymbol{x}}\rvert\rvert;\end{equation*}

and the perpendicular component is defined by

\begin{equation*} \boldsymbol{x}_{{\perp} \mathcal{W}} = \boldsymbol{x}- \boldsymbol{x}_{{\parallel} \mathcal{W}}.\end{equation*}

The projection of $ \boldsymbol{x}$ onto the subspace $\mathcal{L}$ is given by

\begin{equation*} \left(\boldsymbol{x}_{{{\parallel}\mathcal{L}}}\right)_{ij} = \frac{1}{N}\sum_{i^{\prime}}{x}_{i^{\prime}j} + \frac{1}{N}\sum_{j^{\prime}}{x}_{ij^{\prime}} - \frac{1}{N^2}\sum_{i^{\prime}j^{\prime}} {x}_{i^{\prime}j^{\prime}},\end{equation*}

and its $\ell_2$ norm is

\begin{equation*} \lvert\lvert{\boldsymbol{x}_{{\parallel}\mathcal{L}} }\rvert\rvert^2= \frac{1}{N}\Bigg( \sum_j \Bigg( \sum_{i}{x}_{ij}\Bigg)^2 + \sum_i \Bigg(\sum_{j}{x}_{ij}\Bigg)^2 - \frac{1}{N} \Bigg(\sum_{ij}{x}_{ij}\Bigg)^2 \Bigg).\end{equation*}

For any $\boldsymbol{x}_{{\parallel}\mathcal{L}}$ and $\boldsymbol{y}_{{\parallel}\mathcal{L}}$ , we have

(13) \begin{align} \big\langle \boldsymbol{x}_{{\parallel}\mathcal{L}}, \boldsymbol{y}_{{\parallel}\mathcal{L}}\big\rangle = \big\langle \boldsymbol{x}, \boldsymbol{y}_{{\parallel}\mathcal{L}}\big\rangle = \sum_{ij}x_{ij}\left[\frac{1}{N}\sum_{j^{\prime}}y_{ij^{\prime}}+\frac{1}{N}\sum_{i^{\prime}}y_{i^{\prime}j}-\frac{1}{N^2}\sum_{i^{\prime}j^{\prime}}y_{i^{\prime}j^{\prime}}\right].\end{align}
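The closed-form projection onto $\mathcal{L}$, its idempotence, the orthogonality of the residual to every $\boldsymbol{e}^{(i)}$ and $\tilde{\boldsymbol{e}}^{(j)}$, and the squared-norm formula can all be checked numerically; the following is a small self-contained sketch of our own, on an arbitrary example matrix.

```python
# Projection onto the subspace L spanned by the row and column
# indicator matrices, using the closed-form entrywise expression.
def proj_L(x):
    N = len(x)
    row = [sum(x[i][j] for j in range(N)) for i in range(N)]   # row sums
    col = [sum(x[i][j] for i in range(N)) for j in range(N)]   # column sums
    tot = sum(row)
    return [[row[i] / N + col[j] / N - tot / N ** 2 for j in range(N)]
            for i in range(N)]

N = 2
x = [[1.0, 2.0], [4.0, 0.0]]   # arbitrary example
xL = proj_L(x)

# The projection is idempotent: proj_L(proj_L(x)) == proj_L(x).
xLL = proj_L(xL)

# The residual x - x_{||L} has zero row and column sums, i.e. it is
# orthogonal to every e^(i) and e~^(j), hence to all of L.
res = [[x[i][j] - xL[i][j] for j in range(N)] for i in range(N)]

# The squared-norm formula for ||x_{||L}||^2 matches direct computation.
direct = sum(xL[i][j] ** 2 for i in range(N) for j in range(N))
formula = (sum(sum(x[i][j] for i in range(N)) ** 2 for j in range(N))
           + sum(sum(x[i][j] for j in range(N)) ** 2 for i in range(N))
           - sum(x[i][j] for i in range(N) for j in range(N)) ** 2 / N) / N
```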

4.3. Throughput optimality

In this section, we show that under Markovian arrivals, $\mathcal{C}$ is indeed the capacity region, and MaxWeight is throughput optimal.

Proposition 1. If $\boldsymbol{\lambda} \notin \mathcal{C}$ , then no scheduling algorithm can support the arrival rate matrix $\boldsymbol{\lambda}$ .

The proof follows from Theorem 4.2.1 in [Reference Srikant and Ying30], using the Markov chain ergodic theorem instead of the strong law of large numbers.

Proposition 2. For an input-queued switch operating under Markovian arrivals, the MaxWeight scheduling algorithm can support any arrival rate matrix $\boldsymbol{\lambda}$ in the interior of $\mathcal{C}$ , i.e., $\boldsymbol{\lambda}\in int(\mathcal{C})$ . Thus, MaxWeight is throughput optimal.

Proof. Note that $\left\{\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)\right\}_{t\geq 0 }$ is an irreducible Markov chain under MaxWeight scheduling. We prove this theorem by demonstrating that $\left\{\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right)\right\}_{t\geq 0 }$ is positive recurrent, which can be done using the Foster–Lyapunov theorem (Proposition 2.2.4 in [Reference Fayolle, Malyshev and Menshikov7]). Consider the Lyapunov function

\begin{equation*} V \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right)=\lvert\lvert{\boldsymbol{Q}^t}\rvert\rvert^2.\end{equation*}

We will consider the m-step drift of this Lyapunov function, and bound it as follows.

Claim 6. Let $C_{max} = \max_{ij}C_{ij}$ and $\alpha_{max} = \max_{ij}\alpha_{ij}$ , where $C_{ij}$ and $\alpha_{ij}$ are the geometric mixing parameters given by Lemma 2 for the (i, j)th underlying Markov chain $\big\{ X^t_{ij} \big\}_{t\geq 0}$ . There exists $\epsilon >0$ such that $\boldsymbol{\lambda}+\epsilon {\textbf{1}} \in \mathcal{C}$ . Let

\begin{align*} m(\epsilon ) = \min \left\{m \in \mathbb{N}_+\mid 2NA_{max}C_{max}\frac{1-\alpha_{max}^m}{1-\alpha_{max}} < \frac{m\epsilon }{2} \right\}; \end{align*}

then

\begin{equation*} \begin{aligned} & E \left[V \left(\boldsymbol{Q}^{t+m(\epsilon)} , \boldsymbol{X}^{t+m(\epsilon)} \right)-V \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right)\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)=(\boldsymbol{q},\boldsymbol{x}) \right]\\ & \leq -\frac{m(\epsilon )\epsilon }{2}\lvert\lvert{\boldsymbol{q} }\rvert\rvert+\frac{K_2 (m(\epsilon ) )}{2} , \end{aligned} \end{equation*}

where

\begin{align*} K_2 (m(\epsilon ) ) & = m(\epsilon )N^2 \big(A_{max}+S_{max} \big)^2\\ & \quad + 2m(\epsilon )^2 N^2 \big(A_{max}+S_{max}\big)\big(\epsilon +A_{max}+\lambda_{max} \big). \end{align*}

The theorem then follows from the Foster–Lyapunov theorem (Proposition 2.2.4 in [Reference Fayolle, Malyshev and Menshikov7]). A detailed proof of the claim is in Appendix E.
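The horizon $m(\epsilon)$ in Claim 6 is defined by a minimum that always exists, since the left-hand side of the defining inequality is bounded in m while the right-hand side grows linearly. The sketch below computes it numerically; the values of N, $A_{max}$, $C_{max}$, and $\alpha_{max}$ are made-up constants for illustration only.

```python
# Smallest m with 2*N*A_max*C_max*(1 - alpha_max**m)/(1 - alpha_max) < m*eps/2,
# i.e. the m(eps) of Claim 6, for illustrative (made-up) constants.
def m_eps(eps, N=2, A_max=1.0, C_max=1.0, alpha_max=0.7):
    m = 1
    while (2 * N * A_max * C_max * (1 - alpha_max ** m) / (1 - alpha_max)
           >= m * eps / 2):
        m += 1
    return m
```

As expected, a smaller $\epsilon$ (arrival rate closer to the boundary) forces a larger horizon $m(\epsilon)$.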

4.4. Heavy-traffic analysis

We will now study the switch in heavy traffic, as the arrival rate approaches the face $\mathcal{F}$ on the boundary of the capacity region $\mathcal{C}$ . To this end, consider $\left\{\boldsymbol{X}^t\right\}^{ (\epsilon )}_{t\geq 0}$ , a set of irreducible, positive recurrent, and aperiodic underlying vector-valued Markov chains indexed by the heavy-traffic parameter $\epsilon \in (0,1)$ taking values in $\Omega^{N^2}$ . The arrival process for the system indexed by $\epsilon$ is given by $\big(A_{ij}^t\big)^{(\epsilon)} = {{}f_{ij}}\big(\big(X_{ij}^t\big)^{(\epsilon)}\big)$ for a non-negative-valued function ${{}f_{ij}}({\cdot})$ . For any fixed $\epsilon$ , let $\overline{\boldsymbol{X}}^{ (\epsilon )}$ be the steady-state variable to which $\left(\boldsymbol{X}^t\right)^{ (\epsilon )}$ converges in distribution, with $E\big[\overline{\boldsymbol{A}}^{(\epsilon)} \big] = E\big[\boldsymbol{f}\big(\overline{\boldsymbol{X}}^{ (\epsilon )}\big)\big] = \boldsymbol{\lambda}^{(\epsilon)} = (1 -\epsilon)\boldsymbol{v}$ , where $\boldsymbol{v}$ is an arrival rate on the boundary of the capacity region $\mathcal{C}$ such that $v_{ij} > 0$ for all i, j, and all the input and output ports are saturated. In other words, we assume $\boldsymbol{v} \in \mathcal{F}$ . Let $\gamma_{ij,kl}^{(\epsilon)}(t)$ be the spatiotemporal correlation function of the arrival process between the ijth port at time t and the klth port at time 0, starting from steady state $\boldsymbol{X}^{\textbf{0}} \stackrel{d}{=}\overline{\boldsymbol{X}}^{ (\epsilon )}$ , i.e.,

\[ {} \gamma_{ij,kl}^{(\epsilon)}(t) = E \Big[ \Big(\big(A_{ij}^t\big)^{(\epsilon)}-\lambda_{ij}^{(\epsilon)} \Big) \Big(\big(A_{kl}^0\big)^{(\epsilon)}-\lambda_{kl}^{(\epsilon)} \Big) \Big]. \]

Assume that for all $t \in \mathbb{N}$ ,

(14) \begin{equation} {} \lim_{\epsilon \to 0}\gamma_{ij,kl}^{(\epsilon)}(t) = \gamma_{ij,kl}(t).\end{equation}

Let

\[{} \sigma_{ij,kl}^2 = \gamma_{ij,kl}(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma_{ij,kl}(t), \]

and let $\boldsymbol{\sigma}=(\sigma_{ij,kl}) \in \mathbb{R}^{N^2\times N^2}$ be the limiting effective covariance matrix. For any scheduling algorithm under which the switch system is stable, let $\Big( \big(\overline{\boldsymbol{Q}}\big)^{(\epsilon )},\big(\overline{\boldsymbol{X}}\big)^{(\epsilon )} \Big)$ be a steady-state random variable to which the process $\left\{ \left( \left(\boldsymbol{Q}^t \right)^{(\epsilon)},\left(\boldsymbol{X}^t \right)^{(\epsilon)} \right)\right\}_{t\geq 0}$ converges in distribution. Assume that there exist constants $\alpha_{max}<1$ and $C_{max}<\infty$ such that $\sup_\epsilon \alpha^{(\epsilon)}_{ij}\leq\alpha_{max}$ and $\sup_{\epsilon}C^{(\epsilon)}_{ij} \leq C_{max}$ for all i, j.

4.4.1. Universal lower bound

In this section, we will give a lower bound on the average queue length under heavy traffic, which is valid under any stable scheduling algorithm.

Proposition 3. Consider a set of switch systems parameterized by the heavy-traffic parameter $\epsilon\in (0,1)$ as described before. Consider a scheduling algorithm under which the switch system is stable. Then, under heavy traffic, we have

\begin{equation*} \liminf_{\epsilon \to 0} \epsilon E\Bigg[\sum_{ij}\overline{Q}_{ij}^{(\epsilon)} \Bigg] \geq \frac{\lvert\lvert{\boldsymbol{\sigma}}\rvert\rvert^2}{2}. \end{equation*}

Using a coupling argument, it was shown in [Reference Maguluri and Srikant22] that under any policy, the sum of the queue lengths in a row is bounded below by the queue length of a single-server queue whose arrival process is the sum of the arrivals to all the queues in that row. The result then follows by applying Theorem 1 to this single-server queue, and so we omit the proof.

4.4.2. State-space collapse under MaxWeight policy

A key step in characterizing the heavy-traffic behavior is to establish state-space collapse. It was shown in [Reference Maguluri and Srikant22] that under the MaxWeight algorithm, the queue lengths collapse into the cone $\mathcal{K}$ in heavy traffic. In this subsection, we establish the same result under Markovian arrivals. More formally, we show that under the MaxWeight algorithm, $\overline{\boldsymbol{q}}_{{\perp\mathcal{K}}}^{(\epsilon)}$ can be bounded by some constant independent of $\epsilon$ . Thus, when the heavy-traffic parameter $\epsilon$ goes to zero (the mean arrival rate vector $\boldsymbol{\lambda}$ approaches the boundary $\mathcal{F}$ of the capacity region $\mathcal{C}$ ), $\overline{\boldsymbol{q}}_{{\perp\mathcal{K}}}^{(\epsilon)}$ is negligible compared to $\overline{\boldsymbol{q}}_{{\parallel\mathcal{K}}}^{(\epsilon)}$ .

We define the following quadratic Lyapunov functions and their corresponding m-step drifts:

\begin{align*} V (\boldsymbol{q},\boldsymbol{x} ) & \triangleq \lvert\lvert{\boldsymbol{q}}\rvert\rvert^2 = \sum_{ij}q_{ij}^2, &W_{{\perp\mathcal{K}}} (\boldsymbol{q},\boldsymbol{x} ) &\triangleq \lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}}}\rvert\rvert, \\ V_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) &\triangleq \lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}}}\rvert\rvert^2 = \sum_{ij}q_{{\perp\mathcal{K}} ij}^2, &V_{\parallel\mathcal{K}} (\boldsymbol{q} ,\boldsymbol{x} ) &\triangleq \lvert\lvert{\boldsymbol{q}_{\parallel\mathcal{K}}}\rvert\rvert^2 = \sum_{ij}q_{{\parallel\mathcal{K}} ij}^2,\end{align*}
\begin{equation*} \begin{aligned} & \Delta^m V (\boldsymbol{q},\boldsymbol{x} ) \triangleq \left[V \left(\boldsymbol{Q}^{t+m },\boldsymbol{X}^{t+m} \right)-V \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) \right] \mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right),\\ & \Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \triangleq \left[W_{\perp\mathcal{K}} \left( \boldsymbol{Q}^{t+m},\boldsymbol{X}^{t+m} \right) -W_{\perp\mathcal{K}} \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) \right] \mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right),\\ & \Delta^m V_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \triangleq \left[V_{\perp\mathcal{K}} \left(\boldsymbol{Q}^{t+m} ,\boldsymbol{X}^{t+m} \right)-V_{\perp\mathcal{K}} \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) \right] \mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right),\\ & \Delta^m V_{\parallel\mathcal{K}} (\boldsymbol{q} ,\boldsymbol{x} ) \triangleq \left[V_{\parallel\mathcal{K}} \left(\boldsymbol{Q}^{t+m},\boldsymbol{X}^{t+m} \right)-V_{\parallel\mathcal{K}} \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) \right] \mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right).\\ \end{aligned}\end{equation*}

The next proposition gives the state-space collapse.

Proposition 4. Consider a set of switch systems under the MaxWeight scheduling algorithm, parameterized by the heavy-traffic parameter $0<\epsilon<1$ as described before. Further assume that $v_{\min} \triangleq \min_{ij}v_{ij}>0$ . Then, for each system with $\epsilon < v_{\min}/2\lvert\lvert{\boldsymbol{v}}\rvert\rvert$ , the steady-state queue length vector satisfies

\begin{equation*} E \left[\Big\lvert\Big\lvert{\left(\overline{\boldsymbol{Q}}_{\perp\mathcal{L}} \right)^{(\epsilon )}}\Big\rvert\Big\rvert^2 \right]\leq E \left[\Big\lvert\Big\lvert{\left(\overline{\boldsymbol{Q}}_{\perp\mathcal{K}} \right)^{(\epsilon )}}\Big\rvert\Big\rvert^2 \right] \leq K^{\star\star}, \end{equation*}

where $K^{\star\star}$ is a constant that does not depend on $\epsilon$ .

To prove the proposition, we need the following two lemmas.

Lemma 4. Under the MaxWeight algorithm, for any $\boldsymbol{q} \in \mathbb{R}^{N^2}$ ,

\begin{equation*} \boldsymbol{v} +\frac{v_{\text{min}}}{\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}}}\rvert\rvert}\boldsymbol{q}_{\perp\mathcal{K}} \in \mathcal{C}. \end{equation*}

This lemma was proved in [Reference Maguluri and Srikant22, Claim 2].

Lemma 5. The drift of $W_{{\perp\mathcal{K}}} (\cdot )$ can be bounded in terms of the drift of $V (\cdot )$ and $V_{\parallel\mathcal{K}} (\cdot )$ as follows:

\begin{equation*} \Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \leq \frac{1}{2\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}}}\rvert\rvert} \left(\Delta^m V (\boldsymbol{q},\boldsymbol{x} )-\Delta^m V_{\parallel\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \right), \quad \forall \left(\boldsymbol{q},\boldsymbol{x}\right) \in \mathbb{R}^{N^2}\times\Omega^{N^2}. \end{equation*}

The proof of this lemma is almost identical to that of Lemma 7 in [Reference Eryilmaz and Srikant6], and so we skip it here.

Proof of Proposition 4. For ease of exposition, the superscript $(\epsilon)$ is omitted in this proof, i.e., we will use $\boldsymbol{Q}^t$ and $\boldsymbol{\lambda}$ to denote $\left(\boldsymbol{Q}^t\right)^{(\epsilon)}$ and $\boldsymbol{\lambda}^{(\epsilon)}$ respectively. The proof is based on applying Lemma 1, by verifying that its three conditions are satisfied. We start with the following claim, which is proved in Appendix F.1.

Claim 7. For any $m\in \mathbb{N}_+$ ,

\begin{equation*} \begin{aligned} {\lvert{ \Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )}\rvert \leq m \lvert{ \Delta W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )}\rvert \leq N m \big(A_{max}+S_{max} \big).} \end{aligned} \end{equation*}

Therefore, Conditions 2 and 3 are satisfied with $D(m) = Nm\big(A_{max}+S_{max}\big)$ and $\hat{D}(t_0) =N \big(A_{max}+S_{max} \big) $ . To verify Condition 1, we need the following claim, which is proved in Appendix F.2.

Claim 8. For any $m\in \mathbb{N}_+$ ,

\begin{equation*} \begin{aligned} &E \left[\Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} )\right]\\ & \leq E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]+ \frac{K_3(m)}{2\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}} }\rvert\rvert}+m \big(\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert-v_{\min} \big),\\ \end{aligned} \end{equation*}

where

\begin{equation*} \begin{aligned} K_3(m) &= 2Nm^2\big(A_{max}+S_{max} \big) \big(N\big(A_{max}+\lambda_{max}\big)+\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert +v_{\min} \big)\\&+N^2m \big(A_{max}+S_{max} \big)^2. \end{aligned} \end{equation*}

We can use Lemma 2 to bound the first term on the right-hand side above as follows:

\begin{equation*} \begin{aligned} &E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \\ &\leq \sum_{l=1}^{m}\sqrt{\sum_{ij} 4A_{max}^2C_{ij}^2\alpha_{ij}^{2 (m-l )}}\\ & \leq \sum_{l=0}^{m-1}2N A_{max} C_{max}\alpha_{max}^l\\ & = 2N A_{max} C_{max} \frac{1-\alpha_{max}^m}{1-\alpha_{max}}.\\ \end{aligned} \end{equation*}

Let

\begin{equation*} \begin{aligned} &m_0 = \min\Bigg \{m \in \mathbb{N}_+ \mid \frac{2 N A_{max} C_{max}}{1-\alpha_{max}} \leq \frac{m v_{\min}}{4} \Bigg \}. \end{aligned} \end{equation*}

Clearly, such an $m_0 \in \mathbb{N}_+$ exists.

Therefore, for $m\geq m_0$ we have

\begin{align*} E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \leq \frac{m v_{\min}}{4}.\end{align*}

Combining this with Claim 8, when $\epsilon \leq \frac{v_{\min}}{4\lvert\lvert{\boldsymbol{v}}\rvert\rvert}$ , we have

\begin{equation*} \begin{aligned} E \left[\Delta^{m_0} W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] &\leq \frac{K_3(m_0)}{2\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}} }\rvert\rvert} -\frac{m_0 v_{\min}}{2} \leq -\frac{v_{\min}}{4}, \end{aligned} \end{equation*}

for all $\boldsymbol{q}\ \text{such that}\ W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )\geq \frac{2K_3 (m_0 )}{m_0v_{\min}}$ . Let

\begin{align*} Z (\boldsymbol{q},\boldsymbol{x} ) = W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ). \end{align*}

Define $\kappa(m_0) = \frac{2K_3 (m_0 )}{m_0v_{\min}}$ , $\eta = \frac{v_{\min}}{4}$ , $D(m_0) = N m_0 \big(A_{max}+S_{max} \big)$ , and $\hat{D}(t_0) = N\big(A_{max}+S_{max} \big)$ . This verifies the three conditions in Lemma 1, so there exists a $K^{\star\star}(m_0)$ such that

\begin{equation*} E \left[\Big\lvert\Big\lvert{\left(\overline{\boldsymbol{Q}}_{\perp\mathcal{K}} \right)^{(\epsilon )}}\Big\rvert\Big\rvert^2 \right] \leq K^{\star\star}(m_0). \end{equation*}

Since $\mathcal{K} \subseteq \mathcal{L}$ , we conclude that

\begin{align*} E \left[\Big\lvert\Big\lvert{\left(\overline{\boldsymbol{Q}}_{\perp\mathcal{L}} \right)^{(\epsilon )}}\Big\rvert\Big\rvert^2 \right] \leq E \left[\Big\lvert\Big\lvert{\left(\overline{\boldsymbol{Q}}_{\perp\mathcal{K}} \right)^{(\epsilon )}}\Big\rvert\Big\rvert^2 \right] \leq K^{\star\star}(m_0). \end{align*}

Since the parameters $ \eta$ , $\kappa(m_0)$ , $D(m_0),$ and $\hat{D}(t_0)$ in the three conditions of Lemma 1 do not involve $\epsilon$ , we have that $K^{\star\star}(m_0)$ does not depend on $\epsilon$ . To simplify the notation, we will use $K^{\star\star}$ to represent $K^{\star\star}(m_0)$ .

4.4.3. Asymptotically tight upper and lower bounds under MaxWeight policy

We now use the state-space collapse result from the previous section to obtain an exact expression for the heavy-traffic scaled mean sum of the queue lengths in an input-queued switch under Markovian arrivals. In particular, we will obtain lower and upper bounds on the steady-state mean queue lengths that differ by only $o(1/\epsilon)$ , and so are negligible in the heavy-traffic limit. We will again use the m-step drift argument from Section 3. We will use $V^{\prime}(\boldsymbol{q},\boldsymbol{x}) = \lvert\lvert{\textbf{q}_{{\parallel}\mathcal{L}}}\rvert\rvert^2$ as the Lyapunov function to this end. This Lyapunov function was introduced in the i.i.d. case in [Reference Maguluri, Burle and Srikant21].

The one-step drift of $V^{\prime} (\boldsymbol{q},\boldsymbol{x} )$ is defined as

(15) \begin{equation} \Delta V^{\prime} (\boldsymbol{q} ,\boldsymbol{x} ) \triangleq \left[V^{\prime} \left(\boldsymbol{Q}^{t+1},\boldsymbol{X}^{t+1} \right)-V^{\prime} \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) \right]\mathcal{I} \left(\left(\boldsymbol{Q}^t,\boldsymbol{X}^t\right) =(\boldsymbol{q},\boldsymbol{x}) \right).\end{equation}

Lemma 6. For any arrival rate vector $\boldsymbol{\lambda}$ in the interior of the capacity region $\boldsymbol{\lambda} \in int (\mathcal{C} )$ , the steady-state mean $E\big[\lvert\lvert{\overline{\boldsymbol{Q}}}\rvert\rvert^2\big]$ is finite, and consequently $E [V^{\prime} (\boldsymbol{Q} ,\boldsymbol{X} ) ]$ is finite for the input-queued switch under the MaxWeight algorithm.

This lemma is needed to set the drift of $V^{\prime} (\boldsymbol{q},\boldsymbol{x})$ to zero in steady state. The lemma is proved in Appendix G. We will now state and prove the main result of this paper.

Theorem 2. Consider a set of switch systems under the MaxWeight scheduling algorithm, parameterized by the heavy-traffic parameter $0<\epsilon<1$ as described before, with $\epsilon < v_{\min}/2\lvert\lvert{\boldsymbol{v}}\rvert\rvert$ for each system. Then, in the heavy-traffic limit as $\epsilon \to 0$ , we have

(16) \begin{equation} {} \lim_{\epsilon \to 0}\epsilon E\Bigg[\sum_{ij}\overline{Q}_{ij}^\epsilon \Bigg] = \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \sigma^2_{ij,ij^{\prime}}+\frac{1}{N} \sum_{i^{\prime}}\sigma^2_{ij,i^{\prime}j}-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \sigma^2_{ij,i^{\prime}j^{\prime}}\right). \end{equation}

Corollary 1. If the underlying Markov chains are independent across input–output pairs, i.e., $\sigma_{ij,kl}=0\ \forall ij\neq kl$ , then Equation (16) can be simplified as

\begin{equation*}\lim_{\epsilon \to 0}\epsilon E\Bigg[\sum_{ij}\overline{Q}_{ij}^\epsilon \Bigg]= \left(1-\frac{1}{2N} \right)\lvert\lvert{\boldsymbol{\sigma}}\rvert\rvert^2.\end{equation*}

Corollary 2. If the arrival processes are i.i.d. across time, i.e., $\gamma_{ij,kl} (t)=0\ \forall t\in\{1,2,\ldots\}$ , then Equation (16) can be simplified as

\begin{equation*}\lim_{\epsilon \to 0}\epsilon E\Bigg[\sum_{ij}\overline{Q}_{ij}^\epsilon \Bigg]= \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^2_{ij,ij^{\prime}}(0)+\frac{1}{N} \sum_{i^{\prime}}\gamma^2_{ij,i^{\prime}j}(0)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^2_{ij,i^{\prime}j^{\prime}}(0)\right),\end{equation*}

which matches the result in [Reference Hurtado Lange and Maguluri14, Corollary 1, p. 13].

Remark 4. Although Theorem 2 and Corollaries 1 and 2 present the results in the heavy-traffic limit as $\epsilon \to 0$ , our approach provides upper and lower bounds on $E\left[\sum_{ij}\overline{Q}_{ij}^\epsilon \right]$ for all $\epsilon \in(0, v_{\min}/2\|\boldsymbol{v}\|)$ . See Equations (25) and (26) for the explicit bounds.
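The limit in Equation (16) is straightforward to evaluate numerically. The following sketch (not part of the paper; the covariance assignment is hypothetical, stored in an assumed dict layout `sigma2[(i, j, k, l)]` for $\sigma^2_{ij,kl}$) evaluates the right-hand side of (16) and checks that it reduces to the expression in Corollary 1 when the cross covariances vanish.

```python
def heavy_traffic_limit(N, sigma2):
    """Evaluate the right-hand side of Equation (16);
    sigma2[(i, j, k, l)] plays the role of sigma^2_{ij,kl}."""
    total = 0.0
    for i in range(N):
        for j in range(N):
            row = sum(sigma2[(i, j, i, jp)] for jp in range(N))
            col = sum(sigma2[(i, j, ip, j)] for ip in range(N))
            full = sum(sigma2[(i, j, ip, jp)]
                       for ip in range(N) for jp in range(N))
            total += row / N + col / N - full / N**2
    return total

# Independent case of Corollary 1: sigma^2_{ij,kl} = 0 for ij != kl.
N = 2
var = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 3.0, (1, 1): 4.0}
sigma2 = {(i, j, k, l): 0.0 for i in range(N) for j in range(N)
          for k in range(N) for l in range(N)}
for (i, j), v in var.items():
    sigma2[(i, j, i, j)] = v

general = heavy_traffic_limit(N, sigma2)
corollary = (1 - 1 / (2 * N)) * sum(var.values())  # (1 - 1/2N)||sigma||^2
# general == corollary == 7.5 for these hypothetical variances
```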

Proof of Theorem 2. Fix an $\epsilon \in (0, v_{\min}/2\lvert\lvert{\boldsymbol{v}}\rvert\rvert)$ and consider the system with index $\epsilon$ . In the rest of the proof, we consider the system in its steady state, and so for every time t, $\left(\boldsymbol{Q}^t\right)^{(\epsilon)} \stackrel{d}= \overline{\boldsymbol{Q}}^{(\epsilon)}$ . We again skip the superscript ${({\cdot})}^{(\epsilon)}$ in this proof and use $\boldsymbol{Q}^t$ to denote the steady-state queue length vector. Then $\boldsymbol{A}^t$ denotes the arrival vector in steady state and $\boldsymbol{Q}^{t+m}$ denotes the queue length at time $t+m$ ,

\begin{equation*}\boldsymbol{Q}^{t+m} = \boldsymbol{Q}^t+\sum_{l=0}^{m-1} \boldsymbol{A}^{t+l}-\sum_{l=0}^{m-1} \boldsymbol{S}^{t+l}+\sum_{l=0}^{m-1} \boldsymbol{U}^{t+l},\end{equation*}

which has the same distribution as $\boldsymbol{Q}^t$ for all $m \in \mathbb{N}_+$ .

The steady-state mean $E \left[V^{\prime} \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) \right]$ is finite from Lemma 6. Therefore, we can set the mean drift of $V^{\prime} (\cdot )$ in steady state to zero:

(17) \begin{align} E \left[\Delta V^{\prime} (\boldsymbol{q} ,\boldsymbol{x} ) \right]& = -\underbrace{2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{A}^t_{{\parallel}\mathcal{L}} \right\rangle \right]}_{\mathcal{T}_{1}} +\underbrace{E\left[ \big\lvert\big\lvert{\boldsymbol{A}^t_{{\parallel}\mathcal{L}}-\boldsymbol{S}^t_{{\parallel}\mathcal{L}}}\big\rvert\big\rvert^2 \right]}_{\mathcal{T}_{2}} \nonumber \\ &-\underbrace{E \Big[ \big\lvert\big\lvert{\boldsymbol{U}^t_{{\parallel}\mathcal{L}}}\big\rvert\big\rvert^2 \Big]}_{\mathcal{T}_{3}}+\underbrace{2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\parallel}\mathcal{L}}, \boldsymbol{U}^t_{{\parallel}\mathcal{L}} \right \rangle \right]}_{\mathcal{T}_{4}} =0. \end{align}

We will now bound each of these four terms. In steady state we have

\begin{equation*} E \left[ \big\langle\boldsymbol{Q}^{t},\boldsymbol{A}^{t} \big\rangle \right] =E \left[\big\langle\boldsymbol{Q}^{t+m},\boldsymbol{A}^{t+m} \big\rangle \right],\end{equation*}

since the steady-state mean $E \left[ \big\langle\boldsymbol{Q}^{t},\boldsymbol{A}^{t} \big\rangle \right] \leq \sqrt{E\left[ \lvert\lvert{\boldsymbol{Q}^{t}}\rvert\rvert^2\right]E\left[ \lvert\lvert{\boldsymbol{A}^{t}}\rvert\rvert^2 \right]}< \infty $ according to Lemma 6. Then we have

(18) \begin{align} \mathcal{T}_{1} & = 2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}} \right\rangle \right]+2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^t_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & = 2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}} \right\rangle \right]+2E \left[\left\langle \boldsymbol{Q}^{t+m}_{{\parallel}\mathcal{L}},\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber \\ & =2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & +2E \left[\left\langle \boldsymbol{Q}^{t}_{{\parallel}\mathcal{L}}+\sum_{l=1}^m \left(\boldsymbol{A}^{t+m-l}_{{\parallel}\mathcal{L}}-\boldsymbol{S}^{t+m-l}_{{\parallel}\mathcal{L}}+\boldsymbol{U}^{t+m-l}_{{\parallel}\mathcal{L}}\right),\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & =2E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}} \right\rangle \right]+2E \left[\left\langle \boldsymbol{Q}^{t}_{{\parallel}\mathcal{L}},\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & +2E \left[\left\langle\sum_{l=1}^m \left(\boldsymbol{A}^{t+m-l}_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}\right),\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & +2E \left[\left\langle\sum_{l=1}^m \left(\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{S}^{t+m-l}_{{\parallel}\mathcal{L}}\right),\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \nonumber\\ & +2E \left[\left\langle\sum_{l=1}^m \boldsymbol{U}^{t+m-l}_{{\parallel}\mathcal{L}}, \boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right]. \end{align}

We will now bound each of the terms on the right-hand side of (18). The first term can be simplified as follows. Since we assumed that the schedule is always a perfect matching, we have $\sum_i s_{ij}=\sum_j s_{ij}=1$ . Since $\boldsymbol{\lambda}^\epsilon = (1-\epsilon )\boldsymbol{v}$ and $\boldsymbol{v} \in \mathcal{F}$ , we have $\sum_i \lambda_{ij}=\sum_j \lambda_{ij}=1-\epsilon$ , so by (13),

\begin{equation*} \begin{aligned} E \left[\left\langle \boldsymbol{Q}^t_{{\parallel}\mathcal{L}},\boldsymbol{S}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}} \right\rangle \right]= \frac{\epsilon}{N} E \Bigg[\sum_{ij}Q_{ij}^t\Bigg]. \end{aligned} \end{equation*}

For the second term in the right-hand side of (18), according to (13) we have

\begin{align*} & E \left[\left\langle \boldsymbol{Q}^{t}_{{\parallel}\mathcal{L}},\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] \\ & = \frac{1}{N}E \Bigg[\sum_{ij} Q_{ij}^t \Bigg(\sum_{j^{\prime}} \left(\lambda_{ij^{\prime}}-A^{t+m}_{ij^{\prime}} \right) \Bigg) \Bigg]\\ &+ \frac{1}{N}E \Bigg[\sum_{ij} Q_{ij}^t \Bigg(\sum_{i^{\prime}} \left(\lambda_{i^{\prime}j}-A^{t+m}_{i^{\prime}j} \right) \Bigg) \Bigg]\\ &-\frac{1}{N^2}E \Bigg[ \sum_{ij} Q_{ij}^t \Bigg(\sum_{i^{\prime}j^{\prime}} \left(\lambda_{i^{\prime}j^{\prime}}-A^{t+m}_{i^{\prime}j^{\prime}} \right) \Bigg) \Bigg]. \end{align*}

According to Lemma 2, by the geometric mixing of the Markov chain underlying the arrivals, we have

\begin{equation*} \begin{aligned} \left \vert E \Bigg[\sum_{ij}Q_{ij}^t \Bigg(\sum_{j^{\prime}} \left(\lambda_{ij^{\prime}}-A^{t+m}_{ij^{\prime}} \right) \Bigg) \Bigg] \right\vert &\leq E \Bigg[\sum_{ij} Q_{ij}^t \Bigg(\sum_{j^{\prime}}2A_{max}C_{ij}\alpha_{ij}^m \Bigg) \Bigg]\\ &\leq 2NA_{max}C_{max}\alpha_{max}^m E \Bigg[ \sum_{ij} Q_{ij}^t \Bigg], \end{aligned} \end{equation*}

where $C_{max} = \max_{ij}\{C_{ij}\}$ and $\alpha_{max} = \max_{ij}\{\alpha_{ij}\}$ . Thus the second term in the right-hand side of (18) can be bounded as follows:

\begin{equation*} \Big\lvert{E \left[\left\langle \boldsymbol{Q}^{t}_{{\parallel}\mathcal{L}},\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right]}\Big\rvert \leq 6A_{max}C_{max}\alpha_{max}^m E \Bigg[ \sum_{ij} Q_{ij}^t \Bigg]. \end{equation*}

For the third term in the right-hand side of (18), according to (13) we have

\begin{equation*} {} \begin{aligned} &E \left[\left\langle\sum_{l=1}^m \left(\boldsymbol{A}^{t+m-l}_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}\right),\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right]\\ & = -\sum_{ij}\sum_{l=1}^{m}E \Bigg[ \sum_{j^{\prime}}\frac{1}{N}\Big(A_{ij}^{t+m}-\lambda_{ij} \Big) \Big(A_{ij^{\prime}}^{t+m-l}-\lambda_{ij^{\prime}}\Big)\\ &\quad\quad\quad\quad\quad\quad +\sum_{i^{\prime}}\frac{1}{N}\Big(A_{ij}^{t+m}-\lambda_{ij} \Big) \Big(A_{i^{\prime}j}^{t+m-l}-\lambda_{i^{\prime}j}\Big)\\ &\quad\quad\quad\quad\quad\quad -\sum_{i^{\prime}j^{\prime}} \frac{1}{N^2} \Big(A_{ij}^{t+m}-\lambda_{ij} \Big)\Big(A_{i^{\prime}j^{\prime}}^{t+m-l}-\lambda_{i^{\prime}j^{\prime}}\Big) \Bigg]\\ & = -\sum_{ij}\sum_{l=1}^{m}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma_{ij,i^{\prime}j^{\prime}}(l) \right). \end{aligned} \end{equation*}

For the fourth term in the right-hand side of (18), since the future arrivals are independent of past service, and the Markov chain is in steady state, according to (13) we have

\begin{equation*} \begin{aligned} E \left[\left\langle\sum_{l=1}^m \left(\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{S}^{t+m-l}_{{\parallel}\mathcal{L}}\right),\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right] =0. \end{aligned} \end{equation*}

Now we bound the fifth term in the right-hand side of (18). From Lemma 6, we have $E\big[\lvert\lvert{{\overline{\boldsymbol{Q}}}}\rvert\rvert_2\big] < \infty$ . Since $\big\lvert{E\big[ \sum_{ij} \overline{Q}_{ij} \big]}\big\rvert = E\big[\lvert\lvert{{\overline{\boldsymbol{Q}}}}\rvert\rvert_1\big] \leq N E\big[\lvert\lvert{{\overline{\boldsymbol{Q}}}}\rvert\rvert_2\big]$ , we conclude that $E\left[ \sum_{ij} \overline{Q}_{ij} \right]$ is finite. Thus the mean drift of $\sum_{ij} Q_{ij}$ is zero in steady state,

\begin{align*} E \Bigg[ \sum_{ij} \left( Q^{t+1}_{ij} - Q^{t}_{ij}\right) \Bigg] &= E \Bigg[ \sum_{ij} \left( A^t_{ij} - S^t_{ij} +U^t_{ij} \right) \Bigg]\\ &= E \Bigg[ N(1-\epsilon) - N+ \sum_{ij} U^t_{ij} \Bigg] =0, \end{align*}

which gives $E \left[ \sum_{ij} U_{ij}^t \right] = N\epsilon. $

Since $\left\lvert{\sum_{j^{\prime}} \left(\lambda_{ij^{\prime}}-A^{t+m}_{ij^{\prime}} \right) }\right\rvert\leq 1+N A_{max}$ , according to (13) we have

\begin{equation*} \begin{aligned} \left\lvert{E \left[\left\langle\sum_{l=1}^m \boldsymbol{U}^{t+m-l}_{{\parallel}\mathcal{L}}, \boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{A}^{t+m}_{{\parallel}\mathcal{L}} \right\rangle \right]}\right\rvert \leq 3m\epsilon \big(1+NA_{max}\big). \end{aligned} \end{equation*}

Putting all of these terms back into (18), we conclude that

(19) \begin{equation} {} \begin{aligned} \mathcal{T}_{1} \geq & 2\left (\frac{1}{N}-6A_{max}C_{max}\frac{\alpha_{max}^m}{\epsilon}\right) E \bigg[\epsilon\sum_{ij} Q_{ij}^t\bigg]\\ &-2\sum_{ij}\sum_{l=1}^{m}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma_{ij,i^{\prime}j^{\prime}}(l) \right)\\ &-6m\epsilon \big(1+NA_{max} \big), \end{aligned} \end{equation}

and

(20) \begin{equation} {} \begin{aligned} \mathcal{T}_{1} & \leq2\left (\frac{1}{N}+6A_{max}C_{max}\frac{\alpha_{max}^m}{\epsilon}\right) E \bigg[\epsilon\sum_{ij} Q_{ij}^t\bigg]\\ &-2\sum_{ij}\sum_{l=1}^{m}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma_{ij,i^{\prime}j^{\prime}}(l) \right)\\ &+6m\epsilon \big(1+NA_{max} \big). \end{aligned} \end{equation}

In bounding $\mathcal{T}_{2}$ , $\mathcal{T}_{3}$ , and $\mathcal{T}_{4}$ , we do not have to deal with the correlation between $\boldsymbol{Q}$ and $\boldsymbol{A}$ , and so they can be bounded in the same manner as in [Reference Maguluri and Srikant22]. We present a brief overview here. It can be shown as in [Reference Maguluri, Burle and Srikant21, Reference Maguluri and Srikant22] that

(21) \begin{equation} {} \begin{aligned} \mathcal{T}_{2}&=E\left[ \lvert\lvert{\boldsymbol{A}^t_{{\parallel}\mathcal{L}}-\boldsymbol{S}^t_{{\parallel}\mathcal{L}}}\rvert\rvert^2 \right]\\ & = E\left[ \lvert\lvert{\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}-\boldsymbol{S}^t_{{\parallel}\mathcal{L}}}\rvert\rvert^2 \right]+E\left[ \lvert\lvert{\boldsymbol{A}^t_{{\parallel}\mathcal{L}}-\boldsymbol{\lambda}_{{\parallel}\mathcal{L}}}\rvert\rvert^2 \right]\\ &= \epsilon^2+ \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma_{ij,ij^{\prime}}(0)+\frac{1}{N} \sum_{i^{\prime}} \gamma_{ij,i^{\prime}j}(0)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma_{ij,i^{\prime}j^{\prime}}(0) \right). \end{aligned} \end{equation}

The term $\mathcal{T}_{3}$ can be bounded as

(22) \begin{equation} 0\leq \mathcal{T}_{3} = E \left[ \lvert\lvert{\boldsymbol{U}^t_{{\parallel}\mathcal{L}}}\rvert\rvert^2 \right] \leq E \left[ \lvert\lvert{\boldsymbol{U}^t}\rvert\rvert^2 \right]=E \Bigg[ \sum_{ij}\left(U_{ij}^t\right)^2 \Bigg] \stackrel{(a)}= E \bigg[ \sum_{ij}U_{ij}^t \bigg] =N\epsilon, \end{equation}

where (a) follows from $u_{ij} \in\{0,1\}$ , and the last equality follows from $E \big[ \sum_{ij} U_{ij}^t \big] = N\epsilon$ , derived above. Now, for the term $\mathcal{T}_{4}$ , we have

(23) \begin{align} \mathcal{T}_{4} & = 2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\parallel}\mathcal{L}}, \boldsymbol{U}^t_{{\parallel}\mathcal{L}} \right \rangle \right] =2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\parallel}\mathcal{L}}, \boldsymbol{U}^t \right \rangle \right] = 2E\left[ \left \langle \boldsymbol{Q}^{t+1}, \boldsymbol{U}^t \right \rangle \right]-2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\perp}\mathcal{L}}, \boldsymbol{U}^t\right \rangle \right]\notag\\& \stackrel{(a)}= -2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\perp}\mathcal{L}}, \boldsymbol{U}^t\right \rangle \right], \end{align}

where (a) follows from the definition of unused service. Using the Cauchy–Schwarz inequality, we have

(24) \begin{align} \Big\lvert{2E\left[ \left \langle \boldsymbol{Q}^{t+1}_{{\perp}\mathcal{L}}, \boldsymbol{U}^t\right \rangle \right]}\Big\rvert\leq 2\sqrt{E\left[\lvert\lvert{\boldsymbol{Q}^{t+1}_{{\perp}\mathcal{L}}}\rvert\rvert^2 \right]E\left[\lvert\lvert{\boldsymbol{U}^t}\rvert\rvert^2 \right]} \leq 2\sqrt{K^{\star\star}N\epsilon}, \end{align}

where the last inequality follows from Proposition 4. Substituting (19)–(24) into (17), putting back the superscript ${({\cdot})}^{(\epsilon)}$ , and taking $m(\epsilon)=\lfloor1/\sqrt{\epsilon}\rfloor$ , we get

(25) \begin{align} &2\left (\frac{1}{N}-6A_{max}C_{max}\frac{\alpha_{max}^{m(\epsilon)}}{\epsilon}\right) E \bigg[\epsilon\sum_{ij} \overline{Q}^{(\epsilon)}_{ij}\bigg] \nonumber \\ & \leq2\sum_{ij}\sum_{l=1}^{m(\epsilon)}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(l) \right) +6m(\epsilon)\epsilon \big(1+NA_{max} \big)\nonumber \\ &+\epsilon^2+ \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(0)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(0)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(0) \right)+N\epsilon+2\sqrt{K^{\star\star}N\epsilon}, \end{align}

and

(26) \begin{align} &2\left (\frac{1}{N}+6A_{max}C_{max}\frac{\alpha_{max}^{m(\epsilon)}}{\epsilon}\right) E \bigg[\epsilon\sum_{ij} \overline{Q}^{(\epsilon)}_{ij}\bigg]\nonumber \\ \geq&2\sum_{ij}\sum_{l=1}^{m(\epsilon)}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(l) \right) -6m(\epsilon)\epsilon \big(1+NA_{max} \big) \nonumber \\ &+\epsilon^2+ \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(0)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(0)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(0) \right) -N\epsilon-2\sqrt{K^{\star\star}N\epsilon}. \end{align}

Letting $\epsilon \to 0$ , we have

\begin{equation*} {} \begin{aligned} \lim_{\epsilon \to 0}\epsilon E\bigg[ \sum_{ij}\overline{Q}^{(\epsilon)}_{ij} \bigg] = \lim_{\epsilon \to 0} \Bigg[ & \sum_{ij}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(0)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(0)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(0)\right)\\ &+ 2\sum_{ij}\sum_{l=1}^{\lfloor1/\sqrt{\epsilon}\rfloor}\left(\frac{1}{N} \sum_{j^{\prime}} \gamma^{(\epsilon)}_{ij,ij^{\prime}}(l)+\frac{1}{N} \sum_{i^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j}(l)-\frac{1}{N^2} \sum_{i^{\prime}j^{\prime}} \gamma^{(\epsilon)}_{ij,i^{\prime}j^{\prime}}(l) \right) \Bigg]. \end{aligned} \end{equation*}

The proof is now complete after using Claim 5.
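As a sanity check on the choice $m(\epsilon)=\lfloor 1/\sqrt{\epsilon}\rfloor$ (not part of the proof), the following sketch, with an assumed mixing rate $\alpha_{max}$, evaluates the two $\epsilon$-dependent error terms appearing in (25) and (26) and shows that both become small as $\epsilon \to 0$: the mixing error $\alpha_{max}^{m(\epsilon)}/\epsilon$ decays because the exponential beats the polynomial, and the window error $m(\epsilon)\epsilon = O(\sqrt{\epsilon})$ vanishes as well.

```python
import math

alpha_max = 0.9   # assumed mixing rate of the arrival chains
results = {}
for eps in [0.05, 0.005, 0.0005, 0.00005]:
    m = math.floor(1 / math.sqrt(eps))
    # The two epsilon-dependent error terms in (25)-(26):
    results[eps] = (alpha_max**m / eps,   # mixing error alpha^{m(eps)}/eps
                    m * eps)              # window error  m(eps)*eps
```

For $\alpha_{max}$ close to 1 the mixing error first grows before the exponential decay takes over, which is why the bounds (25) and (26) are stated for all $\epsilon$ but become tight only in the limit.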

5. Conclusion

In this paper, we have analyzed the heavy-traffic behavior of a switch operating under the MaxWeight scheduling algorithm when the arrivals are Markovian. We have exactly characterized the heavy-traffic scaled steady-state mean sum queue length, and the result is consistent with that in the i.i.d. case. Our arguments generalize the drift method that was developed in [Reference Eryilmaz and Srikant6, Reference Maguluri and Srikant22] and the transform method in [Reference Hurtado-Lange and Maguluri13] to the case of Markovian arrivals. The key ideas are to consider drift over a time window whose size depends on the heavy-traffic parameter, and to exploit geometric mixing of Markov chains to get a handle on the Markovian correlations.

There are several possible directions for future research. One immediate direction is to extend the result to the so-called ‘generalized switch model’ [Reference Hurtado Lange and Maguluri14, Reference Stolyar31] when the complete resource pooling condition is not satisfied. (The generalized switch is a general queueing model that includes an incompletely saturated switch, a switch where the arrivals across the ports are saturated, wireless networks under interference and fading, cloud-computing scheduling, etc.) A second future direction is to establish the generality of Markovian arrivals in discrete-time systems. It was shown in [Reference Asmussen and Koole2] that Markovian arrival processes approximate any marked point process to an arbitrary degree of accuracy. Exploring the validity of such a result for discrete-time arrival processes is an open problem.

Appendix A. Heavy-traffic limit of queue length distribution

In this appendix, we will obtain the heavy-traffic limiting steady-state distribution of $\epsilon \overline{Q}^{(\epsilon)}$ . One way of doing this is to use $q^{k+1}$ as the test function to obtain the $k$th moment of the queue length. Once all the moments are obtained, the distribution can be inferred. Such an approach was used in [Reference Eryilmaz and Srikant6]. Here, we instead use the transform method that was presented in [Reference Hurtado-Lange and Maguluri13]. The key contribution is to extend the result in [Reference Hurtado-Lange and Maguluri13] to the case of Markovian arrivals in a single-server queue.

Theorem 3. Consider the same setting as in Theorem 1. Let $\overline{Q}^{ (\epsilon )}$ be a steady-state random variable to which the queue length processes $\left\{Q^t\right\}^{ (\epsilon )}_{t \geq 1}$ converge in distribution. Then for any $\theta \leq 0$ ,

\begin{equation*} \lim_{\epsilon \to 0}E \Big[e^{\epsilon\theta \overline{Q}^{(\epsilon)}} \Big] = \frac{1}{1-\theta\frac{\sigma_a^2+\sigma_s^2}{2}}.\end{equation*}

Therefore, we have that $\epsilon \overline{Q}^{(\epsilon)}$ converges in distribution to an exponential variable with mean $\frac{\sigma_a^2+\sigma_s^2}{2}$ .

The proof is presented in Appendix H.2. The key idea is to consider the m-step drift of the exponential test function, $e^{\epsilon\theta q}$ . As in the proof of Theorem 1, we again choose m as a function of $\epsilon$ and exploit the mixing rate of the underlying Markov chain. In addition, we use the following lemma to compare the arrival process with an independent Markovian process with the same transition probabilities. This lemma, which is proved in Appendix H.1, enables us to asymptotically decouple the queue length and arrival process in the heavy-traffic regime.
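For concreteness, the effective arrival variance $\sigma_a^2 = \gamma(0)+2\sum_{t\geq1}\gamma(t)$ appearing in the limiting mean can be computed numerically. The following sketch (not part of the paper; all parameter values are assumed) does this for a two-state modulating chain and checks the truncated sum against the closed form $\gamma(0)(1+\rho)/(1-\rho)$, which holds for a two-state chain because $\gamma(t)=\gamma(0)\rho^t$, where $\rho$ is the second eigenvalue of the transition matrix.

```python
# Two-state modulating chain with assumed transition probabilities a, b
# and assumed arrival values f[x] in state x.
a, b = 0.3, 0.2
f = (0.0, 2.0)
P = [[1 - a, a], [b, 1 - b]]
pi = (b / (a + b), a / (a + b))          # stationary distribution
lam = pi[0] * f[0] + pi[1] * f[1]        # mean arrival rate
rho = 1 - a - b                          # second eigenvalue of P

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

gamma0 = sum(pi[x] * (f[x] - lam) ** 2 for x in range(2))
Pt = [[1.0, 0.0], [0.0, 1.0]]
gamma_sum = 0.0
for t in range(1, 200):
    Pt = matmul(Pt, P)                   # Pt = P^t
    # gamma(t) = sum_{x,y} pi(x) (f(x)-lam) P^t(x,y) (f(y)-lam)
    gamma_sum += sum(pi[x] * (f[x] - lam) * (f[y] - lam) * Pt[x][y]
                     for x in range(2) for y in range(2))

sigma_a2 = gamma0 + 2 * gamma_sum        # gamma(0) + 2*sum_t gamma(t)
closed_form = gamma0 * (1 + rho) / (1 - rho)   # exact for 2-state chains
```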

Lemma 7. For every $\epsilon \in (0,\mu)$ , let $\left\{Y^t\right\}^{(\epsilon)}_{t\geq0}$ be a Markov chain that is independent of the chain $\{X^t\}^{(\epsilon)}_{t \geq 0}$ , but is defined on the same state space $\Omega$ , and with the same transition probability, so that it has the same stationary distribution, i.e.

\begin{equation*}\overline{Y}^{ (\epsilon )}\stackrel{d}=\overline{X}^{ (\epsilon )} \sim \pi^{(\epsilon)}.\end{equation*}

Let $(b^t)^{(\epsilon)}=f \big((Y^t)^{(\epsilon)} \big)$ and ${\big(Y^0\big)}^{(\epsilon)} \sim \pi^{(\epsilon)}$ . Then

\begin{align*} & \Bigg\vert \sum_{l=0}^{m-1} \bigg\{ E \Big[ \Big(\big(A^{t+l}\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big(\big(A^{t+m}\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) - \Big(\big(b^{m-l}\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big(\big(b^{0}\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \mid \big(X^{t}\big)^{(\epsilon)} \Big] \bigg\}\Bigg\vert\\ & \leq 4 \big(A_{max}+\lambda^{(\epsilon)} \big)A_{max} \big(C^{(\epsilon)}\big)^2m\big(\alpha^{(\epsilon)}\big)^m, \end{align*}

and

\begin{align*} \Big\lvert{E \Big[ \big(\big(A^{t+m}\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big)^2- \big(\big(b^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big)^2 \mid \big(X^{t}\big)^{(\epsilon)} \Big ]} \Big\rvert \leq 2 \big (A_{max}+\lambda^{(\epsilon)} \big)^2 C^{(\epsilon)}\big(\alpha^{(\epsilon)}\big)^m. \end{align*}

Appendix B. Proof of lemmas in Section 2

B.1. Proof of Lemma 1

Proof of Lemma 1. Let $\theta \in (0,1)$ . Applying Taylor expansion and using Condition 2 in the lemma, we have

(27) \begin{align} & E \left[e^{\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )}\mid \left(\boldsymbol{Q}^t, \boldsymbol{X}^t \right)= \left(\boldsymbol{q},\boldsymbol{x} \right) \right]\nonumber\\ & \leq E\left[1+\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )+\sum_{l=2}^\infty \frac{\theta^l D^l}{l!}\mid \left(\boldsymbol{Q}^t, \boldsymbol{X}^t \right)= \left(\boldsymbol{q},\boldsymbol{x} \right) \right]\\ &= 1+\theta E\left[ \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right]+\sum_{l=2}^\infty \frac{\theta^l D^l}{l!}.\nonumber \end{align}

According to Conditions 1 and 2, we have

(28) \begin{equation} \begin{aligned} & E\left[ \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right] \leq -\eta \mathcal{I}\left(\left(\boldsymbol{q},\boldsymbol{x}\right) \in \mathcal{B} \right) + D\mathcal{I}\left(\left(\boldsymbol{q},\boldsymbol{x}\right) \in \mathcal{B}^c \right), \end{aligned} \end{equation}

where

\begin{align*} \mathcal{B} = \left\{ \left(\boldsymbol{q} ,\boldsymbol{x} \right) \in (\mathcal{Q},\mathcal{X} )\,:\, Z (\boldsymbol{q},\boldsymbol{x} ) \geq \kappa\right\}. \end{align*}

Since $\theta \in (0,1)$ , we have

(29) \begin{align} \sum_{l=2}^\infty \frac{\theta^l D^l}{l!} = \theta^2 \sum_{l=2}^\infty \frac{\theta^{l-2} D^l}{l!}\leq \theta^2 \sum_{l=2}^\infty \frac{D^l}{l!} = \theta^2(e^D-1-D).\end{align}

Combining (27)–(29), we have

(30) \begin{align} & E \left[e^{\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )}\mid \left(\boldsymbol{Q}^t, \boldsymbol{X}^t \right)= \left(\boldsymbol{q},\boldsymbol{x} \right) \right]\nonumber\\ & \leq \left[1-\theta \eta +\theta^2 \left(e^D -1-D \right) \right]\mathcal{I} \left( \left(\boldsymbol{q}, \boldsymbol{x} \right) \in \mathcal{B} \right)\nonumber\\ &\quad +\left[1+\theta D +\theta^2 \left(e^D -1-D \right) \right]\mathcal{I} \left( \left(\boldsymbol{q} ,\boldsymbol{x} \right) \in \mathcal{B}^c \right)\nonumber\\ & \leq \delta_1 \mathcal{I}\left( \left(\boldsymbol{q} ,\boldsymbol{x} \right) \in \mathcal{B} \right)+\delta_2 \mathcal{I} \left( \left(\boldsymbol{q} ,\boldsymbol{x} \right) \in \mathcal{B}^c \right), \end{align}

where

\begin{align*} \delta_1 = 1-\theta \eta +\theta^2 \left(e^D -1-D \right) <1 \end{align*}

for sufficiently small $\theta$ , namely $\theta < \eta/\big(e^D-1-D\big)$ , and

\begin{align*} \delta_2 = 1+\theta D +\theta^2 \left(e^D -1-D \right). \end{align*}

According to (30), we have

\begin{align*} & E \big[ e^{\theta Z \left(\boldsymbol{Q}^{t+m} ,\boldsymbol{X}^{t+m} \right)} \big] \\ & = \sum_{ (\boldsymbol{q},\boldsymbol{x} ) \in (\mathcal{Q},\mathcal{X} )} E \left[ e^{\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )} e^{\theta Z \left(\boldsymbol{q} ,\boldsymbol{x} \right)}\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right] P \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right)\\ & =\sum_{(\boldsymbol{q},\boldsymbol{x} ) \in \mathcal{B}} e^{\theta Z \left(\boldsymbol{q} ,\boldsymbol{x} \right)}E \left[ e^{\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )} \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right] P \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right) \\ & +\sum_{(\boldsymbol{q},\boldsymbol{x} ) \in \mathcal{B}^c} e^{\theta Z \left(\boldsymbol{q} ,\boldsymbol{x} \right)}E \left[ e^{\theta \Delta^m Z (\boldsymbol{q},\boldsymbol{x} )}\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right] P \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right) \\ & \leq \delta_1 \sum_{ (\boldsymbol{q},\boldsymbol{x} ) \in (\mathcal{Q},\mathcal{X} )} e^{\theta Z \left(\boldsymbol{q} ,\boldsymbol{x} \right)} P \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right)\\ &+ \sum_{(\boldsymbol{q},\boldsymbol{x} ) \in \mathcal{B}^c} \left(\delta_2-\delta_1 \right)e^{\theta Z \left(\boldsymbol{q} ,\boldsymbol{x} \right)} P \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right)\\ & \leq \delta_1 E \big[e^{\theta Z \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) } \big] + \theta(D+\eta)e^{\theta \kappa}, \end{align*}

where the last inequality uses $\delta_2-\delta_1 = \theta (D+\eta )$ and the fact that $Z (\boldsymbol{q},\boldsymbol{x} ) < \kappa$ for $(\boldsymbol{q},\boldsymbol{x} ) \in \mathcal{B}^c$ .

By induction, for all $ t_0 \in [0, m]$ and $l \in \mathbb{N}_+$ we have

\begin{align*} & E \Big[ e^{\theta Z \left(\boldsymbol{Q}^{t_0+lm} ,\boldsymbol{X}^{t_0+lm} \right)} \Big] \leq \delta_1^l E \Big[ e^{\theta Z \left(\boldsymbol{Q}^{t_0} ,\boldsymbol{X}^{t_0} \right)} \Big]+\frac{1-\delta_1^l}{1-\delta_1} \theta\left(D+\eta \right)e^{\theta \kappa}. \end{align*}

Note that $E\big[ e^{\theta Z \left(\boldsymbol{Q}^{t_0} ,\boldsymbol{X}^{t_0} \right)} \big] \leq e^{\theta \hat{D}(t_0)} <\infty$ for all $t_0 \in[0,m]$ , by Condition 3. Since every sufficiently large $t \in \mathbb{N}$ can be written as $t = t_0+l m$ with $t_0 \in[0,m]$ and $l \in \mathbb{N}_+$ , we have

\begin{align*} \limsup_{t \to \infty} E \Big[ e^{\theta Z \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)} \Big] \leq \frac{ \theta\left(D+\eta \right)e^{\theta \kappa}}{1-\delta_1} \triangleq C^\star. \end{align*}
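The last step is a standard geometric-series argument: iterating $x_{l+1}\leq \delta_1 x_l + \theta(D+\eta)e^{\theta\kappa}$ drives any finite initial value toward the limiting constant. A minimal numerical sketch (with assumed values for the contraction factor and the additive constant, standing in for $\delta_1$ and $\theta(D+\eta)e^{\theta\kappa}$):

```python
# Assumed contraction factor delta_1 < 1 and additive constant c.
delta1, c = 0.8, 0.5
x = 100.0                    # a large (assumed) initial moment bound
for _ in range(200):
    x = delta1 * x + c       # one application of the m-step drift bound
limit = c / (1 - delta1)     # the geometric-series limit, as in C*
```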

B.2. Proof of Lemma 2

Proof of Lemma 2. Define the total variation distance between two distributions $\pi_1$ and $\pi_2$ on $\Omega$ as

(31) \begin{align} \lvert\lvert{\pi_1-\pi_2}\rvert\rvert _{TV}=\frac{1}{2}\sum_{x \in \Omega}\lvert{ \pi_1 (x )-\pi_2 (x )}\rvert. \end{align}

It has been shown in [Reference Levin and Peres18, Chapter 4, Theorem 4.9, p. 52] that for an irreducible, positive recurrent, and aperiodic finite-state Markov chain with transition probability matrix P and stationary distribution $\pi$ , there exist constants $\alpha \in (0,1 )$ and $C > 0$ such that for all $m \in \mathbb{N}_+$ ,

(32) \begin{align} \max_{x \in \Omega}\lvert\lvert{P^m (x,\cdot )-\pi}\rvert\rvert_{TV} \leq C\alpha^m. \end{align}

Thus, for any initial distribution $X^0 \sim \pi^0$ ,

\begin{align*} &\big\lvert{E \left[f(X^m)-\lambda \right]}\big\rvert\\ & \leq\sum_{y \in \Omega}f (y )\lvert{ \big(\pi^0P^m \big) (y )-\pi (y )}\rvert \\ & \leq\sum_{y \in \Omega}f (y )\sum_{x \in \Omega}\pi^0 (x )\lvert{P^m (x,y )-\pi (y )}\rvert\\ & \leq L\sum_{y \in \Omega}\sum_{x \in \Omega}\pi^0 (x )\lvert{P^m (x,y )-\pi (y )}\rvert\\ & = L\sum_{x \in \Omega}\pi^0 (x )\sum_{y \in \Omega}\lvert{P^m (x,y )-\pi (y )}\rvert\\ & \leq 2L\sum_{x \in \Omega}\pi^0 (x )\max_{x^{\prime} \in \Omega}\lvert\lvert{P^m (x^{\prime},\cdot )-\pi}\rvert\rvert_{TV}\\ & \leq 2LC\alpha^m. \end{align*}

Thus,

\begin{align*} \big\lvert{E \left[f(X^m)-\lambda \right]}\big\rvert\leq 2LC\alpha^m. \end{align*}
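To make the geometric mixing bound (32) concrete, the following sketch (not part of the proof; the two-state chain is assumed for illustration) computes $\max_{x}\lvert\lvert{P^m (x,\cdot )-\pi}\rvert\rvert_{TV}$ for increasing $m$ and checks that it decays geometrically. For this chain the stationary distribution is $(4/7, 3/7)$ and the second eigenvalue is $1-0.3-0.4=0.3$, so the decay rate $\alpha$ in (32) can be taken to be $0.3$.

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Assumed 2-state transition matrix, its stationary distribution,
# and its second eigenvalue magnitude.
P = [[0.7, 0.3], [0.4, 0.6]]
pi = (4 / 7, 3 / 7)
alpha = 0.3

Pm, tv_prev = P, None
for m in range(1, 20):
    # total variation distance max_x ||P^m(x, .) - pi||_TV
    tv = max(0.5 * sum(abs(Pm[x][y] - pi[y]) for y in range(2))
             for x in range(2))
    if tv_prev is not None:
        assert tv <= alpha * tv_prev + 1e-12   # geometric decay in m
    tv_prev = tv
    Pm = matmul(Pm, P)                          # Pm = P^{m+1}
```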

B.3. Proof of Lemma 3

Proof of Lemma 3. We have

\begin{align*} \lvert{\gamma(t)}\rvert & =\big\lvert{ E \big[ \big(f(X^t)-\lambda \big) \big(f\big(X^0\big)-\lambda \big) \big] }\big\rvert\\ & = \big\lvert{ E_{X^0 \sim \pi } \big[ \big(f\big(X^0\big)-\lambda \big)E \left[ f(X^t)-\lambda \mid X^0 \right] \big] }\big\rvert\\& \stackrel{(a)}{\leq}\ E_{X^0 \sim \pi } \big[\big\lvert{ \big(f\big(X^0\big)-\lambda \big)E \left[ f(X^t)-\lambda \mid X^0 \right] }\big\rvert \big] \\ & = E_{X^0 \sim \pi } \big[\big\lvert{f\big(X^0\big)-\lambda }\big\rvert \big\lvert{E \left[ f(X^t)-\lambda \mid X^0 \right] }\big\rvert \big]\\ & \leq2 \big(A_{max}+\lambda \big)A_{max}C\alpha^t, \end{align*}

where (a) follows from Jensen’s inequality and the last line follows from Lemma 2. Let $V_1 = \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\frac{m-t}{m}\gamma(t) $ and $V_2= \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma(t)$ . Consider

where (a) follows from Jensen’s inequality and the last line follows from Lemma 2. Let $V_1 = \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\frac{m-t}{m}\gamma(t) $ and $V_2= \gamma(0)+2\lim_{m \to \infty}\sum_{t=1}^{m}\gamma(t)$ . Consider

\begin{align*} \lvert{V_1-V_2}\rvert & = 2\lim_{m\to \infty}\Bigg\lvert{\sum_{t=1}^m \frac{m-t}{m} \gamma(t)-\sum_{t=1}^m\gamma(t)}\Bigg\rvert\\ & = 2\lim_{m\to \infty}\Bigg\lvert{\sum_{t=1}^m \frac{t}{m} \gamma(t)}\Bigg\rvert\\ & \leq 2\lim_{m\to \infty}\sum_{t=1}^m \frac{t}{m} \lvert{\gamma(t)}\rvert\\ & \leq \lim_{m\to \infty}4\big(A_{max}+\lambda \big)A_{max}C \sum_{t=1}^m \frac{t}{m}\alpha^t\\ & = \lim_{m\to \infty}4\big(A_{max}+\lambda \big)A_{max}C \left(\frac{\alpha-\alpha^{m+1}}{m(1-\alpha)^2} -\frac{\alpha^{m+1}}{1-\alpha} \right). \end{align*}

Since $\alpha \leq 1$ , we have $\lvert{V_1-V_2}\rvert \leq 0$ . Thus, $V_1=V_2$ .
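The closed-form evaluation of the averaged partial sum used in the last step can be verified numerically; the sketch below (illustrative only) checks the identity $\frac{1}{m}\sum_{t=1}^m t\alpha^t = \frac{\alpha-\alpha^{m+1}}{m(1-\alpha)^2}-\frac{\alpha^{m+1}}{1-\alpha}$ and that the quantity vanishes as $m$ grows:

```python
# Numerical check (illustrative) of the closed form
#   (1/m) sum_{t=1}^m t*alpha^t
#     = (alpha - alpha^(m+1)) / (m*(1-alpha)^2) - alpha^(m+1)/(1-alpha),
# and of the fact that it vanishes as m grows, which gives V_1 = V_2.
alpha = 0.6
vals = []
for m in (1, 5, 50, 500):
    lhs = sum(t * alpha**t for t in range(1, m + 1)) / m
    rhs = (alpha - alpha**(m + 1)) / (m * (1 - alpha)**2) \
          - alpha**(m + 1) / (1 - alpha)
    assert abs(lhs - rhs) < 1e-12
    vals.append(lhs)

assert vals[-1] < 1e-2           # the averaged sum tends to 0
```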

Appendix C. Proof of Claim 1

We will prove the claim using the dominated convergence theorem to justify a certain interchange of limit and summation. We have

\begin{align*} \big\lvert{ \gamma^{(\epsilon)}(t)}\big\rvert &=\big\lvert{ E \big[ \big((A^t)^{(\epsilon)}-\lambda^{(\epsilon)} \big) \big(\big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big) \big]}\big\rvert\\ & \leq E\big[\big\lvert{\big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big)}\big\rvert \big\lvert{E \big[ \big( (A^t)^{(\epsilon)}-\lambda^{(\epsilon)} \big)\vert X^0 \big] }\big\rvert \big]\\ & \leq \big(A_{max}+\lambda\big) 2A_{max}C\alpha^t, \end{align*}

where the last inequality follows from Lemma 2. Since

\begin{align*} \lim_{m\to\infty}\sum_{t=1}^m \big(A_{max}+\lambda\big) 2A_{max}C\alpha^t &= \lim_{m\to\infty}2\big(A_{max}+\lambda\big)A_{max}C\alpha\frac{1-\alpha^{m}}{1-\alpha}\\ &\leq 2\big(A_{max}+\lambda\big)A_{max}C\frac{\alpha}{1-\alpha}< \infty, \end{align*}

we conclude that the interchange of limits is valid, i.e.,

\begin{align*} \lim_{\epsilon \to 0}\lim_{m \to \infty}\sum_{t=1}^{m}\gamma^{(\epsilon)}(t) = \lim_{m \to \infty}\sum_{t=1}^{m}\lim_{\epsilon \to 0}\gamma^{(\epsilon)}(t) = \lim_{m \to \infty}\sum_{t=1}^{m}\gamma(t). \end{align*}

Therefore, $\lim_{\epsilon \to 0} \big(\sigma_a^{(\epsilon)}\big)^2 = \sigma_a^2.$
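The geometric decay of the autocovariances, which drives both the bound above and the dominated-convergence argument, can be computed exactly for a small example. The sketch below (illustrative only) uses a hypothetical two-state modulating chain with arrival rates $f$ and evaluates $\gamma(t) = E_\pi[(f(X^t)-\lambda)(f(X^0)-\lambda)]$ from the matrix powers $P^t$:

```python
# Exact autocovariances gamma(t) = E_pi[(f(X^t)-lambda)(f(X^0)-lambda)] for
# a hypothetical two-state modulating chain, computed from matrix powers;
# they decay geometrically, which is what makes sum_t gamma(t) summable.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1], [0.2, 0.8]]
pi = [2 / 3, 1 / 3]              # stationary distribution of P
f = [0.0, 3.0]
lam = 1.0                        # = sum_x pi(x) f(x)
alpha = 0.7                      # second eigenvalue of P

def gamma(t):
    Pt = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(t):
        Pt = mat_mul(Pt, P)
    return sum(pi[x] * (f[x] - lam)
               * sum(Pt[x][y] * (f[y] - lam) for y in range(2))
               for x in range(2))

# geometric decay: |gamma(t)| <= gamma(0) * alpha^t, hence summable
for t in range(1, 15):
    assert abs(gamma(t)) <= gamma(0) * alpha**t + 1e-12
```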

Appendix D. Details in proof of Theorem 1

D.1. Proof of Claim 2

Proof. We first use the queue evolution equation (3) to recursively expand $Q^{t+m}$ :

\begin{align*} & E \left[\left(Q^{t+m}\right)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ & \leq E \left[ \big(Q^{t+m-1}+A^{t+m-1}-S^{t+m-1} \big)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ & =E \bigg[\big(Q^{t+m-1}\big)^2+2Q^{t+m-1} \big(A^{t+m-1}-S^{t+m-1} \big) \\ &+ \big(A^{t+m-1}-S^{t+m-1} \big)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \bigg]\\ & =E\Bigg [2\left(Q^t+\sum_{i=0}^{m-2}\left(A^{t+i}-S^{t+i}+U^{t+i}\right)\right)\big(A^{t+m-1}-S^{t+m-1}\big) \\ & +\big(Q^{t+m-1}\big)^2+ \big(A^{t+m-1}-S^{t+m-1} \big)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x )\Bigg ]\\ & \leq E \big[2Q^t \big(A^{t+m-1}-S^{t+m-1} \big)+2m \big(A_{max}+S_{max} \big)^2\\ & + \big(Q^{t+m-1}\big)^2+\big(A_{max}+S_{max} \big)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \big]\\ & =E \big[\big(Q^{t+m-1}\big)^2+2Q^t \big(A^{t+m-1}-S^{t+m-1} \big)\\ &-(Q^t)^2+K_0 (m )\mid \left(Q^t,X^t \right)= (q,x ) \big],\end{align*}

where

\begin{align*} K_0 (m ) = 2m \big(A_{max}+S_{max} \big)^2+ \big(A_{max}+S_{max} \big)^2.\end{align*}

By induction we have

\begin{align*} & E \left[\left(Q^{t+m}\right)^2-(Q^t)^2\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ & \leq E \left[2Q^t \left(\sum_{i=1}^{m}A^{t+m-i}-\sum_{i=1}^{m}S^{t+m-i} \right)+m K_0 (m )\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ & = E \left [2Q^t\sum_{i=1}^{m} (A^{t+m-i}-\lambda )-2m\epsilon Q^t+mK_0 (m )\mid \left(Q^t,X^t \right)= (q,x ) \right].\end{align*}
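The recursive expansion above rests on the evolution equation (3), $Q^{t+1} = Q^t + A^t - S^t + U^t$ with $U^t$ the unused service; a quick simulation (illustrative only, with arbitrary bounded arrival and service sequences) confirms both the one-step identity and the $m$-step telescoping used in the claim:

```python
# Sanity check (illustrative, not part of the proof) of the evolution
# equation (3) and its m-step expansion: with U^t the unused service,
# Q^{t+1} = Q^t + A^t - S^t + U^t, hence
# Q^{t+m} = Q^t + sum_{i<m} (A^{t+i} - S^{t+i} + U^{t+i}).
import random

random.seed(0)
Q = 5                                  # Q^0
hist = []
for _ in range(1000):
    A = random.randint(0, 3)           # arrivals in this slot, A_max = 3
    S = random.randint(0, 4)           # offered service, S_max = 4
    U = max(S - Q - A, 0)              # unused service in this slot
    Q_next = Q + A - S + U
    assert Q_next == max(Q + A - S, 0) >= 0
    hist.append((A, S, U))
    Q = Q_next

# m-step telescoping starting from Q^0 = 5
assert Q == 5 + sum(a - s + u for a, s, u in hist)
```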

D.2. Proof of Claim 3

Fix $\epsilon$ , and take $Z(Q^t,X^t) = Q^t$ as the test function:

\begin{align*} \Delta^{m(\epsilon)} Z(q,x) &= \Big(Q^{t+m(\epsilon)}-Q^t\Big)\mathcal{I}\big(\left(Q^t,X^t\right) = (q,x)\big) \\ & = \left(\sqrt{\left(Q^{ t+m(\epsilon)}\right)^2}-\sqrt{\left(Q^{ t}\right)^2}\right)\mathcal{I}\big(\left(Q^t,X^t\right) = (q,x)\big)\\ & \leq \frac{1}{2Q^{t}} \Big(\big(Q^{ t+m(\epsilon)}\big)^2- \left(Q^{ t}\right)^2\Big)\mathcal{I}\big(\left(Q^t,X^t\right) = (q,x)\big),\end{align*}

where the last line follows from the fact that $f(x)= \sqrt{x}$ is a concave function for $x\geq 0$ , so that $f(y)-f(x) \leq (y-x)f^{\prime}(x) = \frac{y-x}{2\sqrt{x}}$ with $y = \left(Q^{t+m(\epsilon)}\right)^2$ and $x = \left(Q^{t}\right)^2$ . Therefore,

\begin{align*} &E \left[\Delta^{m(\epsilon)} Z(q,x)\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ &\leq E \left[\frac{1}{2Q^{t}} \Big(\big(Q^{ t+m(\epsilon)}\big)^2- \left(Q^{ t}\right)^2\Big)\mid \left(Q^t,X^t \right)= (q,x ) \right]\\ &\leq -K_1(\epsilon) +\frac{1}{{2Q^{t}}}m(\epsilon) K_0 (m(\epsilon) ).\end{align*}

Let ${\kappa(m(\epsilon))} = \frac{m(\epsilon) K_0 (m(\epsilon) )}{K_1({\epsilon})}$ and $\eta = \frac{ K_1(\epsilon)}{2}$ . Then, for all $ (q,x) \in (\mathcal{Q},\mathcal{X} )$ with $Z (q,x ) \geq {\kappa(m(\epsilon))}$ , we have

\begin{align*} E \left[\Delta^{m(\epsilon)} Z (q, x )\mid \left(Q^t,X^t \right) = (q,x ) \right] \leq -\eta. \end{align*}

Moreover, we have $P\left(\Delta^{m(\epsilon)} Z(Q,X) \leq {m(\epsilon)}\big(A_{max}+S_{max}\big)\right)=1$ . Using Lemma 1, we conclude that

\begin{align*} E \Big[ \Big(\overline{Q}^{ (\epsilon )} \Big)^2 \Big] < \infty.\end{align*}

D.3. Proof of Claim 4

Proof. We will prove the claim by bounding the terms on the right-hand side of (8). First, setting the drift of $Q^t$ to zero in steady state, for all $i \in \mathbb{N}_+$ we have

(33) \begin{align} E \left[U^{t+i} \right]=E \left[Q^{t+i+1}-Q^{t+i}+S^{t+i}-A^{t+i} \right]=\mu-\lambda=\epsilon. \end{align}

Thus,

(34) \begin{align} -\epsilon S_{max}= - (\mu-\lambda )S_{max}\leq E \left[-U^t S^t \right]\leq E \big[-\big(U^t\big)^2 \big] \leq 0. \end{align}

This bounds one of the terms in (8). Also note that it implies the following bound, which we will use shortly:

(35) \begin{align} \Bigg\lvert{\sum_{i=1}^{m}E \big[2U^{t+m-i} \left(A^{t+m}-\lambda \right ) \big]}\Bigg\rvert\leq 2m \big(A_{max}+\lambda \big) \epsilon. \end{align}
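As a sanity check on (33), the sketch below computes the stationary distribution of a discrete-time single-server queue with Bernoulli($\lambda$) arrivals and Bernoulli($\mu$) offered service (a hypothetical instance, not the Markov-modulated model of the paper) on a truncated state space, and confirms that the mean unused service equals $\mu-\lambda=\epsilon$:

```python
# Numerical check (illustrative) of (33): in steady state, E[U] = mu - lambda.
# Hypothetical instance: Bernoulli(lam) arrivals, Bernoulli(mu) offered
# service, Q^{t+1} = max(Q^t + A^t - S^t, 0), stationary distribution
# computed by forward iteration on a truncated state space {0, ..., K}.
lam, mu = 0.3, 0.6
K = 100
p = [0.0] * (K + 1)
p[0] = 1.0
for _ in range(3000):                  # iterate p <- p P until stationary
    nxt = [0.0] * (K + 1)
    for q in range(K + 1):
        for a, pa in ((0, 1 - lam), (1, lam)):
            for s, ps in ((0, 1 - mu), (1, mu)):
                nxt[min(max(q + a - s, 0), K)] += p[q] * pa * ps
    p = nxt

# U^t = 1 exactly when the server is on (S=1) but Q + A = 0
EU = mu * (1 - lam) * p[0]
assert abs(EU - (mu - lam)) < 1e-6     # E[U] = mu - lambda = epsilon = 0.3
```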

Now we will bound the first term on the right-hand side of (8), which is the most challenging term. Notice that

\begin{align*} E \left[2Q^t (A^t-\lambda ) \right] \leq 2\big(A_{max}+\lambda\big) E \left[Q^t\right] < \infty.\end{align*}

Since we are in steady state, for all $m \in \mathbb{N}_+$ we have

(36) \begin{align} E \left[2Q^t (A^t-\lambda ) \right] = E \left[2Q^{t+m} \left(A^{t+m}-\lambda \right ) \right]. \end{align}

According to (3), we have

(37) \begin{align} & E \left[2Q^{t+m} \left(A^{t+m}-\lambda \right ) \right] \nonumber\\ & = E \left[2 (Q^{t+m-1}+A^{t+m-1}-S^{t+m-1}+U^{t+m-1} ) \left(A^{t+m}-\lambda \right ) \right]\\ & = E \left[2Q^{t+m-1} \left(A^{t+m}-\lambda \right ) \right]+E \left[2 (A^{t+m-1}-\lambda ) \left(A^{t+m}-\lambda \right ) \right]\nonumber\\ &+E \left[2U^{t+m-1} \left(A^{t+m}-\lambda \right ) \right]\nonumber, \end{align}

where the last equality uses the independence of $S^{t+m-1}$ and $A^{t+m}$ . By repeating these steps recursively we obtain, for all $m \in \mathbb{N}_+$ ,

(38) \begin{align} & E \left[2Q^t (A^t-\lambda ) \right] \nonumber\\ & = E \left[2Q^t \left(A^{t+m}-\lambda \right ) \right] + 2\sum_{i=1}^{m}\gamma(i)+ \sum_{i=1}^{m}E \left[2U^{t+m-i} \left(A^{t+m}-\lambda \right ) \right], \end{align}

where

\begin{align*} \gamma(i) = E \left[ \left(A^{t+m-i}-\lambda \right) \left(A^{t+m}-\lambda \right) \right] = E \left[ \left(A^{t+i}-\lambda \right) \left(A^{t}-\lambda \right) \right]. \end{align*}

According to Lemma 2,

\begin{align*} &\big\lvert{ E \left[2Q^t \left(A^{t+m}-\lambda \right ) \right]}\big\rvert \leq E \left[2Q^t \big\lvert{ E \left[ \left(A^{t+m}-\lambda \right)\mid (X^t,Q^t ) \right]}\big\rvert \right] \leq 4A_{max}C\alpha^m E [Q^t ]. \end{align*}

Thus, from (38) and (35) we have

\begin{align*} &-4A_{max}C\alpha^m E \big[Q^t \big] - 2m \big(A_{max}+\lambda \big) \epsilon+2\sum_{i=1}^{m}\gamma(i)\\ & \leq E \big[2Q^t (A^t-\lambda ) \big] \\ & \leq 4A_{max}C\alpha^m E \big[Q^t \big] + 2m \big(A_{max}+\lambda \big) \epsilon+2\sum_{i=1}^{m}\gamma(i). \end{align*}

Now, substituting this and (34) into (8), for all $m \in \mathbb{N}_+$ we have

\begin{align*} 2\left(1-2A_{max}C\frac{\alpha^m}{\epsilon}\right) E\left[\epsilon Q^t\right] & \leq \gamma(0) + 2\sum_{i=1}^{m}\gamma(i) + \sigma_s^2 + 2m \big(A_{max}+\lambda \big) \epsilon + \epsilon^2, \\ 2\left(1+2A_{max}C\frac{\alpha^m}{\epsilon}\right) E\left[\epsilon Q^t\right] & \geq \gamma(0) + 2\sum_{i=1}^{m}\gamma(i) + \sigma_s^2- 2m \big(A_{max}+\lambda \big)\epsilon - S_{max}\epsilon + \epsilon^2. \end{align*}

D.4. Proof of Claim 5

Proof. We will prove the claim using the dominated convergence theorem to justify a certain interchange of limit and summation. To this end, note that for any $ \epsilon>0$ and an integer $M \geq \big\lceil \frac{1}{\sqrt{\epsilon}} \big\rceil$ , since $X^0 \stackrel{d}{=}\overline{X}^{ (\epsilon )}$ , we have

\begin{align*} & \left\vert\sum_{i=1}^{M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big] - \sum_{i=1}^{\big\lfloor \frac{1}{\sqrt{\epsilon}} \big\rfloor } E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big)\Big]\right\vert\\ & = \left\vert\sum_{i=\big\lceil \frac{1}{\sqrt{\epsilon}} \big\rceil}^{ M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]\right\vert\\ & = \left\vert\sum_{i=\big\lceil \frac{1}{\sqrt{\epsilon}} \big\rceil}^{ M} E\Big[\Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) E \Big[ \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \mid X^0 \Big] \Big]\right\vert\\ & \leq \sum_{i=\big\lceil \frac{1}{\sqrt{\epsilon}} \big\rceil}^{ M} E\Big[\Big\lvert{ \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} }\Big\rvert \Big\lvert{E \Big[ \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \mid X^0 \Big]}\Big\rvert \Big]\\ & \stackrel{(a)}\leq \sum_{i=\big\lceil \frac{1}{\sqrt{\epsilon}} \big\rceil}^{ M} \big(A_{max}+\lambda\big) 2A_{max}C\alpha^i\\ & \leq 2A_{max}C\big(A_{max}+\lambda\big)\frac{\alpha^{\frac{1}{\sqrt{\epsilon}}}}{1-\alpha}, \end{align*}

where we used Lemma 2 to get (a). Let $M \to \infty$ ; for any $ \epsilon>0$ , since $X^0 \stackrel{d}{=}\overline{X}^{ (\epsilon )}$ , we have

(39) \begin{align} & \sum_{i=1}^{\big\lfloor \frac{1}{\sqrt{\epsilon}} \big\rfloor } E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big)\Big] \\ &\geq \lim_{M \to \infty}\sum_{i=1}^{M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big] - 2A_{max}C\big(A_{max}+\lambda\big)\frac{\alpha^{\frac{1}{\sqrt{\epsilon}}}}{1-\alpha}\nonumber \end{align}

and

(40) \begin{align} & \sum_{i=1}^{\big\lfloor \frac{1}{\sqrt{\epsilon}} \big\rfloor } E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]\\ &\leq \lim_{M \to \infty} \sum_{i=1}^{M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]+ 2A_{max}C\big(A_{max}+\lambda\big)\frac{\alpha^{\frac{1}{\sqrt{\epsilon}}}}{1-\alpha}.\nonumber \end{align}

Letting $\epsilon \to 0$ and combining (39) and (40), we have

\begin{align*} &\lim_{\epsilon \to 0}\sum_{i=1}^{\big\lfloor \frac{1}{\sqrt{\epsilon}} \big\rfloor } E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big] \\ & = \lim_{\epsilon \to 0} \lim_{M \to \infty} \sum_{i=1}^{M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \big) \Big]. \end{align*}

Notice that

\begin{align*} &\lim_{M \to \infty}\sum_{i=1}^{M} \Big\lvert{E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]}\Big\rvert \leq 2A_{max}C\big(A_{max}+\lambda\big)\frac{\alpha}{1-\alpha}. \end{align*}

It follows from Lebesgue’s dominated convergence theorem that

(41) \begin{align} & \lim_{\epsilon \to 0} \lim_{M \to \infty} \sum_{i=1}^{M} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]\nonumber\\ & = \lim_{M \to \infty} \sum_{i=1}^{M} \lim_{\epsilon \to 0} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big]. \end{align}

According to the weak convergence of the underlying Markov chain (Equation (5)), we have

(42) \begin{align} &\lim_{\epsilon \to 0} E\Big[ \Big( \big(A^i\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big) \Big( \big(A^0\big)^{(\epsilon)}-\lambda^{(\epsilon)} \Big)\Big] = \gamma(i). \end{align}

Thus, combining (41) and (42), we have

\begin{align*} \lim_{\epsilon \to 0}\sum_{i=1}^{\big\lfloor \frac{1}{\sqrt{\epsilon}} \big\rfloor}\gamma^{(\epsilon)}(i) = \lim_{M \to \infty} \sum_{i=1}^{M} \gamma(i). \end{align*}

Appendix E. Proof of Claim 6

Proof. As in the proof of Theorem 1, we consider the m-step drift of the Lyapunov function $V({\cdot})$ . For any $m \in \mathbb{N}_+$ ,

(43) \begin{align} &E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ \leq & E \Bigg[\sum_{l=1}^m\big( 2 \big\langle \boldsymbol{Q} ^{t+m-l },\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda} \big\rangle + \big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{S}^{t+m-l} }\big\rvert\big\rvert^2\nonumber\\ &+ 2\big\langle \boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{S} ^{t+m-l} \big\rangle\big) \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \Bigg]. \end{align}

The first term on the right-hand side of (43) can be bounded as follows:

\begin{align*} &E \left[ \left\langle \boldsymbol{Q} ^{t+m-l },\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda} \right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\\ =\,& E \left[ \left\langle \boldsymbol{Q}^t +\sum_{i=0}^{m-l-1} \left(\boldsymbol{A}^{t+i} -\boldsymbol{S}^{ t+i} +\boldsymbol{U}^{t+i} \right),\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda} \right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\\ \leq& \left \langle \boldsymbol{q} ,E \left[\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \right\rangle+ mN^2 \big(A_{max}+S_{max} \big) \big(A_{max}+\lambda_{max} \big) \\ \stackrel{(a)}\leq& \big\lvert\big\lvert{ \boldsymbol{q}}\big\rvert\big\rvert \, \big\lvert\big\lvert{E \left[\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]}\big\rvert\big\rvert + mN^2 \big(A_{max}+S_{max} \big) \big(A_{max}+\lambda_{max} \big) \\ \stackrel{(b)}\leq &2NA_{max}C_{max}\alpha_{max}^{m-l}\lvert\lvert{\boldsymbol{q}}\rvert\rvert+ mN^2 \big(A_{max}+S_{max} \big) \big(A_{max}+\lambda_{max} \big), \end{align*}

where (a) follows from the Cauchy–Schwarz inequality and (b) follows from Lemma 2. The second term on the right-hand side of (43) can be bounded from above as follows:

\begin{align*} &E \left[\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{S}^{t+m-l} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \leq N^2\big(A_{max}+S_{max}\big)^2. \end{align*}

The last term on the right-hand side of (43) can be bounded as

\begin{align*} & E \Big[\big\langle \boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{S} ^{t+m-l} \big\rangle \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \Big]\\ =& E \Big[ E \big[\big\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{S} ^{t+m-l} \big\rangle \mid \big(\boldsymbol{Q} ^{t+m-l} ,\boldsymbol{X}^{t+m-l} \big) \big]\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \Big]\\ \stackrel{(a)}\leq & E \Big[\min_{\boldsymbol{r}\in \mathcal{C}}\Big\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{r}\Big\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \Big]\\ \stackrel{(b)}\leq& E \Big[\Big\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\left(\boldsymbol{\lambda}+\epsilon {\textbf{1}} \right)\Big\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \Big] \\ =&- \epsilon E \left[\left\langle\boldsymbol{Q}^t +\sum_{i=0}^{m-l-1} \left(\boldsymbol{A}^{t+i} -\boldsymbol{S}^{ t+i} +\boldsymbol{U}^{t+i} \right), {\textbf{1}} \right\rangle \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\\ \leq &- \epsilon {\lvert\lvert{\boldsymbol{q}}\rvert\rvert_1} +\epsilon mN^2\big(A_{max}+S_{max}\big)\\ \leq & - \epsilon \lvert\lvert{\boldsymbol{q} }\rvert\rvert +\epsilon mN^2\big(A_{max}+S_{max}\big), \end{align*}

where (a) follows from MaxWeight scheduling. Since $\boldsymbol{\lambda} \in int(\mathcal{C})$ , there exists a positive number $\epsilon $ such that $\boldsymbol{\lambda}+\epsilon {\textbf{1}} \in int(\mathcal{C})$ . This gives (b). The last inequality comes from the fact that for any vector $\boldsymbol{x}$ , its $l_1$ norm is no less than its $l_2$ norm. Based on the discussion above, we have

\begin{align*} E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]& \leq \frac{K_2(m) }{2} +\left(2NA_{max}C_{max}\frac{1-\alpha_{max}^m}{1-\alpha_{max}}-m\epsilon \right)\lvert\lvert{\boldsymbol{q}}\rvert\rvert\\ & \leq \frac{K_2 (m(\epsilon ) )}{2} -\frac{m(\epsilon )\epsilon }{2}\lvert\lvert{\boldsymbol{q}}\rvert\rvert, \end{align*}

where

\begin{align*} m(\epsilon ) = \min \left\{m \in \mathbb{N}_+\mid 2NA_{max}C_{max}\frac{1-\alpha_{max}^m}{1-\alpha_{max}} < \frac{m\epsilon }{2} \right\} \end{align*}

and

\begin{align*} K_2 (m(\epsilon ) ) &= m(\epsilon )N^2 \big(A_{max}+S_{max} \big)^2+2m(\epsilon )^2 N^2 \big(A_{max}+S_{max}\big)\big(\epsilon +A_{max}+\lambda_{max} \big). \end{align*}
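For intuition about the horizon $m(\epsilon)$ defined above, the following sketch (with assumed values for $N$, $A_{max}$, $C_{max}$, and $\alpha_{max}$) computes it by direct search; since $\alpha_{max}^m$ is negligible at the crossing point, $m(\epsilon)$ grows roughly like $4NA_{max}C_{max}/((1-\alpha_{max})\epsilon)$:

```python
# Sketch (with assumed constants) of the horizon m(epsilon): the smallest m
# with 2*N*A_max*C_max*(1 - alpha_max**m)/(1 - alpha_max) < m*epsilon/2.
def m_of_eps(eps, N=2, A_max=1.0, C_max=1.0, alpha_max=0.9):
    m = 1
    while 2 * N * A_max * C_max * (1 - alpha_max**m) / (1 - alpha_max) >= m * eps / 2:
        m += 1
    return m

for eps in (0.5, 0.1, 0.01):
    m = m_of_eps(eps)
    # the defining inequality indeed holds at m(eps)
    assert 2 * 2 * (1 - 0.9**m) / 0.1 < m * eps / 2

# m(epsilon) grows like 1/epsilon once alpha_max**m is negligible
assert m_of_eps(0.01) > m_of_eps(0.1) > m_of_eps(0.5)
```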

Appendix F. Details in proof of Proposition 4

F.1. Proof of Claim 7

Proof. We have

\begin{align*} \big\lvert{ \Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )}\big\rvert &= \big\lvert{ \left[W_{\perp\mathcal{K}} \left(\boldsymbol{Q}^{t+m},\boldsymbol{X}^{t+m} \right)-W_{\perp\mathcal{K}} \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) \right]}\big\rvert \mathcal{I} \left(\left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right ) = (\boldsymbol{q},\boldsymbol{x} ) \right)\\ &\stackrel{(a)}{\leq}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m}-\boldsymbol{Q}_{\perp\mathcal{K}}^{t}}\big\rvert\big\rvert\\ & \stackrel{(b)}{\leq} \big\lvert\big\lvert{\boldsymbol{Q}^{t+m}-\boldsymbol{Q}^{t}}\big\rvert\big\rvert\\ & = \sqrt{\sum_{ij} \left(Q_{ij}^{t+m}-Q_{ij}^{t} \right)^2}\\ & \stackrel{(c)}{\leq}\sqrt{\sum_{ij} \big(m\big(A_{max}+S_{max} \big) \big)^2}\\ & = Nm\big(A_{max}+S_{max} \big), \end{align*}

where (a) follows from the reverse triangle inequality and (b) follows from the non-expansiveness of the projection. The inequality (c) is valid because

\begin{align*} \big\lvert{Q_{ij}^{t+m}-Q_{ij}^{t}}\big\rvert=\bigg\lvert{\sum_{l=0}^{m-1} \big(A_{ij}^{t+l}-S_{ij}^{t+l}+U_{ij}^{t+l}\big)}\bigg\rvert\leq m\big(A_{max}+S_{max} \big). \end{align*}

F.2. Proof of Claim 8

Proof. We will use Lemma 5 to bound the m-step drift $\Delta^m W_{\perp\mathcal{K}}(\boldsymbol{q} ) $ . To this end, we will first bound the drift $\Delta^m V (\boldsymbol{q},\boldsymbol{x} )$ and then bound $\Delta^m V _{\parallel\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )$ . First, consider the drift $\Delta^m V (\boldsymbol{q},\boldsymbol{x} )$ :

(44) \begin{align} &E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & =E \left[\lvert\lvert{\boldsymbol{Q}^{t+m}}\rvert\rvert^2-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & \leq E \left[\lvert\lvert{\boldsymbol{Q}^{t+m-1}+\boldsymbol{A}^{t+m-1} -\boldsymbol{S}^{t+m-1}}\rvert\rvert^2-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & = E \bigg[\lvert\lvert{\boldsymbol{Q}^{t+m-1}}\rvert\rvert^2-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert^2+2\left\langle \boldsymbol{Q}^{t+m-1} , \boldsymbol{A}^{t+m-1} -\boldsymbol{S}^{t+m-1} \right\rangle \nonumber\\ &+ \lvert\lvert{\boldsymbol{A}^{t+m-1} -\boldsymbol{S}^{t+m-1}}\rvert\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \bigg]\nonumber\\ & = E \left[\sum_{l=1}^m 2\left\langle \boldsymbol{Q}^{t+m-l} ,\boldsymbol{A}^{t+m-l}-\boldsymbol{\lambda} \right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+E \left[\sum_{l=1}^m \lvert\lvert{\boldsymbol{A}^{t+m-l}-\boldsymbol{S}^{t+m-l}}\rvert\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & +\sum_{l=1}^m E \left[2\left\langle \boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{S}^{t+m-l} \right\rangle \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right], \end{align}

where the last equation follows from repeating the previous step inductively. By the tower property, the last term in (44) can be written as

(45) \begin{align} &\sum_{l=1}^m E \left[2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{\lambda}-\boldsymbol{S} ^{t+m-l} \right\rangle \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & = \sum_{l=1}^m E \bigg[E \bigg[2\left\langle\boldsymbol{Q} ^{t+m-l }, (1-\epsilon )\boldsymbol{v}-\boldsymbol{S} ^{t+m-l} \right\rangle\mid \left(\boldsymbol{Q} ^{t+m-l} ,\boldsymbol{X}^{t+m-l} \right) \bigg]\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \bigg]\nonumber\\ & = \sum_{l=1}^m -2\epsilon E \left[\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & +\sum_{l=1}^m E \bigg[E \bigg[2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}-\boldsymbol{S} ^{t+m-l} \right\rangle\mid \left(\boldsymbol{Q} ^{t+m-l} ,\boldsymbol{X}^{t+m-l} \right)= (\boldsymbol{q}^{\prime},\boldsymbol{x}^{\prime} ) \bigg]\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \bigg]. \end{align}

Since we use MaxWeight scheduling, using Lemma 4, we have

(46) \begin{align} & E \bigg[2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}-\boldsymbol{S} ^{t+m-l} \right\rangle\mid \left(\boldsymbol{Q} ^{t+m-l} ,\boldsymbol{X}^{t+m-l} \right)= (\boldsymbol{q}^{\prime},\boldsymbol{x}^{\prime} ) \bigg]\nonumber\\ & \leq E \bigg[2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}-\left(\boldsymbol{v} +\frac{v_{\min}}{\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l}}\big\rvert\big\rvert}\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l}\right) \right\rangle\mid \left(\boldsymbol{Q} ^{t+m-l} ,\boldsymbol{X}^{t+m-l} \right)= (\boldsymbol{q}^{\prime},\boldsymbol{x}^{\prime} ) \bigg]\nonumber\\ & \leq -2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l} }\big\rvert\big\rvert\nonumber\\ & \leq -2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert+2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l} -\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert\nonumber\\ & \leq -2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert+2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q} ^{t+m-l }-\boldsymbol{Q}^t }\big\rvert\big\rvert\nonumber\\ & \leq -2v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert+2v_{\min}N m \big(A_{max}+S_{max} \big). \end{align}

Combining (44)–(46), we have

(47) \begin{align} & E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & \leq E \left[\sum_{l=1}^m 2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \nonumber\\ &+E \left[\sum_{l=1}^m \big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{S}^{t+m-l} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]+2Nm^2v_{\min}\big(A_{max}+S_{max} \big)\nonumber\\ & + \sum_{l=1}^m -2\epsilon E \left[\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]-2m v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert\nonumber\\ & \leq E \left[\sum_{l=1}^m 2\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+ \sum_{l=1}^m -2\epsilon E \left[\left\langle\boldsymbol{Q} ^{t+m-l },\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & +2Nm^2v_{\min}\big(A_{max}+S_{max} \big)+N^2m \big(A_{max}+S_{max} \big)^2-2m v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert. \end{align}

Next, we will bound the drift $\Delta^m V_{\parallel\mathcal{K}} (\boldsymbol{q} ,\boldsymbol{x} )$ :

(48) \begin{align} &E \left[\Delta^m V _{\parallel\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & = E \left[\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m} }\big\rvert\big\rvert^2-\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & = E \left[\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m} -\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}}\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m} -\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & +E \left[\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}}\big\rvert\big\rvert^2-\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &\geq E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m} -\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+E \left[\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}}\big\rvert\big\rvert^2-\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \end{align}

The first term can be bounded below as follows:

(49) \begin{align} &E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m} -\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & = E\bigg[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}^{t+m}-\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m} -\left(\boldsymbol{Q}^{t+m-1}-\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-1}\right)\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \bigg]\nonumber\\ \geq & E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}^{t+m} -\boldsymbol{Q}^{t+m-1}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]. \end{align}

The last line follows from the fact that $\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1} \in \mathcal{K}$ and $\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m} \in \mathcal{K}^o$ , so that

\begin{align*} \left\langle\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1},\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m} \right\rangle \leq 0 \quad \text{and} \quad \left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-1} \right\rangle = 0. \end{align*}
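These two displayed facts are generic properties of the projection onto a closed convex cone and its polar. The sketch below (illustrative only) demonstrates them with the nonnegative orthant as a stand-in for $\mathcal{K}$; the cone in the paper is problem-specific:

```python
# Illustration of the two projection facts above, using the nonnegative
# orthant as a stand-in cone (the cone K of the paper is problem-specific):
# <q_par, q_perp> = 0, and <p, q_perp> <= 0 for every p in the cone.
def project_orthant(q):
    """Projection onto K = R^n_+; the residual lies in the polar cone."""
    q_par = [max(v, 0.0) for v in q]
    q_perp = [v - w for v, w in zip(q, q_par)]
    return q_par, q_perp

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

q = [3.0, -2.0, 0.5, -1.0]
q_par, q_perp = project_orthant(q)

assert abs(dot(q_par, q_perp)) < 1e-12      # orthogonality at the projection
p = [1.0, 4.0, 0.0, 2.0]                    # an arbitrary element of K
assert dot(p, q_perp) <= 0                  # polar-cone inequality
```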

Substituting (49) into (48), we have

(50) \begin{align} &E \left[\Delta^m V_{\parallel\mathcal{K}} (\boldsymbol{q} ,\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ \geq& E \bigg[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1},\boldsymbol{A}^{t+m-1} -\boldsymbol{S}^{t+m-1} +\boldsymbol{U}^{t+m-1} \right\rangle \nonumber\\ &+\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}}\big\rvert\big\rvert^2-\big\lvert\big\lvert{\boldsymbol{Q}_{\parallel\mathcal{K}}^{t} }\big\rvert\big\rvert^2\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \bigg]\nonumber\\ \stackrel{(a)}{\geq}& \sum_{l=1}^m E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+\sum_{l=1}^m E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{\lambda}-\boldsymbol{S} ^{t+m-l} \right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ \stackrel{(b)}{=} &\sum_{l=1}^m E \left[2\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+\sum_{l=1}^m E \left[-2\epsilon\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & +\sum_{l=1}^m E \left[-2\left\langle\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} , \boldsymbol{S} ^{t+m-l} -\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ & =\sum_{l=1}^m E \left[2\left\langle 
\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber \\ &+ \sum_{l=1}^m E \left[-2\epsilon\left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} ,\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right], \end{align}

where (a) follows from recursively expanding the previous expression and noting that

\begin{align*} \left\langle \boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}, \boldsymbol{U}^{t+m-1} \right\rangle \geq 0, \end{align*}

since each component of $\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-1}$ and $\boldsymbol{U}^{t+m-1}$ is non-negative. The equality (b) follows from the fact that $\boldsymbol{\lambda} = (1-\epsilon)\boldsymbol{v}$ . Since $\boldsymbol{q}_{\parallel\mathcal{K}} \in \mathcal{K}$ and $\boldsymbol{s}, \boldsymbol{v} \in \mathcal{F}$ , according to (12), we have $\left\langle\boldsymbol{Q}_{\parallel\mathcal{K}}^{t+m-l} , \boldsymbol{S} ^{t+m-l} -\boldsymbol{v}\right\rangle=0$ , which gives us the last equation. Combining (47) and (50), we have

(51) \begin{align} & E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )-\Delta^m V (\boldsymbol{q}_{\perp\mathcal{K}} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] \nonumber\\ \leq & E \left[\sum_{l=1}^m 2\left\langle\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l} ,\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+2Nm^2 A_{max}+N^2m \big(A_{max}+S_{max} \big)^2\nonumber\\ & + \sum_{l=1}^m -2\epsilon E \left[\left\langle\boldsymbol{Q}_{\perp\mathcal{K}}^{t+m-l} ,\boldsymbol{v}\right\rangle\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] -2m v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert\nonumber\\ \stackrel{(a)}\leq & 2\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+ 2Nm^2 A_{max} (N\big(A_{max}+\lambda\big)+\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert )-2m v_{\min}\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert\nonumber\\ & +2m\epsilon\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert \lvert\lvert{\boldsymbol{v}}\rvert\rvert+2Nm^2v_{\min}\big(A_{max}+S_{max} \big)+N^2m \big(A_{max}+S_{max} \big)^2\nonumber\\ =\,& 2\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\nonumber\\ &+ 2\big\lvert\big\lvert{\boldsymbol{Q}_{\perp\mathcal{K}}^t }\big\rvert\big\rvert 
\left(m\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert -mv_{\min} \right)+K_3(m), \end{align}

where (a) follows from the Cauchy–Schwarz inequality and

\begin{align*} K_3(m) &= 2Nm^2\big(A_{max}+S_{max} \big)\big(N\big(A_{max}+\lambda\big)+\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert +v_{\min} \big)+N^2m \big(A_{max}+S_{max} \big)^2. \end{align*}

Using Lemma 5, we conclude that

\begin{align*} &E \left[\Delta^m W_{\perp\mathcal{K}} (\boldsymbol{q},\boldsymbol{x} ) \mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} )\right]\\ &\leq E\left[\frac{1}{2\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}}}\rvert\rvert} \left(\Delta^m V (\boldsymbol{q},\boldsymbol{x} )-\Delta^m V (\boldsymbol{q}_{\perp\mathcal{K}} ) \right)\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\\ & \leq E \left[\sum_{l=1}^m\big\lvert\big\lvert{\boldsymbol{A}^{t+m-l} -\boldsymbol{\lambda}}\big\rvert\big\rvert\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right] + \frac{K_3(m)}{2\lvert\lvert{\boldsymbol{q}_{\perp\mathcal{K}} }\rvert\rvert}+m (\epsilon\lvert\lvert{\boldsymbol{v}}\rvert\rvert-v_{\min} ). \end{align*}

Appendix G. Proof of Lemma 6

Proof of Lemma 6. Note that $V^{\prime}(\boldsymbol{q},\boldsymbol{x}) = \lvert\lvert{\boldsymbol{q}_{{\parallel}\mathcal{L}}}\rvert\rvert^2\leq \lvert\lvert{\boldsymbol{q}}\rvert\rvert^2=V(\boldsymbol{q},\boldsymbol{x})$ . Therefore, in order to prove the lemma, we will show that $E [V (\boldsymbol{Q},\boldsymbol{X} ) ]$ is finite in steady state. We do this using Lemma 1 on the Lyapunov function $W (\boldsymbol{q},\boldsymbol{x} ) = \lvert\lvert{\boldsymbol{q}}\rvert\rvert = \sqrt{V (\boldsymbol{q},\boldsymbol{x} )}$ . Note that its m-step drift is

\begin{equation*} \Delta^m W (\boldsymbol{q},\boldsymbol{x} ) \triangleq \left[W \left(\boldsymbol{Q}^{t+m},\boldsymbol{X}^{t+m} \right)-W \left(\boldsymbol{Q}^t,\boldsymbol{X}^t \right) \right]\mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right). \end{equation*}

To use Lemma 1, we first check that Conditions 2 and 3 of Lemma 1 hold:

\begin{align*} \lvert{ \Delta W (\boldsymbol{q},\boldsymbol{x} )}\rvert & = \big\lvert{ \lvert\lvert{\boldsymbol{Q}^{t+1} }\rvert\rvert-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert}\big\rvert\, \mathcal{I} \left( \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right)\\ & \leq \lvert\lvert{\boldsymbol{Q}^{t+1} -\boldsymbol{Q}^t}\rvert\rvert\\ & = \sqrt{\sum_{ij} \left(Q_{ij}^{t+1} -Q_{ij}^t \right)^2}\\ & \leq \sqrt{\sum_{ij} \big(A_{max}+S_{max} \big)^2}\\ & = N \big(A_{max}+S_{max} \big). \end{align*}

Therefore, Conditions 2 and 3 are satisfied with $D(m) = Nm\big(A_{max}+S_{max}\big)$ and $\hat{D}(t_0) =N \big(A_{max}+S_{max} \big) $ .

We now verify Condition 1:

\begin{align*} &E \left[\Delta^m W (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right]\\ =& E \left[\lvert\lvert{\boldsymbol{Q}^{t+m} }\rvert\rvert-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right]\\ \leq & E \left[\frac{1}{2\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert} \left(\lvert\lvert{\boldsymbol{Q}^{t+m} }\rvert\rvert^2-\lvert\lvert{\boldsymbol{Q}^t }\rvert\rvert^2 \right)\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right)= (\boldsymbol{q},\boldsymbol{x} ) \right]\\ \leq & \frac{1}{2\lvert\lvert{\boldsymbol{q} }\rvert\rvert}E \left[\Delta^m V (\boldsymbol{q},\boldsymbol{x} )\mid \left(\boldsymbol{Q}^t ,\boldsymbol{X}^t \right) = (\boldsymbol{q},\boldsymbol{x} ) \right]\\ \stackrel{(a)}\leq & -\frac{m(\epsilon )}{4}\ \text{ for all $\boldsymbol{q}$ such that $W (\boldsymbol{q}) \geq \frac{2K_2 (m(\epsilon ) )}{\epsilon }$}, \end{align*}

where (a) follows from Claim 6,

\begin{align*} m(\epsilon ) = \min \left\{m \in \mathbb{N}_+\mid 2NA_{max}C_{max}\frac{1-\alpha_{max}^m}{1-\alpha_{max}}< \frac{m\epsilon }{2} \right\}, \end{align*}

and

\begin{align*} K_2 (m(\epsilon ) ) &= m(\epsilon )N^2 \big(A_{max}+S_{max} \big)^2+2m(\epsilon )^2 N^2 \big(A_{max}+S_{max}\big)\big(\epsilon +A_{max}+\lambda_{max} \big). \end{align*}

Therefore, Condition 1 is also satisfied with $\kappa(m(\epsilon)) = \frac{2K_2 (m(\epsilon ) )}{\epsilon }$ and $\eta(m(\epsilon)) = \frac{m(\epsilon )}{4}$. Applying Lemma 1, we conclude that all moments of $W (\overline{\boldsymbol{Q}},\overline{\boldsymbol{X}} )$ are finite in steady state. The lemma follows from noting that $V^{\prime} (\boldsymbol{q} ,\boldsymbol{x} ) = \lvert\lvert{\boldsymbol{q}_{{\parallel}\mathcal{L}}}\rvert\rvert^2$ is the squared norm of the projection of $\boldsymbol{q}$ onto the subspace $\mathcal{L}$.
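The horizon $m(\epsilon)$ defined above is well defined because the left-hand side of the defining inequality is bounded in m while the right-hand side grows linearly. As a quick numerical sketch, one can compute $m(\epsilon)$ directly from the definition; the constants N, A_max, C_max, and alpha_max below are hypothetical placeholders, not values from the paper.

```python
def m_of_eps(eps, N=2, A_max=1.0, C_max=1.0, alpha_max=0.5):
    """Smallest positive integer m with
    2*N*A_max*C_max*(1 - alpha_max**m)/(1 - alpha_max) < m*eps/2.
    The left side is bounded above by 2*N*A_max*C_max/(1 - alpha_max),
    while the right side grows linearly in m, so the loop terminates.
    """
    m = 1
    while 2 * N * A_max * C_max * (1 - alpha_max**m) / (1 - alpha_max) >= m * eps / 2:
        m += 1
    return m
```

As expected, $m(\epsilon)$ grows as $\epsilon$ shrinks, since the mixing error must be beaten by an ever-smaller linear term.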

Appendix H. Details in proof of Theorem 3

H.1. Proof of Lemma 7

Proof of Lemma 7. Since $\epsilon$ is fixed, we drop $({\cdot})^{(\epsilon)}$ for simplicity. First, let $X^t =\boldsymbol{x}_0$ , let $\alpha^{t+l}$ denote the conditional distribution of $X^{t+l}$ , and let P denote the transition matrix of $\{X^t\}_{t\geq0}$ . Then, for any $l \in \{0,1,\ldots,m-1\}$ ,

\begin{align*} &E \left[ \big(A^{t+l}-\lambda \big) \left(A^{t+m}-\lambda \right)\mid X^t =\boldsymbol{x}_0 \right]\nonumber\\ &=\sum_{x \in \Omega}\alpha^{t+l} \left(x \right) \left[f (x )-\lambda \right] \left[\sum_{y \in \Omega}P_{xy}^{m-l}f (y )-\lambda \right]\\ &=\sum_{x \in \Omega}\alpha^{t+l} (x ) \left[f (x )-\lambda \right] \left[\sum_{y \in \Omega} \left(P_{xy}^{m-l}-\pi (y ) \right)f (y ) \right].\nonumber \end{align*}

Similarly, we have

\begin{align*} &E \big[ \big(b^0-\lambda \big) \big(b^{m-l}-\lambda \big) \big]\\ &=\sum_{x \in \Omega}\pi (x ) \left[f (x )-\lambda \right] \left[\sum_{y \in \Omega}P_{xy}^{m-l}f (y )-\lambda \right]\\ &=\sum_{x \in \Omega}\pi (x ) [f (x )-\lambda ] \left[\sum_{y \in \Omega} \left(P_{xy}^{m-l}-\pi (y ) \right)f (y ) \right]. \end{align*}

Thus,

\begin{align*} &\Bigg\lvert{\sum_{l=0}^{m-1} \big(E \big[ \big(A^{t+l}-\lambda \big) \big(A^{t+m}-\lambda \big)\mid X^t=\boldsymbol{x}_0 \big]-E \big[ \big(b^{m-l}-\lambda \big) \big(b^0-\lambda \big) \big] \big)}\Bigg\rvert\\ =&\Bigg\lvert{\sum_{l=0}^{m-1}\sum_{x \in \Omega} \left(\alpha^{t+l} (x )-\pi (x ) \right) \left(f (x )-\lambda \right)\sum_{y \in \Omega} \left[P_{xy}^{m-l}-\pi (y ) \right]f (y )}\Bigg\rvert\\ =&\Bigg\lvert{\sum_{l=0}^{m-1}\sum_{x \in \Omega}\sum_{z \in \Omega}\alpha^t (z ) \left[P_{zx}^l -\pi (x ) \right] \left[f (x )-\lambda \right]\sum_{y \in \Omega} \left[P_{xy}^{m-l}-\pi (y ) \right]f (y )}\Bigg\rvert\\ \leq& \big(A_{max}+\lambda \big)A_{max}\sum_{l=0}^{m-1}\sum_{x \in \Omega}\sum_{z \in \Omega}\alpha^t (z )\lvert{P_{zx}^l -\pi (x )}\rvert\sum_{y \in \Omega}\lvert{P_{xy}^{m-l}-\pi (y )}\rvert \end{align*}

and

\begin{align*} &\Big\lvert{E \left[ \left(A^{t+m}-\lambda \right )^2- \big(b^0-\lambda \big)^2\mid X^t =\boldsymbol{x}_0 \right]}\Big\rvert\\ =&\bigg\lvert{\sum_{x \in \Omega} \left(\alpha^{t+m} (x )-\pi (x ) \right) \left(f (x )-\lambda \right)^2}\bigg\rvert\\ =&\bigg\lvert{\sum_{x \in \Omega}\sum_{y \in \Omega}\pi (y ) \left(P_{yx}^m-\pi (x ) \right) \left(f (x )-\lambda \right)^2}\bigg\rvert\\ \leq & 2 \big(A_{max}+\lambda \big)^2C\alpha^m. \end{align*}

According to (31) and (32), for all $\boldsymbol{x}_0 \in \Omega$ we have

\begin{align*} &\Bigg\lvert{\sum_{l=0}^{m-1} \left(E \left[ \big(A^{t+l}-\lambda \big) \left(A^{t+m}-\lambda \right)\mid X^t=\boldsymbol{x}_0 \right]-E \left[ \left(b^{m-l}-\lambda \right) \left(b^0-\lambda \right) \right] \right)}\Bigg\rvert\\ &\leq 4 \left(A_{max}+\lambda \right)A_{max}C^2m\alpha^m, \end{align*}

and

\begin{align*} \Big\lvert{E \left[ \left(A^{t+m}-\lambda \right )^2- \big(b^0-\lambda \big)^2 \mid X^t=\boldsymbol{x}_0\right]}\Big\rvert \leq 2 \big(A_{max}+\lambda \big)^2C\alpha^m. \end{align*}

Since these bounds hold uniformly over $\boldsymbol{x}_0 \in \Omega$, the lemma follows.
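The argument above rests on the geometric mixing bound $\sum_{x}\lvert{P_{zx}^l-\pi (x )}\rvert \leq 2C\alpha^l$. As a sanity check, this geometric decay is easy to observe numerically; the three-state chain below is a hypothetical illustration, not an object from the paper (it is doubly stochastic, so its stationary distribution is uniform).

```python
import numpy as np

# Hypothetical 3-state chain; rows are cyclic shifts, columns also sum to 1,
# so the stationary distribution pi is uniform.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.5, 0.3],
              [0.3, 0.2, 0.5]])
pi = np.full(3, 1 / 3)

def total_deviation(t):
    """max over starting states z of sum_x |P^t(z, x) - pi(x)|,
    the quantity bounded by 2*C*alpha^t in the proof."""
    Pt = np.linalg.matrix_power(P, t)
    return np.abs(Pt - pi).sum(axis=1).max()
```

For this chain the deviation decays geometrically at the rate of the second-largest eigenvalue modulus, so it is negligible after a few tens of steps.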

H.2. Proof of Theorem 3

Proof of Theorem 3. In the rest of the proof, we consider the system in its steady state, and so for every time t, $(Q^t)^{(\epsilon)} \stackrel{d}= \overline{Q}^{(\epsilon)}$ . For ease of exposition, we again drop the superscript $({\cdot})^{(\epsilon)}$ and just use $Q^t$ . Then $A^t$ denotes the arrivals in steady state, and the queue length at time $t+m$ is

\begin{equation*}Q^{t+m}=Q^t+\sum_{l=0}^{m-1} A^{t+l}-\sum_{l=0}^{m-1} S^{t+l}+\sum_{l=0}^{m-1} U^{t+l},\end{equation*}

which has the same distribution as $Q^t$ for all $m \in \mathbb{N}_+$ . Note that, from the definition of the unused service and (4), it can easily be shown (by considering the cases of $U=0$ and $U\neq 0$ ) that

\begin{equation*}\begin{aligned} \big(e^{\epsilon\theta Q^{t+1}}-1 \big) \big(e^{-\epsilon\theta U^{t}}-1 \big)&=0,\\e^{\epsilon\theta Q^{t+1}} &= 1- e^{-\epsilon\theta U^{t}}+e^{\epsilon\theta \big(Q^t+A^t-S^t\big)}.\end{aligned}\end{equation*}

Taking the expectation with respect to the stationary distribution on both sides, we have

(52) \begin{align}E \big[e^{\epsilon\theta Q^{t+1}} \big]= 1- E \big[e^{-\epsilon\theta U^{t}} \big]+E \big[ e^{\epsilon \theta (Q^t+A^t-S^t )} \big].\end{align}
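The pointwise identities behind (52) can be verified mechanically by considering the two cases $U=0$ and $U\neq 0$. The sketch below enumerates small integer queue, arrival, and service values; the constant c stands in for $\epsilon\theta \leq 0$ and its value is purely illustrative.

```python
import math

c = -0.05  # stands in for epsilon * theta, with theta <= 0
for Q in range(6):
    for A in range(4):
        for S in range(4):
            Q1 = max(Q + A - S, 0)   # Lindley recursion: next queue length
            U = Q1 - (Q + A - S)     # unused service, always >= 0
            # U > 0 forces Q1 = 0, so the product vanishes identically
            assert Q1 * U == 0
            assert abs((math.exp(c * Q1) - 1) * (math.exp(-c * U) - 1)) < 1e-12
            # equivalently, e^{c Q1} = 1 - e^{-c U} + e^{c (Q + A - S)}
            assert abs(math.exp(c * Q1)
                       - (1 - math.exp(-c * U) + math.exp(c * (Q + A - S)))) < 1e-12
```

Expanding the vanishing product and using $Q^{t+1}-U^t = Q^t+A^t-S^t$ recovers the second identity, whose expectation is (52).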

Since $\theta \leq 0$ , we have that $e^{\epsilon\theta Q^t}\leq 1$ . Therefore, in steady state $E \big[e^{\epsilon\theta Q^{t}}\big]$ is bounded, and so we can set its drift to zero:

(53) \begin{align} E \big[e^{\epsilon\theta Q^{t+1}} \big] -E \big[e^{\epsilon\theta Q^{t}} \big]=0.\end{align}

Combining (52) and (53), we have

\begin{align*} E \big[e^{\epsilon\theta Q^{t}} \big]&= 1- E \big[e^{-\epsilon\theta U^{t}} \big]+E \big[e^{\epsilon \theta (Q^t+A^t-S^t )} \big],\\ E \big[e^{\epsilon\theta Q^{t}} \big(e^{\epsilon \theta (A^t-S^t )}-1 \big) \big]&= E \big[e^{-\epsilon\theta U^{t}} \big]-1.\end{align*}

Therefore, since we are in steady state, we can replace the index t by $t+m$ and get

\begin{align*} E \big[e^{\epsilon\theta Q^{t+m}} \big(e^{\epsilon \theta (A^{t+m}-S^{t+m} )}-1 \big) \big]= E \big[e^{-\epsilon\theta U^{t+m}} \big]-1 = E \big[e^{-\epsilon\theta U^{t}} \big]-1,\end{align*}

where the last equality holds because of steady state and the fact that $E \big[e^{-\epsilon\theta U^{t}}\big]$ is bounded in steady state. Boundedness follows from the observation that $U^t$ is bounded by $S_{max}$ . Expanding $Q^{t+m}$ using (3), we get

(54) \begin{align} & E \Big[e^{\epsilon\theta Q^{t}}e^{\epsilon\theta \big(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \big)} \Big(e^{\epsilon \theta \big(A^{t+m}-S^{t+m} \big )}-1 \Big) \Big ]\nonumber\\ &= E \big[e^{-\epsilon\theta U^{t}} \big]-1.\end{align}

Taking the Taylor expansion of the right-hand side of (54) with respect to $\theta$ , we have

(55) \begin{align} E \big[e^{-\epsilon\theta U^{t}} \big]-1 =& E \left[-\epsilon\theta U^{t} \right]+E \left[\frac{\epsilon^2\theta^2 \left(U^{t}\right)^2}{2} \right]+E\left[\frac{-\left(\epsilon U^t\right)^3\hat{\theta}^3}{6}\right]\nonumber\\ \stackrel{(a)}{=}& -\epsilon^2\theta+\frac{\epsilon^2\theta^2 E \big[\left(U^{t}\right)^2 \big]}{2}+ E\left[\frac{-\left(\epsilon U^t\right)^3\hat{\theta}^3}{6}\right]\nonumber\\ = &\epsilon^2\theta \left(-1+\frac{\theta E \big[\left(U^{t}\right)^2 \big]}{2} \right)+E\left[\frac{-\left(\epsilon U^t\right)^3\hat{\theta}^3}{6}\right],\end{align}

where (a) is due to (33).

Since $\hat{\theta} \in (\theta,0]$ and $U^t \leq S_{max}$ , the absolute value of the last term in (55) can be bounded above by $K \epsilon^3$ , where K is a constant. Therefore, for ease of exposition, we can write (55) as

\begin{equation*} \begin{aligned} &E \big[e^{-\epsilon\theta U^{t}} \big]-1 = \epsilon^2\theta \left(-1+\frac{\theta E \big[\left(U^{t}\right)^2 \big]}{2} \right)+O\big(\epsilon^3\big) = \epsilon^2\theta \left(-1+\frac{\theta O(\epsilon)}{2} \right)+O\big(\epsilon^3\big),\end{aligned}\end{equation*}

where the last equation comes from the fact that

\begin{equation*}E \big[\left(U^{t}\right)^2 \big] \leq E\left[\left(U^{t}S^{t}\right) \right] \leq S_{max}E\left[ U^{t}\right]\leq S_{max}\epsilon\end{equation*}

using (33). Taking the Taylor expansion of the left-hand side of (54) with respect to $\theta$ , for the same reason, we have

\begin{equation*} \begin{aligned} & E \left[e^{\epsilon\theta Q^{t}}e^{\epsilon\theta \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right)} \left(e^{\epsilon \theta \left(A^{t+m}-S^{t+m} \right )}-1 \right) \right]\\ = &\epsilon^2\theta E \Bigg[e^{\epsilon\theta Q^{t}} \left(1+\theta\epsilon \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right)+O \big(\epsilon^2 \big) \right)\\ &\left(\epsilon^{-1} \left(A^{t+m}-S^{t+m} \right)+\frac{\theta}{2} \left(A^{t+m}-S^{t+m} \right)^2+O \left(\epsilon \right) \right) \Bigg]. \end{aligned}\end{equation*}

This can be divided into seven terms as follows:

\begin{align*} & \epsilon^2\theta \Bigg \{\underbrace{ E \left[e^{\theta \epsilon Q^t}\epsilon^{-1} \left(A^{t+m}-S^{t+m} \right) \right]}_{\mathcal{T}_{5}}\\ +& \underbrace{E \left[e^{\theta \epsilon Q^t}\theta \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right) \left(A^{t+m}-S^{t+m} \right ) \right]}_{\mathcal{T}_{6}}\\ +& \underbrace{E \left[O (\epsilon )e^{\theta\epsilon Q^t} \left(A^{t+m}-S^{t+m} \right ) \right]}_{\mathcal{T}_{7}}+ \underbrace{E \left[e^{\theta \epsilon Q^t}\frac{\theta}{2} \left(A^{t+m}-S^{t+m} \right )^2 \right]}_{\mathcal{T}_{8}}\\ +& \underbrace{E \left[e^{\theta \epsilon Q^t}\frac{ \epsilon\theta^2}{2} \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right) \left(A^{t+m}-S^{t+m} \right )^2 \right]}_{\mathcal{T}_{9}}\\ +&\underbrace{E \left[O \big(\epsilon^2 \big)\frac{\theta}{2} \left(A^{t+m}-S^{t+m} \right)^2 \right]}_{\mathcal{T}_{10}}\\ +& \underbrace{E \left[e^{\theta\epsilon Q^t} \left(1+\theta\epsilon \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right)+O (\epsilon^2 ) \right)O (\epsilon ) \right]}_{\mathcal{T}_{11}}\Bigg \}.\end{align*}

According to Lemma 2, we have

\begin{align*} \mathcal{T}_{5}=&E \left[e^{\theta \epsilon Q^t}\epsilon^{-1} \left(A^{t+m}-\lambda \right ) \right] +E \left[e^{\theta \epsilon Q^t}\epsilon^{-1} \left(\lambda-S^{t+m} \right) \right] \\ =& E \left[e^{\theta \epsilon Q^t}\epsilon^{-1} \left(A^{t+m}-\lambda \right ) \right] - E \big[e^{\theta\epsilon Q^t} \big].\end{align*}

Since

\begin{align*} \lvert{E \left[e^{\theta \epsilon Q^t}\epsilon^{-1} \left(A^{t+m}-\lambda \right ) \right]}\rvert \leq \frac{\alpha^m}{\epsilon},\end{align*}

we have

\begin{align*} - E \big[e^{\theta\epsilon Q^t} \big]-\frac{\alpha^m}{\epsilon} \leq \mathcal{T}_{5}\leq - E \big[e^{\theta\epsilon Q^t} \big]+\frac{\alpha^m}{\epsilon}.\end{align*}

Next,

\begin{align*} \mathcal{T}_{6}= &E \left[e^{\theta \epsilon Q^t}\theta \left(\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l}+\sum_{l=0}^{m-1}U^{t+l} \right) \left(A^{t+m}-S^{t+m} \right ) \right]\\ =&\theta E \left[e^{\theta \epsilon Q^t} \left (\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l} \right) \left(A^{t+m}-S^{t+m} \right ) \right]\\ &+ \theta E \left[e^{\theta \epsilon Q^t} \sum_{l=0}^{m-1}U^{t+l} \left(A^{t+m}-S^{t+m} \right ) \right].\end{align*}

Since

(56) \begin{align} \Bigg\lvert{\theta E \left[e^{\theta \epsilon Q^t} \sum_{l=0}^{m-1}U^{t+l} \left(A^{t+m}-S^{t+m} \right ) \right]}\Bigg\rvert \leq &\lvert{ \theta}\rvert \sqrt{E \left[e^{2\theta \epsilon Q^t} \left(A^{t+m}-S^{t+m} \right )^2 \right]}\sqrt{E\left[ \left(\sum_{l=0}^{m-1}U^{t+l} \right)^2\right]}\nonumber\\ \leq &\lvert{ \theta}\rvert m\sqrt{\epsilon} \big(A_{max}+S_{max} \big)\sqrt{S_{max}},\end{align}

we have

\begin{align*} \mathcal{T}_{6} &=\theta E \left[e^{\theta \epsilon Q^t} \left (\sum_{l=0}^{m-1}A^{t+l}-\sum_{l=0}^{m-1}S^{t+l} \right) \left(A^{t+m}-S^{t+m} \right ) \right] +mO\big(\sqrt{\epsilon}\big).\end{align*}

Also, $\lvert{ \mathcal{T}_{7}}\rvert, \lvert{ \mathcal{T}_{9}}\rvert, \lvert{ \mathcal{T}_{10}}\rvert, \lvert{ \mathcal{T}_{11}}\rvert$ can be bounded as follows:

\begin{align*} &\lvert{ \mathcal{T}_{7}}\rvert =O (\epsilon ),\\ &\lvert{ \mathcal{T}_{9}}\rvert \leq \frac{\theta^2\epsilon}{2}m \big(A_{max}+S_{max} \big) \big(A_{max}+S_{max} \big)^2=mO(\epsilon),\\ &\lvert{ \mathcal{T}_{10}}\rvert \leq \frac{\mid \theta\mid }{2} \big(A_{max}+S_{max} \big)^2O \big(\epsilon^2 \big)=O\big(\epsilon^2\big),\\ &\lvert{\mathcal{T}_{11}}\rvert \leq O (\epsilon ).\end{align*}

Now consider

\begin{align*} &\mathcal{T}_{6}+\mathcal{T}_{8}\\ =\,& \theta E \left[e^{\theta\epsilon Q^t}\sum_{l=0}^{m-1} \big(A^{t+l}-\lambda \big) \left(A^{t+m}-\lambda \right ) \right] -\theta E \left[e^{\theta\epsilon Q^t}\sum_{l=0}^{m-1} \big(A^{t+l}-\lambda \big) \big(S^{t+m}-\lambda \big) \right]\\ &-\theta E \left[e^{\theta\epsilon Q^t}\sum_{l=0}^{m-1} \big(S^{t+l}-\lambda \big) \left(A^{t+m}-\lambda \right ) \right] +\theta E \left[e^{\theta\epsilon Q^t}\sum_{l=0}^{m-1} \big(S^{t+l}-\lambda \big) \big(S^{t+m}-\lambda \big) \right]\\ &+mO\big(\sqrt{\epsilon}\big)+ \frac{\theta}{2}E \left[e^{\theta\epsilon Q^t} \left( \left(A^{t+m}-\lambda \right )^2+\sigma_s^2 \right) \right] \\ &+\frac{\theta}{2}E \left[e^{\theta\epsilon Q^t} \left(2 \left(A^{t+m}-\lambda \right ) \left(\lambda-S^{t+m} \right)+\epsilon^2 \right) \right]\\ =\, &\theta E \left[e^{\theta\epsilon Q^t}\sum_{l=0}^{m-1} \big(A^{t+l}-\lambda \big) \left(A^{t+m}-\lambda \right ) \right] +\frac{\theta}{2}E \left[e^{\theta\epsilon Q^t} \left( \left(A^{t+m}-\lambda \right )^2+\sigma_s^2 \right) \right]\\ &+mO(\epsilon )+mO\big(\sqrt{\epsilon}\big),\end{align*}

where the last equation comes from the independence between the current service process and the previous arrival, queue length, and service processes. According to Lemma 7, we have

(57) \begin{align} \mathcal{T}_{6}+\mathcal{T}_{8}\leq &\frac{\theta}{2} E \big[e^{\theta\epsilon Q^t} \big] \Bigg(2\sum_{l=0}^{m-1}E \big[ \big(b^{m-l}-\lambda \big) \big(b^0-\lambda \big) \big]+E \Big[ \big(b^0-\lambda \big)^2 \Big]+\sigma_s^2 \Bigg)\nonumber\\ &+mO\big(\sqrt{\epsilon}\big)+mO\left(\epsilon \right)+L_1m\alpha^m + L_2\alpha^m,\nonumber\\ \mathcal{T}_{6}+\mathcal{T}_{8}\geq &\frac{\theta}{2} E \big[e^{\theta\epsilon Q^t} \big] \Bigg(2\sum_{l=0}^{m-1}E \big[ \big(b^{m-l}-\lambda \big) \big(b^0-\lambda \big) \big]+E \Big[ \big(b^0-\lambda \big)^2 \Big]+\sigma_s^2 \Bigg)\nonumber\\ &+mO\big(\sqrt{\epsilon}\big)+mO\left(\epsilon \right)-L_1m\alpha^m - L_2\alpha^m,\end{align}

where $L_1 = 4 \left(A_{max}+\lambda \right)A_{max}C^2$ and $L_2 = 2 \big(A_{max}+\lambda \big)^2C$ . Combining (54)–(57), we have

\begin{align*} &E \big[e^{\theta\epsilon Q^t} \big] \bigg(-1+\frac{\theta}{2} \bigg(2\sum_{l=0}^{m-1}E \Big[ \big(b^{m-l}-\lambda \big) \big(b^0-\lambda \big) \Big] +E \left[ \big(b^0-\lambda \big)^2 \right]+\sigma_s^2 \bigg)\bigg)\\ &+mO\big(\sqrt{\epsilon}\big)+mO\left(\epsilon \right)+\frac{\alpha^m}{\epsilon}-L_1m\alpha^m - L_2\alpha^m \\ \leq& O (\epsilon )-1,\\ &E \big[e^{\theta\epsilon Q^t} \big] \left(-1+\frac{\theta}{2} \left(2\sum_{l=0}^{m-1}E \Big[ \big(b^{m-l}-\lambda \big) \big(b^0-\lambda \big) \Big] +E \Big[ \big(b^0-\lambda \big)^2 \Big]+\sigma_s^2 \right)\right)\\ & +mO\big(\sqrt{\epsilon}\big)+mO\left(\epsilon \right)+\frac{\alpha^m}{\epsilon}+L_1m\alpha^m + L_2\alpha^m \\ \geq& O (\epsilon )-1.\end{align*}

Setting $m=\epsilon^{-\frac{1}{4}}$ and letting $\epsilon \to 0$, so that the error terms $mO\big(\sqrt{\epsilon}\big)$, $mO(\epsilon)$, $\frac{\alpha^m}{\epsilon}$, $L_1m\alpha^m$, and $L_2\alpha^m$ all vanish, we have

\begin{align*} \lim_{\epsilon \to 0}E \big[e^{\theta\epsilon Q^t} \big] =& \frac{1}{1-\theta\frac{2\lim_{m \to \infty}\sum_{l=0}^{m-1} E [ \left(b^{m-l}-\lambda \right ) \left(b^0-\lambda \right) ]+E [ \big(b^0-\lambda \big)^2 ]+\sigma_s^2}{2}}.\end{align*}

Putting back the ${(\cdot )}^{(\epsilon)}$ notation, we have

\begin{align*} &\lim_{\epsilon \to 0}E \Big[e^{\theta\epsilon \overline{Q}^{(\epsilon)}} \Big] \\ =& \frac{1}{1-\theta\frac{2\lim_{m(\epsilon) \to \infty}\sum_{l=1}^{m(\epsilon)}E \left[ \left (b^{(\epsilon)}_{l}-\lambda^{(\epsilon)} \right) \left (\left(b^{0}\right)^{(\epsilon)}-\lambda^{(\epsilon)} \right) \right]+ \lim_{\epsilon \to 0}E\left[ \left(\left(b^{0}\right)^{(\epsilon)}-\lambda^{(\epsilon)} \right)^2 \right]+\sigma_s^2}{2}}\\ =& \frac{1}{1-\theta\frac{\lim_{\epsilon \to 0} \gamma^{(\epsilon)}(0) + 2\lim_{m(\epsilon) \to \infty} \sum_{t=1}^{m(\epsilon)}\gamma^{(\epsilon)}(t)+\sigma_s^2}{2}}.\\\end{align*}

Similarly as in the proof of Part 2 of Theorem 1, using Claim 5, we have

\begin{align*} \lim_{\epsilon \to 0}E \Big[e^{\epsilon\theta \overline{Q}^{(\epsilon)}} \Big] = \frac{1}{1-\theta\frac{\sigma_a^2+\sigma_s^2}{2}}.\end{align*}

Note that $\frac{1}{1-\theta\frac{\sigma_a^2+\sigma_s^2}{2}}$ is the one-sided Laplace transform of an exponential random variable with mean $\frac{\sigma_a^2+\sigma_s^2}{2}$ . By [Reference Braverman, Dai and Miyazawa4, Lemma 6.1], this implies that $\epsilon \overline{Q}^{(\epsilon)} $ converges in distribution to an exponential random variable with mean $\frac{\sigma_a^2+\sigma_s^2}{2}$ , which completes the proof.
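The limiting mean involves $\sigma_a^2 = \gamma(0)+2\sum_{t\geq 1}\gamma(t)$, the asymptotic variance of the Markov-modulated arrival process. As a small numerical sketch (the two-state chain and arrival map f below are hypothetical, chosen only for illustration), this sum of autocovariances can be computed directly and checked against the closed form available for two-state chains, where $\gamma(t)=\gamma(0)\rho^t$ with $\rho$ the second eigenvalue of the transition matrix.

```python
import numpy as np

# Hypothetical two-state Markov-modulated arrivals: state x brings f[x] arrivals.
p, q = 0.3, 0.2                        # P = [[1-p, p], [q, 1-q]]
P = np.array([[1 - p, p], [q, 1 - q]])
pi = np.array([q, p]) / (p + q)        # stationary distribution
f = np.array([0.0, 2.0])               # arrivals in each state
lam = pi @ f                           # mean arrival rate

def gamma(t):
    """Autocovariance gamma(t) = E[(b^t - lam)(b^0 - lam)] in stationarity."""
    Pt = np.linalg.matrix_power(P, t)
    return sum(pi[x] * (f[x] - lam) * (Pt[x] @ f - lam) for x in range(2))

# asymptotic variance: gamma(0) + 2 * sum_{t >= 1} gamma(t) (truncated; the
# tail is geometrically small)
sigma_a2 = gamma(0) + 2 * sum(gamma(t) for t in range(1, 200))

# closed form for a two-state chain: gamma(t) = gamma(0) * rho^t, rho = 1-p-q,
# so sigma_a2 = gamma(0) * (1 + rho) / (1 - rho)
rho = 1 - p - q
closed = gamma(0) * (1 + rho) / (1 - rho)
```

Positive autocorrelation ($\rho>0$) inflates $\sigma_a^2$ beyond the marginal variance $\gamma(0)$, which is exactly how arrival burstiness enters the heavy-traffic mean.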

Acknowledgements

The authors thank Dr. Daniela Hurtado Lange for useful discussions. This work was partially supported by NSF grants EPCN-2144316, CMMI-2140534, and CCF-1850439.

Funding information

There are no funding bodies to thank in relation to the creation of this article.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Alizadeh, M. et al. (2013). pFabric: minimal near-optimal datacenter transport. ACM SIGCOMM Comput. Commun. Rev. 43, 435–446.
Asmussen, S. and Koole, G. (1993). Marked point processes as limits of Markovian arrival streams. J. Appl. Prob. 30, 365–372.
Benson, T., Akella, A. and Maltz, D. A. (2010). Network traffic characteristics of data centers in the wild. In IMC ’10: Proceedings of the 10th ACM SIGCOMM Conference on Internet Measurement, Association for Computing Machinery, New York, pp. 267–280.
Braverman, A., Dai, J. and Miyazawa, M. (2017). Heavy traffic approximation for the stationary distribution of a generalized Jackson network: the BAR approach. Stoch. Systems 7, 143–196.
Douc, R., Moulines, E., Priouret, P. and Soulier, P. (2018). Markov Chains. Springer, Cham.
Eryilmaz, A. and Srikant, R. (2012). Asymptotically tight steady-state queue length bounds implied by drift conditions. Queueing Systems 72, 311–359.
Fayolle, G., Malyshev, V. A. and Menshikov, M. V. (1995). Topics in the Constructive Theory of Countable Markov Chains. Cambridge University Press.
Gamarnik, D. and Zeevi, A. (2006). Validity of heavy traffic steady-state approximations in generalized Jackson networks. Ann. Appl. Prob. 16, 56–90.
Hajek, B. (1982). Hitting-time and occupation-time bounds implied by drift analysis with applications. Adv. Appl. Prob. 14, 502–525.
Harrison, J. (1988). Brownian models of queueing networks with heterogeneous customer populations. In Stochastic Differential Systems, Stochastic Control Theory and Applications, Springer, Berlin, pp. 147–186.
Harrison, J. M. (1998). Heavy traffic analysis of a system with parallel servers: asymptotic optimality of discrete review policies. Ann. Appl. Prob. 8, 822–848.
Harrison, J. M. and López, M. J. (1999). Heavy traffic resource pooling in parallel-server systems. Queueing Systems 33, 339–368.
Hurtado-Lange, D. and Maguluri, S. T. (2020). Transform methods for heavy-traffic analysis. Stoch. Systems 10, 275–309.
Hurtado Lange, D. A. and Maguluri, S. T. (2022). Heavy-traffic analysis of queueing systems with no complete resource pooling. Math. Operat. Res. 47, 3129–3155.
Hurtado-Lange, D., Varma, S. M. and Maguluri, S. T. (2022). Logarithmic heavy traffic error bounds in generalized switch and load balancing systems. J. Appl. Prob. 59, 652–669.
Jhunjhunwala, P. and Maguluri, S. T. (2020). Low-complexity switch scheduling algorithms: delay optimality in heavy traffic. IEEE/ACM Trans. Networking 30, 464–473.
Kingman, J. (1961). The single server queue in heavy traffic. Math. Proc. Camb. Phil. Soc. 57, 902–904.
Levin, D. A. and Peres, Y. (2017). Markov Chains and Mixing Times. American Mathematical Society, Providence, RI.
Lu, Y., Maguluri, S. T., Squillante, M. S. and Suk, T. (2018). Optimal dynamic control for input-queued switches in heavy traffic. In 2018 Annual American Control Conference (ACC), Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 3804–3809.
Lu, Y., Maguluri, S. T., Squillante, M. S. and Suk, T. (2019). On heavy-traffic optimal scaling of c-weighted MaxWeight scheduling in input-queued switches. IEEE Trans. Automatic Control 67, 4272–4277.
Maguluri, S. T., Burle, S. K. and Srikant, R. (2018). Optimal heavy-traffic queue length scaling in an incompletely saturated switch. Queueing Systems 88, 279–309.
Maguluri, S. T. and Srikant, R. (2016). Heavy traffic queue length behavior in a switch under the MaxWeight algorithm. Stoch. Systems 6, 211–250.
McKeown, N., Mekkittikul, A., Anantharam, V. and Walrand, J. (1999). Achieving 100% throughput in an input-queued switch. IEEE Trans. Commun. 47, 1260–1267.
Neely, M. J. (2008). Delay analysis for maximal scheduling with flow control in wireless networks with bursty traffic. IEEE/ACM Trans. Networking 17, 1146–1159.
Perry, J. et al. (2014). Fastpass: a centralized ‘zero-queue’ datacenter network. ACM SIGCOMM Comput. Commun. Rev. 44, 307–318.
Rajagopalan, S., Shah, D. and Shin, J. (2009). Network adiabatic theorem: an efficient randomized protocol for contention resolution. ACM SIGMETRICS Performance Evaluation Rev. 37, 133–144.
Shah, D., Tsitsiklis, J. N. and Zhong, Y. (2011). Optimal scaling of average queue sizes in an input-queued switch: an open problem. Queueing Systems 68, 375–384.
Sharifnassab, A., Tsitsiklis, J. N. and Golestani, S. J. (2020). Fluctuation bounds for the max-weight policy with applications to state space collapse. Stoch. Systems 10, 223–250.
Singh, A. et al. (2015). Jupiter rising: a decade of Clos topologies and centralized control in Google’s datacenter network. ACM SIGCOMM Comput. Commun. Rev. 45, 183–197.
Srikant, R. and Ying, L. (2013). Communication Networks: An Optimization, Control, and Stochastic Networks Perspective. Cambridge University Press.
Stolyar, A. L. (2004). MaxWeight scheduling in a generalized switch: state space collapse and workload minimization in heavy traffic. Ann. Appl. Prob. 14, 1–53.
Tassiulas, L. and Ephremides, A. (1990). Stability properties of constrained queueing systems and scheduling policies for maximum throughput in multihop radio networks. In 29th IEEE Conference on Decision and Control, Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 2130–2132.
Wang, C.-H., Maguluri, S. T. and Javidi, T. (2017). Heavy traffic queue length behavior in switches with reconfiguration delay. In IEEE INFOCOM 2017—IEEE Conference on Computer Communications, Institute of Electrical and Electronics Engineers, Piscataway, NJ, pp. 1–9.
Wang, W., Maguluri, S. T., Srikant, R. and Ying, L. (2018). Heavy-traffic delay insensitivity in connection-level models of data transfer with proportionally fair bandwidth sharing. ACM SIGMETRICS Performance Evaluation Rev. 45, 232–245.
Williams, R. J. (1998). Diffusion approximations for open multiclass queueing networks: sufficient conditions involving state space collapse. Queueing Systems 30, 27–88.
Zhou, X., Tan, J. and Shroff, N. (2018). Flexible load balancing with multi-dimensional state-space collapse: throughput and heavy-traffic delay optimality. Performance Evaluation 127, 176–193.