1. Introduction
In this paper, we study a blockchained buffering system with federated learning as shown in Figure 1.
The main focus is threefold: 2-stage game-theoretic problem oriented 3-stage service policy computing of users-selection and rate scheduling/dynamic pricing, convolutional neural network (CNN) based algorithm design, and simulation case studies. The game-theoretic problem consists of both zero-sum and non-zero-sum 2-stage game competitions (representing “win-lose” and “win-win” 2-stage competitions). Furthermore, the computed policy is proved to be asymptotically optimal and fair via diffusion approximation. The asymptotic optimality means that the whole workload and total cost of the system are asymptotically minimized. The asymptotic fairness means that no user can profit by unilaterally changing his personal policy. The computed 3-stage service policy as shown in Figure 2 is based on a 2-stage game-theoretic problem whose solution is represented by a saddle point to a zero-sum game problem and a Nash equilibrium point to a non-zero-sum game problem (see also the related concepts in Dai [Reference Dai8, Reference Dai9], Marchi [Reference Marchi19], Nash [Reference Nash21], and Rosen [Reference Rosen25]). Furthermore, in the 2-stage game problems, each user has his own utility function in terms of price, queue length, and service rate. This utility function is a generalization of the existing one in Dai [Reference Dai6, Reference Dai8, Reference Dai9], Ye and Yao [Reference Ye and Yao28], and references therein. Examples of such a utility function include the so-called proportionally fair allocation, minimal potential delay allocation, and $(\beta,\alpha)$-proportionally fair allocation (see e.g., Ye and Yao [Reference Ye and Yao28]), which are widely used in internet protocols and communication systems.
In Figure 1, we present such a generalized service system consisting of V service pools and J buffer queues corresponding to J-parallel users for two positive integer numbers V and J. A blockchain system is added to this system for security and distributed data storage. A federated learning center is also added to this system for dynamic policy computing and online payment transaction via stable digital currency. This blockchained buffering and federated learning system is a generalized platform of the recent studies (see e.g., Ayaz et al. [Reference Ayaz, Sheng, Tian and Guan2], Dai [Reference Dai8, Reference Dai9, Reference Dai11], Demertzis et al. [Reference Demertzis, Iliadis, Pimenidis, Tziritas, Koziri and Kikiras14], Qu et al. [Reference Qu, Pokhrel, Garg, Gao and Xiang22]), which come up in different research areas such as metaverse, sixth generation of wireless communication (6G), internet of vehicles (IoV), web3.0, etc. Due to the security consideration of the system, blockchains are used to protect privacy among different users as shown in Figure 1. Moreover, the way developed in Dai [Reference Dai8, Reference Dai9] uses a single-dimensional aggregated total workload process of the system to dynamically design a general-dimensional decision parameter vector at each time point in the federated learning (FL) center. Then, the FL center sends the computed parameters back to their corresponding individual service pools, respectively, for local information upgrades and local model training. This idea to employ the single-dimensional aggregated workload process in the design of Dai [Reference Dai8, Reference Dai9] is motivated by the state space collapse property (or, more classically, Little’s law) widely studied in queueing literature (see e.g., Bramson [Reference Bramson3] and Little [Reference Little17]). The purpose of using this idea in a multiclass queueing network environment is to reduce the system’s dimension and to avoid the curse of dimensionality. When the state space collapse phenomenon happens in a queueing system, the multiple class queueing processes will display a certain proportional relationship with the single-dimensional aggregated workload process, which will significantly reduce the system computational complexity. In the design of Dai [Reference Dai8, Reference Dai9], the proportional relationship represented by the state space collapse property is generalized to be represented by a solution to a game-theoretic problem. This solution can be trained and computed in the FL center, which can be used as an online scheduling policy. Under certain traffic flow and service discipline assumptions, this dynamic policy is proved to be asymptotically fair and optimal in certain sense (that will be elaborated later) through diffusion approximation under the so-called diffusive scaling and the well-known heavy-traffic regime. When the game-theoretic problem reduces to an optimization problem and without the blockchain security consideration, readers are also referred to Dai [Reference Dai6], Ye and Yao [Reference Ye and Yao28] for related studies. Furthermore, the studies in Dai [Reference Dai8, Reference Dai9] consider a general multiple service pool system with a more general input flow process (i.e., a J-dimensional triply stochastic renewal reward process (TSRRP)). Meanwhile, the studies in Dai [Reference Dai6], Ye and Yao [Reference Ye and Yao28] focus on the analysis when the input process is a conventional J-dimensional renewal process.
The contribution of this research is threefold. The first fold is to add the dynamic pricing capability to the study in Dai [Reference Dai9]. The second fold is to design a general CNN based policy computing algorithm flow chart along the lines of the so-called big model framework. The third fold is to provide detailed simulation examples. However, the main focus of our current paper is on the design and analysis of a joint 3-stage service policy extended from the previously mentioned 2-stage game-theoretic policy concerning users-selection, dynamic pricing, and online rate resource allocation for given queue/environment state dependent pricing and utility functions. The asymptotic optimality and fairness of this designed 3-stage service policy are justified by applying the well-known heavy traffic approximation and modeling technique together with our newly added dynamic pricing functionality. Meanwhile, it is also supported by our newly conducted simulation case studies with three users and through explicitly constructing the solutions to their corresponding dual cost game competition problems. Note that, the rate allocated to a user corresponds to the service time for the user. Furthermore, the service for the user includes four steps: user registration, security checking, dynamic policy computing, and online payment transaction through stable digital currency as shown in Figure 1. Thus, the service time for the user will be the summation of the processing times corresponding to the four service steps.
The design procedure is illustrated in Figure 2, where J (e.g., J = 3) users want to receive services from the V service pools. However, only two of them can be selected to receive services at each time point according to chosen state dependent pricing and utility functions. After the selected two users get into services, they need to share the limited capacity from different service pools in a cooperative way. The selection process is determined by a dynamic policy corresponding to a solution (called a saddle point) to a zero-sum dual cost game competition problem at each particular time point. The sharing process for the selected two users is to compete for the system's service rate resources and is determined by a dynamic policy through a solution (called a Nash equilibrium point) to a non-zero-sum dual cost game competition problem. Note that, as shown in Figure 2, the associated 2-stage game-theoretic problem is a J × V-dimensional problem. In general, an explicit solution is not available. Thus, we design a CNN based algorithm flow chart for general usage. However, to support our current designed policy, we choose smaller J and V to conduct simulation case studies, which are presented in Sections 4–5.
The input data flows from different users to our system as indicated in Figure 1 are characterized by a J-dimensional TSRRP that can be further approximated by a diffusion process for the purpose of effective simulation. Furthermore, the service rate capacity available for resource-competing users at each pool is modeled as a random capacity region evolving with a finite-state continuous-time Markov chain (FS-CTMC). The arrived data flows can immediately get into service if the servers are available. Otherwise, they will be stored in the queueing buffers waiting for service. Besides buffers, the decision information after service for each user will be stored in a distributed database called a blockchain as designed in Figure 1.
There are many reasons (e.g., security, decentralization, smart contract) for today’s FinTech system and future Internet (e.g., metaverse and web3.0) to choose blockchain as a key technology (see, e.g., Buterin [Reference Buterin4], Dai [Reference Dai8,Reference Dai9,Reference Dai11], Iansiti and Lakehani [Reference Iansiti and Lakehani16], Nakamoto [Reference Nakamoto20], Rajan and Visser [Reference Rajan and Visser23]). A blockchain, consisting of data blocks, is an ordered distributed database with encryption and is frequently referred to as a ledger. Each data block contains the proposed (or calculated) decision information with the customer’s private and public keys at a single time point with a time-stamp and a link to a previous block. The management of a blockchain can be realized via various smart contracts in a decentralized way. Traditionally, a smart contract within a blockchain can be considered as a digitalized regulation rule with common sense. In this paper, we will make the rule dynamically evolving according to an online decision-making policy, i.e., a solution to a dynamic game based competition problem.
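To make the ledger structure just described concrete, the following is a minimal Python sketch of a hash-linked chain of decision blocks. The field names, the SHA-256 digest, and the JSON serialization are illustrative assumptions; the paper does not prescribe a concrete block layout or smart-contract interface.

```python
# A minimal, illustrative sketch of a hash-linked ledger of decision blocks.
# Field names and the SHA-256/JSON choices are hypothetical, not the paper's.
import hashlib
import json
import time


class Block:
    def __init__(self, decision, public_key, prev_hash):
        self.timestamp = time.time()   # time-stamp of the block
        self.decision = decision       # proposed (or calculated) decision information
        self.public_key = public_key   # customer's public key
        self.prev_hash = prev_hash     # link to the previous block

    def digest(self):
        payload = json.dumps(
            {"t": self.timestamp, "d": self.decision,
             "k": self.public_key, "p": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()


class Ledger:
    """An ordered chain of blocks; each block stores its predecessor's digest."""

    def __init__(self):
        self.blocks = [Block(None, None, "0" * 64)]   # genesis block

    def append(self, decision, public_key):
        self.blocks.append(Block(decision, public_key, self.blocks[-1].digest()))

    def verify(self):
        # The chain is consistent if every stored link matches the recomputed digest.
        return all(b.prev_hash == a.digest()
                   for a, b in zip(self.blocks, self.blocks[1:]))
```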
More precisely, we will study the resource allocation and dynamic pricing of a joint saddle & Nash equilibrium service policy for such a system. When users get into service either from buffers or from outside of the system, the computation of their processing policy concerning resource allocation may depend on long history data stored in the blockchain (e.g., represented by a conditional mean defined process) and can be dynamically priced via stable digital currency. Each queue may be served at the same time through multiple smart service pools while each pool may also serve multiple queues simultaneously by running intelligent policies. Note that, to reflect the dynamic evolving nature of real-world systems and to realize the decentralized operation in a blockchain, the users to be selected at a time are random, the number of pools to serve a specific queue is random, and the number of queues to be served by a given pool is also random. The effectiveness of our proposed policy is in terms of revenue, profit, cost, system delay, etc. We model them through some utility (or hash) functions in terms of the performance measures of their internal data flow dynamics such as queue length and workload processes. To demonstrate the usefulness of our policy, we derive a reflecting diffusion with regime-switching (RDRS) model for the performance measures under our designed policy to offer services to different users in a cost-effective, efficient, and fair way. Based on this RDRS model, our proposed policy is effectively implemented with numerical simulations for some case studies of the system in Figures 1–2. Concerning the numerical scheme through Monte Carlo simulation for a Brownian motion driven stochastic system, readers are referred to Dai [Reference Dai10] and references therein for more details.
Since dynamic resource pricing is our major concern, we give some discussion of the concept of stable digital currency and its applications. More precisely, stable digital currency is a digital token used in digital informational and data network systems. It can be traced back to the optimal pricing of bits (or ports) in telecommunication managements and admission controls through token buffers in communication networks around the mid and late 90s by Bell Labs’ researchers (see e.g., the related discussions in Dai [Reference Dai6], Elwalid and Mitra [Reference Elwalid and Mitra15]). Along this line, Nakamoto [Reference Nakamoto20] extended the concept of a bit (or port) to the bitcoin in 2008 and Buterin [Reference Buterin4] further enhanced this concept to Ethereum in 2013. Note that, both the bitcoin and the Ethereum are still not real stable digital currencies. However, during this evolution, Dai (see e.g., Maker [18]) made his effort to endow the Ethereum with real value and invented DaiCoin. Since then, this concept and lawful implementations are becoming more and more popular with the emergence of US digital dollar (DD), (China Central Bank) digital currency/electronic payment (DC/EP), and (European) Central Bank digital currency (CBDC). The latest application of dynamic resource pricing can be found in a metaverse system (see e.g., Dai [Reference Dai11]).
To show the importance of stable digital currency, an example based on a supply chain finance service system is displayed in Figure 3. The business model presented in this example can be considered as a generalized online digital payment system with service lead time involvements and consists of four typical service stages that eventually deliver goods to customers: raw material procurements, make to order (MTO), assemble to order (ATO), and agent sales (see e.g., the upper-half of Figure 3 where agents are further classified into two levels of suppliers). Usually, during the procurement and service stages, the cash cannot be paid until the delivery of procured products. Thus, bank notes, e-bills, receipts, etc. through credit, mortgage, and third party warrants as shown in the middle of Figure 3 are widely used in real-world practice. To improve the efficiency and security of this type of payment, the stable digital currency such as US DD, (China) DC/EP, and (European) CBDC as designed in Figure 3 is a suitable choice. Furthermore, as shown in the upper-right corner of Figure 3, data information among companies can be asymmetric or symmetric. Frequently, they are not exchangeable. Thus, our policy can be integrated into this system to develop a smart online algorithm in terms of dynamic resource pricing to solve this problem. From the lower-half design in Figure 3, we can see that our supply chain system can be mapped into and interact with an information system through wireless 5G/6G network or wireline IP network (and even future IoB). Then, many online payments and transactions with lawful services can be handled as shown in the middle of Figure 3.
The remainder of this paper is organized as follows. In Section 2, we formulate our system model for dynamic resource pricing via stable digital currency. In Section 3, we present our main theorem based on a 3-stage (i.e., users-selection, dynamic pricing, and resource-competition scheduling) policy. A corresponding general CNN based algorithm flow chart is also presented in this section. In Section 4, we present three illustrative policy examples. In Section 5, we conduct simulation case studies to show the effectiveness of our policy. In Section 6, we theoretically prove our main theorem. In Section 7, we conclude the paper.
2. System model
In this section, we present our service model with dynamic resource pricing capability. It has V service pools associated with a set of positive integers ${\cal V}\equiv\{1,\ldots,V\}$ and J queues for J-parallel users corresponding to a set of positive integers ${\cal J}\equiv\{1,\ldots,J\}$. Furthermore, we assume that the buffer storage in each queue is nonnegative. Each pool $v\in{\cal V}$ owns $J_{v}$ flexible parallel-servers. Let the prime denote the transpose of a vector or a matrix. Then, associated with the queues, there is a J-dimensional arrival process $A=\{A(t)=(A_{1}(t),\ldots,A_{J}(t))',t\geq 0\}$ and it is called a data packet arrival process. In this situation, $A_{j}(t)$ for each $j\in{\cal J}$ and $t\geq 0$ is the number of data packets that arrive at the jth queue during the time interval $(0,t]$. In addition, in a real world service system such as a banking service or a supply chain system, the associated input ethereum/cash flows and supply/demand processes can be digitalized and mapped into the data packet based framework. The size of a data packet is a random number $\zeta\in\{1,2,\ldots\}$. Our system is assumed under an external random environment driven by a stationary FS-CTMC $\alpha=\{\alpha(t),t\in[0,\infty)\}$ with a finite state space ${\cal K}\equiv\{1,\ldots,K\}$, whose generator matrix is given by $G=(g_{il})$ with $i,l\in{\cal K}$, and
where $\gamma(i)$ is the holding rate for the continuous time chain staying in a state $i\in{\cal K}$ and $Q=(q_{il})$ is the corresponding transition matrix of its embedded discrete-time Markov chain (see e.g., Resnick [Reference Resnick24]). Moreover, define $\tau_{n}$ for each nonnegative integer $n\in\{0,1,\ldots\}$ by:
Note that, the external random environment may be caused by different factors (see e.g., the explanations in Choudhury et al. [Reference Choudhury, Mandelbaum, Reiman and Whitt5], Dai [Reference Dai6], Wang and Moayeri [Reference Wang and Moayeri27]). Here, in our blockchained case, the FS-CTMC $\alpha(\cdot)$ may be considered as a history-data-dependent, randomly evolving system parameter for a targeted stationary random variable $\alpha(\infty)$, which can be generated by (or approximated through) a conditional mean defined process, i.e.,
where $\{{\cal F}_{t},t\in[0,\infty)\}$ is a filtration generated by blockchain history information. For example, when the blockchain database read and write (R/W) times are exponentially distributed, the martingale representation theorem for a jump-diffusion process (see e.g., Applebaum [Reference Applebaum1]) can be applied to (2.3) to generate the required FS-CTMC assumption. Then, we can model the arrival process $A_{j}(\cdot)$ for each positive integer $j\in{\cal J}$ as a big data flow stream through a TSRRP as in Dai [Reference Dai8]. More precisely, the process $A_{j}(\tau_{n}+\cdot)$ for each $n\in\{0,1,\ldots\}$ is a counting process corresponding to a (conditional) delayed renewal reward process with arrival rate $\lambda_{j}(\alpha(\tau_{n}))$ and mean reward $m_{j}(\alpha(\tau_{n}))$ associated with finite squared coefficients of variation $\alpha^{2}_{j}(\alpha(\tau_{n}))$ and $\zeta^{2}_{j}(\alpha(\tau_{n}))$ during time interval $[\tau_{n},\tau_{n+1})$.
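To illustrate how the environment chain and the modulated arrival stream interact, the sketch below simulates an FS-CTMC through its holding rates $\gamma(i)$ and embedded transition matrix Q, and generates a Poisson-type special case of the TSRRP arrivals for a single user. The exponential inter-arrival and Poisson batch-size choices are simplifications of the general (conditional) renewal reward structure, and all numerical rates and reward parameters are hypothetical placeholders.

```python
# A simplified simulation sketch: an FS-CTMC environment alpha(.) and one
# user's arrival stream whose rate and batch-reward distribution switch with
# the environment state. Exponential inter-arrivals and Poisson batch sizes
# are simplifying assumptions; all numbers are placeholders.
import numpy as np

rng = np.random.default_rng(0)

gamma = np.array([1.0, 2.0])              # holding rates gamma(i), i in {1,...,K}
Q = np.array([[0.0, 1.0],                 # embedded discrete-time transition matrix
              [1.0, 0.0]])
lam = np.array([5.0, 9.0])                # arrival rates lambda_j(alpha(tau_n))
m = np.array([2.0, 3.0])                  # mean batch rewards m_j(alpha(tau_n))


def simulate_arrivals(T):
    """Return the environment jump times tau_n, the visited states, and the
    arrival epochs with their batch sizes over [0, T]."""
    t, state = 0.0, 0
    tau, states, arrivals = [0.0], [state], []
    while t < T:
        t_next = min(t + rng.exponential(1.0 / gamma[state]), T)
        s = t
        while True:                                   # arrivals on [t, t_next)
            s += rng.exponential(1.0 / lam[state])
            if s >= t_next:
                break
            arrivals.append((s, 1 + rng.poisson(m[state])))  # (epoch, batch size)
        t = t_next
        state = rng.choice(len(gamma), p=Q[state])    # embedded-chain transition
        tau.append(t)
        states.append(state)
    return tau, states, arrivals
```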
Now, we let $\{u_{j}(k),k=1,2,\ldots\}$ be the sequence of times between the arrivals of the $(k-1)$th and the kth reward batches of packets at the jth queue. The associated batch reward is given by $w_{j}(k)$ and all the data packets arrived with it are indexed in certain successive order. Therefore, we can present the renewal counting process corresponding to the inter-arrival time sequence $\{u_{j}(k),k=1,2,\ldots\}$ for each $j\in{\cal J}$ as follows,
Thus, we can restate the definition of a TSRRP $A_{j}(\cdot)$ quantitatively through the expression,
Each data packet will first get service in the system and then leave it. The service is managed by a blockchain. In this blockchain, the service for a data packet is composed of two parts: security checking and policy computation (or real data payload transmission). After completing the service, the security information and the policy (or the transmission result) will be stored and copied to all the participating partner nodes for storage and in the meanwhile to produce nonce values and private keys. Moreover, we denote by $\{v_{j}(k),k=1,2,\ldots\}$ the sequence of successively arriving packet lengths at queue j, which is assumed to be a sequence of strictly positive i.i.d. random variables with average packet length $1/\mu_{j}\in(0,\infty)$ and squared coefficient of variation $\beta_{j}^{2}\in(0,\infty)$. In addition, we suppose that all the inter-arrival and service time processes are mutually (conditionally) independent when the environmental state is fixed. Associated with each $j\in{\cal J}$ and each nonnegative constant h, we employ $S_{j}(\cdot)$ to denote the renewal counting process corresponding to $\{v_{j}(k),k=1,2,\ldots\}$. In other words,
Define $Q_{j}(t)$ to be the jth queue length with $j\in{\cal J}$ at each time $t\in[0,\infty)$ and $D_{j}(t)$ to be the number of packet departures from the jth queue in $(0,t]$. Therefore, the queueing dynamics governing the evolving of the internal data flow in and out within our unified service platform can be modeled by:
where each queue is assumed to have an infinite storage capacity to buffer data packets (jobs) arrived from a given user.
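As a small numerical counterpart of this dynamics, the step function below advances the J queues by one time slot: arrivals are added, and departures are capped both by what is available and by what the allocated service rates can complete in the slot. The packet-to-rate conversion via the mean packet lengths $1/\mu_{j}$ and the slotted (Euler-type) update are assumptions of this sketch.

```python
# One discrete-time step of Q_j(t) = Q_j(0) + A_j(t) - D_j(t) with infinite
# buffers: departures cannot exceed the buffer content plus new arrivals, nor
# the work the allocated rates can complete in the slot.
import numpy as np


def update_queues(q, arrivals, rates, mu, dt):
    """q, arrivals: packets per queue, shape (J,); rates: allocated service
    rates Lambda_j(t); mu: reciprocal mean packet lengths mu_j; dt: slot size."""
    q = np.asarray(q, dtype=float)
    served = np.minimum(q + arrivals, rates * mu * dt)   # departures D_j in the slot
    return q + arrivals - served
```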
Note that, in a DaiCoin and blockchain based mortgage system (see e.g., Maker [18]), $Q_{j}(t)$ is the number of Ethereums available at time t. In this case, we need to dynamically determine how many Dais should be loaned to customer j for each Ethereum at time t according to the value of Q(t). Similarly, in a banking system, $Q_{j}(t)$ can be the number of loan demands waiting at time t. In this case, we need to determine what is the loan interest rate at time t according to the value of Q(t). Furthermore, in communication and cloud-computing based service systems, we need to price the bit service ratio at time t according to the value of Q(t). In all, we need to dynamically price our service in a real-world system according to the evolution of Q(t) over time t. For convenience, we will use the unified terminology “price $P_{j}(t)$” to denote the price (the number of Dais or interest ratio) associated with $Q_{j}(t)$ and $\alpha(t)$ at time t. In economics, there are different pricing functions with respect to Q(t) and $\alpha(t)$ (see e.g., Dai and Jiang [Reference Dai and Jiang13]). Here, we assume that $P_{j}(t)$ is a positive function in terms of $Q_{j}(t)$ and $\alpha(t)$, i.e.,
Note that, $P_{j}(t)$ is a specific value at time t for given values of $Q_{j}(t)$ and $\alpha(t)$. In general, $P_{j}(t)$ is a random variable at time t since both $Q_{j}(t)$ and $\alpha(t)$ are random variables at time t. From the expression of (2.8), we can see that the price also depends on the random environment movement (e.g., the season’s movement). In addition, we suppose that $f_{j}(\cdot,\cdot)$ in (2.8) is Lipschitz continuous with respect to $Q_{j}(t)$. Then, we can introduce a utility (or a hash) function with respect to the valued queue length $P_{j}(t)Q_{j}(t)$ for user $j\in{\cal J}$ at each service pool $v\in{\cal V}$ as follows,
where $P(t)=(P_{1}(t),\ldots,P_{J}(t))$ and $\Lambda(t)=(\Lambda_{1}(t),\ldots,\Lambda_{J}(t))$. Moreover, $\Lambda_{j}(t)$ for each $t\in[0,\infty)$ and $j\in{\cal J}$ is the summation of all service rates allocated to the jth user at time t from all possible pools and servers. Here, we remark that, $\Lambda_{j}(t)$ may be given in a feedback control form and it depends on the current price P(t), the current queue length Q(t), and the system state $\alpha(t)$ at a given time t. In other words, we have that $\Lambda_{j}(t)=\Lambda_{j}(P(t)Q(t),\alpha(t))$. Furthermore, we note that, the upper case $\Lambda_{j}(t)$ used here denotes the service capacity allocation process. It is not directly related to the lower case $\lambda_{j}$ as used in (6.3) of Subsubsection 6.1.3, which denotes the nominal arrival rate for user j.
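For concreteness, the following are the standard forms of the utility families cited above, written as functions of the valued queue length $p_{j}q_{j}$ and the allocated rate; they follow the cited literature, and the exact expressions used later in the paper's examples may differ.

```python
# Standard utility forms of the kind cited above, as functions of the valued
# queue length pq = p_j*q_j and the allocated rate c. These follow the cited
# literature up to weighting conventions; the paper's own forms may differ.
import numpy as np


def proportionally_fair(pq, c):
    return pq * np.log(c)


def minimal_potential_delay(pq, c):
    return -pq / c


def beta_alpha_fair(pq, c, beta, alpha):
    # behaves like the proportionally fair form (up to constants) as alpha -> 1
    return beta * pq * c ** (1.0 - alpha) / (1.0 - alpha)
```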
Now, we define W(t) and $W_{j}(t)$ to be the (expected) total workload in the system at time t and the one associated with user j at time t, to wit,
In the following study, we will use W(t) and Q(t) as performance measures, $f=(f_{1},\ldots,f_{J})$ in (2.8) as the pricing function, and $\{U_{vj},j\in{\cal J},v\in{\cal V}\}$ in (2.9) as utility (or hash) functions. Based on these measures and functions, we can propose a joint dynamical pricing and rate scheduling policy $(P,\Lambda)$ with users-selection at each time point for different service pools and servers to all the users. Under the policy, the total workload W(t) and its corresponding total cost are minimized, where the total cost is a summation of costs serving all users in certain sense. For an exact definition of the total cost, it is given in (3.22) through the so-called dual-costs. Furthermore, we assume that the available resources from different pools and servers can be flexibly allocated and shared between the system and users, i.e., the system operates under a concurrent resource occupancy service regime. Based on these facts, we can define $T_{j}(t)$ to be the cumulative amount of service given to the jth queue up to time t, i.e.,
Hence, if we let $S_{j}(t)$ be the total number of jobs (packets) that finish service in the system by time t, we know that $D_{j}(t)=S_{j}(T_{j}(t))$.
3. Main theorem
TSRRPs can effectively model big data arrival streams. However, it is difficult to directly conduct the analysis of the associated physical queueing model in (2.7) or its related physical workload model in (2.10) due to the non-Markovian characteristics of TSRRPs. Thus, in this paper, we will develop a scheme by applying the well-known heavy traffic approximation and modeling technique to establish the RDRS model corresponding to our newly designed game-competition based dynamic resource pricing and scheduling policy by considering our queueing system under the asymptotic regime, where it is heavily loaded (load balanced), i.e., under the so-called heavy traffic condition. Furthermore, we will prove the correctness of RDRS modeling via diffusion approximation while we will also show the effectiveness of the identified model for our newly proposed pricing and scheduling policy by presenting simulation case studies. The corresponding simulation results are displayed in Figures 4–5 (as follows) and Figure 11 and their interpretations are presented in Section 5.
3.1. RDRS model
In this subsection, we first present the basic idea of our main claim in terms of our RDRS modeling under a smart contract policy. Second, for convenience, we introduce the definition of RDRS model. More precisely, for each $t\geq 0$ and $j\in{\cal J}$, we introduce two sequences of diffusion-scaled processes: $\hat{Q}^{r}(\cdot)$ and $\hat{W}^{r}(\cdot)$ by:
where $\{r,r\in{\cal R}\}$ is supposed to be a strictly increasing sequence of positive real numbers that tends to infinity. Then, our main claim can be presented as follows.
The sequence of 2-tuple scaled processes in (3.1) corresponding to a game-competition based dynamic resource pricing and scheduling policy with users’ selection, which is designed in the subsequent subsection, converges jointly in distribution. More precisely, under the heavy traffic condition described in Section 6, we have that:
where $\hat{W}(\cdot)$ is presented by an RDRS model and $\hat{Q}(\cdot)$ is an asymptotic queue policy process with dynamic pricing globally over $[0,\infty)$ through a saddle point to a zero-sum game-competition problem and a Pareto minimal-dual-cost Nash equilibrium point to a non-zero-sum game-competition problem.
Definition 3.1. A u-dimensional stochastic process $\hat{Z}(\cdot)$ with $u\in{\cal J}$ is claimed as an RDRS with oblique reflection if it can be uniquely represented as:
Furthermore, $b(\alpha(t),t)=(b_{1}(\alpha(t),t),\ldots,b_{u}(\alpha(t),t))'$ is a u-dimensional vector, $\sigma^{E}(t)$ and $\sigma^{S}(t)$ are u × J matrices, $R(\alpha(t),t)$ with $t\in R_{+}$ is a u × u matrix, and $(\hat{Z}(\cdot),\hat{Y}(\cdot))$ is a coupled almost surely continuous solution of (3.3) with the following properties for each $j\in\{1,\ldots,u\}$,
In addition, a solution to the RDRS in (3.3) is called a strong solution if it is in the pathwise sense and is called a weak solution if it is in the sense of distribution.
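To make Definition 3.1 more tangible, here is a minimal Euler-type simulation sketch of a one-dimensional RDRS path: between grid points the drift and the two volatilities depend on the current environment state, and reflection at zero is enforced by the running-minimum (Skorokhod) construction. The constant-coefficient-per-state setup and the placeholder callables are simplifying assumptions of this sketch.

```python
# A minimal Euler-type sketch of a one-dimensional RDRS path with reflection
# at zero. b, sigma_E, sigma_S map an environment state to drift/volatility
# values; alpha_path[i] is the state held on [t_i, t_{i+1}). All of these are
# placeholders for the coefficients described around Definition 3.1.
import numpy as np

rng = np.random.default_rng(1)


def simulate_rdrs(T, n, alpha_path, b, sigma_E, sigma_S, z0=0.0):
    dt = T / n
    z = np.empty(n + 1)
    x, y = z0, 0.0                    # x: free (unreflected) process, y: regulator
    z[0] = z0
    for i in range(n):
        a = alpha_path[i]
        dBE = rng.normal(0.0, np.sqrt(dt))
        dBS = rng.normal(0.0, np.sqrt(dt))
        x += b(a) * dt + sigma_E(a) * dBE + sigma_S(a) * dBS
        y = max(y, -x)                # minimal nondecreasing regulator
        z[i + 1] = x + y              # reflected path stays nonnegative
    return z
```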
In terms of the well-posedness of an RDRS, readers are referred to a general discussion in Dai [Reference Dai7]. Furthermore, in Definition 3.1, the stochastic processes $B^{E}(\cdot)$ and $B^{S}(\cdot)$ are, respectively, two J-dimensional standard Brownian motions, which are independent of each other. For each state $i\in{\cal K}$ and a time $t\in[0,\infty)$, the nominal arrival rate vector $\lambda(i)$, the mean reward vector m(i), the nominal throughput vector $\rho(i)$, and a constant parameter vector $\theta(i)$ are given as follows,
The covariance matrices are given by:
The Itô integrals with respect to the Brownian motions are defined as:
3.2. A 3-stage users-selection and dynamic pricing/rate scheduling policy
In this subsection, we design a 3-stage users-selection, dynamic pricing, and rate scheduling policy through a 2-stage game-theoretic problem consisting of both zero-sum and non-zero-sum game-competitions myopically at each time point for the purpose as stated in the introduction of the paper.
3.2.1. General service capacity region
In our system, the jobs in the jth queue for each $j\in{\cal J}$ may be served at the same time by a random but at most $V_{j}$ ($\leq V$) number of service pools corresponding to selected utility (hash) functions at a particular time point. With this simultaneous service mechanism, the total service rate for the jth queue at the time point is the summation of the rates from all the pools possibly to serve the jth queue. More precisely, we index these pools by a subset ${\cal V}(\,j)$ of the set ${\cal V}$ as follows,
where, $v_{lj}$ with $l\in\{1,\ldots,V_{j}\}$ denotes the $v_{lj}$th pool in ${\cal V}(\,j)$. In the same way, a pool denoted by $v\in{\cal V}$ can possibly serve at most $J_{v}$ number of job classes represented by a subset ${\cal J}(v)$ of the set ${\cal J}$, i.e.,
where $j_{vl}$ with $l\in\{1,\ldots,J_{v}\}$ indexes the $j_{vl}$th job class in ${\cal J}(v)$. To be more illustrative, let L be a J × V constituent matrix such that:
Then, ${\cal V}(\,j)$ consists of all the non-zero components of the jth row of L while ${\cal J}(v)$ consists of all the non-zero components of the vth column of L. Furthermore, in each pool v, there are $J_{v}$ flexible parallel-servers with rate allocation vector:
where $c_{j_{vl}}(t)$ with $l\in\{1,\ldots,J_{v}\}$ is the assigned service rate to the $j_{vl}$th user at pool v and time t. Similarly, corresponding to the $l\in\{1,\ldots,J_{v}\}$, we will also denote the rate $c_{j_{vl}}(t)$ by $c_{vj}(t)$ for an index $j\in{\cal J}(v)$.
Note that, the vector in (3.9) takes values in a capacity region ${\cal R}_{v}(\alpha(t))$ driven by the FS-CTMC $\alpha=\{\alpha(t),t\in[0,\infty)\}$. For each given $i\in{\cal K}$ and $v\in{\cal V}$, the set ${\cal R}_{v}(i)$ is a convex region containing the origin and has $L_{v}$ $( \gt J_{v})$ boundary pieces (see e.g., the upper-left graph in Figure 2). In this region, every point is defined according to the associated users, i.e., $x=(x_{j_{v1}},\ldots,x_{j_{vJ_{v}}})$. On the boundary of ${\cal R}_{v}(i)$ for each $i\in{\cal K}$, $J_{v}$ of them are $(J_{v}-1)$-dimensional linear facets along the coordinate axes. The other ones denoted by ${\cal O}_{v}(i)$ are located in the interior of $R^{J_{v}}_{+}$. It is called the capacity surface of ${\cal R}_{v}(i)$ and it has $B_{v}=L_{v}-J_{v}\;( \gt 0)$ linear or smooth curved facets $h_{vk}(c_{v\cdot},i)$ on $R_{+}^{J_{v}}$ for $k\in{\cal U}_{v}\equiv\{1,2,\ldots,B_{v}\}$, i.e.,
Furthermore, if we define $C_{U_{v}}(i)$ to be the sum capacity upper bound for ${\cal R}_{v}(i)$, the facet in the center of ${\cal O}_{v}(i)$ is linear and is assumed to be a non-degenerate $(J_{v}-1)$-dimensional region. More precisely, it can be represented by
where $k_{U_{v}}\in{\cal U}_{v}$ is the index corresponding to $C_{U_{v}}(i)$. In addition, we suppose that any one of the $J_{v}$ linear facets along the coordinate axes forms a $(J_{v}-1)$-user capacity region associated with a particular group of $J_{v}-1$ users if the queue corresponding to the other user is empty. In the same manner, we can provide an interpretation for the $(J_{v}-l)$-user capacity region for each $l\in\{2,\ldots,J_{v}-1\}$.
Concerning the allocation of the service resources over the capacity regions to different users, we adopt the so-called head of line service discipline. Equivalently, the service goes to the packet at the head of the line for a serving queue where packets are stored in the order of their arrivals. The service rates are determined by a utility (or hash) function of the environmental state, the price for each user, and the number of packets in each of the queues. More precisely, for each state $i\in{\cal K}$, a price vector $p=(p_{1},\ldots,p_{J})$, and a queue length vector $q=(q_{1},\ldots,q_{J})'$, we define $\Lambda_{\cdot j}(pq,i)$ with $j\in{\cal J}$ to be the rate vector (in qubits/ps) of serving the jth queue at all its possible service pools, i.e.,
where,
Furthermore, let $\Lambda_{v\cdot}(pq,i)$ for each $v\in{\cal V}$ be the rate vector for all the users possibly served at service pool v, i.e.,
Thus, $c^{{\cal Q}(pq)}_{v_{lj}}(i)=c^{{\cal Q}(pq)}_{j}(i)$ if the pool index $v_{lj}\in{\cal V}(\,j)$ for an integer $l\in\{1,\ldots,V_{j}\}$ with $j\in{\cal J}$ while the total rate used in (2.11) can be represented by:
In the end, we impose the convention that an empty queue should not be served. Then, for each $v\in{\cal V}$ and ${\cal Q}\subseteq{\cal J}$ (e.g., a set as given by (3.13)), we can define:
Therefore, for all ${\cal Q}$ such that $\emptyset\subsetneqq{\cal Q}\subseteq{\cal J}(v)$ corresponding to each $v\in{\cal V}$, if $c^{\cal Q}_{v\cdot}(i)$ is on the boundaries of the capacity region ${\cal R}_{v}(i)$, we have the following observation that:
where $c^{\emptyset}_{v\cdot}(i)\in{\cal O}_{v}(i)$ and $\emptyset$ denotes the empty set. For typical examples of our capacity region, readers are referred to the upper-left graph in Figure 2.
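A simple way to picture such a region is as a set of facet constraints; the membership test below checks nonnegativity, the central sum-capacity facet $C_{U_v}(i)$, and any additional facets $h_{vk}(c_{v\cdot},i)\leq 0$ supplied as callables. The particular facet inequalities and the numbers in the usage comment are illustrative assumptions only.

```python
# A sketch of membership in a capacity region R_v(i): nonnegative rates, the
# central sum-capacity facet, and optional extra facets h_vk(c) <= 0. The
# facet callables and the numbers in the usage line are illustrative only.
import numpy as np


def in_capacity_region(c, sum_capacity, extra_facets=()):
    c = np.asarray(c, dtype=float)
    if np.any(c < 0.0):
        return False
    if c.sum() > sum_capacity:                 # the central facet C_{U_v}(i)
        return False
    return all(h(c) <= 0.0 for h in extra_facets)


# e.g., for the two-user region used later with c_1 + c_2 <= 2000:
# in_capacity_region([800.0, 900.0], 2000.0) -> True
```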
3.2.2. A dynamic pricing and scheduling policy with users-selection
For our purpose, we classify all the users into two types. More precisely, we first need to smartly choose the users to be served. In other words, at each time point and for each pool v, we intelligently select a set ${\cal M}(i,v)\equiv\{\,j_{v1}(i),\ldots,j_{vM_{v}}(i)\}$ of users to get into services with $j_{vl}\in{\cal J}$ and $l\in\{1,\ldots,M_{v}\}$ for a given positive integer $M_{v}\leq J_{v}$. Among these chosen users, we need to conduct the dynamic pricing while realizing optimal and fair resource allocation. Therefore, we design a strategy by mixing a saddle point and a static Pareto maximal-utility Nash equilibrium policy myopically at each time point t to a mixed zero-sum and non-zero-sum game problem for each state $i\in{\cal K}$ and a given valued queue length vector $pq=(p_{1}q_{1},\ldots,p_{J}q_{J})'$. Here we note that $p=(p_{1},\ldots,p_{J})'$ is a given price vector and $q=(q_{1},\ldots,q_{J})'$ is a given queue length vector such that $p_{j}=f_{j}(q_{j},i)$ as in (2.8) for each $j\in{\cal J}$ and $i\in{\cal K}$. The saddle point corresponds to the users’ selection while the Pareto optimality represents the full utilization of resources in the whole game system and the Nash equilibrium represents the fairness to all the chosen users. More exactly, in this game, there are J users (players) associated with the J queues. Each of them has his own utility function $U_{vj}(p_{j}q_{j},c_{vj})$ with $j\in{\cal J}(v)$ and $v\in{\cal V}(\,j)$. This utility function is a generalization of the existing one in Dai [Reference Dai6, Reference Dai8, Reference Dai9], Ye and Yao [Reference Ye and Yao28], and references therein. Examples of such a utility function include the so-called proportionally fair allocation, minimal potential delay allocation, and $(\beta,\alpha)$-proportionally fair allocation (see e.g., Ye and Yao [Reference Ye and Yao28]), which are widely used in internet protocols and communication systems. Every chosen user selects a policy to maximize his own utility function at each service pool v while the summation of all the users’ utility functions and the summation of the utility functions associated with the chosen users are also maximized. To wit, we can formulate a generalized users-selection, pricing, and resource-scheduling game problem by extending the ones in Examples 4.1–4.3 as follows,
while we have that:
Note that, the rate vector c in (3.20)-(3.21) is given by:
and the utility functions used in (3.20)-(3.21) are defined by:
Then, by extending the concepts of Nash equilibrium point, saddle point, and Pareto optimality in Dai [Reference Dai8, Reference Dai9], Marchi [Reference Marchi19], Nash [Reference Nash21] and Rosen [Reference Rosen25], we have the following definition concerning a utility based 2-stage game-theoretic policy via a saddle point to a zero-sum game problem and a static Pareto maximal-utility Nash equilibrium point to a non-zero-sum game problem myopically at each particular time point.
Definition 3.2. For each state $i\in{\cal K}$, a price vector $p\in R^{J}_{+}$, and a queue length vector $q\in R^{J}_{+}$ such that $(2.8)$ is satisfied, we call the rate vector:
a utility based 2-stage game-theoretic policy if it is a solution to the game problem in (3.20)-(3.21), which is obtained by first finding a saddle point to a zero-sum game problem and then a static Pareto maximal-utility Nash equilibrium point to a non-zero-sum game problem, such that, for each $j\in{\cal J}(v)\setminus{\cal Q}(q)$ and any given $c(i)\in F_{{\cal Q}(q)}(i)$, the following facts are true,
3.3. Main theorem under the policy
Before stating our main theorem, we first introduce another concept of the so-called 2-stage dual-cost game-theoretic policy via a saddle point to a zero-sum game problem and a minimal-dual-cost Pareto Nash equilibrium point to a non-zero-sum game problem myopically at each given time point for a given price parameter $p\in R_{+}^{J}$. Then, based on the 2-stage dual-cost game-theoretic policy, we can inversely obtain the price vector and determine the target rate vector. To do so, we formulate a 2-stage minimal-dual-cost game problem in (3.22), which corresponds to the utility based one in (3.20)-(3.21). More precisely, for a given $i\in{\cal K}$, a price parameter $p\in R_{+}^{J}$, a rate vector $c\in {\cal R}(i)\equiv{\cal R}_{1}(i)\times\ldots\times {\cal R}_{V}(i)$, and a parameter $w\geq 0$, the 2-stage minimal-dual-cost game problem can be presented as follows:
subject to
where, the cost function $C_{vj}(pq,c)$ for each $j\in{\cal J}(v)$ and $v\in\{0\}\cup{\cal V}(\,j)$ is defined by:
and ${\cal C}(c)$ is an index set associated with the non-zero rates and non-empty queues, i.e.,
In other words, if the environment is in state $i\in{\cal K}$, we try to find a queue state q for a given $c\in{\cal R}(i)$, a price parameter vector $p\in R_{+}^{J}$, and a given parameter $w\geq 0$ such that the individual user’s dual-costs and the total dual-cost over the system are all minimized at the same time while the (average) workload meets or exceeds w. Then, we have the following definitions.
Definition 3.3. For each state $i\in{\cal K}$, a price vector $p\in R_{+}^{J}$, and a rate vector $c(i)\in{\cal R}(i)$, a queue length vector $q^{*}\in R^{J}_{+}$ is called a dual-cost based 2-stage game-theoretic policy if it is a solution to the game problem in (3.22), which is obtained by first finding a saddle point to a zero-sum game problem and then a static Pareto minimal-dual-cost Nash equilibrium point to a non-zero-sum game problem, such that, for each $j\in{\cal C}(c)$, $v\in\{0\}\cup{\cal V}$, and any given $q\in R^{J}_{+}$ with $q_{j}=0$ when $j\in{\cal J}\setminus {\cal C}(c)$, we have that:
Note that, once we obtain the queue policy point $q^{*}$ with respect to the given price vector p from Definition 3.3, we can inversely deduce the corresponding price policy vector p in terms of $q^{*}$, i.e., $p=g(q^{*})$ as in (2.8). This relationship can be used to design iterative algorithms in our numerical simulations. Furthermore, in Definition 3.3, we have used the stricter concept of a “Pareto optimal Nash equilibrium point”; this concept can be relaxed to a “Pareto optimal point” and the related theoretical discussion remains true. In certain cases and when it is necessary, we can shift the Pareto optimal point to the Pareto optimal Nash equilibrium point by some mapping techniques.
Definition 3.4. Let $\hat{Q}^{r,(P,G)}(\cdot)$ and $\hat{W}^{r,(P,G)}(\cdot)$ be the diffusion-scaled queue length and workload processes, respectively, under an arbitrary feasible dynamic pricing and rate scheduling policy (P, G) satisfying the Lipschitz condition in (2.8). A vector process $\hat{Q}(\cdot)$ is called an asymptotic dual-cost based 2-stage game-theoretic policy globally over the whole time horizon if, for any $t\geq 0$ and $v\in\{0\}\cup{\cal V}(\,j)$ with $j\in{\cal J}$, we have that:
Furthermore, for each $j\in{\cal M}(\alpha(t),v,t)\cap({\cal C}(c)\cap{\cal J}(v))$, we have that:
In addition, for each $j\in({\cal C}(c)\cap{\cal J}(v))\setminus{\cal M}(\alpha(t),v,t)$, we have that:
Note that, in (3.25)-(3.26) and for each $j\in{\cal J}$, we have that:
Next, let $q^{*}(w,p,\rho(i))$ be a dual-cost based 2-stage game-theoretic policy corresponding to the game problem in (3.22) in terms of each given number $w\geq 0$, $p\in R_{+}^{J}$, and $i\in{\cal K}$ at a given time t. Furthermore, let $p(w,q^{*},\rho(i))$ denote its corresponding inverse price vector with respect to $q^{*}$ and construct price policy vector:
such that the Lipschitz condition in (2.8) is satisfied. Then, our main theorem can be presented as follows.
Theorem 3.5. For the game-competition based users-selection, dynamic pricing, and scheduling policy determined by (3.20)-(3.22) and (3.28) with $Q^{r}(0)=0$ and conditions (6.3)-(6.8) (that will be detailed in Section 6), the convergence in (3.2) is true. Furthermore, the limit queue length $\hat{Q}(\cdot)$ and the total workload $\hat{W}(\cdot)$ in (3.2) have the relationship:
where $\hat{P}(\cdot)$ is the inverse price vector process defined through (3.28) and $\hat{W}(\cdot)$ is a 1-dimensional RDRS in the strong sense with:
for $t\in[0,\infty)$ and some constant $\theta_{j}(i)$ for each $j\in\bigcup_{v\in{\cal V}}{\cal M}(i,v,t)$. In addition, there is a common supporting probability space, under which and with probability one, the limit queue length $\hat{Q}(\cdot)$ is an asymptotic dual-cost based 2-stage game-theoretic policy globally over time interval $[0,\infty)$. Finally, the limit workload $\hat{W}(\cdot)$ is also asymptotic minimal in the sense that:
The proof of Theorem 3.5 will be given in Section 6.
3.4. CNN-based algorithm flow chart
Based on the policy derived in Theorem 3.5, we can design a CNN based algorithm flow chart in Figure 6.
More precisely, for a constant $T\in[0,\infty)$, we divide the interval $[0,T]$ equally into n subintervals $\{[t_{i},t_{i+1}],i\in\{0,1,\ldots,n-1\}\}$ with $t_{0}=0$, $t_{n}=T$, and $\Delta t_{i}=t_{i+1}-t_{i}=\frac{T}{n}$. Furthermore, let
for each process $F(\cdot)\in\{B^{E}(\cdot),B^{S}(\cdot),\hat{W}(\cdot),\hat{Y}(\cdot)\}$. Then, we can develop an iterative procedure as shown in Figure 6 to conduct RDRS model based simulation studies and to illustrate the efficiency of our designed policy. In this algorithm, the main part is the policy computing concerning users-selection, dynamic pricing via queueing strategy, and rate scheduling in the federated learning center at each time $t_{i+1}$ for given initial conditions at $t_{0}$. As shown in Figure 2 and as mentioned in the Introduction of this paper, our game-theoretic problem is a J × V-dimensional problem. In general, an explicit solution is not available. Thus, we need a numerical method as designed in Figure 6 to solve the 2-stage game-theoretic problem. The target is to get an associated saddle point to its corresponding zero-sum game problem and a Nash equilibrium point to its non-zero-sum game problem. In a real-world system, J and V may take large values and our CNN algorithm exhibits big model behavior. Since our designed platform is a cloud computing (or even the near future quantum-cloud computing) based one, this big model can be effectively solved. However, to illustrate the usage of our designed algorithm and demonstrate the efficiency of our designed policy, we choose smaller J and V to conduct simulation case studies, which are presented in Sections 4–5.
Finally, note that the limit total workload $\hat{W}(t_{i+1})$ in Figure 6 can be replaced by the total workload in a real-world system if the input load of the system is close to heavy traffic (that will be detailed in Section 6). Furthermore, for the CNN algorithm flow chart designed in Figure 6, there are actually four processing stages in the federated learning center. The 2-stage game-theoretic problem is divided into three stages with an additional Stage II(b) to handle the dynamic pricing. Stage III directly corresponds to the integral relationship between the utility functions and their dual cost functions in (3.22).
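A high-level sketch of this iterative loop (omitting the CNN itself) is given below: at each grid point the federated learning center selects users, prices, and allocates rates from the current workload estimate, then advances the workload RDRS by one step. The three solver callbacks stand in for the saddle-point, pricing, and Nash-equilibrium computations described above and are assumptions of this sketch rather than the paper's exact procedure.

```python
# A high-level sketch of the iterative procedure of Figure 6. select_users,
# price and schedule are placeholders for the Stage I-III computations (e.g.,
# a trained CNN); rdrs_step advances the one-dimensional workload RDRS.
def run_policy_loop(T, n, alpha_path, select_users, price, schedule, rdrs_step,
                    w0=0.0):
    dt = T / n
    w, history = w0, []
    for i in range(n):
        a = alpha_path[i]
        users = select_users(w, a)            # Stage I: users-selection (saddle point)
        p = price(w, users, a)                # Stage II(b): dynamic pricing
        rates = schedule(w, p, users, a)      # Stage II/III: rate scheduling (Nash)
        w = max(0.0, rdrs_step(w, rates, a, dt))
        history.append((users, p, rates, w))
    return history
```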
4. Three illustrative policy examples
To be more illustrative and as a preparation for the following simulation case studies, we here present a 2-stage dynamic pricing and rate scheduling example and a 3-stage users-selection, dynamic pricing, and rate scheduling example based on a single-pool service system as shown in Figure 2, together with a third example involving two service pools. For the single-pool examples, we will omit the pool index v.
4.1. The first example
The first example corresponds to a 2-user case as shown in Figure 7, which can be used to model the DaiCoin based digital payment system with two types of Ethereums (corresponding to two DaiCoins) as in Maker [18]. It can also be used to model a MIMO wireless communication channel (i.e., a single base station equipped with two antennas) shared by two users or a quantum computer with two eigenmodes (see e.g., Dai [Reference Dai9, Reference Dai12]). In this case, we are interested in the problem of how to price the two users and conduct the computing rate (i.e., power) resource allocation cooperatively inside a service system. More precisely, we take V = 1 with J = 2 and assume that the state space of the FS-CTMC $\alpha(t)$ defined in Section 2 consists only of a single state (i.e., $\alpha(t)\equiv 1$ for all $t\in[0,\infty)$). In a MIMO wireless environment, this case is associated with the so-called pseudo static channels. Then, the capacity region denoted by ${\cal R}$ is supposed to be a non-degenerate convex one confined by five boundary lines including the two on the x-axis and y-axis as shown in Figure 7. The capacity upper bound of the region satisfies $c_{1}+c_{2}=2,000$. This region corresponds to a degenerate fixed MIMO wireless channel of the generally randomized one in Dai [Reference Dai6]. For each price vector $p=(p_{1},p_{2})\in R_{+}^{2}=[0,\infty)\times[0,\infty)$ corresponding to the process P(t) in (2.8) and the queue length vector $q=(q_{1},q_{2})\in R_{+}^{2}=[0,\infty)\times[0,\infty)$ corresponding to the process Q(t) defined in (2.7) at a particular time point, we take the utility functions in terms of rate vector $c=(c_{1},c_{2})\in{\cal R}$ for user 1 and user 2, respectively, by:
where $\ln(\cdot)$ is the natural logarithm (with base e). Note that, the utility functions $U_{1}$ and $U_{2}$ in (4.1) are called proportionally fair and minimal potential delay allocations, respectively, which are widely used in communication systems. Furthermore, in a (quantum) blockchain system, these utility functions can be considered as generalized hash functions to replace the currently used random number generators to generate partial nonce values and private keys, i.e., $((p_{1},p_{2}),(q_{1},q_{2}),(c_{1},c_{2}))$.
Example 4.1. For the upper graph case in Figure 7 and by the utility functions in (4.1), we can propose a 2-stage pricing and rate-scheduling policy at each time point $t\in[0,\infty)$ by a Pareto maximal-utility Nash equilibrium point to the following non-zero-sum game problem:
where $U_{0}(pq,c)=U_{1}(pq,c)+U_{2}(pq,c)$. To wit, if $c^{*}=(c_{1}^{*},c_{2}^{*})$ is a solution to the game problem in (4.2), we have that:
Furthermore, it follows from the inequalities in (4.3) that, if a game player’s (i.e., a user’s) rate service policy is unilaterally changed, his utility cannot be improved.
Remark 4.2. Due to the non-degenerate assumption imposed in (3.10)-(3.11), the strictly positive outer normal vector ζ to the facet ${\cal O}_{v}$ with v = 1 at the point $\rho^{*}_{v}$ exists, as shown in Figure 7. Hence, the so-called complete resource pooling condition (CRP) as introduced in Subsubsection 6.1.2 holds. In this case, a so-called fixed point defined as a Pareto minimal Nash equilibrium point to a dual-cost non-zero-sum game problem can be explicitly constructed, which is presented in (5.5). Interested readers are also referred to Dai [Reference Dai8], Ye and Yao [Reference Ye and Yao28] for more related discussions.
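As a small numerical companion to Example 4.1, the sketch below searches the central sum-capacity facet $c_{1}+c_{2}=2{,}000$ for the split that maximizes the total utility $U_{0}=U_{1}+U_{2}$ (for a pair of utilities increasing in the own rate, the maximizer lies on the capacity surface; we restrict to the central facet for simplicity). The forms $U_{1}=p_{1}q_{1}\ln c_{1}$ and $U_{2}=-p_{2}q_{2}/c_{2}$ are the standard proportionally fair and minimal potential delay expressions and may differ from the exact expressions in (4.1).

```python
# A numerical sketch for Example 4.1: maximize U0 = U1 + U2 along the facet
# c1 + c2 = 2000, assuming the standard forms U1 = p1*q1*ln(c1) and
# U2 = -p2*q2/c2 (these are assumptions; (4.1) may use different expressions).
import numpy as np


def pareto_rate_split(p1q1, p2q2, capacity=2000.0, grid=20000):
    c1 = np.linspace(1e-6, capacity - 1e-6, grid)
    c2 = capacity - c1
    u0 = p1q1 * np.log(c1) - p2q2 / c2
    k = int(np.argmax(u0))
    return c1[k], c2[k]


# e.g., pareto_rate_split(1.0, 1.0) returns the split for unit valued queue lengths.
```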
4.2. The second example
The second example adds Stage 0 for users’ selection, as shown in Figure 8. Compared with the first case with J = 2, we here consider a 3-user case (i.e., J = 3) and add one more user-selection layer. At each time point, we choose two of the three users for service according to a zero-sum game competition policy. When any two users $i,j\in\{1,2,3\}$ with i ≠ j are selected, they will be served based on a non-zero-sum game competition policy. The capacity upper bound of the corresponding capacity region satisfies $c_{i}+c_{j}=2,000$ as in the first 2-user case. Furthermore, suppose that, at a particular time point, there is a price vector $p=(p_{1},p_{2},p_{3})\in R_{+}^{3}$ corresponding to the process P(t) in (2.8) and a queue length vector $q=(q_{1},q_{2},q_{3})\in R_{+}^{3}$ corresponding to the process Q(t) defined in (2.7). Then, for each $(c_{i},c_{j})\in{\cal R}$, the corresponding utility functions are taken as in (4.1) if $i,j\in\{1,2\}$. However, if i = 3 or j = 3, the corresponding utility function is taken to be the following one,
Example 4.3. For the second case corresponding to both the upper and lower graphs in Figure 8 and by the utility functions in (4.1) and (4.4), we can design a 3-stage users-selection, pricing and rate-scheduling policy myopically at each time point $t\in[0,\infty)$, which involves two steps as follows. First, we choose two users for service by a saddle point policy via the solution to the zero-sum game problem,
for each $j\in\{1,2,3\}$, $j_{1}\in\{1,2,3\}\setminus\{\,j\}$, $j_{2}\in\{1,2,3\}\setminus\{\,j,j_{1}\}$, and a fixed $pq\in R_{+}^{3}$, where,
In other words, if $c^{*}=(c_{1}^{*},c_{2}^{*},c_{3}^{*})$ is a solution to the game problem in (4.5), and if
then, for a fixed $pq\in R_{+}^{3}$, we have that:
Second, when two users corresponding to the summation $U_{0j}=U_{k}+U_{l}$ for an index $j\in\{1,2,3\}$ with two associated indices $k,l\in\{1,2,3\}$ as in one of (4.6) are selected, we can propose a 2-stage pricing and rate-scheduling policy at each time point by a Pareto maximal-utility Nash equilibrium point to the non-zero-sum game problem for a fixed $pq\in R_{+}^{3}$,
To wit, if $c^{*}=(c_{k}^{*},c_{l}^{*})$ is a solution to the game problem in (4.8), we have that:
4.3. The third example
The third example corresponds to a case with 2 pools (i.e., V = 2) as shown in Figure 9.
In reality, this system corresponds to a MIMO wireless communication system with two base stations. Each of the base stations is equipped with a single antenna. However, the two base stations can cooperate with each other to form a transmission channel with a capacity region as shown in Figure 9. The rest of the illustration is similar to the one for the second example. Hence, we omit it here.
5. Simulation case studies via RDRS models
In this section, we conduct simulation case studies for Examples 4.1–4.3 presented in Section 4. The simulation for the third example with 2 pools in Section 4 is similar to the one for Example 4.3. Hence, we omit it here. The main point of these simulation studies is to illustrate that our policies proposed in these examples outperform several other policies in certain ways. These policies used for the purpose of comparison include an existing constant pricing policy, an existing 2D-Queue policy, a newly designed random users’-selection stochastic pooling policy, and an arbitrarily selected dynamic pricing policy. As mentioned in Section 4, Examples 4.1–4.3 correspond to a single-pool system with two users and three users, respectively. Thus, we will omit all the related pool indices v. In an associated real-world system, the parameter vectors p and q in (4.2) (or (4.5)) are the randomly evolving pricing process P(t) in (2.8) and the queue length process Q(t) in (2.7). Hence, it is our concern in this section how to employ the RDRS performance model in Definition 3.1 to evaluate the usefulness of our proposed myopic users-selection, dynamic pricing, and scheduling policies globally over the whole time horizon $[0,\infty)$ for Examples 4.1–4.3. To interpret our numerical simulation implementations, we first identify the corresponding dual-cost functions $C_{j}(q,c)$ as defined in (3.22) with $j\in\{1,2,3\}$ for the associated $U_{j}(q,c)$ given in (4.1) and (4.4). More precisely,
where $1/\mu_{j}$ for all $j\in\{1,2,3\}$ are average quantum packet lengths associated with the three users as explained just before (2.6).
5.1. The simulation for Example 4.1
Based on the first two dual-cost functions in (5.1), we can formulate a corresponding 2-stage minimal dual-cost non-zero-sum game problem for a price parameter $p\in R_{+}^{2}$ as follows,
for a fixed constant w > 0, a fixed $c\in{\cal R}$, and all $j\in\{0,1,2\}$ with $C_{0}(pq,c)=C_{1}(pq,c)+C_{2}(pq,c)$. Since $C_{j}(p_{j}q_{j},c_{j})$ for each $j\in\{1,2\}$ is strictly increasing with respect to $p_{j}q_{j}$ (or simply $q_{j}$), a Pareto minimal dual-cost Nash equilibrium point to the problem in (5.2) must be located on the line where the equality of the constraint inequality is true (i.e., $q_{1}/\mu_{1}+q_{2}/\mu_{2}=w$). Thus, we know that:
Hence, it follows from (5.3) that:
Then, by solving the equation $\frac{\partial\bar{f}(q_{1})}{\partial q_{1}}=0$, we can get the minimal value of the function $\bar{f}(q_{1})$ for each $p\in R_{+}^{2}$. More precisely, the unique Pareto minimal point $q^{*}(p,w)=(q_{1}^{*},q_{2}^{*})(p,w)$ to the problem in (5.2) can be explicitly given by:
From the green curve in the upper-left graph of Figure 10 where $p_{1}=p_{2}=1$ and $w=10,000$, we can see that this point is close to the one corresponding to $q_{1}=0$, and we can consider it as a Pareto minimal Nash equilibrium point near the boundary. Another Nash equilibrium point is the intersection point of the red and blue curves in the left graph of Figure 10. Obviously, this point is not a minimal total cost point. However, we can use some transformation technique to shift the minimal point to this one and to design a more fairly balanced decision policy. Nevertheless, for the purpose of this research in finding the Pareto utility-maximization Nash equilibrium policy, we use the point in (5.5) as our decision policy. In this case, for the price parameter $p\in R_{+}^{2}$, we have:
Then, associated with a given queue length based Pareto minimal Nash equilibrium point in (5.5), we can obtain the relationship between prices $p_{1}$ and $p_{2}$ as follows,
From (5.7), we can see that there are different choices of dynamic pricing policies corresponding to Pareto minimal Nash equilibrium point $q^{*}(p,w)$ in (5.5). For the current study, we take
whose dynamic evolutions with the queue length $q_{1}$ are shown in the upper-right graph of Figure 10.
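The reduction in (5.3)-(5.5) can also be carried out numerically when a closed form is inconvenient: substitute the binding constraint $q_{1}/\mu_{1}+q_{2}/\mu_{2}=w$ into the total dual cost and minimize over $q_{1}$. In the sketch below the (vectorized) cost callables stand in for the expressions in (5.1) with the rate arguments held fixed, so they are assumptions of this sketch rather than the paper's formulas.

```python
# A generic numerical version of the reduction in (5.3)-(5.5): minimize the
# total dual cost along the binding workload line q1/mu1 + q2/mu2 = w. The
# vectorized callables C1, C2 are placeholders for the dual costs in (5.1)
# with the rate arguments held fixed.
import numpy as np


def pareto_min_queue_split(C1, C2, p1, p2, w, mu1, mu2, grid=20000):
    q1 = np.linspace(0.0, w * mu1, grid)      # feasible q1 on the constraint line
    q2 = mu2 * (w - q1 / mu1)                 # q2 is then determined by the equality
    total = C1(p1 * q1) + C2(p2 * q2)         # \bar{f}(q1): the total dual cost
    k = int(np.argmin(total))
    return q1[k], q2[k]
```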
Next, by Theorem 3.5, we know that the coefficients of the 1-dimensional RDRS under our dynamic pricing and game-based scheduling policy for the physical workload process $\hat{W}$ can be denoted by:
Then, based on $\hat{W}$, we can get the dynamic queueing policy by (5.5) and its associated dynamic pricing policy through (5.8):
After determining the initial prices $\hat{P}(0)=(initialprice1, initialprice2)$, we suppose that $\hat{P}(t)$ has the lower bound price protection functionality, i.e., $\hat{P}(t)\in[lowerboundprice1,\infty)$ × $[lowerboundprice2,\infty)$. Corresponding to (5.8), this truncated price process still owns the Lipschitz continuity as imposed in (2.8). Then, by combining the policy in (5.10) with the simulation algorithms for RDRSs, we can illustrate that our policy in (5.10) is cost-effective in comparison with a constant pricing policy, a 2D-Queue policy, and an arbitrarily selected dynamic pricing policy. These simulation comparisons are presented in Figure 11 with different parameters. The number N of simulation iterations for these comparisons is 6,000 and the simulation time interval is $[0,T]$ with T = 200, which is further divided into $n=5,000$ subintervals. The first graph on the left-column in Figure 11 is the mean total cost difference (MTCD) at each time point $t_{i}$ with $i\in\{0,1,\ldots,5,000\}$ between our current dynamic pricing policy in (5.10) and the constant pricing policy with $P_{1}(t)=P_{2}(t)=1$ for all $t\in[0,\infty)$, i.e.,
where $\omega_{j}$ denotes the $j$th sample path and $Q(\omega_{j},t_{i})$ in (5.11) is the queue length corresponding to the constant pricing policy at each time point $t_{i}$. The second graph in the left column of Figure 11 shows the MTCD between our newly designed dynamic pricing policy in (5.10) and a 2D-Queue policy used as an alternative comparison policy in Dai [Reference Dai8]. For this 2D-Queue policy, the constant pricing with $P_{1}(t)=P_{2}(t)=1$ is employed and the associated Q(t) is modeled as a two-dimensional RDRS as in Dai [Reference Dai8]. The third graph in the left column of Figure 11 shows the MTCD between our current dynamic pricing policy in (5.10) and an arbitrarily selected dynamic pricing policy given by:
with the associated queue policy $Q(t)=\hat{Q}(t)$. The first and second graphs in the right column of Figure 11 display the dynamics of $\hat{Q}(t)$ for both users. The third graph in the right column of Figure 11 shows the price evolutions for the two users. From the first graph in Figure 11, we can see that the cost is relatively large if the initial prices are relatively high. All of the other comparisons in Figure 11 show the cost-effectiveness of our policy in (5.10).
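To illustrate how such a comparison can be organized, the sketch below simulates a toy one-dimensional reflected diffusion with regime switching by an Euler scheme, splits the workload into two queue lengths by a fixed proportional rule, and reports the MTCD between a dynamic and a constant pricing rule at each time point, in the spirit of (5.11). All numerical ingredients (drift and diffusion coefficients, the switching intensity, the workload-to-queue split, the pricing and cost rules, and the reduced path and step counts) are assumptions for illustration only; they do not reproduce the coefficients in (5.9), the map (5.5), or the policy (5.10).

import numpy as np

rng = np.random.default_rng(0)

# Toy sizes; the experiment in Figure 11 uses N = 6,000 paths and n = 5,000 steps.
N, T, n = 200, 200.0, 1_000
dt = T / n
b = np.array([-0.5, 0.3])    # assumed state-dependent drift of the workload RDRS
sig = np.array([1.0, 1.5])   # assumed state-dependent diffusion coefficient
switch_rate = 0.05           # assumed switching intensity of the environment alpha(t)
mu = np.array([1.0, 2.0])    # assumed service rates of the two users

def simulate_workload():
    """Euler scheme for a one-dimensional reflected diffusion with regime
    switching; reflection at zero is imposed by taking the positive part."""
    W = np.zeros((N, n + 1))
    state = np.zeros(N, dtype=int)
    for i in range(n):
        flip = rng.random(N) < switch_rate * dt
        state = np.where(flip, 1 - state, state)
        incr = b[state] * dt + sig[state] * np.sqrt(dt) * rng.standard_normal(N)
        W[:, i + 1] = np.maximum(W[:, i] + incr, 0.0)
    return W

def queues_from_workload(W, share=0.4):
    """Simplified stand-in for the workload-to-queue map (5.5): a fixed
    proportional split of the workload between the two users."""
    return share * mu[0] * W, (1.0 - share) * mu[1] * W

def total_cost(q1, q2, p1, p2, c=(1.0, 1.0)):
    """Hypothetical dual cost standing in for the functions in (5.1)."""
    return c[0] * (p1 * q1) ** 2 + c[1] * (p2 * q2) ** 2

W = simulate_workload()
q1, q2 = queues_from_workload(W)
# A hypothetical dynamic pricing rule (decreasing in the queue length) versus
# constant pricing; in the paper each policy drives its own queue process,
# whereas here the same workload paths are reused for brevity.
cost_dyn = total_cost(q1, q2, 1.0 / (1.0 + q1), 1.0 / (1.0 + q2))
cost_cst = total_cost(q1, q2, 1.0, 1.0)
mtcd = np.mean(cost_dyn - cost_cst, axis=0)   # MTCD at each time point, cf. (5.11)
print(mtcd[:5])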
5.2. The simulation for Example 4.3
Based on the three dual-cost functions in (5.1), we can first select two of the three users for service by formulating the following minimal dual-cost zero-sum game problem for a price parameter $p\in R_{+}^{3}$, a constant w > 0, and a fixed $c\in{\cal R}$,
where $j\in\{1,2,3\}$, $j_{1}\in\{1,2,3\}\setminus\{\,j\}$, and $j_{2}\in\{1,2,3\}\setminus\{\,j,j_{1}\}$, and
In other words, if $q^{*}=(q_{1}^{*},q_{2}^{*},q_{3}^{*})$ is a solution to the game problem in (5.13), and if,
then, for any two fixed $p,c\in R_{+}^{3}$, we have that:
Furthermore, once the two users corresponding to the summation $C_{0j}=C_{k}+C_{l}$, for an index $j\in\{1,2,3\}$ and the two associated indices $k,l\in\{1,2,3\}$ as in one of (5.13), are selected, we can propose a 2-stage pricing and queueing policy at each time point via a Pareto minimal dual-cost Nash equilibrium point to the non-zero-sum game problem for two fixed $p,c\in R_{+}^{3}$,
That is, if $q^{*}=(q_{k}^{*},q_{l}^{*})$ is a solution to the game problem corresponding to the two users, we have that:
Thus, for the price parameter $p\in R_{+}^{3}$ and each $w\geq 0$, it follows from (5.13)-(5.16) and (5.17)-(5.18) that our queueing policy $(q_{1}^{*}(p,w),q_{2}^{*}(p,w),q_{3}^{*}(p,w))$ can be designed by:
where the function $\bar{g}_{1}$ is given in (5.5) and $\hat{g}_{1}$ is calculated in a similar way as follows,
The intersection point of $\hat{g}_{1}$ and $\hat{g}_{3}$ in terms of $q_{1}$ is a Pareto optimal Nash equilibrium point as shown in the lower-left graph of Figure 10. Furthermore, based on (5.19)-(5.20), we can inversely determine our pricing policy $p=(p_{1},p_{2},p_{3})$ as follows,
where κ is defined in (5.7) and $\hat{\kappa}$ can be calculated in the same way as follows,
Furthermore, $\varpi(q^{*}_{1})$ in (5.21) is a nonnegative function of $q^{*}_{1}$, and it is taken to be unity when plotting the dynamic pricing evolution in the lower graph of Figure 10 with $w=10,000$.
To show the cost-effectiveness of our queueing policy in (5.19) with its associated pricing policy in (5.21), we present an arbitrarily selected stochastic pooling policy for comparison as follows,
where u is a uniformly distributed random number.
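As an illustration of the two selection mechanisms just described, the following sketch contrasts a cost-based pair selection with a random pooling baseline. The quadratic cost is a hypothetical stand-in for the dual-cost functions in (5.1), and picking the pair with the smallest summed dual cost is a simplified surrogate for the zero-sum selection step in (5.13)-(5.16); the random pair choice is a plausible reading of (5.23), whose exact form (and thresholds for the uniform random number u) is given in its display.

import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical stand-in for the three dual-cost functions in (5.1).
def dual_cost(pj_qj, cj=1.0):
    return cj * pj_qj ** 2

def select_pair_zero_sum(p, q):
    """Pick the two users (k, l) whose summed dual cost C_k + C_l is minimal,
    as a simplified surrogate for the zero-sum selection in (5.13)-(5.16)."""
    pairs = list(combinations(range(3), 2))
    costs = [dual_cost(p[k] * q[k]) + dual_cost(p[l] * q[l]) for k, l in pairs]
    return pairs[int(np.argmin(costs))]

def select_pair_random():
    """Stochastic pooling baseline in the spirit of (5.23): a uniformly
    distributed random number u decides which pair of users is pooled;
    the equal thresholds below are an assumption."""
    u = rng.random()
    return (0, 1) if u < 1 / 3 else (0, 2) if u < 2 / 3 else (1, 2)

print(select_pair_zero_sum(p=[1.0, 1.0, 1.0], q=[3.0, 1.0, 2.0]))
print(select_pair_random())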
After determining the initial price vector $\hat{P}(0)=(\text{initial price } 1, \text{initial price } 2, \text{initial price } 3)$, we suppose that $\hat{P}(t)$ has both the lower-bound price protection and the upper-bound constraint functionalities, i.e., $\hat{P}(t)\in[\text{lower bound price } 1, \text{upper bound price } 1)\times[\text{lower bound price } 2, \text{upper bound price } 2)\times[\text{lower bound price } 3, \text{upper bound price } 3)$. Corresponding to (5.21), this truncated price process retains the Lipschitz continuity imposed in (2.8). Then, by arguments similar to those used for (5.11), we can conduct the corresponding simulation comparisons for this example, as shown in Figures 4–5. The cost evolution under our queueing policy in (5.19) with its associated pricing policy in (5.21) is shown in the first graph of the left column in each of Figures 4–5. Its MTCD in (5.11), compared with the arbitrarily selected stochastic pooling policy in (5.23), is displayed in the first graph of the right column in each of Figures 4–5. The cost evolution under our queueing policy in (5.19) with constant pricing (i.e., $p_{1}=p_{2}=p_{3}$) is shown in the second graph of the left column in each of Figures 4–5. In this constant pricing case, its MTCD in (5.11) compared with the arbitrarily selected stochastic pooling policy in (5.23) is displayed in the second graph of the right column in each of Figures 4–5. The MTCD under our queueing policy in (5.19) between its associated pricing policy in (5.21) and the constant pricing policy is shown in the third graph of the left column in each of Figures 4–5. The price evolutions for the three users are shown in the third graph of the right column in each of Figures 4–5. In the special case with the parameters shown in Figure 5, the three price evolutions coincide. Furthermore, the MTCD between our dynamic pricing policy in (5.21) and the constant pricing policy is zero, as shown in the third graph of the left column in Figure 5.
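The price protection just described amounts to projecting the raw price vector onto a box at each time point. A minimal sketch is given below, with purely illustrative bound values since the initial price and bound placeholders above are not numerical; because a box projection is 1-Lipschitz, composing it with a Lipschitz pricing function keeps the Lipschitz continuity required in (2.8).

import numpy as np

def truncate_prices(p, lower, upper=None):
    """Project a raw price vector componentwise onto the box [lower, upper]
    (upper may be omitted for the lower-bound-only protection of Section 5.1).
    A box projection is 1-Lipschitz, so the truncated price process keeps the
    Lipschitz continuity required in (2.8)."""
    p = np.asarray(p, dtype=float)
    return np.maximum(p, lower) if upper is None else np.clip(p, lower, upper)

# Purely illustrative bounds; the placeholders in the text are not numerical.
print(truncate_prices([0.2, 5.0, 1.5],
                      lower=[0.5, 0.5, 0.5],
                      upper=[3.0, 3.0, 3.0]))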
6. Justification of RDRS modeling
In this section, we theoretically prove the correctness of our RDRS modeling presented in Theorem 3.5. For the reader's convenience, we first outline the proof of Theorem 3.5, which is a technical generalization of the corresponding proofs in Dai [Reference Dai8, Reference Dai9]. The major step in this generalization is to incorporate the dynamic pricing functions in (2.8)-(2.9), in terms of both the queue length $Q_{j}(t)$ with $j\in\{1,\ldots,J\}$ and the random environment $\alpha(t)$, into the proof. In Dai [Reference Dai8], a corresponding theorem is proved for a generic game platform whose resource scheduling policy is based solely on a “win-win” fair-sharing non-zero-sum game, designed for multiple users that share resources and compete fairly. The platform studied in Dai [Reference Dai8] consists of multiple intelligent (quantum) cloud-computing pools and parallel queues. The arrival data streams associated with all the users are modeled by TSRRPs. Every user in the system can be served simultaneously by multiple service pools and, in turn, every service pool, consisting of parallel servers, can serve multiple users simultaneously. The associated RDRS performance model is established under the “win-win” fair resource-sharing scheduling policy via diffusion approximation. In Dai [Reference Dai9], the study in Dai [Reference Dai8] is extended to a platform with a “win-lose & win-win” 2-stage mixed zero-sum and non-zero-sum game-theoretic scheduling policy. However, the study in Dai [Reference Dai9] still concerns only 2-stage resource competition and resource sharing and does not consider dynamic pricing. Therefore, in the following two subsections, we prove Theorem 3.5 by incorporating the dynamic pricing functions in (2.8)-(2.9), along the lines of the corresponding proof in Dai [Reference Dai9].
6.1. The required conditions
In this subsection, we present the conditions required to prove our RDRS model.
6.1.1. Conditions on utility functions
The utility functions can either be simply taken as the well-known proportionally fair and minimal potential delay allocations, as used in (4.1) for Example 5, or be taken generally such that the existence of a utility-based 2-stage game-theoretic policy corresponding to the game problem in (3.20)-(3.21) is guaranteed. More precisely, for each given $p\in R_{+}^{J}$, we assume that $U_{vj}(p_{j}q_{j},c_{vj})$ for each $j\in{\cal J}(v)$ and $v\in{\cal V}(\,j)$ is defined on $R_{+}^{J}$, is twice differentiable, and satisfies:
Furthermore, we suppose that $\{U_{vj}(p_{j}q_{j},c_{vj}),j\in{\cal J}(v),v\in{\cal V}(\,j)\}$ satisfies the radial homogeneity condition at each given time point $t\in[0,\infty)$. In other words, for any scalar a > 0, each q > 0, $i\in{\cal K}$, $v\in{\cal V}$, and each $j_{l}\in{\cal M}(i,v,t)$ with $l\in\{1,\ldots,M_{v}\}$, the associated Pareto maximal utility Nash equilibrium point of the game has the radial homogeneity:
6.1.2. Complete resource pooling condition
The complete resource pooling (CRP) condition is commonly used in the queueing network scheduling literature (see e.g., Stolyar [Reference Stolyar26], Ye and Yao [Reference Ye and Yao28]). Roughly speaking, under this condition, the network service resources can be completely shared, in a certain way, by all admitted users. There are different (but essentially equivalent) ways to describe the CRP condition (see e.g., Stolyar [Reference Stolyar26], Ye and Yao [Reference Ye and Yao28]). Here, we adopt the approach of Stolyar [Reference Stolyar26] to present our CRP condition. More precisely, let $\bar{{\cal R}}_{v}(i)$ for $i\in{\cal K}$ and $v\in{\cal V}$ denote the boundary of ${\cal R}_{v}(i)$ in (3.10). Moreover, let $\bar{{\cal R}}^{*}_{v}(i)$ for $i\in{\cal K}$ and $v\in{\cal V}$ denote the outer (“north-east”) boundary of $\bar{{\cal R}}_{v}(i)$. Then, we have the following concepts.
A vector $\rho^{*}_{v}(i)$ with $i\in{\cal K}$ and $v\in{\cal V}$ is said to satisfy the resource pooling (RP) condition if $\rho^{*}_{v}(i)\in\bar{{\cal R}}^{*}_{v}(i)$ and the outer normal vector ζ to $\bar{{\cal R}}_{v}(i)$ at $\rho^{*}_{v}(i)$ is unique (up to scaling). In other words, the RP condition holds if $\rho^{*}_{v}(i)$ lies in the (relative) interior of one of the curved facets of $\bar{{\cal R}}^{*}_{v}(i)$. Furthermore, $\rho^{*}_{v}(i)$ is said to satisfy the CRP condition if it satisfies the RP condition and all components of the corresponding normal vector ζ are strictly positive.
6.1.3. Heavy traffic condition
In addition, we introduce a sequence of independent Markov processes indexed by $r\in{\cal R}$, i.e., $\{\alpha^{r}(\cdot),r\in{\cal R}\}$. These systems all have the same basic structure as presented in the last section, except for the arrival rates $\lambda^{r}_{j_{l}}(i)$ and the holding-time rates $\gamma^{r}(i)$ for all $i\in{\cal K}$, which may vary with $r\in{\cal R}$. Here, we suppose that they satisfy the heavy traffic condition:
where $\theta_{j_{l}}(i)\in R$ is a constant for each $i\in{\cal K}$. Moreover, we suppose that the nominal arrival rate $\lambda_{j_{l}}(i)$ is given by:
and $\rho_{j_{l}}(i)$ in (6.4) for $j_{l}\in{\cal M}(i,v,t)$ with $l\in\{1,\ldots,M_{v}\}$ is the nominal throughput determined by:
with $\rho_{v\cdot}(i)\in{\cal O}_{v}(i)$ corresponding to the dimension $M_{v}$. In addition, $\nu_{v\cdot}$ and $\bar{\rho}_{v\cdot}(i)$ are a $J_{v}$-dimensional constant vector and a reference service rate vector, respectively, at service pool v, satisfying:
Remark 6.1. By (3.11), $\bar{\rho}_{v\cdot}(i)$ for each $i\in{\cal K}$ and $v\in{\cal V}(\,j_{l})$ can indeed be selected to satisfy the second condition in (6.7). Thus, the CRP condition holds. Hence, the nominal throughput $\rho(i)$ in (6.4) can be determined. One simple example satisfying these conditions is to take $\nu_{vj_{l}}=1$ for all $j_{l}\in{\cal M}(i,v,t)\cap{\cal J}(v)$ and $v\in{\cal V}(\,j_{l})$. Thus, the conditions in (6.4)-(6.7) mean that the system manager wishes to allocate capacity maximally and fairly to all users. Moreover, the design parameters $\lambda_{j_{l}}(i)$ for all $j_{l}\in{\cal M}(i,v,t)\cap{\cal J}$ and each $i\in{\cal K}$ can be determined by (6.4).
Next, we assume that the inter-arrival time associated with the kth arriving job batch to the system indexed by $r\in{\cal R}$ is given by:
where $\hat{u}_{j_{l}}(k)$ does not depend on r and i. Moreover, it has mean one and finite squared coefficient of variation $\alpha_{j_{l}}^{2}$. In addition, the number of packets, $w_{j_{l}}(k)$, and the packet length $v_{j_{l}}(k)$ are assumed not to change with r. Thus, it follows from the heavy traffic condition in (6.3) for the rth environmental state process $\alpha^{r}(\cdot)$ with $r\in{\cal R}$ that $\alpha^{r}(r^{2}\cdot)$ and $\alpha(\cdot)$ are equal in distribution, since they share the same generator matrix (see e.g., the definition in pages 384–388 of Resnick [Reference Resnick24]). Therefore, in the sense of distribution, all of the systems indexed by $r\in{\cal R}$ in (3.1) have the same random environment over any time interval $[0,t]$.
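As a small illustration of the arrival model just described, the sketch below generates batch arrival epochs for one user under a fixed environment state: the $k$th inter-arrival time is a mean-one variable with a prescribed squared coefficient of variation divided by the arrival rate, and each batch carries a random number of packets with random lengths. The gamma, geometric, and exponential distributions, as well as the single fixed rate, are illustrative assumptions; the condition above fixes only the first two moments of $\hat{u}_{j_{l}}(k)$ and leaves $w_{j_{l}}(k)$ and $v_{j_{l}}(k)$ general.

import numpy as np

rng = np.random.default_rng(0)

def batch_arrivals(lam, T, scv=1.0):
    """Generate batch arrival epochs on [0, T] for one user in one state.
    The k-th inter-arrival time is u_hat(k) / lam, where u_hat(k) has mean one
    and squared coefficient of variation `scv` (a gamma variable is used here
    purely as an illustrative choice).  Each batch carries a geometric number
    of packets of exponential length, again as illustrative stand-ins for
    w(k) and v(k)."""
    shape = 1.0 / scv                        # gamma(shape, scale) with mean one
    t, arrivals = 0.0, []
    while True:
        t += rng.gamma(shape, scv) / lam     # inter-arrival time u_hat(k) / lam
        if t > T:
            return arrivals
        packets = rng.geometric(0.5)               # w(k): packets in batch k
        lengths = rng.exponential(1.0, packets)    # v(k): packet lengths
        arrivals.append((t, packets, lengths.sum()))

print(len(batch_arrivals(lam=2.0, T=100.0)))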
6.2. Proof of Theorem 3.5
First, it follows from the second condition in (6.3) that the processes $\alpha^{r}(r^{2}\cdot)$ for each $r\in{\cal R}$ and $\alpha(\cdot)$ are equal in distribution. Hence, without loss of generality, we can assume that:
Thus, for each $j\in{\cal J}$ and $r\in{\cal R}$, and by the radial homogeneity of the policy $\Lambda(pq,i)$ in (6.2), we can define the fluid and diffusion scaled processes as follows,
Then, it follows from (2.7), (6.9), and the assumptions on the arrival and service processes that
Furthermore, for each $j\in{\cal J}$, let
where
For convenience, we define
In addition, we let $\bar{Q}^{r}(\cdot)$, $\bar{E}^{r}(\cdot)$, $\bar{S}^{r}(\cdot)$, and $\bar{T}^{r}(\cdot)$ be the associated vector processes. Then, for the processes in (6.10)-(6.16), we define the corresponding fluid limit related processes,
where $\zeta_{t}(\cdot)$ denotes a process depending on the external environment, i.e.,
Furthermore, we have that
where for each $i\in{\cal K}$ and $t\in[0,\infty)$, we have that,
Then, we have the following lemma concerning the weak convergence to a stochastic fluid limit process under our game-competition based dynamic pricing and scheduling strategy.
Lemma 6.2. Assume that the initial queue length satisfies $\bar{Q}^{r}(0)\Rightarrow\bar{Q}(0)$ along $r\in{\cal R}$. Then, the following joint convergence in distribution holds along a subsequence of ${\cal R}$ under our game-competition based dynamic pricing and scheduling strategy in (3.20) and (3.28) with the conditions required by Theorem 3.5,
In addition, if $\bar{Q}(0)=0$, the convergence holds along the whole sequence ${\cal R}$ and the limit satisfies:
where $\bar{\lambda}(\cdot)$ is defined in (6.22), $\mu(\cdot)\equiv(\mu_{1},\ldots,\mu_{J})'\cdot$, and $\bar{c}(\cdot)$ is defined by:
Proof. First, by the proof of Lemma 1 in Dai [Reference Dai6] and the implicit function theorem, we can show that the pricing function f constructed through (3.22), (3.20), and (3.28) can be assumed to be Lipschitz continuous. Then, by extending the proof of Lemma 3 in Dai [Reference Dai6], under the conditions in (6.1)-(6.2) and the Lipschitz continuity of f just established, we know that if $\Lambda(pq,i)\in F_{{\cal Q}}(i)$ for each $i\in{\cal K}$ is a given utility-based 2-stage game-theoretic policy corresponding to the game problem in (3.20) and $\{p^{l}q^{l},l\in{\cal R}\}$ is a sequence of price-weighted queue lengths satisfying $p^{l}q^{l}\rightarrow pq\in R_{+}^{J}$ as $l\rightarrow\infty$, then, for each $j\in{\cal J}\setminus {\cal Q}(q)$ and $v\in{\cal V}(\,j)$, we have that:
Second, by the proof of Lemma 7 in Dai [Reference Dai6], we only need to prove that a weak fluid limit on the right-hand side of (6.26) satisfies (6.28). To do so, we suppose that the weak fluid limit on the right-hand side of (6.26) corresponds to a subsequence of the right-hand side of (6.26) indexed by $r_{l}\in{\cal R}$ with $l\in\{1,2,\ldots\}$. Furthermore, it follows from (6.11), (2.11), and the discussion in the proof of Lemma 7 of Dai [Reference Dai6] that the fluid limit process on the right-hand side of (6.26) is uniformly Lipschitz continuous almost surely. Thus, our discussion can focus on a fixed sample path and each regular point t > 0 in an interval $(\tau_{n-1},\tau_{n})$ with $n\in\{1,2,\ldots\}$ for $\bar{T}_{j}$ with $j\in{\cal J}$. More precisely, it follows from (6.21) that $\bar{Q}$ is differentiable at t and satisfies:
for each $j\in{\cal J}$. If $\bar{Q}_{j}(t)=0$ for some $j\in{\cal J}$, then it follows from $\bar{Q}_{j}(\cdot)\geq 0$ that
If $\bar{Q}_{j}(t) \gt 0$ for some $j\in{\cal J}$, there is a finite interval $(a,b)\subset[0,\infty)$ containing t such that $\bar{Q}_{j}(s) \gt 0$ for all $s\in(a,b)$, and hence we can take a sufficiently small δ > 0 such that $\bar{Q}_{j}(t+s) \gt 0$ for $s\in(0,\delta)$. Furthermore, by (2.8), $P_{j}(t+s) \gt 0$. Now, let $r_{l}$ with $l\in\{1,2,\ldots\}$ be the subsequence of ${\cal R}$ introduced above and let $\delta_{l}\in(0,\delta]$ be a sequence such that $\delta_{l}\rightarrow 0$ as $l\rightarrow\infty$ while $\Lambda_{j}$ is determined by the same group of users over $(0,\delta_{l}]$. Then, it follows from (6.11) that:
where the last claim in (6.32) follows from the Lebesgue dominated convergence theorem, the right-continuity of $\alpha(\cdot)$, the Lipschitz continuity of $\bar{Q}(\cdot)$, and the fact in (6.29). Since t is a regular point of $\bar{T}$, it follows from (6.32) that:
which implies that the claims in (6.23)-(6.25) are true.
Along the lines of the proofs of Lemma 4.2 in Dai [Reference Dai8], Lemma 4.1 in Dai [Reference Dai9], and Lemma 7 in Dai [Reference Dai6], it suffices for the purpose of the current paper to prove the claim in (6.27) that $\bar{Q}(\cdot)=0$. In fact, for each $i\in{\cal K}$ and $l\in\{1,\ldots,M_{v}\}$, we define
Then, at each regular time $t\geq 0$ of $\bar{Q}(t)$ over the time interval $(\tau_{n-1},\tau_{n})$ with a given $n\in\{1,2,\ldots\}$, we have that:
Note that the second equality in (6.35) follows from the concavity of the utility functions and the fact that $\Lambda_{vj}(\bar{P}(t)\bar{Q}(t),\alpha(t),t)$ is the Pareto maximal Nash equilibrium policy for the utility-maximal game problem in (3.20) when the system is in a particular state. Thus, for any given $n\in\{0,1,2,\ldots\}$ and each $t\in[\tau_{n},\tau_{n+1})$,
where κ is a positive constant, i.e.,
Then, by the fact in (6.36) together with the assumption $\bar{Q}(0)=0$, we know that $\bar{Q}(t)=0$ for all $t\geq 0$. This completes the proof of the lemma.
Finally, by considering a specific state $i\in{\cal K}$ and by the indexing approach used in the proof of Lemma 6.2, we can extend the proofs of Lemma 4.3 through Lemma 4.5 in Dai [Reference Dai8] to the current setting. Then, by applying the results of these lemmas to the proof of Theorem 1 in Dai [Reference Dai6], we obtain a proof of Theorem 3.5 of this paper. $\Box$
7. Conclusion
In this paper, we study 2-stage game-theoretic problem oriented 3-stage service policy computing, CNN based algorithm design, and simulation for a blockchained buffering system with federated learning. More precisely, based on the game-theoretic problem consisting of both “win-lose” and “win-win” 2-stage competitions, we derive a 3-stage dynamic service policy via a saddle point to a zero-sum game problem and a Nash equilibrium point to a non-zero-sum game problem. This policy concerns user selection, dynamic pricing, and online rate resource allocation via stable digital currency for the system. The main focus is on the design and analysis of the joint 3-stage service policy for given queue/environment state dependent pricing and utility functions. The asymptotic optimality and fairness of this dynamic service policy are justified by diffusion modeling with approximation theory. A general CNN based policy computing algorithm flow chart along the lines of the so-called big model framework is presented. Simulation case studies are conducted for a system with three users, where only two of the three users can be selected for service at each time point by a zero-sum dual-cost game competition policy. The two selected users then enter service and share the system's rate service resource through a non-zero-sum dual-cost game competition policy. Applications of our policy in the future blockchain based Internet (e.g., metaverse and web3.0) and in supply chain finance are also briefly illustrated.
Acknowledgments
The project is funded by the National Natural Science Foundation of China under Grant No. 11771006.
Competing interests
The author declares that he has no conflict of interest.