
Interacting urns on a finite directed graph

Published online by Cambridge University Press:  23 September 2022

Gursharn Kaur*
Affiliation:
University of Virginia
Neeraja Sahasrabudhe*
Affiliation:
Indian Institute of Science Education and Research
*Postal address: Biocomplexity Institute and Initiative, 994 Research Park Boulevard, University of Virginia, Charlottesville, VA 22911. Email address: gursharn.kaur24@gmail.com
**Postal address: Department of Mathematical Sciences, IISER Mohali, Knowledge City, Sector 81, SAS Nagar, Manauli PO 140306. Email address: neeraja@iisermohali.ac.in

Abstract

We introduce a general two-colour interacting urn model on a finite directed graph, where each urn at a node reinforces all the urns in its out-neighbours according to a fixed, non-negative, and balanced reinforcement matrix. We show that the fraction of balls of either colour converges almost surely to a deterministic limit if either the reinforcement is not of Pólya type or the graph is such that every vertex with non-zero in-degree can be reached from some vertex with zero in-degree. We also obtain joint central limit theorems with appropriate scalings. Furthermore, in the remaining case when there are no vertices with zero in-degree and the reinforcement is of Pólya type, we restrict our analysis to a regular graph and show that the fraction of balls of either colour converges almost surely to a finite random limit, which is the same across all the urns.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Systems with multiple components that evolve randomly through self-reinforcement or reinforcement via interactions with other components of the system have been of great interest for a long time. Interacting urn models are often used to analyse such systems. Recently, there has been a lot of activity in the area of interacting urns [8, 4, 5, 6, 2, 13]. In simplest terms, an urn model or an urn process refers to a discrete-time random process that involves updating the configuration of an urn consisting of balls of different colours, according to some reinforcement rule. Here, by the configuration of an urn at time t we mean a vector whose ith element represents the number of balls of colour i in the urn at time t. The reinforcement process is assumed to be Markovian; that is, the reinforcement at any time depends only on the present urn configuration. The most common means of self-reinforcement is to select a ball uniformly at random from the urn at time t, and then, depending on the colour of the drawn ball, to add to or remove from the urn a certain number of balls of each colour. Such an urn model is fully described by the initial configuration of the urn and the associated replacement matrix, whose (i, j)th element is the number of balls of colour j added to or removed from the urn when a ball of colour i is drawn.

Traditionally, the study of urn models is classified based on the type of replacement matrix. For instance, the replacement matrix for the classical Pólya urn model is the identity matrix: at every time-step a ball is drawn and replaced in the urn along with another ball of the same colour. The asymptotic properties of the Pólya urn model have been studied extensively [9]. The best-known result for the Pólya urn model is that the fraction of balls of each colour converges almost surely to a random limit, which is distributed according to a beta distribution with parameters depending on the initial configuration of the urn. An immediate generalisation of the two-colour Pólya urn model is the Friedman urn model [7], where the chosen ball is replaced with $\alpha\geq 0$ balls of the same colour and $\beta >0$ balls of the other colour. In this case, the fraction of balls of either colour approaches the deterministic limit $1/2$ with probability 1. Several other generalisations of the Pólya urn model have been studied; we refer the reader to [9] for details.
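This dichotomy (random beta limit for the Pólya urn versus the deterministic limit $1/2$ for the Friedman urn) is easy to see empirically. The following is a minimal illustrative simulation of the classical Pólya urn, not taken from the paper; starting from one ball of each colour, the limiting fraction is Beta(1, 1), i.e. uniform on [0, 1], so independent runs scatter across the interval rather than clustering at a single value:

```python
import random

def polya_urn(white, black, steps, rng):
    """Classical Polya urn: draw a ball uniformly at random, return it to the urn
    together with one more ball of the same colour."""
    for _ in range(steps):
        if rng.random() < white / (white + black):
            white += 1
        else:
            black += 1
    return white / (white + black)

# With one initial ball of each colour the limit is Beta(1, 1): independent runs
# spread over [0, 1] instead of concentrating, reflecting the *random* limit.
fractions = [polya_urn(1, 1, 5000, random.Random(seed)) for seed in range(200)]
print(min(fractions), max(fractions))
```

Replacing the update by a Friedman-type rule (adding balls of both colours) would instead drive every run towards $1/2$.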

1.1. General two-colour interacting urn model

In this paper, we study urn processes involving more than one urn. We define a general two-colour interacting urn model as a random process involving N urns, such that the reinforcement in each urn depends on all the urns or on a non-trivial subset of the given set of N urns. More precisely, suppose there are N urns with configurations $(W^t_i, B^t_i) $ , where $W^t_i$ and $B^t_i$ denote the number of white balls and black balls respectively, at time $t\geq 0$ in the ith urn, for every $i\in [N] \,{:\!=}\, \{1, \ldots, N \}$ . Let

\begin{equation*}Z_i^t = \dfrac{W_i^t}{W_i^t+B_i^t}\end{equation*}

be the proportion of white balls in the ith urn at time t and $\mathcal{F}_t = \sigma\!\left( Z_i^s\,{:}\, 0\leq s\leq t, i\in [N] \right)$ . Define $ (I_{i,W}^t, I_{i,B}^t) \,{:\!=}\, (W^t_i, B^t_i) -(W^{t-1}_i, B^{t-1}_i)$ to be the reinforcement in the ith urn at time t. We consider non-negative and finite reinforcement, that is, $0\leq I_{i,W}^t, I_{i,B}^t <\infty $ for every $i \in [N]$ and $t\geq 0$ , such that $\{(I_{i,W}^t, I_{i,B}^t )\}_{i\in [N]}$ are conditionally independent given $\mathcal{F}_{t-1}$ for every $t\geq 1$ . If the evolution of the ith urn depends on urns $\{i_1, \dots,i_{k_i} \} \subseteq [N]$ , then the distribution of $(I^t_{i,W}, I^t_{i,B})$ conditioned on $\mathcal{F}_{t-1}$ is determined by $Z^{t-1}_{i_1}, \dots, Z^{t-1}_{i_{k_i}}$ . We call the set $\{i_1, \dots,i_{k_i} \}$ the dependency set of the ith urn. In particular, in a graph-based general two-colour interacting urn model, a natural choice for the dependency set of an urn is the collection of urns in its neighbourhood.

Several special cases of the general two-colour interacting urn model have been studied recently. In [5, 6] and [13], interacting urn models were studied with Pólya- and Friedman-type reinforcements respectively. More precisely, in [5, 6] the authors take

\begin{equation*}\mathbb{P}\big((I^t_{i,W}, I^t_{i,B}) = (1,0)|\mathcal{F}_{t-1}\big) = p Z_i^{t-1}+(1-p)\frac{1}{N} \sum \limits_{j=1}^N Z_j^{t-1} \end{equation*}

and

\begin{equation*}\mathbb{P}\big( (I^t_{i,W}, I^t_{i,B}) = (0,1)|\mathcal{F}_{t-1}\big) =1- \mathbb{P}\big((I^t_{i,W}, I^t_{i,B}) = (1,0)|\mathcal{F}_{t-1}\big),\end{equation*}

for every $i\in [N]$ and some fixed $p\in [0,1]$. Thus, the dependency set of every urn is the entire set of urns. In other words, the underlying network of interactions is a complete undirected graph on N vertices with self-loops at every vertex. Graph-based interactions in urn processes have been studied before in [1, 2] and [14].

In this paper, we consider N interacting two-colour urns which are placed at the vertices of a finite directed graph. A ball is drawn independently and uniformly at random, simultaneously from every urn. Each urn then reinforces all the urns in its out-neighbourhood according to a reinforcement matrix R. More precisely, if

\begin{equation*}R = \begin{pmatrix} \alpha &\quad \delta \\ \\[-9pt] \gamma &\quad \beta \end{pmatrix}\end{equation*}

is a reinforcement matrix, then when a white ball is drawn from an urn, each of its out-neighbours is reinforced with $\alpha$ white and $\delta$ black balls; when the drawn ball is black, each out-neighbour is reinforced with $\gamma$ white and $\beta$ black balls. We restrict our discussion to a non-negative reinforcement matrix which is balanced; that is, $\alpha + \delta = \gamma + \beta$.

1.2. Summary of results

We state our main results in Section 4. To study the asymptotic properties of $Z_i^t$ for the proposed model with reinforcement matrix R, we divide the discussion into two parts based on the type of reinforcement matrix, namely, Pólya type (when $\delta=\gamma=0$ ) and non-Pólya type (when $\gamma+\delta >0$ and $\alpha+\beta>0$ ). For non-Pólya-type reinforcement, we show that the fraction of white balls in the urns at vertices with non-zero in-degree converges to a deterministic limit almost surely. Furthermore, under certain conditions on the graph and the initial configuration of the urns we observe synchronisation; that is, the almost sure limit is the same across all urns at vertices with non-zero in-degree. We also obtain fluctuation theorems around the limiting vector.

For Pólya-type reinforcement and the case when $\alpha=\beta=0$, we obtain results depending on the presence of vertices with zero in-degree in the graph. When there are vertices with zero in-degree such that every vertex with non-zero in-degree can be reached from at least one vertex with zero in-degree, we show that the fraction of white balls in every urn converges to a deterministic limit almost surely. When there are no vertices with zero in-degree, we limit our discussion of Pólya-type reinforcement to a regular directed graph, that is, one in which the in-degree and the out-degree of every vertex equal a constant d. In this case we show that the fraction of white balls in every urn converges to the same random limit almost surely, and we also obtain $\mathcal{L}^2$-rates of synchronisation. However, the limiting distribution in this case is not known. On this note, we remark that although the reinforcement matrix is of Pólya type, at any given time-step an urn may be reinforced with balls of both colours, depending upon the balls drawn at that time from its in-neighbours. Thus, in our model the random process of a single urn is quite unlike the classical Pólya process, and we expect the problem of finding the limiting distribution to be fairly challenging.

We use the stochastic approximation method and the martingale method to study the interacting urn model proposed in this paper. The stochastic approximation method has been used before (see [15, 8]) to obtain several interesting results for random processes with self-reinforcement or interactive reinforcement. We refer the reader to [3] for details on stochastic approximation theory. For a more extensive discussion of various techniques used to study random processes with reinforcement (including urn models), we refer the reader to [11].

1.3. Outline of the paper

The rest of the paper is organised as follows: in Sections 2 and 3, respectively, we describe in detail the model and the corresponding stochastic approximation scheme. In Section 4, we state the main results. In Section 5, we present the proofs of all the results stated in Section 4. In Section 6 we discuss an application of our results to the study of an opinion dynamics model on finite directed networks.

1.4. Notation

Throughout the paper, we use the following notation: for a sequence of random variables $\{ X_t \}_{t \geq 0}$ and a random variable X, $X_t \xrightarrow{d} X$ means that $X_t$ converges to X in distribution, as $t \to \infty$ , and $X_t \xrightarrow{a.s.} X$ means that $X_t$ converges to X almost surely, as $t \to \infty$ . $N(\mu, \Sigma)$ denotes a normal random vector with mean vector $\mu$ and variance–covariance matrix $\Sigma$ . For a complex-valued sequence $\{a_t\}_{t\geq 0}$ and a real-valued sequence $\{b_t\}_{t\geq 0}$ , we write $a_t = \mathcal{O}(b_t)$ if there exists a positive constant C such that $| a_t| \leq C b_t $ for sufficiently large values of t. The symbol $\|\cdot\|$ denotes the standard $\mathcal{L}^2$ norm, and as defined before, [N] denotes the set $\{1,\dots, N\}$ . For a matrix M, $\lambda_{\min}(M)$ and $\lambda_{\max}(M)$ denote the minimum and the maximum, respectively, of the real parts of the eigenvalues of M. For an eigenvalue $\lambda$ , $\Re(\lambda)$ denotes the real part of $\lambda$ .

2. Model dynamics

Let $G=(V, E)$ be a simple directed graph with $V = [N]$ , $E\subseteq V\times V$ . For every vertex $i\in V$ , let $V(i) \,{:\!=}\, \{j \in V\,{:}\, (j,i)\in E\}$ denote the in-neighbourhood of i, and define $d_i^{\textrm{in}}\,{:\!=}\, |\{ j\in V\,{:}\, (j,i)\in E\}| = |V(i)|$ and $d_i^{\textrm{out}} \,{:\!=}\, | \{j\in V\,{:}\, (i,j)\in E \}|$ respectively to be the in-degree and out-degree of i. We write $i\to j$ if there is a directed edge from i to j and $i \rightsquigarrow j$ if there exists a path $i=i_0\to i_1\to \dots \to i_{k-1}\to i_k=j$ from i to j, for some $i_1,\dots, i_{k-1}\in V$ . The (i, j)th entry of the adjacency matrix A of G is equal to $\mathbb{I}_{i \to j}$ . Throughout this paper we use the term d-regular graph for a directed graph if $d_i^{\textrm{in}}= d_i^{\textrm{out}}= d$ for every $i\in V$ .

Suppose there is an urn at every vertex of G and that each urn contains balls of two colours, white and black. Let $(W^t_i, B^t_i)$ be the configuration of the urn at vertex i, where $W^t_i$ and $B^t_i$ denote the number of white balls and black balls respectively, at time $t\geq 0$ . Let $m \in \mathbb{Z}^+ $ and $\alpha, \beta \in \{0, 1, \dots, m\}$ be fixed; then given $\{(W^t_i, B^t_i)\}_{i\in V}$ , we update the configuration of each urn at time $t+1$ as follows:

A ball is selected uniformly at random from every urn simultaneously and independently of every other urn. The colours of these balls are noted and they are replaced in their respective urns. For every $i \in V$ , if the colour of the ball selected from the ith urn is white, then $\alpha$ white and $(m-\alpha)$ black balls are added to each urn j such that $i\to j$ ; and if the colour of the ball selected from the ith urn is black, then $(m-\beta)$ white balls and $\beta$ black balls are added to each urn j such that $i\to j$ .
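One round of this update can be sketched in a few lines (a minimal illustration, not code from the paper; the 3-cycle graph and the parameters $m=3$, $\alpha=2$, $\beta=1$ are hypothetical choices):

```python
import random

def step(urns, out_nbrs, m, alpha, beta, rng):
    """One round: a ball is drawn from every urn simultaneously; urn i then adds
    (alpha, m - alpha) balls to each out-neighbour if it drew white, and
    (m - beta, beta) balls if it drew black."""
    draws = {i: rng.random() < w / (w + b) for i, (w, b) in urns.items()}  # True = white
    new = {i: list(wb) for i, wb in urns.items()}
    for i, white in draws.items():
        add_w, add_b = (alpha, m - alpha) if white else (m - beta, beta)
        for j in out_nbrs[i]:
            new[j][0] += add_w
            new[j][1] += add_b
    return {i: tuple(wb) for i, wb in new.items()}

# hypothetical 3-cycle 1 -> 2 -> 3 -> 1 with m = 3, alpha = 2, beta = 1
out_nbrs = {1: [2], 2: [3], 3: [1]}
urns = {i: (2, 2) for i in out_nbrs}
urns = step(urns, out_nbrs, m=3, alpha=2, beta=1, rng=random.Random(7))
print(urns)
```

Because the reinforcement is balanced, each urn gains exactly $m\,d_i^{\textrm{in}}$ balls per round regardless of the colours drawn; this determinism of the totals is recorded in (3) below.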

That is, each urn i reinforces urn j such that $i\to j$ , according to the following reinforcement matrix:

(1) \begin{equation} R = \left(\begin{array}{c@{\quad}c} \alpha & m-\alpha \\ \\[-9pt] m-\beta & \beta \end{array}\right). \end{equation}

Throughout this paper, we consider only graphs with no isolated vertices (an isolated vertex is any vertex i such that $d_i^{\textrm{in}} = d_i^{\textrm{out}} =0$) and no vertices with only self-loops. This is because urns at such vertices have no interaction with any other urn in the graph. At an isolated vertex, the urn composition remains constant, and the other case reduces to a single-urn process (independent of every other urn). The asymptotics of such single-urn processes have been studied extensively; we refer the reader to [9] for details.

We call (1) a Pólya-type reinforcement if $\alpha= \beta= m$ (that is, when $R=mI$ ) and a non-Pólya-type reinforcement when $0<\alpha+\beta <2m$ . To simplify the notation, we define $a \,{:\!=}\, \alpha/m$ and $b \,{:\!=}\, \beta/m$ . Let

\begin{equation*}Z_i^t \,{:\!=}\, \dfrac {W^t_i}{W^t_i+B^t_i}\end{equation*}

be the proportion of white balls in the ith urn at time t, and let $Z^t\,{:\!=}\, (Z_1^t, \dots, Z_N^t)$ . Define the $\sigma$ -field $\mathcal{F}_t \,{:\!=}\, \sigma \big( Z^0, Z^1, \ldots, Z^t \big)$ , and let $Y^t_i$ denote the indicator of the event that a white ball is drawn from the ith urn at time t. Then the distribution of $Y_i^{t}$ , conditioned on $\mathcal{F}_{t-1}$ , is given by

\begin{equation*} Y^t_i = \begin{cases}1 & \text{ with probability } Z_i^{t-1},\\ \\[-9pt] 0 & \text{ with probability } 1- Z_i^{t-1},\end{cases} \qquad \text{ for } t \geq 1. \end{equation*}

Note that conditioned on $\mathcal{F}_{t-1}$ , $\{Y_i^t\}_{i\in V}$ are independent random variables. The proposed model is a general two-colour interacting urn model, as defined in Section 1.1, where the dependency set of the ith urn is V(i) such that, given $\mathcal{F}_{t-1}$ ,

\begin{equation*}(I_{i,W}^t, I_{i,B}^t) = \sum_{j\in V(i)} (\alpha, m-\alpha) Y_j^{t} + (m-\beta, \beta)(1-Y_j^{t}),\end{equation*}

for $t\geq 1$ .

We divide the vertex set of the graph into two disjoint sets, say $ V=S\cup F$ , where $S = \{ i \in V\,{:}\, d_i^{\textrm{in}}=0\}$ and $F = \{ i \in V\,{:}\, d_i^{\textrm{in}}>0\}$ . We call the vertices in S stubborn vertices, as the urns at these vertices are not reinforced by any other urn, i.e. $V(i)=\emptyset$ ; we call F the set of flexible vertices. Clearly, for every $i\in S$

\begin{equation*}W_i^t = W_i^0 \qquad \text{and } \qquad B_i^t = B_i^0, \qquad \forall \ t >0, \end{equation*}

whereas for every $i \in F$ , we have

(2) \begin{align} W_i^{t+1} &= W_i^t + \alpha \sum_{j\in V(i)} Y_j^{t+1} + (m-\beta) \sum_{j\in V(i)} (1-Y_j^{t+1}) \nonumber \\[2pt] &= W_i^t + m\sum_{j\in V(i)} \big( 1-b + (a+b-1) Y_j^{t+1} \big) \nonumber \\[2pt] &= W_i^t + m(1-b) d_i^{\textrm{in}}+ m(a+b-1) \sum_{j\in V(i)} Y_j^{t+1}, \qquad \forall \ t >0, \end{align}

and

\begin{equation*}B_i^{t+1} = B_i^t + mb d_i^{\textrm{in}}- m(a+b-1) \sum_{j\in V(i)} Y_j^{t+1}, \qquad \forall \ t >0. \end{equation*}

Let $ T_i^t = W_i^t +B_i^t$ be the total number of balls in the ith urn at time t. Then, for every $i\in V$ ,

(3) \begin{equation} T_i^t = T_i^{t-1} +m\, d_i^{\textrm{in}}= T_i^0 + tm\,d_i^{\textrm{in}}. \end{equation}

Thus, the total number of balls in every urn at a flexible vertex is deterministic and increases linearly with time. In the next section, we briefly discuss stochastic approximation theory, a technique which is used in the later sections to obtain the asymptotic results for $Z^t$ .

3. Stochastic approximation scheme

A stochastic approximation scheme refers to a k-dimensional recursion of the following type:

(4) \begin{equation} x_{t+1} = x_t + \gamma_{t+1} \!\left( h(x_t) + \Delta M_{t+1} +\epsilon_{t+1} \right), \qquad \forall t \geq 0,\end{equation}

where $h\,{:}\, \mathbb{R}^k \to \mathbb{R}^k$ is a Lipschitz function, $\{\Delta M_t\}_{t\geq 1}$ is a bounded square-integrable martingale difference sequence, and $\{\gamma_t\}_{t\geq 1}$ are positive step sizes satisfying conditions that ensure that $ \sum_{t\geq 1} \gamma_t$ diverges, but slowly. More precisely, we assume the following:

  (i) $\sum_{t\geq 1} \gamma_t = \infty$ and $\sum_{t\geq 1} \gamma_t^2 < \infty$;

  (ii) there exists $ C>0$ such that $\mathbb{E}\left [\|\Delta M_{t+1}\|^2 \vert \mathcal{G}_t\right] \leq C$ almost surely $\forall t \geq 0$, where $\mathcal{G}_t = {\sigma (x_0, M_1, \ldots, M_t)}$;

  (iii) $\sup_{t\geq 0} \|x_t\| < \infty$, almost surely;

  (iv) $\{ \epsilon_t \}_{t\geq 1}$ is a bounded sequence such that $\epsilon_t \to 0$, almost surely as $t\to \infty$.

Then the theory of stochastic approximation says that the iterates of (4) converge almost surely to the stable limit points of the solutions of the ordinary differential equation given by $\dot{x}_t = h(x_t)$. For explicit results and details on a standard stochastic approximation scheme (with $\epsilon_t = 0 \ \forall t \geq 0$), we refer the reader to Lemma 1 and Theorem 2 from Chapter 2 of [3]. Also in Chapter 2, the author discusses various extensions of the standard stochastic approximation scheme, including schemes of the form (4). In this section, we use stochastic approximation theory to study the limiting behaviour of $Z^t$ and write a stochastic approximation scheme of the form (4) for $Z^t$. We first establish some notation.
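For intuition, here is the simplest instance of such a scheme (an illustrative sketch, not from the paper): taking $h(x) = \mu - x$, $\gamma_t = 1/t$, $\epsilon_t = 0$, and $\Delta M_t$ the centred observation noise, the recursion (4) reduces to the running sample mean, which converges to the unique zero of h:

```python
import random

def sa_iterates(samples):
    """Scheme (4) with h(x) = mu - x and gamma_t = 1/t: writing the noisy
    observation s_t = mu + dM_t, the update x += (s_t - x)/t is exactly the
    running mean of the samples, which converges to mu = the zero of h."""
    x = 0.0
    for t, s in enumerate(samples, start=1):
        x += (s - x) / t
    return x

rng = random.Random(42)
mu = 0.3  # hypothetical target; the zero of h
est = sa_iterates([mu + rng.uniform(-0.5, 0.5) for _ in range(20000)])
print(est)
```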

Without loss of generality, we take $F = \{1, \dots, |F|\}$ and $S = \{|F|+1,\dots, N\}$ . For a vector $g\in \mathbb{R}^N$ and $B \subseteq [N]$ , let $g_B$ denote the $|B|$ -dimensional vector obtained by restricting g to B. Similarly, for a matrix $M\in \mathbb{R}^{N \times N}$ and $B \subseteq [N]$ , let $M_B$ denote the $|B| \times |B|$ matrix obtained by restricting M to the index set $B \times B$ . By $\textbf{1}$ and $\textbf{0}$ we denote matrices of appropriate dimension with all entries equal to 1 and to 0, respectively. We write $Z^t = (Z_1^t, \dots, Z_N^ t)= (Z_F^t, Z_S^0)$ as a row vector and define $N \times N$ matrices

\begin{equation*} \textbf{T}^{t} \,{:\!=}\,\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}T_1^t & 0 & \cdots& 0\\ \\[-7pt]0& T_2^t & \cdots& 0\\ \\[-7pt]\vdots& \cdots & \ddots& \vdots\\ \\[-7pt]0& 0& \cdots& T_N^t \end{array}\right) \qquad \text{and} \qquad \textbf{D} \,{:\!=}\, \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c} d_1^{\textrm{in}} & 0 & \cdots& 0\\ \\[-7pt] 0& d_2^{\textrm{in}} & \cdots& 0\\ \\[-7pt] 0& \cdots & \ddots& \vdots\\ \\[-7pt] 0& 0& \cdots& d_N^{\textrm{in}} \end{array}\right). \end{equation*}

Then

\begin{equation*} \textbf{T}^{t} = \left(\begin{array}{c@{\quad}c} \textbf{T}_F^t &\textbf{0} \\ \\[-7pt] \textbf{0} & \textbf{T}_S^0 \end{array}\right) \qquad \text{and} \qquad \textbf{D} = \left(\begin{array}{c@{\quad}c} \textbf{D}_F&\textbf{0} \\ \\[-7pt] \textbf{0} &\textbf{0} \end{array}\right), \end{equation*}

where $\textbf{T}_F^t$ and $\textbf{D}_F$ are both $|F| \times |F|$ matrices. Note that for $i\in S$ there is no $j\in V$ such that $j\to i$ ; therefore we can decompose the adjacency matrix A of the graph as follows:

\begin{equation*}A= \left(\begin{array}{c@{\quad}c} A_F& \textbf{0}\\ \\[-7pt] A_{S,F} &\textbf{0} \end{array}\right),\end{equation*}

where $ A_F $ is a $ |F|\times |F|$ matrix and $A_{S,F}$ is an $|S|\times |F|$ matrix. We define the scaled adjacency matrix

\begin{equation*} \tilde{A} \,{:\!=}\, A \left(\begin{array}{c@{\quad}c} \textbf{D}_F^{-1}&\textbf{0} \\ \\[-7pt] \textbf{0} &\textbf{0} \end{array}\right) = \left(\begin{array}{c@{\quad}c} \tilde{A}_F &\textbf{0} \\ \\[-7pt] \tilde{A}_{S,F} &\textbf{0} \end{array}\right), \end{equation*}

where the (i, j)th entry of $\tilde{A}$ is equal to

\begin{equation*}\frac{1}{d_j^{\textrm{in}}} \mathbb{I}_{i \to j}\end{equation*}

whenever $d_j^{\textrm{in}}>0$, and is equal to 0 otherwise. We now write the evolution of $Z^t_i$, for $i \in F$, as a stochastic approximation scheme. From (2) and (3) we get

\begin{align*} Z_i^{t+1} & = \frac{1}{T_i^{t+1}} W_i^{t+1} \\ & = \frac{T_i^t}{T_i^{t+1}} Z_i^{t} + \frac{m(1-b) d_i^{\textrm{in}}}{T_i^{t+1}} + \frac{m(a+b-1) }{T_i^{t+1}} \sum_{j\in V(i)} Y_j^{t+1} \\ & =\left(1- \frac{md_i^{\textrm{in}}}{T_i^{t+1}} \right)Z_i^{t} + \frac{m(1-b) d_i^{\textrm{in}}}{T_i^{t+1}} + \frac{m(a+b-1) }{T_i^{t+1}} \sum_{j\in V(i)} Y_j^{t+1}. \end{align*}

Define $ \Delta M_i^{t+1} \,{:\!=}\, (a+b-1) \left( Y_i^{t+1} - \mathbb{E}\left[Y_i^{t+1}\vert \mathcal{F}_t \right] \right)$ ; then we can write

(5) \begin{align} Z_i^{t+1} & = \left(1- \frac{md_i^{\textrm{in}}}{T_i^{t+1}} \right)Z_i^{t} + \frac{m(1-b) d_i^{\textrm{in}}}{T_i^{t+1}} +\frac{m(a+b-1)}{T_i^{t+1}} \sum_{j \in V(i)} \mathbb{E}\left[Y_j^{t+1}\vert \mathcal{F}_t \right] \nonumber \\ & \quad +\frac{m}{T_i^{t+1}} \sum_{j \in V(i)} \Delta M_j^{t+1} \nonumber \\ & = Z_i^t + \frac{m}{T_i^{t+1}} \bigg( (\!-Z_i^{t} + 1-b) d_i^{\textrm{in}} + (a+b-1) \sum_{j \in V(i)} Z_j^t +\sum_{j \in V(i)} \Delta M_j^{t+1}\bigg). \end{align}

Let $\Delta M^t \,{:\!=}\, ( \Delta M_1^t, \dots, \Delta M_{N}^t )$ be the martingale difference vector in $\mathbb{R}^N$ . We can write the recursion from (5) in vector form as follows:

\begin{align*} &Z^{t+1}_F \\ &= Z^t_F + m \bigg( \big(\!-Z^t_F + (1-b) \textbf{1}\big) \textbf{D}_F + (a+b-1) (Z^t A)_F + (\Delta M^{t+1} A)_F \bigg) ( \textbf{T}_F^{t+1})^{-1} \\ & = Z^t_F + m\left(\!-Z^t_F + (1-b) \textbf{1} + (a+b-1) ( Z^t \tilde{A})_F + (\Delta M^{t+1} \tilde{A})_F \right) \textbf{D}_F ( \textbf{T}_F^{t+1})^{-1}\\ & = Z^t_F + \left( h( Z^t_F)+ ( \Delta M^{t+1} \tilde{A})_F \right) (m\textbf{D}_F) (\textbf{T}_F^{t+1})^{-1},\end{align*}

where $h\,{:}\,[0,1]^{|F|} \to \mathbb{R}^{|F|}$ is such that

(6) \begin{equation} h(z) = -z + (1-b) \textbf{1} + (a+b-1) \left((z, Z_S^0) \tilde{A}\right)_F.\end{equation}

Here $\left((z, Z_S^0) \tilde{A}\right)_F = z\tilde{A}_F + Z_S^0 \tilde{A}_{S,F}$ , for $z\in [0,1]^{|F|}$ . The above recursion can be written as

\begin{equation*} Z_F^{t+1} = Z_F^t + \frac{1}{t+1} \left( h(Z_F^t)+ \left(\Delta M^{t+1} \tilde{A}\right)_F \right) + \frac{1}{t+1} \epsilon_{t+1}, \end{equation*}

where

\begin{equation*}\epsilon_{t+1} = \left( h(Z_F^t) + \left(\Delta M^{t+1} \tilde{A}\right)_F \right) \left( m(t+1)\textbf{D}_F \!\left( \textbf{T}_F^{t+1}\right)^{-1} -I\right). \end{equation*}

Thus, we get a recursion of the form (4) given by

(7) \begin{equation} Z_F^{t+1} = Z_F^t +\gamma_{t+1} h(Z_F^t) +\gamma_{t+1} (\Delta M^{t+1}\tilde{A})_F +\gamma_{t+1} \epsilon _{t+1},\end{equation}

where $\gamma_t = \dfrac{1}{t}$ and $(\Delta M^{t+1}\tilde{A})_F$ is a bounded martingale difference sequence. Using (3), it can easily be verified that $\epsilon_t \to 0$ as $t \to \infty$ and h is a Lipschitz function. Thus, the recursion in (7) satisfies the required assumptions (i)–(iv). Now, from stochastic approximation theory, we know that the limit points of the recursion in (7) almost surely coincide with the asymptotically stable equilibria of the ordinary differential equation given by

(8) \begin{equation} \dot{z} = h(z).\end{equation}

Therefore, one can analyse the limiting behaviour of the interacting urn process by analysing the zeroes of the function h and the eigenvalues of its Jacobian. Furthermore, we use stochastic approximation results from [15] to establish central limit theorems for $Z_F^t$.
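As a concrete illustration, the zero of h in (6) can be found numerically. The sketch below (a hypothetical example with $S=\emptyset$, a directed 3-cycle, and $a=0.9$, $b=0.5$) uses fixed-point iteration, which contracts in the sup-norm since $|a+b-1|<1$ for non-Pólya-type reinforcement and the columns of $\tilde{A}_F$ sum to at most 1:

```python
def equilibrium(A_tilde, a, b, iters=200):
    """Zero of h in (6) with S empty: iterate z <- (1-b)*1 + (a+b-1) * (z @ A_tilde),
    where (z @ A_tilde)_j = sum_i z_i A_tilde[i][j] (row-vector convention)."""
    n = len(A_tilde)
    z = [0.5] * n
    for _ in range(iters):
        z = [(1 - b) + (a + b - 1) * sum(z[i] * A_tilde[i][j] for i in range(n))
             for j in range(n)]
    return z

# hypothetical directed 3-cycle 0 -> 1 -> 2 -> 0; all in-degrees are 1, so A~ = A
A_tilde = [[0, 1, 0],
           [0, 0, 1],
           [1, 0, 0]]
z_star = equilibrium(A_tilde, a=0.9, b=0.5)
print(z_star)
```

On this example every column of $\tilde{A}_F$ sums to 1, so all coordinates of $z^*$ coincide with $(1-b)/(2-a-b)$, in line with the synchronisation results of Section 4.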

4. Main results

In this section we state our main results. In Section 4.1 we state an almost sure convergence theorem for $Z_F^t$ and show that synchronisation occurs under certain conditions on the initial configuration of the urns and on the structure of the underlying graph. Recall that by synchronisation we mean that the proportion of white balls converges to the same limit for every urn. Furthermore, in Section 4.2 we state the central limit theorems for $Z_F^t$ for a non-Pólya-type reinforcement. In Section 4.3, we consider a special case, namely Pólya-type reinforcement on a regular graph, and show that $Z_F^t$ converges to a random vector almost surely. In addition, the urns synchronise in the sense that $Z_i^t$ converges to the same random variable for every $i\in F$ .

4.1. Convergence and synchronisation results

Theorem 4.1. Suppose that either of the following conditions holds:

  (i) The reinforcement is of non-Pólya type, that is, $R\neq mI$ and $a+b>0$.

  (ii) $S\neq \emptyset$ is such that for every $f \in F$ there exists $s \in S$ such that $s \rightsquigarrow f$.

Then $I-(a+b-1) \tilde{A}_F$ is invertible and $ Z_F^t \longrightarrow z^*$ almost surely as $t\to \infty$ , where

\begin{equation*} z^* = \left( (1-b) \textbf{1}+ (a+b-1)Z^0_S \tilde{A}_{S,F} \right) \left(I-(a+b-1)\tilde{A}_F \right)^{-1}.\end{equation*}

Remark 4.1. Theorem 4.1 does not address the case when $S = \emptyset$ and $|a+b-1|=1$ ; in this case the corresponding stochastic approximation scheme as in (7) holds with

\begin{equation*} h(z)= -z(I-(a+b-1)\tilde{A}_F) +(1-b)\textbf{1}.\end{equation*}

Clearly, $h(z)=\textbf{0}$ has a unique solution whenever $I-(a+b-1)\tilde{A}_F$ is invertible. In fact, when $I-(a+b-1) \tilde{A}_F$ is not invertible, $Z_F^t$ need not admit a deterministic limit. However, even in such cases we expect that on a strongly connected graph, $ Z^{t}$ converges almost surely to a random limit.

One straightforward conclusion from the above theorem is that $Z^t \to (z^*, Z_S^0)$ almost surely, as $t\to \infty$. Furthermore, for synchronisation to occur among the set of flexible vertices F, we expect the reinforcement at any two flexible vertices to be similar. This manifests itself as the condition that $Z_S^0 \tilde{A}_{S,F}$ is a constant vector and that for $i, j \in F$, $i \neq j$,

\begin{equation*}\frac{|V(i) \cap F|}{d_i^{\textrm{in}}} = \frac{|V(j) \cap F|}{d_j^{\textrm{in}}}.\end{equation*}

More precisely, we have the following result.

Corollary 4.1. (Synchronisation.) Suppose $Z^0_S \tilde{A}_{S,F} = c_1\textbf{1} $ and $\textbf{1} \tilde{A}_F =c_2 \textbf{1}$ , for some constants $c_1, c_2 \in [0,1]$ ; then under the condition (i) or (ii) of Theorem 4.1,

\begin{equation*}Z_F^t \xrightarrow{a.s.} \dfrac{(1-b) + (a+b-1) c_1 }{1-(a+b-1)c_2}\textbf{1}.\end{equation*}

In particular, if reinforcement is of non-Pólya type and $S=\emptyset$ , then as $t \to \infty$

\begin{equation*}Z_F^t \xrightarrow{a.s.} \dfrac{1-b }{2-a-b}\textbf{1}.\end{equation*}
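This limit can be checked against a direct simulation of the urn dynamics (an illustrative sketch; the 3-cycle and the parameters $m=2$, $\alpha=1$, $\beta=0$, giving $a=1/2$, $b=0$ and predicted limit $2/3$, are hypothetical choices):

```python
import random

def simulate(out_nbrs, m, alpha, beta, steps, rng):
    """Run the interacting urn dynamics: at every step a ball is drawn from each
    urn simultaneously, and urn i adds (alpha, m - alpha) or (m - beta, beta)
    balls to each of its out-neighbours according to the colour it drew."""
    urns = {i: [1, 1] for i in out_nbrs}
    for _ in range(steps):
        draws = {i: rng.random() < w / (w + b) for i, (w, b) in urns.items()}
        for i, white in draws.items():
            dw, db = (alpha, m - alpha) if white else (m - beta, beta)
            for j in out_nbrs[i]:
                urns[j][0] += dw
                urns[j][1] += db
    return [w / (w + b) for w, b in urns.values()]

# directed 3-cycle, S empty, every column sum of A~ equal to 1: Corollary 4.1
# predicts Z_F^t -> (1-b)/(2-a-b) 1; with m = 2, alpha = 1, beta = 0 this is 2/3
fracs = simulate({0: [1], 1: [2], 2: [0]}, m=2, alpha=1, beta=0, steps=20000,
                 rng=random.Random(1))
print(fracs)
```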

4.2. Fluctuation results

We now state the fluctuation theorems for $Z_F^t$ around the almost sure limit $z^*$ . Define

(9) \begin{equation}H \,{:\!=}\, -\frac{\partial h}{\partial z}=I-(a+b-1)\tilde{A}_F \quad \text{and} \quad \rho \,{:\!=}\, \lambda_{\min}(H),\end{equation}

where I is an $|F|\times |F|$ identity matrix. As we will see in the results below, the scaling for the fluctuation theorems depends on $\rho$. In the case when $0<\rho < 1/2$, there exist finitely many complex random vectors $\xi_1,\dots, \xi_l$ such that $(Z_F^t-z^*)$, scaled appropriately, can be almost surely approximated by a weighted sum of $\xi_1,\dots, \xi_l$; the scaling in this case depends explicitly on $\rho$. For details we refer the reader to Theorem 2.2 of [15]. In this paper, we only discuss the cases $\rho > 1/2$ and $\rho = 1/2$ and obtain Gaussian limits with appropriate scaling.

Theorem 4.2. Suppose $ Z_F^t \longrightarrow z^*$ almost surely as $t\to \infty$ and $\rho >1/2$ ; then

(10) \begin{equation}\sqrt{t} \left(Z_F^t - z^* \right) \; \xrightarrow{d} N\left(\textbf{0}, \Sigma \right) \quad \textit{as}\ t \to \infty,\end{equation}

with

\begin{equation*}\Sigma = \int_0^\infty \left( e^{-\left (H - \frac{1}{2} I \right)u} \right)^\top (\tilde{A}^\top \Theta \tilde{A})_F e^{-\left (H - \frac{1}{2} I \right)u} du,\end{equation*}

where H is as defined in (9) and $\Theta$ is an $N\times N$ diagonal matrix such that

\begin{equation*}\Theta_{i,i} =\begin{cases}(a+b-1)^2 z_i^* (1-z_i^*) & \textit{for}\ i \in F,\\ \\[-7pt](a+b-1)^2 Z_i^0(1-Z_i^0) & \textit{for}\ i \in S.\end{cases} \end{equation*}

Corollary 4.2. Suppose the reinforcement is of non-Pólya type and $\tilde{A}=\tilde{A}^\top$ . Then for $\rho >1/2$ , Theorem 4.2 holds with

\begin{equation*}\Sigma = C(a,b)\; \tilde{A}_F^2 \left( I- 2(a+b-1)\tilde{A} _F\right )^{-1},\end{equation*}

where

\begin{equation*}C(a,b) \,{:\!=}\, \dfrac{ (a+b-1)^2(1-a)(1-b)}{(2-a-b)^2}.\end{equation*}

Theorem 4.3. Suppose $ Z_F^t \longrightarrow z^*$ almost surely as $t\to \infty$ and $\rho=1/2$ ; then

(11) \begin{equation} \sqrt{\frac{t}{ \log\!(t)}} \left(Z_F^t - z^* \right) \; \xrightarrow{d} N\!\left(\textbf{0}, \tilde \Sigma \right) \quad \textit{as}\ t \to \infty,\end{equation}

with

\begin{equation*}\tilde \Sigma = \lim_{t\to \infty} \frac{1}{\log t } \int_0 ^ {\log t} \left( e^{ -(H-1/2I)u}\right)^\top (\tilde{A}^\top \Theta \tilde{A})_F \left( e^{ -(H-1/2I)u}\right) du, \end{equation*}

for H as defined in (9) and $\Theta$ as defined in Theorem 4.2.

Corollary 4.3. Suppose the reinforcement is of non-Pólya type and $\tilde{A}=\tilde{A}^\top$ . Then for $\rho =1/2$ , Theorem 4.3 holds with

\begin{equation*}\tilde \Sigma =\frac{C(a,b)}{N}J,\end{equation*}

where J is an $|F|\times |F|$ matrix with all elements equal to 1 and C(a, b) is as defined in Corollary 4.2.

Remark 4.2. (Friedman-type reinforcement.) As an immediate consequence of Corollary 4.1, for Friedman-type reinforcement with $S= \emptyset$, that is, when $a=b \in (0,1)$, we get $Z_F^t \longrightarrow \frac{1}{2} \textbf{1}$ almost surely. Furthermore, in this case C(a, b) simplifies, and Corollary 4.2 and Corollary 4.3 hold with $C(a,b) = \left( a-\frac{1}{2} \right)^2$.
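The simplification of C(a, b) in the Friedman case is elementary algebra: with $a=b$, $(a+b-1)^2(1-a)(1-b)/(2-a-b)^2 = (2a-1)^2(1-a)^2/(2-2a)^2 = (a-\tfrac12)^2$. A quick numerical check (illustrative only):

```python
def C(a, b):
    """Variance constant from Corollary 4.2."""
    return (a + b - 1) ** 2 * (1 - a) * (1 - b) / (2 - a - b) ** 2

# For a = b in (0, 1) (Friedman-type), C(a, a) collapses to (a - 1/2)^2.
checks = [(a, C(a, a), (a - 0.5) ** 2) for a in (0.1, 0.25, 0.6, 0.9)]
print(checks)
```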

4.3. Convergence results for Pólya-type reinforcement

In this section, we consider the Pólya-type reinforcement with the following assumptions:

  1. (A1) G is connected and d-regular, and the scaled adjacency matrix $\tilde{A} = \frac{1}{d} A$ is diagonalisable over $\mathbb{C}$ . More precisely, there exists an invertible matrix P such that $\tilde{A} = P \Lambda P^{-1} $ for a diagonal matrix $ \Lambda$ .

  2. (A2) The initial number of balls in each urn is the same; that is, $\textbf{T}^0 = T^0 I$ , for some $ T^0 \geq 1$ .

Note that for a d-regular graph we have $S=\emptyset$ and $F= V$ ; therefore, to simplify the notation, we remove the subscript F throughout this section. In this case, the associated ordinary differential equation obtained from (8) reduces to $ \dot{z} = h(z) = -z( I- \tilde{A}).$ The equilibrium points of this ordinary differential equation are the left eigenvectors of $\tilde{A}$ corresponding to the eigenvalue 1. Observe that these equilibrium points are not stable, as one of the eigenvalues of $\frac{\partial h}{\partial z}$ is 0. Therefore this case requires a different approach. Specifically, we use martingale theory to obtain convergence results.

Under the assumptions (A1) and (A2)

(12) \begin{equation} \textbf{T}^t = (T^0+ tmd) I \,{=\!:}\, T^t I .\end{equation}

We define

\begin{equation*} \bar{Z}^t \,{:\!=}\, \frac{\sum_{j=1}^N W_j^t}{\sum_{j=1}^N T_j^t} = \frac{\sum_{i=1}^N W_i^t}{N T^t} = \frac{1}{N} \sum_{i=1}^N Z_i^t = \frac{1}{N} Z^{t} \textbf{1}^\top.\end{equation*}

Theorem 4.4. Suppose the reinforcement scheme is of Pólya type, that is, $R = m I$ , and the assumptions (A1) and (A2) hold. Then

\begin{equation*} Z^{t}- \bar{Z}^t \textbf{1} \xrightarrow{a.s.} \textbf{0}\ \textit{and in} \, \mathcal{L}^2, \quad \textit{as}\ t\to \infty.\end{equation*}

Corollary 4.4. Let $\hat{Z}^t = Z^t\tilde{A}$ be the vector of neighbourhood averages of the proportion of white balls in the urns. Then in the setting of Theorem 4.4, $\mathop{\textrm{Var}}\!(\hat{Z}^{t}- \bar{Z}^t \textbf{1}) \longrightarrow \bf{0}_{N\times N} $ , as $t \to \infty.$

The following theorem establishes that for every $i \in V$ , $Z^t_i$ converges almost surely to the same limiting random variable.

Theorem 4.5. (Synchronisation.) Suppose the reinforcement scheme is of Pólya type, that is, $R = m I$ , and the assumptions (A1) and (A2) hold. Then there exists a finite random variable $Z^\infty$ such that as $t \to \infty$ ,

(13) \begin{equation}Z^t \xrightarrow{a.s.} Z^\infty \textbf{1}.\end{equation}
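The synchronisation in Theorem 4.5 can be illustrated by a small simulation. In the sketch below, all parameter choices (five urns on an undirected cycle, so $d=2$ , with $m=1$ and $T^0=10$ ) are assumptions for illustration: each urn draws a ball and reinforces both of its neighbours with the drawn colour (Pólya type, $R=mI$ ), and the urn proportions, started far apart, draw together.

```python
# Illustrative simulation of Polya-type reinforcement on a 5-cycle (Theorem 4.5):
# urn i draws a ball with P(white) = Z_i^t and adds m balls of that colour
# to each of its two neighbours; the proportions synchronise over time.
import random

random.seed(1)

N, m, T0, d = 5, 1, 10, 2
# start the urns far apart to make synchronisation visible
white = [1.0, 3.0, 5.0, 7.0, 9.0]     # initial proportions 0.1, 0.3, ..., 0.9
total = [float(T0)] * N

steps = 20000
for t in range(steps):
    draws = [1 if random.random() < white[i] / total[i] else 0 for i in range(N)]
    add_white = [0.0] * N
    for i in range(N):
        # urn i reinforces its two neighbours on the cycle
        for j in ((i - 1) % N, (i + 1) % N):
            add_white[j] += m * draws[i]
    for j in range(N):
        white[j] += add_white[j]
        total[j] += m * d               # each urn gains m*d balls per step

z = [white[j] / total[j] for j in range(N)]
spread = max(z) - min(z)                # started at 0.8
assert spread < 0.5                     # proportions have drawn together
```

The common value the proportions approach is random, in line with the random limit $Z^\infty$ in (13).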

5. Proofs of the main results

In this section, we prove all the results from Section 4. To prove the results from Section 4.1, we use stochastic approximation theory, which was discussed in Section 3. We start with the following lemma.

Lemma 5.1. Let M be an $n \times n$ real non-negative matrix such that each column sum of M is less than or equal to 1. Then $rM-I$ is invertible for every $ r\in \mathbb{R}$ with $|r| <1$ .

The above lemma follows from the Perron–Frobenius theorem; however, for the sake of completeness we include a proof here.

Proof of Lemma 5.1. Note that the case $r=0$ is trivial. For $r\neq 0$ , suppose the matrix $(r M-I)$ is not invertible. Then there exists a vector ${\bf w} = (w_1, \ldots, w_n) \in \mathbb{C}^n$ such that $\textbf{w} (r M-I) =\textbf{0}$ , that is, $ \textbf{w} M = \frac{1}{r} \textbf{w}, $ which implies that for every $j\in \{1,\dots, n\}$ ,

(14) \begin{equation} \frac{1}{|r|} | w_j| =\big|\sum_{k=1}^n w_k M_{k,j} \big|\leq (\!\max_{k} |w_k|) \sum_{k=1}^n M_{k,j} \leq \max_k |w_k|.\end{equation}

The last step follows since each column sum of M is less than or equal to 1. Now if $j = \textrm{argmax}\{|w_k|;\ k =1,\ldots, n\}$ , then from (14) we must have $1/|r| \leq 1$ , which contradicts the assumption that $|r|<1$ .
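The conclusion of Lemma 5.1 can also be checked numerically. The following sketch (illustration only; the $3\times 3$ matrix is hand-picked so that its column sums are at most 1) verifies that $\det(rM-I)$ stays away from zero for several values of $|r|<1$ :

```python
# Numerical sanity check of Lemma 5.1: for a non-negative matrix M with
# column sums <= 1 and |r| < 1, det(rM - I) != 0.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def r_m_minus_i(r, m):
    """Entrywise rM - I for a 3x3 matrix."""
    return [[r * m[i][j] - (1.0 if i == j else 0.0) for j in range(3)]
            for i in range(3)]

# A non-negative 3x3 matrix whose column sums are 0.9, 1.0, and 0.7.
M = [[0.2, 0.5, 0.1],
     [0.3, 0.1, 0.2],
     [0.4, 0.4, 0.4]]

for r in [-0.99, -0.5, 0.0, 0.5, 0.99]:
    assert abs(det3(r_m_minus_i(r, M))) > 1e-9
```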

Proof of Theorem 4.1. As discussed in Section 3, it is enough to study the ordinary differential equation (8) associated with the stochastic approximation scheme for $Z^t_F$ obtained in (7). We know that a point $z^* \in [0, 1]^{|F|}$ is an equilibrium point of the associated ordinary differential equation $\dot{z} = h(z)$ if $h(z^*) = 0$ . For the function h given in Equation (6), we have $h(z^*) = 0$ if and only if

(15) \begin{equation}z^* - (a+b-1)( z^*\tilde{A}_F +Z^0_S \tilde{A}_{S,F}) = (1-b)\textbf{1}.\end{equation}

Thus the unique equilibrium point is given by

\begin{equation*}z^* = \left( (1-b)\textbf{1} +(a+b-1) Z^0_S \tilde{A}_{S,F}\right) \left(I- (a+b - 1)\tilde{A}_F \right)^{-1},\end{equation*}

whenever the matrix $I- (a+b - 1)\tilde{A}_F $ is invertible. We now show that under the assumptions of the theorem, this is indeed true.

  1. (i) When $R\neq mI$ and $a+b>0$ we have $|a+b-1|<1$ . Furthermore, since each column sum of $\tilde{A}_F$ is less than or equal to 1, the invertibility of the matrix $I- (a+b - 1)\tilde{A}_F $ follows from Lemma 5.1.

  2. (ii) We now show that if for every $f \in F$ there exists $s \in S$ such that $s \rightsquigarrow f$ , then $I-(a+b-1) \tilde{A}_F$ is invertible. Observe that the graph G restricted to the set of flexible vertices F can be partitioned into strongly connected components $F_1, \dots, F_k$ . Then the scaled adjacency matrix can be written as an upper block-triangular matrix as follows:

    \begin{equation*} \tilde{A}_F =\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}A_{F_1} & A_{F_1,F_2} & \dots & A_{F_1,F_k}\\ \\[-7pt]0& A_{F_2} & \dots & A_{F_2,F_k}\\ \\[-7pt]\vdots &\vdots & \ddots & \vdots \\ \\[-7pt]0&\dots & 0& A_{F_k}\end{array}\right) \textbf{D}_F^{-1}\,{=\!:}\, \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}\tilde{A}_{F_1} & \tilde{A}_{F_1,F_2} & \dots & \tilde{A}_{F_1,F_k}\\ \\[-7pt]0&\tilde{A}_{F_2} & \dots & \tilde{A}_{F_2,F_k}\\ \\[-7pt]\vdots &\vdots & \ddots & \vdots \\ \\[-7pt]0&0&\dots & \tilde{A}_{F_k}\end{array}\right), \end{equation*}
    where $\tilde{A}_{F_i, F_j}$ is an $|F_i|\times |F_j|$ matrix and $\tilde{A}_{F_i}=\tilde{A}_{F_i, F_i}$ . Since each $F_1,\dots, F_k$ is strongly connected, $\tilde{A}_{F_i}$ is an irreducible matrix for every $1\leq i\leq k$ and the matrix $I- (a+b-1) \tilde{A}_F $ is of the form
    \begin{equation*}\left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}I_1- (a+b-1)\tilde{A}_{F_1} &- (a+b-1) \tilde{A}_{F_1,F_2} & \dots & - (a+b-1)\tilde{A}_{F_1,F_k}\\ \\[-7pt]0&I_2- (a+b-1)\tilde{A}_{F_2} & \dots & - (a+b-1)\tilde{A}_{F_2,F_k}\\ \\[-7pt]\vdots &\vdots & \ddots & \vdots \\ \\[-7pt]0&0&\dots & I_k- (a+b-1)\tilde{A}_{F_k}\end{array}\right), \end{equation*}
    where $I_j$ is the $|F_j| \times |F_j|$ identity matrix. By Schur’s complement, we know that $I- (a+b-1) \tilde{A}_F$ is invertible whenever each block matrix $ I_1-(a+b-1)\tilde{A}_{F_1}, \dots, I_k-(a+b-1)\tilde{A}_{F_k}$ is invertible. Since every $\tilde{A}_{F_j}$ is an irreducible and sub-stochastic matrix (sub-stochasticity follows from the hypothesis in Part (ii) of the theorem), we have that $-1<\lambda_{\min}(\tilde{A}_{F_j})\leq \lambda_{\max}( \tilde{A}_{F_j})<1$ for every j. Hence $ I_j-(a+b-1)\tilde{A}_{F_j}$ is invertible for every $1\leq j\leq k$ .

Finally, using the above arguments we can also conclude that all the eigenvalues of the Jacobian matrix $\frac{\partial h}{\partial z} = (a+b-1)\tilde{A}_F-I$ have negative real parts in both the cases. Therefore $z^*$ is uniformly stable, and this concludes the proof.

Proof of Corollary 4.1. Under the assumptions, the unique solution of (15) is $z^*=c\textbf{1} $ , for

\begin{equation*}c =\frac{(1-b) +(a+b-1)c_1}{1- (a+b-1)c_2}.\end{equation*}

For non-Pólya-type reinforcement we have $0< a+b \neq 2$ , and for $S=\emptyset$ we get $c_1=0$ and $c_2 =1$ . Thus the unique equilibrium point $z^*$ simplifies to

\begin{align*}z^* = \dfrac{1-b}{2-a-b} \textbf{1}.\\[-36pt] \end{align*}
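The fixed-point equation (15) for this equilibrium can be verified directly. The sketch below is illustrative only: taking $S=\emptyset$ and a column-stochastic $\tilde{A}_F$ (here a directed 3-cycle, $d=1$ ) with assumed values of a and b, it checks that $z^* = \frac{1-b}{2-a-b}\textbf{1}$ satisfies $z - (a+b-1) z\tilde{A}_F = (1-b)\textbf{1}$ :

```python
# Sanity check of the equilibrium point in Corollary 4.1:
# for S = {} and column-stochastic A_F (a directed 3-cycle), the vector
# z* = (1-b)/(2-a-b) * 1 solves equation (15): z - (a+b-1) z A_F = (1-b) 1.

A = [[0, 1, 0],   # scaled adjacency matrix of a directed 3-cycle (d = 1)
     [0, 0, 1],
     [1, 0, 0]]

def row_times_matrix(z, m):
    """Row vector z times matrix m."""
    return [sum(z[i] * m[i][j] for i in range(len(z)))
            for j in range(len(m[0]))]

a, b = 0.3, 0.5                      # example reinforcement parameters, a+b != 2
c = (1 - b) / (2 - a - b)            # predicted common equilibrium value
z_star = [c, c, c]

lhs = [zi - (a + b - 1) * zA_i
       for zi, zA_i in zip(z_star, row_times_matrix(z_star, A))]
assert all(abs(x - (1 - b)) < 1e-12 for x in lhs)   # each entry equals 1-b
```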

We now prove the scaling limit theorems using tools and results from [Reference Zhang15]. Observe that for $\rho >0$ (where $\rho$ is as defined in (9)), the assumptions 2.2 and 2.3 made in [Reference Zhang15] are satisfied. That is, for some $\delta >0$ ,

\begin{equation*} h(z)= h(z^*) + (z-z^*) Dh(z^*) + o\!\left( \|z-z^*\|^{1+\delta} \right), \end{equation*}

and for every $\epsilon >0$ ,

\begin{equation*} \frac{1}{t} \sum_{s=1}^t \mathbb{E}\left[\| (\Delta M^{s}\tilde{A})_F \|^2 \mathbb{I}\{\| (\Delta M^{s}\tilde{A})_F \|\geq \epsilon \sqrt{t}\} | \mathcal{F}_{s-1} \right] \xrightarrow{a.s.} 0.\end{equation*}

Furthermore, as we shall see in the proofs below,

\begin{equation*} \lim\limits_{t \to \infty} \mathbb{E}\left[ \big((\Delta M^{t+1} \tilde{A})_F \big)^\top \big((\Delta M^{t+1} \tilde{A})_F \big) \Big\vert \mathcal{F}_t\right ]\end{equation*}

exists.

As mentioned before, the scaling for the limit theorems is given by the regimes of $\rho$ , where $\rho$ is as defined in (9). We remark that $\rho$ depends on the eigenvalues of $\tilde{A}_F$ as well as on $a+b-1$ , which is in fact the non-principal eigenvalue of the scaled reinforcement matrix $\frac{1}{m} R$ . In particular, we have the following two cases:

  1. 1. When $a + b - 1 > 0$ , we get $\rho = 1-(a+b-1) \lambda_{\max}(\tilde{A}_F).$ Therefore

    \begin{align*}& \rho>1/2 \iff \lambda_{\max}(\tilde{A}_F) \in \left[-1, \frac{1}{2(a+b-1)}\right) \\ \text{and} \qquad & \rho=1/2 \iff \lambda_{\max}(\tilde{A}_F) = \frac{1}{2(a+b-1)}.\end{align*}
  2. 2. When $a + b - 1 < 0$ , we get $\rho = 1-(a+b-1) \lambda_{\min}(\tilde{A}_F).$ Therefore

    \begin{align*}&\rho>1/2 \iff \lambda_{\min}(\tilde{A}_F) \in \left(\frac{1}{2(a+b-1)}, 1\right] \\\text{and} \qquad &\rho=1/2 \iff \lambda_{\min}(\tilde{A}_F) = \frac{1}{2(a+b-1)}.\end{align*}
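The case analysis above amounts to a simple formula for $\rho$ in terms of $a+b-1$ and the extreme eigenvalues of $\tilde{A}_F$ . A small helper (illustrative; the numerical eigenvalue inputs below are assumptions) encodes the two cases:

```python
# Helper encoding the two regimes above (notation as in (9)):
# rho = 1 - (a+b-1) * lambda_max  if a+b-1 > 0,
# rho = 1 - (a+b-1) * lambda_min  if a+b-1 < 0 (and rho = 1 if a+b = 1).

def rho(a, b, lam_min, lam_max):
    """rho determined by a+b-1 and the extreme eigenvalues of A_F (scaled)."""
    r = a + b - 1
    return 1 - r * (lam_max if r > 0 else lam_min)

# The sqrt(t) scaling of Theorem 4.2 requires rho > 1/2:
assert rho(0.9, 0.8, -1.0, 1.0) < 0.5    # a+b-1 = 0.7, lam_max = 1   -> rho = 0.3
assert rho(0.6, 0.6, -1.0, 0.5) > 0.5    # a+b-1 = 0.2, lam_max = 0.5 -> rho = 0.9
assert rho(0.2, 0.3, -0.4, 1.0) > 0.5    # a+b-1 = -0.5, lam_min = -0.4 -> rho = 0.8
```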

We now give the proofs of the limit theorems.

Proof of Theorem 4.2. The limit theorem in the case of $\rho>1/2$ holds with scaling $\sqrt{t}$ (see Theorem 2.3 in [Reference Zhang15]), with the limiting variance matrix given by

\begin{equation*}\Sigma=\int_0^\infty \left(e^{-\left (H - 1/2I \right)u} \right)^\top\Gamma \, e^{-\left (H - 1/2I \right)u} du, \end{equation*}

where

\begin{equation*} \Gamma = \lim_{t \to \infty} \mathbb{E}\left[(\Delta M^{t+1} \tilde{A})_F ^\top \left((\Delta M^{t+1} \tilde{A})_F \right) \vert \mathcal{F}_t\right ].\end{equation*}

Note that

\begin{equation*}\left((\Delta M^{t+1} \tilde{A})_F \right)^\top \left((\Delta M^{t+1} \tilde{A})_F \right) = \left( \tilde{A}^\top \left(\Delta M^{t+1}\right)^\top \Delta M^{t+1} \tilde{A}\right)_F.\end{equation*}

Now using the fact that for $i\neq j$ , $Y_i^t$ and $Y_j^t$ are conditionally independent, we get

\begin{equation*} \Gamma =(a+b-1)^2 \, \left( \tilde{A}^\top \lim_{t \to \infty} \mathbb{E}\left[( Y^{t+1}- Z^t)^\top (Y^{t+1} -Z^t)\vert \mathcal{F}_t\right ]\tilde{A}\right) _F = (\tilde{A}^\top \Theta \tilde{A})_F, \end{equation*}

where $\Theta$ is a diagonal matrix given by

\begin{equation*}\Theta_{i,i} =\begin{cases}(a+b-1)^2 z_i^* (1-z_i^*) & \text{ for } i \in F,\\ \\[-7pt](a+b-1)^2 Z_i^0(1-Z_i^0) & \text{ for } i \in S.\end{cases} \end{equation*}

Therefore we get

\begin{align*}\Sigma = \int_0^\infty \left(e^{-\left (H - \frac{1}{2} I \right)u} \right)^\top(\tilde{A}^\top \Theta \tilde{A})_F\, e^{-\left (H - \frac{1}{2} I \right)u} du.\\[-36pt] \end{align*}

Proof of Corollary 4.2. For $\tilde{A} = \tilde{A}^\top$ , we have $S =\emptyset$ and $\tilde{A}_F = \tilde{A}$ . Therefore, for non-Pólya-type reinforcement, by Corollary 4.1,

\begin{equation*} z^* =\frac{1-b}{2-a-b} \textbf{1}\end{equation*}

and $\Theta= C(a,b)I$ . Then

\begin{equation*} \Sigma = C(a,b)\int_0^\infty \left( e^{-\left (H - \frac{1}{2} I \right)u} \right)^\top \tilde{A}^\top\tilde{A} \, e^{-\left (H - \frac{1}{2} I \right)u} du, \end{equation*}

where

\begin{equation*}C(a,b) = \dfrac{ (a+b-1)^2(1-a)(1-b)}{(2-a-b)^2}.\end{equation*}

Furthermore, $\tilde{A}$ has a spectral decomposition

(16) \begin{equation} \tilde{A} = P \Lambda P^{-1} \qquad \text{ with } \qquad \Lambda = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}1 & 0& \cdots &0 \\ \\[-7pt]0&\lambda_1 & \cdots &0\\ \\[-7pt]\vdots & \vdots & \ddots &\vdots \\ \\[-7pt]0&0 &\cdots & \lambda_{N-1} \end{array}\right),\end{equation}

where P is an $N \times N$ real orthogonal matrix and $1, \lambda_1, \dots, \lambda_{N-1}$ are eigenvalues of $\tilde{A}$ . Since H commutes with $\tilde{A}$ , the limiting variance matrix is given by

\begin{align*}\Sigma & = C(a,b)\; \tilde{A}^2 \int_0^\infty e^{-\left (2H - I \right)u} du \\& = C(a,b)\; \tilde{A}^2 \int_0^\infty e^{\!-\left (I - 2(a+b-1)\tilde{A} \right)u} du \\& = C(a,b)\; \tilde{A}^2 P \left (\int_0^\infty e^{-\left (I - 2(a+b-1)\Lambda \right)u} du \right) P^{-1} \\& = C(a,b)\; \tilde{A}^2 \left(I- 2(a+b-1)\tilde{A} \right )^{-1}.\end{align*}

This completes the proof.

Proof of Theorem 4.3. In the case when $\rho =1/2$ , the asymptotic normality holds with scaling $\sqrt{\dfrac{t}{\log t}}$ (see Theorem 2.1 in [Reference Zhang15]) and the limiting variance matrix given by

\begin{equation*} \tilde \Sigma=\lim_{t \to \infty} \frac{1}{\log t}\int_0^{\log t} \left( e^{-\left (H - 1/2I \right)u} \right)^\top\Gamma e^{-\left (H - 1/2I \right)u} du, \end{equation*}

where $\Gamma = (\tilde{A}^\top \Theta \tilde{A})_F $ .

Proof of Corollary 4.3. For $\tilde{A} = \tilde{A}^\top$ , as in the proof of Corollary 4.2, we get that $\Theta = C(a,b)I$ and $\tilde{A}_F =\tilde{A}$ is a symmetric matrix. Therefore, in the case of $\rho=1/2$ , using the spectral decomposition as in (16), the limiting variance matrix is given by

\begin{align*}\tilde \Sigma & = C(a,b)\; \tilde{A}^2 \lim_{t \to \infty} \frac{1}{\log t} \int_0^{\log t} e^{-\left (2H - I \right)u} du \\& = C(a,b)\; \tilde{A}^2 \lim_{t \to \infty} \frac{1}{\log t} \int_0^{\log t} e^{-\left (I - 2(a+b-1)\tilde{A} \right)u} du \\& = C(a,b)\; \tilde{A}^2 P \lim_{t \to \infty} \frac{1}{\log t} \left (\int_0^{\log t} e^{-\left (I - 2(a+b-1)\Lambda \right)u} du \right) P^{-1},\end{align*}

where

\begin{equation*} e^{-(I - 2(a+b-1)\Lambda)u} = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}1 & 0& \cdots &0 \\0&e^{-(1-2(a+b-1)\lambda_1)u} & \cdots &0\\ \\[-9pt]\vdots & \vdots & \ddots &\vdots \\ \\[-9pt]0&0 &\cdots & e^{-(1-2(a+b-1)\lambda_{N-1})u} \end{array}\right). \end{equation*}

Since for $\rho =1/2$ , $ 1-2(a+b-1)\lambda_i >0$ for every $i=1,\dots, N-1$ , we get

\begin{equation*}\int_0^{\log t} e^{-(1-2(a+b-1)\lambda_i)u} du = \frac{1 - t^{-1+2(a+b-1) \lambda_i }}{1-2(a+b-1)\lambda_i}. \end{equation*}

Thus,

\begin{equation*} \lim_{t \to \infty} \frac{1}{\log t} \int_0^{\log t} e^{-(I-2(a+b-1)\Lambda)u} du = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}1 & 0& \cdots &0 \\ \\[-9pt]0& 0 & \cdots &0\\ \\[-9pt]\vdots & \vdots & \ddots &\vdots \\ \\[-9pt]0&0 &\cdots & 0 \end{array}\right) \end{equation*}

and we get

\begin{equation*}\tilde{\Sigma} = C(a,b)\; \tilde{A}^2 P \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}1 & 0& \cdots &0 \\ \\[-9pt]0& 0 & \cdots &0\\ \\[-9pt]\vdots & \vdots & \ddots &\vdots \\ \\[-9pt]0&0 &\cdots & 0 \end{array}\right) P^{-1}.\end{equation*}

Since the normalised eigenvector corresponding to the maximal eigenvalue 1 of $\tilde{A}$ is $\frac{1}{\sqrt{N}}\textbf{1}$ , we get

\begin{equation*}\tilde{\Sigma} = C(a,b)\; \tilde{A}^2 \left (\frac{1}{N} J\right) = \frac{C(a,b)}{N}J, \end{equation*}

where $J= \textbf{1} ^\top \textbf{1} $ is an $N \times N$ matrix with all elements equal to 1.

We now prove the results for Pólya-type reinforcement stated in Section 4.3. Our main aim (as stated in Theorem 4.5) is to show that the process $\{Z_i^t\}_{t\geq 0}$ converges almost surely to a finite random variable $Z^\infty$ for every $i\in [N]$ . In order to show this, we first prove that $Z^t-\bar{Z}^t \textbf{1} $ converges almost surely to $\textbf{0}$ , using the convergence result for almost supermartingales [Reference Robbins and Siegmund12]. Then in Lemma 5.2 and Theorem 4.5, we show that both $Z^t$ and $\bar{Z}^t$ admit an almost sure limit, using convergence theorems for quasi-martingales [Reference Métivier10] and martingales respectively.

Proof of Theorem 4.4. Recall that under the assumptions (A1) and (A2), $ \textbf{T}^t = (T^0 + tmd) I = T^t I$ . Define $\Phi_i^t = Z_i^t-\bar{Z}^t$ ; then in the vector notation we have

(17) \begin{equation}\Phi^t = Z^t - Z^t \frac{1}{N} J \,{=\!:}\, Z^t K,\end{equation}

where $\Phi^t=\left(\Phi^t_1, \ldots, \Phi^t_N\right )$ , $K = I-\dfrac{1}{N}J $ , and J is the $N\times N$ matrix with all entries equal to 1. We want to show that each element of $\mathop{\textrm{Var}}\!(\Phi^t) = \mathop{\textrm{Var}}\!(Z^{t}- \bar{Z}^t \textbf{1}) \to 0$ and $\Phi^t \xrightarrow{a.s.}0$ . To this end, we write a recursion for $\mathop{\textrm{Var}}\!\left(\Phi^{t+1}\right) $ using the law of total variance given by

(18) \begin{equation} \mathop{\textrm{Var}}\!\left(\Phi^{t+1}\right) = \mathop{\textrm{Var}}\!\left(\mathbb{E}\left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right] \right) + \mathbb{E} \left[\mathop{\textrm{Var}}\!\left(\Phi^{t+1}\Big\vert\mathcal{F}_t \right) \right].\end{equation}

From Equation (2) we get

(19) \begin{align} \mathbb{E} \left[Z^{t+1}\Big\vert \mathcal{F}_t \right]&= \frac{1}{T^{t+1}} \mathbb{E}\left[W^{t+1}\Big\vert\mathcal{F}_t \right] = \frac{1}{T^{t+1}} \left(W^{t} + m Z^tA \right) \,{=\!:}\, Z^tU_t,\end{align}

where $U_t = \dfrac{1}{T^{t+1}} (T^t I +m A )$ . This gives

(20) \begin{align}\mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right] &= Z^{t}U_t K = \Phi^t U_t, \end{align}

and

(21) \begin{equation}\mathop{\textrm{Var}}\!\left(\mathbb{E}\left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right] \right) = U_t^\top \mathop{\textrm{Var}}\!(\Phi^{t})U_t.\end{equation}

For the second term of the expression for $\mathop{\textrm{Var}}\!\left(\Phi^{t+1}\right)$ in Equation (18), we have

\begin{align*}\mathop{\textrm{Var}}\!\left(\Phi^{t+1}\Big\vert\mathcal{F}_t \right)&= K\mathop{\textrm{Var}}\!\left(Z^{t+1}\Big\vert\mathcal{F}_t \right) K \\&= \frac{m^2}{(T^{t+1})^2} K A^\top \, \mathop{\textrm{Var}}\!\left(Y^{t+1}\Big\vert\mathcal{F}_t \right)A K\\&= \frac{m^2}{(T^{t+1})^2}K A^\top \, \textrm{Diag}\!\left(Z_i^{t} (1-Z_i^t)\right)_{1\leq i\leq N} A K,\end{align*}

where $\textrm{Diag}\!\left( x_i \right)_{1\leq i\leq N} $ denotes the $N\times N$ diagonal matrix with ith diagonal element equal to $x_i$ . Thus

(22) \begin{equation}\mathbb{E} \left[\mathop{\textrm{Var}}\!\left(\Phi^{t+1}\Big\vert\mathcal{F}_t \right) \right] = \frac{m^2}{(T^{t+1})^2} K A^\top V_z^t AK,\end{equation}

where $V_z^t = \textrm{Diag}\big(\mathbb{E}[Z_i^{t} (1-Z_i^t)]\,\big)_{1\leq i\leq N} $ . Substituting the quantities from equations (21) and (22) in Equation (18), we get

\begin{equation*} \mathop{\textrm{Var}}\!\left(\Phi^{t+1}\right) = U_t^\top \mathop{\textrm{Var}}\!(\Phi^{t}) U_t+ \frac{m^2}{(T^{t+1})^2} KA^\top V_z^t AK. \end{equation*}

Iterating this we get

(23) \begin{align}\mathop{\textrm{Var}} \!\left(\Phi^{t+1}\right)&= \bigg(\prod_{k=0}^{t-1} U_{t-k}^\top \bigg) \mathop{\textrm{Var}}\!(\Phi^{1}) \bigg(\prod_{k=1}^t U_k\bigg) \nonumber \\&\quad + \sum_{j=1}^t \frac{m^2}{(T^{j+1})^2} \bigg(\prod_{k=0}^{t-j-1} U_{t-k}^\top \bigg) K A^\top V_z^ j A K \bigg(\prod_{k=j+1}^t U_k \bigg) \nonumber \\&= \sum_{j=0}^t \frac{m^2}{(T^{j+1})^2} \bigg(\prod_{k=0}^{t-j-1} U_{t-k}^\top \bigg) KA^\top V_z^ j A K \bigg(\prod_{k=j+1}^t U_k \bigg),\end{align}

where the last equality follows from the fact that

\begin{equation*}\mathop{\textrm{Var}} \!\left(\Phi^1\right) = \frac{m^2}{(T^1)^2}KA^\top V_z^0 AK.\end{equation*}

We can write

\begin{equation*}U_t = I+ \frac{md}{T^{t+1}} \!\left (\tilde{A}-I\right ),\end{equation*}

where $\tilde{A} = \frac{1}{d} A$ is the scaled adjacency matrix. Using the assumption (A1), we get that there exists a matrix P such that

\begin{equation*}\tilde{A} = P \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}1 & 0 & \cdots & 0\\ \\[-7pt]0&\lambda_1 & \cdots & 0\\ \\[-7pt]\vdots&\vdots&\ddots&\vdots \\ \\[-7pt]0& 0 & \cdots & \lambda_{N-1}\\ \\[-7pt]\end{array}\right) P^{-1} \quad \text{and} \quad K= PLP^{-1}= P \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}0 & 0 & \cdots & 0\\ \\[-7pt]0&1 & \cdots & 0\\ \\[-7pt]\vdots&\vdots&\ddots&\vdots \\ \\[-7pt]0& 0 & \cdots & 1\end{array}\right) P^{-1},\end{equation*}

where $1, \lambda_1,\cdots, \lambda_{N-1}$ are the N eigenvalues of $\tilde{A}$ . Since $\tilde{A}$ is irreducible with column sums equal to 1, by the Perron–Frobenius theorem we must have $\Re(\lambda_i)<1$ for all $i=1, \dots, N-1$ . Thus, for $j \geq 0$ ,

(24) \begin{equation}K \bigg(\prod_{k=j+1}^t U_k \bigg) =PL \prod_{k=j+1}^t \!\left ( I+ \frac{md}{T^0 +md(k+1) } \left (\Lambda-I\right ) \right )P^{-1}.\end{equation}

Let $S^j \,{:\!=}\, LP^\top A^\top V_z^ j APL$ ; then from (23) and (24) we get that $\mathop{\textrm{Var}} \!(\Phi^{t+1}) = (P^{-1})^\top \Omega^t P^{-1}$ , where $\Omega^t$ equals

\begin{equation*}\sum_{j=0}^t \frac{m^2}{(T^{j+1})^2} \prod_{k=j+1}^t \bigg( I+ \frac{md}{T^0 +md(k+1)} (\Lambda-I) \bigg) S^j \prod_{k=j+1}^t \bigg( I+ \frac{md}{T^0 +md(k+1) } (\Lambda-I) \bigg).\end{equation*}

The (l, n)th element of $\Omega^t$ is

\begin{align*}\Omega^t _{l, n} = \begin{cases} \sum\limits_{j=0}^t \frac{m^2}{(T^{j+1})^2} S^j_{l,n} \,\prod\limits_{k=j+1}^t \left (1+ \frac{\lambda_{l-1} -1}{(T^0/md) +k+1}\right ) \left (1+ \frac{\lambda_{n-1} -1}{(T^0/md) +k+1}\right ) &\text{if } 2 \leq l, n\leq N,\\ \\[-7pt]0& \text{otherwise}. \end{cases}\end{align*}

Using Euler’s approximation, for every $l = 2, \dots, N$ we get

\begin{equation*}\prod_{k=j+1}^t \!\bigg(1+ \frac{\lambda_{l-1} -1}{(T^0/md) +k+1}\bigg) =\mathcal{O}\left( (t/j)^{\Re(\lambda_{l-1})-1} \right).\end{equation*}

Since the elements of the matrix $S^j$ are bounded, for the non-zero elements of $\Omega^t$ we have

(25) \begin{equation} \Omega^t_{l,n} = \mathcal{O}\left( t^{\Re (\lambda_{l-1}+\lambda_{n-1})-2} \sum_{j=1}^t \frac{1}{j^{\Re(\lambda_{l-1} + \lambda_{n-1})}}\right), \quad 2\leq l,n \leq N. \end{equation}

Let $\lambda_{l,n} = \Re(\lambda_{l-1})+\Re(\lambda_{n-1})$ . When $\lambda_{l,n} <1$ , we use the approximation

\begin{equation*}\sum_{j=1}^t j^{-\lambda_{l,n}} =\mathcal{O}\!\left( t^{ -\lambda_{l,n}+1}\right).\end{equation*}

If $\lambda_{l,n} =1$ , then

\begin{equation*}\sum_{j=1}^t j^{-\lambda_{l,n}} =\mathcal{O}\!\left(\log\!(t)\right).\end{equation*}

Finally, if $\lambda_{l,n} >1$ , then $\sum_{j=1}^t j^{-\lambda_{l,n}}<\infty$ ; therefore,

(26) \begin{equation} \Omega^t_{l,n} = \begin{cases} \mathcal{O}(1/t)& \text{ if } \lambda_{l,n} <1, \\ \\[-8pt]\mathcal{O}\!(\log\!(t)/t)& \text{ if } \lambda_{l,n}= 1,\\ \\[-8pt]\mathcal{O}( t^{\lambda_{l,n} -2} )& \text{ if } \lambda_{l,n} >1, \end{cases} \quad \qquad 2 \leq l, n\leq N.\end{equation}

Since $\lambda_{l,n} <2$ and P and $P^{-1}$ have bounded elements, $\mathop{\textrm{Var}}\!(Z^{t}- \bar{Z}^t \textbf{1}) \to \textbf{0}_{N\times N}$ as $t \to \infty$ . Now from Equation (20) we get

\begin{align*}\mathbb{E}[\Phi^t] & = \Phi^0 \prod_{j=0}^t U_j \\& = Z^0 P \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c}0 & 0 & \cdots & 0\\ \\[-7pt]0&\mathcal{O}\left(t^{\Re(\lambda_1)-1} \right)& \cdots & 0\\ \\[-7pt]\vdots&\vdots&\ddots&\vdots \\ \\[-7pt]0& 0 & \cdots & \mathcal{O}\left( t^{\Re(\lambda_{N-1})-1}\right)\end{array}\right) P^{-1} \to 0, \text { as } t \to \infty.\end{align*}

Hence $\Phi^t= Z^{t}- \bar{Z}^t \textbf{1} \to \textbf{0}$ as $t \to \infty$ in $\mathcal{L}^2$ .

Since $\Phi^t \to \textbf{0}$ in $\mathcal{L}^2$ , we have $\mathbb{E}[ \|\Phi^t\|^2] \to 0$ . Therefore, to show that $\Phi^t$ converges almost surely to $\textbf{0}$ , it is enough to show that $\|\Phi^t\|^2 $ admits an almost sure limit. We write

\begin{align*} \Phi^{t+1} &= \mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right]+ \Phi^{t+1} - \mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right]\\&= \mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right]+ \frac{1}{T^{t+1}} \Delta M_\Phi^{t+1}, \end{align*}

where $\Delta M_\Phi^{t+1}= m \!\left(Y^{t+1} - \mathbb{E} \left[Y^{t+1}\Big\vert\mathcal{F}_t \right]\right) AK$ is a martingale difference sequence. Therefore we have

(27) \begin{align}&\mathbb{E}\left[ \|\Phi^{t+1}\|^2\Big\vert\mathcal{F}_t \right] \nonumber \\&= \mathbb{E} \left[\Phi^{t+1} (\Phi^{t+1})^\top\Big\vert\mathcal{F}_t \right] \nonumber \\& = \mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right] \mathbb{E} \left[\Phi^{t+1}\Big\vert\mathcal{F}_t \right] ^\top + \frac{1}{(T^{t+1})^2} \mathbb{E} \left[\Delta M_\Phi^{t+1} (\Delta M_\Phi^{t+1})^\top \Big\vert\mathcal{F}_t \right] \nonumber \\& = \Phi^{t} U_t U_t^\top (\Phi^{t})^\top + \frac{1}{(T^{t+1})^2} \mathbb{E} \left[\Delta M_\Phi^{t+1} (\Delta M_\Phi^{t+1})^\top\Big\vert\mathcal{F}_t \right] \nonumber \\& = \Phi^{t} \left(I- \frac{md}{T^{t+1}} \left (I- \tilde{A} \right ) \right)\left(I- \frac{md}{T^{t+1}} \left (I- \tilde{A}^\top \right ) \right) (\Phi^{t})^\top \nonumber \\&\quad + \frac{1}{(T^{t+1})^2} \mathbb{E} \left[\Delta M_\Phi^{t+1} (\Delta M_\Phi^{t+1})^\top\Big\vert\mathcal{F}_t \right] \nonumber \\& = \|\Phi^{t}\|^2 - \frac{md}{T^{t+1}} \Phi^{t} (2I-\tilde{A}-\tilde{A}^\top ) (\Phi^{t})^\top + \frac{m^2d^2}{(T^{t+1})^2} \Phi^{t} (I-\tilde{A})(I-\tilde{A}^\top) (\Phi^{t})^\top \nonumber \\&\quad + \frac{1}{(T^{t+1})^2} \mathbb{E} \left[\Delta M_\Phi^{t+1} (\Delta M_\Phi^{t+1})^\top\Big\vert\mathcal{F}_t \right] \nonumber \\& \leq \|\Phi^{t}\|^2 + \frac{\eta_t}{(T^{t+1})^2}, \end{align}

where

\begin{equation*} \eta_t = m^2d^2 \Phi^{t} (I-\tilde{A})(I-\tilde{A}^\top) (\Phi^{t})^\top + \mathbb{E} \left[\Delta M_\Phi^{t+1} (\Delta M_\Phi^{t+1})^\top\Big\vert\mathcal{F}_t \right] \end{equation*}

and the last inequality follows from the fact that $ 2I-\tilde{A}-\tilde{A}^\top$ is a positive semidefinite matrix. Since $\eta_t$ is a bounded random variable, from Equation (27) we get that $\|\Phi^t\|^2$ is an almost supermartingale; therefore $\|\Phi^t\|^2 $ converges almost surely (see Theorem 1 in [Reference Robbins and Siegmund12]). This concludes the proof of the theorem.

Proof of Corollary 4.4. Let $\Psi^t = Z^t \!\left (\tilde{A} - \frac{1}{N} J\right ) = Z^t K' $ , where

\begin{equation*}K'= \left (\tilde{A} - \frac{1}{N} J\right ) = P L' P^{-1}\end{equation*}

with $L' = \textrm{diag} \!\left( 0, \lambda_1, \dots, \lambda_{N-1}\right)$ . The proof follows from the same argument as in the proof of Theorem 4.4, with $K'$ and $L'$ in place of K and L respectively.

Recall that Theorem 4.5 says that for every $i \in V$ , $Z_i^t$ converges to the same random limit. To prove this, we use the fact that $Z_i^t- \bar{Z}^t$ converges to 0 almost surely (Theorem 4.4 proved above), and we show that both $Z^t$ and $\bar Z^t$ admit an almost sure limit. From Equation (19), we have

\begin{equation*} \mathbb{E}[Z^{t+1} | \mathcal{F}_t] = Z^t \bigg( \frac{T^tI+ md\tilde{A}}{T^{t+1}}\bigg) = Z^t \bigg(I + \frac{md(\tilde{A}-I)}{T^{t+1}}\bigg).\end{equation*}

Therefore, the N-dimensional process $\{Z^{t}\}_{t\geq 0}$ is a martingale if and only if $\tilde{A} = I$ , that is, when each node is isolated and has a self-loop. While $\{Z^{t}\}_{t\geq 0}$ is not a martingale in general, in the next lemma we show that $Z^t$ admits an almost sure limit.

Lemma 5.2. Under the assumptions (A1) and (A2), $Z^t$ admits an almost sure limit.

Proof. Using Theorem 11.7 in [Reference Métivier10], it is enough to show that the process $(Z^t)_{t\geq 0}$ satisfies the following two conditions:

  1. (i) $\sum_{t=0}^\infty \mathbb{E}\left [\| \mathbb{E}[Z^{t+1} | \mathcal{F}_t]-Z_t \|\right ] <\infty$ ;

  2. (ii) $\sup_{t\geq 0} \mathbb{E} [\|Z^t\|] <\infty.$

Note that $Z^t$ satisfies (ii) trivially, since each $Z^t_i \in [0,1]$ . Let $\Phi^t$ and $\Psi^t$ be as defined in the proofs of Theorem 4.4 and Corollary 4.4 respectively. Now,

(28) \begin{align}&\sum_{t=0}^\infty \mathbb{E}\left [\| \mathbb{E}[Z^{t+1} | \mathcal{F}_t]-Z_t \|\right ] \nonumber \\& = \sum_{t=0}^\infty \frac{md}{T^{t+1}} \mathbb{E}\left [\|Z^t(\tilde{A}-I) \|\right ] \nonumber \\& = \sum_{t=0}^\infty \frac{md}{T^{t+1}} \mathbb{E}\left [\big\|Z^t\left (\tilde{A}-\frac{1}{N}J\right ) + Z^t \left (\frac{1}{N}J - I\right ) \big\|\right ] \nonumber \\& = \sum_{t=0}^\infty \frac{md}{T^{t+1}} \mathbb{E}\left [\|\Psi^t - \Phi^t \|\right ] \nonumber \\& \leq \sum_{t=0}^\infty \frac{md}{T^{t+1}} \bigg (\mathbb{E}\left [\|\Psi^t -\mathbb{E}[\Psi^t] \| \right ]+ \mathbb{E}\left [\|\Phi^t - \mathbb{E}[\Phi^t]\|\right ] + \|\mathbb{E}[\Psi^t - \Phi^t]\| \bigg )\nonumber \\& \leq \sum_{t=0}^\infty \frac{md}{T^{t+1}} \left (\sqrt{ \sum_{i=1}^N \mathop{\textrm{Var}}\!(\Psi_i^t)} +\sqrt{ \sum_{i=1}^N \mathop{\textrm{Var}}\!(\Phi_i^t)}+ \| \mathbb{E}[\Psi^t - \Phi^t] \| \right ), \end{align}

where the last inequality follows by Jensen’s inequality. The finiteness of the sums arising from the first two terms in (28) follows from Theorem 4.4 and Corollary 4.4. We now show that the sum arising from the last term in (28) is also finite. We have

\begin{align*} \mathbb{E}[\Phi^{t+1}]= \Phi^0 \prod_{j=1}^{t+1} \left(I + \frac{m}{T^j} (A-dI ) \right) \quad \text{and}\quad \mathbb{E}[\Psi^{t+1}]= \Psi^0 \prod_{j=1}^{t+1} \left(I + \frac{m}{T^j} (A-dI ) \right). \end{align*}

Using the assumptions (A1) and (A2) we get

\begin{align*}\mathbb{E}[\Psi^t - \Phi^t] & = (\Psi^0-\Phi^0)\prod_{j=1}^t \left(I +\frac{m}{T^j} (A-dI ) \right) \\& = Z^0(\tilde{A} - I )\prod_{j=1}^t \left(I +\frac{md}{T^j} (\tilde{A}-I ) \right)\\&=Z^0 P(\Lambda -I) \prod_{j=1}^t \left(I + \frac{md}{T^j} ( \Lambda -I ) \right) P^{-1} \\& = Z^0 P Q^t P^{-1},\end{align*}

where $Q^t$ is an $N \times N$ diagonal matrix with

\begin{equation*}Q^t(i,i) = (\lambda_i-1) \prod_{j=1}^t \left ( 1+\frac{md(\lambda_{i}-1)}{T^j} \right ) = \mathcal{O}(t^{\Re(\lambda_i)-1})\end{equation*}

and $Q^t(1,1)= 0$ . This implies $ \| \mathbb{E}[\Psi^t - \Phi^t]\| = \mathcal{O}\left (t^{\Re(\lambda_{(2)})-1}\right )$ . Thus the sum in (28) is finite. This completes the proof.
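The product estimate $\prod_{j=1}^t \left( 1+ md(\lambda-1)/T^j \right) = \mathcal{O}(t^{\Re(\lambda)-1})$ used above for $Q^t$ (and earlier in (24)) can be checked numerically. In the sketch below, the values $T^0=10$ , $m=1$ , $d=2$ , $\lambda=0.3$ are assumed for illustration; the ratio $Q^t/t^{\lambda-1}$ stabilises as t grows:

```python
# Numerical check of the order estimate Q^t = O(t^(lambda - 1)) for real lambda:
# Q^t = (lambda - 1) * prod_{j=1}^t (1 + m*d*(lambda - 1) / T^j), T^j = T0 + j*m*d.

T0, m, d = 10.0, 1, 2
lam = 0.3

def Q(t):
    p = lam - 1.0
    for j in range(1, t + 1):
        p *= 1.0 + m * d * (lam - 1.0) / (T0 + j * m * d)
    return p

# If Q(t) ~ const * t^(lam-1), the ratio below is nearly constant for large t.
r1 = Q(2000) / (2000 ** (lam - 1))
r2 = Q(4000) / (4000 ** (lam - 1))
assert abs(r1 - r2) / abs(r1) < 0.01
```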

Proof of Theorem 4.5. From Equation (19) we have

\begin{align*}\mathbb{E}[\bar Z^{t+1} \vert\mathcal{F}_t ] & =\frac{1}{N}\mathbb{E}\left[Z^{t+1} \textbf{1}^\top \Big\vert\mathcal{F}_t \right]\\&= \frac{1}{N}Z^t \left (I+\frac{md}{T^{t+1}} (\tilde{A}-I) \right ) \textbf{1}^\top \\&= \frac{1}{N}Z^t\textbf{1}^\top = \bar Z^t,\end{align*}

since $\tilde{A} \textbf{1}^\top =\textbf{1}^\top$ for a regular graph. Thus, $\bar{Z}^t$ is a bounded martingale; therefore by the martingale convergence theorem, there exists a finite random variable $Z^\infty$ such that $\bar{Z}^t \to Z^\infty $ almost surely. Using Theorem 4.4 and Lemma 5.2 we conclude that $Z^t \to Z^\infty \textbf{1} $ almost surely.

Observe that for Pólya-type reinforcement, convergence and synchronisation results are obtained only for regular graphs. This restriction was needed to obtain explicit expressions by using the symmetry and hence the spectral decomposition of the adjacency matrix. We believe that the above results can be extended to more general graphs using similar ideas.

6. Application to opinion dynamics on networks

In this section, we briefly demonstrate how these models can be used for modelling opinion dynamics on networks. We consider urns at vertices of a directed graph and assume that urns represent individuals, with the proportion of white balls (respectively, black balls) in an urn quantifying the positive (respectively, negative) inclination of that individual on a fixed subject. The graph structure determines the interactions between the individuals. That is, if there is an edge from node i to node j, the individual/urn at node i can influence the individual/urn at j via a chosen (but fixed) reinforcement matrix. We define the opinion of an individual i at time t by $O^t_i = \mbox{Sign }\!(Z^t_i-1/2)$ , where for convenience we assume $\mbox{Sign} (0) = 1$ . The asymptotic results for the urn process obtained in this paper can be used to study the evolution of opinions defined this way. Note that this process of evolution of opinions is very different from the traditional voter model or its extensions. In our model, the opinion of an individual does not flip frequently; rather, it evolves slowly, as inclinations change depending on neighbourhood interactions. For instance, at time t consider an individual i with opinion $-1$ (and $Z_i^t \ll \frac{1}{2}$ ) such that all of her in-neighbours have opinion 1. After the reinforcement at time $t+1$ , it is quite probable that the positive inclination $Z_i^{t+1}$ of the individual at i gets closer to $\frac{1}{2}$ , but the opinion $O_i^{t+1}$ may still remain $-1$ . Thus, the opinion evolution model based on the interacting urn process on a network discussed in this paper captures more realistic human behaviour.

The convergence results for the urn process can be used to answer some interesting questions about opinion evolution on a network. Given the reinforcement matrix, we can determine whether the limiting opinion of the majority is 1 or 0; conversely, we can find conditions on the reinforcement matrix under which the limiting opinion of the majority is 1 (or 0). To illustrate this we consider the following example.

Example 6.1. Consider a directed star graph $G=(V, E)$ on N vertices such that the centre vertex is labelled 1 and the remaining vertices are labelled $\{2, 3, \dots, N \}$ . Then 1 is the only flexible vertex, with in-degree $N-1$ , and the set of stubborn vertices is $S= \{2, 3, \dots, N \}$ . Suppose the reinforcement matrix is $R = \begin{pmatrix} \alpha &\quad m-\alpha \\ m-\beta &\quad \beta \end{pmatrix}$ for $\alpha, \beta \in \{0,1,\dots, m\}$ . Then from Theorem 4.1 we get that as $t\to \infty$

\begin{equation*}Z^t_1 \xrightarrow{a.s.} z^*= (1-b) + (a+b-1) \bar{Z}_{S}^0,\end{equation*}

where

\begin{equation*} \bar{Z}_{S}^0 = \frac{1}{N-1} \sum\limits_{j=2}^{N} Z_j^0.\end{equation*}

Thus the asymptotic opinion of vertex 1, that is, $\mbox{Sign}\!\left(z^*- \frac{1}{2}\right)$ , depends on $a = \alpha/m$ , $b = \beta/m$ , and the average initial inclination of the stubborn individuals.
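Example 6.1 is simple enough to check numerically. The following sketch simulates the process on the directed star, with each stubborn leaf reinforcing the centre urn according to R; the parameter values and initial inclinations are illustrative choices, and the centre urn is started with one ball of each colour.

```python
import random

def simulate_star(alpha, beta, m, z_leaves, steps, seed=0):
    """Urn process on the directed star of Example 6.1: at each step every
    stubborn leaf draws from its (fixed) urn and reinforces the centre urn
    with the corresponding row of R = [[alpha, m-alpha], [m-beta, beta]].
    Returns the centre urn's final fraction of white balls."""
    rng = random.Random(seed)
    white, black = 1, 1                   # centre urn: one ball of each colour
    for _ in range(steps):
        for z in z_leaves:                # leaf draws white w.p. its fraction z
            if rng.random() < z:          # white drawn: add row (alpha, m-alpha)
                white += alpha
                black += m - alpha
            else:                         # black drawn: add row (m-beta, beta)
                white += m - beta
                black += beta
    return white / (white + black)

alpha, beta, m = 4, 3, 4                  # so a = 1, b = 0.75
z_leaves = [0.9, 0.8, 0.7, 0.2, 0.1]      # inclinations of the N-1 = 5 leaves
a, b = alpha / m, beta / m
z_bar = sum(z_leaves) / len(z_leaves)
z_star = (1 - b) + (a + b - 1) * z_bar    # predicted limit from Theorem 4.1
z_sim = simulate_star(alpha, beta, m, z_leaves, steps=20_000)
```

With these values $z^* = 0.25 + 0.75 \times 0.54 = 0.655 > \frac{1}{2}$ , so the centre individual's limiting opinion is 1, even though two of the five stubborn leaves lean negative; the simulated fraction `z_sim` settles near this value.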

Acknowledgements

We would like to thank Antar Bandyopadhyay for introducing us to this problem and for his insightful remarks. We would also like to thank the reviewers for their valuable comments, which led to significant improvement of this work.

Funding information

The work of G. Kaur was supported by NUS Research Grant R-155-000-198-114. The work of N. Sahasrabudhe was supported in part by the DST-INSPIRE Faculty Fellowship from the Government of India and in part by CEFIPRA Grant No. IFC/DST-Inria-2016-01/448, ‘Machine Learning for Network Analytics’.

Competing interests

There were no competing interests to declare that arose during the preparation or publication of this article.

References

Aletti, G., Crimaldi, I. and Ghiglietti, A. (2020). Interacting reinforced stochastic processes: statistical inference based on the weighted empirical means. Bernoulli 26, 1098–1138.
Benaïm, M., Benjamini, I., Chen, J. and Lima, Y. (2015). A generalized Pólya's urn with graph based interactions. Random Structures Algorithms 46, 614–634.
Borkar, V. S. (2008). Stochastic Approximation: a Dynamical Systems Viewpoint. Cambridge University Press.
Crimaldi, I., Dai Pra, P., Louis, P.-Y. and Minelli, I. G. (2019). Synchronization and functional central limit theorems for interacting reinforced random walks. Stoch. Process. Appl. 129, 70–101.
Crimaldi, I., Dai Pra, P. and Minelli, I. G. (2016). Fluctuation theorems for synchronization of interacting Pólya's urns. Stoch. Process. Appl. 126, 930–947.
Dai Pra, P., Louis, P.-Y. and Minelli, I. G. (2014). Synchronization via interacting reinforcement. J. Appl. Prob. 51, 556–568.
Freedman, D. A. (1965). Bernard Friedman's urn. Ann. Math. Statist. 36, 956–970.
Laruelle, S. and Pagès, G. (2013). Randomized urn models revisited using stochastic approximation. Ann. Appl. Prob. 23, 1409–1436.
Mahmoud, H. M. (2009). Pólya Urn Models. CRC Press, Boca Raton.
Métivier, M. (1982). Semimartingales: a Course on Stochastic Processes. Walter de Gruyter, Berlin, New York.
Pemantle, R. (2007). A survey of random processes with reinforcement. Prob. Surveys 4, 1–79.
Robbins, H. and Siegmund, D. (1971). A convergence theorem for non-negative almost supermartingales and some applications. In Optimizing Methods in Statistics: Proceedings of a Symposium Held at the Center for Tomorrow, the Ohio State University, June 14–16, 1971, Academic Press, New York, pp. 233–257.
Sahasrabudhe, N. (2016). Synchronization and fluctuation theorems for interacting Friedman urns. J. Appl. Prob. 53, 1221–1239.
Van der Hofstad, R., Holmes, M., Kuznetsov, A. and Ruszel, W. (2016). Strongly reinforced Pólya urns with graph-based competition. Ann. Appl. Prob. 26, 2494–2539.
Zhang, L.-X. (2016). Central limit theorems of a recursive stochastic algorithm with applications to adaptive designs. Ann. Appl. Prob. 26, 3630–3658.