
Auto-balanced common shock claim models

Published online by Cambridge University Press:  12 April 2023

Greg Taylor*
Affiliation:
School of Risk and Actuarial Studies, University of New South Wales, UNSW Sydney, NSW 2052, Australia
Phuong Anh Vu
Affiliation:
Taylor Fry, 45 Clarence Street, Sydney, NSW 2000, Australia
*Corresponding author. E-mail: greg_taylor60@hotmail.com

Abstract

The paper is concerned with common shock models of claim triangles. These are usually constructed as linear combinations of shock components and idiosyncratic components. Previous literature has discussed the unbalanced property of such models, whereby the shocks may over- or under-contribute to some observations. The literature has also introduced corrections for this. The present paper discusses “auto-balanced” models, in which all shock and idiosyncratic components contribute to observations such that their proportionate contributions are constant from one observation to another. The conditions for auto-balance are found to be simple and applicable to a wide range of model structures. Numerical illustrations are given.

Type
Original Research Paper
Copyright
© The Author(s), 2023. Published by Cambridge University Press on behalf of Institute and Faculty of Actuaries

1. Introduction

Common shock models were introduced to the actuarial literature by Lindskog & McNeil (Reference Lindskog and McNeil2003). The concept has been used in models of claim triangles by Meyers (Reference Meyers2007) and Shi et al. (Reference Shi, Basu and Meyers2012). It has also been used by Furman & Landsman (Reference Furman and Landsman2010) in capital modelling and by Alai et al. (Reference Alai, Landsman and Sherris2013, Reference Alai, Landsman and Sherris2016) in mortality modelling. Avanzi et al. (Reference Avanzi, Taylor and Wong2018) discussed the application of common shocks to claim arrays in generality, e.g. shocks with respect to accident, development and payment periods, and others.

Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016) generated a multivariate Tweedie distribution for a claim triangle by means of common shocks. In a subsequent publication, the same authors (2021) noted that, if a common shock model was constructed by adding a multiple of a single common shock to each cell of a triangle, then it might be found to contribute proportionately heavily to some cells and only lightly to others. Such triangles were said to be unbalanced (with respect to the shocks). Equally, it might have been said that the common shock model was unbalanced with respect to the data.

Those authors defined, in response, a procedure that tended to equalise the common shock proportionate contributions over the cells of the triangle. The common shock model was then balanced, or at least more balanced.

The present paper continues to work within the algebra of common shocks in the general setting of Avanzi et al. (Reference Avanzi, Taylor and Wong2018) and identifies certain models that are auto-balanced, i.e. within themselves, balanced without any adjustment by an equalisation procedure.

Section 2 covers some preliminaries relating to the Tweedie family of distributions and common shock models in generality. Section 3 discusses the application of these general common shock models to Tweedie distributed shocks and idiosyncratic components, and section 4 derives the first two moments of observations in these models.

Section 5 derives conditions that define the subset of these Tweedie common shock models that are auto-balanced. A number of numerical illustrations of auto-balanced models are set out in section 6, and section 7 then concludes.

2. Framework and Notation

2.1. Notation

Although the papers of Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016, Reference Avanzi, Taylor, Vu and Wong2021) on balance worked in the context of claim triangles, the present paper will adopt the more general framework of Avanzi et al. (Reference Avanzi, Taylor and Wong2018).

A claim array $\mathcal{A}$ will be defined here as a 2-dimensional array of random variables ${X_{ij}}$ , indexed by integers $i,j$ , with $1 \le i \le I,1 \le j \le J$ for some fixed integers $I,J$ . For any given pair $i,j$ , the random variable ${X_{ij}}$ may or may not be present.

The subscripts $i,j$ typically index accident period (row) and development period (column), respectively, and the ${X_{ij}}$ represent observations on claims, commonly claim counts or amounts. In the special case $I = J$ and $\mathcal{A} = \left\{ {{X_{ij}}\,{:}\,1 \le i \le I,1 \le j \le I - i+1} \right\}$ , the array reduces to the well-known claim triangle.

Define $t = i + j - 1$ , so that $t=1,2, \ldots ,I + J - 1$ . Observations with common $t$ lie on the $t$ -th diagonal of $\mathcal{A}$ .

Subsequent sections will often involve the simultaneous consideration of multiple business segments, with one array for each segment. A segment could be a line of business.

It will be necessary in this case to consider a collection $\mathbb{A} = \left\{ {{\mathcal{A}^{\left( n \right)}},n=1,2, \ldots ,N} \right\}$ of claim arrays, where ${\mathcal{A}^{\left( n \right)}}$ denotes the array for segment $n$ . It will be assumed initially that all ${\mathcal{A}^{\left( n \right)}}$ are congruent, i.e. are of the same dimensions $I,J$ , and that they have missing observations in the same $i,j$ locations, but this assumption will be relaxed in section 5.2.

The $i,j$ observation of ${\mathcal{A}^{\left( n \right)}}$ will be denoted $X_{ij}^{\left( n \right)}$ ; the entire $i$ -th row of ${\mathcal{A}^{\left( n \right)}}$ will be denoted $\mathcal{R}_i^{\left( n \right)}$ ; the entire $j$ -th column will be denoted $\mathcal{C}_j^{\left( n \right)}$ .

It will also be useful to consider diagonals of ${\mathcal{A}^{\left( n \right)}}$ , where the $t$ -th diagonal is defined as the subset $\big\{ {X_{ij}^{\left( n \right)} \in {\mathcal{A}^{\left( n \right)}}\,{:}\,i + j - 1 = t} \big\}$ and represents claim observations from the $t$ -th calendar period, $t=1$ denoting the calendar period in which the first accident period falls. The entire $t$ -th diagonal of ${\mathcal{A}^{\left( n \right)}}$ will be denoted $\mathcal{D}_t^{\left( n \right)}$ .
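As a concrete illustration of this indexing (a minimal sketch, not from the paper), the cells of a claim triangle and its diagonals can be enumerated directly:

```python
# Minimal sketch: cells of a claim triangle
# A = {X_ij : 1 <= i <= I, 1 <= j <= I - i + 1}
# and its diagonals D_t = {X_ij : i + j - 1 = t}.
I = 4

# All (i, j) positions present in a triangle of dimension I.
cells = [(i, j) for i in range(1, I + 1) for j in range(1, I - i + 2)]

def diagonal(t, cells):
    """Cells of the t-th diagonal, i.e. calendar period t = i + j - 1."""
    return [(i, j) for (i, j) in cells if i + j - 1 == t]
```

For $I=4$ the triangle has 10 cells, and `diagonal(4, cells)` returns the four cells of the latest calendar period, one per accident period.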

2.2. Tweedie family of distributions

2.2.1. Univariate Tweedie

The Tweedie family is a sub-family of the exponential dispersion family (EDF). The latter has two well-known representations, the additive and reproductive forms (Jørgensen, Reference Jørgensen1987). The definition of “reproductive” varies from place to place in the literature, and so, for the avoidance of ambiguity, the meaning adopted here is that of Jørgensen (Reference Jørgensen1997, Chapter 3), for which there is a duality relation between additive and reproductive. This renders them merely different forms of representation of a single EDF.

In common with Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016), the present note commences within the context of the additive representation, which has the pdf

(1) \begin{align}p\left( {X = x;\ \theta ,\lambda } \right) = a\left( {x,\lambda } \right){\rm{exp}}\left\{ {x\theta - \lambda b\left( \theta \right)} \right\}\!,\end{align}

where $\theta $ is a canonical parameter, $\lambda >0$ is an index parameter, and $b\left( \theta \right)$ is called the cumulant function.

This distribution has cumulant generating function

(2) \begin{align}K\left( t \right) = \lambda \left[ {b\left( {\theta + t} \right) - b\left( \theta \right)} \right]\!,\end{align}

giving

(3) \begin{align}E\left[ X \right] = \lambda b^{\prime}\left( \theta \right)\!,\end{align}
(4) \begin{align}Var\left[ X \right] = \lambda b^{\prime\prime}\left( \theta \right).\end{align}

The Tweedie sub-family is obtained by the selection

(5) \begin{align} b(\theta)=b_{p}(\theta)&=\frac{1}{2-p}[(1-p)\theta]^{\frac{2-p}{1-p}}, \quad p \in (\!-\infty,0]\cup(1,\infty), p\neq 1,2, \nonumber \\&= \exp \theta, \quad p=1, \nonumber \\&= -\! \ln \left( { - \theta } \right), \quad p=2.\end{align}

If $X$ is distributed according to (1) with $b\left( \theta \right) = {b_p}\left( \theta \right)$ , then it will be denoted $X\sim Tw_p^*\left( {\theta ,\lambda } \right)$ .

A useful alternative form of (5) in the case $p \ne 1,2$ is

(6) \begin{align}{b_p}\left( \theta \right) = \frac{{\alpha - 1}}{\alpha }{\left[ {\frac{\theta }{{\alpha - 1}}} \right]^\alpha },\end{align}

where $\alpha = \left( {2 - p} \right)/\left( {1 - p} \right)$ .

Remark 2.1. It is required that $sgn\left( \theta \right) = sgn\left( {\alpha - 1} \right)$ if (6) is to be defined for $\alpha $ negative or fractional.

Remark 2.2. It follows from (3) to (6) that, for $Tw_p^*$ with $p \ne 1$ ,

(7) \begin{align}E\left[ X \right] = \lambda {\left[ {\frac{\theta }{{\alpha - 1}}} \right]^{\alpha - 1}},\end{align}
(8) \begin{align}Var\left[ X \right] = \lambda {\left[ {\frac{\theta }{{\alpha - 1}}} \right]^{\alpha - 2}},\end{align}

and so

(9) \begin{align}\theta = \left( {\alpha - 1} \right)\frac{{E\left[ X \right]}}{{Var\left[ X \right]}}\;.\end{align}

It now follows from (7) and (9) that

(10) \begin{align}\frac{1}{{Co{V^2}\left[ X \right]}} = E\left[ X \right]\frac{{E\left[ X \right]}}{{Var\left[ X \right]}} = \lambda {\left[ {\frac{\theta }{{\alpha - 1}}} \right]^\alpha }\;,\end{align}

where $CoV\!\left[ X \right]$ denotes the coefficient of variation (CoV) of $X$ .

It may be checked by means of (5) that (7), (8) and (10) continue to hold when $p=1$ provided that the right-hand sides are interpreted as the limiting case as $\alpha \to \infty .$ This yields

(11) \begin{align}E\left[ X \right] = Var\left[ X \right] = \frac{1}{{Co{V^2}\left[ X \right]}} = \lambda {e^\theta }\;{\rm{for}}\;p=1.\end{align}

Thus, for fixed $\alpha $ , i.e. fixed $p$ , Tweedie distributions with the same $\theta $ are those with the same mean-to-variance ratio.
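Equation (11) can be checked numerically, using the fact that the additive Tweedie with $p=1$ is Poisson with mean $\lambda e^\theta$ (its cgf (2) with $b(\theta) = e^\theta$ is the Poisson cgf); the parameter values below are arbitrary:

```python
# Check of (11): for p = 1 the additive Tweedie is Poisson with mean
# lambda * exp(theta), so mean, variance and 1/CoV^2 all coincide.
import numpy as np

rng = np.random.default_rng(0)
theta, lam = -0.5, 10.0
m = lam * np.exp(theta)   # common value of E[X], Var[X], 1/CoV^2[X]

x = rng.poisson(m, size=200_000)
assert abs(x.mean() - m) / m < 0.02
assert abs(x.var() - m) / m < 0.02
```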

It will sometimes be found useful in subsequent development to re-parameterise $Tw_p^*\left( {\theta ,\lambda } \right)$ in terms of $\left( {\mu ,v} \right)$ , where $\mu = E\left[ X \right],v = Co{V^2}\left[ X \right]$ . It will also be useful to denote ${\sigma ^2} = Var\!\left[ X \right] = {\mu ^2}v$ . The re-parameterisation is as set out in the following lemma. These alternative representations of the Tweedie sub-family are also found in Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016).

Lemma 2.1. If $X\sim Tw_p^*\left( {\theta ,\lambda } \right)$ , then it may be re-parameterised as $X\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu v}},{\mu ^\alpha }{v^{\alpha - 1}}} \right)$ .

Proof. See Appendix A.
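The re-parameterisation of Lemma 2.1 can be verified arithmetically against the moment formulas (7) and (8); the values of $\alpha$, $\mu$ and $v$ below are arbitrary illustrative choices:

```python
# Arithmetic check of Lemma 2.1 against (7)-(8): with
# theta = (alpha - 1) / (mu * v) and lam = mu**alpha * v**(alpha - 1),
# the additive-Tweedie moments recover mean mu and variance mu^2 * v.
alpha = -1.0          # e.g. p = 1.5, since alpha = (2 - p) / (1 - p)
mu, v = 50.0, 0.1     # illustrative mean and squared CoV

theta = (alpha - 1) / (mu * v)   # note sgn(theta) = sgn(alpha - 1), per Remark 2.1
lam = mu**alpha * v**(alpha - 1)

mean = lam * (theta / (alpha - 1)) ** (alpha - 1)   # (7)
var = lam * (theta / (alpha - 1)) ** (alpha - 2)    # (8)

assert abs(mean - mu) < 1e-9
assert abs(var - mu**2 * v) < 1e-9
```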

The following lemma derives some closure properties of the Tweedie family under scaling and addition of variates.

Lemma 2.2. Suppose that, for stochastically independent variates ${V_1},{V_2}$ , ${V_i}\sim Tw_p^*\left( {\theta ,{\lambda _i}} \right),i=1,2$ . Then

(12) \begin{align}c{V_1}\sim Tw_p^*\left( {\theta /c,{c^\alpha }{\lambda _1}} \right)\;{\rm{for\;constant}}\;c>0\;{\rm{and}}\;p \ne 1,\end{align}
(13) \begin{align}{V_1} + {V_2}\sim Tw_p^*\left( {\theta ,{\lambda _1} + {\lambda _2}} \right)\!.\end{align}

Proof. By simple manipulation of the cgf (2), using (5) and (6). The results can also be found in Jørgensen (Reference Jørgensen1997).
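These closure properties admit a quick simulation check in the case $p=2$, identifying $Tw_2^*\left( {\theta ,\lambda } \right)$ with the gamma distribution of shape $\lambda$ and rate $-\theta$ (which follows from the cgf (2) with $b_2(\theta) = -\ln(-\theta)$); the parameter values are arbitrary:

```python
# Simulation check of Lemma 2.2 for p = 2, identifying Tw*_2(theta, lam)
# with Gamma(shape lam, rate -theta), theta < 0.
import numpy as np

rng = np.random.default_rng(1)
theta, lam1, lam2 = -2.0, 3.0, 5.0
size = 300_000

v1 = rng.gamma(shape=lam1, scale=-1 / theta, size=size)
v2 = rng.gamma(shape=lam2, scale=-1 / theta, size=size)

# Additive closure: V1 + V2 ~ Tw*_2(theta, lam1 + lam2).
s = v1 + v2
assert abs(s.mean() - (lam1 + lam2) / -theta) < 0.02
assert abs(s.var() - (lam1 + lam2) / theta**2) < 0.05

# Scaling closure with alpha = 0 (p = 2): c * V1 ~ Tw*_2(theta / c, lam1).
c = 1.5
cv = c * v1
assert abs(cv.mean() - lam1 / -(theta / c)) < 0.02
```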

2.2.2 Multivariate Tweedie

Following Furman & Landsman (Reference Furman and Landsman2010), Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016) consider variates of the form

(14) \begin{align}X_{ij}^{\left( n \right)} = \frac{\theta }{{\theta _{ij}^{\left( n \right)}}}{W_{ij}} + Z_{ij}^{\left( n \right)},\end{align}

where ${W_{ij}}\sim Tw_p^*\left( {\theta ,\lambda } \right),Z_{ij}^{\left( n \right)}\sim Tw_p^{*}\left( {\theta _{ij}^{\left( n \right)},\lambda _{ij}^{\left( n \right)}} \right)\;{\rm{with}}\;\theta _{ij}^{\left( n \right)} = \;\theta \;{\rm{when}}\;p=1$ . Then,

(15) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta ,\lambda + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p=1\end{align}
(16) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},\lambda {{\left( {\frac{\theta }{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha } + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p \ne 1.\end{align}

These results may be checked against Lemma 2.2. Since the marginals are Tweedie, the multi-dimensional variate $\left( {X_{ij}^{\left( 1 \right)}, \ldots ,X_{ij}^{\left( N \right)}} \right)$ is multivariate Tweedie. It should be noted that, if the multiplier of ${W_{ij}}$ is non-zero and other than that shown in (14), then $X_{ij}^{\left( n \right)}$ is not Tweedie unless $p=0$ (normal distribution).

The need to restrict the multiplier of ${W_{ij}}$ to the form shown in (14) is shown by Furman & Landsman (Reference Furman and Landsman2010) to follow from the fact that Cauchy’s functional equation $b\left( {xy} \right) = b\left( x \right)f\left( y \right) + g\left( y \right)$ has the following as the only solutions that are non-constant, continuous, and non-trivial:

\begin{align*}b\left( x \right) &= a\ln x + d\;{\rm{for}}\;f\left( y \right)=1,g\left( y \right) = a\ln y;\ \textrm{or}\\[3pt]b\left( x \right) &= a{x^c} + d\;{\rm{for}}\;f\left( y \right) = {y^c},g\left( y \right) = d\left( {1 - {y^c}} \right)\end{align*}

It is interesting to take the $\left( {\mu ,v} \right)$ -parameterisation of (16), using Lemma 2.1, to obtain

(17) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}}},{{\left( {\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}} \right)}^\alpha }\left( {\frac{1}{v} + \frac{1}{{v_{ij}^{\left( n \right)}}}} \right)} \right)\;{\rm{for}}\;p \ne 1,\end{align}

where the suffixes on $\mu ,v$ correspond to those on $\theta ,\lambda $ .
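The result (16) can be checked by simulation in the case $p=2$ (so $\alpha = 0$), again identifying $Tw_2^*\left( {\theta ,\lambda } \right)$ with the gamma distribution of shape $\lambda$ and rate $-\theta$; the parameter values below are arbitrary:

```python
# Simulation check of (16) for p = 2 (alpha = 0): with
# W ~ Tw*_2(theta, lam) and Z ~ Tw*_2(theta_n, lam_n), the cell value
# X = (theta / theta_n) * W + Z should be Tw*_2(theta_n, lam + lam_n),
# with mean (lam + lam_n)/(-theta_n) and variance (lam + lam_n)/theta_n**2
# by (7)-(8).
import numpy as np

rng = np.random.default_rng(2)
theta, lam = -2.0, 4.0       # common shock W_ij
theta_n, lam_n = -0.5, 1.5   # idiosyncratic Z_ij^(n)
size = 400_000

w = rng.gamma(shape=lam, scale=-1 / theta, size=size)
z = rng.gamma(shape=lam_n, scale=-1 / theta_n, size=size)
x = (theta / theta_n) * w + z

target_mean = (lam + lam_n) / -theta_n
target_var = (lam + lam_n) / theta_n**2
assert abs(x.mean() - target_mean) / target_mean < 0.01
assert abs(x.var() - target_var) / target_var < 0.02
```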

2.3. Common shock models

The general common shock framework of Avanzi et al. (Reference Avanzi, Taylor and Wong2018) is as follows.

Let ${\mathcal{P}^{\left( n \right)}}$ be a partition of ${\mathcal{A}^{\left( n \right)}} \in \mathbb{A}$ , i.e. ${\mathcal{P}^{\left( n \right)}} = \left\{ {\mathcal{P}_1^{\left( n \right)}, \ldots ,\mathcal{P}_P^{\left( n \right)}} \right\}$ where the $\mathcal{P}_p^{\left( n \right)}$ are subsets of ${\mathcal{A}^{\left( n \right)}}$ with $\mathcal{P}_p^{\left( n \right)} \cap \mathcal{P}_q^{\left( n \right)} = \emptyset $ for all $p,q=1, \ldots ,P,p \ne q$ and $\bigcup_{p=1}^{P}\mathcal{P}_{p}^{(n)}=\mathcal{A}^{(n)}$ . Suppose that all partitions are the same in the sense that, for each $p$ , the $\left( {i,j} \right)$ positions of the elements of ${\mathcal{A}^{\left( n \right)}}$ included in $\mathcal{P}_p^{\left( n \right)}$ are the same for different $n$ .

Now consider the following dependency structure on the elements $X_{ij}^{\left( n \right)}$ :

(18) \begin{align}{X}_{ij}^{\left( n \right)} = \alpha _{ij}^{\left( n \right)}{W_{\pi \left( {i,j} \right)}} + \beta _{ij}^{\left( n \right)}W_{\pi \left( {i,j} \right)}^{\left( n \right)} + \phi _{ij}^{\left( n \right)}Z_{ij}^{\left( n \right)}\end{align}

where $\pi \left( {i,j} \right) = p$ such that $X_{ij}^{\left( n \right)} \in \mathcal{P}_p^{\left( n \right)}$ , a unique mapping; ${W_{\pi \left( {i,j} \right)}},W_{\pi \left( {i,j} \right)}^{\left( n \right)},Z_{ij}^{\left( n \right)}$ are independent stochastic variates, and $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)},\phi _{ij}^{\left( n \right)} \geqslant 0$ are fixed and known constants (“mixture constants”); a convenient terminology for the ${W_{\pi \left( {i,j} \right)}},W_{\pi \left( {i,j} \right)}^{\left( n \right)},Z_{ij}^{\left( n \right)}$ comprises umbrella common shocks, array-specific common shocks, and idiosyncratic components, respectively. The $\pi \left( {i,j} \right)$ will be referred to as partition subsets.

It is evident that ${W_p}$ is a common shock across all $n$ , but affecting only subsets $\mathcal{P}_p^{\left( n \right)}$ for fixed $p$ ; $W_p^{\left( n \right)}$ is similarly a common shock across $\mathcal{P}_p^{\left( n \right)}$ , but now for fixed $n$ and $p$ ; and $Z_{ij}^{\left( n \right)}$ is an idiosyncratic component of $X_{ij}^{\left( n \right)}$ , specific to $i,j$ and $n$ .

The common shock $W_p^{\left( n \right)}$ creates dependency between observations within the subset $\mathcal{P}_p^{\left( n \right)}$ of array ${\mathcal{A}^{\left( n \right)}}$ . Since the partitions ${\mathcal{P}^{\left( n \right)}}$ are the same across $n$ , the common shock ${W_p}$ creates dependency between observations in the subsets $\mathcal{P}_p^{\left( n \right)}$ of the same or different arrays.

Particular selections of partitions ${\mathcal{P}^{\left( n \right)}}$ are of special interest, as set out in Table 1.

Remark 2.3. Since multiplication of a $Tw_p^*$ variate by a constant produces another $Tw_p^*$ variate for $p \ne 1$ (Lemma 2.2), the multiplier $\phi _{ij}^{\left( n \right)}$ in (18) may be absorbed into $Z_{ij}^{\left( n \right)}$ so that (18) simplifies to

(19) \begin{align}X_{ij}^{\left( n \right)} = \alpha _{ij}^{\left( n \right)}{W_{\pi \left( {i,j} \right)}} + \beta _{ij}^{\left( n \right)}W_{\pi \left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}\;{\rm{for}}\;p \ne 1.\end{align}

This will be taken as the general common shock structure for the remainder of this paper.

Table 1. Special cases of common shock.

3. Multivariate Tweedie for a General Common Shock Model

The cell structure (14) used by Avanzi et al. (Reference Avanzi, Taylor, Vu and Wong2016) is a special case of the general common shock formula (19) with ${\mathcal{P}^{\left( n \right)}}$ as in the cell-wise example of Table 1, $\alpha _{ij}^{\left( n \right)} = \theta /\theta _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}=0,\phi _{ij}^{\left( n \right)}=1$ . Since (14) generates a multivariate Tweedie, the objective will now be to identify the most general case of (19) that also generates a multivariate Tweedie. This is done in Appendix A, leading to the following result.

Lemma 3.1. Suppose that ${W_{\pi \left( {i,j} \right)}}\sim Tw_p^*\left( {{\theta _{\pi \left( {i,j} \right)}},{\lambda _{\pi \left( {i,j} \right)}}} \right),$ $W_{\pi \left( {i,j} \right)}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{\pi \left( {i,j} \right)}^{\left( n \right)},\lambda _{\pi \left( {i,j} \right)}^{\left( n \right)}} \right),$ $Z_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},\lambda _{ij}^{\left( n \right)}} \right)$ with ${\theta _{\pi \left( {i,j} \right)}} = \theta _{\pi \left( {i,j} \right)}^{\left( n \right)} = \theta _{ij}^{\left( n \right)}$ in the case $p=1$ . Then, $X_{ij}^{\left( n \right)}$ is Tweedie distributed if and only if $\alpha _{ij}^{\left( n \right)} = {\theta _{\pi \left( {i,j} \right)}}/\theta _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)} = \theta _{\pi \left( {i,j} \right)}^{\left( n \right)}/\theta _{ij}^{\left( n \right)}$ . In fact,

(20) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},{\lambda _{\pi \left( {i,j} \right)}} + \lambda _{\pi \left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p=1,\end{align}
(21) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},{{\left( {\frac{{{\theta _{\pi \left( {i,j} \right)}}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }{\lambda _{\pi \left( {i,j} \right)}} + {{\left( {\frac{{\theta _{\pi \left( {i,j} \right)}^{\left( n \right)}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }\lambda _{\pi \left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p \ne 1.{\rm{\;}}\;\end{align}

Corollary 3.1. In the case $p=2\;\left( {{\rm{i}}.{\rm{e}}.\;\;\alpha =0} \right)$ , the result (21) reduces to (20).

The following result is obtained by re-parameterisation of Lemma 3.1 according to Lemma 2.1.

Corollary 3.2. In Lemma 3.1, the coefficients $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}$ may be re-parameterised as

(22) \begin{align}\alpha _{ij}^{\left( n \right)} = \frac{{\mu _{ij}^{\left( n \right)}}}{{{\mu _{\pi \left( {i,j} \right)}}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}},\end{align}
(23) \begin{align}\beta _{ij}^{\left( n \right)} = \frac{{\mu _{ij}^{\left( n \right)}}}{{\mu _{\pi \left( {i,j} \right)}^{\left( n \right)}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}},\end{align}

and the result (21) may be re-parameterised as

(24) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}}},{{\left( {\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}} \right)}^\alpha }\left( {\frac{1}{{{v_{\pi \left( {i,j} \right)}}}} + \frac{1}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}} + \frac{1}{{v_{ij}^{\left( n \right)}}}} \right)} \right)\;{\rm{for}}\;p \ne 1.\end{align}

Corollary 3.3. The mixture coefficients $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}$ do not depend on $p$ (or $\alpha $ ).

The model obtained from the general common shock model (19) by the substitutions of Lemma 3.1 or Corollary 3.2 for $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}$ will be referred to as the general multivariate Tweedie common shock model.

Henceforth, results will be expressed in the $\left( {\mu ,v} \right)$ parameterisation.

Example 3.1. Corollary 3.2 may be illustrated for the case $p \ne 1$ with the choice of row-wise dependence from Table 1, where $\;\mathcal{P}_i^{\left( n \right)} = \mathcal{R}_i^{\left( n \right)}$ , and so $\pi \left( {i,j} \right) = i$ . In this case, (19) and (24) become

(25) \begin{align}X_{ij}^{\left( n \right)} = \left( {\frac{{\mu _{ij}^{\left( n \right)}}}{{{\mu _i}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{{v_i}}}} \right){W_i} + \left( {\frac{{\mu _{ij}^{\left( n \right)}}}{{\mu _i^{\left( n \right)}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{v_i^{\left( n \right)}}}} \right)W_i^{\left( n \right)} + Z_{ij}^{\left( n \right)}\end{align}

and

(26) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}}},{{\left( {\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}} \right)}^\alpha }\left( {\frac{1}{{{v_i}}} + \frac{1}{{v_i^{\left( n \right)}}} + \frac{1}{{v_{ij}^{\left( n \right)}}}} \right)} \right), \end{align}

with ${W_i}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{{\mu _i}{v_i}}},\mu _i^{\alpha }v_i^{\alpha - 1}} \right),$ $W_i^{\left( n \right)}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu _i^{\left( n \right)}v_i^{\left( n \right)}}},{{\left( {\mu _i^{\left( n \right)}} \right)}^\alpha }{{\left( {v_i^{\left( n \right)}} \right)}^{\alpha - 1}}} \right)$ .

According to (25), the observation in the $\left( {i,j} \right)$ cell consists of three contributions:

  1. (1) a shock that impacts all cells of row $i$ in all arrays;

  2. (2) a shock that impacts all cells of row $i$ in just the $n$ -th array;

  3. (3) an idiosyncratic component that impacts just the $\left( {i,j} \right)$ cell.

All three components are $Tw_p^*$ distributed, as is the total observation, and all have the same canonical parameter $\theta _{ij}^{\left( n \right)} = \frac{{\alpha - 1}}{{\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}}}$ .

4. Moments of Observations in a Multivariate Tweedie Common Shock Model

Consider the general multivariate Tweedie common shock model, with observations represented by (20) or (21). The first two moments of this observation are as in the following lemma.

Lemma 4.1. For $X_{ij}^{\left( n \right)}$ an observation in a general multivariate Tweedie common shock model,

(27) \begin{align}E\left[ {X_{ij}^{\left( n \right)}} \right] = \mu _{ij}^{\left( n \right)}\left( {\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}} + \frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}}+1} \right){\rm{\;}},\end{align}
(28) \begin{align}Var\left[ {X_{ij}^{\left( n \right)}} \right] = {\left( {\sigma _{ij}^{\left( n \right)}} \right)^2}\left( {\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}} + \frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}}+1} \right)\;.\end{align}

Note that these results hold without restriction on $p$ .

Proof. See Appendix A.
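Lemma 4.1 can be checked by simulating the marginal of one cell in the row-wise model (25) for $p=2$, using the $\left( {\mu ,v} \right)$ parameterisation of Lemma 2.1, under which $Tw_2^*$ is gamma with mean $\mu$ and squared CoV $v$; all parameter values below are invented for illustration:

```python
# Simulation check of (27)-(28) for one cell of the row-wise model (25)
# with p = 2. Only the marginal moments of the cell are checked, so
# independent draws of the shocks per replication suffice.
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

def tw2(mu, v, size):
    """Gamma draw with mean mu and squared CoV v (Tw*_2, alpha = 0)."""
    return rng.gamma(shape=1 / v, scale=mu * v, size=size)

# Invented parameters for one cell (i, j) of array n.
mu_i, v_i = 100.0, 0.04     # umbrella shock W_i
mu_in, v_in = 80.0, 0.05    # array-specific shock W_i^(n)
mu_ij, v_ij = 50.0, 0.02    # idiosyncratic Z_ij^(n)

a = (mu_ij / mu_i) * (v_ij / v_i)      # alpha_ij^(n) from (22)
b = (mu_ij / mu_in) * (v_ij / v_in)    # beta_ij^(n) from (23)

x = a * tw2(mu_i, v_i, n) + b * tw2(mu_in, v_in, n) + tw2(mu_ij, v_ij, n)

bracket = v_ij / v_i + v_ij / v_in + 1     # common multiple in (27)-(28)
assert abs(x.mean() - mu_ij * bracket) / (mu_ij * bracket) < 0.01
assert abs(x.var() - mu_ij**2 * v_ij * bracket) / (mu_ij**2 * v_ij * bracket) < 0.02
```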

The result just below then follows.

Proposition 4.1. For a general multivariate Tweedie common shock model,

  1. (a) $E\left[ {X_{ij}^{\left( n \right)}} \right]$ and $Var\left[ {X_{ij}^{\left( n \right)}} \right]$ are the same multiples of $E\left[ {Z_{ij}^{\left( n \right)}} \right]$ and $Var\left[ {Z_{ij}^{\left( n \right)}} \right],$ respectively;

  2. (b) The common multiple decomposes into its two common shock and single idiosyncratic components in the same proportions for $E\left[ {X_{ij}^{\left( n \right)}} \right]$ and $Var\left[ {X_{ij}^{\left( n \right)}} \right],$ respectively. The multiples do not depend on $p$ (or $\alpha $ ).

5. Balance of a General Multivariate Tweedie Common Shock Model

5.1. Condition for auto-balance

As foreshadowed in section 1, a common shock model is regarded as balanced if the proportionate contribution of each shock and idiosyncratic component to cell expectation is constant over all cells. Now, in the case of general multivariate Tweedie common shock model, the cell expectation is given by (27), where the three components within the bracket correspond to the two common shock and the idiosyncratic contributions, respectively.

Hence, auto-balance occurs if and only if, for each $n$ , the ratios $v_{ij}^{\left( n \right)}/{v_{\pi \left( {i,j} \right)}}{\rm{\;and\;}}v_{ij}^{\left( n \right)}/v_{\pi \left( {i,j} \right)}^{\left( n \right)}$ are independent of $i,j$ . This leads to the necessary and sufficient condition set out in the following proposition.

Proposition 5.1. A general multivariate Tweedie common shock model will be auto-balanced if and only if the following two conditions hold:

  1. (a) $v_{ij}^{\left( n \right)} = {C^{\left( n \right)}}{v_{\pi \left( {i,j} \right)}}$ ; and

  2. (b) $v_{ij}^{\left( n \right)} = {K^{\left( n \right)}}v_{\pi \left( {i,j} \right)}^{\left( n \right)}, $

for all $i,j,n$ , where ${C^{\left( n \right)}},{K^{\left( n \right)}}>0$ are quantities that depend on only $n$ .

For this auto-balanced case,

(29) \begin{align}E\left[ {X_{ij}^{\left( n \right)}} \right] = {{\rm{\kappa }}^{\left( n \right)}}\mu _{ij}^{\left( n \right)}\end{align}
(30) \begin{align}Var\left[ {X_{ij}^{\left( n \right)}} \right] = {{\rm{\kappa }}^{\left( n \right)}}{\left( {\sigma _{ij}^{\left( n \right)}} \right)^2},\end{align}
(31) \begin{align}Co{V^2}\left[ {X_{ij}^{\left( n \right)}} \right] = {\left( {{{\rm{\kappa }}^{\left( n \right)}}} \right)^{ - 1}}v_{ij}^{\left( n \right)}, \end{align}

where ${{\rm{\kappa }}^{\left( n \right)}} = {K^{\left( n \right)}} + {C^{\left( n \right)}}+1$ , which does not depend on $p$ (or $\alpha $ ).

Proof. See Appendix A.
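A small numerical sketch of Proposition 5.1, with invented constants standing in for ${C^{\left( n \right)}},{K^{\left( n \right)}}$: under conditions (a) and (b), the three bracket terms of (27) contribute in the same proportions in every cell, and their total is ${{\rm{\kappa }}^{\left( n \right)}}$:

```python
# Sketch of Proposition 5.1 with invented C^(n), K^(n) > 0.
C, K = 0.5, 0.4

def bracket_terms(v_pi):
    """The three terms of the bracket in (27) for a cell whose CoVs
    satisfy conditions (a) and (b): v_ij = C * v_pi and v_ij = K * v_pi_n."""
    v_ij = C * v_pi
    v_pi_n = v_ij / K
    return [v_ij / v_pi, v_ij / v_pi_n, 1.0]

# Two cells in different partition subsets give identical proportions...
t1, t2 = bracket_terms(0.04), bracket_terms(0.10)
assert all(abs(a - b) < 1e-12 for a, b in zip(t1, t2))

# ...and the bracket total is kappa^(n) = C^(n) + K^(n) + 1, as in (29)-(31).
assert abs(sum(t1) - (C + K + 1)) < 1e-12
```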

The following remark illustrates that, despite the restrictions of Proposition 5.1 on CoVs, the auto-balanced general multivariate Tweedie common shock model retains richness of structure.

Remark 5.1. The CoVs of umbrella common shocks for partition subsets are unrestricted. For a particular array, the CoVs of the array-specific common shocks over partition subsets must be common multiples of the umbrella common shocks for the corresponding partition subsets. These CoVs may thus vary by both array and partition subset. Within each array, the idiosyncratic CoVs must be common multiples of the CoVs of the array-specific common shocks for the corresponding partition subsets. Thus, the idiosyncratic CoVs may also vary by both array and partition subset, but not within a partition subset of an array. Under these constraints, the total cell CoVs within a particular array are common multiples of the idiosyncratic CoVs. The multiples may vary by array.

Remark 5.2. For the model of Proposition 5.1, the expected cell values $E\left[ {X_{ij}^{\left( n \right)}} \right]$ are all constant multiples of the idiosyncratic expectations $\mu _{ij}^{\left( n \right)}$ , where the multiples depend on only the array number $n$ . Likewise, cell variances $Var\left[ {X_{ij}^{\left( n \right)}} \right]$ are the same constant multiples of the idiosyncratic variances ${\left( {\sigma _{ij}^{\left( n \right)}} \right)^2}$ .

There may be occasions on which one wishes to omit either the umbrella or array-specific common shocks from the model of Lemma 3.1. In such cases, the proof of Proposition 5.1 in Appendix A is simply modified to obtain the following result.

Remark 5.3. Suppose that the umbrella common shock is omitted from the general multivariate Tweedie common shock model for particular array $n = {n^*}$ . Then, Proposition 5.1 holds with condition (a) omitted for $n = {n^*}$ , and ${C^{\left( {{n^*}} \right)}}$ omitted from ${{\rm{\kappa }}^{\left( {{n^*}} \right)}}$ . Similarly, if the array-specific common shock is omitted for array $n = {n^*}$ , then the proposition holds with condition (b) omitted for $n = {n^*}$ , and ${K^{\left( {{n^*}} \right)}}$ omitted from ${{\rm{\kappa }}^{\left( {{n^*}} \right)}}$ .

5.2 Model extensions

The general multivariate Tweedie common shock model set out in Lemma 3.1 contains two common shock components, the umbrella and array-specific, and both affect the same partition subsets $\pi \left( {i,j} \right)$ . The model can be generalised by adding further shocks and allowing different shocks to affect different subsets.

Section 2.3 introduced the partition ${\mathcal{P}^{\left( n \right)}}$ of the claim array ${\mathcal{A}^{\left( n \right)}}$ . Of course, the array may be partitioned in various ways, e.g. accident years, calendar years, etc., as illustrated in Table 1, and a set of common shocks associated with each partition.

Accordingly, consider $R$ distinct partitions $\mathcal{P}_{\left[ r \right]}^{\left( n \right)},r=1, \ldots ,R$ of each array ${\mathcal{A}^{\left( n \right)}}$ . Subsets of $\mathcal{P}_{\left[ r \right]}^{\left( n \right)}$ will be denoted by $\mathcal{P}_{\left[ r \right]1}^{\left( n \right)},\mathcal{P}_{\left[ r \right]2}^{\left( n \right)}, \ldots .\;\;$ For each $r$ , let ${\pi _{\left[ r \right]}}\left( {i,j} \right)$ denote the partition subset that contains cell $\left( {i,j} \right)$ . Next generalise (19) to the following:

(32) \begin{align}X_{ij}^{\left( n \right)} = \mathop \sum \limits_{r=1}^R \alpha _{ij,\left[ r \right]}^{\left( n \right)}{W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}} + \mathop \sum \limits_{r=1}^R \beta _{ij,\left[ r \right]}^{\left( n \right)}W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}, \end{align}

where ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}},W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}$ are umbrella and array-specific common shocks affecting partition subset ${\pi _{\left[ r \right]}}\left( {i,j} \right)$ and all ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}},W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)},{\rm{\;}}Z_{ij}^{\left( n \right)}$ are independent. Adopt the notation ${v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}} = Co{V^2}\left[ {{W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}} \right],v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)} = Co{V^2}\left[ {W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}} \right]$ .

By convention, and despite the ${\rm{\Sigma }}_{r=1}^R$ notation, it will be permitted that some of the variates ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}},W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}$ may be absent from (32), but it is assumed that at least one of the two is present for each $r$ . It will be assumed that the same terms ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}},W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}$ are present for each $i,j$ and $n$ . Let $\chi $ and ${\chi ^{\left( n \right)}},$ respectively, denote the number of ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}\;{\rm{and}}\;W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}$ terms present.

Example 5.1. A possible model is as follows:

(33) \begin{align}X_{ij}^{\left( n \right)} = \alpha _{ij,\left[ 1 \right]}^{\left( n \right)}{W_{{\pi _{\left[ 1 \right]}}\left( {i,j} \right)}} + \beta _{ij,\left[ 1 \right]}^{\left( n \right)}W_{{\pi _{\left[ 1 \right]}}\left( {i,j} \right)}^{\left( n \right)} + \beta _{ij,\left[ 2 \right]}^{\left( n \right)}W_{{\pi _{\left[ 2 \right]}}\left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}, \end{align}

where the ${\pi _{\left[ r \right]}}\left( {i,j} \right)$ are defined by $\mathcal{P}_{\left[ 1 \right]}^{\left( n \right)}$ , the diagonal-wise partition, and $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)}$ , the row-wise partition, from Table 1. The model thus includes a diagonal-wise umbrella common shock, and both diagonal-wise and row-wise array-specific common shocks.
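To fix ideas, the two partitions underlying (33) can be written out programmatically. The following Python sketch is purely illustrative (the function names are ours): it labels each cell of a 15 × 15 claim triangle with its diagonal-wise subset π_[1](i, j) and its row-wise subset π_[2](i, j).

```python
def diagonal_label(i, j):
    # pi_[1](i, j): cells with equal i + j lie on the same diagonal
    return i + j

def row_label(i, j):
    # pi_[2](i, j): cells with equal i lie in the same row
    return i

def partition_subsets(dim, label):
    """Group the cells of a claim triangle (i = accident period,
    j = development period) by a subset-labelling function."""
    subsets = {}
    for i in range(1, dim + 1):
        for j in range(1, dim - i + 2):      # triangle: i + j <= dim + 1
            subsets.setdefault(label(i, j), set()).add((i, j))
    return subsets

diagonals = partition_subsets(15, diagonal_label)
rows = partition_subsets(15, row_label)
```

Each cell then belongs to exactly one subset of each partition, which is all that model (32) requires of the bookkeeping.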

Lemma 3.1 and Corollary 3.2 extend easily to this more general situation, as set out in the following proposition.

Proposition 5.2. Suppose that, in the model (32), ${W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}\sim Tw_p^*\left( {{\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}},{\lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}} \right),$ $W_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)},\lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}} \right),$ $Z_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},\lambda _{ij}^{\left( n \right)}} \right)$ with ${\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}} = \theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)} = \theta _{ij}^{\left( n \right)}$ in the case $p=1$ . Then, $X_{ij}^{\left( n \right)}$ is Tweedie distributed if and only if $\alpha _{ij,\left[ r \right]}^{\left( n \right)} = {\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}/\theta _{ij}^{\left( n \right)},\beta _{ij,\left[ r \right]}^{\left( n \right)} = \theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}/\theta _{ij}^{\left( n \right)}$ . In fact,

(34) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},\mathop \sum \limits_{r=1}^R {\lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}} + \mathop \sum \limits_{r=1}^R \lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p=1.\end{align}
(35) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},\mathop \sum \limits_{r=1}^R {{\left( {\frac{{{\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }{\lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}} + \mathop \sum \limits_{r=1}^R {{\left( {\frac{{\theta _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }\lambda _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right)\;{\rm{for}}\;p \ne 1.\end{align}

As in Corollary 3.2, these results may be expressed in the alternative form:

(36) \begin{align}\alpha _{ij,\left[ r \right]}^{\left( n \right)} = \frac{{\mu _{ij}^{\left( n \right)}}}{{{\mu _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{{v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}}},\end{align}
(37) \begin{align}\beta _{ij,\left[ r \right]}^{\left( n \right)} = \frac{{\mu _{ij}^{\left( n \right)}}}{{\mu _{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}}},\end{align}
(38) \begin{align}X_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\frac{{\alpha - 1}}{{\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}}},{{\left( {\mu _{ij}^{\left( n \right)}v_{ij}^{\left( n \right)}} \right)}^\alpha }\left( {\mathop \sum \limits_{r=1}^R \frac{1}{{{v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}}} + \mathop \sum \limits_{r=1}^R \frac{1}{{v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}^{\left( n \right)}}} + \frac{1}{{v_{ij}^{\left( n \right)}}}} \right)} \right)\;{\rm{for}}\;p \ne 1.\end{align}

The establishment of conditions for auto-balance requires the notion of connectedness of cells of an array. Cells $\left( {i,j} \right)$ and $\left( {k,\ell } \right)$ of array ${\mathcal{A}^{\left( n \right)}}$ will be said to be connected if there exists a sequence of subsets $\left\{ {\mathcal{P}_{\left[ {{r_g}} \right]{s_g}}^{\left( n \right)},g=1, \ldots ,G} \right\}$ such that $\left( {i,j} \right) \in \mathcal{P}_{\left[ {{r_1}} \right]{s_1}}^{\left( n \right)},\left( {k,\ell } \right) \in \mathcal{P}_{\left[ {{r_G}} \right]{s_G}}^{\left( n \right)}$ and $\mathcal{P}_{\left[ {{r_g}} \right]{s_g}}^{\left( n \right)} \cap \mathcal{P}_{\left[ {{r_{g+1}}} \right]{s_{g+1}}}^{\left( n \right)} \ne \emptyset ,g=1, \ldots ,G - 1$ . By congruence of the arrays ${\mathcal{A}^{\left( n \right)}}$ , if $\left( {i,j} \right)$ and $\left( {k,\ell } \right)$ are connected within one array, they will be connected in all others.

It is evident that connectedness between two cells is an equivalence relation, and so each array ${\mathcal{A}^{\left( n \right)}}$ partitions into equivalence classes ${\mathcal{E}_h},h=1, \ldots ,H$ which, by congruence, do not depend on $n$ . The cells within any one equivalence class will be connected to all other cells in the class and disconnected from all cells in all other classes.

Further, since all cells within a single partition subset are connected to all other cells in that subset, an equivalence class must consist of a union of partition subsets. It will sometimes be convenient to denote an equivalence class ${\mathcal{E}_h}$ by $\mathcal{E}\left( {i,j} \right)$ for any cell $\left( {i,j} \right)$ contained in the class.

Proposition 5.3 now gives the necessary and sufficient condition for auto-balance in the case of model (32).

Proposition 5.3. Consider a Tweedie common shock model (32), subject to the parameter restrictions imposed by Proposition 5.2. The model will be auto-balanced if and only if the following two conditions hold:

  1. (a) $v_{ij}^{\left( n \right)} = {C^{\left( n \right)}}{v_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}$ ; and

  2. (b) $v_{ij}^{\left( n \right)} = {K^{\left( n \right)}}{v^{(n)}_{{\pi _{\left[ r \right]}}\left( {i,j} \right)}}$ ;

for all $i,j,n,r$ , where ${C^{\left( n \right)}},{K^{\left( n \right)}}>0$ are quantities that depend only on $n$ .

For this auto-balanced case,

(39) \begin{align}E\left[ {X_{ij}^{\left( n \right)}} \right] = {{\rm{\kappa }}^{\left( n \right)}}\mu _{ij}^{\left( n \right)}\end{align}
(40) \begin{align}Var\left[ {X_{ij}^{\left( n \right)}} \right] = {{\rm{\kappa }}^{\left( n \right)}}{\left( {\sigma _{ij}^{\left( n \right)}} \right)^2},\end{align}
(41) \begin{align}Co{V^2}\left[ {X_{ij}^{\left( n \right)}} \right] = {\left( {{{\rm{\kappa }}^{\left( n \right)}}} \right)^{ - 1}}v_{ij}^{\left( n \right)},\end{align}

where ${{\rm{\kappa }}^{\left( n \right)}} = {C^{\left( n \right)}}\chi + {K^{\left( n \right)}}{\chi ^{\left( n \right)}}+1$ , which does not depend on $p$ (or $\alpha $ ).

Proof. A slight modification of the proof of Proposition 5.1.
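The simple forms (39)–(41) can be checked arithmetically. The sketch below uses illustrative values of our own choosing, with one umbrella and one array-specific shock present, so that the shock multiple of the idiosyncratic mean reduces to $C^{(n)} + K^{(n)} + 1$.

```python
# Illustrative shock multiples (our own choice), with one umbrella and one
# array-specific shock present in each cell:
C, K = 0.5, 0.25
k = C + K + 1.0                      # kappa^(n) for this configuration

mu, v = 100.0, 0.04                  # idiosyncratic cell mean and squared CoV
sigma2 = v * mu ** 2                 # (sigma_ij^(n))^2 implied by mu and v

mean = k * mu                        # (39): E[X] = kappa * mu
var = k * sigma2                     # (40): Var[X] = kappa * sigma^2
cov2 = var / mean ** 2               # (41): CoV^2[X] = v / kappa
```

The final line confirms that (39) and (40) together imply (41): dividing $\kappa\sigma^2$ by $(\kappa\mu)^2$ leaves $v/\kappa$.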

This result fixes the relationship of the idiosyncratic component to the common shock components within the same cell. However, auto-balance of the Tweedie common shock model of Proposition 5.3 also imposes constraints on the common shocks themselves, as stated in the following result.

Proposition 5.4. The auto-balance conditions of Proposition 5.3 imply that, for any given $n,r,i,j$ , ${v_{{\pi _{\left[ r \right]}}\left( {k,\ell } \right)}}\;{\rm{and}}\;v_{{\pi _{\left[ r \right]}}\left( {k,\ell } \right)}^{\left( n \right)}$ are each constant (though not necessarily equal to each other) over all $\left( {k,\ell } \right) \in \mathcal{E}\left( {i,j} \right)$ . In short, common shock CoVs are constant over equivalence classes.

Proof. See Appendix A.

A few examples of connectedness in common shock models follow. As a preliminary, one may note that, in the case $R=1$ in (32), there is a single partition of the arrays. The subsets of a partition are disjoint, by definition, and so there is no connectedness of cells in distinct subsets. The equivalence classes ${\mathcal{E}_h}$ are the partition subsets themselves. By Proposition 5.4, common shock CoVs are constant over just partition subsets, which is the case by their definition.

Example 5.2: cell-wise common shocks. The array partition for cell-wise shocks is given in Table 1, where it is seen that the partition subsets are the cells themselves. Hence, Proposition 5.4 states that common shock CoVs are constant over cells within a partition. This is a vacuous statement, which imposes no restriction on these CoVs.

Example 5.3: row-wise common shocks. By the same type of argument as in Example 5.2, the common shock CoVs are constant over the partition subsets, which are rows.

Example 5.4: simultaneous row-wise and diagonal-wise common shocks. This is the case $R=2$ in (32), where the partition subsets ${\pi _{\left[ 1 \right]}}\left( {i,j} \right)$ are rows and the ${\pi _{\left[ 2 \right]}}\left( {i,j} \right)$ diagonals.

It is possible to show that all $\left( {i,j} \right),\left( {k,\ell } \right)$ are connected. If $i = k$ , then the two cells are connected by virtue of lying within the same row. If $i + j = k + \ell $ , then they are connected within the same diagonal. If neither of these conditions holds, then, without loss of generality, one may assume that $i + j > k + \ell $ . One may connect the cell $\left( {i,j} \right)$ to $\left( {i,k + \ell - i} \right)$ within the same row and thence to $\left( {k,\ell } \right)$ within the same diagonal.

Thus, the array contains only one equivalence class, namely the entire array. It then follows from Proposition 5.4 that the umbrella and array-specific common shock CoVs must be constant across all cells in each array if the model is to be balanced.

Example 5.5: a more exotic case. In this case, $R=3$ . Let $\mathcal{P}_{\left[ 1 \right]}^{\left( n \right)}$ be the row-wise partition of Example 5.3, i.e. $\mathcal{P}_{\left[ 1 \right]i}^{\left( n \right)} = \mathcal{R}_i^{\left( n \right)}$ (see Table 1). Now let $\mathcal{P}_{\left[ * \right]}^{\left( n \right)}$ denote the diagonal-wise partition $\mathcal{P}_{\left[ * \right]t}^{\left( n \right)} = \mathcal{D}_t^{\left( n \right)}$ and define $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)},\mathcal{P}_{\left[ 3 \right]}^{\left( n \right)}$ as $\left\{ {\mathcal{D}_t^{\left( n \right)} \cap \mathcal{R}_i^{\left( n \right)}\,{:}\,i \le {i_0},\;all\;t} \right\}$ and $\left\{ {\mathcal{D}_t^{\left( n \right)} \cap \mathcal{R}_i^{\left( n \right)}\,{:}\,i > {i_0},\;all\;t} \right\},$ respectively, for some chosen ${i_0}$ . Then, $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)}\;$ represents a diagonal effect in “low” accident periods and $\mathcal{P}_{\left[ 3 \right]}^{\left( n \right)}\;$ a diagonal effect in “high” accident periods.

In this case, rows are connected (as before), and the semi-diagonals constituting $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)}$ are also connected, just as in Example 5.4. Similarly, the semi-diagonals in $\mathcal{P}_{\left[ 3 \right]}^{\left( n \right)}$ are also connected. However, there is no connection between $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)}$ and $\mathcal{P}_{\left[ 3 \right]}^{\left( n \right)}$ , with the result that the equivalence classes are the unions of the subsets of $\mathcal{P}_{\left[ 2 \right]}^{\left( n \right)}$ and of $\mathcal{P}_{\left[ 3 \right]}^{\left( n \right)}$ , namely the cells with $i \le {i_0}$ and those with $i > {i_0}$ , respectively.
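These equivalence-class computations can be mechanised. The Python sketch below is ours, not the authors': it merges cells that share a partition subset using a union-find, and reproduces the classes of Examples 5.4 and 5.5 for a 15 × 15 triangle (with $i_0 = 10$, as in the numerical examples of section 6).

```python
def equivalence_classes(cells, partitions):
    """Compute the equivalence classes of an array under connectedness.
    `partitions` is a list of labelling functions cell -> subset label
    (returning None if the cell is not covered by that partition)."""
    parent = {c: c for c in cells}

    def find(c):
        while parent[c] != c:
            parent[c] = parent[parent[c]]        # path halving
            c = parent[c]
        return c

    for label in partitions:
        first = {}                               # anchor cell of each subset
        for c in cells:
            key = label(*c)
            if key is None:
                continue
            if key in first:
                parent[find(c)] = find(first[key])
            else:
                first[key] = c

    classes = {}
    for c in cells:
        classes.setdefault(find(c), set()).add(c)
    return list(classes.values())

dim = 15
cells = [(i, j) for i in range(1, dim + 1) for j in range(1, dim - i + 2)]

# Example 5.4: rows plus full diagonals -> a single equivalence class.
full = equivalence_classes(cells, [lambda i, j: ('row', i),
                                   lambda i, j: ('diag', i + j)])

# Example 5.5 with i0 = 10: rows plus semi-diagonals split at i0 -> two
# classes, namely the cells with i <= i0 and those with i > i0.
i0 = 10
split = equivalence_classes(cells, [
    lambda i, j: ('row', i),
    lambda i, j: ('lo', i + j) if i <= i0 else None,
    lambda i, j: ('hi', i + j) if i > i0 else None,
])
```

Running this confirms a single class of all 120 cells in the first case, and two classes (of 105 and 15 cells) in the second.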

A further model extension is possible. Section 2.1 assumed congruence of all arrays ${\mathcal{A}^{\left( n \right)}}$ , and this property has been useful in one or two of the proofs of auto-balance. Now that the conditions for auto-balance have been established, however, the assumption of congruence can be seen to be unnecessary.

Proposition 5.5. Consider the model of Proposition 5.2, which is sufficiently general to include the earlier model of section 5.1. The conditions for auto-balance are given in Proposition 5.3, on the assumption of congruence of all arrays ${\mathcal{A}^{\left( n \right)}}$ . Now delete arbitrary cells from any or all of the ${\mathcal{A}^{\left( n \right)}}$ , not necessarily the same cells for each $n$ . Then, the model will be auto-balanced if and only if conditions (a) and (b) of Proposition 5.3 hold for all $i,j,n,r$ for which the $v$ terms exist on both sides of the equations.

Proof. Consider the situation before the deletion of cells, and suppose that conditions (a) and (b) of Proposition 5.3 hold for all $i,j,n,r$ , ensuring auto-balance. This means that the proportionate contribution of each shock and idiosyncratic component to cell expectation is constant over all cells observed. Now delete the nominated cells from each array. The auto-balance condition is unaffected in respect of the remaining cells.

6. Numerical Examples

6.1 Data sets

Three synthetic data sets were used to illustrate the auto-balance of three different claim models when the conditions in Propositions 5.1 and 5.3 hold.

All data sets are Tweedie distributed with power parameter $p\; = \;1.8,\;$ and each data set consists of two triangles with dimension 15×15. It follows that each cell of each triangle follows a compound Poisson-Gamma distribution (Jørgensen, Reference Jørgensen1987, section 5). The value of 1.8 is simply a selection by the authors, but Appendix B demonstrates its reasonableness.
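A Tweedie cell with $p = 1.8$ can be sampled directly through its compound Poisson-Gamma representation. The sketch below is ours, not the authors' simulation code; the Poisson rate and Gamma rate are arbitrary illustrative values, and the power parameter is recovered as $p = (\alpha - 2)/(\alpha - 1) = 1.8$ for $\alpha = -1/4$.

```python
import math
import random

def sample_tweedie_cpg(kappa, alpha, beta, rng):
    """One draw of Y = X_1 + ... + X_N, with N ~ Poisson(kappa) and
    X_i ~ Gamma(shape = -alpha, rate = beta); Y is then Tweedie
    distributed with power p = (alpha - 2) / (alpha - 1)."""
    # Knuth's Poisson sampler (adequate for moderate kappa)
    limit, n, prod = math.exp(-kappa), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            break
        n += 1
    # random.gammavariate takes (shape, scale); scale = 1 / rate
    return sum(rng.gammavariate(-alpha, 1.0 / beta) for _ in range(n))

rng = random.Random(20230412)
alpha = -0.25
p = (alpha - 2.0) / (alpha - 1.0)          # = 1.8
kappa, beta = 10.0, 0.025                  # E[Y] = kappa * (-alpha) / beta = 100
draws = [sample_tweedie_cpg(kappa, alpha, beta, rng) for _ in range(20000)]
mean = sum(draws) / len(draws)
var = sum((y - mean) ** 2 for y in draws) / len(draws)
cov2 = var / mean ** 2                     # near (1 - alpha)/(-alpha * kappa) = 0.5
```

The sample mean and squared CoV line up with the moment formulas of Appendix B, which is all the synthetic data generation requires.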

The development-period patterns of the idiosyncratic and common shock components differ between triangles and from each other. Parameters have been intentionally selected to reflect different degrees of contribution of umbrella and array-specific shocks within each triangle and between triangles.

Data set 1 is used to illustrate the claim model with cell-wise common shocks described in Example 5.2. There is no restriction on the common shock CoVs, as the partition subsets are the cells themselves. Column-wise CoVs have been chosen only for simplicity.

The claim model with row-wise umbrella and array-specific shocks in Example 5.3 is illustrated using data set 2. The partition subsets are rows, and hence, the model is specified with row-wise common shock CoVs.

The exotic case described in Example 5.5 is illustrated using the third data set. This data set has split diagonal-wise umbrella shocks which apply to two segments of accident periods $\left\{ {i\; \le 10} \right\}$ and $\left\{ {i\;>10} \right\}$ . The array-specific shocks are row-wise. As described in Example 5.5, all cells within a row in an array are connected due to the array-specific shocks, and all cells within the same semi-diagonal are connected due to the split umbrella diagonal shocks. The two equivalence classes in each triangle are all cells with $\left\{ {i\; \le 10} \right\}$ and $\left\{ {i\;>10} \right\}$ , respectively. CoVs of common shocks are specified to be constant for all cells within each of these equivalence classes.

Parameter specifications of these synthetic data sets are provided in Appendix C.

6.2. Common shock contributions

It is worth noting that the conditions for auto-balance of common shock models ensure equality of the expected proportionate contribution of each shock and idiosyncratic component to cell expectation over all cells. Hence, it is sufficient to demonstrate the auto-balance of common shock models using the expected value and CoV of each shock and idiosyncratic component. If one wishes to assess the balance of common shock contributions using simulated data, simulations can be carried out using the specified expected values, CoVs, and power parameter $p.$ This section provides the results of common shock contributions for both expected and simulated values.

Following (19), (22), and (23), the expected proportionate contribution of umbrella and array-specific shocks to cell expectation over all cells can be calculated by

\begin{align*}\mu _{ij}^{\left( n \right)}\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}}\bigg/E\left[ {X_{ij}^{\left( n \right)}} \right],\end{align*}

and

\begin{align*}\mu _{ij}^{\left( n \right)}\frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}}\bigg/E\left[ {X_{ij}^{\left( n \right)}} \right],\end{align*}

respectively, where $E\left[ {X_{ij}^{\left( n \right)}} \right]$ is calculated using (27).
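The constancy of these expected proportions under conditions (a) and (b) is easy to verify numerically. In the sketch below (our own construction, for a single-partition model), the cell means and CoVs vary arbitrarily across a small triangle, yet every cell returns the same pair of shares.

```python
# Shock multiples of our own choosing; conditions (a) and (b) then force the
# shock CoVs from each cell's idiosyncratic CoV.
C, K = 0.5, 0.25
shares = set()
for i in range(1, 6):
    for j in range(1, 7 - i):                  # a small 5-by-5 triangle
        mu = 100.0 * 0.8 ** (j - 1)            # arbitrary cell mean mu_ij
        v = 0.02 + 0.001 * i                   # arbitrary cell squared CoV v_ij
        v_pi, v_pi_n = v / C, v / K            # condition (a): v = C * v_pi, etc.
        e_x = mu * (v / v_pi + v / v_pi_n + 1.0)            # E[X_ij] via (27)
        shares.add((round(mu * (v / v_pi) / e_x, 12),       # umbrella share
                    round(mu * (v / v_pi_n) / e_x, 12)))    # array-specific share
# Every cell yields the same pair of shares, C and K over (C + K + 1).
```

With these illustrative multiples the shares are $0.5/1.75$ and $0.25/1.75$ for every cell, which is the pattern reported in Table 2.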

The expected common shock proportions for all three synthetic data sets arising from the parameter specifications in Appendix C are provided in Table 2.

Table 2. Expected common shock proportions to cell expectations of synthetic data.

It is observed that each shock contributes equally to cell expectation across all cells, and so auto-balance is achieved. It can also easily be verified that the remaining contribution to cell expectation, which belongs to the idiosyncratic component, satisfies (29) with the shock multiples ${C^{\left( n \right)}}$ and ${K^{\left( n \right)}}$ provided in Appendix C.

For simulated data, the proportionate contribution of umbrella and array-specific shocks to cell total is calculated using the mixture coefficients and simulated values as

\begin{align*}\frac{{\mu _{ij}^{\left( n \right)}}}{{{\mu _{\pi \left( {i,j} \right)}}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}}{W_{\pi \left( {i,j} \right)}}/X_{ij}^{\left( n \right)},\end{align*}

and

\begin{align*}\frac{{\mu _{ij}^{\left( n \right)}}}{{\mu _{\pi \left( {i,j} \right)}^{\left( n \right)}}}\;\frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}}W_{\pi \left( {i,j} \right)}^{\left( n \right)}/X_{ij}^{\left( n \right)},\end{align*}

respectively, where $X_{ij}^{\left( n \right)}$ is the sum of simulated components.

Tables 3 and 4 show the contributions of umbrella shock and array-specific shock to cell total for triangle 1 in data set 3. The equivalent tables for the remaining data sets and triangles are provided in Appendix C.

Table 3. Proportionate contributions of umbrella shock to cell total – Triangle 1, data set 3.

Table 4. Proportionate contributions of array-specific shock to cell total – Triangle 1, data set 3.

The tables are heat maps in which colouring toward the red end of the spectrum indicates shock contributions that are larger than the triangle average; the blue end of the spectrum indicates shock contributions that are smaller.

Due to the random nature of simulation, the proportionate contribution of a shock to a cell total in the simulated data is not constant across all cells. However, it can be observed from Tables 3 and 4 and Appendix C that these contributions are neither systematically skewed toward cells with small values in high development periods, nor overwhelmed in cells with larger values from early development periods, as occurs in unbalanced common shock models.

Split umbrella diagonal shocks are clearly visible in Table 3, where diagonal patterns apply to only the relevant subset of accident periods. Row-wise array-specific shocks are also exhibited in Table 4. As shown in Appendix C for data set 1, where shocks are cell-wise, no clear patterns are observed in the proportionate contributions of umbrella and array-specific shocks. On the other hand, row-wise shock patterns are evident for data set 2, which was simulated from a row-wise dependence model.

7. Conclusions

Section 5.1 derives auto-balance conditions for Tweedie common shock models. These are found to be relatively simple, depending only on the coefficients of variation of the common shock and idiosyncratic components of observations, specifically, the relations between these CoVs. In the case of auto-balance, the means and variances of observations assume particularly simple forms.

The common shock models considered are of quite general form: row-wise, diagonal-wise, etc., or shocks that affect even quite irregular shapes within a triangle. Section 5.2 extends the range of models further with the inclusion of an indefinite number of shocks within the model. Shocks of a hybrid type, such as intersections of subsets of rows and diagonals, are also included. Auto-balance conditions are derived for all of these.

Acknowledgement

In relation to the first author, this research was supported under Australian Research Council’s Linkage Projects funding scheme (project number LP130100723). The views expressed herein are those of the authors and are not necessarily those of the supporting body.

Appendix A

Proof of Lemma 2.1. Divide (10) by (7) to obtain

(A.1) \begin{align}\frac{\theta }{{\alpha - 1}} = \frac{1}{{\mu v}}\;.\end{align}

Substitute (A.1) into (10) to obtain

(A.2) \begin{align}\lambda = \frac{1}{v}\;{\left( {\mu v} \right)^\alpha } = {\mu ^\alpha }{v^{\alpha - 1}}\;.\end{align}

The lemma follows from (A.1) and (A.2).

Proof of Lemma 3.1. Consider the last two summands of (19), i.e. $\beta _{ij}^{\left( n \right)}W_{\pi \left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}$ for $p \ne 1$ . By (16),

\begin{align*}\beta _{ij}^{\left( n \right)}W_{\pi \left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}\sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},{{\left( {\frac{{\theta _{\pi \left( {i,j} \right)}^{\left( n \right)}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }\lambda _{\pi \left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right),\end{align*}

provided that $\beta _{ij}^{\left( n \right)} = \theta _{\pi \left( {i,j} \right)}^{\left( n \right)}/\theta _{ij}^{\left( n \right)}$ .

Similarly,

(A.3) \begin{align}X_{ij}^{\left( n \right)} &= \alpha _{ij}^{\left( n \right)}{W_{\pi \left( {i,j} \right)}} + \left( {\beta _{ij}^{\left( n \right)}W_{\pi \left( {i,j} \right)}^{\left( n \right)} + Z_{ij}^{\left( n \right)}} \right)\nonumber\\[3pt]&\quad \sim Tw_p^*\left( {\theta _{ij}^{\left( n \right)},{{\left( {\frac{{{\theta _{\pi \left( {i,j} \right)}}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }{\lambda _{\pi \left( {i,j} \right)}} + {{\left( {\frac{{\theta _{\pi \left( {i,j} \right)}^{\left( n \right)}}}{{\theta _{ij}^{\left( n \right)}}}} \right)}^\alpha }\lambda _{\pi \left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right),\end{align}

provided that $\alpha _{ij}^{\left( n \right)} = {\theta _{\pi \left( {i,j} \right)}}/\theta _{ij}^{\left( n \right)}$ .

The case $p=1$ is similarly treated. This proves the sufficiency of the conditions on $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}$ . For necessity, consider the cgf of $X_{ij}^{\left( n \right)}$ in the case $p \ne 0,1,2.$ It is

(A.4) \begin{align}{K_{X_{ij}^{\left( n \right)}}}\left( t \right) = {K_{{W_{\pi \left( {i,j} \right)}}}}\left( {\alpha _{ij}^{\left( n \right)}t} \right) + {K_{W_{\pi \left( {i,j} \right)}^{\left( n \right)}}}\left( {\beta _{ij}^{\left( n \right)}t} \right) + {K_{Z_{ij}^{\left( n \right)}}}\left( t \right).\end{align}

Consider the first summand on the right:

(A.5) \begin{align}{K_{{W_{\pi \left( {i,j} \right)}}}}\left( {\alpha _{ij}^{\left( n \right)}t} \right) &= {\lambda _{\pi \left( {i,j} \right)}}\left[ {{b_p}\left( {{\theta _{\pi \left( {i,j} \right)}} + \alpha _{ij}^{\left( n \right)}t} \right) - {b_p}\left( {{\theta _{\pi \left( {i,j} \right)}}} \right)} \right] \nonumber\\ &= {\lambda _{\pi \left( {i,j} \right)}}{b_p}\left( {{\theta _{\pi \left( {i,j} \right)}}} \right)\left[ {{{\left( {1 + \frac{{\alpha _{ij}^{\left( n \right)}}}{{{\theta _{\pi \left( {i,j} \right)}}}}t} \right)}^\alpha } - 1} \right].\end{align}

by (2) and (6).

Similarly for the other two summands in (A.4), whereupon

(A.6) \begin{align}&{K_{X_{ij}^{\left( n \right)}}}\left( t \right) = {b_p}\left( {{\theta _{\pi \left( {i,j} \right)}}} \right) \nonumber \\ &\left\{ {{\xi _1}\left[ {{{\left( {1 + \frac{{\alpha _{ij}^{\left( n \right)}}}{{{\theta _{\pi \left( {i,j} \right)}}}}t} \right)}^\alpha } - 1} \right] + {\xi _2}\left[ {{{\left( {1 + \frac{{\beta _{ij}^{\left( n \right)}}}{{\theta _{\pi \left( {i,j} \right)}^{\left( n \right)}}}t} \right)}^\alpha } - 1} \right] + {\xi _3}\left[ {{{\left( {1 + \frac{1}{{\theta _{ij}^{\left( n \right)}}}t} \right)}^\alpha } - 1} \right]} \right\},\end{align}

for constants ${\xi _k}>0,k=1,2,3$ , and, if this is to be $Tw_p^*$ , it must be equal to

(A.7) \begin{align}{K_{X_{ij}^{\left( n \right)}}}\left( t \right) = {b_p}\left( {{\theta _{\pi \left( {i,j} \right)}}} \right)\xi \left[ {{{\left( {1 + {\rm{\eta }}t} \right)}^\alpha } - 1} \right],\end{align}

for some constants $\xi>0,{\rm{\eta }} \ne 0$ .

Note that, for $p \ne 0,1,2$ , one finds $\alpha \in \left( { - \infty ,0} \right) \cup \left( {0,1} \right)$ . For identity between (A.6) and (A.7) to hold, all three of the terms raised to power $\alpha $ in (A.6) must be equal, yielding

\begin{align*}\frac{{\alpha _{ij}^{\left( n \right)}}}{{{\theta _{\pi \left( {i,j} \right)}}}} = \frac{{\beta _{ij}^{\left( n \right)}}}{{\theta _{\pi \left( {i,j} \right)}^{\left( n \right)}}} = \frac{1}{{\theta _{ij}^{\left( n \right)}}}{\rm{\;}}.\end{align*}

This proves the necessity of the conditions on $\alpha _{ij}^{\left( n \right)},\beta _{ij}^{\left( n \right)}$ (as well as re-establishing their sufficiency).

The cases $p=0,1,2,$ respectively, may be dealt with by a similar analysis of cgf ${K_{X_{ij}^{\left( n \right)}}}\left( t \right)$ .

Proof of Lemma 4.1. For $p \ne 1,$ the value of $E\left[ {X_{ij}^{\left( n \right)}} \right]$ is obtained from (7) with $\theta ,\lambda $ replaced by the two arguments on the right side of (24). This immediately yields (27).

For $p=1,$ by (11) and (20),

\begin{align*}E\left[ {X_{ij}^{\left( n \right)}} \right] = \left( {{\lambda _{\pi \left( {i,j} \right)}} + \lambda _{\pi \left( {i,j} \right)}^{\left( n \right)} + \lambda _{ij}^{\left( n \right)}} \right)exp{\rm{\;}}\theta _{ij}^{\left( n \right)} = \lambda _{ij}^{\left( n \right)}exp{\rm{\;}}\theta _{ij}^{\left( n \right)}\left( {\frac{{{\lambda _{\pi \left( {i,j} \right)}}}}{{\lambda _{ij}^{\left( n \right)}}} + \frac{{\lambda _{\pi \left( {i,j} \right)}^{\left( n \right)}}}{{\lambda _{ij}^{\left( n \right)}}}+1} \right){\rm{\;}},\end{align*}

which is equal to (27) when (11) is used again, and it is recalled from Lemma 3.1 that ${\theta _{\pi \left( {i,j} \right)}} = \theta _{\pi \left( {i,j} \right)}^{\left( n \right)} = \theta _{ij}^{\left( n \right)}$ .

Evaluation of $Var\left[ {X_{ij}^{\left( n \right)}} \right]$ for $p \ne 1$ follows the same logic but with (7) replaced by (8), leading to

\begin{align*}Var\left[ {X_{ij}^{\left( n \right)}} \right] = {\left( {\mu _{ij}^{\left( n \right)}} \right)^2}v_{ij}^{\left( n \right)}\left( {\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}} + \frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{(n)}}}+1} \right){\rm{\;}},\end{align*}

which is identical to (28).

Proof of Proposition 5.1. As stated in the preamble to the proposition, a necessary and sufficient condition for balance is that, for each $n$ ,

(A.8) \begin{align}\frac{{v_{ij}^{\left( n \right)}}}{{v_{\pi \left( {i,j} \right)}^{\left( n \right)}}}{\rm{\;must\;be\;independent\;of\;}}i,j;{\rm{\;and}}\end{align}
(A.9) \begin{align}\frac{{v_{ij}^{\left( n \right)}}}{{{v_{\pi \left( {i,j} \right)}}}}{\rm{\;must\;be\;independent\;of\;}}i,j.\end{align}

These are exactly conditions (a) and (b).

Substitution of conditions (a) and (b) of the proposition in (27) and (28) yields (29)–(31) immediately.

Proof of Proposition 5.4. Commence with condition (a) of Proposition 5.3 for fixed $i,j,n\;{\rm{and}}\;r = {r_1}$ and note that ${v_{{\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)}}$ is constant over all cells $\left( {k,\ell } \right) \in {\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)$ . This implies that $v_{k\ell }^{\left( n \right)}$ is constant over the same set.

Now, choose any fixed but arbitrary $r = {r^*}$ . Either there is a partition subset ${\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)$ that lies within the equivalence class $\mathcal{E}\left( {i,j} \right)$ or there is not. If not, then all subsets of partition $\mathcal{P}_{\left[ {{r^*}} \right]}^{\left( n \right)}$ are disconnected from $\left( {i,j} \right)$ .

Consider just the former (connected) case, and let cell $\left( {k,\ell } \right)$ be arbitrary within the subset. The cells $\left( {i,j} \right)$ and $\left( {k,\ell } \right)$ are connected and so, by definition, there exists a sequence of subsets $\left\{ {\mathcal{P}_{\left[ {{r_g}} \right]{s_g}}^{\left( n \right)},g=1, \ldots ,G} \right\}$ such that $\left( {i,j} \right) \in \mathcal{P}_{\left[ {{r_1}} \right]{s_1}}^{\left( n \right)} = {\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right),\left( {k,\ell } \right) \in \mathcal{P}_{\left[ {{r_G}} \right]{s_G}}^{\left( n \right)} = {\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)$ and $\mathcal{P}_{\left[ {{r_g}} \right]{s_g}}^{\left( n \right)} \cap \mathcal{P}_{\left[ {{r_{g+1}}} \right]{s_{g+1}}}^{\left( n \right)} \ne \emptyset ,g=1, \ldots ,G - 1$ .

Consider the case $g=1$ , for which ${\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)$ and $\mathcal{P}_{\left[ {{r_2}} \right]{s_2}}^{\left( n \right)}$ intersect. It follows that $\mathcal{P}_{\left[ {{r_2}} \right]{s_2}}^{\left( n \right)}$ contains at least one cell $\left( {{k_2},{\ell _2}} \right)$ which lies within ${\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)$ so that $v_{{k_2}{\ell _2}}^{\left( n \right)} = {C^{\left( n \right)}}{v_{{\pi _{\left[ {{r_1}} \right]}}\left( {{k_2},{\ell _2}} \right)}}$ by condition (a) of Proposition 5.3. But recall that $\mathcal{P}_{\left[ {{r_2}} \right]{s_2}}^{\left( n \right)}$ is just a partition subset and so can be represented as ${\pi _{\left[ {{r_2}} \right]}}\left( {{k_2},{\ell _2}} \right)$ , and so, by the same condition, $v_{{k_2}{\ell _2}}^{\left( n \right)} = {C^{\left( n \right)}}{v_{{\pi _{\left[ {{r_2}} \right]}}\left( {{k_2},{\ell _2}} \right)}}$ . From this, it follows that ${v_{{\pi _{\left[ {{r_2}} \right]}}\left( {{k_2},{\ell _2}} \right)}} = {v_{{\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)}}$ .

The argument can be extended along the chain of subsets connecting $\left( {i,j} \right)$ to $\left( {k,\ell } \right)$ , yielding the result that ${v_{{\pi _{\left[ {{r_{g+1}}} \right]}}\left( {{k_{g+1}},{\ell _{g+1}}} \right)}} = {v_{{\pi _{\left[ {{r_g}} \right]}}\left( {{k_g},{\ell _g}} \right)}}$ for $g=1, \ldots ,G - 1$ , where $\left\{ {\left( {{k_g},{\ell _g}} \right),g=1, \ldots ,G} \right\}$ is a connected sequence of cells with $\left( {{k_1},{\ell _1}} \right) = \left( {i,j} \right)$ and $\left( {{k_G},{\ell _G}} \right) = \left( {k,\ell } \right)$ . One may now conclude that ${v_{{\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)}} = {v_{{\pi _{\left[ {{r_1}} \right]}}\left( {{k_1},{\ell _1}} \right)}} = {v_{{\pi _{\left[ {{r_G}} \right]}}\left( {{k_G},{\ell _G}} \right)}} = {v_{{\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)}}$ .

Recall that ${r_1},{r^*}$ were chosen arbitrarily; for given ${r^*}$ , ${\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)$ was chosen arbitrarily subject to connectedness to $\left( {i,j} \right)$ ; and $\left( {k,\ell } \right)$ is an arbitrary cell in ${\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)$ . It follows that, for any choice of ${r_1},{r^*}$ , ${v_{{\pi _{\left[ {{r_1}} \right]}}\left( {i,j} \right)}} = {v_{{\pi _{\left[ {{r^*}} \right]}}\left( {k,\ell } \right)}}$ for any cell $\left( {k,\ell } \right)$ that is connected to $\left( {i,j} \right)$ . This proves the proposition in respect of ${v_{{\pi _{\left[ r \right]}}\left( {k,\ell } \right)}}$ . The proof in respect of $v_{{\pi _{\left[ r \right]}}\left( {k,\ell } \right)}^{\left( n \right)}$ is entirely parallel.

Appendix B

Suppose that $Y$ is a compound Poisson-Gamma variate

\begin{align*}Y = \mathop \sum \limits_{i=1}^N {X_i},\end{align*}

where $N\sim Poisson\left( \kappa \right)$ and the ${X_i}$ are independently Gamma distributed with pdf

\begin{align*}\frac{{{\beta ^{ - \alpha }}}}{{{\rm{\Gamma }}\left( { - \alpha } \right)}}{x^{ - \alpha - 1}}{e^{ - \beta x}},\quad \alpha \lt 0,\ \beta \gt 0.\end{align*}

Then, $Y$ is Tweedie distributed with the Tweedie $\alpha $ parameter equal to the $\alpha $ in the Gamma pdf (Jørgensen, Reference Jørgensen1987, section 5). This distribution is parameterised by

\begin{align*}E\left[ Y \right] = \kappa \frac{{ - \alpha }}{\beta } = \mu ,\end{align*}
\begin{align*}Var\left[ Y \right] = \kappa \frac{{ - \alpha \left( { - \alpha +1} \right)}}{{{\beta ^2}}} = {\sigma ^2}\;.\end{align*}

It follows that

\begin{align*}Co{V^2}\left[ Y \right] = \frac{{1 - \alpha }}{{ - \alpha }}{\kappa ^{ - 1}} = \frac{1}{{\left( {2 - p} \right)\kappa }}\;,\end{align*}

where the relation between $p$ and $\alpha $ has been used.

In addition,

\begin{align*}CoV\left[ {{X_i}} \right] = {\left( { - \alpha } \right)^{ - 1/2}}.\end{align*}

If $N$ denotes the number of claims paid in a cell of a loss triangle, and ${X_i}$ the amount of the $i$-th such payment, then $Y$ denotes the total claim amount in the cell. A reasonable assumption for one of the longer-tailed lines of business might be that $CoV\left[ {{X_i}} \right]=2$ , in which case $\alpha = - 1/4$ and then $p=1.8$ .
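These relations can be checked numerically. The following Python sketch (illustrative only, not part of the paper) simulates a compound Poisson-Gamma cell with $CoV\left[ {{X_i}} \right]=2$ , so that $\alpha = - 1/4$ and $p=1.8$ , and compares the empirical $Co{V^2}\left[ Y \right]$ with the closed form $1/\left( {\left( {2 - p} \right)\kappa } \right)$ . The values $\kappa = 50$ and $\beta = 0.5$ are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# CoV[X_i] = (-alpha)^(-1/2) = 2 implies alpha = -1/4
alpha = -0.25
p = (alpha - 2.0) / (alpha - 1.0)      # Tweedie power parameter: equals 1.8
kappa, beta = 50.0, 0.5                # arbitrary Poisson rate and Gamma rate

# Simulate Y = sum_{i=1}^N X_i, N ~ Poisson(kappa),
# X ~ Gamma(shape = -alpha, rate = beta), i.e. scale = 1/beta
n_sims = 50_000
N = rng.poisson(kappa, size=n_sims)
Y = np.array([rng.gamma(-alpha, 1.0 / beta, size=n).sum() for n in N])

cov2_empirical = Y.var() / Y.mean() ** 2
cov2_theoretical = 1.0 / ((2.0 - p) * kappa)   # equals (1 - alpha)/(-alpha * kappa) = 0.1
```

With these parameter choices the empirical coefficient of variation squared settles close to the theoretical value of $0.1$ , confirming the identity $Co{V^2}\left[ Y \right] = \frac{{1 - \alpha }}{{ - \alpha }}{\kappa ^{ - 1}} = \frac{1}{{\left( {2 - p} \right)\kappa }}$ .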

Note that, since $\alpha $ is assumed constant over all cells, $E\left[ Y \right]$ varies over cells according to variations in $\kappa ,\beta $ , and $CoV\left[ Y \right]$ varies with $\kappa $ . Accordingly, cell-specific values of $\kappa ,\beta $ may be selected to generate any desired cell-specific values of $E\left[ Y \right],CoV\left[ Y \right]$ .
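The selection of cell-specific $\kappa ,\beta $ described above amounts to inverting the two moment equations. A minimal sketch follows, assuming $\alpha $ fixed across cells; the function name and the target values in the usage line are hypothetical.

```python
def cell_parameters(mu, cov, alpha=-0.25):
    """Illustrative inversion of the Tweedie moment equations:
    returns (kappa, beta) such that E[Y] = mu and CoV[Y] = cov
    for a fixed Tweedie alpha < 0."""
    kappa = (1.0 - alpha) / (-alpha * cov ** 2)   # from CoV^2[Y] = (1 - alpha)/(-alpha * kappa)
    beta = kappa * (-alpha) / mu                  # from E[Y] = kappa * (-alpha)/beta
    return kappa, beta

# Hypothetical cell targets: E[Y] = 1000, CoV[Y] = 0.3
kappa, beta = cell_parameters(mu=1000.0, cov=0.3, alpha=-0.25)

# Round-trip check against the stated moment formulae
mean_check = kappa * 0.25 / beta        # recovers 1000.0
cov2_check = 1.25 / (0.25 * kappa)      # recovers 0.3**2 = 0.09
```

Because the two equations are linear in $\kappa $ and ${\beta ^{ - 1}}$ once $\alpha $ is fixed, the inversion is exact and any desired cell-specific pair $\left( {E\left[ Y \right],CoV\left[ Y \right]} \right)$ can be attained.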

Appendix C Parameter specifications of synthetic data

Data set 1 (cell-wise dependence) – Proportionate contributions of shocks to cell total: triangle 1, umbrella shock (top); triangle 1, array-specific shock (bottom).

Data set 1 (cell-wise dependence) – Proportionate contributions of shocks to cell total: triangle 2, umbrella shock (top); triangle 2, array-specific shock (bottom).

Data set 2 (row-wise dependence) – Proportionate contributions of shocks to cell total: triangle 1, umbrella shock (top); triangle 1, array-specific shock (bottom).

Data set 2 (row-wise dependence) – Proportionate contributions of shocks to cell total: triangle 2, umbrella shock (top); triangle 2, array-specific shock (bottom).

Data set 3 (exotic case) – Proportionate contributions of shocks to cell total: triangle 2, umbrella shock (top); triangle 2, array-specific shock (bottom).

References

Alai, D.H., Landsman, Z. & Sherris, M. (2013). Lifetime dependence modelling using a truncated multivariate gamma distribution. Insurance: Mathematics and Economics, 52(3), 542-549.
Alai, D.H., Landsman, Z. & Sherris, M. (2016). Multivariate Tweedie lifetimes: the impact of dependence. Scandinavian Actuarial Journal, 2016(8), 692-712.
Avanzi, B., Taylor, G., Vu, P.A. & Wong, B. (2016). Stochastic loss reserving with dependence: a flexible multivariate Tweedie approach. Insurance: Mathematics and Economics, 71, 63-78.
Avanzi, B., Taylor, G., Vu, P.A. & Wong, B. (2021). On unbalanced data and common shock models in stochastic loss reserving. Annals of Actuarial Science, 15(1), 173-203.
Avanzi, B., Taylor, G. & Wong, B. (2018). Common shock models for claim arrays. ASTIN Bulletin, 48(3), 1-28.
Furman, E. & Landsman, Z. (2010). Multivariate Tweedie distributions and some related capital-at-risk analyses. Insurance: Mathematics and Economics, 46, 351-361.
Jørgensen, B. (1987). Exponential dispersion models. Journal of the Royal Statistical Society, Series B, 49(2), 127-162.
Jørgensen, B. (1997). The Theory of Dispersion Models. Chapman & Hall, London, UK.
Lindskog, F. & McNeil, A.J. (2003). Common Poisson shock models: applications to insurance and credit risk modelling. ASTIN Bulletin, 33(2), 209-238.
Meyers, G.G. (2007). The common shock model for correlated insurance losses. Variance, 1(1), 40-52.
Shi, P., Basu, S. & Meyers, G.G. (2012). A Bayesian log-normal model for multivariate loss reserving. North American Actuarial Journal, 16(1), 29-51.

Table 1. Special cases of common shock.


Table 2. Expected common shock proportions to cell expectations of synthetic data.


Table 3. Proportionate contributions of umbrella shock to cell total – Triangle 1, data set 3.


Table 4. Proportionate contributions of array-specific shock to cell total – Triangle 1, data set 3.