1. Introduction
A hypergeometric group $\Gamma (\alpha,\beta )$, associated with a pair of parameters $\alpha =(\alpha _1,\alpha _2,\ldots,\alpha _n)$, $\beta =(\beta _1,\beta _2,\ldots,\beta _n)\in{\mathbb{C}}^n$ such that $\alpha _j-\beta _k\notin \mathbb{Z}$ for any $1\le j,k\le n$, is defined as the subgroup of $\mathrm{GL}_n({\mathbb{C}})$ generated by the companion matrices $A$ and $B$ of the polynomials $f(x)=\prod _{j=1}^{n}\left (x-e^{2\pi i\alpha _j}\right )$ and $g(x)=\prod _{j=1}^{n}\left (x-e^{2\pi i\beta _j}\right )$, respectively.
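For illustration only, the following SymPy sketch (ours, not from the original article) builds the companion matrices $A$ and $B$ from rational parameter tuples; the companion-matrix convention assumed here ($1$'s on the subdiagonal, negated coefficients in the last column) is the one compatible with the relation $Be_1=e_2$ used later in Section 2.

```python
# A minimal sketch (ours): companion matrices of f(x) = prod_j (x - e^{2 pi i alpha_j})
# and g(x) = prod_j (x - e^{2 pi i beta_j}) for rational parameters; for the
# parameters considered in this article both polynomials have integer coefficients.
import sympy as sp

x = sp.symbols('x')

def companion(poly):
    """Companion matrix with 1's on the subdiagonal and the negated
    coefficients of the monic polynomial in the last column."""
    coeffs = sp.Poly(poly, x).all_coeffs()[::-1]   # [c_0, c_1, ..., c_{n-1}, 1]
    n = len(coeffs) - 1
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1                            # C e_j = e_{j+1} for j < n
    for i in range(n):
        C[i, n - 1] = -coeffs[i]
    return C

def parameter_polynomial(params):
    """Monic polynomial with roots e^{2 pi i p}, p in params (rationals)."""
    return sp.simplify(sp.expand(sp.prod(x - sp.exp(2 * sp.pi * sp.I * sp.Rational(p))
                                         for p in params)))

# Example: alpha = (0,...,0), beta = (1/2,...,1/2) gives f = (x-1)^6, g = (x+1)^6.
f = parameter_polynomial([0] * 6)
g = parameter_polynomial([sp.Rational(1, 2)] * 6)
A, B = companion(f), companion(g)
```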
Levelt (cf. [Reference Beukers and Heckman5, Theorem 3.5]) showed that there exists a basis of the (local) solution space of the hypergeometric differential equation $D(\alpha,\beta )u=0$ defined on $\mathbb{P}^1\setminus \{0,1,\infty \}$, where $D(\alpha,\beta )\;:\!=\;(\theta +\beta _1-1)\cdots (\theta +\beta _n-1)-z(\theta +\alpha _1)\cdots (\theta +\alpha _n)$ and $\theta =z\frac{d}{dz}$, with respect to which the monodromy group of the hypergeometric equation is the hypergeometric group $\Gamma (\alpha,\beta )$ defined above.
Now, we consider the hypergeometric groups $\Gamma (\alpha,\beta )$ for which the associated polynomials $f,g$ are products of cyclotomic polynomials and form a primitive pair (that is, there is no integer $k\ge 2$ for which both $f(x)$ and $g(x)$ are polynomials in $x^k$). Beukers and Heckman [Reference Beukers and Heckman5, Theorem 6.5] show that in the case when the constant terms of both polynomials are $1$ (this happens only when $n$, the common degree of the polynomials $f,g$, is an even number), the corresponding hypergeometric group $\Gamma (\alpha,\beta )$ preserves a non-degenerate symplectic form $\Omega$ and $\Gamma (\alpha,\beta )$ is contained inside the integral symplectic group $\mathrm{Sp}_\Omega (\mathbb{Z})$ as a Zariski dense subgroup.
If we denote the Zariski closure of the hypergeometric group $\Gamma (\alpha,\beta )$ inside $\mathrm{GL}_n({\mathbb{C}})$ by $\mathbf{G}$ and $\Gamma (\alpha,\beta )\subseteq \mathbf{G}(\mathbb{Z})$ , then we call the hypergeometric group $\Gamma (\alpha,\beta )$ arithmetic if the index of $\Gamma (\alpha,\beta )$ inside $\mathbf{G}(\mathbb{Z})$ is finite and thin otherwise.
The question of Sarnak [Reference Sarnak12], to determine the pairs of parameters $\alpha,\beta \in \mathbb{Q}^n$ for which the associated hypergeometric groups $\Gamma (\alpha,\beta )$ are arithmetic or thin, has been considered by many mathematicians, and some progress on answering this question can be seen in refs. [Reference Bajpai, Dona, Singh and Singh1, Reference Bajpai and Singh3, Reference Brav and Thomas6–Reference Hofmann and van Straten10, Reference Singh13–Reference Singh and Venkataramana16, Reference Venkataramana18, Reference Venkataramana19]. The case when $\alpha =(0,0,0,0)$ has drawn much interest. In this case, there are 14 symplectic hypergeometric groups with a maximally unipotent monodromy associated with the pairs of polynomials $f, g$, where $f(x)=(x-1)^4$ and $g(x)$ is a product of cyclotomic polynomials such that $g(0)=1$, $g(1)\neq 0$, and the pair $f, g$ forms a primitive pair. These 14 hypergeometric groups arise as monodromy groups of families of Calabi-Yau threefolds fibering over $\mathbb{P}^1\setminus \{0,1,\infty \}$. It is now well known that half of these $14$ hypergeometric groups are arithmetic (cf. [Reference Singh13] and [Reference Singh and Venkataramana16]) and the other half are thin (cf. [Reference Brav and Thomas6]).
When $n\geq 6$ is an even number, consider $f(x)=(x-1)^n$ and $g(x)$ as described above. One may then ask whether the hypergeometric groups associated with these pairs of polynomials exhibit the same dichotomy between arithmeticity and thinness (half of them arithmetic and the other half thin) as the $14$ hypergeometric groups of degree four with a maximally unipotent monodromy. The first case to consider is $n=6$, and in this case, a computation shows that there are 40 symplectic hypergeometric groups (cf. [Reference Bajpai, Dona, Singh and Singh1, Table A]) associated with the pairs of polynomials $f(x)=(x-1)^6$ and $g(x)$ as described above. In ref. [Reference Bajpai, Dona, Singh and Singh1], we show that $18$ of these $40$ symplectic hypergeometric groups are arithmetic.
Remark 1.1. It may be noted that, according to [Reference Bajpai, Dona and Nitsche2, Table 1], the analogous dichotomy for $n=6$ is now known not to hold.
In this article, we show the thinness of $7$ of the above $40$ hypergeometric groups.
Theorem 1.2. The hypergeometric groups $\Gamma (\alpha,\beta )$ associated with the $7$ pairs of parameters $(\alpha,\beta )$, where $\alpha =(0,0,0,0,0,0)$ and $\beta$ is any of the $7$ parameters appearing in Table 1, are thin.
We now sketch the proof of the above theorem. It follows from [Reference Benson, Campagnolo, Ranicki and Rovi4, Table 2] and the universal coefficient theorem for cohomology that $H^2(\mathrm{Sp}_6(\mathbb{Z}), \mathbb{Q})$ is isomorphic to $\mathbb{Q}$, and hence the cohomological dimension of $\mathrm{Sp}_6(\mathbb{Z})$ over $\mathbb{Q}$ is $\geq 2$. Since $\mathrm{Sp}_6(\mathbb{Z})$ has no $\mathbb{Q}$-torsion, that is, the order of every finite subgroup of $\mathrm{Sp}_6(\mathbb{Z})$ is a unit in $\mathbb{Q}$, it follows from [Reference Swan17, Theorem 9.2] that if $\Gamma$ is a finite index subgroup of $\mathrm{Sp}_6(\mathbb{Z})$, then the cohomological dimension of $\Gamma$ over $\mathbb{Q}$ is also $\ge 2$. Therefore, to prove Theorem 1.2, it suffices to show that the hypergeometric groups appearing in Theorem 1.2 are free groups or contain a free subgroup of finite index (free groups have cohomological dimension $1$).
The integers $a,d$ , and $c$ determine the entries of some conjugate of the matrix $A$ (cf. Section 2).
Following the technique of Brav and Thomas [Reference Brav and Thomas6], we use the following version of the ping-pong lemma from [Reference Lyndon and Schupp11, Proposition III.12.4] to show that the hypergeometric groups $\Gamma (\alpha,\beta )$ appearing in Theorem 1.2 are isomorphic to $\mathbb{Z}*\mathbb{Z}$.
Theorem 1.3 (Ping-Pong Lemma). Let a group $G$ be generated by two of its subgroups, $G_1$ and $G_2$ . Let $e$ be the identity element of the group $G$ . Suppose at least one of these two subgroups has order greater than $2$ and $G$ acts on a set $W$ . Suppose there are non-empty subsets $X$ and $Y$ of $W$ such that
(i) $X$ and $Y$ are disjoint subsets;
(ii) $(G_2\setminus \{e\})X\subseteq Y$;
(iii) $(G_1\setminus \{e\})Y \subseteq X$.
Then, $G=G_1* G_2$ , the free product of $G_1$ and $G_2$ .
For the hypergeometric groups appearing in Table 1, we consider the standard action of $\Gamma (\alpha,\beta )$ on ${\mathbb{R}}^6$. We apply a change of basis in ${\mathbb{R}}^6$ to make the computations simpler. Let $K$ be the change of basis matrix. Let $U=K^{-1}AK$ and $T=K^{-1}A^{-1}BK$, where $A$ and $B$ are the matrices defined at the beginning of this section. Let $R=TU$.
The hypergeometric group $\Gamma (\alpha,\beta )$ (with respect to the new basis) is generated by the cyclic subgroups $G_1=\langle T\rangle$ and $G_2=\langle R\rangle$. Since $A^{-1}B$ is a non-trivial unipotent matrix and the characteristic polynomial of $B$ has repeated roots in all $7$ cases of Table 1 (so that $B$, and hence $R$, has infinite order), $G_1$ and $G_2$ are both isomorphic to $\mathbb{Z}$.
We define the two sets $X$ and $Y$ as
where $C^+$ and $C^-$ are some cones in ${\mathbb{R}}^6$ . With this setting, we are able to verify the ping-pong conditions (i), (ii), and (iii) of Theorem 1.3 for the seven cases of Table 1 and conclude that the corresponding hypergeometric groups $\Gamma (\alpha,\beta )$ are isomorphic to $\mathbb{Z}*\mathbb{Z}$ .
Remark 1.4. Examples 1–7 appearing in Table 1 are, respectively, Examples 1, 2, 3, 4, 5, 6, 10 of [Reference Bajpai, Dona, Singh and Singh1, Table A]. While we were working to settle the thinness of the hypergeometric groups of [Reference Bajpai, Dona, Singh and Singh1, Table A] one by one, Bajpai, Dona, and Nitsche announced their article [Reference Bajpai, Dona and Nitsche2] on arXiv, in which they also show the thinness of these examples. In comparison to [Reference Bajpai, Dona and Nitsche2], the added value of the present article is:
1. It demonstrates that the specific method of Brav and Thomas works in higher dimensions with almost no changes.
2. The brute-force input needed to make the method work consists of only two real numbers appearing in $v$ (cf. Section 2). Consequently, there is a chance that one can find a pattern in the choices of those numbers that generalizes to other hypergeometric groups.
2. Notation and strategy
The hypergeometric group $\Gamma (\alpha,\beta )$ with the pair of parameters $\alpha =(0,0,0,0,0,0)$ and $\beta$ as in Table 1, with respect to the new basis (obtained by using a suitable change of basis matrix $K$, provided in Section 3), is generated by the matrices $U$ and $T$, where $U=K^{-1}AK$ and $T=K^{-1}A^{-1}BK$; we also set $R=TU$. The matrices $U$, $T$, and $R$ have the following form, and they preserve a symplectic form $J$ whose matrix form is as given below:
Observe that, in the case of the hypergeometric groups appearing in Theorem 1.2, $R^m\neq I$ for any $m\in \mathbb{Z}\setminus \{0\}$, as the characteristic polynomial of $B$ has repeated roots.
Following the idea of Brav and Thomas [Reference Brav and Thomas6], we consider first the matrix
where $d$ is defined as the $(5,2)$ entry of the matrix $U$ . It follows that $H$ fixes the subspace $V$ spanned by its first, third, and fifth column vectors. A computation shows that
where $I$ is the $6\times 6$ identity matrix.
For a $6\times 6$ unipotent matrix $F$, we define $\log\!(F)=\sum _{i=1}^{5}\frac{(-1)^{i+1}}{i}(F-I)^i$ (the series terminates since $(F-I)^6=0$).
Observe that $T^{-1}R$ and $TR^{-1}$ are unipotent matrices and define $P=\log\!(T^{-1}R)$ and $Q=\log\!(TR^{-1})$ .
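As an aside (our sketch, not taken from the paper), this truncated logarithm is straightforward to compute in exact arithmetic; the function below assumes its input is a unipotent SymPy matrix, such as $T^{-1}R$ or $TR^{-1}$ once $T$ and $R$ have been computed for a case of Table 1.

```python
# A minimal sketch (ours): the logarithm of a unipotent n x n matrix F is the
# finite sum log(F) = sum_{i=1}^{n-1} (-1)^(i+1) (F - I)^i / i, since
# (F - I)^n = 0; applied to T^{-1}R and T R^{-1} it produces P and Q.
import sympy as sp

def unipotent_log(F):
    n = F.shape[0]
    N = F - sp.eye(n)                        # nilpotent part, N**n == 0
    term, result = sp.eye(n), sp.zeros(n, n)
    for i in range(1, n):
        term = term * N                      # term == N**i
        result += sp.Rational((-1) ** (i + 1), i) * term
    return result

# Usage (assuming T and R are available as exact 6x6 SymPy matrices):
# P = unipotent_log(T.inv() * R)
# Q = unipotent_log(T * R.inv())
```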
It follows from Equation (2.1) that
and from this, it follows that $HP=QH$ , and we get
Now, we choose a vector $v$ in $V$ and define
the open cone generated by the vectors $v, Pv, P^2v, P^3v, P^4v, P^5v$ , and
the open cone generated by the vectors $v, Qv, Q^2v, Q^3v, Q^4v, Q^5v$ inside ${\mathbb{R}}^6$ . Since $HP^j=Q^jH$ (cf. Equation (2.2)) and $Hv=v$ , it follows that
Now, we are ready to define the subsets $X$ and $Y$ of ${\mathbb{R}}^6$ (with the standard action of the hypergeometric group $\Gamma (\alpha,\beta )$ ) that satisfy the hypotheses of the ping-pong lemma (cf. Theorem 1.3). Let
Let $M$ be the matrix whose column vectors are $v, Pv, P^2v, P^3v, P^4v, P^5v$ , and let $N$ be the matrix whose column vectors are $v, Qv, Q^2v, Q^3v, Q^4v, Q^5v$ . Observe that $N=HM$ .
Remark 2.1. For the vector $v$ we choose, the matrices $M$ and $N$ are both invertible, and the choice of $v$ is motivated by the analogous calculation of [Reference Brav and Thomas6], in which $v$ is the solution vector of the equation $v^tJPv={\textbf{0}}$ in $V$. In [Reference Brav and Thomas6], the vector $v$ is uniquely determined, but in our case the equation $v^tJPv={\textbf{0}}$ has infinitely many solution vectors in $V$, and not all solution vectors help in verifying the ping-pong conditions. Based on computations, we choose, for each case in Table 1 separately, a suitable vector $v$ (constructed from the solution vectors of the equation $v^tJPv={\textbf{0}}$).
The hypergeometric group $\Gamma (\alpha,\beta )$ is generated by its subgroups $G_1=\langle T\rangle$ and $G_2=\langle R\rangle$. We show that $G_1\cap G_2=\{I\}$. Observe that $R^m=T^n$, for some $m,n\in \mathbb{Z}$, implies that $A^{-1}B^mA=C^n$, where $C=A^{-1}B$ fixes the standard basis vectors $e_1,e_2,\ldots,e_5$ in ${\mathbb{R}}^6$, and we get $A^{-1}B^mAe_j=C^ne_j=e_j$ for all $1\le j\le 5$. This implies that $B^me_j=e_j$ for all $2\le j\le 6$. Also, $Be_1=e_2=B^me_2=B^m Be_1$ implies that $B^me_1=e_1$. It now follows that $B^m=I$. Since the characteristic polynomial of $B$ (which, for a companion matrix, equals its minimal polynomial) has repeated roots for each of the parameters appearing in Theorem 1.2, $B$ is not diagonalizable and hence has infinite order; therefore $m=0$, and then $T^n=I$ with $T$ a non-trivial unipotent matrix forces $n=0$. Hence, $G_1\cap G_2=\{I\}$.
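For illustration, the individual facts used in this argument (that $C=A^{-1}B$ fixes $e_1,\ldots,e_5$, that $Be_1=e_2$, and that the characteristic polynomial of $B$ has repeated roots) can be checked in exact arithmetic; below is our own sketch for Case 1 of Table 1, where $f(x)=(x-1)^6$ and $g(x)=(x+1)^6$.

```python
# A sketch (ours) verifying, for Case 1, the ingredients of the argument that
# G_1 and G_2 intersect trivially; A and B are the companion matrices of
# f(x) = (x - 1)^6 and g(x) = (x + 1)^6.
import sympy as sp

x = sp.symbols('x')

def companion(poly):
    coeffs = sp.Poly(poly, x).all_coeffs()[::-1]   # [c_0, ..., c_{n-1}, 1]
    n = len(coeffs) - 1
    Cmat = sp.zeros(n, n)
    for i in range(1, n):
        Cmat[i, i - 1] = 1
    for i in range(n):
        Cmat[i, n - 1] = -coeffs[i]
    return Cmat

A = companion(sp.expand((x - 1)**6))
B = companion(sp.expand((x + 1)**6))
C = A.inv() * B

e = [sp.eye(6).col(j) for j in range(6)]
assert all(C * e[j] == e[j] for j in range(5))                   # C fixes e_1, ..., e_5
assert B * e[0] == e[1]                                          # B e_1 = e_2
assert (C - sp.eye(6))**6 == sp.zeros(6, 6) and C != sp.eye(6)   # C is non-trivial unipotent
assert sp.factor(B.charpoly(x).as_expr()) == (x + 1)**6          # char. poly. has repeated roots
```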
Now, we need to verify the following conditions of the ping-pong lemma.
(A) $X$ and $Y$ are disjoint subsets.
(B) $(G_2\setminus \{I\})X\subseteq Y$.
(C) $(G_1\setminus \{I\})Y \subseteq X$.
We will verify condition $(A)$ by verifying the following statements.
(A1) $R^jC^+$ and $R^jC^-$ are disjoint from $\pm C^+$ for all $j \in \mathbb{Z}\setminus \{0\}$.
(A2) $R^jC^+$ and $R^jC^-$ are also disjoint from $\pm C^-$ for all $j \in \mathbb{Z}\setminus \{0\}$.
Condition (B) follows from the construction of the subset $Y$ . We will verify condition $(C)$ by verifying the following statements.
(C1) $T^{-1}C^+ \subseteq C^+$.
(C2) $TC^- \subseteq C^-$.
(C3) $T^{-1}R^j(C^+ \cup C^-) \subseteq \pm C^+$ for all $j \in \mathbb{Z}\setminus \{0\}$.
(C4) $TR^j(C^+ \cup C^-) \subseteq \pm C^-$ for all $j \in \mathbb{Z}\setminus \{0\}$.
Now, we have the following convention.
Definition 2.2. We call a row vector $(a_{i1}, a_{i2},\ldots, a_{in})$ of an $n\times n$ matrix $(a_{ij})$ non-negative (respectively, non-positive) if $a_{ij}\geq 0$ (respectively, $a_{ij}\leq 0$ ) for all $1\le j\le n$ . Also, we call an $n\times n$ matrix $(a_{ij})$ non-negative (respectively, non-positive) if $a_{ij}\geq 0$ (respectively, $a_{ij}\leq 0$ ) for all $1\le i, j\le n$ .
Observe that $R^jC^+\cap C^+$ is non-empty if and only if there exist column vectors $v_1,v_2$ having all of their coordinates positive such that $R^jMv_1=Mv_2$, that is, $M^{-1}R^jMv_1=v_2$. Similarly, $R^jC^-\cap C^+$ is non-empty if and only if there exist column vectors $v_1,v_2$ having all of their coordinates positive such that $R^jNv_1=Mv_2$, that is, $M^{-1}R^jNv_1=v_2$. Therefore, to verify statement (A1), it is sufficient to show that the matrices $M^{-1}R^jM$ and $M^{-1}R^jN$, for all $j \in \mathbb{Z}\setminus \{0\}$, have some non-negative and some non-positive rows (as when this is the case, $M^{-1}R^jM$ and $M^{-1}R^jN$ map a column vector $v_1$ having all of its coordinates positive to a column vector $v_2$ having some positive and some negative coordinates).
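These row-sign tests are easy to mechanize; here is a small helper sketch of ours (not from the paper), which implicitly assumes, as in the verifications below, that the relevant rows are non-zero.

```python
# Helpers (our sketch) for the sign tests behind (A1)/(A2) and (C1)-(C4):
# a matrix passes the (A1)-type test if it has at least one non-negative and
# at least one non-positive row, and the (C3)-type test if all of its entries
# are non-negative or all are non-positive.
import sympy as sp

def has_nonneg_and_nonpos_rows(Mat):
    rows = [Mat.row(i) for i in range(Mat.rows)]
    has_nonneg = any(all(entry >= 0 for entry in row) for row in rows)
    has_nonpos = any(all(entry <= 0 for entry in row) for row in rows)
    return has_nonneg and has_nonpos

def is_sign_definite(Mat):
    entries = list(Mat)
    return all(entry >= 0 for entry in entries) or all(entry <= 0 for entry in entries)
```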
Since $N=HM$ , $H^2=I$ and $HRH=R^{-1}$ (cf. Equation (2.1)), we get
It follows from Equations (2.5) and (2.6) that if the matrices $M^{-1}R^jM$ and $M^{-1}R^jN$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , have some non-negative and some non-positive rows, then the matrices $N^{-1}R^jN$ and $N^{-1}R^jM$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , also have some non-negative and some non-positive rows. This shows that, for all $j\in \mathbb{Z}\setminus \{0\}$ , $R^jC^+$ and $R^jC^-$ both are disjoint from $\pm C^-$ . Hence, the sufficient criterion for (A1) above also proves (A2).
The condition (C) is divided into four parts, (C1) to (C4). Observe that $T^{-1}C^+\subseteq C^+$ if for any column vector $v_1$ having all of its coordinates positive, there exists a column vector $v_2$ having all of its coordinates positive such that $T^{-1}Mv_1=Mv_2$. Thus, (C1) is equivalent to showing that the matrix $M^{-1}T^{-1}M$ is non-negative (as when this is the case, $M^{-1}T^{-1}M$ maps a column vector $v_1$ having all of its coordinates positive to a column vector $v_2$ having all of its coordinates positive).
Similarly, $TC^-\subseteq C^-$ if for any column vector $v_1$ having all of its coordinates positive, there exists a column vector $v_2$ having all of its coordinates positive such that $TNv_1=Nv_2$. Thus, (C2) is equivalent to showing that the matrix $N^{-1}TN$ is non-negative. Again, since $N=HM$, $H^2=I$ and $HTH=T^{-1}$ (cf. Equation (2.1)), we get
and hence the sufficient criterion for (C1) also proves (C2).
By arguments similar to those above, to verify statement (C3) it is enough to show that both the matrices $M^{-1}T^{-1}R^jM$ and $M^{-1}T^{-1}R^jN$, for all $j \in \mathbb{Z}\setminus \{0\}$, are either non-negative or non-positive.
To show (C4), it is enough to show that both the matrices $N^{-1}TR^jM$ and $N^{-1}TR^jN$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , are either non-negative or non-positive. Again, since $N=HM$ , $H^2=I$ , $HRH=R^{-1}$ , and $HTH=T^{-1}$ (cf. Equation (2.1)), we get
and hence the sufficient criterion for (C3) also proves (C4).
In conclusion, to apply the ping-pong lemma to the groups in Table 1 with $X$ and $Y$ obtained as above from a given $v\in{\mathbb{R}}^6$ , it suffices to check that $M$ has full rank, that $M^{-1}R^jM$ and $M^{-1}R^jN$ have both non-positive and non-negative rows for all $j\neq 0$ (A1), that $M^{-1}T^{-1}M$ is non-negative (C1), and that $M^{-1}T^{-1}R^jM$ and $M^{-1}T^{-1}R^jN$ are non-negative or non-positive for all $j\neq 0$ (C3).
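For any finite range of exponents $j$, the checks collected above can be automated; the sketch below is ours and assumes that the matrices $T$, $R$ and the logarithms $P$, $Q$, together with the column vector $v$, have already been computed (as exact SymPy objects) for one of the cases of Table 1. It only tests $1\le |j|\le j_{\max}$; the remaining exponents are handled by the Archimedean argument described in Section 3.

```python
# A sketch (ours) of the finite-range verification of (A1), (C1), and (C3);
# the inputs T, R, P, Q, v are assumed to be exact SymPy matrices / a column
# vector for one case of Table 1.
import sympy as sp

def cone_matrix(L, v):
    """Matrix with columns v, Lv, ..., L^5 v (M for L = P, N for L = Q)."""
    return sp.Matrix.hstack(*[L**i * v for i in range(6)])

def mixed_rows(Mat):
    rows = [Mat.row(i) for i in range(Mat.rows)]
    return (any(all(e >= 0 for e in r) for r in rows)
            and any(all(e <= 0 for e in r) for r in rows))

def sign_definite(Mat):
    return all(e >= 0 for e in Mat) or all(e <= 0 for e in Mat)

def finite_range_check(T, R, P, Q, v, jmax=10):
    M, N = cone_matrix(P, v), cone_matrix(Q, v)
    assert M.rank() == 6                                  # M has full rank
    Minv, Tinv = M.inv(), T.inv()
    assert all(e >= 0 for e in Minv * Tinv * M)           # (C1)
    for j in [k for k in range(-jmax, jmax + 1) if k != 0]:
        Rj = R**j
        assert mixed_rows(Minv * Rj * M)                  # (A1), first family
        assert mixed_rows(Minv * Rj * N)                  # (A1), second family
        assert sign_definite(Minv * Tinv * Rj * M)        # (C3), first family
        assert sign_definite(Minv * Tinv * Rj * N)        # (C3), second family
    return True
```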
3. Proof of the freeness of the hypergeometric groups of Table 1
As explained above in Section 2, to show that the hypergeometric groups associated with the parameters appearing in Table 1 are free groups (using the ping-pong lemma), we need to verify only the statements (A1), (C1), and (C3) of Section 2 for these groups.
We write below the detailed verification for Example 1, and also for Example 2, since the latter requires a somewhat different explanation. For the other examples, we provide the data needed to verify the ping-pong conditions using the methods of Examples 1 and 2. The computations are an adaptation of the methods in ref. [Reference Brav and Thomas6] to our setting. The main new ingredient is the choice of $v$. See Section 2 for the notation.
3.1. Case 1
$\alpha =(0,0,0,0,0,0), \quad \beta =(1/2,1/2,1/2,1/2,1/2,1/2)$ .
In this case, the corresponding polynomials are $f(x)=(x-1)^6$ and $g(x)=(x+1)^6$, and the corresponding hypergeometric group $\Gamma (\alpha,\beta )$ is generated by the companion matrices $A, B$ of the polynomials $f,g$. We first consider the following change of basis matrix
Then, $U=K^{-1}AK$ , $T=K^{-1}A^{-1}BK$ and $R=TU$ have the form described in Section 2 with $a=64$ , $d=48$ and $c=12$ .
Guided by computer calculations, we pick $v=\left (-\frac{1}{10}, 0,1, 0, -\frac{1162}{225}, 0\right )$ inside the fixed subspace $V$ of the matrix $H$. Note that in this case, $-R$ is unipotent, and so we can define $Z=\log\!(\!-R)$. Then, as required, the resulting matrix $M$ has full rank, and, since $-R=\exp\!(Z)$, we have $R^j=(-1)^j\exp\!(jZ)=(-1)^j\sum _{i=0}^{5}\frac{j^iZ^i}{i!}$ (Equation (3.1)).
To verify (A1), we compute the matrices $M^{-1}R^jM$ and $M^{-1}R^jN$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , and show that they have some non-negative and some non-positive rows.
Denote $A_i=M^{-1}\frac{Z^i}{i!}M$, for $i=1,2,3,4,5$. Then, $M^{-1}R^jM=(-1)^j\left (I+jA_1+j^2A_2+j^3A_3+j^4A_4+j^5A_5\right )$.
By computation, we get
Notice that $A_5$ has some non-negative and some non-positive rows. Now, we use the Archimedean property of the real numbers to show that there exists a positive integer $n$ such that the entries of the first two rows of the matrix $M^{-1}R^jM$, for all $j \in \mathbb{Z}\setminus \{0\}$ with $|j|\geq n$, have the same sign as that of the corresponding entries of the matrix $\mathrm{sign}(j)(\!-\!1)^{|j|}A_5$.
A sufficient $n$ can be computed after writing $M^{-1}R^jM$ in the nested form $(-1)^j\left (I+j\left (A_1+j\left (A_2+j\left (A_3+j\left (A_4+jA_5\right )\right )\right )\right )\right )$.
By using the Archimedean property of the real numbers on the entries of the matrices $A_5$ and $A_4$, we get a positive integer $k_5$ such that the entries of the matrix $A_4+jA_5$ have the same sign as that of the corresponding entries of the matrix $A_5$ for all $j\geq k_5$. Again using the Archimedean property of the real numbers on the entries of the matrices $A_4+k_5A_5$ and $A_3$, we get a positive integer $k_4$ such that the entries of the matrix $A_3+j(A_4+k_5A_5)$ have the same sign as that of the corresponding entries of the matrix $A_4+k_5A_5$ for all $j\geq k_4$. Repeating this process, we get the positive integers $k_5,k_4,k_3,k_2,k_1$ such that for all $j\ge n=\max \{k_5,k_4,k_3,k_2,k_1\}$ the entries of the matrix $(\!-\!1)^j\left (I+j\left (A_1+j\left (A_2+j\left (A_3+j\left (A_4+jA_5\right )\right )\right )\right )\right )$ have the same sign as that of the corresponding entries of the matrix $(\!-\!1)^jA_5$.
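The bookkeeping for these constants can be mechanized; the sketch below is ours and mirrors the nested argument just described, under the (implicit) assumption that the leading matrix has non-zero entries at the positions being compared.

```python
# A sketch (ours) of the Archimedean bookkeeping: given the coefficient
# matrices A1, ..., A5 of M^{-1} R^j M = (-1)^j (I + j*A1 + ... + j^5*A5),
# it returns an n such that, for all j >= n, the bracket has the same signs
# as A5 at every position where the leading matrix is non-zero.
import sympy as sp

def sign_bound(lower, lead):
    """Smallest k0 >= 1 such that lower + k*lead has the same strict sign as
    lead, at every position where lead is non-zero, for all k >= k0."""
    k0 = 1
    for i in range(lead.rows):
        for j in range(lead.cols):
            if lead[i, j] != 0:
                # need k > -lower[i, j] / lead[i, j]
                k0 = max(k0, int(sp.floor(-lower[i, j] / lead[i, j])) + 1)
    return k0

def archimedean_bound(A1, A2, A3, A4, A5):
    ks, lead = [], A5
    for lower in (A4, A3, A2, A1, sp.eye(A5.rows)):
        k = sign_bound(lower, lead)     # k5, k4, k3, k2, k1 in the text
        ks.append(k)
        lead = lower + k * lead
    return max(ks)                      # a sufficient n
```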
In the case when $j$ is a negative integer, we take $j=-l$, where $l$ is a positive integer, and write the expression for the matrix $M^{-1}R^jM$ as $(-1)^l\left (I-l\left (A_1-l\left (A_2-l\left (A_3-l\left (A_4-lA_5\right )\right )\right )\right )\right )$,
and use the Archimedean property of the real numbers, as described above, to get a positive integer $m$ so that the matrix $M^{-1}R^jM$, for all integers $j\le -m$, has some non-negative and some non-positive rows. In this case, we get that $\max \{n,m\}$ is $5$. So, it follows that for $|j|\ge 5$, the matrix $M^{-1}R^jM$ has some non-negative and some non-positive rows.
For the remaining values of $j \in \mathbb{Z}\setminus \{0\}$, that is, for $j\in \mathbb{Z}$ with $1\leq |j|\leq 4$, we compute the matrices $M^{-1}R^jM$ individually and verify that they also have some non-negative and some non-positive rows.
Similarly, we verify that the matrix $M^{-1}R^jN$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , has some non-negative and some non-positive rows.
To verify (C1), we need to show that the matrix $M^{-1}T^{-1}M$ is non-negative. By computation we get
To verify (C3), it is sufficient to show that both the matrices $M^{-1}T^{-1}R^jM$ and $M^{-1}T^{-1}R^jN$ , for all $j \in \mathbb{Z}\setminus \{0\}$ , are either non-negative or non-positive.
Using Equation (3.1) for the expression of $R^j$ , we get
where $D_i=M^{-1}\frac{T^{-1}Z^i}{i!}M$ and $E_i=M^{-1}\frac{T^{-1}Z^i}{i!}N$ , for $i=1,2,3,4,5$ .
By computation, we get
Since, in each of the matrices $D_5$ and $E_5$, all entries have the same sign, by using the Archimedean property of the real numbers as above, we find that for $|j|\geq 7$ the matrices $M^{-1}T^{-1}R^jM$ and $M^{-1}T^{-1}R^jN$ are either non-negative or non-positive. A direct computation shows that the matrices $M^{-1}T^{-1}R^jM$ and $M^{-1}T^{-1}R^jN$ are also either non-negative or non-positive for $1\le |j|\le 6$.
This completes the proof of the freeness of the hypergeometric group.
3.2. Case 2
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/2,1/2,1/3,2/3)$ .
We consider the change of basis matrix
In this case, $a=48$ , $d=40$ , $c=11$ and $v=\left (-\frac{1}{10},0,1,0,-\frac{631}{150},0\right )$ . This time neither $R$ nor $-R$ is unipotent. But $R^6$ is unipotent, so let $Z=\log\!(R^6)$ . Note that $Z^4=0$ . Then,
We write $j=6n+k$ for $n \in \mathbb{Z}$ and $0\leq k \leq 5$. If we denote $A_{i,k}=M^{-1}\frac{R^kZ^i}{i!}M$ and $B_{i,k}=M^{-1}\frac{R^kZ^i}{i!}N$ for $i=1,2,3$, then
By denoting $D_{i,k}=M^{-1}\frac{T^{-1}R^kZ^i}{i!}M$ and $E_{i,k}=M^{-1}\frac{T^{-1}R^kZ^i}{i!}N$ for $i=1,2,3$ and using the above Equation (3.8), we get
For each $k$ , we can find a $q\in \mathbb{N}$ such that the signs of the entries of $M^{-1}R^{6n+k}M$ , $M^{-1}R^{6n+k}N$ , $M^{-1}T^{-1}R^{6n+k}M$ and $M^{-1}T^{-1}R^{6n+k}N$ are the same as that of the entries of $A_{3,k}, B_{3,k}, C_{3,k}$ and $D_{3,k}$ for all $|n| \gt q$ , and we verify the ping-pong conditions as in Case 1.
3.3. Case 3
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/2,1/2,1/4,3/4)$ .
We consider the change of basis matrix
In this case, $a=32$ , $d=32$ , $c=10$ , $v=\left (-\frac{1}{11},0,1,0,-{\frac{277}{75}},0\right )$ and $Z=\log\!(R^4)$ .
3.4. Case 4
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/2,1/2,1/6,5/6)$ .
We consider the change of basis matrix
In this case $a=16$ , $d=24$ , $c=9$ , $v=\left (-\frac{1}{10},0,1,0,-{\frac{3041}{810}},0\right )$ and $Z=\log\!(\!-R^3)$ .
3.5. Case 5
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/3,2/3,1/3,2/3)$ .
We consider the change of basis matrix
In this case, $a=36$ , $d=33$ , $c=10$ , $v=\left (-\frac{1}{6},0,1,0,-{\frac{207}{40}},0\right )$ and $Z=\log\!(R^6)$ .
3.6. Case 6
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/3,2/3,1/4,3/4)$ .
We consider the change of basis matrix
In this case, $a=24$ , $d=26$ , $c=9$ , $v=\left (-\frac{1}{6},0,1,0,-{\frac{472}{105}},0\right )$ and $Z=\log\!(R^{12})$ .
3.7. Case 7
$\alpha =(0,0,0,0,0,0),\quad \beta =(1/2,1/2,1/5,2/5,3/5,4/5)$ .
We consider the change of basis matrix
In this case, $a=20$ , $d=25$ , $c=9$ , $v=\left (-\frac{1}{10},0,1,0,-{\frac{221}{60}},0\right )$ and $Z=\log\!(R^{10})$ .
Acknowledgements
We thank the referee for their invaluable comments and suggestions, which substantially improved the presentation of the article. The work of the first-named author is supported by the MATRICS grant no. MTR/2021/000368. This paper is part of the Ph.D. thesis of the second-named author, who would like to thank IIT Bombay for providing support.
Competing interests
The authors declare none.