
Bounded Littlewood identity related to alternating sign matrices

Published online by Cambridge University Press:  13 December 2024

Ilse Fischer*
Affiliation:
Fakultät für Mathematik, Universität Wien, Oskar-Morgenstern-Platz 1, 1090 Vienna, Austria

Abstract

An identity that is reminiscent of the Littlewood identity plays a fundamental role in recent proofs of the facts that alternating sign triangles are equinumerous with totally symmetric self-complementary plane partitions and that alternating sign trapezoids are equinumerous with holey cyclically symmetric lozenge tilings of a hexagon. We establish a bounded version of a generalization of this identity. Further, we provide combinatorial interpretations of both sides of the identity. The ultimate goal would be to construct a combinatorial proof of this identity (possibly via an appropriate variant of the Robinson-Schensted-Knuth correspondence) and its unbounded version, as this would improve the understanding of the mysterious relation between alternating sign trapezoids and plane partition objects.

Type
Discrete Mathematics
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Littlewood’s identity reads as

(1.1) $$ \begin{align} \sum_{\lambda} s_{\lambda}(X_1,\ldots,X_n) = \prod_{i=1}^{n} \frac{1}{1-X_i} \prod_{1 \le i < j \le n} \frac{1}{1-X_i X_j}, \end{align} $$

where $s_{\lambda}(X_1,\ldots,X_n)$ denotes the Schur polynomial associated with the partition $\lambda$ and the sum is over all partitions $\lambda$. In fact, the identity was already known to Schur (see [26, p. 163] or [27, p. 456]) and was written down by Littlewood in [21, p. 238]. This identity has a beautiful combinatorial proof that is based on the Robinson-Schensted-Knuth correspondence and exploits its symmetry; see Appendix A and, for example, [28] for details.
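Identities like (1.1) are easy to test by machine: since $s_{\lambda}$ is homogeneous of degree $|\lambda|$, truncating the sum at $\lambda_1 \le N$ captures every monomial of total degree at most N. The following sympy sketch (an illustration of ours, not part of the argument; the bound $N$ and the case $n=2$ are ad hoc choices) verifies (1.1) in this truncated sense.

```python
# Sanity check of (1.1) for n = 2: sum Schur polynomials over partitions
# with lambda_1 <= N (partitions with > 2 parts vanish in 2 variables)
# and compare with the product side modulo total degree > N.
from sympy import symbols, expand, Poly, cancel

x, y = symbols('x y')
N = 8

def schur2(l1, l2):
    # bialternant formula for n = 2: det(X_i^(lambda_j + n - j)) / (x - y)
    return cancel((x**(l1 + 1) * y**l2 - y**(l1 + 1) * x**l2) / (x - y))

lhs = sum(schur2(l1, l2) for l1 in range(N + 1) for l2 in range(l1 + 1))
den = (1 - x) * (1 - y) * (1 - x * y)  # denominator of the right-hand side
P = Poly(expand(lhs * den), x, y)
# the part of total degree <= N must be exactly 1
low = sum(c * x**i * y**j for (i, j), c in P.terms() if i + j <= N)
assert expand(low) == 1
```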

In recent papers [10, 11, 17], where ‘alternating sign matrix objects’ (namely, alternating sign triangles and alternating sign trapezoids) have been connected to certain ‘plane partition objects’ (namely, totally symmetric self-complementary plane partitions and column-strict shifted plane partitions of a fixed class, which generalize the better-known descending plane partitions), a very similar identity played the crucial role in establishing this still mysterious [12] connection. None of these proofs is combinatorial, and all of them involve rather complicated calculations, so studying the combinatorics of our Littlewood-type identity is very likely to lead to a better understanding of the combinatorics of this relation.

In order to formulate the identity, we rewrite (1.1) using the bialternant formula for the Schur polynomial [28, 7.15.1]

$$ \begin{align*} s_{(\lambda_1,\ldots,\lambda_n)}(X_1,\ldots,X_n) = \frac{\det_{1 \le i,j \le n} \left( X_i^{\lambda_j+n-j} \right)} {\prod_{1 \le i < j \le n} (X_i-X_j)} = \frac{ \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{i=1}^n X_i^{\lambda_i+n-i} \right] } {\prod_{1 \le i < j \le n} (X_i-X_j)}, \end{align*} $$

allowing zeros at the end of $(\lambda_1,\ldots,\lambda_n)$, with

$$ \begin{align*} \mathbf{ASym}_{X_1,\ldots,X_n} f(X_1,\ldots,X_n) = \sum_{\sigma \in {\mathcal S}_n} \operatorname{\mathrm{sgn}} \sigma \cdot f(X_{\sigma(1)},\ldots,X_{\sigma(n)})\end{align*} $$

as follows:

$$ \begin{align*}\frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \sum_{0 \le k_1 < k_2 < \ldots < k_n} X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n} \right]}{\prod_{1 \le i < j \le n} (X_j-X_i)} = \prod_{i=1}^{n} \frac{1}{1-X_i} \prod_{1 \le i < j \le n} \frac{1}{1-X_i X_j}. \end{align*} $$
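The operator $\mathbf{ASym}$ is straightforward to implement; the following small sympy helper (our own illustration) antisymmetrizes a function and recovers a Schur polynomial via the bialternant quotient.

```python
# ASym_{X_1,...,X_n} f = sum over permutations sigma of sgn(sigma) * f(X_sigma).
from itertools import permutations
from sympy import symbols, cancel
from sympy.combinatorics import Permutation

def asym(f, xs):
    return sum(Permutation(list(p)).signature() * f(*[xs[i] for i in p])
               for p in permutations(range(len(xs))))

X1, X2, X3 = symbols('X1 X2 X3')
# s_{(3,1,0)}(X1,X2,X3) = ASym[X1^5 X2^2 X3^0] / prod_{i<j} (X_i - X_j)
num = asym(lambda a, b, c: a**5 * b**2, [X1, X2, X3])
den = (X1 - X2) * (X1 - X3) * (X2 - X3)
print(cancel(num / den))  # a polynomial: the Schur polynomial s_{(3,1,0)}
```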

Note that we have permuted the variables $X_1,\ldots,X_n$ in the numerator and denominator compared to the above definition of Schur polynomials, as we are using the transformation $k_i = \lambda_{n+1-i} + i-1$. We have used the following identity in [10, 11]:

(1.2) $$ \begin{align} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (1+X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n} \right]}{\prod_{1 \le i < j \le n} (X_j-X_i)} \nonumber\\[5pt] = \prod_{i=1}^{n} \frac{1}{1-X_i} \prod_{1 \le i < j \le n} \frac{1+X_i + X_j}{1-X_i X_j}. \end{align} $$

In those papers, the formula was proved by induction with respect to n. In [17], an additional parameter has been introduced, which has to be set to $1$ to obtain (1.2). The formula reads as

(1.3) $$ \begin{align} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[\prod_{1 \le i < j \le n} (Q+(Q-1) X_i + X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} \prod_{i=1}^n \left( \frac{X_i(1+X_i)}{Q+X_i} \right)^{k_i} \right]}{\prod_{1 \le i < j \le n}(X_j-X_i)} \nonumber\\[5pt]= \prod_{i=1}^n \frac{Q+X_i}{Q-X_i^2} \prod_{1 \le i < j \le n} \frac{Q(1+X_i)(1+X_j)- X_i X_j}{(Q-X_i X_j)}. \end{align} $$

While (1.2) does not generalize (1.1), (1.3) does generalize the classical Littlewood identity: after setting $Q=2$, we can pull out $\prod_{1 \le i < j \le n} (Q+(Q-1) X_i + X_j + X_i X_j)$ since it is then symmetric in $X_1,\ldots,X_n$, and (1.1) is obtained by an appropriate change of variables. Among other things, we will see in this paper that we can also introduce another parameter in (1.2) as follows:

(1.4) $$ \begin{align} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (1+ w X_i + X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n} \right]}{\prod_{1 \le i < j \le n} (X_j-X_i)} \nonumber\\[5pt] = \prod_{i=1}^{n} \frac{1}{1-X_i} \prod_{1 \le i < j \le n} \frac{1+X_i + X_j + w X_i X_j}{1-X_i X_j}. \end{align} $$
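Identity (1.4) can likewise be tested mechanically for small n. In the sketch below (our own check, with the truncation bound $N$ chosen ad hoc), the inner sum is truncated at $k_2 \le N$; the truncation only affects monomials of total degree at least $N$, so the low-degree part of the difference must vanish.

```python
# Check of (1.4) for n = 2, with w symbolic, modulo total degree >= N.
from sympy import symbols, expand, Poly, cancel

X1, X2, w = symbols('X1 X2 w')
N = 8

inner = sum(X1**k1 * X2**k2 for k2 in range(N + 1) for k1 in range(k2))
f = (1 + w * X1 + X2 + X1 * X2) * inner
# ASym over two variables: f(X1, X2) - f(X2, X1)
lhs = cancel((f - f.subs({X1: X2, X2: X1}, simultaneous=True)) / (X2 - X1))

den = (1 - X1) * (1 - X2) * (1 - X1 * X2)
num = 1 + X1 + X2 + w * X1 * X2
diff = Poly(expand(lhs * den - num), X1, X2)
assert all(i + j >= N for (i, j), c in diff.terms() if c != 0)
```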

In fact, there is even the following common generalization of (1.3) and (1.4):

$$ \begin{align*} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (Q+w X_i + X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} \prod_{i=1}^n \left( \frac{X_i(1+X_i)}{Q+X_i} \right)^{k_i} \right]} {\prod_{1 \le i < j \le n} (X_j-X_i)}\\[7pt] = \prod_{i=1}^n \frac{Q+X_i}{Q-X_i^2} \prod_{1 \le i < j \le n} \frac{Q + Q X_i + Q X_j + w X_i X_j}{Q-X_i X_j}. \end{align*} $$

The latter identity is equivalent to

(1.5) $$ \begin{align} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (q+ w X_i + X_j + qX_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} \prod_{i=1}^n \left( \frac{X_i(1+ q X_i)}{q+X_i} \right)^{k_i} \right]} {\prod_{1 \le i < j \le n} (X_j-X_i)} \nonumber\\[7pt]= \prod_{i=1}^n \frac{1+ q^{-1} X_i}{1-X_i^2} \prod_{1 \le i < j \le n} \frac{1+ q X_i + q X_j + w X_i X_j}{1-X_i X_j}, \end{align} $$

when performing the replacements $Q \to q^2$ and $X_i \to q X_i$ for $i=1,2,\ldots,n$; this is the version of the identity that we consider in this paper. For what follows, it is crucial that the right-hand side of (1.5) can be written as

$$ \begin{align*} \frac{\det_{1 \le i, j \le n} \left( q^{-j} X_i^{n-1} (q+X_i)(q X_i^{-1} + w)^{n-j} (1+ q X_i)^{j-1} \right)}{\prod_{1 \le i \le j \le n} (1-X_i X_j) \prod_{1 \le i < j \le n} (X_j-X_i)}, \end{align*} $$

which follows from the Vandermonde determinant evaluation.
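For instance, for $n=2$ this rewriting can be confirmed directly (a sympy sketch of ours; the case $n=2$ is chosen for brevity):

```python
# n = 2 instance: determinant expression above == right-hand side of (1.5).
from sympy import symbols, Matrix, cancel

q, w, X1, X2 = symbols('q w X1 X2')
X = [X1, X2]
n = 2

M = Matrix(n, n, lambda i, j: q**(-(j + 1)) * X[i]**(n - 1) * (q + X[i])
           * (q / X[i] + w)**(n - (j + 1)) * (1 + q * X[i])**j)
den = (1 - X1**2) * (1 - X1 * X2) * (1 - X2**2) * (X2 - X1)
rhs = ((1 + X1 / q) * (1 + X2 / q) / ((1 - X1**2) * (1 - X2**2))
       * (1 + q * X1 + q * X2 + w * X1 * X2) / (1 - X1 * X2))
assert cancel(M.det() / den - rhs) == 0
```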

The main purpose of this paper is to derive bounded versions of these identities and to provide combinatorial interpretations of the identities that would allow us to approach them with a combinatorial proof, possibly by a variant of the Robinson-Schensted-Knuth correspondence that mimics the proof for the classical Littlewood identity. By a bounded version we mean that the sums $\sum_{0 \le k_1 < k_2 < \ldots < k_n}$ are restricted to, say, $\sum_{0 \le k_1 < k_2 < \ldots < k_n \le m}$. Macdonald [22] has provided such a bounded version of the classical identity (1.1) – namely,

(1.6) $$ \begin{align} \sum_{\lambda \subseteq (m^n)} s_{\lambda}(X_1,\ldots,X_n) &= \sum_{0 \le k_1 \le k_2 \le \ldots \le k_n \le m} s_{(k_n,k_{n-1},\ldots,k_1)}(X_1,\ldots,X_n) \nonumber\\[7pt] &= \frac{ \det_{1 \le i, j \le n} \left( X_i^{j-1} - X_i^{m+2n-j} \right) }{\prod_{i=1}^n (1-X_i) \prod_{1 \le i < j \le n} (X_j-X_i)(1-X_i X_j)}, \end{align} $$

which he used to prove MacMahon’s conjecture. Very recent work on bounded Littlewood identities can be found in [24].
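Since the sum in (1.6) is finite, Macdonald’s identity can be checked verbatim for small parameters; the following sketch (ours, for $n=2$ and small m) does so.

```python
# Verify (1.6) for n = 2 and m = 0, ..., 4.
from sympy import symbols, Matrix, cancel, expand

X1, X2 = symbols('X1 X2')
X = [X1, X2]

def schur2(l1, l2):  # bialternant formula for n = 2
    return cancel((X1**(l1 + 1) * X2**l2 - X2**(l1 + 1) * X1**l2) / (X1 - X2))

for m in range(5):
    lhs = sum(schur2(k2, k1) for k2 in range(m + 1) for k1 in range(k2 + 1))
    # entries X_i^(j-1) - X_i^(m+2n-j) with n = 2, written 0-based in j
    M = Matrix(2, 2, lambda i, j: X[i]**j - X[i]**(m + 3 - j))
    rhs = cancel(M.det() / ((1 - X1) * (1 - X2) * (X2 - X1) * (1 - X1 * X2)))
    assert expand(lhs - rhs) == 0
```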

More specifically, we will prove the following.

Theorem 1.1. For $n \ge 1$ , we have

(1.7) $$ \begin{align} \frac{1}{\prod\limits_{1 \le i < j \le n} (X_j-X_i)} \mathbf{ASym}_{X_1,\ldots,X_n} \bigg[ \prod_{1 \le i < j \le n} & (q+ w X_i + X_j + q X_i X_j) \nonumber\\[5pt] \qquad\times \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} \left( \frac{X_1(1+ q X_1)}{q+X_1} \right)^{k_1} & \left( \frac{X_2(1+ q X_2)}{q+X_2} \right)^{k_2} \cdots \left( \frac{X_n(1+ q X_n)}{q+X_n} \right)^{k_n} \bigg] \nonumber\\[5pt] & ={\frac{\det_{1 \le i, j \le n} \left( a_{j,m,n}(q,w;X_i) \right)}{\prod\limits_{1 \le i \le j \le n} (1-X_i X_j) \prod\limits_{1 \le i < j \le n} (X_j-X_i)}}, \end{align} $$

with

$$ \begin{align*} a_{j,m,n}(q,w;X) = q^{-j}X^{n-1} & (1+q X)^{m+1} (q+X) \\[5pt]\times & \left( (q X^{-1} + w)^{n-j} (1+ q X)^{j-m-2} - (q X + w)^{n-j} (1+q X^{-1})^{j-m-2} \right). \end{align*} $$
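Before turning to the proof in Section 2, Theorem 1.1 can be confronted with small cases by computer; the sketch below (our own test, for $n=2$ and $m \le 2$, with q and w kept symbolic) compares both sides of (1.7) as rational functions.

```python
# Check of Theorem 1.1 for n = 2 and m = 0, 1, 2.
from sympy import symbols, Matrix, cancel

q, w, X1, X2 = symbols('q w X1 X2')
X = [X1, X2]
n = 2

def h(x):
    return x * (1 + q * x) / (q + x)

def a(j, m, x):  # a_{j,m,n}(q, w; X) for n = 2
    return (q**(-j) * x**(n - 1) * (1 + q * x)**(m + 1) * (q + x)
            * ((q / x + w)**(n - j) * (1 + q * x)**(j - m - 2)
               - (q * x + w)**(n - j) * (1 + q / x)**(j - m - 2)))

for m in range(3):
    f = ((q + w * X1 + X2 + q * X1 * X2)
         * sum(h(X1)**k1 * h(X2)**k2
               for k2 in range(m + 1) for k1 in range(k2)))
    lhs = cancel((f - f.subs({X1: X2, X2: X1}, simultaneous=True)) / (X2 - X1))
    M = Matrix(n, n, lambda i, j: a(j + 1, m, X[i]))
    den = (1 - X1**2) * (1 - X1 * X2) * (1 - X2**2) * (X2 - X1)
    assert cancel(lhs - M.det() / den) == 0
```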

Setting $q=1$ , we obtain, after simplifying the right-hand side, the following corollary.

Corollary 1.2. For $n \ge 1$ , we have

(1.8) $$ \begin{align} & \frac{1}{\prod\limits_{1 \le i < j \le n} (X_j-X_i)} \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (1+w X_i+X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n} \right] \nonumber\\[5pt] & \qquad= \frac{\det_{1 \le i, j \le n} \left( X_i^{j-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{m+2n-j} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)}{\prod\limits_{i=1}^n (1-X_i) \prod\limits_{1 \le i < j \le n} (1-X_i X_j)(X_j-X_i)}. \end{align} $$
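As with the theorem itself, the corollary is easy to confirm for small parameters; a sketch of such a test (ours, for $n=2$ and small m):

```python
# Check of Corollary 1.2 for n = 2 and m = 0, ..., 3.
from sympy import symbols, Matrix, cancel

X1, X2, w = symbols('X1 X2 w')
X = [X1, X2]
n = 2

def b(i, j, m):  # (i, j) entry of the determinant in (1.8), 1-based
    x = X[i - 1]
    return (x**(j - 1) * (1 + x)**(j - 1) * (1 + w * x)**(n - j)
            - x**(m + 2 * n - j) * (1 + 1 / x)**(j - 1) * (1 + w / x)**(n - j))

for m in range(4):
    f = ((1 + w * X1 + X2 + X1 * X2)
         * sum(X1**k1 * X2**k2 for k2 in range(m + 1) for k1 in range(k2)))
    lhs = cancel((f - f.subs({X1: X2, X2: X1}, simultaneous=True)) / (X2 - X1))
    M = Matrix(n, n, lambda i, j: b(i + 1, j + 1, m))
    den = (1 - X1) * (1 - X2) * (1 - X1 * X2) * (X2 - X1)
    assert cancel(lhs - M.det() / den) == 0
```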

In the second part of the paper, we will then provide combinatorial interpretations for both sides of the identity in the corollary.

Outline

In Section 2, we give a proof of Theorem 1.1. In Appendix A, we discuss a point of view on the combinatorics of the classical Littlewood identity (1.1) and its bounded version (1.6) that is beneficial for possible combinatorial proofs of the Littlewood-type identities that we establish in this paper. Recall that this is of interest because such identities have been used several times [10, 11, 17] to establish connections between alternating sign matrix objects and plane partition objects. To approach this, we offer combinatorial interpretations of the left-hand sides of (1.4) and (1.8) in Section 3 and in Appendix B. Then, in Section 4, we offer a combinatorial interpretation of the right-hand sides of (1.4) and (1.8). These interpretations are nicest in the cases $w=0,1$. In Section 5, we offer an outlook on related work on the cases $w=0,-1$, which will appear in a forthcoming paper with Florian Schreier-Aigner.

2. Proof of Theorem 1.1

Bressoud’s elementary proof [2] of (1.6) turned out to be useful for obtaining the following (still elementary, but admittedly very complicated) proof of Theorem 1.1. Conceptually, the proof is not difficult: We use induction with respect to n and show that both sides satisfy the same recursion.

Using the following three functions

(2.1) $$ \begin{align} f(X)=\frac{X(1+q X)}{q+w X}, \qquad g(X)=q+w X, \qquad h(X)=\frac{X(1+qX)}{q+X} = \frac{f(X) g(X) X^{-1}}{f(X^{-1})g(X^{-1}) X}, \end{align} $$

it is easy to see that (1.7) is equivalent to

(2.2) $$ \begin{align} \frac{1}{\prod\limits_{1 \le i < j \le n} (X_j-X_i)} & \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} \frac{(f(X_j^{-1})-f(X_i))g(X_i) g(X_j^{-1}) X_j^2}{q(1-X_i X_j)} \right. \nonumber\\ &\times \left. \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} \prod_{i=1}^{n} h(X_i)^{k_i} \right] = \frac{\det_{1 \le i, j \le n} \left( a_{j,m,n}(q,w;X_i) \right)}{\prod\limits_{1 \le i \le j \le n} (1-X_i X_j) \prod\limits_{1 \le i < j \le n} (X_j-X_i)}, \end{align} $$

with

(2.3) $$ \begin{align} a_{j,m,n}(q,w;X) & = q^{-j} X^{n-m} f(X)^{m+1} g(X)^{m+1} f(X^{-1}) g(X^{-1}) \nonumber\\ &\quad \times \left( X^{-n+m+2} f(X)^{j-m-2}g(X)^{n-m-2} - X^{n-m-2} f(X^{-1})^{j-m-2}g(X^{-1})^{n-m-2} \right). \end{align} $$
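The expression for h in terms of f and g in (2.1) is a one-line computation; for the record, a quick symbolic confirmation (ours):

```python
# h(X) = f(X) g(X) X^(-1) / (f(X^(-1)) g(X^(-1)) X), with f, g as in (2.1).
from sympy import symbols, cancel

q, w, X = symbols('q w X')
f = lambda x: x * (1 + q * x) / (q + w * x)
g = lambda x: q + w * x
h = X * (1 + q * X) / (q + X)
assert cancel(h - f(X) * g(X) / X / (f(1 / X) * g(1 / X) * X)) == 0
```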

2.1. The case $m \to \infty $

We start by proving the $m \to \infty $ case of Theorem 1.1. We first show that this is equivalent to

(2.4) $$ \begin{align} \mathbf{ASym}_{X_1,\ldots,X_n} & \left[ \prod_{1 \le i < j \le n} \frac{(f(X_j^{-1})-f(X_i))g(X_i) g(X_j^{-1}) X_j^2}{q(1-X_i X_j)} \prod_{i=1}^n h(X_i)^{i-1} \prod_{i=1}^n \frac{1}{1- \prod_{j=i}^n h(X_j)} \right] \nonumber\\ &\qquad= \prod_{i=1}^n f(X_i^{-1}) g(X_i^{-1}) X_i^2 \frac{\prod_{1 \le i < j \le n} \left( f(X_i)-f(X_j) \right) g(X_i) g(X_j)} {\prod_{1 \le i \le j \le n} q (1-X_i X_j)}, \end{align} $$

which is just (1.5) multiplied on both sides with $\prod_{1 \le i < j \le n} (X_j-X_i)$. To see this, we rewrite the left-hand side of (1.7) by using the summation formula for the geometric series n times. As $m \to \infty$, $a_{j,m,n}(q,w;X_i)$ simplifies to

$$ \begin{align*}X_i^2 f(X_i)^{j-1} g(X_i)^{n-1} f(X_i^{-1}) g(X_i^{-1}) \end{align*} $$

in a formal power series sense, and

$$ \begin{align*}\det_{1 \le i,j \le n} \left( X_i^2 f(X_i)^{j-1} g(X_i)^{n-1} f(X_i^{-1}) g(X_i^{-1}) \right) \end{align*} $$

can be computed using the Vandermonde determinant evaluation; eventually, we are led to the right-hand side of (2.4).

We denote by $L_n(X_1,\ldots ,X_n)$ the left-hand side of (2.4) and observe that the following recursion is satisfied:

(2.5) $$ \begin{align} L_n(X_1,\ldots,X_n) = \sum_{k=1}^n (-1)^{k-1} & \frac{1}{1-\prod_{i=1}^n h(X_i) } L_{n-1}(X_1,\ldots,\widehat{X_k},\ldots,X_n) \nonumber\\ &\qquad \times \prod_{1 \le j \le n, \atop j \not= k} \frac{f(X_j)g(X_j) g(X_k) \left( 1 - f(X_j^{-1})^{-1} f(X_k)\right)}{q(1-X_j X_k)}, \end{align} $$

where $\widehat {X_k}$ means that we omit $X_k$ . Indeed, suppose more generally that

$$ \begin{align*}P(X_1,\ldots,X_n) = \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} s(X_i,X_j) \prod_{i=1}^n t(X_i)^{i-1} \prod_{i=1}^n \frac{1}{1- \prod_{j=i}^n u(X_j)} \right], \end{align*} $$

then

(2.6) $$ \begin{align} \begin{aligned} P(X_1,\ldots,X_n) &= \sum_{k=1}^n \sum_{\sigma \in {\mathcal S}_n: \atop \sigma(1)=k} \operatorname{\mathrm{sgn}} \sigma \frac{1}{1- \prod_{j=1}^n u(X_j)} \prod_{1 \le j \le n, \atop j \not= k} s(X_k,X_j) t(X_j) \\ &\qquad \times \sigma \left[ \prod_{2 \le i < j \le n} s(X_i,X_j) \prod_{i=2}^n t(X_i)^{i-2} \prod_{i=2}^n \frac{1}{1- \prod_{j=i}^n u(X_j)} \right] \\ &= \sum_{k=1}^n (-1)^{k-1} \frac{1}{1- \prod_{j=1}^n u(X_j)} P(X_1,\ldots,\widehat{X_k},\ldots,X_n) \\ &\qquad \times \prod_{1 \le j \le n, \atop j \not= k} s(X_k,X_j) t(X_j), \end{aligned} \end{align} $$

where we use the notation

$$ \begin{align*}\sigma \left[f(X_1,\ldots,X_n) \right] = f(X_{\sigma(1)},X_{\sigma(2)},\ldots,X_{\sigma(n)}). \end{align*} $$

The last equality in (2.6) follows from the fact that the sign of $\sigma $ is the product of $ (-1)^{k-1}$ and the sign of the restriction of $\sigma $ to $\{2,3,\ldots ,n\}$ , assuming $\sigma (1)=k$ and ‘identifying’ the preimage $\{2,3,\ldots ,n\}$ as well as the image $\{1,\ldots ,n\} \setminus \{k\}$ with $\{1,2,\ldots ,n-1\}$ in the natural way.

We show (2.4) by induction with respect to n. The case $n=1$ is easy to check. It suffices to show that the right-hand side of (2.4) satisfies the recursion (2.5) – that is,

$$ \begin{align*} \prod_{i=1}^n& f(X_i^{-1}) g(X_i^{-1}) X_i^2 \frac{\prod_{1 \le i < j \le n} \left( f(X_j)-f(X_i) \right) g(X_i) g(X_j)} {\prod_{1 \le i \le j \le n} q (1-X_i X_j)} \\ & = \sum_{k=1}^n (-1)^{k-1} \frac{1}{1-\prod_{i=1}^n h(X_i) } \prod_{1 \le j \le n, \atop j \not= k} \frac{f(X_j)g(X_j)g(X_j^{-1}) g(X_k) \left( f(X_k)-f(X_j^{-1}) \right) X_j^2}{q(1-X_j X_k)} \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times \frac{\prod_{1 \le i < j \le n, i,j \not= k} \left( f(X_j)-f(X_i) \right) g(X_i) g(X_j)} {\prod_{1 \le i \le j \le n, i,j \not= k} q (1-X_i X_j)}. \end{align*} $$

We multiply by $\left (1 - \prod _{i=1}^n h(X_i) \right ) \prod _{1 \le i \le j \le n} q (1-X_i X_j)$ and obtain

(2.7) $$ \begin{align} & \left(\prod_{i=1}^{n} f(X_i^{-1}) g(X_i^{-1}) X_i^2 - \prod_{i=1}^{n} f(X_i) g(X_i) \right) \prod_{1 \le i < j \le n} (f(X_j)-f(X_i)) g(X_i) g(X_j) \nonumber\\ & = \prod_{1 \le i < j \le n} (f(X_j)-f(X_i)) g(X_i) g(X_j) \sum_{k=1}^n q (1-X_k^2) \prod_{1 \le j \le n \atop j \not= k} \frac{f(X_j) g(X_j^{-1})( f(X_k)-f(X_j^{-1})) X_j^2}{f(X_k) - f(X_j)}. \end{align} $$

Note that the sign $(-1)^{k+1}$ in the sum has disappeared as we have pulled out the factor $\prod _{1 \le i < j \le n} (f(X_j)-f(X_i))$ .

By the definitions of $f(X)$ and $g(X)$ , this is

$$ \begin{align*} \bigg( \prod_{i=1}^n (q+X_i) - \prod_{i=1}^n X_i & (1+q X_i) \bigg) \prod_{1 \le i < j \le n} q (X_j - X_i)(1+q X_i + q X_j + w X_i X_j) \\[5pt] &\quad = \prod_{1 \le i < j \le n} q (X_j - X_i)(1+q X_i + q X_j + w X_i X_j) \\[5pt] & \times \sum_{k=1}^n (q-X_k^2) \prod_{1 \le j \le n \atop j \not= k} \frac{X_j ( 1 + q X_j)(1-X_j X_k) (q + X_j + w X_k + q X_j X_k)}{(X_j-X_k)(1 + q X_j + q X_k + w X_j X_k)}. \end{align*} $$

For each $s \in \{1,2,\ldots,n\}$, both sides are polynomials in $X_s$ of degree not greater than $2n$. It is not hard to see that both sides vanish for $X_s = X_t$ and $X_s =-\frac{1+ q X_t}{q + w X_t}$ for any $t \in \{1,2,\ldots,n\} \setminus \{s\}$. Moreover, it is also not hard to see that the evaluations agree for $X_s=0,-q^{-1}$, which gives a total of $2n$ evaluations for each $X_s$: due to the factors $X_j$ and $1+q X_j$, all summands on the right-hand side vanish when setting $X_s=0,-q^{-1}$, except for the one for $k=s$. This summand can easily be seen to coincide with the specialization of the left-hand side.

It follows that the difference of the left-hand side and the right-hand side is, up to a multiplicative constant in $\mathbb{Q}(q,w)$, equal to

(2.8) $$ \begin{align} \prod_{i=1}^n X_i(1+ q X_i) \prod_{1 \le i < j \le n} (X_j - X_i)(1 + q X_i + q X_j + w X_i X_j). \end{align} $$

To show that this constant is indeed zero, we consider the following specialization

$$ \begin{align*}(X_1,X_2,X_3,X_4,\ldots) = \left(X_1,\frac{1}{X_1},X_3,\frac{1}{X_3},\ldots \right).\end{align*} $$

Note first that (2.8) does not vanish at this specialization, and therefore, it suffices to show that the left-hand side and the right-hand side of (2.7) agree on this specialization. If n is even, this is particularly easy to see because both sides vanish (on the right-hand side, all summands vanish, which is due to the factor $1-X_j X_k$ ). If n is odd, then only the last summand on the right-hand side remains, and it is not hard to see that it is equal to the left-hand side.

2.2. The general case

In order to prove Theorem 1.1, we need to show

(2.9) $$ \begin{align} \det_{1 \le i, j \le n} \left( a_{j,m,n}(q,w;X_i) \right) = \mathbf{ASym}_{X_1,\ldots,X_n} F(m;X_1,\ldots,X_n), \end{align} $$

where

$$ \begin{align*} F(m;X_1,\ldots,X_n)= \prod_{i=1}^n (1-X_i^2) \prod_{1 \le i < j \le n} q^{-1} & (f(X_j^{-1}) - f(X_i))g(X_i) g(X_j^{-1}) X_j^2 \\[5pt] &\quad \times \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} h(X_1)^{k_1} h(X_2)^{k_2} \cdots h(X_n)^{k_n}. \end{align*} $$

See also (2.2). Observe that we have the following recursion:

$$ \begin{align*} F(m;X_1,\ldots,X_n) = (1-X_1^2) \prod_{j=2}^n q^{-1} & (f(X_j^{-1}) - f(X_1))g(X_1) g(X_j^{-1}) X_j^2 \\ & \times \sum_{l=0}^m h(X_1)^{-1} \left( \prod_{i=1}^n h(X_i) \right)^{l+1} F(m-1-l;X_2,\ldots,X_n). \end{align*} $$

We set

$$ \begin{align*}A(m;X_1,\ldots,X_n) = \mathbf{ASym}_{X_1,\ldots,X_n} F(m;X_1,\ldots,X_n) \end{align*} $$

and observe that

$$ \begin{align*} A(m;X_1,\ldots,X_n) &= \sum_{k=1}^n \sum_{l=0}^m (-1)^{k+1} (1-X_k^2) h(X_k)^{-1} \left( \prod_{i=1}^n h(X_i) \right)^{l+1} \\ &\quad \times A(m-l-1;X_1,\ldots,\widehat{X_k},\ldots,X_n) \\ &\quad \times \prod_{1 \le i \le n, i \not= k} q^{-1} (f(X_i^{-1}) - f(X_k))g(X_k) g(X_i^{-1}) X_i^2, \end{align*} $$

by the same argument that has led to (2.6). By the induction hypothesis, we have

$$ \begin{align*}A(m-l-1;X_1,\ldots,\widehat{X_k},\ldots,X_n) = \det_{1 \le i \le n, i \not= k \atop 1 \le j \le n-1} \left( a_{j,m-l-1,n-1}(q,w;X_i) \right). \end{align*} $$

Therefore, the right-hand side of (2.9) is

(2.10) $$ \begin{align} \sum_{k=1}^n (-1)^{k+1} (1-X_k^2) \prod_{1 \le i \le n, i \not= k} q^{-1} & (f(X_i^{-1}) - f(X_k))g(X_k) g(X_i^{-1}) X_i^2 \nonumber\\ & \times \sum_{l=0}^m h(X_k)^{l} \det_{1 \le i \le n, i \not= k \atop 1 \le j \le n-1} \left( h(X_i)^{l+1} a_{j,m-l-1,n-1}(q,w;X_i) \right), \end{align} $$

and we need to show that it is equal to $\det _{1 \le i, j \le n} \left ( a_{j,m,n}(q,w;X_i) \right )$ .

Using (2.1) and (2.3), we note that

$$ \begin{align*}h(X)^{l+1} a_{j,m-l-1,n-1}(q,w;X) = q^{-j} \left( f(X)^{j} g(X)^{n-1} h(X)^l - X^{2n-2} f(X^{-1})^{j} g(X^{-1})^{n-1} h(X)^{m+1} \right), \end{align*} $$

and thus, we can write the determinant in (2.10) as

$$ \begin{align*}\sum_{\sigma, S} (-1)^{I(\sigma)+|S|} \prod_{i \in S} q^{-\sigma(i)} X_i^{2n-2} f(X_i^{-1})^{\sigma(i)} g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \in \overline{S}} q^{-\sigma(i)} f(X_i)^{\sigma(i)} g(X_i)^{n-1} h(X_i)^l, \end{align*} $$

where the sum is over all bijections $\sigma : \{1,2,\ldots,n \} \setminus \{k\} \to \{1,2,\ldots,n-1\}$ and all subsets S of $ \{1,2,\ldots,n \} \setminus \{k\}$, and $I(\sigma)$ is the number of inversions (i.e., pairs $i,j \in \{1,2,\ldots,n \} \setminus \{k\} $ with $i<j$ and $\sigma(i)>\sigma(j)$). Moreover, $\overline{S}$ denotes the complement of S in $\{1,2,\ldots,n \} \setminus \{k\}$. Also note that $(-1)^{I(\sigma)}$ is just the sign of the permutation as it appears in the Leibniz formula for the determinant when expanding it over all permutations. Comparing with (2.10), we multiply by $h(X_k)^{l}$ and take the sum over l. We obtain

$$ \begin{align*} \sum_{\sigma, S} (-1)^{I(\sigma)+|S|} \prod_{i \in S} q^{-\sigma(i)} X_i^{2n-2} f(X_i^{-1})^{\sigma(i)} g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \in \overline{S}} & q^{-\sigma(i)} f(X_i)^{\sigma(i)} g(X_i)^{n-1} \\[4pt] & \times \sum_{l=0}^m h(X_k)^l \prod_{i \in \overline{S}} h(X_i)^l. \end{align*} $$

We evaluate the sum and rearrange some terms:

$$ \begin{align*} \sum_{S} (-1)^{|S|} q^{-n+1} \frac{1- \prod_{i \in \overline{S} \cup \{k\}} h(X_i)^{m+1}}{1-\prod_{i \in \overline{S} \cup \{k\}} h(X_i)} & \prod_{i \in S} X_i^{2n-2} f(X_i^{-1}) g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \in \overline{S}} f(X_i) g(X_i)^{n-1} \\[4pt] & \times \sum_{\sigma} (-1)^{I(\sigma)} \prod_{i \in S} \left[q^{-1} f(X_i^{-1})\right]^{\sigma(i)-1} \prod_{i \in \overline{S}} \left[ q^{-1} f(X_i) \right]^{\sigma(i)-1}. \end{align*} $$

The inner sum is a Vandermonde determinant, which we evaluate. We obtain

$$ \begin{align*} \sum_{S} (-1)^{|S|} q^{-n+1} \frac{1- \prod_{i \in \overline{S} \cup \{k\}} h(X_i)^{m+1}}{1-\prod_{i \in \overline{S} \cup \{k\}} h(X_i)} \prod_{i \in S} X_i^{2n-2} f(X_i^{-1}) & g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \in \overline{S}} f(X_i) g(X_i)^{n-1} \\[4pt] &\quad \times \prod_{1 \le i < j \le n, i,j \not=k} q^{-1} (f(Y_j) - f(Y_i)), \end{align*} $$

with $Y_i=X_i$ if $i \in \overline {S}$ and $Y_i=X_i^{-1}$ if $i \in S$ .

From (2.10), we add the remaining factors as well as the sum over all k and finally obtain the full right-hand side of (2.9). We exchange the sums over k and S: now we sum over all proper subsets $S \subseteq [n]$ and all k not in S. If we write $i \notin S$, then we mean $i \in \{1,2,\ldots,n\} \setminus S$. This gives

(2.11) $$ \begin{align} \sum_{S} (-1)^{|S|} & q^{-n+1} \frac{1- \prod_{i \notin S} h(X_i)^{m+1}} {1- \prod_{i \notin S} h(X_i)} \prod_{i \in S} X_i^{2n-2} f(X_i^{-1}) g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \notin S} f(X_i) g(X_i)^{n-1} \nonumber\\[4pt] & \times \sum_{k \notin S} (-1)^{k+1} (1-X_k^2) f(X_k)^{-1} \prod_{1 \le i \le n, i \not= k} q^{-1} (f(X_i^{-1}) - f(X_k)) g(X_i^{-1}) X_i^2 \nonumber\\[4pt] &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times \prod_{1 \le i < j \le n, i,j \not=k} q^{-1} (f(Y_j) - f(Y_i)). \end{align} $$

We rearrange (2.11) slightly as follows

$$ \begin{align*} \sum_{S} (-1)^{|S|} q^{-n+1} & \frac{1- \prod_{i \notin S} h(X_i)^{m+1}} {1- \prod_{i \notin S} h(X_i)} \prod_{i \in S} X_i^{2n} f(X_i^{-1}) g(X_i^{-1})^{n} h(X_i)^{m+1} \prod_{i \notin S} g(X_i)^{n-1} \\[4pt] & \qquad\qquad\times \sum_{k \notin S} (-1)^{k+1} (1-X_k^2) \prod_{i \notin S \cup \{k\}} f(X_i) g(X_i^{-1}) X_i^2 \\[4pt] & \qquad\qquad\qquad\qquad\times \prod_{1 \le i \le n \atop i \not=k} q^{-1} \left(f(X_i^{-1}) - f(X_k) \right) \prod_{1 \le i < j \le n, i,j \not=k} q^{-1} (f(Y_j) - f(Y_i)). \end{align*} $$

This is further equal to

(2.12) $$ \begin{align} \sum_{S} (-1)^{|S|} & q^{-n+1} \frac{1- \prod_{i \notin S} h(X_i)^{m+1}} {1- \prod_{i \notin S} h(X_i)} \prod_{i \in S} X_i^{2n} f(X_i^{-1}) g(X_i^{-1})^{n} h(X_i)^{m+1} \prod_{i \notin S} g(X_i)^{n-1} \nonumber\\ & \qquad\quad\times \prod_{1 \le i < j \le n \atop \{i,j\} \cap S \not= \emptyset} q^{-1} (f(Y_j) - f(Y_i)) \prod_{1 \le i < j \le n \atop \{i,j\} \cap S = \emptyset} q^{-1} (f(X_j) - f(X_i)) \nonumber\\ & \qquad\qquad\qquad\times \sum_{k \notin S} (1-X_k^2) \prod_{i \notin S \cup \{k\}} \frac{ f(X_i) g(X_i^{-1}) \left( f(X_k) - f(X_i^{-1}) \right) X_i^2}{f(X_k)-f(X_i)}. \end{align} $$

We divide (2.7) by $q \prod _{1 \le i < j \le n} (f(X_j) - f(X_i))g(X_i) g(X_j)$ and obtain

$$ \begin{align*} q^{-1} \bigg(\prod_{i=1}^{n} f(X_i^{-1}) g(X_i^{-1}) X_i^2 - \prod_{i=1}^{n} & f(X_i) g(X_i) \bigg) \\ &\quad = \sum_{k=1}^n (1-X_k^2) \prod_{1 \le j \le n \atop j \not= k} \frac{f(X_j) g(X_j^{-1})( f(X_k)-f(X_j^{-1})) X_j^2}{f(X_k) - f(X_j)}. \end{align*} $$

By applying this identity to the variables $(X_i)_{i \notin S}$, we can replace the sum over all $k \notin S$ in (2.12) by something simpler:

$$ \begin{align*} \sum_{S} (-1)^{|S|} q^{-n} & \frac{1- \prod_{i \notin S} h(X_i)^{m+1}} {1- \prod_{i \notin S} h(X_i)} \prod_{i \in S} X_i^{2n} f(X_i^{-1}) g(X_i^{-1})^{n} h(X_i)^{m+1} \prod_{i \notin S} g(X_i)^{n-1} \\ & \qquad\quad \times \prod_{1 \le i < j \le n \atop \{i,j\} \cap S \not= \emptyset} q^{-1} (f(Y_j) - f(Y_i)) \prod_{1 \le i < j \le n \atop \{i,j\} \cap S = \emptyset} q^{-1} (f(X_j) - f(X_i)) \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad \times \left(\prod_{i \notin S} f(X_i^{-1}) g(X_i^{-1}) X_i^2 - \prod_{i \notin S} f(X_i) g(X_i) \right). \end{align*} $$

We rearrange terms and take into account that $Y_i=X_i$ if $i \notin S$ (extending the definition slightly by setting $Y_k=X_k$ ). After some cancellation, we obtain

$$ \begin{align*} q^{-n} \sum_{S} (-1)^{|S|}& (1- \prod_{i \notin S} h(X_i)^{m+1}) \prod_{i=1}^n X_i^{n+1} f(X_i^{-1}) g(X_i^{-1}) \\ & \times \prod_{i \in S} X_i^{n-1} g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \notin S} X_i^{-n+1} g(X_i)^{n-1} \prod_{1 \le i < j \le n} q^{-1} (f(Y_j) - f(Y_i)). \end{align*} $$

Using the Vandermonde determinant formula and the fact that $Y_i=X_i^{-1}$ if $i \in S$ and $Y_i=X_i$ if $i \notin S$ , this is equal to

$$ \begin{align*} q^{-\binom{n+1}{2}} \sum_{S, \sigma} & (-1)^{|S|+I(\sigma)} (1- \prod_{i \notin S} h(X_i)^{m+1}) \prod_{i=1}^n X_i^{n+1} f(X_i^{-1}) g(X_i^{-1}) \\ &\qquad \times \prod_{i \in S} X_i^{n-1} f(X_i^{-1})^{\sigma(i)-1} g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \notin S} X_i^{-n+1} f(X_i)^{\sigma(i)-1} g(X_i)^{n-1},\end{align*} $$

which we expand as follows:

(2.13) $$ \begin{align} q^{-\binom{n+1}{2}} & \sum_{S, \sigma} (-1)^{|S|+I(\sigma)} \prod_{i=1}^n X_i^{n+1} f(X_i^{-1}) g(X_i^{-1}) \nonumber\\ & \times \prod_{i \in S} X_i^{n-1} f(X_i^{-1})^{\sigma(i)-1} g(X_i^{-1})^{n-1} h(X_i)^{m+1} \prod_{i \notin S} X_i^{-n+1} f(X_i)^{\sigma(i)-1} g(X_i)^{n-1} \nonumber\\ & \qquad\qquad\quad- q^{-\binom{n+1}{2}} \prod_{i} X_i^{n+1} f(X_i^{-1}) g(X_i^{-1}) h(X_i)^{m+1} \nonumber\\ & \sum_{S, \sigma} (-1)^{|S|+I(\sigma)} \prod_{i \in S} X_i^{n-1} f(X_i^{-1})^{\sigma(i)-1} g(X_i^{-1})^{n-1} \prod_{i \notin S} X_i^{-n+1} f(X_i)^{\sigma(i)-1} g(X_i)^{n-1}, \end{align} $$

for reasons that become clear next. Recall that the sums are over all proper subsets S, but since the two terms in (2.13) agree for $S=\{1,2,\ldots,n\}$, we can also sum over all subsets S. The sum in the second term can be written as

$$ \begin{align*} \sum_{S, \sigma} (-1)^{|S|+I(\sigma)} \prod_{i \in S} X_i^{n-1} & f(X_i^{-1})^{\sigma(i)-1} g(X_i^{-1})^{n-1} \prod_{i \notin S} X_i^{-n+1} f(X_i)^{\sigma(i)-1} g(X_i)^{n-1} \\ & = \det_{1 \le i, j \le n}\left( X_i^{-n+1} g(X_i)^{n-1} f(X_i)^{j-1} - X_i^{n-1}g(X_i^{-1})^{n-1} f(X_i^{-1})^{j-1} \right). \end{align*} $$

By the definitions of $f(X)$ and $g(X)$ , this is equal to

$$ \begin{align*}\prod_{i=1}^n X_i^{-n+1} \det_{1 \le i, j \le n}\left( X_i^{j-1} (q+w X_i)^{n-j} (1+q X_i)^{j-1} - X_i^{n-j}(q X_i + w )^{n-j} (X_i+ q )^{j-1} \right). \end{align*} $$

The determinant can be seen to vanish as follows: First, observe that it is a polynomial in $X_1,\ldots ,X_n$ of degree no greater than $2n-2$ in each $X_i$ . For $1 \le i < j \le n$ , the i-th row and the j-th row of the underlying matrix are collinear when setting $X_i=X_j$ or $X_i= X_j^{-1}$ . Moreover, the i-th row vanishes when setting $X_i^2=1$ . It follows that $\prod _{i=1}^{n} (X_i^2-1) \prod _{1 \le i < j \le n} (X_j-X_i)(1-X_i X_j)$ is a divisor of the determinant, but since it is of degree $2n$ in each $X_i$ , the determinant vanishes. The first expression in (2.13) is obviously equal to

$$ \begin{align*}\det_{1 \le i, j \le n} \left( q^{-j} X_i^2 f(X_i^{-1}) g(X_i^{-1}) f(X_i)^{j-1} g(X_i)^{n-1} - q^{-j} X_i^{2n} f(X_i^{-1})^{j} g(X_i^{-1})^n h(X_i)^{m+1} \right), \end{align*} $$

and this is equal to $\det _{1 \le i,j \le n} \left ( a_{j,m,n}(q,w;X_i) \right )$ ; see (2.3) as well as the expression for $h(X)$ in terms of $f(X)$ and $g(X)$ . This concludes the proof of Theorem 1.1.

3. Combinatorial interpretations of the left-hand sides

3.1. Arrowed Gelfand-Tsetlin patterns

To continue the analogy with the ordinary Littlewood identity (1.1) and Macdonald’s bounded version (1.6) of it, both sides of the identities (1.4) and (1.8) will be interpreted combinatorially. For the left-hand sides, this was accomplished in another recent paper [13]; we describe the result and adapt it to our context next.

In order to motivate the definition of the combinatorial objects, recall the combinatorial interpretation of the left-hand sides of (1.1) and (1.6) in terms of Gelfand-Tsetlin patterns, which is described in Appendix A.3. We need to extend the discussion from there insofar as there is also a sensible extension of the definition of Gelfand-Tsetlin patterns to arbitrary integer sequences $(\lambda_1,\ldots,\lambda_n)$. The notion of signed intervals is crucial for this:

$$ \begin{align*} \underline{[a,b]} = \begin{cases} [a,b], & a \le b \\\emptyset, & b=a-1 \\ [b+1,a-1], & b < a-1 \end{cases}. \end{align*} $$

If we are in the last case, then the interval is said to be negative. The condition that defines a Gelfand-Tsetlin pattern can also be written as $a_{i,j} \in [a_{i+1,j},a_{i+1,j+1}]$ . If the bottom row is weakly increasing, we can replace this condition also by $a_{i,j} \in \underline {[a_{i+1,j},a_{i+1,j+1}]}$ (since we then have $a_{i+1,j} \le a_{i+1,j+1}$ as can be seen inductively with respect to n).

We use this now as the definition for arbitrary bottom rows: A (generalized) Gelfand-Tsetlin pattern is a triangular array $A=(a_{i,j})_{1 \le j \le i \le n}$ of integers with $a_{i,j} \in \underline {[a_{i+1,j},a_{i+1,j+1}]}$ for all $i,j$ . Then the sign of a Gelfand-Tsetlin pattern A is

$$ \begin{align*} (-1)^{\#\text{ of negative intervals } \underline{[a_{i+1,j},a_{i+1,j+1}]}} =: \operatorname{\mathrm{sgn}} A. \end{align*} $$

Then

(3.1) $$ \begin{align} s_{(\lambda_1,\ldots,\lambda_n)}(X_1,\ldots,X_n) = \sum_{A=\left( a_{i,j} \right)_{1 \le j \le i \le n}} \operatorname{\mathrm{sgn}} A \prod_{i=1}^n X_i^{\sum_{j=1}^i a_{i,j} - \sum_{j=1}^{i-1} a_{i-1,j}}, \end{align} $$

where the sum is over all Gelfand-Tsetlin patterns $A=(a_{i,j})_{1 \le j \le i \le n}$ with bottom row $(\lambda _n,\lambda _{n-1},\ldots ,\lambda _1)$ and

$$ \begin{align*} s_{(\lambda_1,\ldots,\lambda_n)}(X_1,\ldots,X_n) = \frac{\det_{1 \le i,j \le n} \left( X_i^{\lambda_j+n-j} \right)}{\prod_{1 \le i < j \le n} (X_i-X_j)}.\end{align*} $$

This result is a special case of Theorem 3.4 below, which will also cover the combinatorial interpretation of the left-hand sides of (1.4) and (1.8). However, this special case essentially appeared earlier in [8] (with some details missing).
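The extension (3.1) of the Schur polynomial to arbitrary integer sequences can be checked by brute force; the sketch below (ours, for $n=2$) enumerates generalized Gelfand-Tsetlin patterns via the signed-interval definition and compares with the bialternant.

```python
# Brute-force check of (3.1) for n = 2 and several integer bottom rows.
from sympy import symbols, cancel, expand

X1, X2 = symbols('X1 X2')

def signed_interval(a, b):
    # returns (list of elements, sign) of the signed interval [a, b]
    if a <= b:
        return list(range(a, b + 1)), 1
    if b == a - 1:
        return [], 1
    return list(range(b + 1, a)), -1

def schur2(l1, l2):  # bialternant for arbitrary integers l1, l2
    return cancel((X1**(l1 + 1) * X2**l2 - X2**(l1 + 1) * X1**l2) / (X1 - X2))

for l1, l2 in [(3, 1), (1, 3), (2, 2), (0, 4), (4, 0)]:
    vals, sgn = signed_interval(l2, l1)   # bottom row is (l2, l1)
    gf = sum(sgn * X1**a * X2**(l1 + l2 - a) for a in vals)
    assert expand(gf - schur2(l1, l2)) == 0
```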

Definition 3.1. An arrowed Gelfand-Tsetlin pattern (AGTP) is a triangular array of the following form:

$$ \begin{align*} \begin{array}{ccccccccccccccccc} & & & & & & & & a_{1,1} & & & & & & & & \\ & & & & & & & a_{2,1} & & a_{2,2} & & & & & & & \\ & & & & & & \dots & & \dots & & \dots & & & & & & \\ & & & & & a_{n-2,1} & & \dots & & \dots & & a_{n-2,n-2} & & & & & \\ & & & & a_{n-1,1} & & a_{n-1,2} & & \dots & & \dots & & a_{n-1,n-1} & & & & \\ & & & a_{n,1} & & a_{n,2} & & a_{n,3} & & \dots & & \dots & & a_{n,n} & & & \end{array} \end{align*} $$

where each entry $a_{i,j}$ is an integer decorated with an element from $\{\nwarrow , \nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow ,\emptyset \}$, and the following is satisfied for each entry a not in the bottom row: Suppose b is the $\swarrow $-neighbor of a and c is the $\searrow $-neighbor of a – that is,

$$ \begin{align*} \begin{array}{ccc} &a& \\ b&&c \end{array}. \end{align*} $$

Depending on the decoration of $b, c$ , denoted by $\operatorname {decor}(b)$ and $\operatorname {decor} ( c )$ , respectively, we need to consider four cases:

  • $(\operatorname {decor}(b),\operatorname {decor}( c )) \in \{\nwarrow ,\emptyset \} \times \{\nearrow , \emptyset \}$ : $a \in \underline {[b,c]}$ .

  • $(\operatorname {decor}(b),\operatorname {decor}( c )) \in \{\nwarrow ,\emptyset \} \times \{\nwarrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ : $a \in \underline {[b,c-1]}$ .

  • $(\operatorname {decor}(b),\operatorname {decor}( c )) \in \{\nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \} \times \{\nearrow ,\emptyset \}$ : $a \in \underline {[b+1,c]}$ .

  • $(\operatorname {decor}(b),\operatorname {decor}( c)) \in \{\nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \} \times \{\nwarrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ : $a \in \underline {[b+1,c-1]}$ .

An example is provided next. We write $^\nwarrow e, e^\nearrow , ^\nwarrow e^\nearrow , e$ if the entry e is decorated with $\nwarrow , \nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow ,\emptyset $ , respectively.

$$ \begin{align*} \begin{array}{ccccccccccccccccc} & & & & & & & & ^\nwarrow2 & & & & & & & & \\ & & & & & & & 2 & & ^\nwarrow3^\nearrow & & & & & & & \\ & & & & & & ^\nwarrow2 & & 2^\nearrow & & 3^\nearrow & & & & & & \\ & & & & & 3 & & ^\nwarrow2 & & ^\nwarrow3^\nearrow & & ^\nwarrow3^\nearrow & & & & & \\ & & & & 2^\nearrow & & 4 & & ^\nwarrow2^\nearrow & & 3^\nearrow & & 2 & & & & \\ & & & ^\nwarrow6 & & ^\nwarrow2^\nearrow & & 5 & & 1^\nearrow & & ^\nwarrow4 & & ^\nwarrow2^\nearrow & & &\end{array}\end{align*} $$

We define the sign of an AGTP $A=(a_{i,j})_{1 \le j \le i \le n}$ as follows: Each negative interval $\underline {[a_{i+1,j}(+1),a_{i+1,j+1}(-1)]}$ with $i \ge 1$ and $j \le i$ contributes a multiplicative $-1$, where we choose $a_{i+1,j}+1$ iff $\operatorname {decor}(a_{i+1,j}) \in \{\nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ and $a_{i+1,j}$ otherwise, and we choose $a_{i+1,j+1} - 1$ iff $\operatorname {decor}(a_{i+1,j+1}) \in \{ \nwarrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ and $a_{i+1,j+1}$ otherwise. In the example above, there are no negative intervals in rows $1,2,3$, two in each of rows $4$ and $5$, and three in row $6$, so that the sign of the pattern is $-1$.

We associate the following weight to a given arrowed Gelfand-Tsetlin pattern $A=(a_{i,j})_{1 \le j \le i \le n}$ :

$$ \begin{align*} {\operatorname{W}}(A) = \operatorname{\mathrm{sgn}}(A) t^{\# \emptyset} u^{\# \nearrow} v^{\# \nwarrow} w^{\# \nwarrow \!\!\!\!\!\;\!\! \nearrow} \prod_{i=1}^{n} X_i^{\sum_{j=1}^i a_{i,j} - \sum_{j=1}^{i-1} a_{i-1,j} + \#\nearrow \text{ in row } i - \#\nwarrow \text{ in row } i}. \end{align*} $$

The weight of our example is

$$ \begin{align*}- t^5 u^5 v^5 w^6 X_1 X_2^3 X_3^3 X_4^3 X_5^4 X_6^6. \end{align*} $$

For this paper, only arrowed Gelfand-Tsetlin patterns with weakly increasing bottom row are relevant, and in this case, the description of the objects can be simplified considerably as follows.

Proposition 3.2. An arrowed Gelfand-Tsetlin pattern with weakly increasing bottom row is an ordinary Gelfand-Tsetlin pattern (i.e., with weakly increasing rows), where each entry is decorated with an element from $\{\nwarrow , \nearrow , \nwarrow \!\!\!\!\!\;\!\! \nearrow ,\emptyset \}$ such that the following is satisfied.

  • Suppose an entry a is equal to its $\nearrow $ -neighbor and a is decorated with either $\nearrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $ (i.e., an arrow is pointing from a to its $\nearrow $ -neighbor). Then the entry right of a in the same row is also equal to a and decorated with $\nwarrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $ .

  • Suppose an entry a is equal to its $\nwarrow $ -neighbor and a is decorated with either $\nwarrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $ (i.e., an arrow is pointing from a to its $\nwarrow $ -neighbor). Then the entry left of a in the same row is also equal to a and decorated with $\nearrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $ .

The sign is $-1$ raised to the number of entries a that are equal to their $\swarrow $-neighbor b as well as to their $\searrow $-neighbor c such that b is decorated with $\nearrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $ or c is decorated with $\nwarrow $ or $\nwarrow \!\!\!\!\!\;\!\! \nearrow $.

Proof. Suppose $(a_{i,j})_{1 \le j \le i \le n}$ is an AGTP. If $a_{i+1,j} < a_{i+1,j+1}$ for particular $i,j$ , then $a_{i+1,j} \le a_{i,j} \le a_{i+1,j+1}$ . The first inequality has to be strict if the decoration of $a_{i+1,j}$ contains an arrow pointing toward $a_{i,j}$ (i.e., $\operatorname {decor}(a_{i+1,j}) \in \{\nearrow ,\nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ ), while the second inequality has to be strict if $a_{i+1,j+1}$ contains an arrow pointing toward $a_{i,j}$ (i.e., $\operatorname {decor}(a_{i+1,j+1}) \in \{\nwarrow ,\nwarrow \!\!\!\!\!\;\!\! \nearrow \}$ ).

However, if $a_{i+1,j} = a_{i+1,j+1}$ for particular $i,j$ , then $a_{i+1,j}=a_{i,j}=a_{i+1,j+1}$ . In this case,

$$ \begin{align*}(\operatorname{decor}(a_{i+1,j}),\operatorname{decor}(a_{i+1,j+1})) \in \{\emptyset,\nwarrow\} \times \{\emptyset,\nearrow\} \end{align*} $$

or

(3.2) $$ \begin{align} (\operatorname{decor}(a_{i+1,j}),\operatorname{decor}(a_{i+1,j+1})) \in \{\nearrow,\nwarrow \!\!\!\!\!\;\!\! \nearrow\} \times \{\nwarrow,\nwarrow \!\!\!\!\!\;\!\! \nearrow\}, \end{align} $$

where in the second case, there is a contribution of $-1$ to the sign of the object.

These observations imply that if the bottom row is weakly increasing, then the underlying undecorated triangular array is an ordinary Gelfand-Tsetlin pattern and that the properties on the decoration stated in the proposition are satisfied. The only instance when we have a contribution to the sign is in the case of (3.2).

Conversely, any decoration of a given Gelfand-Tsetlin pattern that follows the rules given in the statement of the proposition yields an arrowed Gelfand-Tsetlin pattern according to Definition 3.1.

Remark 3.3. In the case that the bottom row of an arrowed Gelfand-Tsetlin pattern is strictly increasing and we forbid the decoration $\emptyset $, all rows are strictly increasing, and we obtain a monotone triangle. Recall that monotone triangles are defined as Gelfand-Tsetlin patterns with strictly increasing rows; their significance comes from the fact that monotone triangles with bottom row $1,2,\ldots,n$ are in easy bijective correspondence with $n \times n$ alternating sign matrices; see, for example, [3]. In such a case, there is no instance where we gain a $-1$ that contributes to the sign. These objects were used in [13] to study alternating sign matrices. Among other things, the generating function of these decorated monotone triangles can be interpreted as a generating function of (undecorated) monotone triangles, and thus of alternating sign matrices.

The following explicit formula for the generating function of arrowed Gelfand-Tsetlin patterns with fixed bottom row $k_1,k_2,\ldots,k_n$ is proved in [13].

Theorem 3.4. The generating function of arrowed Gelfand-Tsetlin patterns with bottom row $k_1,\ldots ,k_n$ is

$$ \begin{align*}\prod_{i=1}^{n} (t + u X_i + v X_i^{-1} + w) \prod_{1 \le i < j \le n}\left( t + u {\operatorname{E}}_{k_i} + v {\operatorname{E}}_{k_j}^{-1} + w {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j}^{-1} \right) s_{(k_n,k_{n-1},\ldots,k_1)}(X_1,\ldots,X_n), \end{align*} $$

where ${\operatorname {E}}_x$ denotes the shift operator, defined as ${\operatorname {E}}_x p(x) = p(x+1)$ .

The formula has to be applied as follows: First interpret $k_1,\ldots ,k_n$ as variables and apply the operator $\prod _{1 \le i < j \le n} \left ( t + u {\operatorname {E}}_{k_i} + v {\operatorname {E}}_{k_j}^{-1} + w {\operatorname {E}}_{k_i} {\operatorname {E}}_{k_j}^{-1} \right )$ to $s_{(k_n,k_{n-1},\ldots ,k_1)}(X_1,\ldots ,X_n)$ . This will result in a linear combination of expressions of the form $s_{(k_n+i_n,k_{n-1}+i_{n-1},\ldots ,k_1+i_1)}(X_1,\ldots ,X_n)$ for some (varying) integers $i_j$ . The $k_j$ are only specialized to the actual integers after that. Note that we do not necessarily have $k_n+i_n \ge k_{n-1}+i_{n-1} \ge \ldots \ge k_1+i_1$ even if $k_n \ge k_{n-1} \ge \ldots \ge k_1$ , so that the extension of the Schur polynomial in (3.1) is necessary.
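For $n=2$, Theorem 3.4 can be verified against Definition 3.1 by sheer enumeration. The sketch below is our own encoding of the definition (decorations are coded as subsets of $\{L,R\}$ with $L=\nwarrow$ and $R=\nearrow$); it sums the weights of all AGTPs with a given bottom row $(k_1,k_2)$ and compares with the operator formula, whose expansion for $n=2$ is $t\,s_{(k_2,k_1)} + u\,s_{(k_2,k_1+1)} + v\,s_{(k_2-1,k_1)} + w\,s_{(k_2-1,k_1+1)}$.

```python
# Brute-force check of Theorem 3.4 for n = 2.
from sympy import symbols, cancel, expand

X1, X2, t, u, v, w = symbols('X1 X2 t u v w')
DECORS = [frozenset(), frozenset('R'), frozenset('L'), frozenset('LR')]
FACTOR = {frozenset(): t, frozenset('R'): u,
          frozenset('L'): v, frozenset('LR'): w}

def signed_interval(a, b):
    if a <= b:
        return list(range(a, b + 1)), 1
    if b == a - 1:
        return [], 1
    return list(range(b + 1, a)), -1

def schur2(l1, l2):  # bialternant, arbitrary integers
    return cancel((X1**(l1 + 1) * X2**l2 - X2**(l1 + 1) * X1**l2) / (X1 - X2))

def brute(k1, k2):
    total = 0
    for d1 in DECORS:            # decoration of the bottom entry k1
        for d2 in DECORS:        # decoration of the bottom entry k2
            # interval for the top entry, per Definition 3.1
            vals, sgn = signed_interval(k1 + ('R' in d1), k2 - ('L' in d2))
            for a in vals:
                for d0 in DECORS:            # decoration of the top entry
                    e1 = a + ('R' in d0) - ('L' in d0)
                    e2 = (k1 + k2 - a + ('R' in d1) + ('R' in d2)
                          - ('L' in d1) - ('L' in d2))
                    total += (sgn * FACTOR[d0] * FACTOR[d1] * FACTOR[d2]
                              * X1**e1 * X2**e2)
    return total

def theorem(k1, k2):
    return ((t + u * X1 + v / X1 + w) * (t + u * X2 + v / X2 + w)
            * (t * schur2(k2, k1) + u * schur2(k2, k1 + 1)
               + v * schur2(k2 - 1, k1) + w * schur2(k2 - 1, k1 + 1)))

for k1, k2 in [(0, 1), (1, 3), (0, 4)]:
    assert expand(brute(k1, k2) - theorem(k1, k2)) == 0
```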

Example 3.5. We illustrate the theorem on the example $(k_1,k_2,k_3)=(1,2,3)$. We list the $8$ Gelfand-Tsetlin patterns with bottom row $1,2,3$ and indicate the possible decorations (one will be listed twice with a disjoint set of decorations), where $L=\{\emptyset , \nwarrow ~\}$, $R=\{\emptyset , \nearrow \}$ and $LR= \{\emptyset ,\nwarrow ,\nearrow ,\nwarrow \!\!\!\!\!\;\!\! \nearrow \}$. On the right, we indicate the generating function restricted to the particular underlying Gelfand-Tsetlin pattern with the indicated decorations, where we use

$$ \begin{align*}L(X)=t+ v X^{-1}, R(X)=t+u X \quad \text{and} \quad LR(X) = t + u X + v X^{-1} + w. \end{align*} $$
$$ \begin{align*} \begin{array}{cl} \begin{array}{ccccc} && {LR \atop 1} && \\ & {L \atop 1} && {LR \atop 2} & \\ {L \atop 1} && {L \atop 2} && {LR \atop 3} \end{array} & X_1 X_2^2 X_3^3 LR(X_1) L(X_2) LR(X_2) L(X_3)^2 LR(X_3) \\ \begin{array}{ccccc} && {LR \atop 2} && \\ & {LR \atop 1} && {R \atop 2} & \\ {L \atop 1} && {L \atop 2} && {LR \atop 3} \end{array} & X_1^2 X_2 X_3^3 LR(X_1) LR(X_2) R(X_2) L(X_3)^2 LR(X_3) \\ \begin{array}{ccccc} && {LR \atop 1} && \\ & {L \atop 1} && {LR \atop 3} & \\ {L \atop 1} && {LR \atop 2} && {R \atop 3} \end{array} & X_1 X_2^3 X_3^2 LR(X_1) L(X_2) LR(X_2) L(X_3) LR(X_3) R(X_3) \\ \begin{array}{ccccc} && {LR \atop 2} && \\ & {LR \atop 1} && {LR \atop 3} & \\ {L \atop 1} && {LR \atop 2} && {R \atop 3} \end{array} & X_1^2 X_2^2 X_3^2 LR(X_1) LR(X_2)^2 L(X_3) LR(X_3) R(X_3) \\ \begin{array}{ccccc} && {LR \atop 3} && \\ & {LR \atop 1} && {R \atop 3} & \\ {L \atop 1} && {LR \atop 2} && {R \atop 3} \end{array} & X_1^3 X_2 X_3^2 LR(X_1) LR(X_2) R(X_2) L(X_3) LR(X_3) R(X_3) \\ \begin{array}{ccccc} && {LR \atop 2} && \\ & {L \atop 2} && {LR \atop 3} & \\ {LR \atop 1} && {R \atop 2} && {R \atop 3} \end{array} & X_1^2 X_2^3 X_3 LR(X_1) L(X_2) LR(X_2) LR(X_3) R(X_3)^2 \\ \begin{array}{ccccc} && {LR \atop 3} && \\ & {LR \atop 2} && {R \atop 3} & \\ {LR \atop 1} && {R \atop 2} && {R \atop 3} \end{array} & X_1^3 X_2^2 X_3 LR(X_1) LR(X_2) R(X_2) LR(X_3) R(X_3)^2 \\ \begin{array}{ccccc} && {LR \atop 2} && \\ & {L \atop 2} && {R \atop 2} & \\ {LR \atop 1} && {\emptyset \atop 2} && {LR \atop 3} \end{array} & X_1^2 X_2^2 X_3^2 LR(X_1) L(X_2) R(X_2) t LR(X_3)^2 \\ \begin{array}{ccccc} && {LR \atop 2} && \\ & {\{ \nearrow, \nwarrow \!\!\!\!\!\;\!\! \nearrow \} \atop 2} && { \{ \nwarrow, \nwarrow \!\!\!\!\!\;\!\! \nearrow \} \atop 2} & \\ {LR \atop 1} && {\emptyset \atop 2} && {LR \atop 3} \end{array} & -X_1^2 X_2^2 X_3^2 LR(X_1) (w+uX_2)(w+v X_2^{-1}) t LR(X_3)^2 \\ \end{array} \end{align*} $$

It is convenient for us to rewrite the formula from Theorem 3.4 as follows.

Corollary 3.6. The generating function of arrowed Gelfand-Tsetlin patterns with bottom row $k_1,\ldots ,k_n$  is

(3.3) $$ \begin{align} \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i \le j \le n}\left(v + w X_i + t X_j + u X_i X_j \right) \prod_{i=1}^{n} X_i^{k_i-1} \right] }{\prod_{1 \le i < j \le n} (X_j - X_i)}. \end{align} $$

Proof. Observe that

$$ \begin{align*} & \prod_{i=1}^{n} (t + u X_i + v X_i^{-1}+w) \prod_{1 \le i < j \le n}\left(t+ u {\operatorname{E}}_{k_i} + v {\operatorname{E}}_{k_j}^{-1} + w {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j}^{-1} \right) s_{(k_n,k_{n-1},\ldots,k_1)}(X_1,\ldots,X_n) \notag \\ \quad &= \prod_{i=1}^{n} (t+u X_i + v X_i^{-1}+w) \prod_{1 \le i < j \le n}\left(t + u {\operatorname{E}}_{k_i} + v {\operatorname{E}}_{k_j}^{-1} + w {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j}^{-1} \right) \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{i=1}^{n} X_i^{k_i+i-1} \right] }{\prod_{1 \le i < j \le n} (X_j - X_i)}. \end{align*} $$

This is further equal to

$$ \begin{align*} & \prod_{i=1}^{n} (t + u X_i + v X_i^{-1}+w) \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n}\left(t + u {\operatorname{E}}_{k_i} + v {\operatorname{E}}_{k_j}^{-1} + w {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j}^{-1} \right) \prod_{i=1}^{n} X_i^{k_i+i-1} \right] }{\prod_{1 \le i < j \le n} (X_j - X_i)} \notag \\ \quad &= \prod_{i=1}^{n} (t + u X_i + v X_i^{-1}+w) \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n}\left(t + u X_i + v X_j^{-1} + w X_i X_j^{-1} \right) \prod_{i=1}^{n} X_i^{k_i+i-1} \right] }{\prod_{1 \le i < j \le n} (X_j - X_i)} \notag \\ \quad &= \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i \le j \le n}\left(v + w X_i + t X_j + u X_i X_j \right) \prod_{i=1}^{n} X_i^{k_i-1} \right] }{\prod_{1 \le i < j \le n} (X_j - X_i)}, \end{align*} $$

and the assertion follows.
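For $n=2$, the equality of the expressions in Theorem 3.4 and Corollary 3.6 can also be confirmed symbolically in all of $t,u,v,w$; a sketch (ours, with bottom row $(1,3)$ as an arbitrary choice):

```python
# Theorem 3.4 versus (3.3) for n = 2 and bottom row (k1, k2) = (1, 3).
from sympy import symbols, cancel, expand

X1, X2, t, u, v, w = symbols('X1 X2 t u v w')

def schur2(l1, l2):
    return cancel((X1**(l1 + 1) * X2**l2 - X2**(l1 + 1) * X1**l2) / (X1 - X2))

k1, k2 = 1, 3
thm = ((t + u * X1 + v / X1 + w) * (t + u * X2 + v / X2 + w)
       * (t * schur2(k2, k1) + u * schur2(k2, k1 + 1)
          + v * schur2(k2 - 1, k1) + w * schur2(k2 - 1, k1 + 1)))

# (3.3): ASym[ prod_{i<=j} (v + w X_i + t X_j + u X_i X_j) X1^(k1-1) X2^(k2-1) ]
f = ((v + w * X1 + t * X1 + u * X1**2) * (v + w * X1 + t * X2 + u * X1 * X2)
     * (v + w * X2 + t * X2 + u * X2**2) * X1**(k1 - 1) * X2**(k2 - 1))
cor = cancel((f - f.subs({X1: X2, X2: X1}, simultaneous=True)) / (X2 - X1))
assert expand(thm - cor) == 0
```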

Remark 3.7. Suppose $(k_1-1,k_2-1,\ldots,k_n-1)$ is a partition (allowing zero parts). Then, when setting $u=v=0$ and $w=1$, and replacing t by $-t$ in (3.3), we obtain the Hall-Littlewood polynomials [22] up to a factor that is a rational function in t.

3.2. Generating function with respect to a Schur polynomial weight

We are now ready to obtain our first interpretation. Multiplying (1.4) and (1.8) by $\prod_{i=1}^{n} (X_i^{-1} + 1+w + X_i)$ gives

(3.4) $$ \begin{align} &\frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i \le j \le n} (1+ w X_i + X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n} X_1^{k_1-1} X_2^{k_2-1} \cdots X_n^{k_n-1} \right]}{\prod_{1 \le i < j \le n} (X_j-X_i)} \nonumber\\ & \qquad\qquad\qquad\qquad= \prod_{i=1}^{n} (X_i^{-1} + 1+w + X_i) \prod_{i=1}^{n} \frac{1}{1-X_i} \prod_{1 \le i < j \le n} \frac{1+X_i + X_j + w X_i X_j}{1-X_i X_j}, \end{align} $$

and

(3.5) $$ \begin{align} & \frac{\mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i \le j \le n} (1+w X_i+X_j + X_i X_j) \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} X_1^{k_1-1} X_2^{k_2-1} \cdots X_n^{k_n-1} \right]}{\prod_{1 \le i < j \le n} (X_j-X_i)}\nonumber \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad = \prod_{i=1}^{n} (X_i^{-1} + 1+w + X_i) \nonumber\\ &\qquad \times \frac{\det_{1 \le i, j \le n} \left( X_i^{j-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{m+2n-j} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)}{\prod\limits_{i=1}^n (1-X_i) \prod\limits_{1 \le i < j \le n} (1-X_i X_j)(X_j-X_i)}, \end{align} $$

respectively, and we can now interpret the left-hand sides as the generating function of arrowed Gelfand-Tsetlin patterns with non-negative strictly increasing bottom row, where we need to specialize $t=u=v=1$ in the weight, and in the second case, the entries in the bottom row are less than or equal to m.

Remark 3.8.

  1. For $\mathbf {X}=(X_1,\ldots ,X_n)$, let $\mathcal {AGTP}(t,u,v,w;\mathbf {k};\mathbf {X})$ denote the generating function of arrowed Gelfand-Tsetlin patterns with bottom row $\mathbf {k}=(k_1,\ldots ,k_n)$. Then, using (3.3), it follows by changing $(X_1,\ldots ,X_n)$ to $(X_n,X_{n-1},\ldots ,X_1)$ that

    $$ \begin{align*}\mathcal{AGTP}(t,u,v,w;\mathbf{k};\mathbf{X}) = (-1)^{\binom{n}{2}} \mathcal{AGTP}(w,u,v,t;\overline{\mathbf{k}};\mathbf{X}), \end{align*} $$
    where $\overline {\mathbf {k}}=(k_n,\ldots ,k_1)$ . Therefore, the left-hand sides are up to the sign $(-1)^{\binom {n}{2}}$ also the generating function of AGTPs with strictly decreasing bottom row of non-negative integers, where we need to set $u=v=w=1$ and replace t by w in the weight, and, in the case of (1.8), the entries in the bottom row are less than or equal to m.
  2. For the case $t=0$, a possibility is worked out in [13] to get around the multiplication with the extra factor $\prod_{i=1}^{n} (X_i^{-1} + 1+w + X_i)$ by working with ‘down arrows’ as decorations. In our application, this can be used in combination with our second combinatorial interpretation concerning AGTPs with strictly decreasing bottom row to give combinatorial interpretations of the left-hand sides of (1.4) and (1.8) in the special case $w=0$. It is an open problem to explore whether the down-arrowed arrays can be extended to general t.

In Appendix B, we develop some other (maybe less interesting) combinatorial interpretations of the left-hand sides, which we include for the sake of completeness.

4. Combinatorial interpretations of the right-hand sides of (3.4) and (3.5)

4.1. Right-hand side of (3.4)

For the right-hand side of (3.4), which is

(4.1) $$ \begin{align} \prod_{i=1}^{n} \frac{X_i^{-1}+1+w+X_i}{1-X_i} \prod_{1 \le i < j \le n} \frac{1+X_i + X_j + w X_i X_j}{1-X_i X_j}, \end{align} $$

it is straightforward to give a combinatorial interpretation as a generating function. Recall that, in the ordinary case (1.1), the right-hand side $\prod _{i=1}^{n} \frac {1}{1-X_i} \prod _{1 \le i < j \le n} \frac {1}{1-X_i X_j}$ is interpreted as two-line arrays with entries in $\{1,2,\ldots ,n\}$ , ordered lexicographically, with the top element of each column being greater than or equal to its bottom element. The exponent of $X_i$ in the weight is computed by subtracting from the total number of i’s in the two-line array the number of columns with i as top and bottom element.

To extend this to an interpretation of (4.1), we have one additional column $\binom {j}{i}$ for each pair $i \le j$, which is either overlined, underlined, both or neither. An overlined column $\binom {j}{i}$ with $i<j$ contributes an additional multiplicative $X_j$ to the weight, while an underlined column with i as bottom element contributes an additional $X_i$; if a column is both overlined and underlined, then it contributes, in addition to $X_i X_j$, the weight w. Moreover, an overlined column $\binom {i}{i}$ contributes an additional $X_i$ to the weight, and if it is underlined, then it contributes $X_i^{-1}$ to the weight; again, if the column is both overlined and underlined, then it also contributes w. In both cases, if the column is neither underlined nor overlined, it contributes nothing in addition.

4.2. Right-hand side of (3.5)

The following theorem provides an interpretation of the right-hand side of (3.5) as a weighted count of (partly non-intersecting) lattice paths. This right-hand side differs from the right-hand side of (1.8) by a simple multiplicative factor. We work as long as possible with general w; however, it will turn out that we need to specialize to $w=0,1$ at some point to obtain a nicer interpretation. We present two different proofs to obtain the result, where the second one is only sketched.

Figure 1 seeks to illustrate the theorem in the case that m is odd.

Figure 1 An example of families of lattice paths in Theorem 4.1.

Theorem 4.1. (1) Assume that $m=2l+1$. Then the right-hand side of (3.5) has the following interpretation as a weighted count of families of n lattice paths.

  • The i-th lattice path starts in one point in the set $A_i=\{(-3i+1,-i+1),(-i+1,-3i+1)\}$ , $i=1,2,\ldots ,n$ , and the end points of the paths are $E_j=(n-j+l+1,j-l-2)$ , $j=1,2,\ldots ,n$ .

  • Below and on the line $x+y=0$ , the step set is $\{(1,1),(-1,1)\}$ for steps that start in $(-3i+1,-i+1)$ , and it is $\{(1,1),(1,-1)\}$ for steps that start in $(-i+1,-3i+1)$ . Steps of type $(-1,1)$ and $(1,-1)$ with distance $0,2,4,\ldots $ from $x+y=0$ are equipped with the weights $X_1,X_2,X_3,\ldots $ , respectively, while such steps with distance $1,3,5,\ldots $ are equipped with the weights $X_1^{-1},X_2^{-1},X_3^{-1},\ldots $ , respectively.

  • Above the line $x+y=0$ , the step set is $\{(1,0),(0,1)\}$ . Above the line $x+y=j-1$ , horizontal steps of the path that ends in $E_j$ are equipped with the weight w.

  • The paths can be assumed to be non-intersecting below the line $x+y=0$ . In case $w=1$ , we can also assume them to be non-intersecting above the line $x+y=0$ . In case $w=0$ , $E_j$ can be replaced by $E^{\prime }_j=(n-j+l+1,2j-n-l-2)$ , $j=1,2,\ldots ,n$ , and then we can also assume the paths to be non-intersecting above the line $x+y=0$ .

  • The sign of a family of paths is the sign of the permutation $\sigma $ with the property that the i-th path connects $A_i$ to $E_{\sigma (i)}$, with an extra contribution of $-1$ if we choose $(-i+1,-3i+1)$ from $A_i$. Moreover, we have an overall factor of

    $$ \begin{align*}(-1)^{\binom{n+1}{2}} \prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i)(1+X_i). \end{align*} $$
  • In case $w=0,1$ , when restricting to non-intersecting paths, let $1 \le i_1 < i_2,\ldots < i_m < n$ be the indices for which we chose $(-3i+1,-i+1)$ from $A_i$ . Then the sign can assumed to be $(-1)^{i_1+\ldots +i_m}$ , and the overall factor is

    $$ \begin{align*}\prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i)(1+X_i). \end{align*} $$

(2) Assume that $m=2l$ . Then, to obtain an interpretation for the right-hand side of (3.5), we only need to replace $E_j$ by a set of two possible endpoints $E_j=\{(n-j+l+1,j-l-2),(n-j+l,j-l-1)\}$ . The overall factor is

$$ \begin{align*}(-1)^{\binom{n+1}{2}} \prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i) \end{align*} $$

in the case when we do not specialize w. The endpoints are replaced by $E^{\prime }_j=\{(n-j+l+1,2j-n-l-2),(n-j+l,2j-n-l-1)\}$ if $w=0$. In case $w=0,1$, if we restrict to non-intersecting paths and the sign is taken care of as above, then the overall factor is

$$ \begin{align*}\prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i). \end{align*} $$

We discuss the weight and the sign on the example in Figure 1. The weights that come from the individual paths are

$$ \begin{align*}X_1^{-1} \cdot X_1^{-1} \cdot X_2 \cdot X_1 X_3 \cdot X_2 X_3^{-1} \cdot X_5^{-2}, \end{align*} $$

where the factors are arranged in a manner that the i-th factor is the weight of the path that starts in the set $A_i$ . To compute the sign, observe that $\sigma = (6 \, 5 \, 4 \, 3 \, 2 \, 1)$ in one-line notation so that $\operatorname {\mathrm {sgn}} \sigma = -1$ and that we choose the second starting point in $A_i$ except for $i=1$ , so that the total sign is $(-1)\cdot (-1)^5=1$ .

In the case that m is odd, we always need to choose the second lattice point in $A_i$ if $l \ge n-2$ because then all $E_i$ have a non-positive y-coordinate, and this implies that they cannot be reached by any of the first lattice points in $A_i$ since any lattice path starting from the first lattice point in $A_i$ intersects the line $x+y=0$ in a lattice point with positive y-coordinate. This implies that, in the non-intersecting case, the sign is always $1$ . In the case that m is even, the condition is $l \ge n-1$ .

In these cases, and when in addition $w=0$, we can easily translate the lattice paths into pairs of plane partitions. The case $m=2l+1$ is illustrated in Figure 2, while the case $m=2l$ is illustrated in Figure 3. A similar result can in principle be derived for the case $w=1$, but we omit this here.

Figure 2 Illustration of Corollary 4.2 (1) for $n=7$ and $l=12$ .

Figure 3 Illustration of Corollary 4.2 (2) for $n=7$ and $l=13$.

Corollary 4.2. Let $w=0$ .

(1) Assume that $m=2l+1$. In case $l \ge n-2$, the right-hand side of (3.5) is the generating function of pairs $(P,Q)$ of plane partitions of shapes $\lambda, \mu$, respectively, where $\mu$ is the complement of $\lambda$ in the $n \times l$-rectangle, P is a column-strict plane partition such that the entries in the i-th row are bounded by $2n+2-2i$, and Q is a row-strict plane partition of positive integers such that the entries in the i-th row are bounded by $n-i$. The weight is

$$ \begin{align*} \prod_{i=1}^n X_i^l (X_i^{-1} +1 + X_i)(1+X_i) X_i^{\# \text{ of } 2i-1 \text{ in } P} X_i^{- \, \# \text{ of } 2i \text{ in } P}. \end{align*} $$

(2) Assume that $m=2l$. In case $l \ge n-1$, the right-hand side of (3.5) is the generating function of pairs $(P,Q)$ of plane partitions of (straight) shape $\lambda$ and skew shape $\mu$, respectively, such that $\mu$ is the complement of $\lambda$ in the $n \times (l-1)$-rectangle after possibly deleting the first column of $\mu$, P is a column-strict plane partition such that the entries in the i-th row are bounded by $2n+2-2i$, and Q is a row-strict plane partition such that the entries in the i-th row are bounded by $n-i$. The weight is

$$ \begin{align*}\prod_{i=1}^n X_i^l (X_i^{-1} +1 + X_i) X_i^{\# \text{ of } 2i-1 \text{ in } P} X_i^{- \, \# \text{ of } 2i \text{ in } P}. \end{align*} $$

Proof. We consider the case that m is odd. Assume that $1 \le k_1 < k_2 < \ldots < k_n$ are chosen such that $(k_i,-k_i)$ is the last point in the intersection of the line $x+y=0$ with the path that connects $A_i$ to $E^{\prime }_{n+1-i}$ when traversing the path from $A_i$ to $E^{\prime }_{n+1-i}$. Note that the portion of the path from $A_i$ to $(k_i,-k_i)$ has $k_i-i$ steps of type $(1,-1)$ and $2i-1$ steps of type $(1,1)$. These portions correspond to the plane partition P as follows: the i-th path corresponds to the $(n+1-i)$-th row, where the $(1,-1)$-steps correspond to the parts; we fill the cells in the Ferrers diagram from left to right when traversing the path from $A_i$ to $(k_i,-k_i)$, and a $(1,-1)$-step at distance d from $x+y=0$ gives the entry $d+1$. It follows that the length of row i is $k_{n+1-i}-n-1+i$ and that the entries in row i are bounded by $2n+2-2i$.

Now the portion of the path from $(k_i,-k_i)$ to $E^{\prime }_{n+1-i}$ corresponds to the i-th row of the plane partition Q. More precisely, the horizontal steps correspond to the parts; we fill the cells in the Ferrers diagram from right to left when traversing the path from $(k_i,-k_i)$ to $E^{\prime }_{n+1-i}$, and the j-th step gives the entry j. Note that there are $i-k_i+l$ steps of type $(1,0)$ in this portion, while there are $n-i$ steps in total, so that the length of the i-th row is $i-k_i+l$, and the entries in row i are bounded by $n-i$.

The case that m is even is very similar and therefore omitted here.

Remark 4.3. (1) The plane partitions P in the corollary are in easy bijection with symplectic tableaux as defined in [Reference Koike and Terada19, Section 4]. Also, the weight is, up to an overall multiplicative factor, essentially the weight that is used for symplectic tableaux. As a consequence, the corollary can be interpreted as providing the expansion of the generating function of arrowed Gelfand-Tsetlin patterns into symplectic characters. This is in the vein of main results in [Reference Fischer and Schreier-Aigner14] and in [Reference Fischer and Höngesberg7, Remark 2.6].

(2) In the case m is odd, the plane partitions Q are in easy bijective correspondence with $2n \times 2n \times 2n$ totally symmetric self-complementary plane partitions. The bijection is provided in [Reference Fischer and Höngesberg7, Remark 2.6]. In the case m is even, we place the part $n+1-i$ into the cell in the i-th row of the inner shape, and that way, we obtain plane partitions that are in easy bijective correspondence with $(2n+2) \times (2n+2) \times (2n+2)$ totally symmetric self-complementary plane partitions.

4.3. The cases $n=2$ and $m=2,3$

In this section, we give a list of all objects for the left-hand side and right-hand side of (3.5) in the case $n=2$ and $m=2,3$ . We start with the case that $m=3$ , since this is easier on the right-hand side.

Note that $m=3$ implies $l=1$ . The arrowed monotone triangles are as follows, using the notation from Section 3.

$$ \begin{align*} \begin{array}{ccc} \kern-81pt\begin{array}{ccc} & {LR \atop 0} & \\ {L \atop 0} && {LR \atop 1} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {LR \atop 0} && {R \atop 1} \end{array}, \begin{array}{ccc} & {LR \atop 0} & \\ {L \atop 0} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {LR \atop 0} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 0} && {R \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 0} & \\ {L \atop 0} && {LR \atop 3} \end{array},\\[12pt]\kern-18pt\begin{array}{ccc} & {LR \atop 1} & \\ {LR \atop 0} && {LR \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 0} && {LR \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 3} & \\ {LR \atop 0} && {R \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {L \atop 1} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 1} && {R \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {L \atop 1} && {LR \atop 3} \end{array}, \\[12pt]\kern165pt\begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 1} && {LR \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 3} & \\ {L \atop 1} && {LR \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {L \atop 2} && {LR \atop 3} \end{array}, \begin{array}{ccc} & {LR \atop 3} & \\ {LR \atop 2} && {R \atop 3} \end{array} \end{array} \end{align*} $$

The weights are

(4.2) $$ \begin{align} & X_2(1+X_2^{-1}), X_1(1+X_2), X_2^2(1+X_2^{-1}), X_1 X_2(X_2^{-1}+1+w+X_2), X_1^2(1+X_2), X_2^3(1+X_2^{-1}), \nonumber\\ & X_1 X_2^2(X_2^{-1}+1+w+X_2), X_1^2 X_2(X_2^{-1}+1+w+X_2), X_1^3(1+X_2), X_1 X_2^2(1+X_2^{-1}), X_1^2 X_2(1+X_2), \nonumber\\ & X_1 X_2^3 (1+X_2^{-1}), X_1^2 X_2^2 (X_2^{-1} + 1 + w + X_2), X_1^3 X_2 (1+X_2), X_1^2 X_2^3(1+X_2^{-1}), X_1^3 X_2^2(1+X_2), \end{align} $$

up to the overall factor $LR(X_1) LR(X_2)$ , setting $t=u=v=1$ .

The corresponding paths from Theorem 4.1 are as follows.

The weights are

(4.3) $$ \begin{align} -w,-X_1,-X_1^{-1},-X_2,-X_2^{-1},-X_1^{-1} -X_2, -X_1^{-1} X_2^{-1},-1,-X_1 X_2, -X_1 X_2^{-1}, \end{align} $$

up to the overall factor

$$ \begin{align*}&- X_1 X_2 (1+X_1)(1+X_2)(X_1^{-1} + 1 + w + X_1)(X_2^{-1} +1 +w + X_2)\\[3pt]&\quad= -X_1 X_2 (1+X_1)(1+X_2)LR(X_1) LR(X_2), \end{align*} $$

and, as can easily be seen, the sum of weights agrees with those for the arrowed Gelfand-Tsetlin patterns.

Now we consider the case $m=2$ . We have $l=1$ . The arrowed monotone triangles are as follows:

$$ \begin{align*} \begin{array}{ccc} & {LR \atop 0} & \\ {L \atop 0} && {LR \atop 1} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {LR \atop 0} && {R \atop 1} \end{array}, \begin{array}{ccc} & {LR \atop 0} & \\ {L \atop 0} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {LR \atop 0} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 0} && {R \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 1} & \\ {L \atop 1} && {LR \atop 2} \end{array}, \begin{array}{ccc} & {LR \atop 2} & \\ {LR \atop 1} && {R \atop 2.} \end{array}\end{align*} $$

The weights are

$$ \begin{align*} X_2(1+X_2^{-1}), X_1(1+X_2), X_2^2(1+X_2^{-1}), X_1 X_2(X_2^{-1}+1+w+X_2), X_1^2(1+X_2),\, & X_1 X_2^2(1+X_2^{-1}), \\[3pt] & X_1^2 X_2(1+X_2), \end{align*} $$

up to the overall factor $LR(X_1) LR(X_2)$ , setting $t=u=v=1$ .

As for the lattice paths, the situation is very similar to the case $m=3,l=1$ , only $E_1=(3,-2)$ is replaced by the set $E_1=\{(2,-1),(3,-2)\}$ and $E_2=(2,-1)$ is replaced by the set $E_2=\{(1,0),(2,-1)\}$ . It follows that all the families of paths from the case $m=3$ and $l=1$ also appear here. In addition, we have the following families of lattice paths.

Thus, in addition to the weights in (4.3), these families of lattice paths give

$$ \begin{align*}-1,-w,-X_1,-X_1^{-1},-X_2,-X_2^{-1},-1,w, \end{align*} $$

up to the overall factor

$$ \begin{align*}- X_1 X_2 (X_1^{-1} + 1 + w + X_1)(X_2^{-1} +1 +w + X_2)= -X_1 X_2 LR(X_1) LR(X_2), \end{align*} $$

where the last two weights come from the last picture, first by interpreting the endpoint of the path that starts in $A_1$ as element of $E_2$ and second as element of $E_1$ .

4.4. First proof of Theorem 4.1

The approach of the first proof of Theorem 4.1 is closely related to the approach we used in the proof of Theorem 2.2 in [Reference Fischer and Höngesberg7].

We consider the following bases for Laurent polynomials in X that are invariant under the transformation $X \to X^{-1}$ : let

$$ \begin{align*}q_i(X)=\frac{X^i-X^{-i}}{X-X^{-1}} \qquad \text{and} \qquad b_i(X)=(X+X^{-1})^i. \end{align*} $$

Then $(q_i(X))_{i \ge 0}$ and $(b_i(X))_{i \ge 0}$ are two such bases. It is not hard to verify that

(4.4) $$ \begin{align} q_m(X)= \sum_{r=0}^{(m-1)/2} (-1)^r \binom{m-r-1}{r} b_{m-1-2r}(X). \end{align} $$
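Identity (4.4) is easy to verify by computer for small m; the following is a minimal Python/sympy sketch (ours, not part of the argument):

```python
import sympy as sp
from math import comb

X = sp.symbols('X')
q = lambda i: (X**i - X**(-i)) / (X - 1/X)   # q_i(X)
b = lambda i: (X + 1/X)**i                   # b_i(X)

for m in range(1, 10):
    rhs = sum((-1)**r * comb(m - r - 1, r) * b(m - 1 - 2*r)
              for r in range((m - 1)//2 + 1))
    assert sp.simplify(q(m) - rhs) == 0      # (4.4) holds for this m
print("(4.4) verified for m = 1,...,9")
```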

In order to derive a combinatorial interpretation of the right-hand side of (3.5), consider

(4.5) $$ \begin{align} \det_{1 \le i, j \le n} \left( X_i^{j-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{m+2n-j} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right). \end{align} $$

We start by considering the case where m is odd: we set $m=2l+1$, pull out $\prod _{i=1}^n X_i^{l+n}$ and obtain

(4.6) $$ \begin{align} \prod_{i=1}^n X_i^{l+n} \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right). \end{align} $$

The entry in the i-th row and j-th column of the matrix underlying the determinant is obtained from

$$ \begin{align*} & \frac{X^{j-l-n-1} (1+X)^{j-1} (1+w X)^{n-j} - X^{-j+l+n+1} (1+X^{-1})^{j-1} (1+w X^{-1})^{n-j}}{X-X^{-1}} \\ &\qquad\qquad\qquad\qquad\qquad\qquad = \sum_{p,q \ge 0} \binom{j-1}{p} \binom{n-j}{q} w^q \frac{X^{j-l-n+p+q-1}-X^{-j+l+n-p-q+1}}{X-X^{-1}} \end{align*} $$

by multiplying with $X-X^{-1}$ and then setting $X=X_i$ . Note that this expression is invariant under replacing X by $X^{-1}$ . From (4.4), it follows that this is further equal to

$$ \begin{align*} \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) & (-1)^r w^q \binom{j-1}{p} \binom{n-j}{q} \\ & \times \binom{|j-l-n+p+q-1|-r-1}{r} b_{|j-l-n+p+q-1|-1-2r}(X). \end{align*} $$

We apply the following lemma. A proof can be found in [Reference Fischer and Schreier-Aigner13, Lemma 7.2]. Note that the lemma also involves complete homogeneous symmetric polynomials $h_k$ with negative k as defined in [Reference Fischer and Schreier-Aigner13, Section 5]. Concretely, we define $h_k(X_1,\ldots ,X_n)=0$ for $k=-1,-2,\ldots ,-n+1$ and

(4.7) $$ \begin{align} h_k(X_1,\ldots,X_n) = (-1)^{n+1} X_1^{-1} \ldots X_n^{-1} h_{-k-n}(X_1^{-1},\ldots,X_n^{-1}) \end{align} $$

for $k \le -n$ . Note that a consequence of this definition is that the latter relation is true for any k.

Lemma 4.4. Let $f_j(Y)$ be formal Laurent series for $1 \le j \le n$ , and define

$$ \begin{align*}f_j[Y_1,\ldots,Y_i]=\sum_{k \in \mathbb{Z}} \langle Y^{k} \rangle f_j(Y) \cdot h_{k-i+1}(Y_1,\ldots,Y_i), \end{align*} $$

where $\langle Y^{k} \rangle f_j(Y)$ denotes the coefficient of $Y^{k}$ in $f_j(Y)$ and $h_{k-i+1}$ denotes the complete homogeneous symmetric polynomial of degree $k-i+1$ . Then

$$ \begin{align*}\frac{\det_{1 \le i, j \le n} \left( f_j(Y_i) \right) }{\prod_{1 \le i < j \le n} (Y_j - Y_i)} = \det_{1 \le i, j \le n} \left( f_j[Y_1,\ldots,Y_i] \right). \end{align*} $$

Noting that a Laurent polynomial in X that is invariant under the replacement $X \to X^{-1}$ can be written as a polynomial in $X+X^{-1}$, we use the lemma to rewrite the determinant in (4.6) as follows:

$$ \begin{align*} & \frac{ \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)} {\prod_{1 \le i < j \le n} (X_j + X_j^{-1} - X_i - X_i^{-1})} \\& \quad = \prod_{i=1}^n (X_i - X_i^{-1}) \det_{1 \le i, j \le n} \left( \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) (-1)^r w^q \binom{j-1}{p} \binom{n-j}{q} \right. \\& \qquad \left. \times\ \binom{|j-l-n+p+q-1|-r-1}{r} h_{|j-l-n+p+q-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) \right). \end{align*} $$

Now, as $X_j + X_j^{-1} - X_i - X_i^{-1}=(X_i-X_j)(1-X_i X_j)X_i^{-1} X_j^{-1}$ , in order to find a combinatorial interpretation for the right-hand side of (3.5), we need to find a combinatorial interpretation of

$$ \begin{align*} & (-1)^{\binom{n}{2}} \prod_{i=1}^{n} X_i^{l+1}(X_i^{-1}+1+w + X_i) (X_i - X_i^{-1})(1-X_i)^{-1} \\& \qquad \times \det_{1 \le i, j \le n} \left( \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) (-1)^r w^q \binom{j-1}{p} \binom{n-j}{q} \right. \\& \qquad\left. \times\ \binom{|j-l-n+p+q-1|-r-1}{r} h_{|j-l-n+p+q-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) \vphantom{\sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0}}\right)\\& \quad =(-1)^{\binom{n+1}{2}} \prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i)(1+X_i) \\& \qquad\times \det_{1 \le i, j \le n} \left( \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) (-1)^r w^q \binom{j-1}{p} \binom{n-j}{q} \right. \\& \qquad\left. \times\ \binom{|j-l-n+p+q-1|-r-1}{r} h_{|j-l-n+p+q-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) \vphantom{\sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0}}\right). \end{align*} $$

For this purpose, we find a combinatorial interpretation of the entry of the underlying matrix – that is,

$$ \begin{align*} \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-i-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) (-1)^r w^q & \binom{j-1}{p} \binom{n-j}{q} \binom{|j-l-n+p+q-1|-r-1}{r} \\ & \times h_{|j-l-n+p+q-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) \end{align*} $$

in terms of a lattice path generating function. We simplify the expression using the transformation $q \to n-j-q$:

(4.8) $$ \begin{align} \sum_{p,q,r \ge 0 \atop |p-q-l-1|-i-2r \ge 0} \operatorname{\mathrm{sgn}}(p-q-l-1) (-1)^r w^{n-j-q} & \binom{j-1}{p} \binom{n-j}{q} \binom{|p-q-l-1|-r-1}{r} \nonumber\\ & \times h_{|p-q-l-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}). \end{align} $$

We simplify the expression further using the following lemma. A combinatorial proof of it using a sign-reversing involution is provided in [Reference Fischer and Höngesberg7, Lemma 7.7].

Lemma 4.5. Let $a,i$ be positive integers with $i \le a$ . Then

$$ \begin{align*}\sum_{r=0}^{(a-i)/2} (-1)^r \binom{a-r-1}{r} h_{a-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) = h_{a-i}(X_1,X_1^{-1},\ldots,X_i,X_i^{-1}). \end{align*} $$
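Lemma 4.5 can also be spot-checked by brute force; here is a minimal Python/sympy sketch (ours; the brute-force function h computes the complete homogeneous symmetric polynomial directly from its definition):

```python
from itertools import combinations_with_replacement
from math import comb, prod
import sympy as sp

def h(k, xs):
    # complete homogeneous symmetric polynomial h_k(xs); h_0 = 1, h_k = 0 for k < 0
    if k < 0:
        return sp.Integer(0)
    return sum((prod(c) for c in combinations_with_replacement(xs, k)), sp.Integer(0))

X = sp.symbols('X1:4')
i, a = 2, 6                                  # any positive integers with i <= a
lhs = sum((-1)**r * comb(a - r - 1, r) * h(a - i - 2*r, [x + 1/x for x in X[:i]])
          for r in range((a - i)//2 + 1))
rhs = h(a - i, [v for x in X[:i] for v in (x, 1/x)])
print(sp.expand(lhs - rhs) == 0)             # expect True
```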

Therefore, the sum in (4.8) is equal to

(4.9) $$ \begin{align} \sum_{p,q} \operatorname{\mathrm{sgn}}(p-q-l-1) w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} h_{|p-q-l-1|-i}(X_1,X_1^{-1},\ldots,X_i,X_i^{-1}). \end{align} $$

We claim the following: If $p-q-l-1 \ge 0$ , then (4.9) is the generating function of lattice paths from $(-3i+1,-i+1)$ to $(n-j+l+1,j-l-2)$ such that the following is satisfied.

  • Below and on the line $x+y=0$, the step set is $\{(1,1),(-1,1)\}$. Steps of type $(-1,1)$ with distances $0,2,4,\ldots $ from $x+y=0$ are equipped with the weights $X_1,X_2,X_3,\ldots $, respectively, while steps of type $(-1,1)$ with distances $1,3,5,\ldots $ are equipped with the weights $X_1^{-1},X_2^{-1},X_3^{-1},\ldots $, respectively.

  • Above the line $x+y=0$ , the step set is $\{(1,0),(0,1)\}$ . Above the line $x+y=j-1$ , horizontal steps are equipped with the weight w.

Namely, if we assume that there are q steps of type $(0,1)$ above the line $x+y=j-1$, and therefore $n-j-q$ steps of type $(1,0)$, then the path intersects the line $x+y=j-1$ in the lattice point $(l+1+q,j-l-2-q)$, assuming that the endpoint of the path is $(n-j+l+1,j-l-2)$, and there are $\binom {n-j}{q}$ such paths, each contributing $w^{n-j-q}$ to the weight. Note that this weight depends on j if $w \not = 0,1$, and this causes complications when applying the Lindström-Gessel-Viennot lemma.

If we further assume that there are p steps of type $(1,0)$ below the line $x+y=j-1$ , and therefore, $j-p$ steps of type $(0,1)$ , then the last lattice point of such a path on the line $x+y=0$ when traversing the path from $(-3i+1,-i+1)$ to $(n-j+l+1,j-l-2)$ is $(-p+q+l+1,p-q-l-1)$ . Note that by the assumption $p-q-l-1 \ge 0$ , the lattice point $(-p+q+l+1,p-q-l-1)$ is in the second quadrant – that is, $\{(x,y)|x \le 0, y \ge 0 \}$ .

Finally, lattice paths from $(-3i+1,-i+1)$ to $(-p+q+l+1,p-q-l-1)$ with step set $\{(1,1),(-1,1)\}$ have $p-q-l-1-i$ steps of type $(-1,1)$ and $2i-1$ steps of type $(1,1)$ . The generating function of such paths is clearly $h_{p-q-l-1-i}(X_1,X_1^{-1},\ldots ,X_i,X_i^{-1})=h_{|p-q-l-1|-i}(X_1,X_1^{-1},\ldots ,X_i,X_i^{-1})$ .

The situation is very similar if $p-q-l-1 \le 0$, except that we need to replace the starting point $(-3i+1,-i+1)$ by $(-i+1,-3i+1)$, and the step set is $\{(1,1),(1,-1)\}$ below the line $x+y=0$. Again, we can assume that $(-p+q+l+1,p-q-l-1)$ is the last lattice point on the line $x+y=0$ when traversing the path from $(-i+1,-3i+1)$ to $(-p+q+l+1,p-q-l-1)$. In this case, $(-p+q+l+1,p-q-l-1)$ lies in the fourth quadrant $\{(x,y) \,|\, x \ge 0, y \le 0 \}$. We have $-p+q+l+1-i=|p-q-l-1|-i$ steps of type $(1,-1)$ and $2i-1$ steps of type $(1,1)$; thus, the generating function in this segment is also $h_{|p-q-l-1|-i}(X_1,X_1^{-1},\ldots ,X_i,X_i^{-1})$.
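The identification of the generating function of the path portions below the line $x+y=0$ with a complete homogeneous symmetric polynomial can be tested by brute force. The following Python/sympy sketch (our code; it assumes that the distance of a step from $x+y=0$ is $-(x+y)/2$, measured in units of diagonal steps) checks the first case, with $s=p-q-l-1 \ge 0$, for small parameters:

```python
from itertools import combinations, combinations_with_replacement
from math import prod
import sympy as sp

X = sp.symbols('X1:5')

def step_weight(d):
    # weight of a (-1,1)-step at distance d from x+y=0:
    # d = 0,2,4,... -> X_1, X_2, ...   and   d = 1,3,5,... -> X_1^{-1}, X_2^{-1}, ...
    return X[d // 2] if d % 2 == 0 else 1 / X[d // 2]

def paths_gf(i, s):
    # brute-force sum of weights over all paths from (-3i+1,-i+1) to (-s,s)
    # with 2i-1 steps of type (1,1) and s-i steps of type (-1,1)
    n_up, n_left = 2 * i - 1, s - i
    gf = sp.Integer(0)
    for left_pos in combinations(range(n_up + n_left), n_left):
        x, y, w = -3 * i + 1, -i + 1, sp.Integer(1)
        for t in range(n_up + n_left):
            if t in left_pos:
                w *= step_weight((-(x + y)) // 2)   # distance in diagonal units
                x, y = x - 1, y + 1
            else:
                x, y = x + 1, y + 1
        gf += w
    return gf

def h(k, xs):   # complete homogeneous symmetric polynomial, by brute force
    return sum((prod(c) for c in combinations_with_replacement(xs, k)), sp.Integer(0))

i, s = 2, 5
print(sp.expand(paths_gf(i, s) - h(s - i, [v for x in X[:i] for v in (x, 1/x)])) == 0)
```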

Consequently, we can conclude that the right-hand side of (3.5) has the following combinatorial interpretation: We consider families of n lattice paths from $A_i=\{(-3i+1,-i+1),(-i+1,-3i+1)\}$ , $i=1,2,\ldots ,n$ , to $E_j=(n-j+l+1,j-l-2)$ , $j=1,2,\ldots ,n$ , with steps sets and weights as described above. By the Lindström-Gessel-Viennot lemma [Reference Lindström20, Reference Gessel and Viennot15, Reference Gessel and Viennot16], the paths can be assumed to be non-intersecting on and below the line $x+y=0$ .

In case $w=0,1$ , we can also assume them to be non-intersecting. This is clear for $w=1$ . In case $w=0$ , we can assume that there are no steps of type $(1,0)$ above the line $x+y=j-1$ , and, therefore, we can also have $(n-j+l+1,2j-n-l-2)$ on the line $x+y=j-1$ as endpoint since above the line all the $n-j$ steps have to be of type $(0,1)$ . Whenever we choose $(-i+1,-3i+1)$ , this contributes $-1$ to the weight.

In the non-intersecting setting, suppose we choose $(-i+1,-3i+1)$ from $A_i$ precisely for $i \in \{i_1,\ldots ,i_m\}$ with $1 \le i_1 < \ldots < i_m \le n$; then the sign of the permutation $\sigma $ such that $A_i$ is connected to $E_{\sigma (i)}$ via the paths is $(-1)^{i_1+i_2+\ldots +i_m-m}$. This gives a total sign of $(-1)^{i_1+i_2+\ldots +i_m}$. Recall also that we have an additional overall weight of

$$ \begin{align*}(-1)^{\binom{n+1}{2}} \prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i)(1+X_i). \end{align*} $$

Combining the sign from above with $(-1)^{\binom {n+1}{2}}=(-1)^{1+2+\ldots +n}$ , the sign can also be computed as follows: suppose we choose $(-3i+1,-i+1)$ from $A_i$ precisely for $i_1,\ldots ,i_m$ . Then the sign is $(-1)^{i_1+i_2+\ldots +i_m}$ , and in this setting, the overall weight is

$$ \begin{align*}\prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i)(1+X_i). \end{align*} $$

This concludes the proof of the first part of Theorem 4.1.

Now we consider the case where m is even: We set $m=2l$ in (4.5), pull out $\prod _{i=1}^n X_i^{l+n}$ and obtain

$$ \begin{align*}\prod_{i=1}^n X_i^{l+n} \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right). \end{align*} $$

The entry in the i-th row and j-th column of the matrix underlying the determinant is obtained from

$$ \begin{align*} & \frac{X^{j-l-n-1} (1+X)^{j-1} (1+w X)^{n-j} - X^{-j+l+n} (1+X^{-1})^{j-1} (1+w X^{-1})^{n-j}}{1-X^{-1}} \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad= \sum_{p,q \ge 0} \binom{j-1}{p} \binom{n-j}{q} w^q \frac{X^{j-l-n+p+q-1}-X^{-j+l+n-p-q}}{1-X^{-1}}, \end{align*} $$

when multiplying with $1-X^{-1}$ and then setting $X=X_i$ . Now note that

$$ \begin{align*}\frac{X^k-X^{-k-1}}{1-X^{-1}} = q_{k+1}(X) + q_{k}(X) \end{align*} $$

for any integer k, so that we obtain

$$ \begin{align*}\sum_{p,q \ge 0} \binom{j-1}{p} \binom{n-j}{q} w^q \left(q_{j-l-n+p+q}(X) + q_{j-l-n+p+q-1}(X) \right). \end{align*} $$
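A quick sympy spot-check of the identity for $q_{k+1}(X)+q_k(X)$ (a sketch, not part of the proof):

```python
import sympy as sp

X = sp.symbols('X')
q = lambda i: (X**i - X**(-i)) / (X - 1/X)
for k in (-3, 0, 2, 5):
    assert sp.simplify((X**k - X**(-k - 1)) / (1 - 1/X) - q(k + 1) - q(k)) == 0
print("identity verified for the sampled k")
```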

It follows from (4.4) that this is

$$ \begin{align*} &\sum_{p,q,r \ge 0 \atop |j-l-n+p+q|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q) (-1)^r w^q \binom{j-1}{p} \\& \qquad\qquad\qquad\qquad\times \binom{n-j}{q} \binom{|j-l-n+p+q|-r-1}{r} b_{|j-l-n+p+q|-1-2r}(X) \\&\qquad\qquad + \sum_{p,q,r \ge 0 \atop |j-l-n+p+q-1|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(j-l-n+p+q-1) (-1)^r w^q \binom{j-1}{p} \binom{n-j}{q} \\&\qquad\qquad\qquad\qquad\qquad\qquad\quad \times \binom{|j-l-n+p+q-1|-r-1}{r} b_{|j-l-n+p+q-1|-1-2r}(X). \end{align*} $$

Also, here we simplify the expression using the replacement $q \to n-j-q$ and obtain

$$ \begin{align*} & \sum_{p,q,r \ge 0 \atop |-l+p-q|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(-l+p-q) (-1)^r w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} \binom{|-l+p-q|-r-1}{r} b_{|-l+p-q|-1-2r}(X) \\& \quad + \sum_{p,q,r \ge 0 \atop |-l+p-q-1|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(-l+p-q-1) (-1)^r w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} \binom{|-l+p-q-1|-r-1}{r}\\& \quad \times b_{|-l+p-q-1|-1-2r}(X). \end{align*} $$

This implies the following:

$$ \begin{align*} & \frac{ \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)} {\prod_{1 \le i < j \le n} (X_j + X_j^{-1} - X_i - X_i^{-1})} \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad = \prod_{i=1}^n (1-X_i^{-1}) \det_{1 \le i, j \le n} \left( a_{i,j} \right), \end{align*} $$

with

$$ \begin{align*} a_{i,j} &= \sum_{p,q,r \ge 0 \atop |p-q-l|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(p-q-l) (-1)^r w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} \\&\quad \times \binom{|p-q-l|-r-1}{r} h_{|p-q-l|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}) \\&\quad + \sum_{p,q,r \ge 0 \atop |p-q-l-1|-1-2r \ge 0} \operatorname{\mathrm{sgn}}(p-q-l-1) (-1)^r w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} \\&\quad \times \binom{|p-q-l-1|-r-1}{r} h_{|p-q-l-1|-i-2r}(X_1+X_1^{-1},\ldots,X_i+X_i^{-1}). \end{align*} $$

Using Lemma 4.5, we see that this is equal to

(4.10) $$ \begin{align} b_{i,j} &= \sum_{p,q \ge 0} \operatorname{\mathrm{sgn}}(p-q-l) w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} h_{|p-q-l|-i}(X_1,X_1^{-1},\ldots,X_i,X_i^{-1}) \ \nonumber\\& \quad + \sum_{p,q \ge 0} \operatorname{\mathrm{sgn}}(p-q-l-1) w^{n-j-q} \binom{j-1}{p} \binom{n-j}{q} h_{|p-q-l-1|-i}(X_1,X_1^{-1},\ldots,X_i,X_i^{-1}). \end{align} $$

Here we need to find a combinatorial interpretation of

$$ \begin{align*}(-1)^{\binom{n+1}{2}} \prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w + X_i) \det_{1 \le i, j \le n} \left( b_{i,j} \right). \end{align*} $$

The only modification compared to the odd case is that the endpoints have to be replaced by the following set of two endpoints $E_j=\{(n-j+l+1,j-l-2),(n-j+l,j-l-1)\}$ and that the overall factor is

$$ \begin{align*}\prod_{i=1}^{n} X_i^{l}(X_i^{-1}+1+w+ X_i) , \end{align*} $$

given that the sign is taken care of as above.

This concludes the proof of Theorem 4.1.

4.5. Right-hand side of (3.5), second proof

In this section, we sketch a second proof of Theorem 4.1. It is closely related to the proof of Theorem 2.4 in [Reference Fischer and Höngesberg7]. We only study the case $m=2l+1$ . Again, we need to consider

(4.11) $$ \begin{align} \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right). \end{align} $$

We have the following lemma.

Lemma 4.6. For $n \ge 1$ and $l \in \mathbb {Z}$ , we have

$$ \begin{align*} & \frac{1}{\prod_{1 \le i < j \le n} (X_j-X_i)(X_j^{-1} - X_i^{-1}) \prod_{i,j=1}^n (X_j^{-1} - X_i)} \\&\qquad \times \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} - X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right) \\&\qquad \times \det_{1 \le i, j \le n} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} + X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right) \\&\quad = \frac{(-1)^n}{2} \det_{1 \le i, j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k-i+1}-h_{k+i-1-2n}) \right) \\& \qquad \times \det_{1 \le i,j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k+i-1-n}+ h_{k-i+1-n}) \right). \end{align*} $$

Proof. We use

$$ \begin{align*}\det\ (A-B) \det(A+B) = \det \left( \begin{array}{c|c} A-B & B \\ \hline 0 & A+B \end{array} \right) = \det \left( \begin{array}{c|c} A-B & B \\ \hline B-A & A \end{array} \right) = \det \left( \begin{array}{c|c} A & B \\ \hline B & A \end{array} \right) \end{align*} $$

to see that the product of determinants on the left-hand side in the assertion of the lemma is equal to

$$ \begin{align*}\det\ \left( \begin{array}{c|c} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} \right)_{1 \le i, j \le n} & \left( X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)_{1 \le i,j \le n} \\ \hline \left( X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)_{1 \le i,j \le n} & \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} \right)_{1 \le i,j \le n} \end{array} \right). \end{align*} $$

Setting $X_{n+i} = X_i^{-1}$ for $i=1,2,\ldots ,n$, we can also write this as

$$ \begin{align*}\det\ \left( \begin{array}{c|c} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} \right)_{1 \le i \le 2n \atop 1 \le j \le n} & \left( X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \end{array} \right). \end{align*} $$

We apply Lemma 4.4 to

$$ \begin{align*}\frac{\det\ \left( \begin{array}{c|c} \left( X_i^{j-l-n-1} (1+X_i)^{j-1} (1+ w X_i)^{n-j} \right)_{1 \le i \le 2n \atop 1 \le j \le n} & \left( X_i^{-j+l+n+1} (1+X_i^{-1})^{j-1} (1+w X_i^{-1})^{n-j} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \end{array} \right)}{\prod_{1 \le i < j \le 2n} (X_j-X_i)} \end{align*} $$

and obtain

$$ \begin{align*} & \det\ \left( \left. \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1}(X_1,\ldots,X_i) \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right| \right. \\ &\qquad\qquad\qquad\qquad\qquad\qquad \left. \left( \sum_{k,q} \binom{j-1}{-j-k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1}(X_1,\ldots,X_i) \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right). \end{align*} $$

We multiply from the left with the following matrix:

$$ \begin{align*}(h_{j-i}(X_j,X_{j+1},\ldots,X_{2n}))_{1 \le i,j \le 2n} \end{align*} $$

with determinant $1$ . For this purpose, note that

$$ \begin{align*}\sum_{s=1}^{2n} h_{s-i}(X_s,X_{s+1},\ldots,X_{2n}) h_{k-s+1}(X_1,\ldots,X_s)=h_{k-i+1}(X_1,\ldots,X_{2n}), \end{align*} $$

and therefore, the multiplication results in

$$ \begin{align*} & \det\ \left( \left. \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1}(X_1,\ldots,X_{2n}) \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right| \right. \\ &\qquad\qquad\qquad\qquad\qquad\qquad \left. \left( \sum_{k,q} \binom{j-1}{-j-k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1}(X_1,\ldots,X_{2n}) \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right). \end{align*} $$

We set $X_{i+n}=X_i^{-1}$ for $i=1,2,\ldots ,n$ (so that the arguments of all complete symmetric functions are $(X_1,\ldots ,X_n,X_1^{-1},\ldots ,X_n^{-1})$ ) and omit the $X_i$ ’s now. Also note that, under this specialization, the denominator $\prod _{1 \le i < j \le 2n} (X_j-X_i)$ specializes to the denominator on the left-hand side in the assertion of the lemma. We obtain

$$ \begin{align*}\det\ \left( \begin{array}{c|c} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1} \right)_{1 \le i \le 2n \atop 1 \le j \le n} & \left( \sum_{k,q} \binom{j-1}{-j-k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \end{array} \right). \end{align*} $$

With this specialization, we have $h_k=-h_{-k-2n}$ using (4.7). Therefore, the above is

$$ \begin{align*} &(-1)^n \det\ \left( \begin{array}{c|c} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1} \right)_{1 \le i \le 2n \atop 1 \le j \le n} & \left( \sum_{k,q} \binom{j-1}{-j-k+l+n-q+1} \binom{n-j}{q} w^q h_{-k+i-1-2n} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \end{array} \right) \\& \quad = (-1)^n \det\ \left( \begin{array}{c|c} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k-i+1} \right)_{1 \le i \le 2n \atop 1 \le j \le n} & \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k+i-1-2n} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \end{array} \right). \end{align*} $$

Now, for $j=1,2,\ldots ,n$ , we subtract the $(j+n)$ -th column from the j-th column, and this gives

$$ \begin{align*} & (-1)^n \det\ \left( \left. \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k-i+1}-h_{k+i-1-2n}) \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right| \right. \\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad \left. \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k+i-1-2n} \right)_{1 \le i \le 2n \atop 1 \le j \le n} \right). \end{align*} $$

For $i=n+2,n+3,\ldots ,2n$ , we add the $(2n+2-i)$ -th row to the i-th row. This gives a zero block for $\{(i,j) | n+1 \le i \le 2n, 1 \le j \le n\}$ , since

$$ \begin{align*}\sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k-i+1}-h_{k+i-1-2n}+h_{k-(2n+2-i)+1}-h_{k+(2n+2-i)-1-2n})=0. \end{align*} $$

The lower right block is

$$ \begin{align*} & \det_{n+1 \le i \le 2n \atop 1 \le j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k+i-1-2n}+ [i\not=n+1]h_{k+2n+2-i-1-2n}) \right) \\&\quad = \det_{n+1 \le i \le 2n \atop 1 \le j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k+i-1-2n}+ [i\not=n+1]h_{k-i+1}) \right) \\& \quad = \frac{1}{2} \det_{1 \le i,j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k+i-1-n}+ h_{k-i+1-n}) \right). \end{align*} $$

This concludes the proof of the lemma.

The identity in the lemma involves, up to factors, a product of two determinants on the left-hand side and also a product of two determinants on the right-hand side. This suggests that each of the determinants on the left-hand side equals, up to factors, a determinant on the right-hand side. This is indeed the case.

More specifically, one can show that (4.11) is up to factors equal to

$$ \begin{align*}(-1)^n \prod_{i=1}^n (1+X_i) X_i^l \det_{1 \le i,j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q (h_{k+i-1-n}+ h_{k-i+1-n}) \right). \end{align*} $$

This can be shown by induction with respect to n as suggested in [Reference Fischer and Höngesberg7, Remark 7.4]. Thus, Lemma 4.6 would actually not have been necessary; however, it explains much better than the proof by induction how the expression was obtained.

Using Lemma 7.5 from [Reference Fischer and Höngesberg7], we can conclude further that the expression is equal to

$$ \begin{align*} &(-1)^n \prod_{i=1}^n (1+X_i) X_i^l \\ & \quad \times \det_{1 \le i,j \le n} \left( \sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k+i-1-n}(X_1,X_1^{-1},\ldots,X_{n-i+1},X_{n-i+1}^{-1}) \right). \end{align*} $$

Also, this formula can be proven directly by induction with respect to n.

Our next goal is to give a combinatorial interpretation of

$$ \begin{align*}\sum_{k,q} \binom{j-1}{-j+k+l+n-q+1} \binom{n-j}{q} w^q h_{k+i-1-n}(X_1,X_1^{-1},\ldots,X_{n-i+1},X_{n-i+1}^{-1}). \end{align*} $$

We replace i by $n-i+1$ and q by $n-j-q$ , and then we get rid of k by setting $p=k+l+q+1$ :

$$ \begin{align*}\sum_{p,q} \binom{j-1}{p} \binom{n-j}{q} w^{n-j-q} h_{p-q-l-1-i}(X_1,X_1^{-1},\ldots,X_{i},X_{i}^{-1}). \end{align*} $$

This is equal to (4.9) when taking into account the definition of complete symmetric functions $h_k$ for negative k’s as given in (4.7).

5. Explicit product formulas in case $(X_1,\ldots ,X_n)=(1,\dots ,1)$ and $w=0,-1$

When evaluating the specializations of the LHS or RHS of (1.8) at $(X_1,\ldots ,X_n)=(1,\ldots ,1)$ in the cases $w=0,-1$ for small values of n, one observes that the numbers involve only small prime factors, and therefore, it is likely that they are expressible by product formulas. (A similar observation is true for the case $w=1$ , but there the explanation is simple, since $\prod _{1 \le i < j \le n} (1+ w X_i+X_j + X_i X_j)$ on the left-hand side of (1.8) is symmetric then.) For the LHS and the case $m=n-1$ , these are unpublished conjectures of Florian Schreier-Aigner from 2018. For instance, in the case $w=-1$ and $m=n-1$ , we obtain the numbers

$$ \begin{align*}1,4,60,3328,678912,\ldots = 2^{n(n-1)/2} \prod_{j=0}^{n-1} \frac{(4j+2)!}{(n+2j+1)!} \end{align*} $$

that have also appeared in recent work of Di Francesco [Reference Di Francesco4].
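These values are easy to reproduce from the product formula; a minimal Python sketch (ours):

```python
from math import factorial, prod

def a(n):
    # 2^(n(n-1)/2) * prod_{j=0}^{n-1} (4j+2)! / (n+2j+1)!
    num = prod(factorial(4 * j + 2) for j in range(n))
    den = prod(factorial(n + 2 * j + 1) for j in range(n))
    return 2**(n * (n - 1) // 2) * num // den

print([a(n) for n in range(1, 6)])   # [1, 4, 60, 3328, 678912]
```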

Related conjectures for arbitrary m have now been proven and will appear in forthcoming work with Florian Schreier-Aigner. The approach we have been successful with involves the transformation of the bialternant formula on the RHS of (1.8) into a Jacobi-Trudi type determinant. Then we can easily set $X_i=1$ , and we were able to guess the LU-decompositions of the relevant matrices. Proving these guesses involves the evaluation of certain triple sums, which is possible using Sister Celine’s algorithm [Reference Celine Fasenmyer5] and the fabulous Mathematica packages provided by RISC [25]. We state the results next. First, we deal with the case $w=0$ .

Theorem 5.1. The specialization of the generating function of arrowed Gelfand-Tsetlin patterns with n rows and strictly increasing non-negative bottom row where the entries are bounded by m at $(X_1,\ldots ,X_n)=(1,\ldots ,1)$ , $u=v=t=1$ and $w=0$ is equal to

$$ \begin{align*}3^{\binom{n+1}{2}} \prod_{i=1}^n \frac{(2n+m+2-3i)_i}{(i)_i}. \end{align*} $$

Now we turn to the case $w=-1$ .

Theorem 5.2. The specialization of the generating function of arrowed Gelfand-Tsetlin patterns with n rows and strictly increasing non-negative bottom row where the entries are bounded by m at $(X_1,\ldots ,X_n)=(1,\ldots ,1)$ , $u=v=t=1$ and $w=-1$ is equal to

$$ \begin{align*}2^n \prod_{i=1}^n \frac{(m-n+3i+1)_{i-1} (m-n+i+1)_i}{\left(\frac{m-n+i+2}{2} \right)_{i-1} (i)_i}. \end{align*} $$

Other future work concerns extending results that have been obtained using (1.2) by replacing this identity by Theorem 1.1 or specializations thereof.

A Aspects of the combinatorics of the classical Littlewood identity and its bounded version

A.1 The combinatorics of the classical Littlewood identity (1.1)

We start by reviewing the classical combinatorial proof of (1.1): one can interpret the Schur polynomial $s_{\lambda }(X_1,\ldots ,X_n)$ as the (multivariate) generating function of semistandard Young tableaux of shape $\lambda $ with entries in $\{1,2,\ldots ,n\}$ , where the exponent of $X_i$ is just the number of occurrences of i in a given semistandard Young tableau. Then the left-hand side of (1.1) is simply the generating function of all such semistandard Young tableaux of any shape $\lambda $ . The right-hand side can be interpreted as the generating function of symmetric $n \times n$ matrices with non-negative integer entries: expanding $\frac {1}{1-X_i X_j} = \sum _{a_{i,j} \ge 0} (X_i X_j)^{a_{i,j}}$ corresponds to the entries $a_{i,j}=a_{j,i}$ for $i<j$ , while expanding $\frac {1}{1-X_i} = \sum _{a_{i,i} \ge 0} X_i^{a_{i,i}}$ corresponds to the diagonal entries $a_{i,i}$ . Then such a matrix determines a two-line array with $a_{i,j}$ occurrences of the pair $\left ( i \atop j \right )$ such that the pairs are ordered lexicographically. The semistandard Young tableau P is simply obtained by applying the Robinson-Schensted-Knuth (RSK) algorithm to the bottom row of the two-line array. It suffices to construct the so-called insertion tableau because by the symmetry of the RSK algorithm, it is equal to the recording tableau. Thus, to reconstruct the two-line array, we apply the inverse Robinson-Schensted-Knuth algorithm to $(P,P)$ .

A.2 Simpler description of the classical bijection

Now we discuss a related but simpler bijective proof of (1.1) that does not invoke the symmetry of the RSK algorithm. After describing it, we will actually discover that 'only' the description of the algorithm is simpler, as we will show that the bijection agrees with the classical one. However, this second version could be of interest for developing the combinatorics of (1.4) and (1.8).

As discussed above, the right-hand side of (1.1) can be interpreted as the generating function of symmetric $n \times n$ matrices $A=(a_{i,j})_{1 \le i,j \le n}$ with non-negative integer entries. They are also equivalent to lexicographically ordered two-line arrays with the property that the upper entry in each column is no smaller than the lower entry: For $i \le j$ , let $a_{i,j}=a_{j,i}$ be the number of columns of type $\left ( j \atop i \right )$ . Comparing to the two-line array from the classical proof, we just have to delete all columns $\left ( j \atop i \right )$ with $i> j$ .

Now we apply the following variant of RSK, which transforms a lexicographically ordered two-line array such that no upper element is smaller than the corresponding lower element into a semistandard Young tableau.

  • As usual, we work through the columns of the two-line array from left to right.

  • Suppose $\left ( j \atop i \right )$, $i \le j$, is our current column. We use the usual RSK algorithm to insert i into the current tableau.

  • If $i < j$ , we additionally place j into the tableau as follows: Suppose that the insertion of i ends with adding an entry to row r. Then we add j to row $r+1$ in the leftmost column where there is no entry so far.

Example A.1. To give an example, observe that the symmetric matrix

$$ \begin{align*}A = \begin{pmatrix} 1 & 0 & 2 & 1 \\ 0 & 0 & 1 & 4 \\ 2 & 1 & 2 & 0 \\ 1 & 4 & 0 & 1 \end{pmatrix} \end{align*} $$

is equivalent to the two-line array

$$ \begin{align*}\left( \begin{array}{cccccccccccc} 1 & 3 & 3 & 3 & 3 & 3 & 4 & 4 & 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 2 & 3 & 3 & 1 & 2 & 2 & 2 & 2 & 4 \end{array} \right) \end{align*} $$

and that the algorithm results in the following semistandard Young tableau:

$$ \begin{align*}\begin{array}{ccccccccc} 1&1&1&1&2&2&2&2&4\\ 2&3&3&3&3&4&4&&\\ 3&4&4&&&&&&\\ 4&&&&&&&& \end{array} \end{align*} $$
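The insertion variant is straightforward to implement; the following Python sketch (the function names are ours and purely illustrative) reproduces this tableau from the two-line array:

```python
def rsk_insert(tableau, x):
    # classical RSK row insertion of x; returns the index of the row where it ends
    r = 0
    while True:
        if r == len(tableau):
            tableau.append([x])
            return r
        row = tableau[r]
        for c, entry in enumerate(row):
            if entry > x:             # bump the leftmost entry strictly greater than x
                row[c], x = x, entry
                break
        else:
            row.append(x)
            return r
        r += 1

def insert_column(tableau, j, i):
    # insert a column (j over i), i <= j, by the variant described above
    r = rsk_insert(tableau, i)
    if i < j:                         # additionally place j at the end of row r+1
        if r + 1 == len(tableau):
            tableau.append([])
        tableau[r + 1].append(j)

# the two-line array of Example A.1 as (top, bottom) pairs
cols = [(1,1),(3,1),(3,1),(3,2),(3,3),(3,3),(4,1),(4,2),(4,2),(4,2),(4,2),(4,4)]
T = []
for j, i in cols:
    insert_column(T, j, i)
print(T)   # [[1,1,1,1,2,2,2,2,4], [2,3,3,3,3,4,4], [3,4,4], [4]]
```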

Well-definedness of the algorithm. We argue that the resulting tableau is always a semistandard Young tableau. For this, we need an observation that can be deduced from [Reference Stanley28, Lemma 7.11.2 (b)], which says that if we insert a weakly increasing sequence of positive integers $i_1 \le i_2 \le \ldots \le i_r$ from left to right into a semistandard Young tableau, then the 'insertion path' of an earlier element lies strictly to the left of that of a later element. Moreover, for $p < q$, the insertion path of $i_p$ ends in a row below and to the left of the end of the insertion path of $i_q$, or in the same row to the left of it. This implies the following: if the $i_k$'s are the bottom elements of the columns with top element j in the two-line array and the insertion path of an $i_k$ with $i_k < j$ ends in row r, then the entries in rows $1,2,\ldots ,r$ are in $\{1,2,\ldots ,j-1\}$.

We show by induction on the number of elements in the tableau that our algorithm always leads to a semistandard Young tableau. Now, if we insert the element i of the column $\left ( j \atop i \right )$ using the classical RSK algorithm into the current semistandard Young tableau, then we obtain another semistandard Young tableau; see [Reference Stanley28, Lemma 7.11.3]. In case $j>i$, placing the top element j into the next row will also not destroy the column-strictness, as the elements above the row of j are in $\{1,2,\ldots ,j-1\}$, as discussed in the previous paragraph.

Remark A.2. Note that from the proof of well-definedness it follows that we may also add all top j's at once after we have inserted the bottom entries of the columns that have j as top entry in our algorithm: Consider the skew shape $\lambda / \mu $, where $\mu $ is the shape of the tableau that we had before the insertion of all these bottom entries and $\lambda $ is the shape of the tableau we obtain after the insertion (but not yet adding the j's from the top row of the two-line array), except that we exclude in the latter tableau all j's that come from the bottom of the two-line array. Now, if there are c cells in row r of the skew shape, then we add $c$ j's in row $r+1$ to the semistandard Young tableau with the bottom entries inserted, now including also those that come from columns $\left ( j \atop j \right )$. This is because the cells of the skew shape are added to the tableau in the course of the insertion from bottom to top and, within a row, from left to right.

Reverse algorithm. We construct the inverse algorithm inductively, where the induction is with respect to the largest element in the tableau. Suppose n is the largest element in the semistandard Young tableau. Then we want to recover the part of the two-line array that has n in the top row (which is an ending section of the array). Suppose

$$ \begin{align*} \left( \begin{array}{cccc} n & n & \ldots & n \\ i_1 & i_2 & \ldots & i_s \end{array} \right) \end{align*} $$

is this section, which implies $i_1 \le i_2 \le \ldots \le i_s$, and let r be maximal with $i_r < n$ so that $i_{r+1} = i_{r+2}= \ldots = i_s = n$. Now, from the algorithm it follows that $s-r$ is just the number of n's in the top row of the tableau, and we can delete these elements. Again, it follows from [Reference Stanley28, Lemma 7.11.2 (b)] that we need to determine the number u of n's in the second row, remove them and then apply the inverse bumping algorithm to the last u elements in the first row, from right to left (which means that we just remove them and put them in the bottom row of the two-line array). We continue by counting (and removing) the n's in the third row, and, if v is this number, apply the inverse bumping to the last v elements in the second row, from right to left. We work through the rows from top to bottom in this way.

Finally, we discover that this algorithm is just another description of the classical bijection.

Proposition A.3. The algorithm just described establishes the same bijection between symmetric $n \times n$ matrices A with non-negative integer entries and semistandard Young tableaux with entries in $\{1,2,\ldots ,n\}$ as the classical one.

Sketch of proof. The proof is by induction with respect to n. For $n=1$, there is nothing to prove since the two algorithms coincide in this case.

We perform the step from $n-1$ to n. We can assume $a_{n,n}=0$ since increasing $a_{n,n}$ has the same effect in both algorithms: in both cases, we just add $a_{n,n}$ columns $\left ( n \atop n \right )$ at the end of the two-line array and apply the same procedure to these columns at the end of the algorithm.

Suppose B is the restriction of A to the first $n-1$ rows and the first $n-1$ columns. By the induction hypothesis, we know that B is transformed into the same semistandard Young tableau P under both algorithms. Moreover, let a be the two-line array that corresponds to A in the classical algorithm and $a'$ be the initial section that disregards all columns with an n in the top row. Clearly, we can obtain P also by applying RSK to the bottom row of $a'$ and then deleting all n’s because the two-line array b that corresponds to B under the classical algorithm is obtained from $a'$ by deleting all columns that have an n in the bottom row, and the n’s will never bump an element, but at most be bumped in final steps of insertions. Let Q denote the semistandard Young tableau where the n’s are kept (i.e., what we obtain after applying RSK to the bottom row of $a'$ ).

Now note that the final sections with n in the top row agree for both two-line arrays; denote this common section by s. Since we assume $a_{n,n}=0$, the bottom row of s does not contain any n. It is also clear that we will obtain the same tableau if we apply the following two different procedures: insert the bottom row of s into P, or insert the bottom row of s into Q and then delete the n's. This is because P and Q agree on all entries different from n, and n's are at most bumped in final steps in the second case.

This implies that the two procedures (namely, the ‘classical’ one and the one that is the subject of this section) result in the same two tableaux when disregarding the n’s. Therefore, it remains to show that they also agree on the n’s. Now we use the fact that the positions of the n’s (as for any other entry) can also be determined by considering the recording tableau (which is due to the symmetry of the classical RSK algorithm); in particular, we need to study how the recording tableau is built up when adding s since this is the only time when n’s are added to the recording tableau. These n’s are added in the final cells of the insertion paths when inserting the bottom row of s into Q. Such an insertion path can either agree with the corresponding insertion path in P or it has one additional step where an n gets bumped. As we already know that up to the n’s, we obtain the same tableaux in both cases, we are always in the case that n’s are bumped, and this proves the assertion.

A.3 RSK in terms of Gelfand-Tsetlin patterns

It is well known that semistandard Young tableaux can be replaced by Gelfand-Tsetlin patterns in the definition of Schur polynomials (and thus in the combinatorial interpretation of the left-hand sides of (1.1) and (1.6)) as there is an easy bijective correspondence, which will be described next. This point of view is valuable for us because the left-hand sides of our Littlewood-type identities can also be interpreted combinatorially as generating functions of Gelfand-Tsetlin-pattern-type objects (see Section 3). The purpose of the current section is to indicate how the classical RSK algorithm works on (classical) Gelfand-Tsetlin patterns, with the hope that something similar can be established for our variant (i.e., arrowed Gelfand-Tsetlin patterns; see Section 3.1).

A Gelfand-Tsetlin pattern is a finite triangular array of integers with centered rows as follows:

$$ \begin{align*} \begin{array}{ccccccc}&&& a_{1,1} &&& \\&&a_{2,1} && a_{2,2} \\& \unicode{x22F0} && \ldots && \ddots & \\a_{n,1} && a_{n,2} && \ldots && a_{n,n}\end{array} \end{align*} $$

such that we have a weak increase in $\nearrow $ -direction as well as in $\searrow $ -direction (i.e., $a_{i+1,j} \le a_{i,j} \le a_{i+1,j+1}$ , for all $1 \le j \le i \le n-1$ ). The bijection between semistandard Young tableaux of shape $(\lambda _1,\lambda _2,\ldots ,\lambda _n)$ (we allow zero entries here) and parts in $\{1,2,\ldots ,n\}$ , and Gelfand-Tsetlin patterns with bottom row $(\lambda _n,\lambda _{n-1},\ldots ,\lambda _1)$ is as follows: reading the i-th row of a Gelfand-Tsetlin pattern in reverse order gives a partition, and this is precisely the shape constituted by the entries less than or equal to i in the corresponding semistandard Young tableau. Under this bijection, the number of entries equal to i in the semistandard Young tableau is equal to the difference of the i-th row sum and the $(i-1)$ -st row sum in the Gelfand-Tsetlin pattern. Therefore,

$$ \begin{align*}s_{(\lambda_1,\ldots,\lambda_n)}(X_1,\ldots,X_n) = \sum \prod_{i=1}^n X_i^{\sum_{j=1}^i a_{i,j} - \sum_{j=1}^{i-1} a_{i-1,j}}, \end{align*} $$

where the sum is over all Gelfand-Tsetlin patterns $(a_{i,j})_{1 \le j \le i \le n}$ with bottom row $(\lambda _n,\lambda _{n-1},\ldots ,\lambda _1)$ .

To give an example, observe that the Gelfand-Tsetlin pattern corresponding to the following semistandard Young tableau:

(A.1) $$ \begin{align} \begin{array}{ccccccc} 1&1&1&2&2&3&5\\ 2&2&4&5&7&8&\\ 4&5&5&7&8&&\\ 5&6&6&8&&&\\ 7&8&&&&& \end{array} \end{align} $$

is

$$ \begin{align*} \begin{array}{ccccccccccccccc}&&&&&&& 3 &&&&&&& \\&&&&&& 2 && 5 &&&&&& \\&&&&& 0 && 2 && 6 &&&&& \\&&&& 0 && 1 && 3 && 6 &&&& \\&&& 0 && 1 && 3 && 4 && 7 &&& \\&& 0 && 0 && 3 && 3 && 4 && 7 && \\& 0 && 0 && 1 && 3 && 4 && 5 && 7 & \\0 && 0 && 0 && 2 && 4 && 5 && 6 && 7\end{array}.\end{align*} $$

Now suppose we use the RSK algorithm to insert the integer m into a semistandard Young tableau. On the corresponding Gelfand-Tsetlin pattern, we have to do the following.

  • If the number n of rows of the pattern is less than m and the bottom row of the pattern is $k_1,\ldots ,k_n$ , then we add rows of the form $0,\ldots ,0,k_1,\ldots ,k_n$ with the appropriate number of $0$ ’s until we have m rows.

  • Now we start a path in the pattern that starts at the last entry in row m with (unit) steps in $\searrow $ -direction or $\swarrow $ -direction progressing from one entry to a neighboring entry in this direction. The rule is as follows: Whenever the $\searrow $ -neighbor of the current entry is equal to the current entry, we extend our path to the next entry in $\searrow $ -direction; otherwise, we go to the next entry in $\swarrow $ -direction. We continue with this path until we reach the bottom row.

  • Finally, we add $1$ to all entries in the path.

To give an example, if we use RSK to insert $3$ into the semistandard Young tableau from (A.1), we obtain the following tableau, where the insertion path is indicated in red:

$$ \begin{align*}\begin{array}{ccccccc} 1&1&1&2&2&3&\color{red}3\\ 2&2&4&5&\color{red}5&8&\\ 4&5&5&7&\color{red}7&&\\ 5&6&6&8&\color{red}8&&\\ 7&8&&&&& \end{array} \end{align*} $$

On the corresponding Gelfand-Tsetlin pattern, we obtain the following:

$$ \begin{align*}\begin{array}{ccccccccccccccc} &&&&&&& 3 &&&&&&& \\ &&&&&& 2 && 5 &&&&&& \\ &&&&& 0 && 2 && \color{red} 7 &&&&& \\ &&&& 0 && 1 && 3 && \color{red} 7 &&&& \\ &&& 0 && 1 && 3 && \color{red} 5 && 7 &&& \\ && 0 && 0 && 3 && 3 && \color{red} 5 && 7 && \\ & 0 && 0 && 1 && 3 && \color{red} 5 && 5 && 7 & \\ 0 && 0 && 0 && 2 && \color{red} 5 && 5 && 6 && 7 \end{array} \end{align*} $$

It corresponds to the tableau with the $3$ inserted.
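The path rule can be implemented in a few lines; the following Python sketch (ours) reproduces the pattern above when inserting $3$:

```python
def gt_insert(pattern, m):
    # insert m into a Gelfand-Tsetlin pattern, given as a list of rows of lengths 1,2,...
    while len(pattern) < m:                  # pad with copies of the bottom row,
        pattern.append([0] + pattern[-1])    # each prefixed by an additional 0
    r, c = m - 1, m - 1                      # start at the last entry of row m (0-indexed)
    path = [(r, c)]
    while r < len(pattern) - 1:
        if pattern[r + 1][c + 1] == pattern[r][c]:
            c += 1                           # SE-neighbor equals current entry: step SE
        r += 1                               # otherwise c is unchanged: step SW
        path.append((r, c))
    for r, c in path:                        # finally, add 1 to all entries on the path
        pattern[r][c] += 1

P = [[3], [2, 5], [0, 2, 6], [0, 1, 3, 6], [0, 1, 3, 4, 7],
     [0, 0, 3, 3, 4, 7], [0, 0, 1, 3, 4, 5, 7], [0, 0, 0, 2, 4, 5, 6, 7]]
gt_insert(P, 3)
for row in P:
    print(row)   # reproduces the pattern with the red entries shown above
```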

Now suppose in our simplified algorithm to prove (1.1), we 'insert' the column $\left ( j \atop i \right )$ into the Gelfand-Tsetlin pattern. At this point, the Gelfand-Tsetlin pattern should have j rows. Then we apply the algorithm just described to insert i into the pattern. To insert also j (in case $j \not =i$), add $1$ to the entry immediately left of the entry that is the end of the path induced by the insertion of i. Whenever we progress to the first column with j as top element in the two-line array, we add one row to the Gelfand-Tsetlin pattern by copying the current bottom row and adding one $0$ at the beginning.

A.4 The right-hand side of the bounded Littlewood identity (1.6)

The irreducible characters of the special orthogonal group $SO_{2n+1}(\mathbb {C})$ associated with the partition $\lambda =(\lambda _1,\ldots ,\lambda _n)$ are

$$ \begin{align*}so^{\text{odd}}_{\lambda}(X_1,\ldots,X_n)=\prod_{i=1}^{n} X_i^{n-1/2} \frac{\det_{1 \le i, j \le n} \left( X_i^{-\lambda_j-n+j-1/2} - X_i^{\lambda_j+n-j+1/2} \right)} {(1 + [\lambda_n=0])\prod_{i=1}^n (1-X_i) \prod_{1 \le i<j \le n} (X_j-X_i)(1-X_i X_j)}; \end{align*} $$

see [Reference Fulton and Harris6, Eq. (24.28)]. These characters can be seen as generating functions of certain halved Gelfand-Tsetlin patterns that are defined next. This can even be extended to so-called half-integer partitions, as will also be explained. A half-integer partition is a finite, weakly decreasing sequence of positive half-integers.

Definition A.4. For a positive integer n, a $2n$ -split orthogonal (Gelfand-Tsetlin) pattern is an array of non-negative integers or non-negative half-integers with $2n$ rows of lengths $1,1,2,2,\ldots ,n,n$ , which are aligned as follows for $n=3$ :

$$ \begin{align*}\begin{array}{cccccc} a_{1,1} & & & & & \\ & a_{2,1} & & & & \\ a_{3,1} & & a_{3,2} & & & \\ & a_{4,1} & & a_{4,2} & & \\ a_{5,1} & & a_{5,2} & & a_{5,3} & \\ & a_{6,1} & & a_{6,2} & & a_{6,3} \end{array} , \end{align*} $$

such that the entries are weakly increasing along $\nearrow $-diagonals and $\searrow $-diagonals, and in which the entries, except for the first entries in the odd rows (called odd starters), are either all non-negative integers or all non-negative half-integers. Each odd starter is independently either a non-negative integer or a non-negative half-integer. The weight of a $2n$-split orthogonal pattern is

$$ \begin{align*}\prod_{i=1}^n X_i^{r_{2i}-2 r_{2i-1}+r_{2i-2}}, \end{align*} $$

where $r_i$ is the sum of entries in row i and $r_0=0$ .

The following theorem is the first part of Theorem 7.1 in [Reference Proctor23].

Theorem A.5. Let $\lambda =(\lambda _1,\ldots ,\lambda _n)$ be a partition (allowing zero entries) or a half-integer partition. Then the generating function of $2n$ -split orthogonal patterns with respect to the above weight that have $\lambda $ as bottom row, written in increasing order, is

$$ \begin{align*}\prod_{i=1}^{n} X_i^{n-1/2} \frac{\det_{1 \le i, j \le n} \left( X_i^{-\lambda_j-n+j-1/2} - X_i^{\lambda_j+n-j+1/2} \right)} {(1 + [\lambda_n=0])\prod_{i=1}^n (1-X_i) \prod_{1 \le i<j \le n} (X_j-X_i)(1-X_i X_j)}. \end{align*} $$

Now the right-hand side of (1.6) can be written as

$$ \begin{align*} & \frac{ \det_{1 \le i, j \le n} \left( X_i^{j-1} - X_i^{m+2n-j} \right) }{\prod_{i=1}^n (1-X_i) \prod_{1 \le i < j \le n} (X_j-X_i)(1-X_i X_j)} \\ &\qquad\qquad\qquad\qquad\qquad\qquad = \prod_{i=1}^n X_i^{(m-1)/2+n} \frac{ \det_{1 \le i, j \le n} \left( X_i^{j-n-(m+1)/2} - X_i^{-j+n+(m+1)/2} \right) }{\prod_{i=1}^n (1-X_i) \prod_{1 \le i < j \le n} (X_j-X_i)(1-X_i X_j)}, \end{align*} $$

so that we can deduce from Theorem A.5 that it is equal to

$$ \begin{align*}\prod_{i=1}^n X_i^{m/2} so^{\text{odd}}_{(m/2,m/2,\ldots,m/2)}(X_1,\ldots,X_n).\end{align*} $$

From (1.6), it now follows that

(A.2) $$ \begin{align} \sum_{\lambda \subseteq (m^n)} s_{\lambda}(X_1,\ldots,X_n) = \prod_{i=1}^n X_i^{m/2} so^{\text{odd}}_{(m/2,m/2,\ldots,m/2)}(X_1,\ldots,X_n). \end{align} $$

A combinatorial proof of this fact can be found in [Reference Stembridge29, Corollary 7.4].
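For small parameters, (A.2), or equivalently the determinantal form of the right-hand side of (1.6) given above, can be checked by computer; here is a Python/sympy sketch (ours) for $n=2$ and $m=3$:

```python
import sympy as sp

X1, X2 = sp.symbols('X1 X2')
n, m = 2, 3

def schur(l1, l2):
    # bialternant formula for s_(l1,l2)(X1,X2)
    return sp.cancel((X1**(l1 + 1) * X2**l2 - X2**(l1 + 1) * X1**l2) / (X1 - X2))

lhs = sum(schur(l1, l2) for l1 in range(m + 1) for l2 in range(l1 + 1))
det = sp.Matrix(n, n, lambda i, j: [X1, X2][i]**j - [X1, X2][i]**(m + 2*n - 1 - j)).det()
rhs = sp.cancel(det / ((1 - X1) * (1 - X2) * (X2 - X1) * (1 - X1 * X2)))
print(sp.expand(lhs - rhs) == 0)   # expect True
```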

It would be interesting to see whether there is a bijective proof of (A.2) that uses RSK. More concretely, under the bijection that is used in the classical bijective proof of (1.1), semistandard Young tableaux whose shape is contained in $(m^n)$ correspond to two-line arrays such that the longest increasing subsequence of the bottom row has at most m elements; see [Reference Stanley28, Proposition 7.23.10].

Next, we argue that we can also read off the m from the two-line array we use for our simplified proof of (1.1), in the following sense. The longest increasing subsequence of the bottom row of the 'classical' two-line array can be read off the corresponding matrix A with non-negative integer entries as follows: we consider walks through the matrix with unit $\rightarrow $-steps and unit $\downarrow $-steps and add up the entries we traverse. The maximal sum we can achieve with such a walk is the length of the longest increasing subsequence of the bottom row of the classical two-line array. Now, if the matrix A is symmetric, we can confine such walks to be weakly above the main diagonal, and the two-line array of the simplified algorithm is constituted by this part of the matrix.
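The maximal path sum is a standard dynamic program; the following Python sketch (ours) evaluates it on the symmetric matrix of Example A.1, where the maximum $9$ indeed equals the length of the first row of the tableau computed there:

```python
def max_path_sum(A):
    # maximal sum of entries along a walk with unit right- and down-steps;
    # since the entries are non-negative, we may assume the walk runs from
    # the top-left to the bottom-right corner
    n, m = len(A), len(A[0])
    dp = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            best = max(dp[i - 1][j] if i > 0 else 0, dp[i][j - 1] if j > 0 else 0)
            dp[i][j] = A[i][j] + best
    return dp[-1][-1]

A = [[1, 0, 2, 1],
     [0, 0, 1, 4],
     [2, 1, 2, 0],
     [1, 4, 0, 1]]
print(max_path_sum(A))   # 9
```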

Finally, we give a bijective proof of (A.2) in the case $n=2$ . The left-hand side can be seen as the generating function of semistandard Young tableaux with entries in $\{1,2\}$ , with the weight

$$ \begin{align*}X_1^{\#\text{ of }1\text{'s}}\, X_2^{\#\text{ of }2\text{'s}}. \end{align*} $$

Such tableaux have at most $2$ rows and can be encoded by three non-negative integers $x,y,z$: let y be the number of $2$'s in the second row, z be the number of $2$'s in the first row and $x+y$ be the number of $1$'s, which are necessarily in the first row. The two-line array that corresponds to such a tableau under our simplified algorithm is constituted by x columns $\left ( 1 \atop 1 \right )$, y columns $\left ( 2 \atop 1 \right )$ and z columns $\left ( 2 \atop 2 \right )$, ordered lexicographically. The corresponding $4$-split pattern can be obtained as follows: add $\frac {x+y+z}{2}$ to all entries of the following $4$-split pattern:

$$ \begin{align*}\begin{array}{cccc} \frac{-x-y-\min(x,z)}{2} & & & \\ & - \min(x,z) & & \\ \frac{-y-z-\min(x,z)}{2} & & 0 & \\ & 0 & & \phantom{1234} 0. \end{array} \end{align*} $$
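For concreteness, take $(x,y,z)=(2,1,1)$, so that the first row of the tableau is $1,1,1,2$ and the second row is $2$. The two-line array consists of the columns $\left ( 1 \atop 1 \right ), \left ( 1 \atop 1 \right ), \left ( 2 \atop 1 \right ), \left ( 2 \atop 2 \right )$, and, since $\min(x,z)=1$ and $\frac{x+y+z}{2}=2$, the associated $4$-split pattern is

$$ \begin{align*}\begin{array}{cccc} 0 & & & \\ & 1 & & \\ \frac{1}{2} & & 2 & \\ & 2 & & 2. \end{array} \end{align*} $$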

B Further combinatorial interpretations of the left-hand sides

B.1 Generating function of AGTPs with respect to the bottom row

Setting $X_1=X_2=\ldots =X_n=1$ in Theorem 3.4, we see that the generating function of AGTPs with bottom row $k_1,\ldots ,k_n$ and with respect to the weight

(B.1) $$ \begin{align} \operatorname{\mathrm{sgn}}(A) t^{\emptyset} u^{\nearrow} v^{\nwarrow} w^{\nwarrow \!\!\!\!\!\;\!\! \nearrow} \end{align} $$

is

(B.2) $$ \begin{align} & (t+u+v+w)^n \prod_{1 \le i < j \le n}\left( t + u {\operatorname{E}}_{k_i} + v {\operatorname{E}}_{k_j}^{-1} + w {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j}^{-1} \right) \prod_{1 \le i < j \le n} \frac{k_j-k_i+j-i}{j-i} \nonumber\\ &\qquad\quad =(t+u+v+w)^n \prod_{1 \le i < j \le n}\left( t {\operatorname{E}}_{k_j} + u {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j} + v + w {\operatorname{E}}_{k_i} \right) \prod_{j=1}^n {\operatorname{E}}_{k_j}^{-j+1} \prod_{1 \le i < j \le n} \frac{k_j-k_i+j-i}{j-i} \nonumber\\&\qquad\quad =(t+u+v+w)^n \prod_{1 \le i < j \le n}\left( t {\operatorname{E}}_{k_j} + u {\operatorname{E}}_{k_i} {\operatorname{E}}_{k_j} + v + w {\operatorname{E}}_{k_i} \right) \prod_{1 \le i < j \le n} \frac{k_j-k_i}{j-i}, \end{align} $$

using the fact $s_{(k_n,k_{n-1},\ldots ,k_1)}(1,\ldots ,1) = \prod _{1 \le i < j \le n} \frac {k_j-k_i+j-i}{j-i}$ , which follows from [Reference Stanley28, (7.105)] when taking the limit $q \to 1$ .
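For instance, for $n=2$ , this formula gives $s_{(k_2,k_1)}(1,1) = k_2-k_1+1$ , in accordance with the fact that a semistandard Young tableau of shape $(k_2,k_1)$ with entries in $\{1,2\}$ is determined by the number of $2$ ’s in its first row, which can be any element of $\{0,1,\ldots,k_2-k_1\}$ .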

Generalizing a computation in Section 6 of [Reference Fischer9] slightly, it can be seen that the coefficient of $X_1^{k_1} X_2^{k_2} \cdots X_n^{k_n}$ in

$$ \begin{align*}(t+u+v+w)^n \prod_{i=1}^n X_i^{-n+1}(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i)(u+ t X_i + w X_j + v X_i X_j) \end{align*} $$

is the generating function of AGTPs with bottom row $k_1,k_2,\ldots ,k_n$ as given in (B.2), when interpreting the rational function as a formal Laurent series in $X_1,X_2,\ldots ,X_n$ with $(1-X_i)^{-1}= \sum _{k \ge 0} X_i^k$ and assuming $(k_1,k_2,\ldots ,k_n) \ge 0$ . Phrased differently, for any $(k_1,\ldots ,k_n),(m_1,\ldots ,m_n) \in \mathbb {Z}^n$ with $(k_1+m_1,\ldots ,k_n+m_n) \ge 0$ , the coefficient of $X_1^{m_1} \cdots X_n^{m_n}$ in

$$ \begin{align*}(t+u+v+w)^n \prod_{i=1}^n X_i^{-n+1-k_i}(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i)(u+ t X_i + w X_j + v X_i X_j) \end{align*} $$

is the generating function of AGTPs with bottom row $(k_1+m_1,\ldots ,k_n+m_n)$ . Therefore, the coefficient of $X_1^{m_1} \cdots X_n^{m_n}$ in

$$ \begin{align*} & (t+u+v+w)^n \\ &\qquad\qquad \times \mathbf{Sym}_{X_1,\ldots,X_n} \left[ \prod_{i=1}^n X_i^{-n+1-k_i}(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i)(u+ t X_i + w X_j + v X_i X_j) \right] \end{align*} $$

is the generating function of pairs of AGTPs and permutations $\sigma $ , where the difference of the bottom row and $(k_1,\ldots ,k_n)$ is $(m_{\sigma (1)},\ldots ,m_{\sigma (n)})$ , assuming $(k_1+m_{\sigma (1)},\ldots ,k_n+m_{\sigma (n)}) \ge 0$ for every permutation $\sigma $ . The latter is always satisfied if $(k_1,\ldots ,k_n),(m_1,\ldots ,m_n) \ge 0$ . The above expression is equal to

$$ \begin{align*} (t+u+v+w)^n \prod_{i=1}^n& (1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i) \\ &\qquad \times \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{i=1}^n X_i^{-k_i} \prod_{1 \le i < j \le n} (v+ w X_i^{-1} + t X_j^{-1} + u X_i^{-1} X_j^{-1}) \right]. \end{align*} $$
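Both steps can be tested directly for $n=2$ . The following sympy sketch (ours) checks, for small $k_1,k_2$ , that the coefficient of $X_1^{k_1}X_2^{k_2}$ agrees with the operator formula in (B.2), and that the passage from $\mathbf{Sym}$ to $\mathbf{ASym}$ above is an identity of rational functions; the common factor $(t+u+v+w)^n$ is dropped throughout:

```python
# For n = 2: (i) the coefficient of X1^k1 X2^k2 in the rational function above
# agrees with the last expression in (B.2); (ii) the Sym expression equals the
# ASym expression; the common factor (t+u+v+w)^n is omitted on both sides.
import sympy as sp

X1, X2, t, u, v, w = sp.symbols('X1 X2 t u v w')
swap = lambda e: e.subs({X1: X2, X2: X1}, simultaneous=True)

def series_coefficient(k1, k2):
    # coefficient of X1^k1 X2^k2 in X1^-1 X2^-1 (1-X1)^-2 (1-X2)^-2
    # * (X2-X1)(u + t X1 + w X2 + v X1 X2), with (1-X)^-2 = sum (k+1) X^k
    geom1 = sum((a + 1) * X1**a for a in range(k1 + 2))
    geom2 = sum((b + 1) * X2**b for b in range(k2 + 2))
    f = sp.expand((X2 - X1) * (u + t*X1 + w*X2 + v*X1*X2) * geom1 * geom2)
    # the factor X1^-1 X2^-1 shifts the coefficient extraction by one
    return f.coeff(X1, k1 + 1).coeff(X2, k2 + 1)

def operator_formula(k1, k2):
    # last expression in (B.2) for n = 2, where E_{k_i} replaces k_i by k_i+1
    return t*(k2 + 1 - k1) + u*(k2 - k1) + v*(k2 - k1) + w*(k2 - k1 - 1)

for k1 in range(4):
    for k2 in range(4):
        # (i) coefficient extraction versus operator formula
        assert sp.expand(series_coefficient(k1, k2)
                         - operator_formula(k1, k2)) == 0
        # (ii) Sym versus ASym form of the same expression
        f = X1**(-1 - k1) * X2**(-1 - k2) / ((1 - X1)**2 * (1 - X2)**2) \
            * (X2 - X1) * (u + t*X1 + w*X2 + v*X1*X2)
        g = X1**(-k1) * X2**(-k2) * (v + w/X1 + t/X2 + u/(X1*X2))
        lhs = f + swap(f)
        rhs = (X2 - X1) / ((1 - X1)**2 * (1 - X2)**2) * (g - swap(g))
        assert sp.cancel(lhs - rhs) == 0
```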

We sum over all $0 \le k_1 < k_2 < \ldots < k_n \le m$ and obtain

$$ \begin{align*} & (t+u+v+w)^n \prod_{i=1}^n(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i) \\ &\quad \times \mathbf{ASym}_{X_1,\ldots,X_n} \left[ \prod_{1 \le i < j \le n} (v+ w X_i^{-1} + t X_j^{-1} + u X_i^{-1} X_j^{-1}) \sum_{0 \le k_1 < k_2 < \ldots < k_n \le m} X_1^{-k_1} X_2^{-k_2} \cdots X_n^{-k_n} \right] \end{align*} $$

For $(m_1,\ldots ,m_n) \ge 0$ , the coefficient of $X_1^{m_1} \cdots X_n^{m_n}$ is the generating function of pairs of AGTPs A and permutations $\sigma $ of $\{1,2,\ldots ,n\}$ such that if $(m_{\sigma (1)},\ldots ,m_{\sigma (n)})$ is added to the bottom row of A, we obtain a strictly increasing sequence of non-negative integers. In particular, the constant term is the generating function of AGTPs (with respect to the weight (B.1)) whose bottom row is a strictly increasing sequence of non-negative integers, multiplied by $n!$ . Setting $t=u=v=1$ , this is by (1.8) equal to

$$ \begin{align*} & (3+w)^n \prod_{i=1}^n(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i) \\[5pt] & \quad\times \frac{\det_{1 \le i, j \le n} \left( X_i^{-j+1} (1+X_i^{-1})^{j-1} (1+ w X_i^{-1})^{n-j} - X_i^{-m-2n+j} (1+X_i)^{j-1} (1+w X_i)^{n-j} \right)}{\prod\limits_{i=1}^n (1-X_i^{-1}) \prod\limits_{1 \le i < j \le n} (1-X_i^{-1} X_j^{-1})} \\[5pt] &\qquad\qquad\qquad\qquad\qquad = (3+w)^n \prod_{i=1}^n(1-X_i)^{-n} \prod_{1 \le i < j \le n} (X_j-X_i) \\[5pt] &\quad\times \frac{\det_{1 \le i, j \le n} \left( X_i^{-j+2} (1+X_i)^{j-1} (w+ X_i)^{n-j} - X_i^{-m-n+j} (1+X_i)^{j-1} (1+w X_i)^{n-j} \right)}{\prod\limits_{i=1}^n (X_i-1) \prod\limits_{1 \le i < j \le n} (X_i X_j-1)}. \end{align*} $$
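The last equality amounts to pulling a factor $X_i^n$ out of the i-th row of the determinant and rewriting the denominators accordingly. A sympy sketch (ours) confirming it for $n=2$ and $m=2$ , with the common prefactor omitted:

```python
# Check of the last rewriting for n = 2 and m = 2; the common prefactor
# (3+w)^n prod (1-X_i)^(-n) prod (X_j-X_i) is omitted from both sides.
import sympy as sp

X1, X2, w = sp.symbols('X1 X2 w')
n, m, X = 2, 2, [X1, X2]

def entry_first(i, j):   # 1-indexed matrix entry of the first determinant
    x = X[i - 1]
    return (x**(-j + 1) * (1 + 1/x)**(j - 1) * (1 + w/x)**(n - j)
            - x**(-m - 2*n + j) * (1 + x)**(j - 1) * (1 + w*x)**(n - j))

def entry_second(i, j):  # 1-indexed matrix entry of the second determinant
    x = X[i - 1]
    return (x**(-j + 2) * (1 + x)**(j - 1) * (w + x)**(n - j)
            - x**(-m - n + j) * (1 + x)**(j - 1) * (1 + w*x)**(n - j))

first = sp.Matrix(n, n, lambda i, j: entry_first(i + 1, j + 1)).det() \
    / ((1 - 1/X1) * (1 - 1/X2) * (1 - 1/(X1*X2)))
second = sp.Matrix(n, n, lambda i, j: entry_second(i + 1, j + 1)).det() \
    / ((X1 - 1) * (X2 - 1) * (X1*X2 - 1))
assert sp.cancel(first - second) == 0
```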

B.2 Generating function of alternating sign triangles with respect to the positions of the $1$ -columns

Alternating sign triangles were recently introduced in [Reference Ayyer, Behrend and Fischer1].

Definition B.1. An alternating sign triangle (AST) with $n \ge 1$ rows is a triangular array with n centered rows of the following shape:

$$ \begin{align*}\begin{array}{ccccccc} a_{1,1} & a_{1,2} & \ldots & \ldots & \ldots & \ldots & a_{1,2n-1} \\ & a_{2,2} & \ldots & \ldots & \ldots & a_{2,2n-2} & \\ & & \ldots & \ldots & \ldots & & \\ & & & a_{n,n} & & \end{array} \end{align*} $$

such that $a_{i,j} \in \{0,1,-1\}$ , nonzero entries alternate in each row and column, all rows sum to $1$ and the topmost nonzero entry (if any) in each column is $1$ .

Next, we give an example of an AST with $4$ rows:

$$ \begin{align*}\begin{array}{ccccccc} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ & 0 & 1 & -1 &0 & 1 & \\ & & 0 & 0 & 1 & & \\ & & & 1. & & & \end{array} \end{align*} $$

It is known that $n \times n$ ASMs are equinumerous with ASTs with n rows, but no bijection has been found so far. It has even been possible to identify certain equidistributed statistics; see [Reference Fischer11, Reference Fischer10, Reference Ayyer, Behrend and Fischer1].

The columns of an AST sum to $0$ or $1$ . A column that sums to $1$ is said to be a $1$ -column. The central column is always a $1$ -column. Since the sum of all entries in an AST with n rows is n, there are precisely $n-1$ other $1$ -columns. A certain type of generating function with respect to the $1$ -columns has been derived in [Reference Fischer11, Theorem 7]. It involves one other statistic, which we introduce next: A $11$ -column is a $1$ -column with $1$ as bottom element, while a $10$ -column is a $1$ -column with $0$ as bottom element. For an AST $T$ , we define

$$ \begin{align*} \rho(T)= \# 11\text{-columns left of the central}& \text{column} \\ & + \# 10\text{-columns right of the central column} + 1. \end{align*} $$
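For instance, for the AST with $4$ rows displayed above, the $1$ -columns are the third, fourth, fifth and sixth columns, the fourth column being the central one. The only $1$ -column left of the central column is a $10$ -column (its bottom element is $0$ ), and the two $1$ -columns right of the central column are $11$ -columns, so that $\rho (T)=0+0+1=1$ .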

Theorem B.2. Let n be a positive integer, $1 \le r \le n$ and $0 \le j_1 < j_2 < \ldots < j_{n-1} \le 2n-3$ . The coefficient of $t^{r-1} X_1^{j_1} X_2^{j_2} \cdots X_{n-1}^{j_{n-1}}$ in

(B.3) $$ \begin{align} \prod_{i=1}^{n-1} (t+X_i) \prod_{1 \le i < j \le n-1} (1+X_i + X_i X_j)(X_j-X_i) \end{align} $$

is the number of ASTs T with n rows, $\rho (T)=r$ and $1$ -columns in positions $j_1,j_2,\ldots ,j_{n-1}$ , where we exclude the central column and count from the left starting with $0$ .
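Theorem B.2 can be confirmed for small n by brute force. The following sketch (ours, assuming sympy; the function names asts and statistics are ad hoc) enumerates all ASTs with $n \in \{2,3\}$ rows and compares the resulting statistics with the coefficients of (B.3) at strictly increasing exponents:

```python
# Brute-force check of Theorem B.2 for n = 2, 3.
import itertools
from collections import Counter
import sympy as sp

def asts(n):
    # row i (0-indexed) occupies columns i, ..., 2n-2-i; column j reaches
    # down to row min(j, 2n-2-j)
    cells = [(i, j) for i in range(n) for j in range(i, 2*n - 1 - i)]
    for vals in itertools.product((0, 1, -1), repeat=len(cells)):
        a = dict(zip(cells, vals))
        rows = [[a[i, j] for j in range(i, 2*n - 1 - i)] for i in range(n)]
        cols = [[a[i, j] for i in range(min(j, 2*n - 2 - j) + 1)]
                for j in range(2*n - 1)]
        nonzero = [[x for x in line if x] for line in rows + cols]
        if (all(sum(r) == 1 for r in rows)                         # row sums 1
                and all(x != y for nz in nonzero
                        for x, y in zip(nz, nz[1:]))               # alternation
                and all(nz[0] == 1 for nz in nonzero[n:] if nz)):  # topmost 1
            yield cols

def statistics(cols, n):
    # rho and the positions of the non-central 1-columns
    ones = [j for j in range(2*n - 1) if sum(cols[j]) == 1]
    rho = 1 + sum(1 for j in ones if j < n - 1 and cols[j][-1] == 1) \
            + sum(1 for j in ones if j > n - 1 and cols[j][-1] == 0)
    positions = tuple(j if j < n - 1 else j - 1 for j in ones if j != n - 1)
    return rho, positions

t = sp.Symbol('t')
for n in (2, 3):
    X = sp.symbols(f'X1:{n}')          # the variables X_1, ..., X_{n-1}
    B3 = sp.expand(sp.Mul(*[t + x for x in X])
                   * sp.Mul(*[(1 + X[i] + X[i]*X[j]) * (X[j] - X[i])
                              for i in range(n - 1)
                              for j in range(i + 1, n - 1)]))
    counts = Counter(statistics(cols, n) for cols in asts(n))
    for pos in itertools.combinations(range(2*n - 2), n - 1):
        for r in range(1, n + 1):
            c = B3.coeff(t, r - 1)
            for x, p in zip(X, pos):
                c = c.coeff(x, p)
            assert c == counts[(r, pos)]
```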

For what follows, the crucial question is whether the coefficient of $t^{r-1} X_1^{j_1} X_2^{j_2} \cdots X_{n-1}^{j_{n-1}}$ in (B.3) can also be given a meaning if $(j_1,\ldots ,j_{n-1})$ is not strictly increasing. No such interpretation is known so far.

Phrased differently, the theorem states that the coefficient of $X_1^{m_1} X_2^{m_2} \cdots X_{n-1}^{m_{n-1}}$ in

$$ \begin{align*} \prod_{i=1}^{n-1} (t+X_i^{-1}) X_i^{j_i} \prod_{1 \le i < j \le n-1} & (1+X_i^{-1} + X_i^{-1} X_j^{-1})(X_j^{-1}-X_i^{-1}) \\ &\qquad = \prod_{i=1}^{n-1} (1 + t X_i) X_i^{j_i-2n+3} \prod_{1 \le i < j \le n-1} (1+X_j + X_i X_j)(X_i-X_j) \end{align*} $$

is the generating function of ASTs with $1$ -columns in positions $j_1-m_1,j_2-m_2,\ldots ,j_{n-1}-m_{n-1}$ with respect to the weight $t^{\rho (T)-1}$ , provided that $j_1-m_1 < j_2 - m_2 < \cdots < j_{n-1}-m_{n-1}$ . Therefore, the coefficient of $X_1^{m_1} X_2^{m_2} \cdots X_{n-1}^{m_{n-1}}$ in

$$ \begin{align*}\mathbf{Sym}_{X_1,\ldots,X_{n-1}} \left[ \prod_{i=1}^{n-1} (1 + t X_i) X_i^{j_i-2n+3} \prod_{1 \le i < j \le n-1} (1+X_j + X_i X_j)(X_i-X_j) \right] \end{align*} $$

is the generating function of pairs of ASTs and permutations $\sigma $ of $\{1,2,\ldots ,n-1\}$ , such that $j_1-m_{\sigma (1)},j_2-m_{\sigma (2)},\ldots ,j_{n-1}-m_{\sigma (n-1)}$ are the positions of $1$ -columns, provided that $(j_1-m_{\sigma (1)}, j_2-m_{\sigma (2)},\ldots ,j_{n-1}-m_{\sigma (n-1)})$ is strictly increasing for all $\sigma $ . Note that it is possible to satisfy the strictly increasing condition, for instance, when $(j_1,\ldots ,j_{n-1})$ is strictly increasing, the differences between consecutive $j_l$ are large and the $m_l$ are small.

The expression is equal to

$$ \begin{align*}\prod_{i=1}^{n-1} (1+t X_i) X_i^{-2n+3} \prod_{1 \le i < j \le n-1} (X_i-X_j) \, \mathbf{ASym}_{X_1,\ldots,X_{n-1}} \left[ \prod_{i=1}^{n-1} X_i^{j_i} \prod_{1 \le i < j \le n-1} (1+X_j + X_i X_j) \right]. \end{align*} $$

We sum over all $p \le j_1 < j_2 < \ldots < j_{n-1} \le q$ and obtain

(B.4) $$ \begin{align} \prod_{i=1}^{n-1} & (1+t X_i) X_i^{-2n+3+p} \prod_{1 \le i < j \le n-1} (X_i-X_j) \nonumber\\ &\qquad \times \mathbf{ASym}_{X_1,\ldots,X_{n-1}} \left[ \prod_{1 \le i < j \le n-1} (1+X_j + X_i X_j) \sum_{0 \le j_1 < j_2 < \cdots < j_{n-1} \le q-p} X_1^{j_1} \cdots X_{n-1}^{j_{n-1}} \right]. \end{align} $$

Now, the coefficient of $X_1^{m_1} X_2^{m_2} \cdots X_{n-1}^{m_{n-1}}$ in this expression is the generating function of pairs of, let us say, extended ASTs and permutations $\sigma $ of $\{1,2,\ldots ,n-1\}$ such that if $(m_{\sigma (1)},\ldots ,m_{\sigma (n-1)})$ is added to the positions of the $1$ -columns, we obtain a strictly increasing sequence of integers between p and q. Here, ‘extended’ refers to the fact that we would need an extended version of Theorem B.2 as indicated above, since we cannot guarantee that $(j_1-m_{\sigma (1)},j_2-m_{\sigma (2)},\ldots ,j_{n-1}-m_{\sigma (n-1)})$ is strictly increasing when summing over all $p \le j_1 < j_2 < \ldots < j_{n-1} \le q$ .

An exception in this respect is the case when all $m_l=0$ . It follows that the constant term of (B.4) is the generating function of ASTs with n rows whose $1$ -columns are in positions between p and q. Using (1.8), this is equal to

$$ \begin{align*} \prod_{i=1}^{n-1} \frac{(1+t X_i) X_i^{-2n+3+p}}{1-X_i} \prod_{1 \le i < j \le n-1} \frac{X_i-X_j}{1-X_i X_j} \det_{1 \le i, j \le n-1} \left( X_i^{j-1} (1+X_i)^{j-1} - X_i^{q-p+2n-2j+1} (1+X_i)^{j-1} \right). \end{align*} $$
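As a consistency check for $n=2$ , $p=0$ and $q=2n-3=1$ (so that the positions of the $1$ -columns are unrestricted), the constant term of this expression can be compared with the generating function obtained from the ASTs themselves, reusing asts and statistics (and the sympy import) from the sketch after Theorem B.2:

```python
# Constant term of the formula above for n = 2, p = 0, q = 1; the determinant
# is 1 x 1 with entry 1 - X1^(q-p+2n-1).
t, X1 = sp.symbols('t X1')
n, p, q = 2, 0, 1
expr = (1 + t*X1) * X1**(-2*n + 3 + p) / (1 - X1) * (1 - X1**(q - p + 2*n - 1))
constant_term = sp.series(expr, X1, 0, 1).removeO().coeff(X1, 0)
gen_fun = sum(t**(statistics(cols, n)[0] - 1) for cols in asts(n))
assert sp.expand(constant_term - gen_fun) == 0  # both equal 1 + t
```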

Acknowledgements

This research was funded in part by the Austrian Science Fund (FWF) 10.55776/P34931 and 10.55776/F1002.

Competing interests

The author has no competing interests to declare.

Footnotes

1 They appeared first in [Reference Fischer and Schreier-Aigner13] as extended arrowed monotone triangles.

References

Ayyer, A., Behrend, R. and Fischer, I., ‘Extreme diagonally and antidiagonally symmetric alternating sign matrices of odd order’, Adv. Math. 367 (2020), 107125.
Bressoud, D. M., ‘Elementary proof of MacMahon’s conjecture’, J. Algebraic Combin. 7(3) (1998), 253–257.
Bressoud, D., Proofs and Confirmations. The Story of the Alternating Sign Matrix Conjecture (MAA Spectrum) (Mathematical Association of America and Cambridge University Press, Washington, DC, and Cambridge, 1999).
Di Francesco, P., ‘Twenty vertex model and domino tilings of the Aztec triangle’, Electron. J. Combin. 28(4) (2021), Paper No. 4.38.
Celine Fasenmyer, M., ‘Some generalized hypergeometric polynomials’, PhD Thesis, University of Michigan (ProQuest LLC, Ann Arbor, MI, 1946).
Fulton, W. and Harris, J., Representation Theory (Graduate Texts in Mathematics) vol. 129 (Springer-Verlag, New York, 1991).
Fischer, I. and Höngesberg, H., ‘Alternating sign matrices with reflective symmetry and plane partitions: n+3 pairs of equivalent statistics’, Preprint, 2022, arXiv:2207.04469.
Fischer, I., ‘A method for proving polynomial enumeration formulas’, J. Combin. Theory Ser. A 111 (2005), 37–58.
Fischer, I., ‘An operator formula for the number of halved monotone triangles with prescribed bottom row’, J. Combin. Theory Ser. A 116(3) (2009), 515–538.
Fischer, I., ‘A constant term approach to enumerating alternating sign trapezoids’, Adv. Math. 356 (2019).
Fischer, I., ‘Enumeration of alternating sign triangles using a constant term approach’, Trans. Amer. Math. Soc. 372 (2019), 1485–1508.
Fischer, I. and Konvalinka, M., ‘The mysterious story of square ice, piles of cubes, and bijections’, Proc. Natl. Acad. Sci. USA 117(38) (2020), 23460–23466.
Fischer, I. and Schreier-Aigner, F., ‘The relation between alternating sign matrices and descending plane partitions: n + 3 pairs of equivalent statistics’, Preprint, 2021, arXiv:2106.11568, to appear in Adv. Math.
Fischer, I. and Schreier-Aigner, F., ‘Alternating sign matrices and totally symmetric plane partitions’, Preprint, 2022, arXiv:2201.13142.
Gessel, I. and Viennot, G., ‘Binomial determinants, paths, and hook length formulae’, Adv. Math. 58(3) (1985), 300–321.
Gessel, I. and Viennot, G., ‘Determinants, paths and plane partitions’, Preprint, 1989.
Höngesberg, H., ‘A fourfold refined enumeration of alternating sign trapezoids’, Electron. J. Combin. 29(3) (2022), Paper No. 3.42.
Krattenthaler, C., ‘Plane partitions in the work of Richard Stanley and his school’, in The Mathematical Legacy of Richard P. Stanley (Amer. Math. Soc., Providence, RI, 2016), 231–261.
Koike, K. and Terada, I., ‘Young diagrammatic methods for the restriction of representations of complex classical Lie groups to reductive subgroups of maximal rank’, Adv. Math. 79(1) (1990), 104–135.
Lindström, B., ‘On the vector representations of induced matroids’, Bull. London Math. Soc. 5 (1973), 85–90.
Littlewood, D. E., The Theory of Group Characters and Matrix Representations of Groups (Oxford University Press, New York, 1940).
Macdonald, I. G., Symmetric Functions and Hall Polynomials (Oxford Classic Texts in the Physical Sciences), second edn. (The Clarendon Press, Oxford University Press, New York, 2015). With contributions by A. V. Zelevinsky and a foreword by Richard Stanley. Reprint of the 2008 paperback edition [MR1354144].
Proctor, R., ‘Young tableaux, Gelfand patterns, and branching rules for classical groups’, J. Algebra 164(2) (1994), 299–360.
Rains, E. and Warnaar, O., ‘Bounded Littlewood identities’, Mem. Amer. Math. Soc. 270 (2021), vii+115.
Schur, I., ‘Aufgabe 569’, Arch. Math. Phys. 27(3) (1918).
Schur, I., Gesammelte Abhandlungen, Vol. 3 (Springer, 1973).
Stanley, R., Enumerative Combinatorics. Volume 2 (Cambridge Studies in Advanced Mathematics) vol. 62 (Cambridge University Press, Cambridge, 1999).
Stembridge, J., ‘Nonintersecting paths, Pfaffians, and plane partitions’, Adv. Math. 83(1) (1990), 96–131.
Figure 1 An example of families of lattice paths in Theorem 4.1.

Figure 2 Illustration of Corollary 4.2 (1) for $n=7$ and $l=12$.

Figure 3 Illustration of Corollary 4.2 (1) for $n=7$ and $l=13$.