
Hilbert’s 17th problem in free skew fields

Published online by Cambridge University Press:  06 September 2021

Jurij Volčič*
Affiliation:
Department of Mathematics, Texas A&M University, TAMU 3368, College Station, TX 77843, USA; E-mail: volcic@math.tamu.edu

Abstract

This paper solves the rational noncommutative analogue of Hilbert’s 17th problem: if a noncommutative rational function is positive semidefinite on all tuples of Hermitian matrices in its domain, then it is a sum of Hermitian squares of noncommutative rational functions. This result is a generalisation and culmination of earlier positivity certificates for noncommutative polynomials or rational functions without Hermitian singularities. More generally, a rational Positivstellensatz for free spectrahedra is given: a noncommutative rational function is positive semidefinite or undefined at every matricial solution of a linear matrix inequality $L\succeq 0$ if and only if it belongs to the rational quadratic module generated by L. The essential intermediate step toward this Positivstellensatz for functions with singularities is an extension theorem for invertible evaluations of linear matrix pencils.

Type
Algebra
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

1 Introduction

In his famous problem list of 1900, Hilbert asked whether every positive rational function can be written as a sum of squares of rational functions. The affirmative answer by Artin in 1927 laid the ground for the rise of real algebraic geometry [BCR98]. Several other sum-of-squares certificates (Positivstellensätze) for positivity on semialgebraic sets followed; since the detection of sums of squares became viable with the emergence of semidefinite programming [WSV00], these certificates play a fundamental role in polynomial optimisation [Las01, BPT13].

Positivstellensätze are also essential in the study of polynomial and rational inequalities in matrix variables, which splits into two directions. The first one deals with inequalities where the size of the matrix arguments is fixed [PS76, KŠV18]. The second direction addresses the positivity of noncommutative polynomials and rational functions when matrix arguments of all finite sizes are considered. Such questions naturally arise in control systems [dOHMP09], operator algebras [Oza16] and quantum information theory [DLTW08, P-KRR+19]. This (dimension-)free real algebraic geometry started with the seminal work of Helton [Hel02] and McCullough [McC01], who proved that a noncommutative polynomial is positive semidefinite on all tuples of Hermitian matrices precisely when it is a sum of Hermitian squares of noncommutative polynomials. The purpose of this paper is to extend this result to noncommutative rational functions.

Let $x=(x_1,\dotsc ,x_d)$ be freely noncommuting variables. The free algebra $\mathbb {C}\!\mathop {<}\! x\!\mathop {>}$ of noncommutative polynomials admits a universal skew field of fractions $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$, also called the free skew field [Coh95, CR99], whose elements are noncommutative rational functions. We endow $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$

with the unique involution $*$ that fixes the variables and conjugates the scalars. One can consider positivity of noncommutative rational functions on tuples of Hermitian matrices. For example, consider a noncommutative rational function ${\mathbb{r}}$ in the variables $x=(x_1,x_2,x_3,x_4)$. It turns out that ${\mathbb{r}}(X)$ is a positive semidefinite matrix for every tuple of Hermitian matrices $X=(X_1,X_2,X_3,X_4)$ belonging to the domain of ${\mathbb{r}}$ (meaning $\ker X_1\cap \ker X_2=\{0\}$ in this particular case). One way to certify this is by observing that ${\mathbb{r}}={\mathbb{r}}_1{\mathbb{r}}_1^*+{\mathbb{r}}_2{\mathbb{r}}_2^*$, where

$$ \begin{align*} {\mathbb{r}}_1=\left(x_4 - x_3 x_1^{-1} x_2\right) x_2\left(x_1^2+x_2^2\right)^{-1}x_1, \qquad {\mathbb{r}}_2=\left(x_4 - x_3 x_1^{-1} x_2\right) \left(1 + x_2 x_1^{-2} x_2\right)^{-1}. \end{align*} $$
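As a numerical sanity check (not part of the original text), one can evaluate ${\mathbb{r}}_1$ and ${\mathbb{r}}_2$ at a random Hermitian tuple and verify that ${\mathbb{r}}_1{\mathbb{r}}_1^*+{\mathbb{r}}_2{\mathbb{r}}_2^*$ is positive semidefinite there; the sketch below does this in Python with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

def rand_herm(n):
    """A random n x n complex Hermitian matrix."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

X1, X2, X3, X4 = (rand_herm(n) for _ in range(4))
inv, I = np.linalg.inv, np.eye(n)

# r1 = (x4 - x3 x1^{-1} x2) x2 (x1^2 + x2^2)^{-1} x1
r1 = (X4 - X3 @ inv(X1) @ X2) @ X2 @ inv(X1 @ X1 + X2 @ X2) @ X1
# r2 = (x4 - x3 x1^{-1} x2) (1 + x2 x1^{-2} x2)^{-1}
r2 = (X4 - X3 @ inv(X1) @ X2) @ inv(I + X2 @ inv(X1 @ X1) @ X2)

s = r1 @ r1.conj().T + r2 @ r2.conj().T   # sum of Hermitian squares
print(np.linalg.eigvalsh(s).min())        # nonnegative up to rounding
```

Of course, a sum $AA^*+BB^*$ is positive semidefinite at any point where it is defined; the content of the decomposition is precisely that it certifies the positivity of ${\mathbb{r}}$ on its whole Hermitian domain.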

The solution of Hilbert’s 17th problem in the free skew field presented in this paper (Corollary 5.4) states that every ${\mathbb{r}}\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ that is positive semidefinite on its Hermitian domain is a sum of Hermitian squares in $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$. This statement was proved in [KPV17] for noncommutative rational functions ${\mathbb{r}}$ that are regular, meaning that ${\mathbb{r}}(X)$ is well defined for every tuple of Hermitian matrices. As with most noncommutative Positivstellensätze, at the heart of this result is a variation of the Gelfand–Naimark–Segal (GNS) construction. Namely, if ${\mathbb{r}}$ is not a sum of Hermitian squares, one can construct a tuple of finite-dimensional Hermitian operators Y that is a sensible candidate for witnessing the failure of positive semidefiniteness of ${\mathbb{r}}$. However, the construction itself does not guarantee that Y actually belongs to the domain of ${\mathbb{r}}$. This is not a problem if one assumes that ${\mathbb{r}}$ is regular, as was done in [KPV17]; it is worth mentioning, though, that deciding the regularity of a noncommutative rational function is a challenge of its own, as observed there. In the present paper, the domain issue is resolved with an extension result: the tuple Y obtained from the GNS construction can be extended to a tuple of finite-dimensional Hermitian operators in the domain of ${\mathbb{r}}$ without losing the desired features of Y.

The first main theorem of this paper pertains to linear matrix pencils and is key for the extension already mentioned. It might also be of independent interest in the study of quiver representations and semi-invariants [Kin94, DM17]. Let $\otimes $ denote the Kronecker product of matrices.

Theorem A. Let $\Lambda \in \operatorname {\mathrm {M}}_{e}(\mathbb {C})^d$ be such that $\Lambda _1\otimes X_1+\dotsb +\Lambda _d\otimes X_d$ is invertible for some $X\in \operatorname {\mathrm {M}}_{k}(\mathbb {C})^d$ . If $Y\in \operatorname {\mathrm {M}}_{\ell }(\mathbb {C})^d$ , $Y'\in \operatorname {\mathrm {M}}_{m\times \ell }(\mathbb {C})^d$ and $Y''\in \operatorname {\mathrm {M}}_{\ell \times m}(\mathbb {C})^d$ are such that

$$ \begin{align*} \Lambda_1\otimes \begin{pmatrix} Y_1 \\ Y^{\prime}_1\end{pmatrix}+\dotsb+ \Lambda_d\otimes \begin{pmatrix} Y_d \\ Y^{\prime}_d\end{pmatrix} \qquad \text{and}\qquad \Lambda_1\otimes \begin{pmatrix} Y_1 & Y^{\prime\prime}_1\end{pmatrix}+\dotsb+ \Lambda_d\otimes \begin{pmatrix} Y_d & Y^{\prime\prime}_d\end{pmatrix} \end{align*} $$

have full rank, then there exists $Z\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ for some $n\ge m$ such that

$$ \begin{align*}\Lambda_1\otimes \left(\begin{array}{cc} Y_1 & \begin{matrix} Y^{\prime\prime}_1 & 0\end{matrix} \\ \begin{matrix} Y^{\prime}_1 \\ 0\end{matrix} & Z_1 \end{array}\right) +\dotsb+ \Lambda_d\otimes \left(\begin{array}{cc} Y_d & \begin{matrix} Y^{\prime\prime}_d & 0\end{matrix} \\ \begin{matrix} Y^{\prime}_d \\ 0\end{matrix} & Z_d \end{array}\right) \end{align*} $$

is invertible.
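To make the statement concrete, here is a minimal numerical illustration (our own toy instance, not from the paper) with the full pencil $\Lambda = E_{11}x_1 + E_{22}x_2$ of size $e=2$ and scalar data $\ell =m=1$; in this tiny case even $Z_j = I$ already produces an invertible evaluation:

```python
import numpy as np

# Full pencil Lambda = E11 x1 + E22 x2 of size e = 2 (illustrative choice)
L = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

def pencil(Xs):
    """Evaluate Lambda at a tuple of rectangular matrices via Kronecker products."""
    return sum(np.kron(L[j], Xs[j]) for j in range(2))

# Scalar data (l = m = 1): Y_j = 0 is singular, but the bordered column and
# row blocks satisfy the full-rank hypotheses of Theorem A.
Y   = [np.zeros((1, 1)), np.zeros((1, 1))]
Yp  = [np.ones((1, 1)), np.ones((1, 1))]    # Y'
Ypp = [np.ones((1, 1)), np.ones((1, 1))]    # Y''

col = pencil([np.vstack([Y[j], Yp[j]]) for j in range(2)])
row = pencil([np.hstack([Y[j], Ypp[j]]) for j in range(2)])
assert np.linalg.matrix_rank(col) == col.shape[1]   # full column rank
assert np.linalg.matrix_rank(row) == row.shape[0]   # full row rank

# Extend by Z_j = I_2 (n = 2), following the theorem's block pattern
n = 2
Z = [np.eye(n), np.eye(n)]
ext = [np.block([[Y[j], np.hstack([Ypp[j], np.zeros((1, n - 1))])],
                 [np.vstack([Yp[j], np.zeros((n - 1, 1))]), Z[j]]])
       for j in range(2)]
print(abs(np.linalg.det(pencil(ext))) > 1e-9)   # True: the extension is invertible
```

The theorem guarantees that a suitable $Z$ of some size $n$ always exists under the stated rank hypotheses; in this degenerate example a generic $Z$ already works.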

See Theorem 3.3 for the proof. Together with a truncated rational imitation of the GNS construction, Theorem A leads to a rational Positivstellensatz on free spectrahedra. Given a monic Hermitian pencil $L=I+H_1x_1+\dotsb +H_dx_d$, the associated free spectrahedron $\mathcal {D}(L)$ is the set of Hermitian tuples X satisfying the linear matrix inequality $L(X)\succeq 0$. Since every convex solution set of a noncommutative polynomial is a free spectrahedron [HM12], the following statement is called a rational convex Positivstellensatz; it generalises its analogues in the polynomial context [HKM12] and the regular rational context [Pas18].

Theorem B. Let L be a Hermitian monic pencil and let ${\mathbb{r}}={\mathbb{r}}^*\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$. Then ${\mathbb{r}}\succeq 0$ on $\mathcal {D}(L)\cap \operatorname {\mathrm {dom}}{\mathbb{r}}$ if and only if ${\mathbb{r}}$ belongs to the rational quadratic module generated by L:

$$ \begin{align*} {\mathbb{r}}={\mathbb{r}}_1^*{\mathbb{r}}_1+\dotsb+{\mathbb{r}}_m^*{\mathbb{r}}_m+\mathbb{v}_1^*L\mathbb{v}_1+\dotsb+\mathbb{v}_n^*L\mathbb{v}_n \end{align*} $$

where ${\mathbb{r}}_i\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ and $\mathbb{v}_j$ are vectors over $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$.
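The "if" direction of Theorem B is elementary and can be checked numerically: evaluating any element of the rational quadratic module at a tuple $X\in \mathcal {D}(L)$ yields a positive semidefinite matrix. A toy sketch follows (the pencil $L$, the function ${\mathbb{r}}_1=x$ and the vector $\mathbb{v}=(1,x)^{\mathsf T}$ are our own illustrative choices):

```python
import numpy as np

# Monic Hermitian pencil L = I + H x in one variable
H = np.array([[0.5, 0.0], [0.0, -0.5]])

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n))
X = (A + A.T) / 2
X /= 2 * np.linalg.norm(X, 2)              # scale so that L(X) >= 0, i.e. X in D(L)

LX = np.eye(2 * n) + np.kron(H, X)
assert np.linalg.eigvalsh(LX).min() >= 0   # X lies in the free spectrahedron D(L)

# r = r1* r1 + v* L v with r1 = x and v = (1, x)^T
V = np.vstack([np.eye(n), X])              # v evaluated at X, stacked as a 2n x n block
rX = X @ X + V.T @ LX @ V                  # X^2 is psd, and V^T L(X) V is psd
print(np.linalg.eigvalsh(rX).min() >= -1e-10)   # True: r(X) >= 0
```

The substance of the theorem is the converse: positivity on $\mathcal {D}(L)\cap \operatorname {\mathrm {dom}}{\mathbb{r}}$ forces membership in the rational quadratic module.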

A more precise quantitative version is given in Theorem 5.2 and has several consequences. The solution of Hilbert’s 17th problem in $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ is obtained by taking $L=1$ in Corollary 5.4. Versions of Theorem B for invariant (Corollary 5.7) and real (Corollary 5.8) noncommutative rational functions are also given. Furthermore, it is shown that the rational Positivstellensatz also holds for a family of quadratic polynomials describing nonconvex sets (Subsection 5.4). As a contribution to optimisation, Theorem B implies that the eigenvalue optimum of a noncommutative rational function on a free spectrahedron can be obtained by solving a single semidefinite program (Subsection 5.5), much like in the noncommutative polynomial case [BPT13, BKP16] (but not in the classical commutative setting).

Finally, Section 6 contains complementary results about domains of noncommutative rational functions. It is shown that every ${\mathbb{r}}\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ can be represented by a formal rational expression that is well defined at every Hermitian tuple in the domain of ${\mathbb{r}}$ (Proposition 2.1); this statement fails in general if arbitrary matrix tuples are considered. On the other hand, a Nullstellensatz for cancellation of non-Hermitian singularities is given in Proposition 6.3.

2 Preliminaries

In this section we establish terminology, notation and preliminary results on noncommutative rational functions that are used throughout the paper. Let $\operatorname {\mathrm {M}}_{m\times n}(\mathbb {C})$ denote the space of complex $m\times n$ matrices, and $\operatorname {\mathrm {M}}_{n}(\mathbb {C})=\operatorname {\mathrm {M}}_{n\times n}(\mathbb {C})$ . Let $\operatorname {\mathrm {H}}_{n}(\mathbb {C})$ denote the real space of Hermitian $n\times n$ matrices. For $X=(X_1,\dotsc ,X_d)\in \operatorname {\mathrm {M}}_{m\times n}(\mathbb {C})^d$ , $A\in \operatorname {\mathrm {M}}_{p\times m}(\mathbb {C})$ and $B\in \operatorname {\mathrm {M}}_{n\times q}(\mathbb {C})$ , we write

$$ \begin{align*} AXB= (AX_1B,\dotsc,AX_dB)\in \operatorname{\mathrm{M}}_{p\times q}(\mathbb{C})^d, \qquad X^*=\left(X_1^*,\dotsc,X_d^*\right)\in\operatorname{\mathrm{M}}_{n\times m}(\mathbb{C})^d. \end{align*} $$

2.1 Free skew field

We define noncommutative rational functions using formal rational expressions and their matrix evaluations, as in [K-VV12]. Formal rational expressions are syntactically valid combinations of scalars, freely noncommuting variables $x=(x_1,\dotsc ,x_d)$, rational operations and parentheses. More precisely, a formal rational expression is an ordered (from left to right) rooted tree whose leaves have labels from $\mathbb {C}\cup \{x_1,\dotsc ,x_d\}$, and every other node either is labelled $+$ or $\times $ and has two children or is labelled ${}^{-1}$ and has one child. For example, $((2+x_1)^{-1}x_2)x_1^{-1}$ is a formal rational expression corresponding to the following ordered tree:

A subexpression of a formal rational expression r is any formal rational expression which appears in the construction of r (i.e., as a subtree). For example, all subexpressions of $\left ((2+x_1)^{-1}x_2\right )x_1^{-1}$ are

$$ \begin{align*} 2,\ x_1,\ 2+x_1,\ (2+x_1)^{-1},\ x_2,\ (2+x_1)^{-1}x_2,\ x_1^{-1},\ \left((2+x_1)^{-1}x_2\right)x_1^{-1}. \end{align*} $$

Given a formal rational expression r and $X\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ , the evaluation $r(X)$ is defined in the natural way if all inverses appearing in r exist at X. The set of all $X\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ such that r is defined at X is denoted $\operatorname {\mathrm {dom}}_n r$ . The (matricial) domain of r is

$$ \begin{align*} \operatorname{\mathrm{dom}} r = \bigcup_{n\in\mathbb{N}} \operatorname{\mathrm{dom}}_n r. \end{align*} $$
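The definitions above can be prototyped directly. The following sketch (an illustration of the definitions, not the paper's formalism) encodes a formal rational expression as a nested-tuple tree and evaluates it on a matrix tuple, returning `None` when some inverse fails to exist, that is, when the tuple lies outside $\operatorname {\mathrm {dom}}_n r$:

```python
import numpy as np

def evaluate(expr, X):
    """Evaluate a formal rational expression, encoded as a nested-tuple tree,
    at a tuple X of n x n matrices. Returns None when some inverse fails to
    exist, i.e. when X lies outside dom_n of the expression."""
    op = expr[0]
    if op == 'var':                        # leaf labelled x_j (0-indexed)
        return X[expr[1]]
    if op == 'const':                      # leaf labelled by a scalar
        return expr[1] * np.eye(X[0].shape[0], dtype=complex)
    a = evaluate(expr[1], X)
    if a is None:
        return None
    if op == 'inv':                        # node labelled ^{-1}, one child
        if abs(np.linalg.det(a)) < 1e-12:  # numerical singularity tolerance
            return None
        return np.linalg.inv(a)
    b = evaluate(expr[2], X)               # nodes + and x have two children
    if b is None:
        return None
    return a + b if op == '+' else a @ b

# r = ((2 + x1)^{-1} x2) x1^{-1}, the example from the text
r = ('*', ('*', ('inv', ('+', ('const', 2), ('var', 0))), ('var', 1)),
     ('inv', ('var', 0)))

X = (np.diag([1.0, 3.0]).astype(complex),
     np.array([[0, 1], [1, 0]], dtype=complex))
print(evaluate(r, X))          # defined: both inverses exist at X
X_bad = (np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex))
print(evaluate(r, X_bad))      # None: x1 = 0 lies outside dom_2 r
```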

Note that $\operatorname {\mathrm {dom}}_n r$ is a Zariski open set in $\operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ for every $n\in \mathbb {N}$. A formal rational expression r is nondegenerate if $\operatorname {\mathrm {dom}} r\neq \emptyset $; let $\mathfrak {R}_{\mathbb {C}}(x)$ denote the set of all nondegenerate formal rational expressions. On $\mathfrak {R}_{\mathbb {C}}(x)$ we define an equivalence relation: $r_1\sim r_2$ if and only if $r_1(X)=r_2(X)$ for all $X\in \operatorname {\mathrm {dom}} r_1\cap \operatorname {\mathrm {dom}} r_2$. Equivalence classes with respect to this relation are called noncommutative rational functions. By [K-VV12, Proposition 2.2] they form a skew field, denoted $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$, which is the universal skew field of fractions of the free algebra $\mathbb {C}\!\mathop {<}\! x\!\mathop {>}$ by [Coh95, Section 4.5]. The equivalence class of $r\in \mathfrak {R}_{\mathbb {C}}(x)$ is denoted ${\mathbb{r}}$; we also write $r\in {\mathbb{r}}$ and say that r is a representative of the noncommutative rational function ${\mathbb{r}}$.

There is a unique involution $*$ on $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ determined by $\alpha ^*=\overline {\alpha }$ for $\alpha \in \mathbb {C}$ and $x_j^*=x_j$ for $j=1,\dotsc ,d$. Furthermore, this involution lifts to an involutive map $*$ on the set $\mathfrak {R}_{\mathbb {C}}(x)$: in terms of ordered trees, $*$ transposes a tree from left to right and conjugates the scalar labels. Note that $X\in \operatorname {\mathrm {dom}} r$ implies $X^*\in \operatorname {\mathrm {dom}} r^*$ for $r\in \mathfrak {R}_{\mathbb {C}}(x)$.

2.2 Hermitian domain

For $r\in \mathfrak {R}_{\mathbb {C}}(x)$ , let $\operatorname {\mathrm {hdom}}_n r= \operatorname {\mathrm {dom}}_n r\cap \operatorname {\mathrm {H}}_{n}(\mathbb {C})^d$ . Then

$$ \begin{align*} \operatorname{\mathrm{hdom}} r = \bigcup_{n\in\mathbb{N}} \operatorname{\mathrm{hdom}}_n r \end{align*} $$

is the Hermitian domain of r. Note that $\operatorname {\mathrm {hdom}}_n r$ is Zariski dense in $\operatorname {\mathrm {dom}}_n r$, because $\operatorname {\mathrm {H}}_{n}(\mathbb {C})$ is Zariski dense in $\operatorname {\mathrm {M}}_{n}(\mathbb {C})$ and $\operatorname {\mathrm {dom}}_n r$ is Zariski open in $\operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$. Finally, we define the (Hermitian) domain of a noncommutative rational function: for ${\mathbb{r}}\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$, let

$$ \begin{align*} \operatorname{\mathrm{dom}}{\mathbb{r}} = \bigcup_{r\in{\mathbb{r}}} \operatorname{\mathrm{dom}} r,\qquad \operatorname{\mathrm{hdom}}{\mathbb{r}} = \bigcup_{r\in{\mathbb{r}}} \operatorname{\mathrm{hdom}} r. \end{align*} $$

By the definition of the equivalence relation on nondegenerate expressions, ${\mathbb{r}}$ has a well-defined evaluation at $X\in \operatorname {\mathrm {dom}}{\mathbb{r}}$, written ${\mathbb{r}}(X)$, which equals $r(X)$ for any representative r of ${\mathbb{r}}$ that has X in its domain. The following proposition is a generalisation of [KPV17, Proposition 3.3] and is proved in Subsection 6.1:

Proposition 2.1. For every ${\mathbb{r}}\in \mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$ there exists $r\in {\mathbb{r}}$ such that $\operatorname {\mathrm {hdom}} {\mathbb{r}}=\operatorname {\mathrm {hdom}} r$.

Remark 2.2. There are noncommutative rational functions ${\mathbb{r}}$ such that $\operatorname {\mathrm {dom}} {\mathbb{r}}\neq \operatorname {\mathrm {dom}} r$ for every $r\in {\mathbb{r}}$; see Example 6.2 or [Vol17, Example 3.13].

2.3 Linear representation of a formal rational expression

A fundamental tool for handling noncommutative rational functions is the linear representation (also called linearisation or realisation) [CR99, Coh95, HMS18]. Set $r\in \mathfrak {R}_{\mathbb {C}}(x)$. By [HMS18, Theorem 4.2 and Algorithm 4.3] there exist $e\in \mathbb {N}$, vectors $u,v\in \mathbb {C}^e$ and an affine matrix pencil $M=M_0+M_1x_1+\dotsb +M_dx_d$, with $M_j\in \operatorname {\mathrm {M}}_{e}(\mathbb {C})$, satisfying the following. For every unital $\mathbb {C}$-algebra $\mathcal {A}$ and $a\in \mathcal {A}^d$,

  1. (i) if r can be evaluated at a, then $M(a)\in \operatorname {\mathrm {GL}}_e(\mathcal {A})$ and $r(a) = u^* M(a)^{-1}v$ ;

  2. (ii) if $M(a)\in \operatorname {\mathrm {GL}}_e(\mathcal {A})$ and $\mathcal {A}=\operatorname {\mathrm {M}}_{n}(\mathbb {C})$ for some $n\in \mathbb {N}$ , then r can be evaluated at a.

We say that the triple $(u,M,v)$ is a linear representation of r of size e. Usually, linear representations are defined for noncommutative rational functions and with less emphasis on domains; however, the definition here is more convenient for the purpose of this paper.
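For a concrete feel, here is a small hand-built linear representation (our own toy example, not the output of the cited algorithm) of $r = x_1^{-1}x_2$, of size $e=2$, verified on random matrix arguments:

```python
import numpy as np

# A hand-built linear representation (u, M, v) of r = x1^{-1} x2, size e = 2:
# M = M0 + M1 x1 + M2 x2 and r(a) = u* M(a)^{-1} v.
M0 = np.array([[0.0, 0.0], [0.0, -1.0]])
M1 = np.array([[1.0, 0.0], [0.0, 0.0]])
M2 = np.array([[0.0, 1.0], [0.0, 0.0]])
u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

rng = np.random.default_rng(2)
n = 3
A1 = rng.standard_normal((n, n))   # generically invertible
A2 = rng.standard_normal((n, n))
I = np.eye(n)

# Evaluate the pencil at (A1, A2): M(A) = M0 (x) I + M1 (x) A1 + M2 (x) A2
MA = np.kron(M0, I) + np.kron(M1, A1) + np.kron(M2, A2)
r_via_rep = np.kron(u, I) @ np.linalg.inv(MA) @ np.kron(v, I).T
print(np.allclose(r_via_rep, np.linalg.inv(A1) @ A2))   # True
```

Here $M(A)=\left(\begin{smallmatrix} A_1 & A_2 \\ 0 & -I\end{smallmatrix}\right)$ is invertible exactly when $A_1$ is, illustrating how invertibility of the pencil evaluation governs membership in the domain.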

Remark 2.3. In the definition of a linear representation, (ii) is valid not only for $\operatorname {\mathrm {M}}_{n}(\mathbb {C})$ but more broadly for stably finite algebras [HMS18, Lemma 5.2]. However, it may fail in general; for example, it fails for the algebra of all bounded operators on an infinite-dimensional Hilbert space.

We will also require the following proposition on pencils that is a combination of various existing results:

Proposition 2.4 [Coh95, K-VV12, DM17].

Let M be an affine pencil of size e. The following are equivalent:

  1. (i) M is invertible over $\mathbb {C}\!\mathop {(\!<}\! x\!\mathop {>\!)}$.

  2. (ii) There are $n\in \mathbb {N}$ and $X\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ such that $\det M(X)\neq 0$ .

  3. (iii) For every $n\ge e-1$ , there exists $X\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ such that $\det M(X)\neq 0$ .

  4. (iv) If $U\in \operatorname {\mathrm {M}}_{e'\times e}(\mathbb {C})$ and $V\in \operatorname {\mathrm {M}}_{e\times e''}(\mathbb {C})$ satisfy $UMV=0$ , then $\operatorname {\mathrm {rk}} U+\operatorname {\mathrm {rk}} V\le e$ .

Proof. (i) $\Leftrightarrow $ (ii) follows by the construction of the free skew field via matrix evaluations (compare [K-VV12, Proposition 2.1]). (iii) $\Rightarrow $ (ii) is trivial, and (ii) $\Rightarrow $ (iii) holds by [DM17, Theorem 1.8]. (iv) $\Leftrightarrow $ (i) follows from [Coh95, Corollaries 4.5.9 and 6.3.6], because the free algebra $\mathbb {C}\!\mathop {<}\! x\!\mathop {>}$ is a free ideal ring [Coh95, Theorem 5.4.1].

An affine matrix pencil is full [Coh95, Section 1.4] if it satisfies the (equivalent) properties of Proposition 2.4.

Remark 2.5. If $r\in \mathfrak {R}_{\mathbb {C}}(x)$ admits a linear representation of size e, then $\operatorname {\mathrm {hdom}}_n r\neq \emptyset $ for $n\ge e-1$ , by Proposition 2.4 and the Zariski denseness of $\operatorname {\mathrm {hdom}}_n r$ in $\operatorname {\mathrm {dom}}_n r$ .

3 An extension theorem

An affine matrix pencil M of size e is irreducible if $UMV=0$ for nonzero matrices $U\in \operatorname {\mathrm {M}}_{e'\times e}(\mathbb {C})$ and $V\in \operatorname {\mathrm {M}}_{e\times e''}(\mathbb {C})$ implies $\operatorname {\mathrm {rk}} U+\operatorname {\mathrm {rk}} V\le e-1$. In other words, a pencil is not irreducible if it can be put into a $2\times 2$ block upper-triangular form with square diagonal blocks $\left (\begin {smallmatrix}\star & \star \\ 0 & \star \end {smallmatrix}\right )$ by a left and a right basis change. Every irreducible pencil is full. On the other hand, every full pencil is, up to a left and a right basis change, equal to a block upper-triangular pencil whose diagonal blocks are irreducible pencils. In terms of quiver representations [Kin94], $M=M_0+\sum _{j=1}^dM_jx_j$ is full/irreducible if and only if the $(e,e)$-dimensional representation $(M_0,M_1,\dotsc ,M_d)$ of the $(d+1)$-Kronecker quiver is $(1,-1)$-semistable/stable.

For the purpose of this section we extend evaluations of linear matrix pencils to tuples of rectangular matrices. If $\Lambda =\sum _{j=1}^d \Lambda _jx_j$ is of size e and $X\in \operatorname {\mathrm {M}}_{\ell \times m}(\mathbb {C})^d$ , then

$$ \begin{align*} \Lambda(X)=\sum_{j=1}^d \Lambda_j\otimes X_j \in \operatorname{\mathrm{M}}_{e\ell\times em}(\mathbb{C}). \end{align*} $$
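In coordinates this is just a sum of Kronecker products; a quick check of the shape and the generic rank (with an illustrative pencil of our own choosing):

```python
import numpy as np

# Pencil of size e = 2 in d = 2 variables: Lambda = E11 x1 + E22 x2
L1 = np.diag([1.0, 0.0])
L2 = np.diag([0.0, 1.0])

# Rectangular evaluation: X_j of shape (l, m) gives Lambda(X) of shape (e*l, e*m)
l, m = 2, 3
rng = np.random.default_rng(3)
X1 = rng.standard_normal((l, m))
X2 = rng.standard_normal((l, m))
LX = np.kron(L1, X1) + np.kron(L2, X2)

print(LX.shape)                         # (4, 6), i.e. (e*l, e*m)
print(np.linalg.matrix_rank(LX) == 4)   # True: generically full (row) rank e*l
```

For this diagonal pencil, $\Lambda (X)$ is the block diagonal matrix $\operatorname{diag}(X_1,X_2)$, so its rank is $\operatorname{rk} X_1+\operatorname{rk} X_2$.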

The following lemma and proposition rely on an ampliation trick in a free algebra to demonstrate the existence of specific invertible evaluations of full pencils (see [HKV20, Section 2.1] for another argument involving such ampliations):

Lemma 3.1. Let $\Lambda = \sum _{j=1}^d\Lambda _jx_j$ be a homogeneous irreducible pencil of size e. Set $\ell \le m$ and denote $n=(m-\ell )(e-1)$ . Given $C\in \operatorname {\mathrm {M}}_{me\times \ell e}(\mathbb {C})$ , consider the pencil $\widetilde {\Lambda }$ of size $(m+n)e$ in $d(m+n)(n+m-\ell )$ variables $z_{jpq}$ :

where $\widehat {E}_{p,q} \in \operatorname {\mathrm {M}}_{m\times (n+m-\ell )}(\mathbb {C})$ and

are the standard matrix units. If C has full rank, then the pencil $\widetilde {\Lambda }$ is full.

Proof. Suppose U and V are constant matrices with $e(m+n)$ columns and $e(m+n)$ rows, respectively, that satisfy $U\widetilde {\Lambda }V=0$ . There is nothing to prove if $U=0$ , so let $U\neq 0$ . Write

$$ \begin{align*} U=\begin{pmatrix} U_1 & \dotsb & U_{m+n}\end{pmatrix}, \qquad V=\begin{pmatrix} V_0 \\ V_1 \\ \vdots \\ V_{n+m-\ell}\end{pmatrix}, \end{align*} $$

where each $U_p$ has e columns, $V_0$ has $\ell e$ rows and each $V_q$ with $q>0$ has e rows. Also let $U_0=\begin {pmatrix}U_1 & \cdots & U_m\end {pmatrix}$ . Then $U\widetilde {\Lambda }V=0$ implies

(3.1) $$ \begin{align} U_0CV_0 = 0, \end{align} $$
(3.2) $$ \begin{align} U_p\Lambda V_q = 0, \quad 1\le p\le m+n,\ 1\le q\le n+m-\ell. \end{align} $$

Since C has full rank, equation (3.1) implies $\operatorname {\mathrm {rk}} U_0+\operatorname {\mathrm {rk}} V_0\le me$. Because $U\neq 0$, we have $U_{p'}\neq 0$ for some $1\le p'\le m+n$. Since $\Lambda $ is irreducible, equation (3.2) then implies $\operatorname {\mathrm {rk}} V_q\le e-1$ and $\operatorname {\mathrm {rk}} U_p+\operatorname {\mathrm {rk}} V_q\le e-1$ for all $p,q>0$. Then

$$ \begin{align*} \operatorname{\mathrm{rk}} U+\operatorname{\mathrm{rk}} V & \le \operatorname{\mathrm{rk}} U_0+\operatorname{\mathrm{rk}} V_0+\sum_{p=m+1}^{m+n} \operatorname{\mathrm{rk}} U_p+\sum_{q=1}^{n+m-\ell}\operatorname{\mathrm{rk}} V_q \\ & \le me+n(e-1)+(m-\ell)(e-1) \\ & = (m+n)e \end{align*} $$

by the choice of n. Therefore $\widetilde {\Lambda }$ is full.

Proposition 3.2. Let $\Lambda $ be a homogeneous full pencil of size e, and let $X\in \operatorname {\mathrm {M}}_{m\times \ell }(\mathbb {C})^d$ with $\ell \le m$ be such that $\Lambda (X)$ has full rank. Then there exist $\widehat {X}\in \operatorname {\mathrm {M}}_{m\times (n+m-\ell )}(\mathbb {C})^d$ and $\check {X}\in \operatorname {\mathrm {M}}_{n\times (n+m-\ell )}(\mathbb {C})^d$ for some $n\in \mathbb {N}$ such that

(3.3) $$ \begin{align} \det \Lambda \begin{pmatrix} X & \widehat{X} \\ 0 & \check{X} \end{pmatrix} \neq 0. \end{align} $$

Proof. A full pencil is, up to a left and a right basis change, equal to a block upper-triangular pencil with irreducible diagonal blocks. Suppose that the proposition holds for irreducible pencils; since the set of pairs satisfying equation (3.3) is Zariski open, the proposition then also holds for full pencils. Thus we may assume without loss of generality that $\Lambda $ is irreducible.

Let $n_1=(m-\ell )(e-1)$ and $e_1=(m+n_1)e$. By Lemma 3.1 applied to $C=\sum _{j=1}^d X_j\otimes \Lambda _j$ and Proposition 2.4, there exists $Z \in \operatorname {\mathrm {M}}_{e_1-1}(\mathbb {C})^{d\left (m+n_1\right )\left (n_1+m-\ell \right )}$ such that $\widetilde {\Lambda }(Z)$ is invertible. Conjugating $\widetilde {\Lambda }(Z)$ by a suitable permutation matrix yields an invertible matrix of the bordered form appearing in equation (3.3). Thus there are $\widehat {X}\in \operatorname {\mathrm {M}}_{m\times (n+m-\ell )}(\mathbb {C})^d$ and $\check {X}\in \operatorname {\mathrm {M}}_{n\times (n+m-\ell )}(\mathbb {C})^d$ such that equation (3.3) holds, where $n=(e_1-2)m+n_1(e_1-1)$.

We are ready to prove the first main result of the paper.

Theorem 3.3. Let $\Lambda $ be a full pencil of size e, and let $Y\in \operatorname {\mathrm {M}}_{\ell }(\mathbb {C})^d$ , $Y'\in \operatorname {\mathrm {M}}_{m\times \ell }(\mathbb {C})^d$ and $Y''\in \operatorname {\mathrm {M}}_{\ell \times m}(\mathbb {C})^d$ be such that

(3.4) $$ \begin{align} \Lambda \begin{pmatrix} Y \\ Y'\end{pmatrix},\qquad \Lambda \begin{pmatrix} Y & Y''\end{pmatrix} \end{align} $$

have full rank. Then there are $n\ge m$ and $Z\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^d$ such that

$$ \begin{align*} \det \Lambda \left(\begin{array}{cc} Y & \begin{matrix} Y'' & 0\end{matrix} \\ \begin{matrix} Y' \\ 0\end{matrix} & Z \end{array}\right) \neq0. \end{align*} $$

Proof. By Proposition 3.2 and its transpose analogue, there exist $k\in \mathbb {N}$ and

$$ \begin{align*} A'\in \operatorname{\mathrm{M}}_{\ell\times (k+m-\ell)}(\mathbb{C})^d,\qquad B'\in \operatorname{\mathrm{M}}_{m\times (k+m-\ell)}(\mathbb{C})^d,\qquad C'\in \operatorname{\mathrm{M}}_{k\times (k+m-\ell)}(\mathbb{C})^d, \\ A''\in \operatorname{\mathrm{M}}_{(k+m-\ell)\times \ell}(\mathbb{C})^d,\qquad B''\in \operatorname{\mathrm{M}}_{(k+m-\ell)\times m}(\mathbb{C})^d,\qquad C''\in \operatorname{\mathrm{M}}_{(k+m-\ell)\times k}(\mathbb{C})^d, \\ \end{align*} $$

such that the matrices

$$ \begin{align*} \Lambda \begin{pmatrix} Y & A' \\ Y' & B' \\ 0 & C' \end{pmatrix} ,\qquad \Lambda \begin{pmatrix} Y & Y'' &0 \\ A'' & B'' & C'' \end{pmatrix} \end{align*} $$

are invertible. Consequently there exists $\varepsilon \in \mathbb {C}\setminus \{0\}$ such that

$$ \begin{align*} \begin{pmatrix} \Lambda(Y) & 0 & 0 &0 & 0 \\ 0 & 0 & 0 &0 & 0 \\ 0 & 0 & 0 &0 & 0 \\ 0 & 0 & 0 &0 & 0 \\ 0 & 0 & 0 &0 & 0 \end{pmatrix} +\varepsilon\begin{pmatrix} \begin{pmatrix}0 & 0 \\ 0 &0 \end{pmatrix} & \Lambda \begin{pmatrix} Y & Y'' &0 \\ A'' & B'' & C'' \end{pmatrix} \\ \Lambda \begin{pmatrix} Y & A' \\ Y' & B' \\ 0 & C' \end{pmatrix} & \begin{pmatrix}0 & 0 & 0\\ 0 &0 &0\\ 0 &0 &0\end{pmatrix} \end{pmatrix} \end{align*} $$

is invertible; this matrix is similar to

(3.5) $$ \begin{align} \Lambda \begin{pmatrix} Y & 0 & \varepsilon Y & \varepsilon Y'' &0 \\ 0 & 0 & \varepsilon A''& \varepsilon B'' & \varepsilon C'' \\ \varepsilon Y & \varepsilon A' & 0 &0 &0 \\ \varepsilon Y' & \varepsilon B' & 0 &0 &0 \\ 0 & \varepsilon C' & 0 &0 &0 \end{pmatrix}. \end{align} $$

Thus matrix (3.5) is invertible for the chosen $\varepsilon $; by its block structure and the linearity of $\Lambda $, matrix (3.5) is in fact invertible for every $\varepsilon \neq 0$, so we can choose $\varepsilon =1$. After performing elementary row and column operations on matrix (3.5), we conclude that

(3.6) $$ \begin{align} \Lambda \begin{pmatrix} Y & Y'' & 0 & 0 &0 \\ Y' & 0 & - Y' & B' &0 \\ 0 & - Y'' & - Y & A' &0 \\ 0 & B'' & A''& 0 & C'' \\ 0 & 0 & 0 & C' &0 \end{pmatrix} \end{align} $$

is invertible. So the theorem holds for $n= 2(m+k)$.

Remark 3.4. It follows from the proofs of Proposition 3.2 and Theorem 3.3 that one can choose

$$ \begin{align*} n= 2 \left(e^3 m^2+e m (2 e \ell-1)+\ell (e \ell-2)\right) \end{align*} $$

in Theorem 3.3. However, this is unlikely to be the minimal choice for n.

Let $\operatorname {\mathrm {M}}_{\infty }(\mathbb {C})$ be the algebra of $\mathbb {N}\times \mathbb {N}$ matrices over $\mathbb {C}$ that have only finitely many nonzero entries in each column; that is, elements of $\operatorname {\mathrm {M}}_{\infty }(\mathbb {C})$ can be viewed as operators on $\oplus ^{\mathbb {N}}\mathbb {C}$ . Given $r\in \mathfrak {R}_{\mathbb {C}}(x)$ , let $\operatorname {\mathrm {dom}}_{\infty } r$ be the set of tuples $X\in \operatorname {\mathrm {M}}_{\infty }(\mathbb {C})^d$ such that $r(X)$ is well defined. If $(u,M,v)$ is a linear representation of r of size e, then $M(X)\in \operatorname {\mathrm {M}}_e(\operatorname {\mathrm {M}}_{\infty }(\mathbb {C}))$ is invertible for every $X\in \operatorname {\mathrm {dom}}_{\infty } r$ by the definition of a linear representation adopted in this paper.

Proposition 3.5. Set $r\in \mathfrak {R}_{\mathbb {C}}(x)$ . If $X\in \operatorname {\mathrm {H}}_{\ell }(\mathbb {C})^d$ and $Y\in \operatorname {\mathrm {M}}_{m\times \ell }(\mathbb {C})^d$ are such that

$$ \begin{align*} \begin{pmatrix} X & \begin{matrix} Y^* & 0\end{matrix} \\ \begin{matrix} Y \\ 0\end{matrix} & W \end{pmatrix} \in \operatorname{\mathrm{dom}}_{\infty} r \end{align*} $$

for some $W\in \operatorname {\mathrm {M}}_{\infty }(\mathbb {C})^d$ , then there exist $n\ge m$ , $E\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})$ and $Z\in \operatorname {\mathrm {H}}_{n}(\mathbb {C})^d$ such that

$$ \begin{align*} \begin{pmatrix} X & \begin{pmatrix} Y^* & 0\end{pmatrix}E^* \\ E\begin{pmatrix} Y \\ 0\end{pmatrix} & Z \end{pmatrix} \in \operatorname{\mathrm{hdom}} r. \end{align*} $$

Proof. Let $(u,M,v)$ be a linear representation of r of size e. By assumption,

$$ \begin{align*} M\begin{pmatrix} X & \begin{matrix} Y^* & 0\end{matrix} \\ \begin{matrix} Y \\ 0\end{matrix} & W \end{pmatrix} \end{align*} $$

is an invertible matrix over $\operatorname {\mathrm {M}}_{\infty }(\mathbb {C})$ . If $M=M_0+M_1x_1+\dotsb +M_dx_d$ , then the matrices

$$ \begin{align*} M_0\otimes \begin{pmatrix}I \\ 0\end{pmatrix}+ \sum_{j=1}^d M_j\otimes \begin{pmatrix}X_j \\ Y_j\end{pmatrix},\qquad M_0\otimes \begin{pmatrix}I & 0\end{pmatrix}+ \sum_{j=1}^d M_j\otimes \begin{pmatrix}X_j & Y_j^*\end{pmatrix}, \end{align*} $$

have full rank. Let $n\in \mathbb {N}$ be as in Theorem 3.3. Then there is $Z'\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^{1+d}$ such that

(3.7) $$ \begin{align} \det\left(M_0\otimes \begin{pmatrix} I & \begin{matrix} 0 & 0\end{matrix} \\ \begin{matrix} 0 \\ 0\end{matrix} & Z^{\prime}_0 \end{pmatrix}+ \sum_{j=1}^d M_j\otimes \begin{pmatrix} X_j & \begin{matrix} Y_j^* & 0\end{matrix} \\ \begin{matrix} Y_j \\ 0\end{matrix} & Z^{\prime}_j \end{pmatrix} \right)\neq0 \end{align} $$

The set of all $Z'\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})^{1+d}$ satisfying equation (3.7) is a nonempty Zariski open subset of $\operatorname {\mathrm {M}}_{n}(\mathbb {C})^{1+d}$. Since the set of positive definite $n\times n$ matrices is Zariski dense in $\operatorname {\mathrm {M}}_{n}(\mathbb {C})$, there exists $Z'\in \operatorname {\mathrm {H}}_{n}(\mathbb {C})^{1+d}$ with $Z^{\prime }_0\succ 0$ such that equation (3.7) holds. Write $Z^{\prime }_0 = E^{-1}E^{-*}$ for an invertible $E\in \operatorname {\mathrm {M}}_{n}(\mathbb {C})$ and let $Z_j=EZ^{\prime }_jE^*$ for $1\le j \le d$. Then

$$ \begin{align*} M\begin{pmatrix} X & \begin{pmatrix} Y^* & 0\end{pmatrix}E^* \\ E\begin{pmatrix} Y \\ 0\end{pmatrix} & Z \end{pmatrix} \end{align*} $$

is invertible, so

$$ \begin{align*} \begin{pmatrix} X & \begin{pmatrix} Y^* & 0\end{pmatrix}E^* \\ E\begin{pmatrix} Y \\ 0\end{pmatrix} & Z \end{pmatrix} \in\operatorname{\mathrm{hdom}} r \end{align*} $$

by the definition of a linear representation.

We also record a non-Hermitian version of Proposition 3.5:

Proposition 3.6. Set $r\in \mathfrak {R}_{\mathbb {C}}(x)$ . If $X\in \operatorname {\mathrm {M}}_{m\times \ell }(\mathbb {C})^d$ with $\ell \le m$ is such that

$$ \begin{align*} \begin{pmatrix} \begin{matrix} X\\ 0\end{matrix} & W \end{pmatrix} \in \operatorname{\mathrm{dom}}_{\infty} r \end{align*} $$

for some $W\in \operatorname {\mathrm {M}}_{\infty }(\mathbb {C})^d$ , then there exist $n\ge m$ and $Z\in \operatorname {\mathrm {M}}_{n\times (n-\ell )}(\mathbb {C})^d$ such that

$$ \begin{align*} \begin{pmatrix} \begin{matrix} X \\ 0\end{matrix} & Z \end{pmatrix} \in \operatorname{\mathrm{dom}} r. \end{align*} $$

Proof. We argue as in the proof of Proposition 3.5, using Proposition 3.2 instead of Theorem 3.3 and omitting the Hermitian considerations.

4 Multiplication operators attached to a formal rational expression

In this section we assign a tuple of operators $\mathfrak {X}$ on a vector space of countable dimension to each formal rational expression r, so that r is well defined at $\mathfrak {X}$ and the finite-dimensional restrictions of $\mathfrak {X}$ partially retain a certain multiplicative property.

Fix an expression $r\in \mathfrak {R}_{\mathbb {C}}(x)$ . Without loss of generality, we assume that all the variables in x appear as subexpressions in r (otherwise we replace x by a suitable subtuple). Let

$$ \begin{align*} R=\{1\}\cup \{q\in\mathfrak{R}_{\mathbb{C}}(x)\setminus\mathbb{C}\colon q \text{ is a subexpression of }r \text{ or }r^* \}\subset \mathfrak{R}_{\mathbb{C}}(x). \end{align*} $$

Note that R is finite, $\operatorname {\mathrm {hdom}} q\supseteq \operatorname {\mathrm {hdom}} r$ for $q\in R$ , and $q\in R$ implies $q^*\in R$ . Let

$$ \begin{align*} \mathcal{R}=\left\{\mathbb{q}\colon q\in R\right\} \end{align*} $$

be the set of noncommutative rational functions represented by R. For $\ell \in \mathbb {N}$ we define finite-dimensional vector subspaces

$$ \begin{align*} V_{\ell}=\operatorname{span}_{\mathbb{C}}\overbrace{\mathcal{R}\dotsm\mathcal{R}}^{\ell}. \end{align*} $$

Note that $V_{\ell }\subseteq V_{\ell +1}$ , since $1\in R$ . Furthermore, let $V=\bigcup _{\ell \in \mathbb {N}} V_{\ell }$ . Then V is a finitely generated $*$ -subalgebra of $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . For $j=1,\dotsc ,d$ , we define operators

$$ \begin{align*} \mathfrak{X}_j:V\to V,\qquad \mathfrak{X}_j {\mathbb{s}} = x_j{\mathbb{s}}. \end{align*} $$

Lemma 4.1. There is a linear functional $\phi :V\to \mathbb {C}$ such that $\phi ({\mathbb{s}}^*)=\overline {\phi ({\mathbb{s}})}$ and $\phi ({\mathbb{s}}{\mathbb{s}}^*)>0$ for all ${\mathbb{s}}\in V\setminus \{0\}$ .

Proof. Fix $X\in \operatorname {\mathrm {hdom}} r$ and let $m=\max _{q\in R}\lVert q(X)\rVert $ . Let $\ell \in \mathbb {N}$ . Since $V_{\ell }$ is finite-dimensional, there exist $n_{\ell }\in \mathbb {N}$ and $X^{(\ell )}\in \operatorname {\mathrm {hdom}}_{n_{\ell }} r$ such that

(4.1) $$ \begin{align} \max_{q\in R}\left\lVert q\left(X^{(\ell)}\right)\right\rVert\le m+1 \qquad \text{and}\qquad {\mathbb{s}}\left(X^{(\ell)}\right)\neq 0 \text{ for all } {\mathbb{s}}\in V_{\ell}\setminus\{0\}, \end{align} $$

by the local-global linear dependence principle for noncommutative rational functions (see [Vol18, Theorem 6.5] or [BPT13, Corollary 8.87]). Define

$$ \begin{align*} \phi:V\to\mathbb{C},\qquad \phi({\mathbb{s}})=\sum_{\ell=1}^{\infty} \frac{1}{\ell!\cdot n_{\ell}}\operatorname{\mathrm{tr}}\left({\mathbb{s}}\left(X^{(\ell)}\right) \right). \end{align*} $$

Since V is a $\mathbb {C}$ -algebra generated by $\mathcal {R}$ , routine estimates show that $\phi $ is well defined. It is also clear that $\phi $ has the desired properties.
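The positivity mechanism here is elementary: already for a single evaluation point X, the functional $\phi ({\mathbb{s}})=\operatorname {\mathrm {tr}}({\mathbb{s}}(X))$ satisfies $\phi ({\mathbb{s}}{\mathbb{s}}^*)=\lVert {\mathbb{s}}(X)\rVert _F^2>0$ whenever ${\mathbb{s}}(X)\neq 0$; the weighted series over $\ell $ only serves to make this hold on all of V at once. A minimal standard-library sketch (the matrices and the sample evaluation are our own toy choices, not data from the paper):

```python
# Toy check: phi(s) = tr(s(X)) gives phi(s s*) = squared Frobenius norm
# of s(X), hence phi(s s*) > 0 as soon as s(X) != 0.
# Matrices are tuples of tuples of complex numbers (stdlib only).

def mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def adj(A):  # conjugate transpose
    n = len(A)
    return tuple(tuple(A[j][i].conjugate() for j in range(n)) for i in range(n))

def tr(A):
    return sum(A[i][i] for i in range(len(A)))

# A Hermitian evaluation point X and the evaluation M = s(X) for s = x^2.
X = ((1.0, 2.0 - 1.0j), (2.0 + 1.0j, -3.0))
assert adj(X) == X  # X is Hermitian

M = mul(X, X)
phi_ss = tr(mul(M, adj(M)))  # phi(s s*) at the single point X
```

Here $\phi ({\mathbb{s}}{\mathbb{s}}^*)$ is the sum of the squared absolute values of the entries of ${\mathbb{s}}(X)$, which is strictly positive unless the evaluation vanishes.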

For the rest of the paper, fix a functional $\phi $ as in Lemma 4.1. Then

(4.2) $$ \begin{align} ({\mathbb{s}}_1,{\mathbb{s}}_2)=\phi\left({\mathbb{s}}_2^*{\mathbb{s}}_1\right) \end{align} $$

is an inner product on V. With respect to this inner product, we can inductively build an ordered orthogonal basis $\mathcal {B}$ of V with the property that $\mathcal {B}\cap V_{\ell }$ is a basis of $V_{\ell }$ for every $\ell \in \mathbb {N}$ .

Lemma 4.2. With respect to the inner product (4.2) and the ordered basis $\mathcal {B}$ as before, operators $\mathfrak {X}_1,\dotsc ,\mathfrak {X}_d$ are represented by Hermitian matrices in $\operatorname {\mathrm {M}}_{\infty }(\mathbb {C})$ , and $\mathfrak {X}\in \operatorname {\mathrm {dom}}_{\infty } r$ .

Proof. Since

$$ \begin{align*} \left(\mathfrak{X}_j{\mathbb{s}}_1,{\mathbb{s}}_2\right) = \phi\left({\mathbb{s}}_2^*x_j{\mathbb{s}}_1\right) = \left( {\mathbb{s}}_1,\mathfrak{X}_j{\mathbb{s}}_2\right) \end{align*} $$

for all ${\mathbb{s}}_1,{\mathbb{s}}_2\in V$ and $\mathfrak {X}_j(V_{\ell })\subseteq V_{\ell +1}$ for all $\ell \in \mathbb {N}$ , it follows that the matrix representation of $\mathfrak {X}_j$ with respect to $\mathcal {B}$ is Hermitian and has only finitely many nonzero entries in each column and row. The rest follows by induction on the construction of r, since the $\mathfrak {X}_j$ are the left multiplication operators on V.

Next we define a complexity-measuring function $\tau :\mathfrak {R}_{\mathbb {C}}(x)\to \mathbb {N}\cup \{0\}$ as in [KPV17, Section 4]:

  (i) $\tau (\alpha )=0$ for $\alpha \in \mathbb {C}$ ;

  (ii) $\tau \left (x_j\right )=1$ for $1\le j\le d$ ;

  (iii) $\tau (s_1+s_2)=\max \{\tau (s_1),\tau (s_2)\}$ for $s_1,s_2\in \mathfrak {R}_{\mathbb {C}}(x)$ ;

  (iv) $\tau (s_1s_2)=\tau (s_1)+\tau (s_2)$ for $s_1,s_2\in \mathfrak {R}_{\mathbb {C}}(x)$ ;

  (v) $\tau \left (s^{-1}\right )=2\tau (s)$ for $s,s^{-1}\in \mathfrak {R}_{\mathbb {C}}(x)$ .

Note that $\tau (s^*)=\tau (s)$ for all $s\in \mathfrak {R}_{\mathbb {C}}(x)$ .
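For concreteness, the recursion (i)-(v) can be transcribed directly on expression trees; the tuple encoding below is our own illustrative choice, not notation from the paper. For $r=(x_1x_2-x_2x_1)^{-1}$ it gives $\tau (r)=2\tau (x_1x_2-x_2x_1)=2\cdot 2=4$.

```python
# Direct transcription of the complexity measure tau from (i)-(v) on
# formal expression trees encoded as nested tuples (our own encoding).

def tau(expr):
    op = expr[0]
    if op == "const":   # (i)   tau(alpha) = 0 for scalars
        return 0
    if op == "var":     # (ii)  tau(x_j) = 1
        return 1
    if op == "add":     # (iii) maximum over the summands
        return max(tau(expr[1]), tau(expr[2]))
    if op == "mul":     # (iv)  sum over the factors
        return tau(expr[1]) + tau(expr[2])
    if op == "inv":     # (v)   tau(s^{-1}) = 2 tau(s)
        return 2 * tau(expr[1])
    raise ValueError(f"unknown operation {op!r}")

x1, x2 = ("var", 1), ("var", 2)
commutator = ("add", ("mul", x1, x2),
              ("mul", ("const", -1), ("mul", x2, x1)))
r = ("inv", commutator)  # r = (x1 x2 - x2 x1)^{-1}
```

Note that multiplication by the scalar $-1$ contributes nothing to the complexity, by rule (i) and rule (iv).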

Proposition 4.3. Let the notation be as before, and let U be a finite-dimensional Hilbert space containing $V_{\ell +1}$ . If X is a d-tuple of Hermitian operators on U such that $X\in \operatorname {\mathrm {hdom}} r$ and

$$ \begin{align*} X_j\rvert_{V_{\ell}}=\mathfrak{X}_j\rvert_{V_{\ell}} \end{align*} $$

for $j=1,\dotsc ,d$ , then $X\in \operatorname {\mathrm {hdom}} q$ and

(4.3) $$ \begin{align} q(X){\mathbb{s}}=\mathbb{q}{\mathbb{s}} \end{align} $$

for every $q\in R$ and $s\in \overbrace {R\dotsm R}^{\ell }$ satisfying $2\tau (q)+\tau (s)\le \ell +2$ .

Proof. First note that for every $s\in R\dotsm R$ ,

(4.4) $$ \begin{align} \tau(s)\le k\quad \Rightarrow \quad s\in \overbrace{R\dotsm R}^k, \end{align} $$

since $\tau ^{-1}(0)=\mathbb {C}$ and $R\cap \mathbb {C}=\{1\}$ . We prove equation (4.3) by induction on the construction of q. If $q=1$ , then equation (4.3) trivially holds, and if $q=x_j$ , then $\tau (s)\le \ell $ , so equation (4.3) holds by formula (4.4). Next, if equation (4.3) holds for $q_1,q_2\in R$ such that $q_1+q_2\in R$ or $q_1q_2\in R$ , then it also holds for the latter by the definition of $\tau $ and formula (4.4). Finally, suppose that equation (4.3) holds for $q\in R\setminus \{1\}$ and assume $q^{-1}\in R$ . If $2\tau \left (q^{-1}\right )+\tau (s)\le \ell +2$ , then $2\tau (q)+\left (\tau \left (q^{-1}\right )+\tau (s)\right )\le \ell +2$ . In particular, $\tau \left (q^{-1}s\right )\le \ell $ , and so

$$ \begin{align*} q^{-1}s\in \overbrace{R\dotsm R}^{\ell} \end{align*} $$

by formula (4.4). Therefore,

$$ \begin{align*} q(X)\mathbb{q}^{-1}{\mathbb{s}} = \mathbb{q}\mathbb{q}^{-1}{\mathbb{s}}={\mathbb{s}} \end{align*} $$

by the induction hypothesis, and hence $q^{-1}(X){\mathbb{s}} =\mathbb{q}^{-1}{\mathbb{s}}$ , since $X\in \operatorname {\mathrm {hdom}} q^{-1}$ . Thus equation (4.3) holds for $q^{-1}$ .

5 Positive noncommutative rational functions

In this section we prove various positivity statements for noncommutative rational functions. Let L be a Hermitian monic pencil of size e; that is, $L=I+H_1x_1+\dotsb +H_dx_d$ , with $H_j\in \operatorname {\mathrm {H}}_{e}(\mathbb {C})$ . Then

$$ \begin{align*} \mathcal{D}(L) = \bigcup_{n\in\mathbb{N}}\mathcal{D}_n(L), \qquad \text{where }\mathcal{D}_n(L)=\left\{X\in\operatorname{\mathrm{H}}_{n}(\mathbb{C})^d\colon L(X)\succeq 0 \right\}, \end{align*} $$

is a free spectrahedron. The main result of the paper is Theorem 5.2, which describes noncommutative rational functions that are positive semidefinite or undefined at each tuple in a given free spectrahedron $\mathcal {D}(L)$ . In particular, Theorem 5.2 generalises [Pas18, Theorem 3.1] to noncommutative rational functions with singularities in $\mathcal {D}(L)$ .
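Membership in $\mathcal {D}_n(L)$ is a single matrix positivity test. A standard-library sketch for the toy pencil $L=I+H_1x_1$ with $H_1=\operatorname {diag}(1,-1)$ (our example, not from the paper): here $L(X)=(I+X)\oplus (I-X)$, so $X\in \mathcal {D}_n(L)$ exactly when $-I\preceq X\preceq I$.

```python
# Membership test for a toy free spectrahedron at level n = 2.
# A 2x2 Hermitian matrix is psd iff its trace and determinant are >= 0.

def psd2(A):
    t = (A[0][0] + A[1][1]).real
    d = (A[0][0] * A[1][1] - A[0][1] * A[1][0]).real
    return t >= 0 and d >= 0

def in_D2(X):
    """X in D_2(L) for L = I + diag(1,-1) x1, i.e. I+X >= 0 and I-X >= 0."""
    plus = tuple(tuple((i == j) + X[i][j] for j in range(2)) for i in range(2))
    minus = tuple(tuple((i == j) - X[i][j] for j in range(2)) for i in range(2))
    return psd2(plus) and psd2(minus)

X_in = ((0.5, 0.25), (0.25, -0.5))   # eigenvalues inside [-1, 1]
X_out = ((2.0, 0.0), (0.0, 0.0))     # has eigenvalue 2 > 1
```

The trace/determinant criterion is specific to the $2\times 2$ case; at higher levels one would test all leading principal minors or compute eigenvalues.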

5.1 Rational convex Positivstellensatz

Let L be a Hermitian monic pencil of size e. To $r\in \mathfrak {R}_{\mathbb {C}}(x)$ we assign the finite set R, vector spaces $V_{\ell }$ and operators $\mathfrak {X}_j$ as in Section 4. For $\ell \in \mathbb {N}$ , we also define

$$ \begin{align*} S_{\ell} &= \{{\mathbb{s}}\in V_{\ell}\colon {\mathbb{s}}={\mathbb{s}}^* \}, \\ Q_{\ell} &= \left\{\sum_i {\mathbb{s}}_i^*{\mathbb{s}}_i+\sum_j \mathbb{v}_j^* L\mathbb{v}_j\colon {\mathbb{s}}_i \in V_{\ell},\ \mathbb{v}_j\in V_{\ell}^e \right\}\subset S_{2\ell+1}. \end{align*} $$

Then $S_{\ell }$ is a real vector space and $Q_{\ell }$ is a convex cone. The proof of the following proposition is a rational modification of a common argument in free real algebraic geometry (compare [HKM12, Proposition 3.1] and [KPV17, Proposition 4.1]). A convex cone is salient if it does not contain a line.

Proposition 5.1. The cone $Q_{\ell }$ is salient and closed in $S_{2\ell +1}$ with the Euclidean topology.

Proof. As in the proof of Lemma 4.1, there exists $X\in \operatorname {\mathrm {hdom}} r$ such that

$$ \begin{align*} {\mathbb{s}}(X)\neq0 \qquad \text{for all }{\mathbb{s}}\in V_{2\ell+1}\setminus\{0\}. \end{align*} $$

Furthermore, we can choose X close enough to $0$ , so that $L(X)\succeq \frac 12 I$ . Then clearly ${\mathbb{s}}(X)\succeq 0$ for every ${\mathbb{s}}\in Q_{\ell }$ , so $Q_{\ell }\cap -Q_{\ell }=\{0\}$ and thus $Q_{\ell }$ is salient. Note that $\lVert {\mathbb{s}}\rVert _{\bullet }= \lVert {\mathbb{s}}(X)\rVert $ is a norm on $V_{2\ell +1}$ . Also, the finite-dimensionality of $S_{2\ell +1}$ implies that every element of $Q_{\ell }$ can be written as a sum of $N=1+\dim S_{2\ell +1}$ elements of the form

$$ \begin{align*} {\mathbb{s}}^*{\mathbb{s}}\quad \text{and}\quad \mathbb{v}^* L\mathbb{v} \qquad \text{for }{\mathbb{s}}\in V_{\ell},\ \mathbb{v}\in V_{\ell}^e, \end{align*} $$

by Carathéodory’s theorem [Bar02, Theorem I.2.3]. Assume that a sequence $\{{\mathbb{r}}_n\}_n\subset Q_{\ell }$ converges to ${\mathbb{s}}\in S_{2\ell +1}$ . After restricting to a subsequence, we can assume that there is $0\le M\le N$ such that

$$ \begin{align*} {\mathbb{r}}_n=\sum_{i=1}^M {\mathbb{s}}_{n,i}^*{\mathbb{s}}_{n,i}+\sum_{j=M+1}^N \mathbb{v}_{n,j}^* L\mathbb{v}_{n,j} \end{align*} $$

for all $n\in \mathbb {N}$ . The definition of the norm $\lVert \cdot \rVert _{\bullet }$ implies

$$ \begin{align*} \left\lVert{\mathbb{s}}_{n,i}\right\rVert_{\bullet}^2\le \lVert{\mathbb{r}}_n\rVert_{\bullet} \quad \text{and} \quad \max_{1\le i\le e}\left\lVert\left(\mathbb{v}_{n,j}\right)_i\right\rVert_{\bullet}^2\le 2\lVert{\mathbb{r}}_n\rVert_{\bullet}. \end{align*} $$

In particular, the sequences $\left \{{\mathbb{s}}_{n,i}\right \}_n\subset V_{\ell }$ for $1\le i\le M$ and $\left \{\mathbb{v}_{n,j}\right \}_n\subset V_{\ell }^e$ for $M+1\le j\le N$ are bounded. Hence, after restricting to subsequences, we may assume that they are convergent: ${\mathbb{s}}_i=\lim _n{\mathbb{s}}_{n,i}$ for $1\le i\le M$ and $\mathbb{v}_j=\lim _n\mathbb{v}_{n,j}$ for $M+1\le j\le N$ . Consequently, we have

$$ \begin{align*} {\mathbb{s}}=\lim_n{\mathbb{r}}_n= \sum_{i=1}^M {\mathbb{s}}_i^*{\mathbb{s}}_i+\sum_{j=M+1}^N \mathbb{v}_j^* L\mathbb{v}_j \in Q_{\ell}. \end{align*} $$

We are now ready to prove the main result of this paper by combining a truncated GNS construction with extending matrix tuples into the domain of a rational expression as in Proposition 3.5.

Theorem 5.2 (Rational convex Positivstellensatz).

Let L be a Hermitian monic pencil and let $r\in \mathfrak {R}_{\mathbb {C}}(x)$ . If $Q_{2\tau (r)+1}$ is as before, then $r(X)\succeq 0$ for every $X\in \operatorname {\mathrm {hdom}} r\cap \mathcal {D}(L)$ if and only if ${\mathbb{r}}\in Q_{2\tau (r)+1}$ .

Proof. Only the forward implication is nontrivial. Let $\ell =2\tau (r)-2$ . If ${\mathbb{r}}\neq {\mathbb{r}}^*$ , then there exists $X\in \operatorname {\mathrm {hdom}} r$ such that ${\mathbb{r}}(X)\neq {\mathbb{r}}(X)^*$ . Thus we assume ${\mathbb{r}}={\mathbb{r}}^*$ . Suppose that ${\mathbb{r}}\notin Q_{\ell +3}$ . Since $Q_{\ell +3}$ is a salient closed convex cone in $S_{2\ell +7}$ by Proposition 5.1, there exists a linear functional $\lambda _0:S_{2\ell +7}\to \mathbb {R}$ such that $\lambda _0(Q_{\ell +3}\setminus \{0\})=\mathbb {R}_{>0}$ and $\lambda _0({\mathbb{r}})<0$ by the Hahn–Banach separation theorem [Bar02, Theorem III.1.3]. We extend $\lambda _0$ to $\lambda :V_{2\ell +7}\to \mathbb {C}$ as $\lambda ({\mathbb{s}})=\frac 12\lambda _0({\mathbb{s}}+{\mathbb{s}}^*)+\frac {i}{2}\lambda _0(i({\mathbb{s}}^*-{\mathbb{s}}))$ . Then $\langle {\mathbb{s}}_1,{\mathbb{s}}_2\rangle =\lambda \left ({\mathbb{s}}_2^*{\mathbb{s}}_1\right )$ defines a scalar product on $V_{\ell +3}$ . Recall that $\mathfrak {X}_j(V_{\ell +2})\subseteq V_{\ell +3}$ . Then for ${\mathbb{s}}_1\in V_{\ell +1}$ and ${\mathbb{s}}_2\in V_{\ell +2}$ ,

(5.1) $$ \begin{align} \left\langle \mathfrak{X}_j{\mathbb{s}}_1,{\mathbb{s}}_2\right\rangle = \lambda\big({\mathbb{s}}_2^*x_j{\mathbb{s}}_1\big) = \left\langle {\mathbb{s}}_1,\mathfrak{X}_j{\mathbb{s}}_2\right\rangle. \end{align} $$

Furthermore,

(5.2) $$ \begin{align} \langle L(\mathfrak{X})\mathbb{v},\mathbb{v}\rangle = \lambda(\mathbb{v}^*L\mathbb{v})>0 \end{align} $$

for all nonzero $\mathbb{v}\in V_{\ell +1}^e$ , where the canonical extension of $\langle \cdot ,\cdot \rangle $ to a scalar product on $\mathbb {C}^e\otimes V_{\ell +1}$ is considered.

Let $\mathcal {B}$ be an ordered orthogonal basis of V with respect to the inner product $(\cdot ,\cdot )$ as in Section 4; recall that such a basis has the property that $\mathcal {B}\cap V_k$ is a basis for $V_k$ for all $k\in \mathbb {N}$ . Let $\mathcal {B}_0$ be an ordered orthogonal basis of $V_{\ell +2}$ with respect to $\langle \cdot ,\cdot \rangle $ that contains a basis for $V_{\ell +1}$ , and let $\mathcal {B}_1=\mathcal {B}\setminus V_{\ell +2}$ . If we identify operators $\mathfrak {X}_j$ with their matrix representations relative to the ordered basis $(\mathcal {B}_0,\mathcal {B}_1)$ of V, then $\mathfrak {X}_j\in \operatorname {\mathrm {M}}_{\infty }(\mathbb {C})$ are Hermitian matrices by Lemma 4.2 and equation (5.1).

Let $U_0$ be the orthogonal complement of $V_{\ell +1}$ in $V_{\ell +2}$ relative to $\langle \cdot ,\cdot \rangle $ . Since $\mathfrak {X}_j(V_{\ell +1})\subseteq V_{\ell +2}$ , we can consider the restriction $\mathfrak {X}_j\rvert _{V_{\ell +1}}$ in a block form

$$ \begin{align*} \begin{pmatrix}X_j \\Y_j\end{pmatrix} \end{align*} $$

with respect to the decomposition $V_{\ell +2}=V_{\ell +1}\oplus U_0$ . Since $\mathfrak {X}\in \operatorname {\mathrm {dom}}_{\infty } r$ , by Proposition 3.5 there exist a finite-dimensional vector space $U_1$ , a scalar product on $V_{\ell +1}\oplus U_0\oplus U_1$ extending $\langle \cdot ,\cdot \rangle $ , an operator E on $U_0\oplus U_1$ and a d-tuple Z of Hermitian operators on $U_0\oplus U_1$ such that

(5.3) $$ \begin{align} \widetilde{X}:=\begin{pmatrix} X & \begin{pmatrix} Y^* & 0\end{pmatrix}E^* \\ E\begin{pmatrix} Y \\ 0\end{pmatrix} & Z \end{pmatrix}\in\operatorname{\mathrm{hdom}} r. \end{align} $$

Since $\mathfrak {X}_j(V_{\ell })\subseteq V_{\ell +1}$ , we conclude that

(5.4) $$ \begin{align} \widetilde{X}_j\rvert_{V_{\ell}}=\mathfrak{X}_j\rvert_{V_{\ell}}. \end{align} $$

Observe that for all but finitely many $\varepsilon _1,\varepsilon _2>0$ we can replace $Z,E$ with $\varepsilon _1 Z,\varepsilon _2 E$ and formula (5.3) still holds. By equation (5.2) we can thus assume that Z and E are close enough to $0$ so that $L(\widetilde {X})\succeq 0$ . Finally, since equation (5.4) holds and $2\tau (r)+\tau (1)=\ell +2$ , Proposition 4.3 implies

$$ \begin{align*} \left\langle r\left(\widetilde{X}\right)1,1\right\rangle = \langle {\mathbb{r}},1\rangle=\lambda({\mathbb{r}})<0. \end{align*} $$

Therefore $\widetilde {X}\in \operatorname {\mathrm {hdom}} r\cap \mathcal {D}(L)$ and $r\left (\widetilde {X}\right )$ is not positive semidefinite.

Given a unital $*$ -algebra $\mathcal {A}$ and $A=A^*\in \operatorname {\mathrm {M}}_{\ell }(\mathcal {A})$ , the quadratic module in $\mathcal {A}$ generated by A is

$$ \begin{align*} \operatorname{\mathrm{QM}}_{\mathcal{A}}(A) = \left\{ \sum_j v_j^* (1\oplus A)v_j\colon v_j\in \mathcal{A}^{\ell+1} \right\}. \end{align*} $$

Theorem 5.2 then in particular states that noncommutative rational functions that are positive semidefinite on a free spectrahedron $\mathcal {D}(L)$ belong to $\operatorname {\mathrm {QM}}_{\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}}(L)$ .
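The defining property of such a cone is easy to verify numerically in a toy case: with $d=1$ and the scalar pencil $L=1+x$, any evaluation of ${\mathbb{s}}^*{\mathbb{s}}+\mathbb{v}^*L\mathbb{v}$ at a Hermitian X with $L(X)=I+X\succeq 0$ is positive semidefinite. A standard-library sketch with ${\mathbb{s}}=\mathbb{v}=x$ (our choices):

```python
# Evaluate s*s + v*(1+x)v at X for s = v = x:  M = X^2 + X(I+X)X.
# For Hermitian 2x2 matrices, psd <=> trace >= 0 and det >= 0.

def mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def add(A, B):
    return tuple(tuple(a + b for a, b in zip(ra, rb)) for ra, rb in zip(A, B))

def psd2(A):
    t = (A[0][0] + A[1][1]).real
    d = (A[0][0] * A[1][1] - A[0][1] * A[1][0]).real
    return t >= -1e-12 and d >= -1e-12

I = ((1.0, 0.0), (0.0, 1.0))
X = ((0.0, 1.0), (1.0, 0.0))          # Hermitian, eigenvalues -1 and 1
assert psd2(add(I, X))                # X lies in D(1 + x)

M = add(mul(X, X), mul(mul(X, add(I, X)), X))  # s*s + v*Lv evaluated at X
```

Each summand is of the form $A^*PA$ with $P\succeq 0$, so positive semidefiniteness of the total is automatic; the code merely illustrates this on one point.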

Remark 5.3. Let $r\in \mathfrak {R}_{\mathbb {C}}(x)$ and

$$ \begin{align*} n=2 \left(e^3 m^2+e m (2 e \ell-1)+\ell (e \ell-2)\right), \end{align*} $$

where $\ell =\dim V_{2\tau (r)-1}$ , $m=\dim V_{2\tau (r)}-\dim V_{2\tau (r)-1}$ and e is the size of a linear representation of r. If $r\not \succeq 0$ on $\operatorname {\mathrm {hdom}} r\cap \mathcal {D}(L)$ , then by Remark 3.4 and the proofs of Theorem 5.2 and Proposition 3.5 there exists $X\in \operatorname {\mathrm {hdom}}_n r\cap \mathcal {D}_n(L)$ such that $r(X)\not \succeq 0$ .

The solution of Hilbert’s 17th problem for a free skew field is now as follows.

Corollary 5.4. Let ${\mathbb{r}}\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . Then ${\mathbb{r}}\succeq 0$ on $\operatorname {\mathrm {hdom}}{\mathbb{r}}$ if and only if

$$ \begin{align*} {\mathbb{r}}={\mathbb{r}}_1{\mathbb{r}}_1^*+\dotsb+{\mathbb{r}}_m{\mathbb{r}}_m^* \end{align*} $$

for some ${\mathbb{r}}_1,\dotsc ,{\mathbb{r}}_m\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ with $\operatorname {\mathrm {hdom}}{\mathbb{r}}_i\supseteq \operatorname {\mathrm {hdom}}{\mathbb{r}}$ .

Proof. By Proposition 2.1 there exists $r\in {\mathbb{r}}$ such that $\operatorname {\mathrm {hdom}} {\mathbb{r}}=\operatorname {\mathrm {hdom}} r$ . The corollary then follows directly from Theorem 5.2 applied to $L=1$ , since the Hermitian domain of an element in $V_{2\tau (r)}$ contains $\operatorname {\mathrm {hdom}} {\mathbb{r}}$ .

Remark 5.5. Corollary 5.4 also indicates a subtle distinction between solutions of Hilbert’s 17th problem in the classical commutative context and in the free context. While every (commutative) positive rational function $\rho $ is a sum of squares of rational functions, in general one cannot choose summands that are defined on the whole real domain of the original function $\rho $ . On the other hand, a positive noncommutative rational function always admits a sum-of-squares representation with terms defined on its Hermitian domain.

For possible future use, we describe noncommutative rational functions whose invertible evaluations have nonconstant signature; polynomials of this type were of interest in [HKV20, Section 3.3].

Corollary 5.6. Let ${\mathbb{r}}={\mathbb{r}}^*\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . The following are equivalent:

  (i) There are $n\in \mathbb {N}$ and $X,Y\in \operatorname {\mathrm {hdom}}_n {\mathbb{r}}$ such that ${\mathbb{r}}(X),{\mathbb{r}}(Y)$ are invertible and have distinct signatures.

  (ii) Neither ${\mathbb{r}}$ nor $-{\mathbb{r}}$ equals $\sum _i{\mathbb{r}}_i{\mathbb{r}}_i^*$ for some ${\mathbb{r}}_i\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ .

Proof. (i) $\Rightarrow $ (ii) If $\pm {\mathbb{r}}=\sum _i{\mathbb{r}}_i{\mathbb{r}}_i^*$ , then $\pm {\mathbb{r}}(X)\succeq 0$ for all $X\in \operatorname {\mathrm {hdom}}{\mathbb{r}}$ .

(ii) $\Rightarrow $ (i) Let $\mathcal {O}_n=\operatorname {\mathrm {hdom}}_n{\mathbb{r}}\cap \operatorname {\mathrm {hdom}}_n{\mathbb{r}}^{-1}$ . By Remark 2.5 there is $n_0\in \mathbb {N}$ such that $\mathcal {O}_n\neq \emptyset $ for all $n\ge n_0$ . Suppose that ${\mathbb{r}}$ has constant signature on $\mathcal {O}_n$ for each $n\ge n_0$ ; that is, ${\mathbb{r}}(X)$ has $\pi _n$ positive eigenvalues for every $X\in \mathcal {O}_n$ . Since $\mathcal {O}_k\oplus \mathcal {O}_{\ell }\subset \mathcal {O}_{k+\ell }$ for all $k,\ell \in \mathbb {N}$ , we have

(5.5) $$ \begin{align} n\pi_m=\pi_{mn}=m\pi_n \end{align} $$

for all $m,n\ge n_0$ . If $\pi _{n'}=n'$ for some $n'\ge n_0$ , then $\pi _n=n$ for all $n\ge n_0$ by equation (5.5), so ${\mathbb{r}}\succeq 0$ on $\mathcal {O}_n$ for every n. Thus ${\mathbb{r}}=\sum _i{\mathbb{r}}_i{\mathbb{r}}_i^*$ by Theorem 5.2. An analogous conclusion holds if $\pi _{n'}=0$ for some $n'\ge n_0$ . However, equation (5.5) excludes any alternative: if $n_0\le m< n$ and n is a prime number, then $0<\pi _n<n$ contradicts equation (5.5).
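The step behind equation (5.5) is additivity of the signature under direct sums, which is transparent on diagonal tuples. A toy standard-library check with ${\mathbb{r}}=x^2-1$ (our example):

```python
# For diagonal X, the evaluation of r = x^2 - 1 is diagonal, and the
# number of positive eigenvalues is additive under direct sums
# (modelled here as concatenation of diagonal tuples).

def pi_plus(diag_X):
    """Number of positive eigenvalues of r(X) = X^2 - I for diagonal X."""
    return sum(1 for t in diag_X if t * t - 1 > 0)

X = (2.0, 0.5)           # r(X) has one positive and one negative eigenvalue
Y = (3.0, 0.25, -4.0)    # r(Y) has two positive eigenvalues
direct_sum = X + Y       # X ⊕ Y as a diagonal tuple
```

The same additivity for arbitrary Hermitian tuples follows because $r(X\oplus Y)=r(X)\oplus r(Y)$ whenever both evaluations are defined.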

5.2 Positivity and invariants

Let G be a subgroup of the unitary group $\operatorname {U}_d(\mathbb {C})$ . The action of G on $\mathbb {C}^d$ induces a linear action of G on $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . If G is finite and solvable, then the subfield $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}^G$ of G-invariants is finitely generated [KPPV20, Theorem 1.1], and in many cases again a free skew field [KPPV20, Theorem 1.3]. Furthermore, we can now extend [KPPV20, Corollary 6.6] to invariant noncommutative rational functions with singularities.

Corollary 5.7. Let $G\subset \operatorname {U}_d(\mathbb {C})$ be a finite solvable group. Then there exists

with the following property: if

and L is a Hermitian monic pencil of size e, then ${\mathbb{r}}\succeq 0$ on $\operatorname {\mathrm {hdom}} {\mathbb{r}}\cap \mathcal {D}(L)$ if and only if

, where

Proof. Combine [KPPV20, Corollary 6.4] and Theorem 5.2.

5.3 Real free skew field and other variations

In this subsection we explain how the preceding results apply to real free skew fields and their symmetric evaluations, and to another natural involution on a free skew field.

Corollary 5.8 (real version of Theorem 5.2).

Let L be a symmetric monic pencil of size e and let ${\mathbb{r}}\in \mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}$ . Then ${\mathbb{r}}(X)\succeq 0$ for every $X\in \operatorname {\mathrm {hdom}} {\mathbb{r}}\cap \mathcal {D}(L)$ if and only if ${\mathbb{r}}\in \operatorname {\mathrm {QM}}_{\mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}}(L)$ .

Proof. If ${\mathbb{r}}\in \mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}$ and ${\mathbb{r}}\succeq 0$ on $\operatorname {\mathrm {hdom}}{\mathbb{r}}\cap \mathcal {D}(L)$ , then ${\mathbb{r}}\in \operatorname {\mathrm {QM}}_{\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}}(L)$ by Theorem 5.2, because the complex vector spaces $V_{\ell }$ are spanned by functions given by subexpressions of some $r\in {\mathbb{r}}$ , and we can choose r in which only real scalars appear. For ${\mathbb{s}}\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ we define $\text {re} ({\mathbb{s}}) = \frac 12\left ({\mathbb{s}}+\overline {{\mathbb{s}}}\right )$ and $\text {im} ({\mathbb{s}}) = \frac {i}{2}\left (\overline {{\mathbb{s}}}-{\mathbb{s}}\right )$ in $\mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}$ . If

$$ \begin{align*} {\mathbb{r}}=\sum_j {\mathbb{s}}_j^*{\mathbb{s}}_j+\sum_k \mathbb{v}_k^* L\mathbb{v}_k \end{align*} $$

for ${\mathbb{s}}_j\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ and $\mathbb{v}_k\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}^e$ , then

$$ \begin{align*} {\mathbb{r}}=\text{re}({\mathbb{r}})=\sum_j \left(\text{re}\left({\mathbb{s}}_j\right)^*\text{re}\left({\mathbb{s}}_j\right)+\text{im}\left({\mathbb{s}}_j\right)^*\text{im}\left({\mathbb{s}}_j\right)\right) +\sum_k \left(\text{re}(\mathbb{v}_k)^*L\,\text{re}(\mathbb{v}_k)+\text{im}(\mathbb{v}_k)^*L\,\text{im}(\mathbb{v}_k)\right), \end{align*} $$

and so ${\mathbb{r}}\in \operatorname {\mathrm {QM}}_{\mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}}(L)$ .

Given ${\mathbb{r}}\in \mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}$ , one might prefer to consider only the tuples of real symmetric matrices in the domain of ${\mathbb{r}}$ , and not the whole $\operatorname {\mathrm {hdom}} {\mathbb{r}}$ . Since there exist $*$ -embeddings $\operatorname {\mathrm {M}}_n(\mathbb {C})\hookrightarrow \operatorname {\mathrm {M}}_{2n}(\mathbb {R})$ , evaluations on tuples of real symmetric $2n\times 2n$ matrices carry at least as much information as evaluations on tuples of Hermitian $n\times n$ matrices. Consequently, all dimension-independent statements in this paper also hold if only symmetric tuples are considered. However, it is worth mentioning that for ${\mathbb{r}}\in \mathbb{R}\!\mathop{(<}\! x\!\mathop{>)}$ , it can happen that $\operatorname {\mathrm {dom}}_n {\mathbb{r}}$ contains no tuples of symmetric matrices for all odd n; for example, this is the case if ${\mathbb{r}}=(x_1x_2-x_2x_1)^{-1}$ .

Another commonly considered free skew field with involution is $\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ , generated by the $2d$ variables $x_1,\dotsc ,x_d,x_1^*,\dotsc ,x_d^*$ and endowed with the involution $*$ that swaps $x_j$ and $x_j^*$ . Elements of $\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ can be evaluated on d-tuples of complex matrices. The results of this paper also directly apply to $\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ and such evaluations, because $\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ is freely generated by the elements $\frac 12\left (x_j+x_j^*\right ),\frac {i}{2}\left (x_j^*-x_j\right )$ , which are fixed by $*$ . Finally, as in Corollary 5.8 we see that a suitable analogue of Theorem 5.2 also holds for $\mathbb{R}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ and evaluations on d-tuples of real matrices.

5.4 Examples of nonconvex Positivstellensätze

Given $\mathbb{m}=\mathbb{m}^*\in \operatorname {\mathrm {M}}_{\ell }\left (\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}\right )$ , let

$$ \begin{align*} \mathcal{D}(\mathbb{m}) = \bigcup_{n\in\mathbb{N}}\mathcal{D}_n(\mathbb{m}), \qquad \text{where }\mathcal{D}_n(\mathbb{m})=\{X\in\operatorname{\mathrm{hdom}}_n \mathbb{m} \colon \mathbb{m}(X)\succeq0 \}, \end{align*} $$

be its positivity domain. Here, the domain of $\mathbb{m}$ is the intersection of domains of its entries.

Proposition 5.9. Let $\mathbb{m}=\mathbb{m}^*\in \operatorname {\mathrm {M}}_{\ell }\left (\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}\right )$ and assume there exist a Hermitian monic pencil L of size $e\ge \ell $ , a $*$ -automorphism $\varphi $ of $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ , and an invertible $A\in \operatorname {\mathrm {M}}_{e}\left (\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}\right )$ such that

(5.6) $$ \begin{align} \varphi(\mathbb{m})\oplus I = A^*LA. \end{align} $$

If ${\mathbb{r}}={\mathbb{r}}^*\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ , then ${\mathbb{r}}\succeq 0$ on $\operatorname {\mathrm {hdom}}{\mathbb{r}}\cap \mathcal {D}(\mathbb{m})$ if and only if $\varphi ({\mathbb{r}})\in \operatorname {\mathrm {QM}}_{\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}}(L)$ .

Proof. Equation (5.6), Remark 2.5 and the convexity of $\mathcal {D}_n(L)$ imply that the sets $\mathcal {D}_n(\varphi (\mathbb{m}))$ and $\mathcal {D}_n(\mathbb{m})$ have the same closures as their interiors in the Euclidean topology for all but finitely many n. Therefore, by Theorem 5.2 and Equation (5.6),

The following example presents a family of quadratic noncommutative polynomials $q=q^*\in \mathbb {C}\!\mathop {<}\! x,x^*\!\mathop {>}$ that admit a rational Positivstellensatz on their (not necessarily convex) positivity domains $\mathcal {D}(q)=\{X\colon q(X,X^*)\succeq 0 \}$ :

Example 5.10. Given a linearly independent set $\{a_0,\dotsc ,a_n\}\subset \operatorname {\mathrm {span}}_{\mathbb {C}}\{1,x_1,\dotsc ,x_d\}$ , let

$$ \begin{align*} q =a_0^*a_0-a_1^*a_1-\dotsb-a_n^*a_n \in \mathbb{C}\!\mathop{<}\! x,x^*\!\mathop{>}. \end{align*} $$

One might say that q is a hereditary quadratic polynomial of positive signature $1$ . Note that $\mathcal {D}_1(q)$ is not convex if $a_0\notin \mathbb {C}$ . Since $a_0,\dotsc ,a_n$ are linearly independent affine polynomials in $\mathbb {C}\!\mathop {<}\! x\!\mathop {>}$ (and in particular $n\le d$ ), there exists a linear fractional automorphism $\varphi $ on $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ such that $\varphi ^{-1}\left (x_j\right )=a_ja_0^{-1}$ for $1\le j \le n$ . We extend $\varphi $ uniquely to a $*$ -automorphism on $\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ . Then

$$ \begin{align*} \varphi(a_0)^{-*}\varphi(q)\varphi(a_0)^{-1}=1-x_1^*x_1-\dotsb-x_n^*x_n, \end{align*} $$

and thus $\varphi (q)\oplus I_n=A^*LA$ , where

$$ \begin{align*} L=\begin{pmatrix} 1 & x_1^* & \cdots & x_n^* \\ x_1 & \ddots & & \\ \vdots & & \ddots & \\ x_n & & & 1 \end{pmatrix}, \qquad A=\begin{pmatrix} \varphi(a_0) & & & \\ -x_1\varphi(a_0) & 1 & & \\ \vdots & & \ddots & \\ -x_n\varphi(a_0) & & & 1 \end{pmatrix}. \end{align*} $$

Therefore ${\mathbb{r}}\succeq 0$ on $\operatorname {\mathrm {hdom}}{\mathbb{r}}\cap \mathcal {D}(q)$ if and only if $\varphi ({\mathbb{r}})\in \operatorname {\mathrm {QM}}_{\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}}(L)$ for every ${\mathbb{r}}={\mathbb{r}}^*\in \mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}$ , by Proposition 5.9.
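The congruence $\varphi (q)\oplus I_n=A^*LA$ can be checked blockwise. Writing $x=(x_1,\dotsc ,x_n)^{\mathsf {T}}$ and $b=\varphi (a_0)$ (our shorthand), one has

$$ \begin{align*} \begin{pmatrix} 1 & -x^* \\ 0 & I \end{pmatrix} \begin{pmatrix} 1 & x^* \\ x & I \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -x & I \end{pmatrix} = \begin{pmatrix} 1-x^*x & 0 \\ 0 & I \end{pmatrix}, \end{align*} $$

and conjugating this identity by $\operatorname {diag}(b,I)$ (so that the first column of A carries the factor $\varphi (a_0)$) yields $\varphi (q)\oplus I_n=A^*LA$, since $\varphi (q)=\varphi (a_0)^*\left (1-x_1^*x_1-\dotsb -x_n^*x_n\right )\varphi (a_0)$.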

For example, the polynomial $x_1^*x_1-1$ is of the type discussed, and thus admits a rational Positivstellensatz. In particular,
$$ \begin{align*} x_1x_1^*-1\in \operatorname{\mathrm{QM}}_{\mathbb{C}\!\mathop{(<}\! x,x^*\!\mathop{>)}}\left(x_1^*x_1-1\right). \end{align*} $$
On the other hand, we claim that $x_1x_1^*-1\notin \operatorname {\mathrm {QM}}_{\mathbb {C}\!\mathop {<}\! x,x^*\!\mathop {>}}\left (x_1^*x_1-1\right )$ (compare [HM04, Example 4]). If $x_1x_1^*-1$ were an element of $\operatorname {\mathrm {QM}}_{\mathbb {C}\!\mathop {<}\! x,x^*\!\mathop {>}}(x_1^*x_1-1)$ , then the implication

$$ \begin{align*} S^*S-I\succeq0 \quad\Rightarrow\quad SS^*-I\succeq0 \end{align*} $$

would be valid for every operator S on an infinite-dimensional Hilbert space; however, it fails if S is the forward shift operator on $\ell ^2(\mathbb {N})$ . A different Positivstellensatz (polynomial, but with a slack variable) for hereditary quadratic polynomials is given in [HKV20, Corollary 4.6].
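The dichotomy can be seen concretely: for square matrices, $S^*S$ and $SS^*$ always have the same eigenvalues, so $S^*S\succeq I$ does imply $SS^*\succeq I$ at every matricial evaluation, whereas the shift satisfies $S^*S=I$ but $SS^*=I-e_1e_1^*$ only in infinite dimensions. A standard-library sketch for the $2\times 2$ case via trace and determinant (the matrix S is our own toy choice):

```python
# For any square S, tr(S*S) = tr(SS*) (the squared Frobenius norm) and
# det(S*S) = |det S|^2 = det(SS*), so in the 2x2 case the Hermitian
# matrices S*S and SS* share their characteristic polynomial.

def mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def adj(A):  # conjugate transpose
    n = len(A)
    return tuple(tuple(A[j][i].conjugate() for j in range(n)) for i in range(n))

S = ((1.0 + 2.0j, -0.5), (3.0, 0.25j))
P, Q = mul(adj(S), S), mul(S, adj(S))   # S*S and SS*

trace_gap = abs((P[0][0] + P[1][1]) - (Q[0][0] + Q[1][1]))
det_gap = abs((P[0][0] * P[1][1] - P[0][1] * P[1][0])
              - (Q[0][0] * Q[1][1] - Q[0][1] * Q[1][0]))

# By contrast, the n x n truncated shift has S*S = diag(1,...,1,0),
# which is never >= I, so no finite truncation witnesses the failure.
```

This is why the operator counterexample does not contradict the matricial Positivstellensatz: no tuple of matrices separates $x_1x_1^*-1$ from the positivity domain of $x_1^*x_1-1$.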

5.5 Eigenvalue optimisation

Theorem 5.2 is also essential for optimising noncommutative rational functions. Namely, it implies that finding the eigenvalue supremum or infimum of a noncommutative rational function on a free spectrahedron is equivalent to solving a semidefinite program [BPT13]. This equivalence was stated in [KPV17, Section 5.2.1] for regular noncommutative rational functions; the novelty is that Theorem 5.2 now confirms its validity for noncommutative rational functions with singularities.

Let L be a Hermitian monic pencil of size e, and let ${\mathbb{r}}={\mathbb{r}}^*\in \mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . Suppose we are interested in

$$ \begin{align*} \mu_* = \sup_{X\in\operatorname{\mathrm{hdom}} {\mathbb{r}}\cap \mathcal{D}(L)}\big[\text{maximal eigenvalue of }{\mathbb{r}}(X)\big]. \end{align*} $$

Choose some $r\in {\mathbb{r}}$ (the simpler the representative, the better) and let $\ell =2\tau (r)+1$ . Theorem 5.2 then implies that

(5.7) $$ \begin{align} \mu_* = \inf\left\{ \mu\in\mathbb{R}\colon \mu-{\mathbb{r}}= \sum_{i=1}^M {\mathbb{s}}_i^*{\mathbb{s}}_i+\sum_{j=1}^N \mathbb{v}_j^* L\mathbb{v}_j \text{ for some } {\mathbb{s}}_i \in V_{\ell},\ \mathbb{v}_j\in V_{\ell}^e \right\}, \end{align} $$

where we can take $M=\dim S_{2\ell }+1$ and $N=\dim S_{2\ell +1}+1$ by Carathéodory’s theorem [Bar02, Theorem I.2.3]. The right-hand side of equation (5.7) can be stated as a semidefinite program [WSV00, BPT13]. Concretely, to determine the global (no L) eigenvalue supremum of ${\mathbb{r}}$ , one solves the semidefinite program

(5.8) $$ \begin{align} \begin{array}{ll} \min\limits_{\mu,H} & \quad \mu \\ \text{subject to} & \quad \mu-{\mathbb{r}} = \vec{w}^*H\vec{w}, \\ & \quad H\succeq 0, \end{array} \end{align} $$

where H is a $(\dim V_{\ell })\times (\dim V_{\ell })$ Hermitian matrix and $\vec {w}$ is a vectorised basis of $V_{\ell }$ . For constrained eigenvalue optimisation (L is present), one can set up a similar semidefinite program using localising matrices [BKP16, Definition 1.41].
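A toy instance of the Gram-matrix mechanism behind the program (5.8) (our example, one Hermitian variable, ${\mathbb{r}}=2x-x^2$ with eigenvalue supremum $1$): matching coefficients in $\mu -{\mathbb{r}}=\vec w^*H\vec w$ with $\vec w=(1,x)^{\mathsf T}$ forces $H=\left (\begin{smallmatrix}\mu & -1+it\\ -1-it & 1\end{smallmatrix}\right )$, and $H\succeq 0$ if and only if $\mu \ge 1+t^2$; the optimum $\mu _*=1$ is attained at $t=0$, where $\mu -{\mathbb{r}}=(1-x)^2$. The resulting certificate can be verified at a sample point:

```python
# Check mu - r = (1-x)^2 at mu = 1, i.e. I - 2X + X^2 = (I-X)*(I-X),
# the psd certificate produced by the toy instance of the SDP (5.8).

def mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def lincomb(c1, A, c2, B):
    return tuple(tuple(c1 * a + c2 * b for a, b in zip(ra, rb))
                 for ra, rb in zip(A, B))

I = ((1.0, 0.0), (0.0, 1.0))
X = ((0.5, 0.25), (0.25, -1.0))                   # a sample Hermitian point

lhs = lincomb(1.0, lincomb(1.0, I, -2.0, X), 1.0, mul(X, X))  # I - 2X + X^2
IminusX = lincomb(1.0, I, -1.0, X)
rhs = mul(IminusX, IminusX)                       # (I-X)^2, psd as a square

gap = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

In an actual solver one would leave H as a decision variable and impose the linear coefficient-matching constraints together with $H\succeq 0$; here the tiny program is solved by hand and only the certificate identity is checked numerically.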

6 More on domains

In this section we prove two new results on (Hermitian) domains. One of them is the aforementioned Proposition 2.1, which states that every noncommutative rational function admits a representative with the largest Hermitian domain. The other one is Proposition 6.3 on cancellation of singularities of noncommutative rational functions.

6.1 Representatives with the largest Hermitian domain

We will require a technical lemma about matrices over formal rational expressions and their Hermitian domains. A representative of a matrix $\mathbb{m}$ over $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ is a matrix over $\mathfrak {R}_{\mathbb {C}}(x)$ of representatives of $\mathbb{m}_{ij}$ , and the domain of a matrix over $\mathfrak {R}_{\mathbb {C}}(x)$ is the intersection of domains of its entries.

Lemma 6.1. Let m be an $e\times e$ matrix over $\mathfrak {R}_{\mathbb {C}}(x)$ such that $\mathbb{m}$ is invertible over $\mathbb{C}\!\mathop{(<}\! x\!\mathop{>)}$ . Then there exists $s\in \mathbb{m}^{-1}$ such that $\operatorname {\mathrm {hdom}} m\cap \operatorname {\mathrm {hdom}} \mathbb{m}^{-1}=\operatorname {\mathrm {hdom}} s$ .

Proof. Throughout the proof we reserve italic letters ( $m,c$ , etc.) for matrices over $\mathfrak {R}_{\mathbb {C}}(x)$ and bold letters ( $\mathbb{m},\mathbb{c}$ , etc.) for the corresponding matrices over . We prove the statement by induction on e. If $e=1$ , then $m^{-1}$ is the desired expression. Assume the statement holds for matrices of size $e-1$ , and let c be the first column of m. Then $\operatorname {\mathrm {hdom}} c\supseteq \operatorname {\mathrm {hdom}} m$ , and $c(X)$ is of full rank for every $X\in \operatorname {\mathrm {hdom}} m\cap \operatorname {\mathrm {hdom}}\mathbb{m}^{-1}$ . Hence

(6.1) $$ \begin{align} \operatorname{\mathrm{hdom}} (c^*c)^{-1} \supseteq\operatorname{\mathrm{hdom}} m\cap\operatorname{\mathrm{hdom}}\mathbb{m}^{-1}. \end{align} $$

Let $\widehat {\mathbb{m}}$ be the Schur complement of $\mathbb{c}^*\mathbb{c}$ in $\mathbb{m}^*\mathbb{m}$ . Note that the entries of $\widehat {\mathbb{m}}$ are polynomials in entries of $\mathbb{m}^*\mathbb{m}$ and $(\mathbb{c}^*\mathbb{c})^{-1}$ ; lifting these polynomials to formal expressions in $\mathfrak {R}_{\mathbb {C}}(x)$ , we obtain a representative $\widehat {m}\in \widehat {\mathbb{m}}$ such that

(6.2) $$ \begin{align} \operatorname{\mathrm{hdom}}\widehat{m} =\operatorname{\mathrm{hdom}} m\cap \operatorname{\mathrm{hdom}} (c^*c)^{-1}. \end{align} $$

If $X\in \operatorname {\mathrm {hdom}} m$ , then $m(X)$ is invertible if and only if $(c^*c)(X)$ and $\widehat {m}(X)$ are invertible. Thus by formulas (6.1) and (6.2), we have

(6.3) $$ \begin{align} \operatorname{\mathrm{hdom}} m \cap \operatorname{\mathrm{hdom}} \mathbb{m}^{-1} = \operatorname{\mathrm{hdom}} m \cap \left(\operatorname{\mathrm{hdom}} (c^*c)^{-1}\cap\operatorname{\mathrm{hdom}} \widehat{\mathbb{m}}^{-1}\right). \end{align} $$

Since $\widehat {m}$ is an $(e-1)\times (e-1)$ matrix, by the induction hypothesis there exists $\widehat {s}\in \widehat {\mathbb{m}}^{-1}$ such that $\operatorname {\mathrm {hdom}} \widehat {m}\cap \operatorname {\mathrm {hdom}} \widehat {\mathbb{m}}^{-1}=\operatorname {\mathrm {hdom}} \widehat {s}$ . By equation (6.3) we have

(6.4) $$ \begin{align} \operatorname{\mathrm{hdom}} m \cap \operatorname{\mathrm{hdom}} \mathbb{m}^{-1} = \operatorname{\mathrm{hdom}} m \cap \left(\operatorname{\mathrm{hdom}} (c^*c)^{-1}\cap\operatorname{\mathrm{hdom}} \widehat{s}\right). \end{align} $$

The entries of $(\mathbb{m}^*\mathbb{m})^{-1}$ can be represented by expressions $s^{\prime }_{ij}$ that are sums and products of the expressions $m_{ij},m_{ij}^*,(c^*c)^{-1},\widehat {s}_{ij}$ . Thus the resulting matrix $s'\in (\mathbb{m}^*\mathbb{m})^{-1}$ satisfies

$$ \begin{align*} \operatorname{\mathrm{hdom}} m \cap \operatorname{\mathrm{hdom}} \mathbb{m}^{-1}=\operatorname{\mathrm{hdom}} s', \end{align*} $$

by equation (6.4). Finally, $s=s' m^*$ is the desired expression because $\mathbb{m}^{-1}=(\mathbb{m}^*\mathbb{m})^{-1}\mathbb{m}^*$ .

Proof of Proposition 2.1. Let ${\mathbb{r}}$ be an element of the free skew field. Let $e\in \mathbb {N}$ , an affine matrix pencil M of size e and $u,v\in \mathbb {C}^e$ be such that ${\mathbb{r}}=u^* M^{-1}v$ in the free skew field, and e is minimal. Recall that $\operatorname {\mathrm {dom}}{\mathbb{r}}=\bigcup _{r\in {\mathbb{r}}}\operatorname {\mathrm {dom}} r$ . By comparing $(u,M,v)$ with linear representations of representatives of ${\mathbb{r}}$ as in [Reference Cohn and ReutenauerCR99, Theorem 1.4], it follows that

(6.5) $$ \begin{align} \operatorname{\mathrm{dom}}{\mathbb{r}}\subseteq\bigcup_{n\in\mathbb{N}}\left\{X\in\operatorname{\mathrm{M}}_{n}(\mathbb{C})^d: \det M(X)\neq0 \right\}. \end{align} $$

Since M contains no inverses, it is defined at every matrix tuple; thus by Lemma 6.1 there is a representative of $M^{-1}$ whose Hermitian domain equals $\{X=X^*\colon \det M(X)\neq 0 \}$ . Since ${\mathbb{r}}$ is a linear combination of the entries in $M^{-1}$ , there exists $r\in {\mathbb{r}}$ such that $\operatorname {\mathrm {hdom}} r=\operatorname {\mathrm {hdom}} {\mathbb{r}}$ by formula (6.5).

Example 6.2. The domain of the noncommutative rational function ${\mathbb{r}}$ given by the expression $\left (x_4-x_3x_1^{-1}x_2\right )^{-1}$ equals

$$ \begin{align*} \operatorname{\mathrm{dom}}{\mathbb{r}} = \bigcup_{n\in\mathbb{N}} \left\{X\in\operatorname{\mathrm{M}}_{n}(\mathbb{C})^4 \colon \det \begin{pmatrix} X_ 1 & X_2 \\ X_3 & X_4 \end{pmatrix}\neq0 \right\}, \end{align*} $$

and $\operatorname {\mathrm {dom}} r\subsetneq \operatorname {\mathrm {dom}} {\mathbb{r}}$ for every $r\in {\mathbb{r}}$ by [Reference VolčičVol17, Example 3.13].

Following the proof of Proposition 2.1 and Lemma 6.1, let $\mathbb{m}=\left (\begin {smallmatrix}x_1 & x_2 \\ x_3 & x_4\end {smallmatrix}\right )$ . Then

$$ \begin{align*} \mathbb{m}^*\mathbb{m}=\begin{pmatrix}x_1^2+x_3^2 & x_1x_2+x_3x_4 \\ x_2x_1+x_4x_3 & x_2^2+x_4^2\end{pmatrix}, \end{align*} $$

and the Schur complement of $\mathbb{m}^*\mathbb{m}$ with respect to the $(1,1)$ -entry equals

$$ \begin{align*} \widehat{\mathbb{m}}=x_2^2+x_4^2-(x_2x_1+x_4x_3)\left(x_1^2+x_3^2\right)^{-1}(x_1x_2+x_3x_4). \end{align*} $$

Since

$$ \begin{align*} \mathbb{m}^{-1}=(\mathbb{m}^*\mathbb{m})^{-1}\mathbb{m}^*= \begin{pmatrix} \star & \star \\ -\widehat{\mathbb{m}}^{-1}(x_2x_1+x_4x_3)\left(x_1^2+x_3^2\right)^{-1} & \widehat{\mathbb{m}}^{-1} \end{pmatrix} \begin{pmatrix} \star & x_3 \\ \star & x_4 \end{pmatrix} \end{align*} $$

and

$$ \begin{align*} {\mathbb{r}} = \begin{pmatrix}0 & 1 \end{pmatrix}\mathbb{m}^{-1}\begin{pmatrix} 0\\ 1\end{pmatrix}, \end{align*} $$

we conclude that the formal rational expression

$$ \begin{align*} \left(x_2^2+x_4^2-(x_2x_1+x_4x_3)\left(x_1^2+x_3^2\right)^{-1}(x_1x_2+x_3x_4)\right)^{-1} \left(x_4-(x_2x_1+x_4x_3)\left(x_1^2+x_3^2\right)^{-1}x_3\right) \end{align*} $$

represents ${\mathbb{r}}$ , and its Hermitian domain coincides with $\operatorname {\mathrm {hdom}} {\mathbb{r}}$ . Of course, the expression $\left (x_4-x_3x_1^{-1}x_2\right )^{-1}$ is a much simpler representative of ${\mathbb{r}}$ .
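One can nevertheless test numerically that the long representative and the simple expression agree wherever both are defined. The following numpy sketch (not from the paper) evaluates both at a random tuple of $4\times 4$ Hermitian matrices, on which $x_i^*=x_i$:

```python
import numpy as np

rng = np.random.default_rng(2)

def herm(n=4):
    """A random n x n complex Hermitian matrix."""
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return (A + A.conj().T) / 2

X1, X2, X3, X4 = (herm() for _ in range(4))
inv = np.linalg.inv

# Evaluate the long representative at X (here x_i^* = x_i):
A = X1 @ X1 + X3 @ X3                    # (x1^2 + x3^2)(X)
B = X2 @ X1 + X4 @ X3                    # (x2 x1 + x4 x3)(X)
ghat = X2 @ X2 + X4 @ X4 - B @ inv(A) @ (X1 @ X2 + X3 @ X4)
r_long = inv(ghat) @ (X4 - B @ inv(A) @ X3)

# Evaluate the simple representative (x4 - x3 x1^{-1} x2)^{-1} at X:
r_short = inv(X4 - X3 @ inv(X1) @ X2)

assert np.allclose(r_long, r_short)      # both represent the same function
```

A generic Hermitian tuple lies in the Hermitian domains of both expressions, so the assertion holds almost surely for random input.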

6.2 Cancellation of singularities

In the absence of left ideals in skew fields, the following proposition serves as a rational analogue of Bergman’s Nullstellensatz for noncommutative polynomials [Reference Helton and McCulloughHM04, Theorem 6.3]. The proof omits some of the details, since it closely follows the proof of Theorem 5.2.

Proposition 6.3. The following are equivalent for ${\mathbb{r}},{\mathbb{s}}$ in the free skew field with ${\mathbb{r}}\neq 0$ :

  1. (i) $\ker {\mathbb{r}}(X)\subseteq \ker {\mathbb{s}}(X)$ for all $X\in \operatorname {\mathrm {dom}}{\mathbb{r}}\cap \operatorname {\mathrm {dom}}{\mathbb{s}}$ .

  2. (ii) $\operatorname {\mathrm {dom}} \left ({\mathbb{s}}{\mathbb{r}}^{-1}\right )\supseteq \operatorname {\mathrm {dom}}{\mathbb{r}}\cap \operatorname {\mathrm {dom}}{\mathbb{s}}$ .

Proof. (ii) $\Rightarrow $ (i) If (ii) holds, then ${\mathbb{s}}(X)=\left ({\mathbb{s}}(X){\mathbb{r}}(X)^{-1}\right ){\mathbb{r}}(X)$ for every $X\in \operatorname {\mathrm {dom}}{\mathbb{r}}\cap \operatorname {\mathrm {dom}}{\mathbb{s}}$ , and so $\ker {\mathbb{r}}(X)\subseteq \ker {\mathbb{s}}(X)$ .

(i) $\Rightarrow $ (ii) Suppose (ii) does not hold; then there are $r\in {\mathbb{r}}$ , $s\in {\mathbb{s}}$ and $Y\in \operatorname {\mathrm {dom}} r\cap \operatorname {\mathrm {dom}} s$ such that $\det r(Y)=0$ . Similarly to Section 4, denote

$$ \begin{align*} R=\{1\}\cup \{q\in\mathfrak{R}_{\mathbb{C}}(x)\setminus\mathbb{C}\colon q \text{ is a subexpression of }r \text{ or }s \} \end{align*} $$

and let $\mathcal {R}$ be its image in the free skew field. We also define finite-dimensional vector spaces $V_{\ell }$ and the finitely generated algebra V as before. The left ideal $V{\mathbb{r}}$ in V is proper: if $\mathbb{q}{\mathbb{r}}=1$ for some $\mathbb{q}\in V$ with representative $q$ , then $q(Y)r(Y)=I$ , since $Y\in \operatorname {\mathrm {dom}} q$ , which contradicts $\det r(Y)=0$ . Furthermore, ${\mathbb{s}}\notin V{\mathbb{r}}$ , since (ii) does not hold. Let $K=V/V{\mathbb{r}}$ , and let $K_{\ell }$ be the image of $V_{\ell }$ for every $\ell \in \mathbb {N}$ . Let $\mathfrak {X}_j: K\to K$ be the operator given by left multiplication with $x_j$ ; note that $\mathfrak {X}_j(K_{\ell })\subseteq K_{\ell +1}$ for all $\ell $ . By induction on the construction of $q\in R$ , it is straightforward to see that $q(\mathfrak {X})$ is well defined for every $q\in R$ . Let $\ell =2\max \{\tau (r),\tau (s)\}-2$ . By Proposition 3.6, there exist a finite-dimensional vector space U and a d-tuple of operators X on $K_{\ell +1}\oplus U$ such that $X\in \operatorname {\mathrm {dom}} r\cap \operatorname {\mathrm {dom}} s$ and

$$ \begin{align*} X_j\rvert_{K_{\ell}}=\mathfrak{X}_j\rvert_{K_{\ell}} \end{align*} $$

for $j=1,\dotsc ,d$ . A slight modification of Proposition 4.3 implies that

$$ \begin{align*} r(X)[1]=[{\mathbb{r}}]=0,\qquad s(X)[1]=[{\mathbb{s}}]\neq0, \end{align*} $$

where $[\mathbb{q}]\in K$ denotes the image of $\mathbb{q}\in V$ . Hence $[1]\in \ker {\mathbb{r}}(X)\setminus \ker {\mathbb{s}}(X)$ , contradicting (i).

The implication (i) $\Rightarrow $ (ii) in Proposition 6.3 fails if only Hermitian domains are considered (e.g., take ${\mathbb{r}}=x_1^2$ and ${\mathbb{s}}=x_1$ ). It is also worth mentioning that while Proposition 6.3 might look rather straightforward at first glance, there is a certain subtlety to it. Namely, the equivalence in Proposition 6.3 fails if only matrix tuples of a fixed size are considered. For example, let ${\mathbb{r}}=x_1$ and ${\mathbb{s}}=x_1x_2$ ; then $\operatorname {\mathrm {dom}}_1{\mathbb{r}}\cap \operatorname {\mathrm {dom}}_1{\mathbb{s}}=\mathbb {C}^2$ and $\ker {\mathbb{r}}(X)\subseteq \ker {\mathbb{s}}(X)$ for all $X\in \mathbb {C}^2$ , but $\operatorname {\mathrm {dom}}_1 \left ({\mathbb{s}}{\mathbb{r}}^{-1}\right )=(\mathbb {C}\setminus \{0\})\times \mathbb {C}$ (compare [Reference VolčičVol17, Example 2.1 and Theorem 3.10]).
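The fixed-size failure above can be made concrete: for ${\mathbb{r}}=x_1$ and ${\mathbb{s}}=x_1x_2$ , the kernel containment (i) holds on $1\times 1$ matrices, but it already fails on $2\times 2$ matrices, which is why the proposition quantifies over matrices of all sizes. A small numpy illustration (not from the paper):

```python
import numpy as np

# r = x1, s = x1 x2. On scalars, ker r(X) ⊆ ker s(X) always holds, yet
# s r^{-1} = x1 x2 x1^{-1} is undefined at x1 = 0. On 2x2 matrices the
# kernel containment itself already fails:
X1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # ker X1 = span{e1}
X2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # X2 swaps e1 and e2
v = np.array([1.0, 0.0])                  # v = e1

assert np.allclose(X1 @ v, 0)             # v ∈ ker r(X)
assert np.linalg.norm(X1 @ X2 @ v) > 0.5  # but v ∉ ker s(X)
```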

Acknowledgments

The author thanks Igor Klep for valuable comments and suggestions which improved the presentation of this paper. This research was supported by NSF grant DMS 1954709. The research meets all ethical guidelines, including adherence to the legal requirements of the study country.

Conflicts of interest

None.

References

Barvinok, A., A Course in Convexity, Graduate Studies in Mathematics vol. 54 (American Mathematical Society, Providence, RI, 2002).
Blekherman, G., Parrilo, P. A. and Thomas, R. R. (editors), Semidefinite Optimization and Convex Algebraic Geometry, MOS-SIAM Series on Optimization (Society for Industrial and Applied Mathematics, Philadelphia, PA, 2013).
Bochnak, J., Coste, M. and Roy, M. F., Real Algebraic Geometry, Results in Mathematics and Related Areas (3) vol. 36 (Springer-Verlag, Berlin, 1998).
Burgdorf, S., Klep, I. and Povh, J., Optimization of Polynomials in Non-Commuting Variables, SpringerBriefs in Mathematics (Springer, Cham, Switzerland, 2016).
Cohn, P. M., Skew Fields: Theory of General Division Rings, Encyclopedia of Mathematics and Its Applications vol. 57 (Cambridge University Press, Cambridge, UK, 1995).
Cohn, P. M. and Reutenauer, C., ‘On the construction of the free field’, Internat. J. Algebra Comput. 9 (1999), 307–323.
de Oliveira, M. C., Helton, J. W., McCullough, S. A. and Putinar, M., ‘Engineering systems and free semialgebraic geometry’, in Emerging Applications of Algebraic Geometry (Springer, New York, 2009), 17–61.
Derksen, H. and Makam, V., ‘Polynomial degree bounds for matrix semi-invariants’, Adv. Math. 310 (2017), 44–63.
Doherty, A. C., Liang, Y.-C., Toner, B. and Wehner, S., ‘The quantum moment problem and bounds on entangled multi-prover games’, in 23rd Annual IEEE Conference on Computational Complexity (IEEE Computer Society, Los Alamitos, CA, 2008), 199–210.
Helton, J. W., ‘Positive noncommutative polynomials are sums of squares’, Ann. of Math. (2) 156 (2002), 675–694.
Helton, J. W., Klep, I. and McCullough, S., ‘The convex Positivstellensatz in a free algebra’, Adv. Math. 231 (2012), 516–534.
Helton, J. W., Klep, I. and Volčič, J., ‘Factorization of noncommutative polynomials and Nullstellensätze for the free algebra’, Int. Math. Res. Not. IMRN (2020), https://doi.org/10.1093/imrn/rnaa122.
Helton, J. W., Mai, T. and Speicher, R., ‘Applications of realizations (aka linearizations) to free probability’, J. Funct. Anal. 274 (2018), 1–79.
Helton, J. W. and McCullough, S., ‘A Positivstellensatz for non-commutative polynomials’, Trans. Amer. Math. Soc. 356 (2004), 3721–3737.
Helton, J. W. and McCullough, S., ‘Every convex free basic semialgebraic set has an LMI representation’, Ann. of Math. (2) 176 (2012), 979–1013.
Kaliuzhnyi-Verbovetskyi, D. S. and Vinnikov, V., ‘Noncommutative rational functions, their difference-differential calculus and realizations’, Multidimens. Syst. Signal Process. 23 (2012), 49–77.
King, A. D., ‘Moduli of representations of finite-dimensional algebras’, Q. J. Math. 45 (1994), 515–530.
Klep, I., Pascoe, J. E., Podlogar, G. and Volčič, J., ‘Noncommutative rational functions invariant under the action of a finite solvable group’, J. Math. Anal. Appl. 490 (2020), 124341.
Klep, I., Pascoe, J. E. and Volčič, J., ‘Regular and positive noncommutative rational functions’, J. Lond. Math. Soc. (2) 95 (2017), 613–632.
Klep, I., Špenko, Š. and Volčič, J., ‘Positive trace polynomials and the universal Procesi-Schacher conjecture’, Proc. Lond. Math. Soc. (3) 117 (2018), 1101–1134.
Lasserre, J.-B., ‘Global optimization with polynomials and the problem of moments’, SIAM J. Optim. 11 (2001), 796–817.
McCullough, S., ‘Factorization of operator-valued polynomials in several non-commuting variables’, Linear Algebra Appl. 326 (2001), 193–203.
Ozawa, N., ‘Noncommutative real algebraic geometry of Kazhdan’s property (T)’, J. Inst. Math. Jussieu 15 (2016), 85–90.
Pascoe, J. E., ‘Positivstellensätze for noncommutative rational expressions’, Proc. Amer. Math. Soc. 146 (2018), 933–937.
Pozas-Kerstjens, A., Rabelo, R., Rudnicki, Ł., Chaves, R., Cavalcanti, D., Navascués, M. and Acín, A., ‘Bounding the sets of classical and quantum correlations in networks’, Phys. Rev. Lett. 123 (2019), 140503.
Procesi, C. and Schacher, M., ‘A non-commutative real Nullstellensatz and Hilbert’s 17th problem’, Ann. of Math. 104 (1976), 395–406.
Volčič, J., ‘On domains of noncommutative rational functions’, Linear Algebra Appl. 516 (2017), 69–81.
Volčič, J., ‘Matrix coefficient realization theory of noncommutative rational functions’, J. Algebra 499 (2018), 397–437.
Wolkowicz, H., Saigal, R. and Vandenberghe, L. (editors), Handbook of Semidefinite Programming: Theory, Algorithms, and Applications, International Series in Operations Research & Management Science 27 (Kluwer Academic Publishers, Boston, MA, 2000).