1. Introduction
The focusing nonlinear Schrödinger equation is a universal evolution equation governing the complex amplitude of a weakly nonlinear, strongly dispersive wave packet over long time scales in very general settings. In one space dimension, this equation can be written in normalized form as
\begin{align}
{\mathrm{i}} q_t + \frac{1}{2}q_{xx} + |q|^2q = 0,\quad q=q(x,t),\quad (x,t)\in\mathbb{R}^2.
\end{align}For instance, in 1969, V. E. Zakharov studied the surface elevation of water wave packets over deep water in the classical setting of plane-parallel irrotational and incompressible (potential) flow below the free surface, which is subject to kinematic and pressure-balance boundary conditions [Reference Zakharov47]. He gave a derivation of (1.1) based on the formalism of the method of multiple scales, with the wave packet amplitude being the fundamental small parameter. This derivation has more recently been made fully rigorous [Reference Totz and Wu36]. Since the multiple-scale argument is based on Taylor expansions of nonlinear terms and of the linearized dispersion relation, the derivation of (1.1) as a model equation applies in far more settings than surface water waves [Reference Benney and Newell4]. For example, it is also a fundamental model in nonlinear optics [Reference Newell and Moloney26] and in the theory of Bose-Einstein condensation (where it is known as the Gross-Pitaevskii equation) [Reference Ueda43].
In 1983, D. H. Peregrine found a compelling exact solution of (1.1) for which
$q(x,t)$ is not of constant modulus, but nonetheless decays to the exact solution
$q(x,t)={\mathrm{e}}^{{\mathrm{i}} t}$ of (1.1) uniformly in all directions of space-time [Reference Peregrine31]. Peregrine’s solution is
\begin{align}
q(x,t)={\mathrm{e}}^{{\mathrm{i}} t}\left[1-4\frac{1+2{\mathrm{i}} t}{1+4x^2+4t^2}\right] = {\mathrm{e}}^{{\mathrm{i}} t}\left[1+O\left(\frac{1}{\sqrt{x^2+t^2}}\right)\right],\quad (x,t)\to\infty.
\end{align} The error estimate in (1.2) is not optimal, but it is the optimal radially symmetric estimate. Since in the setting of water waves,
$q(x,t)={\mathrm{e}}^{{\mathrm{i}} t}$ is the complex amplitude of a uniform periodic wavetrain (a Stokes wave), Peregrine’s solution describes a space-time localized fluctuation of a Stokes wave, and as such it is a model for a rogue wave.
The focusing nonlinear Schrödinger equation (1.1) was shown to be a completely integrable system in the work of Zakharov and Shabat [Reference Zakharov and Shabat46]. This means that the methods of soliton theory apply, including tools for deriving numerous exact solutions such as (1.2). These tools have a recursive nature, allowing for a given exact solution to be generalized to a whole infinite family by means of iterated Bäcklund transformations. Thus, one sees that the Peregrine solution (1.2) is by no means the only solution of (1.1) that has the character of a rogue wave. Indeed, there exist algebraic representations in terms of determinants of exact solutions of (1.1) that for any
$N\in\mathbb{N}$ can be viewed as a rogue wave of order N. At each subsequent value of N, new parameters enter into the algebraic solution formula that affect the details of the solution without influencing its fundamental property of decay to the background
$q(x,t)={\mathrm{e}}^{{\mathrm{i}} t}$. If these parameters are scaled suitably, the rogue wave of order N can resemble an array of a triangular number of distant copies of the Peregrine solution on the same background, and it has been shown [Reference Yang and Yang45] that the locations of the Peregrine peaks in space-time are correlated with the complex zeros of the Yablonskii-Vorob’ev polynomials.
However, if the parameters are chosen in a highly correlated way at each order, then the numerous peaks all combine and form a rogue wave of significantly higher amplitude, termed a fundamental rogue wave. For instance, one can see that the Peregrine solution (1.2) corresponding to N = 1 has an amplitude
$|q(x,t)|$ that grows to a maximum value of 3 times the (unit) background level. At the level N = 2, the maximum amplitude obtainable is actually 5 times the background level. As such, the N = 2 fundamental rogue wave is a better model than the Peregrine solution for the famous Draupner event [Reference Sunde34] in the North Sea that is frequently cited as the first quantitative observation of sea-surface rogue waves.
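These amplitude values are easy to confirm from the explicit formula; the following Julia snippet (a minimal sketch, independent of the software discussed later in this paper) evaluates the Peregrine solution (1.2) at its peak and far from it:
\begin{verbatim}
# Peregrine solution (1.2): check the peak amplitude (three times the unit
# background) and the decay to the unit-modulus background exp(i t).
q(x, t) = exp(im*t) * (1 - 4*(1 + 2im*t) / (1 + 4x^2 + 4t^2))

abs(q(0.0, 0.0))    # = 3.0, the peak amplitude
abs(q(100.0, 0.0))  # approximately 1.0, the background level
\end{verbatim}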
To study large-amplitude rogue waves it then becomes of some interest to allow the order N to grow and seek an asymptotic description of fundamental rogue waves as
$N\to\infty$. This limit became tractable with the introduction of a modified form of the inverse scattering transform for (1.1) with nonzero boundary conditions at infinity, which yielded for the first time a Riemann–Hilbert representation of rogue wave solutions of arbitrary order [Reference Bilman and Miller9]. In [Reference Bilman, Ling and Miller8], this representation was used to analyze the fundamental rogue wave
$q=q(x,t)$ of order N in the large-order/near-field limit that
$N\to\infty$ while simultaneously the independent variables are rescaled near the peak
$(x,t)=(0,0)$ so that
$x=2X/N$ and
$t=4T/N^2$ for fixed
$(X,T)\in\mathbb{R}^2$. It was found that a limiting profile
$\Psi(X,T)$ of
$2q/N$ exists as
$N\to\infty$ that was called the rogue wave of infinite order, and was shown (see Theorem 1.10 below) to be a global solution of the focusing nonlinear Schrödinger (NLS) equation in the form
\begin{align}
{\mathrm{i}} \Psi_{T}+\frac{1}{2} \Psi_{X X}+|\Psi|^{2} \Psi=0.
\end{align} It turns out the same solution also appeared recently in the physical literature [Reference Suleimanov33] to describe a universal dispersive regularization of an anomalously catastrophic self-focusing effect predicted by the geometrical optics approximation in self-focusing Kerr media as noted in the 1960s by Talanov [Reference Talanov35], and there is also a rigorous proof that the solution arises in the semiclassical limit scaling of (1.1) when it is taken with real semicircle-profile initial data matching the Talanov form [Reference Buckingham, Jenkins and Miller18]. Indeed, several of the properties of
$\Psi(X,T)$ that were proven in [Reference Bilman, Ling and Miller8] had been also noted independently in the paper of Suleimanov [Reference Suleimanov33].
The methodology developed in [Reference Bilman and Miller9] also allowed for a streamlined analysis of multisoliton solutions of (1.1) on the zero background, and in [Reference Bilman and Buckingham7] the soliton analogue of high-order fundamental rogue waves was analysed in a similar near-field limit. Here one considers reflectionless potentials corresponding to a transmission coefficient with a single pole of arbitrarily high order in the upper half-plane. Unlike the case of fundamental rogue waves, two additional parameters appear in the iterated Darboux transformation that influence the shape of the limiting wave profile. Thus one sees that the rogue wave of infinite order
$\Psi(X,T)$ is a special case of a more general family of special solutions of (1.3). We call these solutions general rogue waves of infinite order.
For rogue-wave solutions, the iterated Darboux transformations are applied to a “seed solution” that is the uniform plane wave
$q(x,t)={\mathrm{e}}^{{\mathrm{i}} t}$, while for high-order soliton solutions the seed is instead the vacuum solution
$q(x,t)=0$. In both cases, the same type of limiting object was observed to appear in the high-order/near-field limit. This observation was first generalized in [Reference Bilman and Miller12] in which a family of exact solutions of (1.1) was described involving a continuous order parameter M that when discretized in two different ways yielded both the fundamental rogue waves and also the arbitrary order solitons, with the same near-field limit appearing no matter how the continuous order was allowed to grow without bound, suggesting a type of universality of the limiting general rogue wave of infinite order. This notion of universality was fully generalized in [Reference Bilman and Miller10], where it was shown that the seed solution could be completely arbitrary (and need not represent any type of explicit solution at all), and the same family of general rogue wave solutions always appears in the high-order/near-field limit.
General rogue waves of infinite order also have other applications. For one thing, the initial condition for Ψ at T = 0 would be expected to generate corresponding integrable dynamics in any evolution equation that commutes with (1.3), i.e., in other equations of the same integrable hierarchy associated with the Zakharov–Shabat operator. One such system is the sharp-line Maxwell-Bloch system, and in [Reference Li and Miller24] the initial profiles of the general rogue waves of infinite order are identified with a family of self-similar solutions of the Maxwell-Bloch system that describe an important boundary-layer phenomenon. There are also analogues of general rogue waves of infinite order in the modified Korteweg-de Vries equation (some constraints on the parameters are required to ensure reality of the solution) [Reference Bilman, Blackstone, Miller and Young6], and in simultaneous solutions of arbitrarily many commuting flows in the focusing NLS hierarchy [Reference Buckingham, Jenkins and Miller18]. There is also some recent interest in general rogue waves of infinite order in the analysis community due to the fact that they lie in
$L^2(\mathbb{R})$ for each fixed
$T\in\mathbb{R}$ (see Theorem 1.9 below) and while (1.3) is globally well-posed on this space [Reference Tsutsumi42], these solutions neither generate any coherent structures (solitons) for large time T nor do they exhibit the expected
$O(T^{-\frac{1}{2}})$ decay consistent with solitonless initial data in smaller spaces such as
$H^{1,1}(\mathbb{R})$ [Reference Borghese, Jenkins and McLaughlin15]. In fact, they decay at the anomalously slow rate of
$O(T^{-\frac{1}{3}})$ (see Theorem 1.22 below) and there is no reason to expect that notions such as “soliton content” from inverse-scattering theory apply (see Remark 1.21).
The main purpose of this paper is to present in one place all of the important properties of the family of general rogue waves of infinite order along with related computational methods. We therefore begin by properly defining these solutions.
1.1. Mathematical definition of general rogue waves of infinite order
In what follows, we denote by
$\mathbf{G}^*$ the entry-wise complex conjugate (without the transpose) of a matrix G, and we use the following standard notation for the Pauli spin matrices:
\begin{equation*}
\sigma_1 := \begin{bmatrix} 0 & 1 \\ 1 & 0\end{bmatrix},\quad \sigma_2 := \begin{bmatrix} 0 & -{\mathrm{i}} \\ {\mathrm{i}} & 0\end{bmatrix},\quad \sigma_3:=\begin{bmatrix} 1 & 0 \\ 0 & -1\end{bmatrix}.
\end{equation*}General rogue waves of infinite order are defined in terms of the following Riemann–Hilbert problem.
Riemann–Hilbert Problem 1.
Let
$(X,T)\in\mathbb{R}^2$ and B > 0 be fixed and let G be a
$2\times 2$ matrix satisfying
$\det(\mathbf{G})=1$ and
$\mathbf{G}^*=\sigma_2\mathbf{G}\sigma_2$. Find a
$2\times 2$ matrix
$\mathbf{P}(\Lambda;X,T,\mathbf{G},{B})$ with the following properties:
• Analyticity:
$\mathbf{P}(\Lambda;X,T,\mathbf{G},{B})$ is analytic in Λ for
$|\Lambda|\neq 1$, and it takes continuous boundary values on the clockwise-oriented unit circle from the interior and exterior.
• Jump condition: The boundary values on the unit circle are related as follows:
\begin{align}
\mathbf{P}_+(\Lambda;X,T,\mathbf{G},{B})=\mathbf{P}_-(\Lambda;X,T,\mathbf{G},{B})
{\mathrm{e}}^{-{\mathrm{i}}(\Lambda X+\Lambda^2T+2 {B} \Lambda^{-1})\sigma_3}\mathbf{G}
{\mathrm{e}}^{{\mathrm{i}}(\Lambda X+\Lambda^2T+2{B} \Lambda^{-1})\sigma_3},\quad |\Lambda|=1.
\end{align}
• Normalization:
$\mathbf{P}(\Lambda;X,T,\mathbf{G},{B})\to\mathbb{I}$ as
$\Lambda\to\infty$.
In general any matrix G satisfying
$\det(\mathbf{G})=1$ and
$\sigma_2 \mathbf{G}^* \sigma_2 = \mathbf{G}$ as in Riemann–Hilbert Problem 1 can be written as
\begin{align}
\mathbf{G}=\mathbf{G}(a,b)=\frac{1}{\sqrt{|a|^{2}+|b|^{2}}}\begin{bmatrix}
a & b^{*} \\
-b & a^{*}
\end{bmatrix}
\end{align} for complex numbers
$a,b$ not both zero.
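For a quick sanity check, the following Julia snippet (a minimal sketch using only the standard library) verifies numerically that the parametrization (1.5) indeed satisfies the two conditions $\det(\mathbf{G})=1$ and $\mathbf{G}^*=\sigma_2\mathbf{G}\sigma_2$ imposed in Riemann–Hilbert Problem 1:
\begin{verbatim}
# Check that G(a,b) from (1.5) satisfies det(G) = 1 and sigma2 * conj(G) * sigma2 = G.
using LinearAlgebra

sigma2 = [0 -im; im 0]
Gmat(a, b) = [a conj(b); -b conj(a)] / sqrt(abs2(a) + abs2(b))

a, b = 0.7 + 0.2im, -1.3 + 0.5im       # any (a, b), not both zero
G = Gmat(a, b)
abs(det(G) - 1)                        # approximately 0
norm(sigma2 * conj.(G) * sigma2 - G)   # approximately 0
\end{verbatim}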
It is a consequence of the conditions on G and the analytic dependence of the jump matrix on (X, T) that the following holds.
Proposition 1.1 (Global existence)
For each
$(X,T)\in\mathbb{R}^2$ there exists a unique solution to Riemann–Hilbert Problem 1, and the solution depends real-analytically on
$(X,T)\in\mathbb{R}^2$ and the real and imaginary parts of the parameters
$a,b$ of the elements of G.
This follows from Zhou’s vanishing lemma [Reference Zhou48, Theorem 9.3] and the application of analytic Fredholm theory. The special solution
$\Psi(X,T)=\Psi(X,T;\mathbf{G}, {B})$ of (1.3) is defined in terms of the solution of Riemann–Hilbert Problem 1 by
\begin{align}
\Psi(X,T;\mathbf{G},{B}):= 2{\mathrm{i}} \lim_{\Lambda\to\infty}\Lambda P_{12}(\Lambda;X,T,\mathbf{G},{B}).
\end{align} This is in general a transcendental solution of the NLS equation; therefore its quantitative properties, the qualitative nature of its profile (for instance, what boundary conditions are satisfied as
$X\to\pm\infty$), and how these depend on parameters are not immediately clear.
The function
$\Psi(X,T;\mathbf{G},{B})$ was first studied by Suleimanov [Reference Suleimanov33] and independently by the authors with L. Ling [Reference Bilman, Ling and Miller8] for the special case of
\begin{align}
\mathbf{G}=\mathbf{Q}^{-1},\qquad \mathbf{Q}:=\frac{1}{\sqrt{2}}\begin{bmatrix}1& -1 \\ 1 & 1 \end{bmatrix},
\end{align} which corresponds to the choice
$a=b=1$ (or any positive number) in (1.5).
In order to study properties of the special solution
$\Psi(X,T;\mathbf{G},{B})$ and how they depend on parameters, three approaches come to mind: (i) investigate its exact properties including symmetries, special values, differential equations satisfied, and equivalent representations; (ii) work in a variety of interesting asymptotic regimes to obtain rigorous approximations to
$\Psi(X,T;\mathbf{G},{B})$; and (iii) compute
$\Psi(X,T;\mathbf{G},{B})$ accurately in the (X, T)-plane by a suitable numerical method. In this paper, we use all three approaches.
In the rest of this introduction section, we summarize our results in the three areas mentioned above. To set the scene, plots of
$\Psi(X,T;\mathbf{G},{B})$ computed with RogueWaveInfiniteNLS.jl with
$a=b={B}=1$ are shown in Figure 1. RogueWaveInfiniteNLS.jl is a software package for the Julia programming language developed as part of this work to compute rogue waves of infinite order through numerical solution of suitable Riemann–Hilbert problems; see Section 1.4 below.

Figure 1. The solution
$\Psi(X,T;\mathbf{G},{B})$ computed with RogueWaveInfiniteNLS.jl with
$a=b={B}=1$. RogueWaveInfiniteNLS.jl is a software package developed in this work for the Julia programming language to compute rogue waves of infinite order through numerical solution of suitable Riemann–Hilbert problems.
1.2. Exact properties of Ψ
Here we describe the symmetries of
$\Psi(X,T;\mathbf{G},{B})$ (Section 1.2.1), evaluate it and its derivative
$\Psi_X$ at
$(X,T)=(0,0)$ (Section 1.2.2) and give its $L^2$-norm (Section 1.2.3), give partial and ordinary differential equations satisfied by
$\Psi(X,T;\mathbf{G},{B})$ (Section 1.2.4), and give a new Fredholm determinant formula for the initial condition (Section 1.2.5).
1.2.1. Symmetries
In the setting that
$\Psi(X,T;\mathbf{G},{B})$ arises from the joint near-field/high-order limit of rogue-wave solutions of (1.1), the parameter B > 0 has the interpretation of the amplitude of the background wave supporting the rogue waves. However, it is not hard to see that the dependence on B > 0 can be scaled out of Ψ by the scaling invariance
$\Psi(X,T;\mathbf{G},{B}) \mapsto {B}^{-1} \Psi({B}^{-1}X, {B}^{-2} T;\mathbf{G},{B})$ of the focusing NLS equation (1.3).
Proposition 1.2 (Scaling symmetry)
Given G with
$\det(\mathbf{G})=1$ and
$\mathbf{G}^*=\sigma_2\mathbf{G}\sigma_2$, for each B > 0 and
$(X,T)\in\mathbb{R}^2$, we have
\begin{align}
\Psi(X,T;\mathbf{G},{B})={B}\,\Psi({B}X,{B}^{2}T;\mathbf{G},1).
\end{align}
We give a proof in Appendix A. In this paper we make use of Proposition 1.2 and take B = 1. Accordingly, we write
\begin{align}
\Psi(X,T;\mathbf{G}):=\Psi(X,T;\mathbf{G},1)
\end{align}
to denote the special solution under study, and similarly we generally omit B = 1 from the argument list of P going forward. On the other hand, the dependence of
$\Psi(X,T;\mathbf{G})$ on the
$2\times 2$ matrix G is nontrivial.
We proceed with two observations that concern the symmetries with respect to reflections in the X variable and in the T variable.
Proposition 1.3 (Reflection in
$X$)
$\Psi(X,T; \mathbf{G}(a,b)) = \Psi(-X,T; \mathbf{G}(b,a))$.
Similarly, we have
Proposition 1.4 (Reflection in
$T$)
$\Psi(X,-T; \mathbf{G}(a,b)) = \Psi(X,T; \mathbf{G}(a,b)^*)^*$.
The proofs of Proposition 1.3 and Proposition 1.4 are in Appendix A. The next observation we make concerns a useful normalization of the parameters
$a,b$. Indeed, we have the following result, which is also proved in Appendix A.
Proposition 1.5 (Normalized parameters)
For all
$(X,T)\in\mathbb{R}^2$ and
$a,b\in\mathbb{C}$ with
$ab\neq 0$,
\begin{align}
\Psi(X,T;\mathbf{G}(a,b))={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,T;\mathbf{G}(\mathfrak{a},\mathfrak{b})),
\end{align}
where
\begin{align}
\mathfrak{a}:=\frac{|a|}{\sqrt{|a|^2+|b|^2}}\quad\text{and}\quad
\mathfrak{b}:=\frac{|b|}{\sqrt{|a|^2+|b|^2}}
\end{align} satisfy
$\mathfrak{a},\mathfrak{b} \gt 0$ with
$\mathfrak{a}^2+\mathfrak{b}^2=1$.
Therefore, up to a phase factor, there is just one real parameter in the family of solutions
$\Psi(X,T;\mathbf{G})$ with ab ≠ 0, which one could take as
$\mathfrak{a}\in (0,1)$, or equivalently as an angle
$\eta\in (0,\frac{1}{2}\pi)$ for which
$\mathfrak{a}=\cos(\eta)$ and
$\mathfrak{b}=\sin(\eta)$. The coordinate η was used, for example, in the analysis of [Reference Li and Miller24, Section 2.3]. Combining Proposition 1.4 and Proposition 1.5 for T = 0 shows that
\begin{align}
\begin{split}
\Psi(X,0;\mathbf{G}(a,b))&={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b}))\\ &={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b}))^*\\ &={\mathrm{e}}^{-2{\mathrm{i}}\arg(ab)}\Psi(X,0;\mathbf{G}(a,b))^*
\end{split}
\end{align} because
$\mathbf{G}(\mathfrak{a},\mathfrak{b})$ is a real matrix. It follows that
$X\mapsto {\mathrm{e}}^{{\mathrm{i}}\arg(ab)}\Psi(X,0;\mathbf{G}(a,b))$ is a real-valued function of
$X\in\mathbb{R}$.
The normalized parameters
$\mathfrak{a} \gt 0$ and
$\mathfrak{b} \gt 0$ with
$\mathfrak{a}^2+\mathfrak{b}^2=1$ will be used in the proofs of our asymptotic results to be described in Section 1.3 below. So that they are available later we record here the following four standard matrix factorizations of the central factor
$\mathbf{G}(\mathfrak{a},\mathfrak{b})$ in the jump matrix (1.4), which have been further manipulated to have a diagonal matrix as the leftmost factor:
\begin{align}
\mathbf{G}(\mathfrak{a},\mathfrak{b})=\begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\-\mathfrak{b} & \mathfrak{a}\end{bmatrix}=\mathfrak{a}^{\sigma_3}\begin{bmatrix}1 & 0\\-\mathfrak{ab} & 1\end{bmatrix}\begin{bmatrix}1 & \displaystyle\frac{\mathfrak{b}}{\mathfrak{a}}\\0 & 1\end{bmatrix},\quad \text{(“LDU”)},
\end{align}
\begin{align}
\mathbf{G}(\mathfrak{a},\mathfrak{b})=\begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\-\mathfrak{b} & \mathfrak{a}\end{bmatrix}=\mathfrak{a}^{-\sigma_3}\begin{bmatrix}1 & \mathfrak{ab}\\0 & 1\end{bmatrix}\begin{bmatrix}1 & 0\\\displaystyle-\frac{\mathfrak{b}}{\mathfrak{a}} & 1\end{bmatrix},\quad\text{(“UDL”)},
\end{align}
\begin{align}
\mathbf{G}(\mathfrak{a},\mathfrak{b})=\begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\-\mathfrak{b} & \mathfrak{a}\end{bmatrix}=\mathfrak{a}^{\sigma_3}\begin{bmatrix}1&0\\
\displaystyle\frac{\mathfrak{a}^3}{\mathfrak{b}} & 1\end{bmatrix}
\begin{bmatrix}0&\displaystyle \frac{\mathfrak{b}}{\mathfrak{a}}\\\displaystyle -\frac{\mathfrak{a}}{\mathfrak{b}} & 0\end{bmatrix}
\begin{bmatrix}1 & 0\\\displaystyle\frac{\mathfrak{a}}{\mathfrak{b}} & 1\end{bmatrix},\quad
\text{(“LTL”)},
\end{align}
\begin{align}
\mathbf{G}(\mathfrak{a},\mathfrak{b})=\begin{bmatrix}\mathfrak{a} & \mathfrak{b}\\-\mathfrak{b} & \mathfrak{a}\end{bmatrix}=\mathfrak{a}^{-\sigma_3}\begin{bmatrix}1 & \displaystyle -\frac{\mathfrak{a}^3}{\mathfrak{b}}\\0 & 1\end{bmatrix}
\begin{bmatrix}0 & \displaystyle\frac{\mathfrak{a}}{\mathfrak{b}}\\\displaystyle-\frac{\mathfrak{b}}{\mathfrak{a}} & 0\end{bmatrix}\begin{bmatrix}1 & \displaystyle-\frac{\mathfrak{a}}{\mathfrak{b}}\\0 & 1\end{bmatrix},\quad \text{(“UTU”)}.
\end{align}
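Each factorization is elementary to verify directly; for example, the third (“LTL”) factorization can be checked numerically with a few lines of Julia (a minimal sketch):
\begin{verbatim}
# Check the "LTL" factorization of G in normalized parameters for a sample
# angle eta, where fa = cos(eta), fb = sin(eta) so that fa^2 + fb^2 = 1.
eta = 0.4
fa, fb = cos(eta), sin(eta)
G  = [fa fb; -fb fa]

D  = [fa 0; 0 1/fa]            # the diagonal factor fa^sigma3
L1 = [1 0; fa^3/fb 1]
Tm = [0 fb/fa; -fa/fb 0]       # the "twist" factor
L2 = [1 0; fa/fb 1]

maximum(abs.(D*L1*Tm*L2 - G))  # approximately 0 (machine precision)
\end{verbatim}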
1.2.2. $\Psi(X,0;\mathbf{G})$ near $X=0$
It is straightforward to solve Riemann–Hilbert Problem 1 explicitly when
$(X,T)=(0,0)$. Indeed, one can verify directly that the solution is:
\begin{align}
\mathbf{P}(\Lambda;0,0,\mathbf{G})
=\begin{cases}
\mathbf{G}^{-1},& |\Lambda| \lt 1,\\
\mathbf{G}^{-1}{\mathrm{e}}^{-2{\mathrm{i}}\Lambda^{-1}\sigma_3}\mathbf{G}{\mathrm{e}}^{2{\mathrm{i}}\Lambda^{-1}\sigma_3},&|\Lambda| \gt 1.
\end{cases}
\end{align} Then, it follows from this formula assuming
$|\Lambda| \gt 1$ that
$\mathbf{P}(\Lambda;0,0,\mathbf{G})=\mathbb{I} + (2{\mathrm{i}}\sigma_3 - 2{\mathrm{i}}\mathbf{G}^{-1}\sigma_3\mathbf{G})\Lambda^{-1} + O(\Lambda^{-2})$ as
$\Lambda\to\infty$. Therefore (1.6) yields the following.
Theorem 1.6 (Value at the origin)
\begin{align}
\Psi(0,0;\mathbf{G})=4\left(\mathbf{G}^{-1}\sigma_3\mathbf{G}\right)_{12} = 8\mathfrak{a}\mathfrak{b}{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}=\frac{8a^*b^*}{|a|^2+|b|^2}.
\end{align} Following [Reference Li and Miller24, Section 2.3.3], it is then a systematic matter to calculate derivatives of
$\mathbf{P}(\Lambda;X,0,\mathbf{G})$ with respect to X at X = 0. For instance, setting
$\mathbf{F}(\Lambda;X):=\mathbf{P}_X(\Lambda;X,0,\mathbf{G})\mathbf{P}(\Lambda;X,0,\mathbf{G})^{-1}$, one sees that
$\Lambda\mapsto\mathbf{F}(\Lambda;X)$ is analytic for
$|\Lambda|\neq 1$, that
$\mathbf{F}(\Lambda;X)\to \mathbf{0}$ as
$\Lambda\to\infty$, and that
$\mathbf{P}_+(\Lambda;X,0,\mathbf{G})=\mathbf{P}_-(\Lambda;X,0,\mathbf{G})\mathbf{V}(\Lambda;X)$ implies that also
\begin{align}
\mathbf{F}_+(\Lambda;X)=\mathbf{F}_-(\Lambda;X)+\mathbf{P}_-(\Lambda;X,0,\mathbf{G})\mathbf{V}_X(\Lambda;X)\mathbf{V}(\Lambda;X)^{-1}\mathbf{P}_-(\Lambda;X,0,\mathbf{G})^{-1},\quad |\Lambda|=1.
\end{align}
It follows that (using the Plemelj formula and taking into account the clockwise orientation of the jump contour)
\begin{align}
\begin{split}
\mathbf{F}(\Lambda;X)&=-\frac{1}{2\pi{\mathrm{i}}}\oint_{|\mu|=1}\frac{\mathbf{P}_-(\mu;X,0,\mathbf{G})\mathbf{V}_X(\mu;X)\mathbf{V}(\mu;X)^{-1}\mathbf{P}_-(\mu;X,0,\mathbf{G})^{-1}}{\mu-\Lambda}\, \mathrm{d}\mu\\
&=\frac{1}{2\pi{\mathrm{i}}\Lambda}\oint_{|\mu|=1}\mathbf{P}_-(\mu;X,0,\mathbf{G})\mathbf{V}_X(\mu;X)\mathbf{V}(\mu;X)^{-1}\mathbf{P}_-(\mu;X,0,\mathbf{G})^{-1}\, \mathrm{d}\mu + O(\Lambda^{-2})
\end{split}
\end{align} as
$\Lambda\to\infty$, where on both lines the integration contour has counterclockwise orientation, and where
$\mathbf{P}_-(\mu;X,0,\mathbf{G})$ refers to the boundary value taken from the interior of the unit circle. Since according to (1.4) the jump matrix is given by
$\mathbf{V}(\Lambda;X):={\mathrm{e}}^{-{\mathrm{i}}(\Lambda X + 2\Lambda^{-1})\sigma_3}\mathbf{G}{\mathrm{e}}^{{\mathrm{i}} (\Lambda X+2\Lambda^{-1})\sigma_3}$, we get that
\begin{align}
\mathbf{V}_X(\Lambda;0)\mathbf{V}(\Lambda;0)^{-1}=-{\mathrm{i}}\Lambda\sigma_3 +{\mathrm{i}}\Lambda {\mathrm{e}}^{-2{\mathrm{i}}\Lambda^{-1}\sigma_3}\mathbf{G}\sigma_3\mathbf{G}^{-1}{\mathrm{e}}^{2{\mathrm{i}}\Lambda^{-1}\sigma_3},
\end{align} and according to (1.17) we have
$\mathbf{P}_-(\Lambda;0,0,\mathbf{G})=\mathbf{G}^{-1}$. Therefore, as
$\Lambda\to\infty$,
\begin{align}
\mathbf{F}(\Lambda;0)=\frac{1}{2\pi{\mathrm{i}}\Lambda}\oint_{|\mu|=1}\left[-{\mathrm{i}}\mu\mathbf{G}^{-1}\sigma_3\mathbf{G} + {\mathrm{i}}\mu \mathbf{G}^{-1}{\mathrm{e}}^{-2{\mathrm{i}}\mu^{-1}\sigma_3}\mathbf{G}\sigma_3\mathbf{G}^{-1}{\mathrm{e}}^{2{\mathrm{i}}\mu^{-1}\sigma_3}\mathbf{G}\right]\, \mathrm{d}\mu + O(\Lambda^{-2}).
\end{align} The first term vanishes by Cauchy’s theorem, and the second term can be evaluated by residues at
$\mu=\infty$ using the expansion
${\mathrm{e}}^{\pm 2{\mathrm{i}}\mu^{-1}\sigma_3}=\mathbb{I} \pm 2{\mathrm{i}}\sigma_3\mu^{-1}-2\mathbb{I}\mu^{-2} + O(\mu^{-3})$ as
$\mu\to\infty$. The result is that
\begin{align}
\mathbf{F}(\Lambda;0)=\left[4{\mathrm{i}}\mathbf{G}^{-1}\sigma_3\mathbf{G}\sigma_3\mathbf{G}^{-1}\sigma_3\mathbf{G}-4{\mathrm{i}}\sigma_3\right]\Lambda^{-1}+O(\Lambda^{-2}),\quad\Lambda\to\infty.
\end{align}Differentiation of (1.6) then yields
\begin{align}
\Psi_X(0,0;\mathbf{G}) = 2{\mathrm{i}}\lim_{\Lambda\to\infty}\Lambda\frac{\partial P_{12}}{\partial X}(\Lambda;0,0,\mathbf{G}) = 2{\mathrm{i}}\lim_{\Lambda\to\infty}\Lambda F_{12}(\Lambda;0) = -8\left[\mathbf{G}^{-1}\sigma_3\mathbf{G}\sigma_3\mathbf{G}^{-1}\sigma_3\mathbf{G}\right]_{12}.
\end{align}Explicit evaluation using (1.5) then yields the following result.
Theorem 1.7 (Derivative at the origin)
\begin{align}
\Psi_X(0,0;\mathbf{G})=32{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\mathfrak{a}\mathfrak{b} (\mathfrak{b}^2-\mathfrak{a}^2)=32a^*b^*\frac{|b|^2-|a|^2}{(|a|^2+|b|^2)^2}.
\end{align}
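Both closed-form evaluations can be confirmed numerically from the matrix expressions; the following Julia snippet (a minimal sketch) does so for a sample choice of $(a,b)$ with $ab\neq 0$:
\begin{verbatim}
# Numerical check of Theorems 1.6 and 1.7: matrix formulas vs. closed forms.
using LinearAlgebra

sigma3 = [1 0; 0 -1]
a, b = 0.8 - 0.3im, 1.1 + 0.6im
N2 = abs2(a) + abs2(b)
G = [a conj(b); -b conj(a)] / sqrt(N2)

M = inv(G) * sigma3 * G
Psi00    = 4 * M[1, 2]                        # Theorem 1.6, matrix form
Psi00_c  = 8 * conj(a) * conj(b) / N2         # Theorem 1.6, closed form
PsiX00   = -8 * (M * sigma3 * M)[1, 2]        # Theorem 1.7, matrix form
PsiX00_c = 32 * conj(a) * conj(b) * (abs2(b) - abs2(a)) / N2^2

abs(Psi00 - Psi00_c), abs(PsiX00 - PsiX00_c)  # both approximately 0
\end{verbatim}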
1.2.3. Exceptional parameter values and $L^2$-norm
Another interesting result for the family
$\Psi(X,T;\mathbf{G})$ of exact solutions of (1.3) has to do with special values of the parameters.
Proposition 1.8 (Degeneration property)
If either
$a=0$ or
$b=0$, then
$\Psi(X,T;\mathbf{G})\equiv 0$.
We give a proof of Proposition 1.8 in Appendix A. The proof relies on the fact that Riemann–Hilbert Problem 1 can be solved explicitly in either of the cases a = 0 or b = 0. It follows from Proposition 1.1 that the function
$\Psi(X,T;\mathbf{G})$ depends continuously on the parameters (a, b) for fixed
$(X,T)\in\mathbb{R}^2$, so Proposition 1.8 also implies the pointwise limit
$\Psi(X,T;\mathbf{G})\to 0$ as a → 0 or b → 0 for each
$(X,T)\in\mathbb{R}^2$, and this convergence can be generalized to be uniform over (X, T) ranging over any given compact set in
$\mathbb{R}^2$. To avoid trivial cases, from this point on in the paper we therefore assume that
$a,b$ are complex numbers with ab ≠ 0.
Another result is that for general
$\mathbf{G}(a,b)$ with ab ≠ 0,
$\Psi(X,T;\mathbf{G})$ lies in
$L^2(\mathbb{R})$ as a function of X with an
$L^2(\mathbb{R})$-norm that is independent of the parameter matrix
$\mathbf{G}=\mathbf{G}(a,b)$. Namely, we prove the following theorem.
Theorem 1.9 (
$L^2$-norm of
$\Psi(\diamond,T;\mathbf{G})$)
Let
$\mathbf{G}=\mathbf{G}(a,b)$ be as in (1.5) with
$ab\neq 0$. We have that
$\Psi(\diamond, T; \mathbf{G})\in L^2(\mathbb{R})$ for all
$T\in\mathbb{R}$ with
$\| \Psi(\diamond, T;\mathbf{G} )\|_{L^2(\mathbb{R})}=\sqrt{8}$.
We prove Theorem 1.9 in Section 2.4 essentially as a corollary of Theorem 1.18 below. When combined with the degeneration property given in Proposition 1.8, the independence of the $L^2$-norm of
$\Psi(X,T,\mathbf{G})$ from the matrix
$\mathbf{G}=\mathbf{G}(a,b)$ asserted in Theorem 1.9 leaves us with an interesting conundrum! While
$\|\Psi(\diamond,T;\mathbf{G}(a,b))\|_{L^2(\mathbb{R})}=\sqrt{8}$ for any nonzero
$a,b\in\mathbb{C}$, we have
\begin{align}
\lim_{a\to 0}\Psi(X,T;\mathbf{G}(a,b)) = 0\quad\text{and}\quad \lim_{b\to 0}\Psi(X,T;\mathbf{G}(a,b)) = 0
\end{align} pointwise for any
$(X,T)\in\mathbb{R}^2$. A mechanism for this limiting behaviour could be that the
$L^2$-mass of the wave packet
$\Psi(X,T;\mathbf{G}(a,b))$ at any given time T escapes to
$X=\pm\infty$ in the limit a → 0 or b → 0, or alternatively, the wave packet spreads in the same limit so as to preserve the $L^2$-norm while still decaying pointwise or perhaps even uniformly to zero. In fact, it turns out that a combination of both of these mechanisms is at play. The explanation of this phenomenon lies in a double-scaling limit in which
$X\to+\infty$ while also a → 0 or b → 0 at suitably related rates. The details can be found in our next paper on the subject, [Reference Bilman and Miller14].
1.2.4. Differential equations
It is straightforward to derive three different first-order systems of differential equations satisfied by the matrix function
\begin{align}
\mathbf{W}(\Lambda;X,T,\mathbf{G}):=\mathbf{P}(\Lambda;X,T,\mathbf{G}){\mathrm{e}}^{-{\mathrm{i}} (\Lambda X+\Lambda^2T+2\Lambda^{-1})\sigma_3}
\end{align} by following the procedure in [Reference Bilman, Ling and Miller8, Section 3.2.1], which, though written with the special case (1.7) in mind, in fact does not depend at all on the details of the matrix
$\mathbf{G}(a,b)$ and hence applies to general rogue waves of infinite order. It follows that the matrix
$\mathbf{W}(\Lambda;X,T,\mathbf{G})$ satisfies the three Lax equations
\begin{align}
\frac{\partial \mathbf{W}}{\partial X} = \mathbf{X}\mathbf{W},\quad\frac{\partial \mathbf{W}}{\partial T}=\mathbf{T}\mathbf{W},\quad\frac{\partial \mathbf{W}}{\partial\Lambda}=\boldsymbol{\Lambda}\mathbf{W},
\end{align} wherein the coefficient matrices X, T, and
$\boldsymbol{\Lambda}$ are explicitly represented in terms of the coefficients
$\mathbf{P}^{[j]}=\mathbf{P}^{[j]}(X,T;\mathbf{G})$ in the convergent Laurent expansion
\begin{align}
\mathbf{P}(\Lambda;X,T,\mathbf{G})=\mathbb{I}+\sum_{j=1}^\infty\mathbf{P}^{[j]}(X,T;\mathbf{G})\Lambda^{-j},\quad |\Lambda| \gt 1
\end{align} or alternatively in terms of the Taylor coefficients
$\mathbf{P}_0:=\mathbf{P}(0;X,T,\mathbf{G})$ etc., of
$\mathbf{P}(\Lambda;X,T,\mathbf{G})$ at
$\Lambda=0$. Thus:
\begin{align}
\mathbf{X} = -{\mathrm{i}}\Lambda\sigma_3 + {\mathrm{i}} [\sigma_3,\mathbf{P}^{[1]}] = \begin{bmatrix}-{\mathrm{i}}\Lambda & \Psi\\-\Psi^* & {\mathrm{i}}\Lambda\end{bmatrix},
\end{align}
\begin{align}
\mathbf{T}=-{\mathrm{i}}\Lambda^2\sigma_3 + {\mathrm{i}} [\sigma_3,\mathbf{P}^{[1]}]\Lambda + {\mathrm{i}}[\mathbf{P}^{[1]},\sigma_3\mathbf{P}^{[1]}] + {\mathrm{i}}[\sigma_3,\mathbf{P}^{[2]}] = \begin{bmatrix}
-{\mathrm{i}}\Lambda^2 +\frac{1}{2}{\mathrm{i}} |\Psi|^2 & \Lambda\Psi + \frac{1}{2}{\mathrm{i}}\Psi_X\\
-\Lambda\Psi^* +\frac{1}{2}{\mathrm{i}}\Psi^*_X & {\mathrm{i}}\Lambda^2 -\frac{1}{2}{\mathrm{i}} |\Psi|^2\end{bmatrix},
\end{align}and
\begin{align}
\boldsymbol{\Lambda}=\begin{bmatrix}-2{\mathrm{i}} T\Lambda -{\mathrm{i}} X +{\mathrm{i}} T|\Psi|^2\Lambda^{-1} & 2T\Psi +(X\Psi+{\mathrm{i}} T\Psi_X)\Lambda^{-1}\\-2T\Psi^*+(-X\Psi^*+{\mathrm{i}} T\Psi_X^*)\Lambda^{-1} & 2{\mathrm{i}} T\Lambda +{\mathrm{i}} X-{\mathrm{i}} T|\Psi|^2\Lambda^{-1}\end{bmatrix} + 2{\mathrm{i}} \mathbf{P}_0\sigma_3\mathbf{P}_0^{-1}\Lambda^{-2}.
\end{align} The global existence of
$\mathbf{P}(\Lambda;X,T,\mathbf{G})$ from Riemann–Hilbert Problem 1 recorded in Proposition 1.1 then guarantees that the three Lax equations (1.28) are mutually compatible. In particular, the compatibility condition
$\mathbf{X}_T-\mathbf{T}_X + [\mathbf{X},\mathbf{T}]=\mathbf{0}$ implies the following basic result which has already been mentioned.
Theorem 1.10 (
$\Psi(X,T)$ solves NLS)
The function
$\mathbb{R}^2\ni (X,T)\mapsto \Psi(X,T;\mathbf{G})\in\mathbb{C}$ obtained from Riemann–Hilbert Problem 1 via (1.6) is a global solution of the focusing NLS equation in the form (1.3).
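To make the step explicit, the (1,2) entry of the compatibility condition $\mathbf{X}_T-\mathbf{T}_X+[\mathbf{X},\mathbf{T}]=\mathbf{0}$ computed from (1.30) and (1.31) reads
\begin{align*}
\Psi_T-\left(\Lambda\Psi_X+\tfrac{1}{2}{\mathrm{i}}\Psi_{XX}\right)+\left(\Lambda\Psi_X-{\mathrm{i}}|\Psi|^2\Psi\right)=\Psi_T-\tfrac{1}{2}{\mathrm{i}}\Psi_{XX}-{\mathrm{i}}|\Psi|^2\Psi=0,
\end{align*}
which, after multiplication by ${\mathrm{i}}$, is exactly (1.3) in the variables $(X,T)$; the (2,1) entry gives the complex-conjugate relation and the diagonal entries cancel identically.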
On the other hand, the compatibility condition
$\mathbf{X}_\Lambda-\boldsymbol{\Lambda}_X + [\mathbf{X},\boldsymbol{\Lambda}]=\mathbf{0}$ is a system of ordinary differential equations with respect to X (in which
$T\in\mathbb{R}$ plays the role of a parameter) that was shown in [Reference Bilman, Ling and Miller8, Section 3.2.1] to be related to the second equation in the Painlevé-III hierarchy of Sakka [Reference Sakka32] when T ≠ 0 and to the Painlevé-III (D6) equation itself when T = 0. More explicitly, we have the following.
Theorem 1.11 (
$\Psi(X,0)$ and the Painlevé-III (D6) equation [Reference Bilman, Ling and Miller8, Corollary 4])
The function
$u(x)$ defined for
$x\in\mathbb{R}\cup({\mathrm{i}}\mathbb{R})$ by
\begin{align}
u(x):=2\left(\frac{ \mathrm{d}}{ \mathrm{d} x}\log\left(x^2\Psi(-\tfrac{1}{8}x^2,0;\mathbf{G})\right)\right)^{-1}
\end{align}is a solution of the Painlevé-III (D6) equation
\begin{align}
\frac{ \mathrm{d}^2u}{ \mathrm{d} x^2}=\frac{1}{u}\left(\frac{ \mathrm{d} u}{ \mathrm{d} x}\right)^2-\frac{1}{x}\frac{ \mathrm{d} u}{ \mathrm{d} x} + \frac{4\Theta_0 u^2 + 4(1-\Theta_\infty)}{x}+4u^3-\frac{4}{u}
\end{align} in the case that both formal monodromy parameters vanish:
$\Theta_0=\Theta_\infty=0$.
Corollary 1.12 (Behaviour of
$u(x)$ near
$x=0$)
The function
$u(x)$ defined by (1.33) is an odd function of
$x$ that is analytic at the origin with Taylor expansion
\begin{align}
u(x)=x + \frac{u'''(0)}{3!}x^3 + O(x^5),\quad x\to 0,\quad u'''(0)=3(\mathfrak{b}^2-\mathfrak{a}^2)=3\frac{|b|^2-|a|^2}{|a|^2+|b|^2}.
\end{align}Proof. Combining Proposition 1.1 with (1.33) shows that u(x) is an odd function having a Taylor expansion about x = 0 with
$u'''(0)=\frac{3}{4}\Psi_X(0,0;\mathbf{G})/\Psi(0,0;\mathbf{G})$. Theorems 1.6 and 1.7 then yield the claimed value of
$u'''(0)$.
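The value of $u'''(0)$ used in the proof comes from the elementary expansion obtained by writing $\Psi(X,0;\mathbf{G})=\Psi(0,0;\mathbf{G})+\Psi_X(0,0;\mathbf{G})X+O(X^2)$ and substituting $X=-\tfrac{1}{8}x^2$ into (1.33):
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d} x}\log\left(x^2\Psi(-\tfrac{1}{8}x^2,0;\mathbf{G})\right)=\frac{2}{x}-\frac{\Psi_X(0,0;\mathbf{G})}{4\Psi(0,0;\mathbf{G})}x+O(x^3)
\quad\Longrightarrow\quad
u(x)=x+\frac{\Psi_X(0,0;\mathbf{G})}{8\Psi(0,0;\mathbf{G})}x^3+O(x^5).
\end{align*}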
Since the value of
$\Psi(0,0;\mathbf{G})$ is known from Theorem 1.6, it is straightforward to invert (1.33) to explicitly express
$\Psi(X,0;\mathbf{G})$ in terms of u:
\begin{align}
\Psi(X,0;\mathbf{G})=\Psi(0,0;\mathbf{G})\exp\left(2\int_0^x\left[\frac{1}{u(y)}-\frac{1}{y}\right]\, \mathrm{d} y\right),\quad X=-\frac{1}{8}x^2.
\end{align} We note that
$-3 \lt u'''(0) \lt 3$. In fact, when
$\Theta_0=\Theta_\infty=0$ there is for each
$\omega\in (-3,3)$ a unique solution of (1.34) analytic at the origin with
$u(0)=0$,
$u'(0)=1$,
$u''(0)=0$, and
$u'''(0)=\omega$. This family of solutions of the Painlevé-III (D6) equation has not only been associated with limits of sequences of solutions of the focusing NLS equation [Reference Bilman and Buckingham7, Reference Bilman, Ling and Miller8], but has also appeared in the description of self-similar boundary layers in the sharp-line Maxwell-Bloch equations [Reference Li and Miller24].
Remark 1.13. The parametrization of solutions of (1.34) is discussed for example in [Reference Barhoumi, Lisovyy, Miller and Prokhorov3, Section 4.6] (see also [Reference van der Put and Saito44]). Solutions are parametrized by points on a certain monodromy manifold characterized by a cubic equation in three variables. In particular, the monodromy manifold for the Painlevé-III (D6) equation (1.34) with parameters
$\Theta_0=\Theta_\infty=0$ consists of those
$(x_1,x_2,x_3)\in \mathbb{C}^3$ for which
\begin{align}
x_1x_2x_3+x_1^2+x_2^2+2x_1+2x_2+1=0.
\end{align} This is a smooth complex surface except at two points obtained by also enforcing that the gradient of the left-hand side vanishes:
$(x_1,x_2,x_3)=(0,-1,2)$ and
$(x_1,x_2,x_3)=(-1,0,2)$. Because the Stokes phenomenon is trivial both at
$\Lambda=0$ and
$\Lambda=\infty$ (nontrivial Stokes phenomenon would be evident in additional jump conditions for Riemann–Hilbert Problem 1 on two contours approaching the origin and two contours approaching
$\Lambda=\infty$), the coordinates of the Painlevé-III (D6) solution arising for T = 0 from Riemann–Hilbert Problem 1 are determined from the matrix G alone (which plays the role of a connection matrix in the isomonodromy theory) by
$(x_1,x_2,x_3)=(G_{11}G_{22}-1,-G_{11}G_{22},2)=(\mathfrak{a}^2-1,-\mathfrak{a}^2,2)$. Since
$0 \lt \mathfrak{a}^2 \lt 1$, this is a segment of a line in
$\mathbb{C}^3$ parametrized by
$G_{11}G_{22}=\mathfrak{a}^2$ that is fully contained within the monodromy manifold (1.37). The line passes through both singular points at parameter values
$G_{11}G_{22}=\mathfrak{a}^2=0$ and
$G_{11}G_{22}=\mathfrak{a}^2=1$, which are the endpoints of the relevant segment. Thus, the endpoints of the segment correspond to normalized parameters
$(\mathfrak{a},\mathfrak{b})=(0,1)$ or
$(\mathfrak{a},\mathfrak{b})=(1,0)$ respectively. The singular points on the monodromy manifold then both correspond to the trivial solution
$\Psi(X,T;\mathbf{G})\equiv 0$ according to Proposition 1.8. However, each interior point of the segment yields a distinct nontrivial solution
$u(x)=u(x;\mathfrak{a})$ of the Painlevé-III (D6) equation (1.34) with
$\Theta_0=\Theta_\infty=0$ related to
$\Psi(X,0;\mathbf{G})$ via (1.33) and its inverse (1.36). Note that the presence of the logarithmic derivative in (1.33) cancels the constant phase factor
${\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}$ in
$\Psi(X,0;\mathbf{G}(a,b))$ (see (1.12)) so that while Ψ depends on the parameters (a, b), u indeed depends only on the normalized parameters
$(\mathfrak{a},\mathfrak{b})$.
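It is elementary to confirm that this segment does lie on the surface (1.37): substituting $(x_1,x_2,x_3)=(\mathfrak{a}^2-1,-\mathfrak{a}^2,2)$ into the left-hand side of (1.37) gives
\begin{align*}
2(\mathfrak{a}^2-1)(-\mathfrak{a}^2)+(\mathfrak{a}^2-1)^2+\mathfrak{a}^4+2(\mathfrak{a}^2-1)-2\mathfrak{a}^2+1=(-2\mathfrak{a}^4+2\mathfrak{a}^2)+(\mathfrak{a}^4-2\mathfrak{a}^2+1)+\mathfrak{a}^4-2+1=0.
\end{align*}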
The compatibility condition
$\mathbf{T}_\Lambda-\boldsymbol{\Lambda}_T + [\mathbf{T},\boldsymbol{\Lambda}]=\mathbf{0}$ is a system of ordinary differential equations with respect to T in which
$X\in\mathbb{R}$ is a parameter. This system was written out in general in [Reference Bilman, Ling and Miller8, Eqn. (119)], and here we expand on a remark made in that paper concerning a symmetric special case (see also [Reference Suleimanov33]). Suppose that b = a. Then according to Proposition 1.3,
$X\mapsto\Psi(X,T;\mathbf{G}(a,a))$ is an even function for all
$T\in\mathbb{R}$, which in light of real analyticity at X = 0 implies also that
\begin{align}
\Psi_X(0,T;\mathbf{G}(a,a))=0,\quad T\in\mathbb{R},
\end{align}
as is consistent with the conclusion of Theorem 1.7 when also T = 0. Thus X = 0 is an axis of symmetry of
$\Psi(X,T;\mathbf{G}(a,a))$, and one may expect some simplification of the compatibility condition
$\mathbf{T}_\Lambda-\boldsymbol{\Lambda}_T+[\mathbf{T},\boldsymbol{\Lambda}]=\mathbf{0}$ yielding ordinary differential equations satisfied by
$T\mapsto \Psi(X,T;\mathbf{G})$. To see this, we remark that Proposition A.1 in Appendix A implies that
$\mathbf{P}(0;0,T,\mathbf{G}(a,a))=-\sigma_3\mathbf{P}(0;0,T,\mathbf{G}(a,a))\sigma_1$, and therefore for some function
$s(T)\neq 0$ the unit-determinant matrix
$\mathbf{P}_0=\mathbf{P}(0;0,T,\mathbf{G}(a,a))$ has the form
\begin{align}
\mathbf{P}_0=\begin{bmatrix} s(T) & -s(T)\\ (2s(T))^{-1} & (2s(T))^{-1}\end{bmatrix}\implies 2{\mathrm{i}}\mathbf{P}_0\sigma_3\mathbf{P}_0^{-1}=2{\mathrm{i}}\begin{bmatrix}0 & n(T)\\n(T)^{-1} & 0\end{bmatrix},\quad n(T):=2s(T)^2.
\end{align} Using this and (1.38), and setting X = 0, the matrix coefficients T and
$\boldsymbol{\Lambda}$ defined in (1.31) and (1.32) respectively take the simplified form
\begin{align}
\mathbf{T}=-{\mathrm{i}}\Lambda^2\sigma_3 +\begin{bmatrix}0 & \Psi\\-\Psi^* & 0\end{bmatrix}\Lambda +\frac{1}{2}{\mathrm{i}} |\Psi|^2\sigma_3
\end{align}and
\begin{align}
\boldsymbol{\Lambda}=-2{\mathrm{i}} T\Lambda\sigma_3 +\begin{bmatrix}0 & 2T\Psi\\-2T\Psi^* & 0\end{bmatrix}+{\mathrm{i}} T|\Psi|^2\Lambda^{-1}\sigma_3 +\frac{2{\mathrm{i}}}{\Lambda^2}\begin{bmatrix} 0 & n(T)\\n(T)^{-1} & 0\end{bmatrix},
\end{align} where
$\Psi=\Psi(T):=\Psi(0,T;\mathbf{G}(a,a))$. In [Reference Kitaev and Vartanian22] a Lax pair is presented for the partially-degenerate Painlevé-III equation (D7 type, with parameters
$\mathscr{A},\mathscr{B}\in\mathbb{C}$ and
$\varepsilon=\pm 1$)
\begin{align}
\frac{ \mathrm{d}^2\mathfrak{u}}{ \mathrm{d} t^2}=\frac{1}{\mathfrak{u}}\left(\frac{ \mathrm{d}\mathfrak{u}}{ \mathrm{d} t}\right)^2 -\frac{1}{t}\frac{ \mathrm{d}\mathfrak{u}}{ \mathrm{d} t} + \frac{-8\varepsilon\mathfrak{u}^2+2\mathscr{A}\mathscr{B}}{t}+\frac{\mathscr{B}^2}{\mathfrak{u}},\quad\mathfrak{u}=\mathfrak{u}(t),
\end{align} that, after a Fabry transformation, resembles the compatible linear system
$\mathbf{W}_T=\mathbf{T W}$ and
$\mathbf{W}_\Lambda=\boldsymbol{\Lambda} \mathbf{W}$ with the coefficient matrices given in (1.40) and (1.41). Hence we may expect that
$\Psi(T)$ is related to a solution of the Painlevé-III (D7) equation. To complete the correspondence, we change variables and make a gauge transformation by
\begin{align}
\lambda:= T^\frac{1}{4}\Lambda,\quad t:=T^{\frac{1}{2}},\quad \mathbf{W}=t^{\frac{1}{2}\sigma_3}\mathbf{V},
\end{align}and then the resulting system takes the form
\begin{align}
\frac{\partial\mathbf{V}}{\partial\lambda}=\left(-2{\mathrm{i}} t\lambda\sigma_3 + 2t\begin{bmatrix}
0 & t^{-\frac{1}{2}}\Psi\\-t^\frac{3}{2}\Psi^* & 0\end{bmatrix}-\frac{1}{\lambda}\left(-{\mathrm{i}} t^2|\Psi|^2\right)\sigma_3
+\frac{1}{\lambda^2}\begin{bmatrix}0 & 2{\mathrm{i}} t^{-\frac{1}{2}}n\\
2{\mathrm{i}} t^\frac{3}{2}n^{-1} & 0\end{bmatrix}\right)\mathbf{V}
\end{align}and
\begin{align}
\frac{\partial\mathbf{V}}{\partial t}=\left(-{\mathrm{i}}\lambda^2\sigma_3 + \lambda\begin{bmatrix}0 & t^{-\frac{1}{2}}\Psi\\
-t^\frac{3}{2}\Psi^* & 0\end{bmatrix} + \left(\frac{1}{2}{\mathrm{i}} t|\Psi|^2-\frac{1}{2t}\right)\sigma_3
-\frac{1}{2t\lambda}\begin{bmatrix}0 & 2{\mathrm{i}} t^{-\frac{1}{2}}n\\2{\mathrm{i}} t^\frac{3}{2}n^{-1} & 0\end{bmatrix}\right)\mathbf{V},
\end{align} in which we now view Ψ,
$\Psi^*$, and n as functions of t. This system has exactly the form of [Reference Kitaev and Vartanian22, Eqn. (12)] provided we make the correspondences
\begin{align}
D=t^{\frac{3}{2}}\Psi^*,\quad \frac{2{\mathrm{i}} A}{\sqrt{-AB}}=t^{-\frac{1}{2}}\Psi,
\end{align}
\begin{align}
{\mathrm{i}} \mathscr{A}+\frac{1}{2}+\frac{2t AD}{\sqrt{-AB}}=-{\mathrm{i}} t^2|\Psi|^2,\quad \frac{{\mathrm{i}} \mathscr{A}}{2t}-\frac{AD}{\sqrt{-AB}}=\frac{1}{2}{\mathrm{i}} t|\Psi|^2-\frac{1}{2t},
\end{align}
\begin{align}
\widetilde{\alpha}=2{\mathrm{i}} t^{-\frac{1}{2}}n,\quad {\mathrm{i}} t B=2{\mathrm{i}} t^\frac{3}{2}n^{-1},
\end{align} where the notation of [Reference Kitaev and Vartanian22] is on the left-hand side and
$A,B,C,D,\widetilde{\alpha}$ are functions of t while
$\mathscr{A}$ is a constant. Using the product of the identities in (1.46) to eliminate
$AD/\sqrt{-AB}$ shows that the two equations in (1.47) actually coincide, and yield
$\mathscr{A}=\frac{1}{2}{\mathrm{i}}$. Then, according to [Reference Kitaev and Vartanian22, Lemma 2.1], the second parameter
$\mathscr{B}$ is determined from (1.48) up to a sign by
$\mathscr{B}=4\varepsilon$. The corresponding solution of (1.42) is then given by
$\mathfrak{u}(t)=\varepsilon t\sqrt{-A(t)B(t)}$. This proves the following, which is also easy to verify directly from the compatibility condition for the system (1.44)–(1.45).
Theorem 1.14 (
$\Psi(0,T)$ for
$b=a$ and the Painlevé-III (D7) equation)
Fix
$a\in\mathbb{C}$ nonzero, and let
$\varepsilon=\pm 1$. The function
\begin{align}
\mathfrak{u}(t):=\frac{1}{4}{\mathrm{i}}\varepsilon t\Psi\cdot\left(\Psi^*+t\frac{ \mathrm{d}\Psi^*}{ \mathrm{d} t}\right),\quad t\in\mathbb{R},\quad\Psi=\Psi(0,t^2;\mathbf{G}(a,a))
\end{align} satisfies the Painlevé-III (D7) equation in the form (1.42) with parameters
$\mathscr{A}=\frac{1}{2}{\mathrm{i}}$ and
$\mathscr{B}=4\varepsilon$.
This result was known to Suleimanov [Reference Suleimanov33], although to extract it from his paper one must take the independent variable to be
$t=T^\frac{1}{2}$ instead of T and correct some constants. Another simple identity satisfied by Ψ that follows from the compatibility condition for (1.44)–(1.45) is
\begin{align}
\left|\Psi+t\frac{ \mathrm{d}\Psi}{ \mathrm{d} t}\right|^2=16,\quad t\in\mathbb{R},\quad\Psi=\Psi(0,t^2;\mathbf{G}(a,a)).
\end{align}
Corollary 1.15 (Behaviour of
$\mathfrak{u}(t)$ near
$t=0)$
The function
$\mathfrak{u}(t)$ defined by (1.49) is an odd function of
$t$ that is analytic at the origin with Taylor expansion
\begin{align}
\mathfrak{u}(t)=4{\mathrm{i}}\varepsilon t+O(t^3),\quad t\to 0.
\end{align}
Proof. Both Ψ and
$\Psi^*$ are even functions of t with real analytic real and imaginary parts, so
$\mathfrak{u}(t)$ is odd and analytic at the origin. The value of
$\mathfrak{u}'(0)$ follows from Theorem 1.6 using
$b=a$, or alternatively, from (1.50).
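Explicitly, Theorem 1.6 with $b=a$ gives $\Psi(0,0;\mathbf{G}(a,a))=4{\mathrm{e}}^{-2{\mathrm{i}}\arg(a)}$, so that from (1.49)
\begin{align*}
\mathfrak{u}'(0)=\tfrac{1}{4}{\mathrm{i}}\varepsilon\,\Psi(0,0;\mathbf{G}(a,a))\Psi(0,0;\mathbf{G}(a,a))^*=\tfrac{1}{4}{\mathrm{i}}\varepsilon\,|\Psi(0,0;\mathbf{G}(a,a))|^2=4{\mathrm{i}}\varepsilon,
\end{align*}
which is also consistent with (1.50) evaluated in the limit $t\to 0$.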
According to [Reference Kitaev and Vartanian23], the conditions on
$\mathfrak{u}(t)$ asserted in Corollary 1.15 actually uniquely determine the solution of (1.42) because the formal monodromy parameter has the special value
$\mathscr{A}=\frac{1}{2}{\mathrm{i}}$. This value of
$\mathscr{A}$ is special (so is
$\mathscr{A}=\pm\frac{1}{2}{\mathrm{i}} + k$ for any
$k\in\mathbb{Z}$) in that there is a one-parameter family of solutions that are analytic and vanishing at t = 0, but the solution becomes unique once oddness is asserted. On the other hand, for general
$\mathscr{A}$ there is only one solution analytic and vanishing at t = 0.
1.2.5. Fredholm determinant representation of
$\Psi(X,0;\mathbf{G})$
In [Reference Bilman, Ling and Miller8, Section 3.2.1] various transformations of (1.34) to other forms of the Painlevé-III equation were noted, including a transformation to the parameter-free and fully-degenerate Painlevé-III (D8) equation, which takes the form
\begin{align}
\frac{ \mathrm{d}^2U}{ \mathrm{d} z^2} = \frac{1}{U}\left(\frac{ \mathrm{d} U}{ \mathrm{d} z}\right)^2-\frac{1}{z}\frac{ \mathrm{d} U}{ \mathrm{d} z}+\frac{4U^2+4}{z}.
\end{align} Here we report a new piece of information, which is that the relevant solution of (1.52) is expressible in terms of a Fredholm determinant of an integrable operator [Reference Deift19, Reference Its, Izergin, Korepin and Slavnov20]. This in turn leads to an explicit representation of
$\Psi(X,0;\mathbf{G})$ in terms of the same determinant.
Provided that a > 0 and
$b\in{\mathrm{i}}\mathbb{R}$, the matrix
$\hat{\mathbf{S}}(\xi;z)$ defined in terms of
$\mathbf{P}(\Lambda;X,0,\mathbf{G})$ by
\begin{align}
\hat{\mathbf{S}}(\xi;z):=\mathbf{P}(\Lambda;X,0,\mathbf{G}){\mathrm{e}}^{(2z)^{1/2}({\mathrm{i}}\xi +\xi^{-1})\sigma_3},\quad X=-{\mathrm{i}} z,\quad \Lambda=-2{\mathrm{i}} (2z)^{-\frac{1}{2}}\xi,
\end{align} satisfies a related Riemann–Hilbert problem associated with the Painlevé-III (D 8) equation with specialized monodromy parameters
$y_1=b/\sqrt{|a|^2+|b|^2}$,
$y_2={\mathrm{i}} a/\sqrt{|a|^2+|b|^2}$, and
$y_3=0$ (see [Reference Barhoumi, Lisovyy, Miller and Prokhorov3, Riemann–Hilbert Problem 9.2] and the discussion in Section 9.3 of that paper). Comparing (1.6) with [Reference Barhoumi, Lisovyy, Miller and Prokhorov3, Eqns. (9.11) and (9.14)] shows that
\begin{align}
\Psi(X,0;\mathbf{G})=\frac{{\mathrm{i}}}{2}\frac{U'(z)}{U(z)},\quad z={\mathrm{i}} X
\end{align} where U(z) is a solution of the Painlevé-III (D 8) equation (1.52). The parameters y 1 and y 2 are purely imaginary numbers constrained by
$y_1^2+y_2^2+1=0$ (a reduction for
$y_3=0$ of the monodromy cubic
$y_1y_2y_3+y_1^2+y_2^2+1=0$ parametrizing solutions of (1.52)), and they may be further parametrized by a single quantity
$m\in{\mathrm{i}}\mathbb{R}+\mathbb{Z}$ according to
\begin{align}
y_1=\frac{{\mathrm{i}}{\mathrm{e}}^{{\mathrm{i}}\pi m}}{\sqrt{1+{\mathrm{e}}^{2\pi{\mathrm{i}} m}}},\quad y_2=\frac{{\mathrm{i}}}{\sqrt{1+{\mathrm{e}}^{2\pi{\mathrm{i}} m}}}.
\end{align} The generalization of this family of solutions U(z) to allow for arbitrary
$m\in\mathbb{C}\setminus (\mathbb{Z}+\frac{1}{2})$ was proven in [Reference Barhoumi, Lisovyy, Miller and Prokhorov3] to correspond to a certain double-scaling limit of high-even-order rational solutions of the Painlevé-III (D 6) equation when examined near the origin in the complex plane (the only fixed singularity of the equation). The Painlevé-III (D 6) equation has two essential parameters, one of which is quantized for rational solutions (
$n=\frac{1}{2}(\Theta_0-\Theta_\infty+1)\in\mathbb{Z}$) and the other of which is arbitrary and corresponds to the value of
$m=\frac{1}{2}(\Theta_0+\Theta_\infty-1)$.
Going further, the particular solution U(z) with parameter
$m\in\mathbb{C}\setminus(\mathbb{Z}+\frac{1}{2})$ is shown in [Reference Barhoumi, Lisovyy, Miller and Prokhorov3, Corollary 3.4] to be explicitly related to the Fredholm determinant of the scalar Bessel kernel:
\begin{align}
U(z)-\frac{1}{U(z)}=R(z):=-2{\mathrm{i}}-\frac{1}{2}\frac{ \mathrm{d}}{ \mathrm{d} z}z\frac{ \mathrm{d}}{ \mathrm{d} z}\log \left(D_{\varkappa(m)}(32{\mathrm{i}} z)\right),
\end{align} wherein
$\varkappa(m):=(1+{\mathrm{e}}^{2\pi{\mathrm{i}} m})^{-1}=-y_2^2=a^2/(|a|^2+|b|^2)$ and
\begin{align}
D_\varkappa(r):=\det\left(1-\varkappa\mathcal{K}_r\right)
\end{align}
is the Fredholm determinant of the integral operator
$\mathcal{K}_r:L^2[0,r]\to L^2[0,r]$ with Bessel kernel
\begin{align}
K(x,y):=\frac{\sqrt{x}J_1(\sqrt{x})J_0(\sqrt{y})-J_0(\sqrt{x})\sqrt{y}J_1(\sqrt{y})}{2(x-y)}.
\end{align} It can be shown that
$D_\varkappa(r)$ is entire in
$\varkappa$ and analytic for
$r\in\mathbb{C}$ of sufficiently small modulus. To get a representation of
$\Psi(X,0;\mathbf{G})$, we first solve (1.56) for U(z):
\begin{align}
U(z)=\frac{1}{2}\left(R(z)\pm\sqrt{R(z)^2+4}\right),
\end{align}and hence from (1.54)
\begin{align}
\Psi(X,0;\mathbf{G})=\frac{{\mathrm{i}}}{2}\frac{U'(z)}{U(z)}=\pm\frac{{\mathrm{i}} R'(z)}{2\sqrt{R(z)^2+4}}.
\end{align}We now have to resolve the sign of the square root; first, according to [Reference Barhoumi, Lisovyy, Miller and Prokhorov3, Eqn. (3.22)], one has
\begin{align}
\log \left(D_\varkappa(r)\right)=-\frac{\varkappa}{4}r +\frac{\varkappa-\varkappa^2}{32}r^2+O(r^3),\quad r\to 0,
\end{align} which implies that
$R(z)=-2{\mathrm{i}}+4{\mathrm{i}}\varkappa + 64(\varkappa-\varkappa^2)z + O(z^2)$ and
$R'(z)=64(\varkappa-\varkappa^2)+O(z)$ as z → 0. Hence
$R'(0)=64(\varkappa-\varkappa^2)$ and
$R(0)^2+4=16(\varkappa-\varkappa^2)$. Since
$\varkappa=a^2/(|a|^2+|b|^2)\in (0,1)$, we obtain (using z = 0 implies X = 0)
\begin{align}
\Psi(0,0;\mathbf{G})=\pm 8{\mathrm{i}}\sqrt{\varkappa-\varkappa^2}=\pm\frac{8{\mathrm{i}} a|b|}{a^2+|b|^2}.
\end{align}Comparing this with Theorem 1.6 and recalling that a > 0 while b is purely imaginary shows that we should choose the + (resp., −) sign in (1.60) when b is negative (resp., positive) imaginary, taking the square root as positive when X = 0.
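The expansion (1.61) used in this sign determination is itself straightforward to test numerically. The following Julia snippet (a minimal sketch, not part of RogueWaveInfiniteNLS.jl, and assuming the registered packages SpecialFunctions.jl and FastGaussQuadrature.jl are available) approximates $D_\varkappa(r)$ by a Gauss–Legendre Nyström discretization of the operator $\mathcal{K}_r$ and compares $\log D_\varkappa(r)$ with the two leading terms of (1.61) for small $r$:
\begin{verbatim}
# Nystrom approximation of the Bessel-kernel Fredholm determinant (1.57),
# compared with the two-term small-r expansion (1.61) of log D_kappa(r).
using LinearAlgebra, SpecialFunctions, FastGaussQuadrature

# Bessel kernel (1.58); the diagonal value is the removable-singularity limit.
function K(x, y)
    x == y && return (besselj0(sqrt(x))^2 + besselj1(sqrt(x))^2) / 4
    sx, sy = sqrt(x), sqrt(y)
    return (sx*besselj1(sx)*besselj0(sy) - besselj0(sx)*sy*besselj1(sy)) / (2*(x - y))
end

function logdetD(kappa, r; n = 60)
    t, w = gausslegendre(n)                      # nodes/weights on [-1, 1]
    x = (t .+ 1) .* (r / 2); w = w .* (r / 2)    # map to [0, r]
    W = Diagonal(sqrt.(w))
    A = [K(x[i], x[j]) for i in 1:n, j in 1:n]
    return log(det(I - kappa * (W * A * W)))     # symmetrized Nystrom determinant
end

kappa, r = 0.3, 1e-2
logdetD(kappa, r), -(kappa/4)*r + ((kappa - kappa^2)/32)*r^2   # agree to O(r^3)
\end{verbatim}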
The representation of
$\Psi(X,0;\mathbf{G})$ in terms of a Fredholm determinant is easily generalized to arbitrary
$(a,b)\in\mathbb{C}^2$ with ab ≠ 0 using Proposition 1.5, since
\begin{align}\begin{aligned}
\Psi(X,T;\mathbf{G}(a,b))={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,T;\mathbf{G}(\mathfrak{a},\mathfrak{b})) \;\text{and}\; \Psi(X,T;\mathbf{G}(\mathfrak{a},{\mathrm{i}}\mathfrak{b}))=-{\mathrm{i}}\Psi(X,T;\mathbf{G}(\mathfrak{a},\mathfrak{b}))\\
\implies \Psi(X,T;\mathbf{G}(a,b))={\mathrm{i}}{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,T;\mathbf{G}(\mathfrak{a},{\mathrm{i}}\mathfrak{b})),
\end{aligned}
\end{align} and the latter has parameters
$a=\mathfrak{a}$ positive and
$b={\mathrm{i}}\mathfrak{b}$ positive imaginary.
This proves the following.
Theorem 1.16 (Painlevé-III (D8) and Bessel kernel determinant formula for
$\Psi(X,0)$)
Let
$(a,b)\in\mathbb{C}^2$ with
$ab\neq 0$ correspond to normalized parameters
$(\mathfrak{a},\mathfrak{b})$ by (1.11). Then the function
$\Psi(X,0;\mathbf{G}(\mathfrak{a},{\mathrm{i}}\mathfrak{b}))=-{\mathrm{i}}{\mathrm{e}}^{{\mathrm{i}}\arg(ab)}\Psi(X,0;\mathbf{G}(a,b))$ is expressible by (1.54) in terms of a solution
$U(z)$ of the Painlevé-III (D8) equation (1.52) having unit modulus for
$z\in{\mathrm{i}}\mathbb{R}$, and moreover for
$X$ in a neighborhood of the origin,
\begin{align}
\Psi(X,0;\mathbf{G}(a,b))=\left.\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}R'(z)}{2\sqrt{R(z)^2+4}}\right|_{z={\mathrm{i}} X},
\end{align}where
\begin{align}
R(z):=-2{\mathrm{i}} -\frac{1}{2}\frac{ \mathrm{d}}{ \mathrm{d} z}z\frac{ \mathrm{d}}{ \mathrm{d} z}\log \left(D_{\mathfrak{a}^2}(32{\mathrm{i}} z)\right)
\end{align} and
$D_\varkappa(r)$ denotes the Fredholm determinant (1.57) of the Bessel kernel (1.58). Here the square root is taken to be positive when
$X=0$ and the formula (1.64) admits real analytic continuation to
$X\in\mathbb{R}$.
Remark 1.17. There is a general method [Reference Bertola5] based on factorization of jump matrices into a product of upper- and lower-triangular nilpotent perturbations of the identity matrix, allowing one to associate a Fredholm determinant to virtually any Riemann–Hilbert problem with jump contour being a closed curve in the plane. The method is applicable in particular to Riemann–Hilbert Problem 1, in which case the resulting Fredholm determinant depends on the variables (X, T) generally allowed to vary in
$\mathbb{C}^2$, and its vanishing detects precisely the values of (X, T) for which the solution fails to exist (we already know that this is impossible on the real subspace
$(X,T)\in\mathbb{R}^2$). There are multiple admissible factorizations of the jump matrix, each of which gives rise to a different Fredholm determinant (τ-function) with the same zero divisor. It might be expected for
$\Psi(X,T;\mathbf{G})$ or its square modulus
$|\Psi(X,T;\mathbf{G})|^2$ to be expressible explicitly in terms of invariant expressions formed by suitable derivatives of any of these Fredholm determinants, which would extend Theorem 1.16 to arbitrary
$T\in\mathbb{R}$. We hope to address this question in future work.
1.3. Asymptotic properties of Ψ
Now we describe the asymptotic properties of
$\Psi(X,T;\mathbf{G})$, i.e., how a general rogue wave of infinite order behaves for large values of the independent variables (X, T), and how this behaviour depends on the parameters in G.
1.3.1. Behaviour of
$\Psi(X,T;\mathbf{G})$ for
$X$ large
Our first result concerns the behaviour of
$\Psi(X,T;\mathbf{G})$ as
$X\to\pm\infty$. Let
$v_\mathrm{c}:=54^{-\frac{1}{2}}$, and define two functions of v on
$(-v_\mathrm{c},v_\mathrm{c})$ by
\begin{align}
z_1(v):=\begin{cases}\displaystyle
\frac{1}{6v}\left(-1+2\cos\left(\frac{1}{3}\arccos\left(
2\frac{v^2}{v_\mathrm{c}^2}
-1\right)\right)\right),&\quad -v_\mathrm{c} \lt v \lt 0,\\\\
\displaystyle \frac{1}{6v}\left(-1+2\cos\left(\frac{1}{3}\arccos\left(
2\frac{v^2}{v_\mathrm{c}^2}
-1\right)-\frac{2}{3}\pi\right)\right),&\quad 0 \lt v \lt v_\mathrm{c},
\end{cases}
\end{align}and
\begin{align}
z_2(v):=\begin{cases}\displaystyle
\frac{1}{6v}\left(-1+2\cos\left(\frac{1}{3}\arccos\left(
2\frac{v^2}{v_\mathrm{c}^2}-1\right)-\frac{2}{3}\pi\right)\right),&\quad -v_\mathrm{c} \lt v \lt 0,\\\\
\displaystyle \frac{1}{6v}\left(-1+2\cos\left(\frac{1}{3}\arccos\left(
2\frac{v^2}{v_\mathrm{c}^2}-1\right)\right)\right),&\quad 0 \lt v \lt v_\mathrm{c},
\end{cases}
\end{align} which are equivalent for
$|v| \lt v_\mathrm{c}/\sqrt{2}$ to
\begin{align}
\begin{split}
z_1(v)&=\frac{1}{6v}\left(-1+2\cos\left(-\frac{1}{3}\arcsin\left(v\sqrt{216\left(1-
\frac{v^2}{v_\mathrm{c}^2}\right)}\right)-\frac{1}{3}\pi\right)\right)\\
z_2(v)&=\frac{1}{6v}\left(-1+2\cos\left(-\frac{1}{3}\arcsin\left(v\sqrt{216\left(1-
\frac{v^2}{v_\mathrm{c}^2}\right)}\right)+\frac{1}{3}\pi\right)\right).
\end{split}
\end{align} In each case, the singularity at v = 0 is removable, and with the definition
$z_1(0)=-\sqrt{2}$ and
$z_2(0)=\sqrt{2}$, we see that
$z_j(v)$,
$j=1,2$, are both analytic functions of
$v\in (-v_\mathrm{c},v_\mathrm{c})$ satisfying
$z_1(v) \lt 0 \lt z_2(v)$. Introducing the function
\begin{align}
\vartheta(z;v):=vz^2+z+\frac{2}{z},
\end{align}
one can check that
$\vartheta'(z_j(v);v)=0$ holds identically for
$-v_\mathrm{c} \lt v \lt v_\mathrm{c}$, and that
$\vartheta''(z_1(v);v) \lt 0$ while
$\vartheta''(z_2(v);v) \gt 0$ (here prime means differentiation with respect to z for fixed v).
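The following Julia snippet (a minimal sketch) carries out this check numerically for a sample value of $v$ using the branches of (1.66)–(1.67) valid for $0<v<v_\mathrm{c}$; note that the check involves only the locations of the stationary points and the signs of $\vartheta''$, so it is insensitive to the overall normalization of $\vartheta$:
\begin{verbatim}
# Check that z1(v) and z2(v) are stationary points of z -> theta(z; v), with
# theta''(z1(v); v) < 0 < theta''(z2(v); v), for a sample 0 < v < vc.
vc = 1 / sqrt(54)
z1(v) = (-1 + 2*cos(acos(2*v^2/vc^2 - 1)/3 - 2*pi/3)) / (6*v)   # (1.66), 0 < v < vc
z2(v) = (-1 + 2*cos(acos(2*v^2/vc^2 - 1)/3)) / (6*v)            # (1.67), 0 < v < vc

dtheta(z, v)  = 2*v*z + 1 - 2/z^2    # theta'(z; v)
d2theta(z, v) = 2*v + 4/z^3          # theta''(z; v)

v = 0.5 * vc
dtheta(z1(v), v), dtheta(z2(v), v)              # both approximately 0
d2theta(z1(v), v) < 0, d2theta(z2(v), v) > 0    # (true, true)
\end{verbatim}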
Theorem 1.18 (Large-
$X$ regime)
Let
$\tau:=|b/a|$ and
$p:=\frac{1}{2\pi}\ln(1+\tau^2)$, and set
$v:=TX^{-\frac{3}{2}}\in\mathbb{R}$. Then for each
$\delta \gt 0$,
\begin{align}\begin{aligned}
\Psi(X, T;\mathbf{G}) & =\frac{{\mathrm{e}}^{-{\mathrm{i}} \arg(ab)}}{X^{\frac{3}{4}}}
\left[ \frac{\sqrt{2p}}{\sqrt{-\vartheta''(z_1(v);v)}} {\mathrm{e}}^{-2 {\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)} {\mathrm{e}}^{- {\mathrm{i}} \frac{1}{2} p \ln(X) }
{\mathrm{e}}^{-{\mathrm{i}}( \phi_{z_1}(v) + \phi_{0}(v))} \right.\\
& \quad \left.+ \frac{\sqrt{2p}}{\sqrt{\vartheta''(z_2(v);v)}}{\mathrm{e}}^{-2 {\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)} {\mathrm{e}}^{{\mathrm{i}} \frac{1}{2} p \ln(X) } {\mathrm{e}}^{{\mathrm{i}}( \phi_{z_2}(v) + \phi_{0}(v))}
\right] + O(X^{-\frac{5}{4}}), \quad X\to+\infty
\end{aligned}
\end{align} holds uniformly for
$|v|\le v_\mathrm{c}-\delta$ and
$\tau=O(1)$, where real phases
$\phi_0(v)$,
$\phi_{z_1}(v)$, and
$\phi_{z_2}(v)$ are defined by
\begin{align}
\phi_0(v)
:=
\frac{1}{4}\pi + p \ln( 2 (z_2(v)-z_1(v))^2 ) - \arg(\Gamma({\mathrm{i}} p)),
\end{align}and
\begin{align}
\begin{aligned}
\phi_{z_1}(v)&:= p \ln(-\vartheta''(z_1(v);v ) ),\\
\phi_{z_2}(v)&:= p \ln(\vartheta''(z_2(v);v ) ).
\end{aligned}
\end{align}
Corollary 1.19 (Large negative
$X$)
The asymptotic behaviour of
$\Psi(X, T;\mathbf{G})$ with
$T=|X|^{\frac{3}{2}} v$ in the limit
$X\to-\infty$ is given by the same formula as in the right-hand side of (1.70) except that
$p$ is replaced with
$\bar{p}:=\frac{1}{2\pi}\ln\left(1+\tau^{-2} \right)$ throughout,
$X$ is replaced with
$|X|$, and the uniformity of the error requires that
$\bar{\tau}:=\tau^{-1}=O(1)$.
Proof. Apply Proposition 1.3. Note that
$\bar{p}$ can be written explicitly in terms of
$(p,\tau)$ by
$\bar{p}=p-\ln(\tau)/\pi$.
These results generalize a theorem [Reference Bilman, Ling and Miller8, Theorem 4] of the authors with L. Ling to the general family of solutions parametrized by the
$2\times 2$ matrix
$\mathbf{G}(a,b)$, and they also sharpen the error estimate (see Remark 2.2 below). We prove Theorem 1.18 in Section 2. Note that the leading term of (1.70) vanishes as
$p\to 0^+$, which is expected in light of Proposition 1.8 since this corresponds to b → 0. Similarly, the leading term of the asymptotic formula valid as
$X\to-\infty$ vanishes as
$\bar{p}\to 0^+$, which corresponds to a → 0. More generally, aside from the phase factor of
${\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}$, the dependence on (a, b) enters only via the value of p, which appears in the approximate formula (1.70) as an overall multiplier
$\sqrt{2p}$ and in smaller corrections to the p-independent dominant phase terms proportional to
$X^\frac{1}{2}$. In Figures 2–3, the accuracy of the approximation (1.70) is illustrated by comparing with numerical computations of
$\Psi(X,T;\mathbf{G})$ achieved using the software package described in Section 1.4.
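Readers wishing to reproduce such comparisons can evaluate the explicit terms on the right-hand side of (1.70) directly. The following Julia sketch is our own illustration, separate from the package of Section 1.4; the helper names crit and Psi_approx are chosen only for this example, and the gamma function of complex argument is assumed to be supplied by the SpecialFunctions.jl package.
\begin{verbatim}
# Evaluate only the explicit terms of (1.70); the result therefore carries an
# O(X^(-5/4)) error relative to Psi(X, T; G) with T = v*X^(3/2).
using SpecialFunctions   # provides gamma for complex arguments

vc = 54.0^(-1/2)
theta(z, v)   = z + v*z^2 + 2/z
d2theta(z, v) = 2*v + 4/z^3

function crit(v)                       # returns (z_1(v), z_2(v)) for 0 < |v| < v_c
    c = acos(2*v^2/vc^2 - 1)/3
    r0, r1 = (-1 + 2*cos(c))/(6v), (-1 + 2*cos(c - 2π/3))/(6v)
    return v > 0 ? (r1, r0) : (r0, r1)
end

function Psi_approx(X, v, a, b)
    z1, z2 = crit(v)
    p    = log(1 + abs(b/a)^2)/(2π)
    phi0 = π/4 + p*log(2*(z2 - z1)^2) - angle(gamma(im*p))   # phi_0(v)
    phi1 = p*log(-d2theta(z1, v))                            # phi_{z_1}(v)
    phi2 = p*log(d2theta(z2, v))                             # phi_{z_2}(v)
    t1 = sqrt(2p/(-d2theta(z1, v))) *
         exp(-2im*sqrt(X)*theta(z1, v) - im*p*log(X)/2 - im*(phi1 + phi0))
    t2 = sqrt(2p/d2theta(z2, v)) *
         exp(-2im*sqrt(X)*theta(z2, v) + im*p*log(X)/2 + im*(phi2 + phi0))
    return exp(-im*angle(a*b))/X^(3/4) * (t1 + t2)
end

Psi_approx(400.0, 0.5*vc, 1.0 + 0im, 1.0 + 0im)   # parameters of Figure 2
\end{verbatim}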

Figure 2. Left: The real part (navy) and imaginary part (yellow) of the explicit terms on the right-hand side of (1.70) (curves) compared with numerical evaluation of
$\Psi(X,T;\mathbf{G})$ (points) for
$v=0.5v_\mathrm{c}$ and parameters
$a=b=1$. Right: the logarithm of the absolute difference
$E$ between
$\Psi(X,T;\mathbf{G})$ and its explicit approximation in (1.70) for
$v=0.5v_\mathrm{c}$ and parameters
$a=b=1$ plotted against
$\ln(X)$. The purple line is a least-squares best fit, and it has a slope of
$-1.22883$, matching well with the predicted exponent of
$-\frac{5}{4}$.

Figure 3. Same as in Figure 2 but for
$a=\frac{1}{2}{\mathrm{e}}^{{\mathrm{i}}\pi/4}$ and
$b=1$. The best fit line in the right-hand plot has slope
$-1.26571$, again matching well with the predicted exponent of
$-\frac{5}{4}$.
Corollary 1.20 (Large-
$X$ approximation of the squared modulus)
Under the hypotheses of Theorem 1.18,
$ |\Psi(X,T;\mathbf{G}) |^2$ with
$T=X^{\frac{3}{2}}v$ has the behaviour
\begin{align}\begin{aligned}
|\Psi(X,T;\mathbf{G})|^2 = \frac{2p}{X^\frac{3}{2}\sqrt{-\vartheta''(z_1(v);v)\vartheta''(z_2(v);v)}}\left[\sqrt{-\frac{\vartheta''(z_2(v);v)}{\vartheta''(z_1(v);v)}} +\sqrt{-\frac{\vartheta''(z_1(v);v)}{\vartheta''(z_2(v);v)}}\right.\\
\left.{}\vphantom{\sqrt{-\frac{\vartheta''(z_1(v);v)}{\vartheta''(z_2(v);v)}}}+2\cos\left(
\Omega(X,v)\right)\right]
+O(X^{-2}),\quad X\to+\infty,
\end{aligned}
\end{align}where the phase is given by
\begin{align}
\Omega(X,v):=2\varrho(v)X^{\frac{1}{2}}+p \ln(X) +2\varsigma(v),
\end{align} where
$\varrho(v):= \vartheta(z_1(v);v) - \vartheta(z_2(v);v) \lt 0$ and
$\varsigma(v):=\frac{1}{2}\phi_{z_1}(v)+\frac{1}{2}\phi_{z_2}(v)+\phi_0(v)\in\mathbb{R}$.
Proof. This follows immediately from (1.70).
Remark 1.21. In particular, Corollary 1.20 gives the asymptotic behaviour of
$|\Psi(X,T;\mathbf{G})|^2$ in the limit
$X\to +\infty$ with
$T\ge 0$ held fixed, in which case the parameter v tends to zero. After taking the necessary square root, one sees that
$|\Psi(X,T;\mathbf{G})|$ decays only like
$|X|^{-\frac{3}{4}}$ as
$X\to\pm\infty$, so the function
$\mathbb{R}\ni X\mapsto \Psi(X,T;\mathbf{G})$ is not in
$L^1(\mathbb{R})$ for any
$T\ge 0$. Therefore,
$\Psi(X,T;\mathbf{G})$ is not associated with any sensible scattering data in the (classical) inverse-scattering transform solution of the focusing NLS equation (1.3). Although general rogue waves of infinite order are in
$L^2(\mathbb{R})$ by Theorem 1.9 and the focusing NLS equation is globally well-posed on
$L^2(\mathbb{R})$ [Reference Tsutsumi42], the inverse-scattering method nonetheless does not apply to these solutions.
Aside from an overall factor of
$X^{-\frac{3}{2}}$, the only dependence of
$|\Psi(X,T;\mathbf{G})|^2$ on
$X\gg 1$ in the leading terms for fixed v appears in the phase
$\Omega(X,v)$, making the leading contribution to
$|\Psi(X,T;\mathbf{G})|^2$ highly oscillatory. It is therefore reasonable to assert that
$|\Psi(X,T;\mathbf{G})|^2$ is approximately maximized along curves in the (X, T)-plane where
$\cos(\Omega(X,TX^{-\frac{3}{2}}))=1$. Thus, we set
$\Omega(X,v)=2\pi n$ for
$n\in\mathbb{Z}$ and solve for X. Since
$\varrho(v) \lt 0$ is bounded away from zero, X > 0 large forces n to be a large negative integer, and we accordingly write
$n=-N$ with N a large positive integer. The equation
$\Omega(X,v)=-2\pi N$ can then be rearranged as
\begin{align}
\frac{1}{p}\varrho(v)X^\frac{1}{2}\exp\left(\frac{1}{p}\varrho(v)X^\frac{1}{2}\right) = -{\mathrm{e}}^{-\eta},\quad\eta:=\frac{\pi N}{p}+\nu(v),\quad \nu(v):=\frac{\varsigma(v)}{p}+\ln\left(-\frac{p}{\varrho(v)}\right),
\end{align}which is solved using the Lambert W-function [Reference Olver, Daalhuis, Lozier, Schneider, Boisvert, Clark, Miller, Saunders, Cohl and McClain27, Section 4.13]:
\begin{align}
X=\frac{p^2}{\varrho(v)^2}W_{\pm 1}(-{\mathrm{e}}^{-\eta}\mp {\mathrm{i}} 0)^2.
\end{align} Using the asymptotic formula
$W_{\pm 1}(-{\mathrm{e}}^{-\eta}\mp{\mathrm{i}} 0)=-\eta -\ln(\eta)-\ln(\eta)/\eta + O(\ln(\eta)^2/\eta^2)$ as
$\eta\to+\infty$ (see [Reference Olver, Daalhuis, Lozier, Schneider, Boisvert, Clark, Miller, Saunders, Cohl and McClain27, Eqn. 4.13.11]), we then obtain for each large positive integer N a solution
$X=X_N(v)$ that is accurate up to a small absolute error:
\begin{align}
\begin{split}
X_N(v)&=\frac{p^2}{\varrho(v)^2}\left[\eta^2 + 2\eta\ln(\eta) + 2\ln(\eta) + \ln(\eta)^2\right] + O\left(\frac{\ln(\eta)^2}{\eta}\right),\quad\eta\to+\infty\\
&=\frac{\pi^2}{\varrho(v)^2}N^2 + \frac{2\pi p}{\varrho(v)^2}N\ln(N) + \frac{2\pi p}{\varrho(v)^2}\left(
\nu(v)+\ln\left(\frac{\pi}{p}\right)
\right)N\\
&\quad +\frac{p^2}{\varrho(v)^2}\ln(N)^2+\frac{2p^2}{\varrho(v)^2}\left(1+
\nu(v)+\ln\left(\frac{\pi}{p}\right)
\right)\ln(N)\\
&\quad +\frac{p^2}{\varrho(v)^2}\left[
2\nu(v)+\nu(v)^2
+ 2\left(1+
\nu(v)
\right)\ln\left(\frac{\pi}{p}\right) +\ln\left(\frac{\pi}{p}\right)^2\right] \\
&\quad +O(N^{-1}\ln(N)^2),\quad N\to+\infty.
\end{split}
\end{align} This expansion is valid uniformly for
$|v|\le v_\mathrm{c}-\delta$ for any δ > 0 however small, but it fails for
$v\approx \pm v_\mathrm{c}$, where
$\phi_{z_1}(v)$ and hence also
$\varsigma(v)$ blows up. The curves
$X=X_N(TX^{-\frac{3}{2}})$ are superimposed on a density plot of the square modulus of
$\Psi(X,T;\mathbf{G})$ in Figure 6. This shows that these curves actually approximate the peaks of the modulus in the region
$|TX^{-\frac{3}{2}}| \lt v_\mathrm{c}$ quite accurately even when X is not very large.
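The Lambert-W step is also easy to carry out numerically. The following Julia sketch is our own helper (not part of the package): it takes numerical values of p, $\varrho(v) \lt 0$, and $\nu(v)$, computed from the definitions above, and locates $X_N(v)$ by a Newton iteration for the relevant real branch; a library implementation of the W-function could be used instead.
\begin{verbatim}
# Solve (1.75) for X at fixed v: given p > 0, rho = varrho(v) < 0 and nu = nu(v),
# find the real branch value W <= -1 of W*exp(W) = -exp(-eta) and return
# X_N(v) = p^2 W^2 / varrho(v)^2 as in (1.76).
function X_crest(N::Integer, p::Real, rho::Real, nu::Real)
    eta = π*N/p + nu                         # as in (1.75)
    W = -eta - log(eta)                      # leading asymptotics; starting guess
    for _ in 1:40                            # Newton iteration for W + log(-W) + eta = 0,
        W -= (W + log(-W) + eta)/(1 + 1/W)   # the logarithm of W*exp(W) = -exp(-eta)
    end
    return (p*W/rho)^2
end

# Example call shape: X_crest(25, p, varrho_v, nu_v), with the last three arguments
# evaluated numerically at the desired v from the definitions above.
\end{verbatim}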
After some preparation in Sections 2.1–2.2, the proof of Theorem 1.18 is given in Section 2.3 below.
1.3.2. Behaviour of
$\Psi(X,T;\mathbf{G})$ for T large
Next, we describe the behaviour of
$\Psi(X,T;\mathbf{G})$ in the limit as
$T\to\pm \infty$. Let
$w_\mathrm{c}:=54^\frac{1}{3} \gt 0$. Define for
$|w| \lt w_\mathrm{c}$ the real quantities:
\begin{align}
Z_1(w):=\frac{1}{12}\left(-w-\sqrt{w^2+8w_\mathrm{c}^2}\right) \lt 0\quad\text{and}\quad
Z_2(w):=\frac{1}{12}\left(-w+\sqrt{w^2+8w_\mathrm{c}^2}\right) \gt 0,
\end{align}and the complex quantity
\begin{align}
Z_0(w):=\frac{1}{3}\left(-w +{\mathrm{i}}\sqrt{w_\mathrm{c}^2-w^2}\right)\in\mathbb{C}_+,
\end{align}where in each case the positive square root is taken. For convenience, we write
Related amplitudes are then defined by
\begin{align}
\begin{split}
m_{Z_1}^\pm(w)&:=\frac{1}{2}(1\pm\cos(\arg(Z_1(w)-Z_0(w)))) \gt 0,\\
m_{Z_2}^\pm(w)&:=\frac{1}{2}(1\pm\cos(\arg(Z_2(w)-Z_0(w)))) \gt 0,
\end{split}
\end{align} which of course satisfy
$m_{Z_j}^+(w)+m_{Z_j}^-(w)=1$ for
$j=1,2$. Now define some real phases by:
\begin{align}
\kappa(w):=-\frac{1}{3}(w^2+w_\mathrm{c}^2),
\end{align}
\begin{align}\begin{aligned}
\Phi(w):= &
2\bar{p}\ln\left(\frac{Z_1(w)-\mathrm{Re}(Z_0(w))+|Z_1(w)-Z_0(w)|}{V(w)}\right)\\
& \qquad \qquad -2p\ln\left(\frac{Z_2(w)-\mathrm{Re}(Z_0(w)) + |Z_2(w)-Z_0(w)|}{V(w)}\right) -\frac{\pi}{2},
\end{aligned}
\end{align}
\begin{align}\begin{aligned}
\Phi^0_{Z_1}(w): & =2p\ln\left(\frac{V(w)^2-2(V(w)-|Z_1(w)-Z_0(w)|)(V(w)-|Z_2(w)-Z_0(w)|)}{V(w)^2-2(V(w)+|Z_1(w)-Z_0(w)|)(V(w)-|Z_2(w)-Z_0(w)|)}\right)\\
& \quad +2p\ln\left(\frac{\mathrm{Re}(Z_0(w))-Z_1(w)}{|Z_1(w)-Z_0(w)|-V(w)}\right)
+ 2\bar{p}\ln\left(\frac{-Z_1(w)V(w)}{4|Z_1(w)-Z_0(w)|^\frac{5}{2}(Z_2(w)-Z_1(w))^\frac{1}{2}}\right)\\
& \quad +\frac{\pi}{4}+\arg(\Gamma({\mathrm{i}}\bar{p})),
\end{aligned}
\end{align}
\begin{align}\begin{aligned}
\Phi^0_{Z_2}(w): & =2\bar{p}\ln\left(\frac{V(w)^2-2(V(w)-|Z_2(w)-Z_0(w)|)(V(w)-|Z_1(w)-Z_0(w)|)}{V(w)^2-2(V(w)+|Z_2(w)-Z_0(w)|)(V(w)-|Z_1(w)-Z_0(w)|)}\right)\\
& \quad +2\bar{p}\ln\left(\frac{Z_2(w)-\mathrm{Re}(Z_0(w))}{|Z_2(w)-Z_0(w)|-V(w)}\right)
+2p\ln\left(\frac{Z_2(w)V(w)}{4|Z_2(w)-Z_0(w)|^\frac{5}{2}(Z_2(w)-Z_1(w))^\frac{1}{2}}\right)\\
& \quad +\frac{\pi}{4}+\arg(\Gamma({\mathrm{i}} p)),
\end{aligned}
\end{align}
\begin{align}
\Phi_{Z_1}(T,w):=2\frac{|Z_1(w)-Z_0(w)|^3}{-Z_1(w)}T^\frac{1}{3} -\frac{1}{3}\bar{p}\ln(T) + \Phi^0_{Z_1}(w),
\end{align}and
\begin{align}
\Phi_{Z_2}(T,w):=2\frac{|Z_2(w)-Z_0(w)|^3}{Z_2(w)}T^\frac{1}{3} -\frac{1}{3}p\ln(T) + \Phi_{Z_2}^0(w).
\end{align}Theorem 1.22 (Large-
$T$ regime)
Let
$\tau:=|b/a|$ and
$p:=\frac{1}{2\pi}\ln(1+\tau^2)$,
$\bar{p}:=\frac{1}{2\pi}\ln(1+\tau^{-2})$, and set
$w:=XT^{-\frac{2}{3}}\in\mathbb{R}$. Then, for each
$\delta \gt 0$,
\begin{align}\begin{aligned}
\Psi(X,T;\mathbf{G}) & ={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)}{\mathrm{e}}^{{\mathrm{i}}\Phi(w)}\left[\frac{1}{3}\sqrt{w_\mathrm{c}^2-w^2}T^{-\frac{1}{3}}\right.\\
& \quad -\frac{1}{\sqrt{Z_2(w)-Z_1(w)}}\left\{
\frac{\sqrt{\bar{p}}|Z_1(w)|\left(m_{Z_1}^+(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_1}(T,w)}+m_{Z_1}^-(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_1}(T,w)}\right)}{|Z_1(w)-Z_0(w)|^\frac{1}{2}}\right.\\
& \quad \left.\left.+\frac{\sqrt{p}Z_2(w)\left(m_{Z_2}^-(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_2}(T,w)}+m_{Z_2}^+(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_2}(T,w)}\right)}{|Z_2(w)-Z_0(w)|^\frac{1}{2}}\right\}T^{-\frac{1}{2}} + O(T^{-\frac{2}{3}})\right],\quad T\to+\infty
\end{aligned}
\end{align} holds uniformly for
$|w|\le w_\mathrm{c}-\delta$,
$\tau=O(1)$, and
$\tau^{-1}=O(1)$.
Corollary 1.23 (Large negative
$T$)
The asymptotic behaviour of
$\Psi(X,T;\mathbf{G})$ with
$X=w|T|^{\frac{2}{3}}$ in the limit
$T\to-\infty$ is given by the right-hand side of (1.88), except that
$T$ is replaced with
$|T|$ and the signs of the real phases
$\Phi(w)$,
$\Phi_{Z_1}(|T|,w)$,
$\Phi_{Z_2}(|T|,w)$, and
$\kappa(w)$ (but not
$-\arg(ab)$) are changed.
Proof. Apply Proposition 1.4.
These results generalize the long-time asymptotic theorem [Reference Bilman, Ling and Miller8, Theorem 5] of the authors with L. Ling to the general family of solutions parametrized by the
$2\times 2$ matrix
$\mathbf{G}(a,b)$. They also provide an explicit correction term proportional to
$T^{-\frac{1}{2}}$ not previously obtained for any parameters. After some preliminary definitions in Sections 3.1–3.3, Theorem 1.22 is proved in Section 3.4 below. One can see from (1.88) that aside from the factor of
${\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}$, the asymptotic formula depends on (a, b) only via the values of p and
$\bar{p}$, which enter via
$\sqrt{p}$ and
$\sqrt{\bar{p}}$ as multipliers of the two correction terms, and which also appear as multipliers in the phase
$\Phi(w)$ and in subdominant corrections to the leading terms proportional to
$T^\frac{1}{3}$ in the phases
$\Phi_{Z_1}(T,w)$ and
$\Phi_{Z_2}(T,w)$. The accuracy of the large-T approximation afforded by Theorem 1.22 is illustrated in Figures 4–5.

Figure 4. Left: The real part (navy) and imaginary part (yellow) of the explicit terms on the right-hand side of (1.88) (curves) compared with numerical evaluation of
$\Psi(X,T;\mathbf{G})$ (points) for
$w=0.85w_\mathrm{c}$ and parameters
$a=b=1$. Right: the logarithm of the absolute difference
$E$ between
$\Psi(X,T;\mathbf{G})$ and its explicit approximation in (1.88) for
$w=0.85w_\mathrm{c}$ and parameters
$a=b=1$ plotted against
$\ln(T)$. The purple line is a least-squares best fit, and it has a slope of
$-0.66042$, matching well with the predicted exponent of
$-\frac{2}{3}$.

Figure 5. Same as in Figure 4 but for
$a=\frac{1}{2}{\mathrm{e}}^{{\mathrm{i}}\pi/4}$ and
$b=1$. The best fit line in the right-hand plot has slope
$-0.69432$, again matching well with the predicted exponent of
$-\frac{2}{3}$.
Corollary 1.24 (Large-
$T$ approximation of the squared modulus)
Under the hypotheses of Theorem 1.22,
$ |\Psi(X,T;\mathbf{G})|^2$ with
$X=T^\frac{2}{3}w$ has the behaviour
\begin{align}\begin{aligned}
|\Psi(X,T;\mathbf{G})|^2 & = \frac{w_\mathrm{c}^2-w^2}{9}T^{-\frac{2}{3}}\\
& \quad -\frac{2}{3}\sqrt{\frac{w_\mathrm{c}^2-w^2}{Z_2(w)-Z_1(w)}}\left\{\frac{\sqrt{\bar{p}}|Z_1(w)|\cos(\Phi_{Z_1}(T,w))}{|Z_1(w)-Z_0(w)|^\frac{1}{2}} +
\frac{\sqrt{p}Z_2(w)\cos(\Phi_{Z_2}(T,w))}{|Z_2(w)-Z_0(w)|^\frac{1}{2}}\right\}T^{-\frac{5}{6}}\\
& \quad +O(T^{-1}),\quad T\to+\infty
\end{aligned}
\end{align} and for the limit
$T\to-\infty$ we simply replace
$T$ with
$|T|$ on the right-hand side.
Proof. This follows immediately from (1.88).
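To make the w-dependence of these formulas concrete, the following Julia sketch (our own illustration; the names Z1, Z2, Z0, and envelope are chosen only for this example) evaluates $Z_1(w)$, $Z_2(w)$, $Z_0(w)$ and the leading-order description of the squared modulus from Corollary 1.24, namely the $T^{-\frac{2}{3}}$ term and the amplitudes of the two oscillatory $T^{-\frac{5}{6}}$ corrections; the phases $\Phi_{Z_1}(T,w)$ and $\Phi_{Z_2}(T,w)$ are not evaluated here.
\begin{verbatim}
# The real quantities Z_1(w) < 0 < Z_2(w), the complex Z_0(w), and the leading-order
# description of |Psi|^2 for T large: the T^(-2/3) term and the amplitudes that
# multiply the two cosines in the T^(-5/6) correction (which enter with a minus sign).
wc = 54.0^(1/3)

Z1(w) = (-w - sqrt(w^2 + 8*wc^2))/12
Z2(w) = (-w + sqrt(w^2 + 8*wc^2))/12
Z0(w) = (-w + im*sqrt(wc^2 - w^2))/3

function envelope(T, w, a, b)
    p    = log(1 + abs(b/a)^2)/(2π)
    pbar = log(1 + abs(a/b)^2)/(2π)
    lead = (wc^2 - w^2)/9 * T^(-2/3)
    c    = (2/3)*sqrt((wc^2 - w^2)/(Z2(w) - Z1(w))) * T^(-5/6)
    A1   = c * sqrt(pbar)*abs(Z1(w))/sqrt(abs(Z1(w) - Z0(w)))   # with cos(Phi_{Z_1})
    A2   = c * sqrt(p)*Z2(w)/sqrt(abs(Z2(w) - Z0(w)))           # with cos(Phi_{Z_2})
    return lead, A1, A2
end

envelope(1.0e4, 0.85*wc, 1.0, 1.0)   # parameters of Figure 4
\end{verbatim}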
Comparing with the interpretation of Corollary 1.20 that the modulus of the solution is maximized along certain curves
$X=X_N(TX^{-\frac{3}{2}})$ in the relevant part of the (X, T)-plane, here we see that instead the fluctuation proportional to
$T^{-\frac{5}{6}}$ is a combination of two sinusoids with different phases, so that one term is approximately maximized along curves satisfying
$\Phi_{Z_1}(T,w)=(2M+1)\pi$ while the other is approximately maximized along curves satisfying
$\Phi_{Z_2}(T,w)=(2N+1)\pi$, for independent integers
$(M,N)\in\mathbb{Z}^2$. These two families of curves are plotted over the region
$|XT^{-\frac{2}{3}}| \lt w_\mathrm{c}$ for different values of the parameters (a, b) in Figures 6 and 7, and indeed one can see that even in the part of this region where T is not very large, still the peaks of the fluctuation appear to be localized near the intersections of one curve from each family, so that both terms of the fluctuation are simultaneously maximized.

Figure 6. Density plot of
$|\Psi(X,T)|^2$ with
$a=b=1$ and the boundary curve (purple)
$X/T^{\frac{2}{3}}=54^{\frac{1}{3}}$, overlaid with the level curves (green) on which the cosine in (1.73) in Corollary 1.20 is maximal and the level curves (red and mustard) on which
$\cos(\Phi_{Z_1}(T,w))$ and
$\cos(\Phi_{Z_2}(T,w))$ in (1.89) from Corollary 1.24 are minimal (respectively).

Figure 7. As in Figure 6, but for parameters
$a=\frac{1}{4}$ and
$b=1$.
Like the curves
$X=X_N(TX^{-\frac{3}{2}})$, the curves in the family
$\Phi_{Z_2}(T,w)=(2N+1)\pi$ shown with mustard colour in Figures 6 and 7 appear to approach the common boundary in the first quadrant (X > 0 and T > 0) of the regions of validity of Corollaries 1.20 and 1.24 (shown in purple). It seems from the plot as though these two families of curves could be compared near the boundary curve. The equation
$\Phi_{Z_2}(T,w)=(2N+1)\pi$ can be rewritten in the form
\begin{align}\begin{aligned}
-\frac{2|Z_2(w)-Z_0(w)|^3}{pZ_2(w)}T^\frac{1}{3}\exp\left(-\frac{2|Z_2(w)-Z_0(w)|^3}{pZ_2(w)}T^\frac{1}{3}\right)=-{\mathrm{e}}^{-\kappa},\quad\kappa:=\frac{2\pi N}{p} + \varpi(w),\\
\varpi(w):=\frac{\pi}{p}-\frac{1}{p}\Phi^0_{Z_2}(w)-\ln\left(\frac{2|Z_2(w)-Z_0(w)|^3}{pZ_2(w)}\right),
\end{aligned}
\end{align}which is solved in terms of the Lambert W-function similarly to (1.76):
\begin{align}
T=-\frac{p^3Z_2(w)^3}{8|Z_2(w)-Z_0(w)|^9}W_{\pm 1}(-{\mathrm{e}}^{-\kappa}\mp{\mathrm{i}} 0)^3.
\end{align} Since a cube must be calculated in the limit
$\kappa\to +\infty$, we need the more accurate asymptotic formula
\begin{align}\begin{aligned}
W_{\pm 1}(-{\mathrm{e}}^{-\kappa}\mp{\mathrm{i}} 0)=-\kappa-\ln(\kappa)-\kappa^{-1}\ln(\kappa)+\frac{1}{2}\kappa^{-2}\ln(\kappa)^2 -\kappa^{-2}\ln(\kappa) \\+ O(\kappa^{-3}\ln(\kappa)^3),\quad\kappa\to+\infty,
\end{aligned}
\end{align} and therefore for each large positive integer N we obtain a solution
$T=T_N(w)$ with the approximation
\begin{align}
\begin{split}
T_N(w)&=\frac{p^3Z_2(w)^3}{8|Z_2(w)-Z_0(w)|^9}\left[\kappa^3+3\kappa^2\ln(\kappa)+3\kappa\ln(\kappa)^2+3\kappa\ln(\kappa)+\ln(\kappa)^3+\frac{9}{2}\ln(\kappa)^2 + 3\ln(\kappa)\right]\\
&\quad+O\left(\frac{\ln(\kappa)^3}{\kappa}\right),\quad\kappa\to+\infty\\
&=\frac{p^3Z_2(w)^3}{8|Z_2(w)-Z_0(w)|^9}\left[\left(\frac{2\pi}{p}\right)^3N^3+3\left(\frac{2\pi}{p}\right)^2N^2\ln(N)+3\left(\frac{2\pi}{p}\right)^2\left(\varpi(w)+\ln\left(\frac{2\pi}{p}\right)\right)N^2\right.\\
&\quad+\frac{6\pi}{p}N\ln(N)^2+\frac{6\pi}{p}\left(1+2\varpi(w)+2\ln\left(\frac{2\pi}{p}\right)\right)N\ln(N)\\
&\quad+\frac{6\pi}{p}\left(\varpi(w)^2+\varpi(w)+\left(2\varpi(w)+1\right)\ln\left(\frac{2\pi}{p}\right)+\ln\left(\frac{2\pi}{p}\right)^2\right)N + \ln(N)^3\\
&\quad+\left(3\varpi(w)+\frac{9}{2}+3\ln\left(\frac{2\pi}{p}\right)\right)\ln(N)^2\\
&\quad+\left(3\varpi(w)^2+9\varpi(w) + 3+ 3(2\varpi(w)+3)\ln\left(\frac{2\pi}{p}\right)+3\ln\left(\frac{2\pi}{p}\right)^2 \right)\ln(N)\\
&\quad+\varpi(w)^3+\frac{9}{2}\varpi(w)^2+3\varpi(w) +3\left(\varpi(w)^2+3\varpi(w)+1\right)\ln\left(\frac{2\pi}{p}\right) \\
&\quad\left.+ 3\left(\varpi(w)+\frac{3}{2}\right)\ln\left(\frac{2\pi}{p}\right)^2 + \ln\left(\frac{2\pi}{p}\right)^3\right]
+O\left(\frac{\ln(N)^3}{N}\right),\quad N\to+\infty.
\end{split}
\end{align} One can check that the first two terms of
$54T_N(w)^2$, proportional to $N^6$ and
$N^5\ln(N)$ respectively, have finite values in the limit
$w\uparrow w_\mathrm{c}$. Exactly the same is true of the two leading terms of
$X_N(v)^3$ in the limit
$v\uparrow v_\mathrm{c}$. However, only the leading term of each expansion matches, and a discrepancy appears at the order
$N^5\ln(N)$. Moreover, it is easy to see that the coefficient of
$N^5\ln(N)$ cannot be changed in one of the expansions by adding any fixed integer to N (amounting to re-indexing the curves of the relevant family). Subsequent terms in each expansion actually blow up as
$w\uparrow w_\mathrm{c}$ and
$v\uparrow v_\mathrm{c}$ respectively. This computation shows that it is just an illusion that the families of curves appear to match along the boundary curve
$54T^2=X^3$ in the first quadrant in Figures 6 and 7. Moreover, zooming in near the boundary curve shows that the curves actually turn sharply as they approach the boundary, becoming tangent to it as shown in Figure 8.
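For the reader's convenience, the leading-order agreement just asserted can be verified directly; the following short computation is supplied only as an illustration and is not used elsewhere. Evaluating the formulas for $z_{1,2}(v)$ and $Z_{0,1,2}(w)$ at the endpoints gives
\begin{align*}
z_1(v_\mathrm{c})=-\sqrt{6},\qquad z_2(v_\mathrm{c})=\tfrac{1}{2}\sqrt{6},\qquad
\varrho(v_\mathrm{c})=\vartheta(z_1(v_\mathrm{c});v_\mathrm{c})-\vartheta(z_2(v_\mathrm{c});v_\mathrm{c})=-\sqrt{6}-\tfrac{5}{4}\sqrt{6}=-\tfrac{9}{4}\sqrt{6},
\end{align*}
while $Z_2(w_\mathrm{c})=\tfrac{1}{6}w_\mathrm{c}$, $Z_1(w_\mathrm{c})=Z_0(w_\mathrm{c})=-\tfrac{1}{3}w_\mathrm{c}$, and $|Z_2(w_\mathrm{c})-Z_0(w_\mathrm{c})|=\tfrac{1}{2}w_\mathrm{c}$. Hence the coefficients of $N^6$ in $X_N(v)^3$ and in $54T_N(w)^2$ agree in the limit:
\begin{align*}
54\left(\frac{\pi^3 Z_2(w_\mathrm{c})^3}{|Z_2(w_\mathrm{c})-Z_0(w_\mathrm{c})|^9}\right)^{2}
=\frac{54\cdot 2^{18}\,\pi^6}{6^6\,w_\mathrm{c}^{12}}
=\frac{2^9\pi^6}{3^{15}}
=\frac{\pi^6}{\varrho(v_\mathrm{c})^6}.
\end{align*}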

Figure 8. Zoom-in plots on two different scales near the boundary curve corresponding to the parameters
$a=b=1$ as in Figure 6, showing the mismatches of the amplitude-maximizing curves near the boundary.
1.3.3. Transitional behaviour of
$\Psi(X,T;\mathbf{G})$
The domains of validity of Theorems 1.18 and 1.22 and their corollaries cover all asymptotic directions in the (X, T)-plane, except for those near the common boundary curves
$|T||X|^{-\frac{3}{2}}=v_\mathrm{c}$, which are equivalently written as
$|X||T|^{-\frac{2}{3}}=w_\mathrm{c}$. Our final asymptotic results concern the behaviour of
$\Psi(X,T;\mathbf{G})$ in the transitional region for large (X, T) near these curves. To formulate them, we first need to recall the second Painlevé equation with parameter
$\alpha\in\mathbb{C}$:
every solution of which is a meromorphic function of
$x\in\mathbb{C}$ all of whose poles are simple with residue ±1. There is a unique solution
$\mathcal{U}(x)$ of (1.94) with asymptotic behaviour
$\mathcal{U}(x)=-{\mathrm{i}} (\frac{1}{2}x)^\frac{1}{2}-\frac{1}{2}\alpha x^{-1}+O(|x|^{-\frac{5}{2}})$ as
$x\to\infty$ with
$|\arg(x)| \lt \frac{2}{3}\pi$. This is one of the so-called increasing tritronquée solutions of (1.94). We require this solution in the situation that
$\alpha=\frac{1}{2}+{\mathrm{i}} p$, where
$p=\frac{1}{2\pi}\ln(1+\tau^2)$ with
$\tau=|b/a|$. Given the increasing tritronquée solution
$\mathcal{U}(x)$ for such α determined by τ > 0, a related meromorphic function
$\mathcal{V}(y;\tau)$ is uniquely defined by the conditions
\begin{align}
\frac{\mathcal{V}'(y;\tau)}{\mathcal{V}(y;\tau)}=-(\tfrac{2}{3})^\frac{1}{3}\mathcal{U}(-(\tfrac{2}{3})^\frac{1}{3}y),\quad \mathcal{V}(y;\tau)=-\left(\frac{y}{6}\right)^\alpha(1+O(y^{-\frac{3}{4}})),\quad y\to+\infty.
\end{align} Then it is shown in [Reference Miller25, Theorem 1.4] that
$\mathcal{V}(y;\tau)$ is analytic and non-vanishing for all real y, and it has the complementary asymptotic behaviour
\begin{align}
\mathcal{V}(y;\tau)=\frac{\tau p\Gamma({\mathrm{i}} p)}{2\sqrt{\pi}}{\mathrm{e}}^{-3\pi{\mathrm{i}}/4}{\mathrm{e}}^{-\pi p/2}2^{-{\mathrm{i}} p}{\mathrm{e}}^{-2{\mathrm{i}} (-y/3)^{3/2}}(-3y)^{-\frac{1}{2}(\frac{1}{2}+{\mathrm{i}} p)}(1+O(|y|^{-\frac{5}{4}})),\quad y\to -\infty.
\end{align} See [Reference Miller25, Corollary 1.5], which is valid for all τ > 0 and
$p=\frac{1}{2\pi}\ln(1+\tau^2)$.
Theorem 1.25 (Transition regime)
Let
$\tau:=|b/a|$ and
$p:=\frac{1}{2\pi}\ln(1+\tau^2)$, and set
$v:=TX^{-\frac{3}{2}}$ and
$v_\mathrm{c}:= 54^{-\frac{1}{2}}$. In the limit
$X\to+\infty$,
\begin{align}
\Psi(X,T;\mathbf{G})=2\cdot 3^{\frac{2}{3}}X^{-\frac{2}{3}}\mathcal{V}(2^{\frac{5}{2}}3^{\frac{7}{6}}X^{\frac{1}{3}}(v-v_\mathrm{c});\tau){\mathrm{e}}^{{\mathrm{i}}\Omega_\mathrm{c}(X,v)} + 2^\frac{1}{4}3^{-\frac{1}{4}}p^\frac{1}{2}X^{-\frac{3}{4}}
{\mathrm{e}}^{{\mathrm{i}}\Omega_2(X,v)} +O(X^{-\frac{5}{6}})
\end{align} holds uniformly for
$v-v_\mathrm{c}=O(X^{-\frac{1}{3}})$ and
$\tau=O(1)$, where phases
$\Omega_\mathrm{c}(X,v)$ and
$\Omega_2(X,v)$ are defined by
\begin{align}
\Omega_\mathrm{c}(X,v):=24^{\frac{1}{2}}X^{\frac{1}{2}}- 12 X^{\frac{1}{2}}(v-v_\mathrm{c})-\frac{1}{3}p\ln(X) +\frac{\pi}{2}-\arg(ab) + p\ln(2)-\frac{5}{3}p\ln(3),
\end{align}and
\begin{align}\begin{aligned}
\Omega_2(X,v):& =-\left(\frac{75}{2}\right)^\frac{1}{2}X^{\frac{1}{2}}-3X^\frac{1}{2}(v-v_\mathrm{c}) +\frac{1}{2}p\ln(X)\\
& \qquad +\frac{\pi}{4}-\arg(\Gamma({\mathrm{i}} p))-\arg(ab)+\frac{1}{2}p\ln(2)+\frac{7}{2}p\ln(3).
\end{aligned}
\end{align}Corollary 1.26 (
$\Psi(X,T)$ near reflected transitional curves)
The following results hold for large coordinates near the reflections of the curve
$X^3=54T^2$,
$X \gt 0$,
$T \gt 0$, in the coordinate axes:
• In the limit
$X\to-\infty$ with
$T|X|^{-\frac{3}{2}}-v_\mathrm{c}=O(|X|^{-\frac{1}{3}})$,
$\Psi(X,T;\mathbf{G})$ is given by an analogue of the right-hand side of (1.97) in which
$X$ is replaced by
$-X$, τ is replaced by
$\bar{\tau}=\tau^{-1}$,
$p$ is replaced by
$\bar{p}=\frac{1}{2\pi}\ln(1+\bar{\tau}^2)$, and the error term is uniform for
$\bar{\tau}=O(1)$.• In the limit
$X\to+\infty$ with
$-TX^{-\frac{3}{2}}-v_\mathrm{c}=O(X^{-\frac{1}{3}})$,
$\Psi(X,T;\mathbf{G})$ is given by an analogue of the right-hand side of (1.97) in which
$T$ is replaced by
$-T$,
$\mathcal{V}(y;\tau)$ is replaced by
$\mathcal{V}(y;\tau)^*$, and all signs except that of
$-\arg(ab)$ in
$\Omega_\mathrm{c}(X,v)$ and
$\Omega_2(X,v)$ are changed.• In the limit
$X\to -\infty$ with
$-T|X|^{-\frac{3}{2}}-v_\mathrm{c}=O(|X|^{-\frac{1}{3}})$,
$\Psi(X,T;\mathbf{G})$ is given by an analogue of the right-hand side of (1.97) in which (
$X, T$) are replaced by
$(-X,-T)$, τ is replaced by
$\bar{\tau}$,
$p$ is replaced by
$\bar{p}$,
$\mathcal{V}(y;\tau)$ is replaced by
$\mathcal{V}(y;\bar{\tau})^*$, and all signs except that of
$-\arg(ab)$ in
$\Omega_\mathrm{c}(-X,v)$ and
$\Omega_2(-X,v)$ are changed. The error term is uniform for
$\bar{\tau}=O(1)$.
Proof. Apply Propositions 1.3 and 1.4.
Theorem 1.25 is proved in Section 4. The leading term in (1.97) was obtainedFootnote 4 in the special case of a = b in [Reference Bilman, Ling and Miller8, Theorem 6]. In the general case, we see that the limiting Painlevé-II function
$\mathcal{V}(\diamond;\tau)$ corresponds to a variable parameter
$\alpha=\frac{1}{2}+{\mathrm{i}} p$ depending on (a, b) via
$\tau=|b/a|$ and
$p=\frac{1}{2\pi}\ln(1+\tau^2)$.
The relative size of the correction term in (1.97) compared to the leading term is
$O(X^{-\frac{1}{12}})$, which decays very slowly as
$X\to\infty$. This observation motivates keeping the correction term although it is asymptotically negligible compared to the leading term. We observe that the correction term is essentially the same as the contribution to
$\Psi(X,T;\mathbf{G})$ of the explicit term on the second line of (1.70), approximated in the limit
$v\uparrow v_\mathrm{c}$. Theorem 1.25 shows that as
$v\uparrow v_\mathrm{c}$, the contribution from the critical point
$z_2(v)$ persists at the same order while that from
$z_1(v)$ becomes larger by a factor proportional to
$X^\frac{1}{12}$ and takes on a universal form expressed in terms of the Painlevé-II special function
$\mathcal{V}(\diamond;\tau)$.
To illustrate the validity of Theorem 1.25, we took
$a=b=1$ and applied the numerical method described in Appendix B to solve Riemann–Hilbert Problem 2.1 from [Reference Miller25] and hence obtain
$\mathcal{V}(y;\tau=1)$ for a dense grid of y-values in the interval
$[-0.2,0.2]$. Given such a value of y, and a large value of X > 0, a corresponding value of T is defined by
$T=X^\frac{3}{2}v=X^\frac{3}{2}(v_\mathrm{c}+2^{-\frac{5}{2}}3^{-\frac{7}{6}}X^{-\frac{1}{3}}y)$. Then we numerically computed
$\Psi(X,T;\mathbf{G}(1,1))$ using the method described in Section 5, and used the result to compare with the prediction of Theorem 1.25 as shown in Figure 9 and also to calculate the pointwise renormalized error
\begin{align}\begin{aligned}
E(y):=\left| 2^{-1}3^{-\frac{2}{3}}X^\frac{2}{3}\Psi(X,T;\mathbf{G}(1,1))-\mathcal{V}(y;1){\mathrm{e}}^{{\mathrm{i}}\Omega_\mathrm{c}(X,v)}-2^{-\frac{3}{4}}3^{-\frac{11}{12}}X^{-\frac{1}{12}}{\mathrm{e}}^{{\mathrm{i}}\Omega_2(X,v)}\right|,\\
v=v_\mathrm{c}+2^{-\frac{5}{2}}3^{-\frac{7}{6}}X^{-\frac{1}{3}}y,\quad T=X^\frac{3}{2}v=X^\frac{3}{2}(v_\mathrm{c}+2^{-\frac{5}{2}}3^{-\frac{7}{6}}X^{-\frac{1}{3}}y),
\end{aligned}
\end{align} for
$y\in [-1,1]$. We next set
$E:=\max_{|y|\le 1}E(y)$ and plotted
$\ln(E)$ against
$\ln(X)$. See Figure 10.
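The decay exponents quoted in the captions of Figures 2–5 and 10 are obtained from ordinary least-squares fits of the logarithm of the measured error against the logarithm of the large parameter. A minimal Julia sketch of this diagnostic follows; the data vectors in the sample call are placeholders rather than values from the actual computations.
\begin{verbatim}
# Fit a line to (log of large parameter, log of measured error); the slope estimates
# the decay exponent reported in the figure captions.
function fitted_slope(logx::Vector{Float64}, logerr::Vector{Float64})
    A = [logx ones(length(logx))]   # design matrix: slope and intercept columns
    return (A \ logerr)[1]          # least-squares solution; first entry is the slope
end

# Placeholder data for illustration only:
fitted_slope(log.([1.0e3, 1.0e4, 1.0e5]), log.([2.0e-4, 3.6e-5, 6.4e-6]))
\end{verbatim}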

Figure 9. Numerical evaluation of
$\Psi(X,T;\mathbf{G}(1,1))$ as a function of
$y$ for fixed
$X$ (points) compared with the approximation of the two explicit terms in Theorem 1.25 (solid curves). First row: real and imaginary parts; second row: modulus. Left-to-right:
$X=4000$,
$X=40,000$,
$X=400,000$. The shaded region in each plot corresponds to an error bar given by a fixed constant multiple of
$X^{-5/6}$.

Figure 10. The uniform renormalized error
$E$ over
$y\in [-1,1]$ as a function of
$X$. Black points are numerical computations and the purple line is a least-squares best fit line with slope
$-0.162334$. This matches very well the prediction of Theorem 1.25, namely
$E=O(X^{-\frac{1}{6}})$, suggesting that the error term in (1.97) is sharp.
1.4. The software package RogueWaveInfiniteNLS.jl for Julia
As part of this work we introduce a software package titled RogueWaveInfiniteNLS.jl [Reference Bilman and Miller11] in the Julia programming language. The package is based on a theoretical framework for the numerical solution of Riemann–Hilbert problems due to S. Olver and T. Trogdon; see [Reference Olver28, Reference Olver29], and [Reference Trogdon and Olver39]. The main utility of RogueWaveInfiniteNLS.jl is that the end user can easily evaluate to high accuracy
$\Psi(X,T;\mathbf{G},B)$ at a given
$(X,T)\in\mathbb{R}^2$ for arbitrary choice of parameters
$(a,b)\in\mathbb{C}^2$ (indexing the family of solutions) and the scalar B > 0. This is achieved by numerically solving a suitably regularized (via numerical implementation of noncommutative steepest descent techniques) Riemann–Hilbert problem depending on the chosen value of (X, T) and extracting from the solution of that Riemann–Hilbert problem the value of
$\Psi(X,T;\mathbf{G},B)$. The user need not worry about how the parameters affect the computation or about any mechanics underlying the procedure; the computation occurs in a black-box manner. Indeed, one can simply call the main routine
to evaluate
$\Psi(X,T;\mathbf{G},B)$ at
$(X,T)=(2,9.2)$ with parameters
$(a,b)=(2{\mathrm{i}},4)$ and B = 1. The choice of the appropriate deformed Riemann–Hilbert problem to solve numerically is taken care of automatically. In this regard RogueWaveInfiniteNLS.jl resembles the ISTPackage (for Mathematica) by T. Trogdon [Reference Trogdon41]. The ISTPackage includes a suite of (again, black-box) routines for computing the solution of the initial-value problem on the full line with rapidly decaying initial data via the numerical inverse-scattering transform for several integrable systems. See [Reference Trogdon, Olver and Deconinck37] for the Korteweg-de Vries equation, [Reference Trogdon and Olver38] for the focusing and defocusing NLS equations, [Reference Bilman and Trogdon13] for the Toda lattice, for example. The first step of the numerical inverse-scattering transform procedure involved in these works is of course computation of the scattering data associated with the given initial data. In contrast, the Riemann–Hilbert problem representation of
$\Psi(X,T;\mathbf{G}, B)$ given by Riemann–Hilbert Problem 1 does not arise from an initial-value problem or the inverse-scattering transform associated with the NLS equation (1.3); recall Remark 1.21. Therefore, the aforementioned routines for the NLS equation do not apply to the general rogue waves of infinite order studied in this work, and the starting point for the framework implemented in RogueWaveInfiniteNLS.jl is instead the Riemann–Hilbert problem representation itself, which serves as the definition of this special family of solutions of the NLS equation. There is no computation of a forward (or direct) scattering transform. RogueWaveInfiniteNLS.jl relies on the routines available in the software package OperatorApproximation.jl [Reference Trogdon40] to solve the relevant Riemann–Hilbert problems numerically. Full details on the installation, usage, and implementation of the software package RogueWaveInfiniteNLS.jl are provided in Section 5.
Sample codes using RogueWaveInfiniteNLS.jl for the computations underlying the comparison and error plots given in Figure 2, Figure 3, Figure 4, Figure 5, and Figure 10 can be found in the notebook titled Paper-Code.ipynb in the public GitHub repository [2].
Remark 1.27. An alternate method for numerically computing
$\Psi(X,T;\mathbf{G}(a,b))$ in the special case of T = 0 is made possible by Theorem 1.16. Indeed, Bornemann’s implementation [Reference Bornemann16] of Nyström’s method for the numerical evaluation of Fredholm determinants could be a viable approach. It would be interesting to compare this approach with the RogueWaveInfiniteNLS.jl package when T = 0, but doing so is beyond the scope of this paper.
2. Asymptotic behaviour of
$\Psi(X,T;\mathbf{G})$ for large
$|X|$
This section is devoted to proving Theorem 1.18 and Theorem 1.9. We begin with Theorem 1.18. To study
$\Psi(X,T;\mathbf{G})$ for X > 0 large and general
$a,b\in\mathbb{C}$ with ab ≠ 0, it is sufficient in light of Proposition 1.5 to write
$\Psi(X,T;\mathbf{G}(a,b))={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,T;\mathbf{G}(\mathfrak{a},\mathfrak{b}))$ where the normalized parameters
$(\mathfrak{a},\mathfrak{b})$ are defined in terms of (a, b) as in (1.11).
For X > 0, writing
$T=vX^\frac{3}{2}$ and rescaling the spectral parameter Λ by
$\Lambda=X^{-\frac{1}{2}}z$, the phase conjugating the jump matrix in (1.4) for B = 1 takes the form
\begin{align}
\Lambda X+\Lambda^{2} T+2 \Lambda^{-1}=X^{\frac{1}{2}} \vartheta(z ; v), \quad \vartheta(z ; v):= z + v z^2 + 2 z^{-1}.
\end{align} From the solution of Riemann–Hilbert Problem 1 for
$\mathbf{G}=\mathbf{G}(\mathfrak{a},\mathfrak{b})$ with
$\mathfrak{a},\mathfrak{b} \gt 0$ and
$\mathfrak{a}^2+\mathfrak{b}^2=1$, and for brevity omitting G from the argument lists, we define a related matrix
$\mathbf{S}(z;X,v)$ by
\begin{align}
\mathbf{S}(z ; X, v):=\mathbf{P}(X^{-\frac{1}{2}} z ; X, X^{\frac{3}{2}} v ),\quad X \gt 0,
\end{align}and see from (1.6) and Proposition 1.5 that
\begin{align}
\Psi(X, X^{\frac{3}{2}} v )=2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}X^{-\frac{1}{2}} \lim _{z \rightarrow \infty} z S_{12}(z ; X, v), \quad X \gt 0.
\end{align} We consider the limit
$X\to+\infty$ with
$v\in\mathbb{R}$ held fixed (further conditions on v will be introduced shortly). The matrix function
$\mathbf{S}(z ; X, v)$ clearly satisfies
$\mathbf{S}(z ; X, v) \to \mathbb{I}$ as
$z\to\infty$ and it is analytic in the complement of an arbitrary Jordan curve Γ surrounding z = 0 with clockwise orientation. Across Γ,
$\mathbf{S}(z ; X, v)$ satisfies the jump condition
\begin{align}
\mathbf{S}_+(z ; X, v) = \mathbf{S}_-(z ; X, v) {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z;v)\sigma_3}\mathbf{G}(\mathfrak{a},\mathfrak{b}) {\mathrm{e}}^{{\mathrm{i}} X^{1/2}\vartheta(z;v)\sigma_3},\quad z\in \Gamma.
\end{align}2.1. Steepest-descent deformation
The phase
$\vartheta(z;v)$ coincides with the one in [Reference Bilman, Ling and Miller8, Section 4.1]; therefore, we proceed exactly as in [Reference Bilman, Ling and Miller8], assuming that
$|v| \lt 54^{-\frac{1}{2}}$ so that
$\vartheta(z;v)$ has real and simple critical points. Under this assumption, the level curve
$\operatorname{Im}(\vartheta(z;v))=0$ has a component that is a Jordan curve surrounding the origin z = 0 and passing through two distinct (real) critical points of
$\vartheta(z;v)$, which we denote by
$z_1(v) \lt z_2(v)$ with
$z_1(v)z_2(v) \lt 0$. We choose this Jordan curve to be the jump contour Γ for
$\mathbf{S}(z;X,v)$. A third real critical point is present for the indicated range of v only if v ≠ 0, and it lies in the unbounded exterior of Γ. See Figure 11 for the sign charts of
$\operatorname{Im}(\vartheta(z;v))$ as
$v$ varies in the range
$|v| \lt 54^{-\frac{1}{2}}$.

Figure 11. The sign charts of
$\operatorname{Im}(\vartheta(z;v))$ as
$v$ varies in the range
$|v| \lt 54^{-\frac{1}{2}}$.
We define the regions
$L^{\pm}$,
$R^{\pm}$, and
$\Omega^{\pm}$ as shown in the left-hand panel of Figure 12. The explicit formulæ (1.66)–(1.68) for
$z_1(v)$ and
$z_2(v)$ are consequences of Cardano’s formula, since z = 0 cannot be a critical point of
$\vartheta(\diamond;v)$ and hence
$\vartheta'(z;v)=0$ is equivalent to a cubic equation for z.
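Explicitly, $\vartheta'(z;v)=1+2vz-2z^{-2}$, so multiplying through by $z^2$ shows that the critical points are precisely the roots of the cubic
\begin{align*}
2vz^3+z^2-2=0.
\end{align*}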
It suffices to employ the factorizations (1.13) and (1.14) of the matrix
$\mathbf{G}(\mathfrak{a},\mathfrak{b})$ for the steepest descent analysis in this case. We introduce a new unknown matrix function
$\mathbf{T}(z;X,v)$ as a substitution based on these factorizations:
where
$ \boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}= \boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}(z;X,v)$ is defined in various regions by
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}
:= \begin{bmatrix} 1 &0 \\ \displaystyle\frac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} & 1\end{bmatrix}
,\quad z\in L^+,
\qquad
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}
:=
\mathfrak{a}^{-\sigma_3} \begin{bmatrix} 1 &\mathfrak{ab} {\mathrm{e}}^{- 2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} \\ 0 & 1\end{bmatrix}
,\quad z\in R^+,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}
:=
\mathfrak{a}^{\sigma_3} \begin{bmatrix} 1 & 0 \\ -\mathfrak{a b}{\mathrm{e}}^{ 2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} & 1\end{bmatrix}
,\quad z\in R^-,
\qquad
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}
:=
\begin{bmatrix} 1 & -\displaystyle\frac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{-2 {\mathrm{i}} X^{1/2} \vartheta(z;v)}\\ 0 & 1\end{bmatrix}
,\quad z\in L^-.
\end{align} and we simply set
$ \boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}:= \mathbb{I}$ everywhere else. See the left-hand panel of Figure 12 for the definition of the regions
$R^\pm$,
$L^\pm$, and
$\Omega^\pm$. The jump conditions satisfied by
$\mathbf{T}(z;X,v)$ are given by
\begin{align}
\mathbf{T}_+(z;X,v) = \mathbf{T}_-(z;X,v)\mathbf{V}^{\mathbf{T}}(z;X,v),\quad z\in C_L^+\cup C_L^-\cup C_R^+\cup C_R^-\cup I,
\end{align}where
$\mathbf{V}^{\mathbf{T}}=\mathbf{V}^{\mathbf{T}}(z;X,v)$ is defined on various arcs by
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix} 1 &0 \\ -\displaystyle\frac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} & 1\end{bmatrix}
,\quad z\in C_{L}^+,
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix} 1 &\mathfrak{a b}{\mathrm{e}}^{- 2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} \\ 0 & 1\end{bmatrix}
,\quad z\in C_{R}^+.
\end{align}
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix} 1 & 0 \\ -\mathfrak{a b}{\mathrm{e}}^{ 2 {\mathrm{i}} X^{1/2} \vartheta(z;v)} & 1\end{bmatrix}
,\quad z\in C_{R}^-,
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix} 1 & \displaystyle\frac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{-2 {\mathrm{i}} X^{1/2} \vartheta(z;v)}\\ 0 & 1\end{bmatrix}
,\quad z\in C_{L}^-.
\end{align} See the right-hand panel of Figure 12 for definitions of the jump contours
$C_R^\pm$,
$C_L^\pm$, and I.

Figure 12. Left: the regions
$R^\pm$,
$L^\pm$, and
$\Omega^\pm$ used to define
$\mathbf{T}(z;X,v)$. Right: the jump contours of the Riemann–Hilbert problem satisfied by
$\mathbf{T}(z;X,v)$.
Note that we have
$\operatorname{Im}(\vartheta(z;v)) \gt 0$ on
$C_{L}^{+}$ and
$C_{R}^{-}$, and
$\operatorname{Im}(\vartheta(z;v)) \lt 0$ on
$C_{L}^{-}$ and
$C_{R}^{+}$. Therefore, as
$X\to+\infty$ the jump matrices on these contours become exponentially small perturbations of
$\mathbb{I}$ uniformly except near the critical points
$z=z_1(v),z_2(v)$. Since
$\mathbf{S}(z;X,v)\equiv\mathbf{T}(z;X,v)$ for
$|z|$ sufficiently large, we have from (2.3)
\begin{align}
\Psi(X, X^{\frac{3}{2}} v )=2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}X^{-\frac{1}{2}} \lim _{z \rightarrow \infty} z T_{12}(z ; X, v), \quad X \gt 0.
\end{align}2.2. Parametrix construction
The asymptotic analysis as
$X\to+\infty$ requires an outer parametrix and two inner parametrices to be used in small disks centred at
$z=z_1(v),z_2(v)$. Before we proceed, we define a few quantities that let us rewrite the constant factors in the jump matrices in (2.10)–(2.12) in a more convenient way. First, note that the identity
$\mathfrak{a}^2+\mathfrak{b}^2=1$ implies that
\begin{align}
\frac{1}{\mathfrak{a}^2} = 1 + \left(\frac{\mathfrak{b}}{\mathfrak{a}}\right)^2 \gt 1,
\end{align}and hence we may define
\begin{align}
p:= \frac{1}{2\pi} \ln\left( 1 + \left( \frac{\mathfrak{b}}{\mathfrak{a}}\right)^2 \right) \gt 0,
\end{align} so that the jump matrix in (2.11) reads
$\mathbf{V}^\mathbf{T}={\mathrm{e}}^{2\pi p\sigma_3}$. Next, introduce
\begin{align}
\tau:=\frac{\mathfrak{b}}{\mathfrak{a}}.
\end{align} Then, using again
$\mathfrak{a}^2+\mathfrak{b}^2=1$,
\begin{align}
\mathfrak{a b} = \frac{\mathfrak{b}}{\mathfrak{a}} \mathfrak{a}^2 = \tau {\mathrm{e}}^{-2\pi p}.
\end{align}2.2.1. Outer parametrix
We seek an outer parametrix
$\breve{\mathbf{T}}^{\mathrm{out}}(z)$ with the following properties:
•
$\breve{\mathbf{T}}^{\mathrm{out}}(z)$ is analytic in z for
$z\in\mathbb{C}\setminus I$.•
$\breve{\mathbf{T}}^{\mathrm{out}}(z) \to \mathbb{I}$ as
$z\to\infty$.• The jump condition satisfied by
$\breve{\mathbf{T}}^{\mathrm{out}}(z)$ on I agrees exactly with (2.9) with jump matrix (2.11).
We define, using the principal branch of the power function,
\begin{align}
\breve{\mathbf{T}}^{\mathrm{out}}(z)=\breve{\mathbf{T}}^{\mathrm{out}}(z;v):= \left( \frac{z-z_1(v)}{z-z_2(v)}\right)^{{\mathrm{i}} p \sigma_3},
\end{align}which satisfies all of the aforementioned properties, where the value of p is given in (2.15).
2.2.2. Inner parametrices
We now construct inner parametrices to be used within disks
$D_{z_1}(\delta)$ and
$D_{z_2}(\delta)$ centred at
$z=z_1(v)$ and
$z=z_2(v)$, respectively, with sufficiently small and fixed radius δ > 0 independent of X. We recall that
$\vartheta'(z_1(v);v)=\vartheta'(z_2(v);v)=0$ for
$|v| \lt v_\mathrm{c}$ and note that
$\vartheta''(z_1(v);v) \lt 0$, whereas
$\vartheta''(z_2(v);v) \gt 0$. Accordingly, we define the conformal mappingsFootnote 5
$\varphi_{z_1}(z;v)$ and
$\varphi_{z_2}(z;v)$ locally near
$z=z_1(v)$ and
$z=z_2(v)$, respectively, by the equations
and we choose the analytic solutions satisfying
$\varphi_{z_1}'(z_1(v) ; v) \lt 0$ and
$\varphi_{z_2}'(z_2(v) ; v) \gt 0$. Next, introducing the rescaled conformal coordinates
$\zeta_{z_1}:=X^{\frac{1}{4}}\varphi_{z_1}$ and
$\zeta_{z_2}:=X^{\frac{1}{4}}\varphi_{z_2}$, we observe that the jump conditions satisfied by
\begin{align}
{\mathbf{U}}^{z_1} := \mathbf{T} {\mathrm{e}}^{-{\mathrm{i}} X^{\frac{1}{2}}\vartheta(z_1(v);v)\sigma_3}({\mathrm{i}} \sigma_2),\quad \text{for}\ z\ \text{near}\ z_1
\end{align}and by
\begin{align}
{\mathbf{U}}^{z_2} := \mathbf{T} {\mathrm{e}}^{-{\mathrm{i}} X^{\frac{1}{2}}\vartheta(z_2(v);v)\sigma_3},\quad \text{for}\ z\ \text{near}\ z_2
\end{align} take exactly the same form when expressed in terms of the respective conformal coordinates
$\zeta=\zeta_{z_1}$ and
$\zeta=\zeta_{z_2}$ and when the jump contours are locally taken to coincide with the five rays
$\arg(\zeta)= \pm \frac{1}{4}\pi$,
$\arg(\zeta)= \pm \frac{3}{4}\pi$, and
$\arg(-\zeta)= 0$. Moreover, these jump conditions coincide exactly with those in (for example) [Reference Miller25, Riemann–Hilbert Problem A.1] for a standard parabolic cylinder parametrix. See Figure 13 for the jump contours and matrices for
$\mathbf{U}^{z_j}$ expressed in the coordinate
$\zeta=\zeta_{z_j}$ for
$j=1,2$.

Figure 13. The jump contours and jump matrices near
$z=z_1(v)$ and
$z=z_2(v)$ take the form given in this figure when expressed in the rescaled conformal coordinates
$\zeta=\zeta_{z_1}$ and
$\zeta=\zeta_{z_2}$, respectively, for which
$\zeta=0$ is the image of
$z=z_{1,2}(v)$. Compare with [Reference Miller25, Figure 9].
Note that the consistency condition
$\tau^2 = {\mathrm{e}}^{2\pi p} - 1$ for the jump matrices at ζ = 0 is satisfied by definition of p and τ; see (2.15) and (2.16).
We now let
$\mathbf{U}(\zeta)=\mathbf{U}(\zeta;p,\tau)$ denote the unique solution of [Reference Miller25, Riemann–Hilbert Problem A.1]. This solution has the following important properties.
•
$\mathbf{U}(\zeta)$ is analytic in the five sectors shown in Figure 13, which are
$S_0: |\arg(\zeta)| \lt \frac{1}{4}\pi$,
$S_{1}: \frac{1}{4}\pi \lt \arg (\zeta) \lt \frac{3}{4}\pi$,
$S_{-1}:-\frac{3}{4}\pi \lt \arg (\zeta) \lt -\frac{1}{4}\pi$,
$S_{2}: \frac{3}{4}\pi \lt \arg (\zeta) \lt \pi$, and
$S_{-2}:-\pi \lt \arg (\zeta) \lt -\frac{3}{4}\pi$.•
$\mathbf{U}(\zeta)$ takes continuous boundary values on the excluded rays and at the origin from each of the five sectors, which are related by the jump condition
$\mathbf{U}_+(\zeta) = \mathbf{U}_-(\zeta) \mathbf{V}^{\mathrm{PC}}(\zeta)$, where the jump contours and the jump matrix
$\mathbf{V}^{\mathrm{PC}}(\zeta)$ are given in Figure 13.• Importantly, the diagonal (resp., off-diagonal) part of
$\mathbf{U}(\zeta)\zeta^{{\mathrm{i}} p \sigma_3}$ has a complete asymptotic expansion in descending even (resp., odd) powers of ζ as
$\zeta \to \infty$, with coefficients that are independent of the sector in which
$\zeta\to \infty$. In more detail, we have
(2.22)
\begin{align}
\mathbf{U}(\zeta ; p, \tau) \zeta^{{\mathrm{i}} p \sigma_{3}}=\mathbb{I}+\frac{1}{2{\mathrm{i}} \zeta}
\begin{bmatrix}
0 & r(p, \tau) \\
-s(p, \tau) & 0
\end{bmatrix}+\begin{bmatrix}
O\left(\zeta^{-2}\right) & O\left(\zeta^{-3}\right) \\
O\left(\zeta^{-3}\right) & O\left(\zeta^{-2}\right)
\end{bmatrix}, \quad \zeta \rightarrow \infty.
\end{align}See [Reference Miller25, Eqn. (A.9)]. Here the error terms in (2.22) are uniform for bounded τ and p,
(2.23)
\begin{align}
r(p,\tau) := 2 {\mathrm{e}}^{{\mathrm{i}} \frac{1}{4}\pi}\sqrt{\pi}\frac{{\mathrm{e}}^{\frac{1}{2}\pi p} {\mathrm{e}}^{{\mathrm{i}} p \ln(2)}}{\tau \Gamma({\mathrm{i}} p)},
\end{align}where
$\Gamma(\diamond)$ is the Euler gamma function, and
(2.24)
\begin{align}
s(p,\tau) := - \frac{2p}{r(p,\tau)} = \frac{1}{\sqrt{\pi}}{\mathrm{e}}^{{\mathrm{i}} \frac{3}{4}\pi} \tau p \Gamma({\mathrm{i}} p) {\mathrm{e}}^{-\frac{1}{2}\pi p} {\mathrm{e}}^{-{\mathrm{i}} p \ln(2)}.
\end{align}Again, see [Reference Miller25, Eqn. (A.7) and Eqn. (A.8)]. In the special case p > 0 relevant here, it follows that
$s(p,\tau)=-r(p,\tau)^*$. Then (2.24) implies that
$|r(p,\tau)| = \sqrt{2 p}$ as well as
$|s(p,\tau)| = \sqrt{2 p}$; see [Reference Miller25, Remark A.2]. Therefore,
(2.25)
\begin{align}
r(p,\tau) = \sqrt{2 p } {\mathrm{e}}^{{\mathrm{i}}(\frac{1}{4}\pi+ p \ln(2) - \arg(\Gamma({\mathrm{i}} p)))}\qquad\text{and}\qquad
s(p,\tau) = -\sqrt{2 p } {\mathrm{e}}^{-{\mathrm{i}}(\frac{1}{4}\pi+ p \ln(2) - \arg(\Gamma({\mathrm{i}} p)))}.
\end{align}
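For the reader's convenience, here is the short computation behind the modulus statements (our own verification, using the reflection formula for the gamma function and the normalization $\mathfrak{a}^2+\mathfrak{b}^2=1$):
\begin{align*}
|\Gamma({\mathrm{i}} p)|^2=\Gamma({\mathrm{i}} p)\Gamma(-{\mathrm{i}} p)=\frac{\pi}{p\sinh(\pi p)},\qquad
{\mathrm{e}}^{\pi p}\sinh(\pi p)=\tfrac{1}{2}\left({\mathrm{e}}^{2\pi p}-1\right)=\tfrac{1}{2}\tau^2,
\end{align*}
so that
\begin{align*}
|r(p,\tau)|^2=\frac{4\pi\,{\mathrm{e}}^{\pi p}}{\tau^2|\Gamma({\mathrm{i}} p)|^2}=\frac{4p\,{\mathrm{e}}^{\pi p}\sinh(\pi p)}{\tau^2}=2p,
\qquad |s(p,\tau)|=\frac{2p}{|r(p,\tau)|}=\sqrt{2p}.
\end{align*}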
We define the inner parametrices by
\begin{align}
\breve{\mathbf{T}}^{z_1}(z;X,v) := \mathbf{Y}^{z_1}(z;X,v) \mathbf{U}(\zeta_{z_1}) ({\mathrm{i}} \sigma_2)^{-1} {\mathrm{e}}^{{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3}, \quad z\in D_{z_1}(\delta),
\end{align}and
\begin{align}
\breve{\mathbf{T}}^{z_2}(z;X,v) := \mathbf{Y}^{z_2}(z;X,v) \mathbf{U}(\zeta_{z_2}) {\mathrm{e}}^{{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3}, \quad z\in D_{z_2}(\delta),
\end{align} where the holomorphic prefactor matrices
$\mathbf{Y}^{z_1}(z;X,v) $ and
$\mathbf{Y}^{z_2}(z;X,v) $ will be chosen to ensure that the mismatch with the outer parametrix on the boundaries of the disks has the behaviour
$\mathbb{I} + o(1)$ as
$X\to +\infty$. To specify these prefactors, we first express the outer parametrix near
$z=z_1$ and near
$z=z_2$ in terms of the relevant conformal coordinate to see that
\begin{align}
\begin{aligned}
\breve{\mathbf{T}}^{\mathrm{out}}(z;v) {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3}({\mathrm{i}} \sigma_2) &= X^{-{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3} \mathbf{H}^{z_1}(z;v) \zeta_{z_1}^{-{\mathrm{i}} p\sigma_3},\\
\mathbf{H}^{z_1}(z;v)&:= (z_2(v)-z)^{-{\mathrm{i}} p \sigma_3}\left( \frac{z_1(v) - z}{\varphi_{z_1}(z;v)}\right)^{{\mathrm{i}} p \sigma_3}({\mathrm{i}} \sigma_2),
\end{aligned}
\end{align}and
\begin{align}
\begin{aligned}
\breve{\mathbf{T}}^{\mathrm{out}}(z;v) {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3}&= X^{{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3} \mathbf{H}^{z_2}(z;v) \zeta_{z_2}^{-{\mathrm{i}} p\sigma_3},\\
\mathbf{H}^{z_2}(z;v)&:= (z-z_1(v))^{{\mathrm{i}} p \sigma_3}\left( \frac{\varphi_{z_2}(z;v)}{z-z_2(v)}\right)^{{\mathrm{i}} p \sigma_3}.
\end{aligned}
\end{align} Here all power functions are defined as principal branches, so it is easy to verify that the matrix functions
$\mathbf{H}^{z_1}(z;v)$ and
$\mathbf{H}^{z_2}(z;v)$ are holomorphic in neighborhoods of
$z=z_1$ and
$z=z_2$, respectively. The product of factors to the left of
$\zeta_{z_1}^{-{\mathrm{i}} p\sigma_3}$ in (2.28) and to the left of
$\zeta_{z_2}^{-{\mathrm{i}} p\sigma_3}$ in (2.29) determine the holomorphic prefactors in (2.26) and (2.27), respectively. Thus, we define the inner parametrices by (2.26) with
\begin{align}
\mathbf{Y}^{z_1}(z;X,v):=X^{-{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3} \mathbf{H}^{z_1}(z;v) ,
\quad z\in D_{z_1}(\delta),
\end{align}and by (2.27) with
\begin{align}
\mathbf{Y}^{z_2}(z;X,v):=X^{{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3} \mathbf{H}^{z_2}(z;v),
\quad z\in D_{z_2}(\delta).
\end{align} The analyticity properties and the jump conditions satisfied by
$\mathbf{U}(\zeta)$ imply that the inner parametrices
$\breve{\mathbf{T}}^{z_1}(z;X,v) $ and
$\breve{\mathbf{T}}^{z_2}(z;X,v) $ exactly satisfy the jump conditions for
$\mathbf{T}(z;X,v)$ within their respective disks after deforming the jump contours in the disk to agree with the preimages under
$z\mapsto \zeta_{z_1,z_2}$ of the five rays shown in Figure 13.
Remark 2.1. The unique solution
$\mathbf{U}(\zeta)=\mathbf{U}(\zeta;p,\tau)$ of [Reference Miller25, Riemann–Hilbert Problem A.1] becomes the identity matrix if p = 0. Note from (2.15) that as
$\mathfrak{b}\to 0$, we have
$p\to 0^+$. Therefore, both of the inner parametrices degenerate to the identity matrix as
$\mathfrak{b}\to 0$.
We define the global parametrix
$\breve{\mathbf{T}}(z;X,v)$ by
\begin{align}
\breve{\mathbf{T}}(z;X,v):=\begin{cases}
\breve{\mathbf{T}}^{z_1}(z;X,v),&\quad z\in D_{z_1}(\delta),\\
\breve{\mathbf{T}}^{z_2}(z;X,v),&\quad z\in D_{z_2}(\delta),\\
\breve{\mathbf{T}}^{\mathrm{out}}(z;v),&\quad z \in \mathbb{C} \setminus \left(I \cup D_{z_1}(\delta) \cup D_{z_2}(\delta)\right).
\end{cases}
\end{align}2.3. Asymptotics as
$X\to+\infty$
We compare the unknown
$\mathbf{T}(z;X,v)$ with the global parametrix
$\breve{\mathbf{T}}(z;X,v)$ and define the error
\begin{align}
\mathbf{F}(z;X,v):=\mathbf{T}(z;X,v)\breve{\mathbf{T}}(z;X,v)^{-1},
\end{align}with the meaning that the size of
$\mathbf{F}-\mathbb{I}$ measures the accuracy of approximating T with
$\breve{\mathbf{T}}$. As
$\breve{\mathbf{T}}(z;X,v)$ is an exact solution of the jump conditions satisfied by
${\mathbf{T}}(z;X,v)$ on the part of I outside the disks
$D_{z_1}(\delta)$ and
$D_{z_2}(\delta)$ and on the arcs inside these disks, it follows that
$\mathbf{F}(z;X,v)$ extends as an analytic function of
$z \in \mathbb{C}\setminus \Sigma_{\mathbf{F}}$, where the contour
$\Sigma_{\mathbf{F}}$ consists of the arcs of
$C_{L}^{\pm}$ and
$C_{R}^{\pm}$ lying outside of the disks
$D_{z_1, z_2}(\delta)$, and the boundaries
$\partial D_{z_1, z_2}(\delta)$. We orient the circular boundaries
$\partial D_{z_1, z_2}(\delta)$ clockwise, and consider the jump matrix
$\mathbf{V}^{\mathbf{F}}(z;X,v)$ that relates the boundary values of
$\mathbf{F}(z;X,v)$ through the jump condition
\begin{align}
\mathbf{F}_+(z;X,v) = \mathbf{F}_-(z;X,v)\mathbf{V}^{\mathbf{F}}(z;X,v),\quad z\in\Sigma_{\mathbf{F}}.
\end{align}Since
$\breve{\mathbf{T}}^{\mathrm{out}}(z;v)$ is analytic in z for
$z\in (C_{L}^{\pm}\cup C_{R}^{\pm} ) \cap \Sigma_{\mathbf{F}}$, we may express the jump matrix
$\mathbf{V}^{\mathbf{F}}(z;X,v)$ for
$\mathbf{F}(z;X,v)$ on these arcs as
\begin{align}\begin{aligned}
\mathbf{V}^{\mathbf{F}}(z;X,v) = \mathbf{F}_-(z;X,v)^{-1} \mathbf{F}_+(z;X,v) =\breve{\mathbf{T}}^{\mathrm{out}}(z ; v) \mathbf{T}_{-}(z ; X, v)^{-1} \mathbf{T}_{+}(z ; X, v) \breve{\mathbf{T}}^{\mathrm{out}}(z ; v)^{-1},\\
z \in\left(C_{L}^{\pm} \cup C_{R}^{\pm}\right) \cap \Sigma_{\mathbf{F}}
\end{aligned}
\end{align} Since δ > 0 is fixed,
$\breve{\mathbf{T}}^{\mathrm{out}}(z;v)$ is independent of X, and z is restricted to the arcs of
$C_{L}^{\pm} \cup C_{R}^{\pm}$ that lie outside the disks
$D_{z_1,z_2}(\delta)$, on which the jump matrix for
$\mathbf{T}(z;X,v)$ is an exponentially small perturbation of
$\mathbb{I}$ as
$X\to+\infty$, there exists a constant
$K(\varepsilon) \gt 0$ such that
\begin{align}
\sup _{z \in\left(C_{L}^{\pm} \cup C_{R}^{\pm}\right) \cap \Sigma_{\mathbf{F}}}\left\|\mathbf{V}^{\mathbf{F}}(z ; X, v)-\mathbb{I}\right\|=O({\mathrm{e}}^{-K(\varepsilon)X^{1/2}}),\quad X\to+\infty,
\end{align} holds uniformly for
$|v|\le 54^{-\frac{1}{2}}-\varepsilon$ and normalized parameters
$(\mathfrak{a},\mathfrak{b})$ with
$\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$, where
$\|\diamond \|$ denotes the matrix norm induced from an arbitrary norm on
$\mathbb{C}^{2}$.
To analyze the jump
$\mathbf{V}^{\mathbf{F}}(z;X,v)$ on the circular boundaries
$\partial D_{z_1}(\delta)$ and
$\partial D_{z_2}(\delta)$, we use the fact that
$\mathbf{T}(z;X , v)$ is analytic in z at all but finitely many points on
$\partial D_{z_1}(\delta)$ and
$\partial D_{z_2}(\delta)$ and hence observe that
\begin{align}
\mathbf{V}^\mathbf{F}(z;X,v) =
\begin{cases}
\breve{\mathbf{T}}^{z_1}(z;X,v) \breve{\mathbf{T}}^{\mathrm{out}}(z;v)^{-1},&\quad \partial D_{z_1}(\delta),\\
\breve{\mathbf{T}}^{z_2}(z;X,v) \breve{\mathbf{T}}^{\mathrm{out}}(z;v)^{-1},&\quad \partial D_{z_2}(\delta).
\end{cases}
\end{align} Then, recalling that
$\zeta_{z_1} = X^{\frac{1}{4}}\varphi_{z_1}(z;v)$, from (2.22), (2.26), and (2.30) we have
\begin{align}\begin{aligned}
\mathbf{V}^\mathbf{F}(z;X,v) & =
X^{-{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3} \mathbf{H}^{z_1}(z;v) \\
& \quad \cdot \left( \mathbb{I} + \frac{1}{2{\mathrm{i}} X^{\frac{1}{4}}\varphi_{z_1}(z;v)} \begin{bmatrix}
0 & r(p, \tau) \\
-s(p, \tau) & 0
\end{bmatrix}+\begin{bmatrix}
O(X^{-\frac{1}{2}}) & O(X^{-\frac{3}{4}}) \\
O(X^{-\frac{3}{4}}) & O(X^{-\frac{1}{2}})
\end{bmatrix}
\right)\\
& \quad \cdot \mathbf{H}^{z_1}(z;v)^{-1} {\mathrm{e}}^{{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)\sigma_3} X^{{\mathrm{i}}\frac{1}{4} p\sigma_3},\quad z\in\partial D_{z_1}(\delta).
\end{aligned}
\end{align} Similarly, recalling that
$\zeta_{z_2} = X^{\frac{1}{4}}\varphi_{z_2}(z;v)$, from (2.22), (2.27), and (2.31) we have
\begin{align}\begin{aligned}
\mathbf{V}^\mathbf{F}(z;X,v) & =
X^{{\mathrm{i}}\frac{1}{4} p\sigma_3} {\mathrm{e}}^{-{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3} \mathbf{H}^{z_2}(z;v) \\
& \quad \cdot \left( \mathbb{I} + \frac{1}{2{\mathrm{i}} X^{\frac{1}{4}}\varphi_{z_2}(z;v)} \begin{bmatrix}
0 & r(p, \tau) \\
-s(p, \tau) & 0
\end{bmatrix}+\begin{bmatrix}
O(X^{-\frac{1}{2}}) & O(X^{-\frac{3}{4}}) \\
O(X^{-\frac{3}{4}}) & O(X^{-\frac{1}{2}})
\end{bmatrix}
\right)\\
& \quad \cdot \mathbf{H}^{z_2}(z;v)^{-1} {\mathrm{e}}^{{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)\sigma_3} X^{-{\mathrm{i}}\frac{1}{4} p\sigma_3},\quad z\in\partial D_{z_2}(\delta).
\end{aligned}
\end{align} In both (2.38) and (2.39) the error terms are uniform for normalized parameters
$(\mathfrak{a},\mathfrak{b})$ with
$\mathfrak{b}/\mathfrak{a}$ bounded because the latter condition implies that p and τ are bounded. Combining (2.36), (2.38), and (2.39) gives in particular the uniform estimate
\begin{align}
\sup_{z\in \Sigma_{\mathbf{F}}} \| \mathbf{V}^{\mathbf{F}}(z;X,v) - \mathbb{I}\| = O_\varepsilon(X^{-\frac{1}{4}}), \quad X\to+\infty,\quad |v|\le 54^{-\frac{1}{2}}-\varepsilon,\; \mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}.
\end{align} Here, the notation
$O_\varepsilon(\diamond)$ indicates that the implied constant depends on ɛ > 0. Since
$\Sigma_\mathbf{F}$ is a compact contour and
$\mathbf{F}(z;X,v)$ is analytic in z for
$z\in\mathbb{C}\setminus \Sigma_\mathbf{F} $ with
$\mathbf{F}(z;X,v) \to \mathbb{I}$ as
$z\to\infty$, the small-norm theory for such Riemann–Hilbert problems implies that the error
$\mathbf{F}(z;X,v) $ satisfies
\begin{align}
\mathbf{F}_-(\diamond; X,v ) - \mathbb{I} = O_\varepsilon(X^{-\frac{1}{4}}),\quad X\to +\infty,\quad |v|\le54^{-\frac{1}{2}}-\varepsilon,\; \mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}
\end{align} in the
$L^2(\Sigma_\mathbf{F})$ sense. See [Reference Bilman, Ling and Miller8, Section 4.1.3] for details of the argument implying this estimate. Reformulating the jump condition (2.34) in the form
$\mathbf{F}_+ - \mathbf{F}_- = \mathbf{F}_-(\mathbf{V}^{\mathbf{F}} - \mathbb{I})$ and using the fact that F tends to the identity matrix as
$z\to\infty$ and that
$\mathbf{F}_\pm(\diamond;X,v)-\mathbb{I}\in L^2(\Sigma_\mathbf{F})$, we obtain from the Plemelj formula
\begin{align}
\mathbf{F}(z;X,v) = \mathbb{I} + \frac{1}{2\pi {\mathrm{i}}} \int_{\Sigma_\mathbf{F}} \frac{\mathbf{F}_-(\zeta;X,v)(\mathbf{V}^{\mathbf{F}}(\zeta;X,v) - \mathbb{I})}{\zeta-z} \mathrm{d} \zeta,\quad z\in\mathbb{C}\setminus \Sigma_{\mathbf{F}}.
\end{align}Now, we have
\begin{align}
\mathbf{T}(z;X,v)=\mathbf{F}(z;X,v)\breve{\mathbf{T}}^{\mathrm{out}}(z;v)
\end{align}holding for
$|z|$ sufficiently large. Recall from (2.18) that
$\breve{\mathbf{T}}^{\mathrm{out}}(z;v)$ is a diagonal matrix that tends to
$\mathbb{I}$ as
$z\to \infty$. Thus, we see from (2.13) that
\begin{align}
\Psi(X,X^{\frac{3}{2}}v) = 2{\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}} \arg(ab)} X^{-\frac{1}{2}}\lim_{z\to\infty} z F_{12}(z;X,v).
\end{align} Then, using the Laurent series expansion of
$\mathbf{F}(z;X,v)$ obtained from (2.42) and convergent for
$|z| \gt \sup_{s\in\Sigma_\mathbf{F}}|s|$, we arrive at the exact formula
\begin{align}\begin{aligned}
\Psi(X, X^{\frac{3}{2}} v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}} \arg(ab)}}{\pi X^{\frac{1}{2}}}\left[\int_{\Sigma_{\mathbf{F}}} F_{11-}(z ; X, v) V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z \right.\\
\left.{}+\int_{\Sigma_{\mathbf{F}}} F_{12-}(z ; X, v)\left(V_{22}^{\mathbf{F}}(z; X, v)-1\right) \mathrm{d} z\right].
\end{aligned}
\end{align} Looking at the diagonal elements of (2.38) and (2.39) along with (2.36), we see that
${V}_{22}^{\mathbf{F}}(\diamond; X,v) - 1 = O_\varepsilon(X^{-\frac{1}{2}})$ holds uniformly on
$\Sigma_\mathbf{F}$. Since
$\Sigma_\mathbf{F}$ is compact, we also have
${V}_{22}^{\mathbf{F}}(\diamond; X,v) - 1 = O_\varepsilon(X^{-\frac{1}{2}})$ in
$L^2(\Sigma_{\mathbf{F}})$ as
$X\to +\infty$. On the other hand, the $L^2$ estimate (2.41) implies that
$F_{12-}(\diamond;X,v) = O_\varepsilon(X^{-\frac{1}{4}})$ holds in
$L^2(\Sigma_{\mathbf{F}})$. Thus, applying the Cauchy–Schwarz inequality to the modulus of the second integral in (2.45) yields
\begin{align}
\Psi(X, X^{\frac{3}{2}} v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}} \arg(ab)}}{\pi X^{\frac{1}{2}}} \int_{\Sigma_{\mathbf{F}}} F_{11-}(z ; X, v) V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z + O_\varepsilon(X^{-\frac{5}{4}}),\quad X\to +\infty,
\end{align} when
$|v|\leq 54^{-\frac{1}{2}}-\varepsilon$ and
$\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$. We express the integral in (2.46) as
\begin{align}
\int_{\Sigma_{\mathbf{F}}} F_{11-}(z ; X, v) V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z = \int_{\Sigma_{\mathbf{F}}} (F_{11-}(z ; X, v) - 1 ) V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z + \int_{\Sigma_{\mathbf{F}}} V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z.
\end{align} By the Cauchy–Schwarz inequality and the fact that, because
$\Sigma_\mathbf{F}$ is compact, the $L^2$-norm is subordinate to the
$L^\infty$-norm,
\begin{align}
\int_{\Sigma_{\mathbf{F}}}(F_{11-}(z;X,v)-1)V_{12}^{\mathbf{F}}(z;X,v)\, \mathrm{d} z = O\left(\|F_{11-}(\diamond;X,v)-1\|_{L^2(\Sigma_\mathbf{F})}\|V_{12}^{\mathbf{F}}(\diamond;X,v)\|_{L^\infty(\Sigma_\mathbf{F})}\right).
\end{align} Now the conjugating factors in (2.38) and (2.39) are off-diagonal and diagonal respectively, and these factors are also oscillatory. Therefore, using the exponential bound (2.36) gives the estimates
${V}_{12}^{\mathbf{F}}(\diamond; X,v) = O_\varepsilon(X^{-\frac{1}{4}})$,
$V_{21}^{\mathbf{F}}(\diamond;X,v)=O_\varepsilon(X^{-\frac{1}{4}})$, and
$V_{11}^{\mathbf{F}}(\diamond;X,v)-1=O_\varepsilon(X^{-\frac{1}{2}})$ all holding in the
$L^{\infty}(\Sigma_\mathbf{F})$ sense. For the $L^2$ estimate of
$F_{11-}(\diamond;X,v)-1$, we can improve upon (2.41) by using the fact that the “minus” boundary value of the Cauchy integral on the right-hand side of (2.42) is bounded as a linear operator in
$L^2(\Sigma_\mathbf{F})$ acting on the matrix-valued numerator to obtain from the 11-entry:
\begin{align}\begin{aligned}
\|F_{11-}(\diamond;X,v)-1\|_{L^2(\Sigma_\mathbf{F})} & \le C(\Sigma_\mathbf{F})\left[\|(F_{11-}(\diamond;X,v)-1)(V_{11}^{\mathbf{F}}(\diamond;X,v)-1)\|_{L^2(\Sigma_\mathbf{F})}\right.\\
& \quad \left.{}+ \|V_{11}^{\mathbf{F}}(\diamond;X,v)-1\|_{L^2(\Sigma_\mathbf{F})} + \|F_{12-}(\diamond;X,v)V_{21}^{\mathbf{F}}(\diamond;X,v)\|_{L^2(\Sigma_\mathbf{F})}\right].
\end{aligned}
\end{align} The bound
$C(\Sigma_\mathbf{F})$ of the operator norm depends only on the contour
$\Sigma_\mathbf{F}$, which in turn depends on v but neither on X nor on the normalized parameters
$(\mathfrak{a},\mathfrak{b})$; however, it may be taken to depend only on ɛ if
$|v|\le 54^{-\frac{1}{2}}-\varepsilon$. Using the
$L^\infty(\Sigma_\mathbf{F})$ estimate of
$V^{\mathbf{F}}_{11}(\diamond;X,v)-1$, this yields (assuming X > 0 is sufficiently large)
\begin{align}
\begin{split}
\|F_{11-}(\diamond;X,v)-1\|_{L^2(\Sigma_\mathbf{F})} &= O_\varepsilon\left(\|V_{11}^{\mathbf{F}}(\diamond;X,v)-1\|_{L^2(\Sigma_\mathbf{F})}\right) + O_\varepsilon\left( \|F_{12-}(\diamond;X,v)V_{21}^{\mathbf{F}}(\diamond;X,v)\|_{L^2(\Sigma_\mathbf{F})}\right)\\
&= O_\varepsilon\left(\|V_{11}^{\mathbf{F}}(\diamond;X,v)-1\|_{L^\infty(\Sigma_\mathbf{F})}\right) \\
&\quad+ O_\varepsilon\left( \|F_{12-}(\diamond;X,v)\|_{L^2(\Sigma_\mathbf{F})}\|V_{21}^{\mathbf{F}}(\diamond;X,v)\|_{L^\infty(\Sigma_\mathbf{F})}\right),
\end{split}
\end{align} again using the fact that
$\Sigma_\mathbf{F}$ is compact. Combining the
$L^\infty$ estimates for
$V_{11}^\mathbf{F}(\diamond;X,v)-1$ and
$V_{21}^\mathbf{F}(\diamond;X,v)$ with the
$O_\varepsilon(X^{-\frac{1}{4}})$ estimate of
$F_{12-}(\diamond;X,v)$ in $L^2$ implied by (2.41) then shows that
$F_{11-}(\diamond;X,v)-1=O_\varepsilon(X^{-\frac{1}{2}})$ holds in the
$L^2(\Sigma_\mathbf{F})$ sense, provided
$|v|\le 54^{-\frac{1}{2}}-\varepsilon$ and
$\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$. Using this and the
$O(X^{-\frac{1}{4}})$ estimate of
$V_{12}^\mathbf{F}(\diamond;X,v)$ in
$L^\infty(\Sigma_\mathbf{F})$ in (2.48) then shows that the first term on the right-hand side of (2.47) is
$O_\varepsilon(X^{-\frac{3}{4}})$ as
$X\to+\infty$, and hence its contribution to (2.46) can be absorbed into the error term already present in that asymptotic formula. Therefore, (2.46) gives a formula for
$\Psi(X,X^\frac{3}{2}v)$ with an explicit leading term:
\begin{align}\begin{aligned}
\Psi(X, X^{\frac{3}{2}} v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}} \arg(ab)}}{\pi X^{\frac{1}{2}}} \int_{\Sigma_{\mathbf{F}}} V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z + O_\varepsilon(X^{-\frac{5}{4}}),\\ X\to +\infty,
\quad |v|\le 54^{-\frac{1}{2}}-\varepsilon,\; \mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}.
\end{aligned}
\end{align} Recalling the exponential decay (2.36), we see that the estimate (2.51) holds, with a different error term of the same size, if we replace the contour of integration
$\Sigma_{\mathbf{F}}$ by
$\partial D_{z_1}(\delta) \cup \partial D_{z_2}(\delta)$. Thus, we arrive at
\begin{align}\begin{aligned}
\Psi(X, X^{\frac{3}{2}} v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}} \arg(ab)}}{\pi X^{\frac{1}{2}}} \int_{\partial D_{z_1}(\delta) \cup \partial D_{z_2}(\delta)} V_{12}^{\mathbf{F}}(z ; X, v) \mathrm{d} z + O_\varepsilon(X^{-\frac{5}{4}}),\\ X\to +\infty,\quad |v|\le54^{-\frac{1}{2}}-\varepsilon,\;\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}.
\end{aligned}
\end{align} Since
$\mathbf{H}^{z_1}(z;v)$ is an off-diagonal matrix (see (2.28)), (2.38) shows that as
$X\to+\infty$,
\begin{align}
V_{12}^{\mathbf{F}}(z;X,v) = \frac{X^{-{\mathrm{i}} \frac{1}{2} p} {\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)}}{2{\mathrm{i}} X^{\frac{1}{4}} \varphi_{z_1}(z;v)}s(p,\tau) H^{z_1}_{12}(z;v)^2 + O_\varepsilon(X^{-\frac{3}{4}}),\quad z\in \partial D_{z_1}(\delta),
\end{align} and since
$\mathbf{H}^{z_2}(z;v)$ is a diagonal matrix (recall (2.29)), (2.39) shows that as
$X\to+\infty$,
\begin{align}
V_{12}^{\mathbf{F}}(z;X,v) = \frac{X^{{\mathrm{i}} \frac{1}{2} p} {\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)}}{2{\mathrm{i}} X^{\frac{1}{4}} \varphi_{z_2}(z;v)}r(p,\tau) H^{z_2}_{11}(z;v)^2 + O_\varepsilon(X^{-\frac{3}{4}}),\quad z\in \partial D_{z_2}(\delta),
\end{align} where both of the errors are uniform on the indicated boundary contours, provided
$|v| \lt 54^{-\frac{1}{2}}-\varepsilon$ and
$\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$. We can compute the integrals of the explicit leading terms in (2.53) and (2.54) on the relevant circular boundaries by residues at
$z=z_1, z_2$ since
$\varphi_{z_1}(z;v)$ has a simple zero at
$z=z_1(v)$ and
$\varphi_{z_2}(z;v)$ has a simple zero at
$z=z_2(v)$, while
$\mathbf{H}^{z_1}(z;v)$ and
$\mathbf{H}^{z_2}(z;v)$ are analytic in
$D_{z_1}(\delta)$ and
$D_{z_2}(\delta)$, respectively. Doing so in (2.52), taking into account the clockwise orientation of the circles
$\partial D_{z_1,z_2}(\delta)$, yields, as
$X\to+\infty$,
\begin{align}\begin{aligned}
\Psi(X, X^{\frac{3}{2}} v) & =\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{X^{\frac{3}{4}}}\left[ X^{-{\mathrm{i}} \frac{1}{2} p} {\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)} \frac{s(p,\tau) H^{z_1}_{12}(z_1(v);v)^2}{\varphi_{z_1}^\prime (z_1(v);v)}\right. \\
& \quad \left.+ X^{{\mathrm{i}} \frac{1}{2} p} {\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)} \frac{r(p,\tau) H^{z_2}_{11}(z_2(v);v)^2}{\varphi_{z_2}^\prime(z_2(v);v)} \right] + O_\varepsilon(X^{-\frac{5}{4}}),
\end{aligned}
\end{align} assuming
$|v| \lt 54^{-\frac{1}{2}}-\varepsilon$ and
$\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$.
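For the reader's convenience, here is the elementary residue evaluation behind the last step, written out for the circle $\partial D_{z_1}(\delta)$ (the contribution of $\partial D_{z_2}(\delta)$ is handled in the same way); since the circle is oriented clockwise, the residue enters with a factor $-2\pi{\mathrm{i}}$:
\begin{align*}
\int_{\partial D_{z_1}(\delta)}\frac{X^{-{\mathrm{i}}\frac{1}{2}p}{\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)}}{2{\mathrm{i}} X^{\frac{1}{4}}\varphi_{z_1}(z;v)}\,s(p,\tau)H^{z_1}_{12}(z;v)^2\,\mathrm{d} z
&=-2\pi{\mathrm{i}}\,\frac{X^{-{\mathrm{i}}\frac{1}{2}p}{\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)}}{2{\mathrm{i}} X^{\frac{1}{4}}}\,\frac{s(p,\tau)H^{z_1}_{12}(z_1(v);v)^2}{\varphi_{z_1}'(z_1(v);v)}\\
&=-\pi X^{-\frac{1}{4}}X^{-{\mathrm{i}}\frac{1}{2}p}{\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_1(v);v)}\frac{s(p,\tau)H^{z_1}_{12}(z_1(v);v)^2}{\varphi_{z_1}'(z_1(v);v)}.
\end{align*}
Multiplying by the prefactor $-{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}/(\pi X^{\frac{1}{2}})$ in (2.52) produces the first of the two bracketed terms in the formula just obtained, with its overall factor ${\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}X^{-\frac{3}{4}}$, while integrating the $O_\varepsilon(X^{-\frac{3}{4}})$ error in (2.53) over the circle of fixed radius δ contributes only $O_\varepsilon(X^{-\frac{5}{4}})$.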
Remark 2.2. Analogues of (2.51), (2.53), (2.54), and (2.52) were obtained for a particular choice of the parameters (a, b) in equations (160), (161), (162), and (163) respectively of our earlier paper [Reference Bilman, Ling and Miller8], but with cruder error estimates.
It now remains to compute the four quantities
$H_{12}^{z_1}(z_1(v) ; v)$,
$\varphi_{z_1}^{\prime}(z_1(v) ; v)$,
$H_{11}^{z_2}(z_2(v) ; v)$, and
$\varphi_{z_2}^{\prime}(z_2(v) ; v)$. First, by definition
\begin{align}
\varphi_{z_1}^{\prime}(z_1(v) ; v) = -\sqrt{- \vartheta''(z_1(v);v)}\qquad\text{and}\qquad \varphi_{z_2}^{\prime}(z_2(v) ; v) = \sqrt{\vartheta''(z_2(v);v)}.
\end{align}Next, using (2.28) and (2.29) and L’Hôpital’s rule,
\begin{align}
\mathbf{H}^{z_1}(z_1(v);v) = (z_2(v) - z_1(v))^{-{\mathrm{i}} p \sigma_3} \left(\frac{-1}{\varphi_{z_1}^\prime(z_1(v); v)}\right)^{{\mathrm{i}} p \sigma_3}({\mathrm{i}}\sigma_2)
\end{align}and
\begin{align}
\mathbf{H}^{z_2}(z_2(v);v) = (z_2(v) - z_1(v))^{{\mathrm{i}} p \sigma_3} \varphi_{z_2}^\prime(z_2(v); v)^{{\mathrm{i}} p \sigma_3}.
\end{align}Therefore, using (2.25) we obtain
\begin{align}
\begin{aligned}
\frac{s(p,\tau) H^{z_1}_{12}(z_1(v);v)^2}{\varphi_{z_1}^\prime (z_1(v);v)} &= - (z_2(v) - z_1(v))^{- 2{\mathrm{i}} p} (-\vartheta''(z_1(v);v))^{-{\mathrm{i}} p}
\frac{s(p,\tau)}{\sqrt{-\vartheta''(z_1(v);v)}}\\
&= \frac{\sqrt{2p}{\mathrm{e}}^{-{\mathrm{i}}(\frac{1}{4}\pi+ p \ln(2) - \arg(\Gamma({\mathrm{i}} p)))}}{\sqrt{-\vartheta''(z_1(v);v)}} (z_2(v) - z_1(v))^{- 2{\mathrm{i}} p} (-\vartheta''(z_1(v);v))^{-{\mathrm{i}} p}
\end{aligned}
\end{align}and
\begin{align}
\begin{aligned}
\frac{r(p,\tau) H^{z_2}_{11}(z_2(v);v)^2}{\varphi_{z_2}^\prime(z_2(v);v)} &= (z_2(v) - z_1(v))^{2{\mathrm{i}} p} \vartheta''(z_2(v);v)^{{\mathrm{i}} p} \frac{r(p,\tau)}{\sqrt{\vartheta''(z_2(v);v)}}\\
&= \frac{\sqrt{2 p } {\mathrm{e}}^{{\mathrm{i}}(\frac{1}{4}\pi+ p \ln(2) - \arg(\Gamma({\mathrm{i}} p)))}}{\sqrt{\vartheta''(z_2(v);v)}} (z_2(v) - z_1(v))^{2{\mathrm{i}} p} \vartheta''(z_2(v);v)^{{\mathrm{i}} p}.
\end{aligned}
\end{align} Recalling that
$z_1(v) \lt z_2(v)$ and using
$\mathfrak{b}/\mathfrak{a}=|b/a|$ in the definition (2.15), we let the phases
$\phi_0(v)$,
$\phi_{z_1}(v)$, and
$\phi_{z_2}(v)$, each independent of the large parameter X, be defined by (1.71) and (1.72). Substituting (2.59)–(2.60) in (2.52) and using (1.71)–(1.72) in the resulting expressions establishes (1.70) in Theorem 1.18.
2.4.
$L^2(\mathbb{R})$-norm of
$\Psi(X,T;\mathbf{G})$
We now prove Theorem 1.9. Since the
$L^2(\mathbb{R})$-norm of a solution of (1.3) is a conserved quantity, it suffices to compute
$\| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})}$. We let
$\mathbf{P}^{[1]}(X,T;\mathbf{G})$ and
$\mathbf{P}^{[2]}(X,T;\mathbf{G})$ denote the coefficients of $\Lambda^{-1}$ and $\Lambda^{-2}$, respectively, in the asymptotic expansion as $\Lambda\to\infty$
of the unique solution
$\mathbf{P}(\Lambda)=\mathbf{P}(\Lambda;X,T,\mathbf{G}) $ of Riemann–Hilbert Problem 1. A standard dressing calculation using the symmetry
$\sigma_2 \mathbf{P}(\Lambda^*)^*\sigma_2 =\mathbf{P}(\Lambda)$ shows that
\begin{align}
\mathbf{P}^{[1]}(X,T;\mathbf{G}) =
\frac{1}{2{\mathrm{i}}}
\begin{bmatrix}
\Phi(X,T;\mathbf{G}) & \Psi(X,T;\mathbf{G}) \\
\Psi(X,T;\mathbf{G})^* & - \Phi(X,T;\mathbf{G})
\end{bmatrix},
\end{align} where
$\dfrac{\partial}{\partial X}\Phi(X,T;\mathbf{G}) = | \Psi(X,T;\mathbf{G})|^2$. Thus,
\begin{align}
\begin{split}
\| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})}^2 &=-2{\mathrm{i}} \int_{-\infty}^{+\infty} \frac{\partial}{\partial X} P_{22}^{[1]}(Y,0;\mathbf{G}) \mathrm{d} Y\\
&=-2{\mathrm{i}} \lim_{X\to +\infty} \left( P^{[1]}_{22}(X,0;\mathbf{G}) - P^{[1]}_{22}( - X,0;\mathbf{G}) \right).
\end{split}
\end{align} Using Proposition 1.5 we can assume that
$\mathbf{G}=\mathbf{G}(\mathfrak{a},\mathfrak{b})$ depends on the normalized parameters defined from (a, b) in (1.11). For
$|\Lambda| \gt 1$, Proposition A.1 implies that
\begin{align}
\mathbf{P}(\Lambda;-X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b})) = \sigma_3 \mathbf{P}(-\Lambda;X,0,\mathbf{G}(\mathfrak{b},\mathfrak{a})) {\mathrm{e}}^{4{\mathrm{i}}\Lambda^{-1}\sigma_3} \sigma_3, \quad |\Lambda| \gt 1.
\end{align} Expanding the right-hand side as
$\Lambda\to \infty$ yields the identity
\begin{align}
P_{22}^{[1]}(-X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b})) = - P_{22}^{[1]}(X,0;\mathbf{G}(\mathfrak{b},\mathfrak{a})) - 4{\mathrm{i}} .
\end{align}Using this in (2.63) gives
\begin{align}
\| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})}^2 = 8 - 2{\mathrm{i}} \lim_{X\to +\infty} \left( P^{[1]}_{22}(X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b})) + P^{[1]}_{22}( X,0;\mathbf{G}(\mathfrak{b},\mathfrak{a})) \right).
\end{align} Recalling that
$\mathbf{S}(z;X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b}))=\mathbf{T}(z;X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b}))$ for
$|z|$ sufficiently large, together with (2.2), we have
\begin{align}
\mathbf{P}(X^{-\frac{1}{2}} z; X, 0, \mathbf{G}(\mathfrak{a},\mathfrak{b})) = \mathbf{T}(z;X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b})), \quad |z|\gg 1,
\end{align}which implies the identity
\begin{align}
P^{[1]}_{22}(X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b})) = X^{-\frac{1}{2}} T^{[1]}_{22}(X,0;\mathbf{G}(\mathfrak{a},\mathfrak{b})),
\end{align} where
$\mathbf{T}^{[1]}$ is the sub-leading coefficient matrix in the large-z expansion of the matrix function
$\mathbf{T}(z;X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b}))$:
$\mathbf{T}(z;X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b})) = \mathbb{I} + \mathbf{T}^{[1]}(X,0,\mathbf{G}(\mathfrak{a},\mathfrak{b}))z^{-1} + O(z^{-2})$ as
$z\to\infty$. The identity (2.68) clearly holds regardless of the values of
$\mathfrak{a},\mathfrak{b}$, and in particular when
$\mathbf{G}(\mathfrak{a},\mathfrak{b})$ is replaced with
$\mathbf{G}(\mathfrak{b},\mathfrak{a})$. On the other hand, from (2.43) we have
$\mathbf{T}(z;X,0,\mathbf{G}) = \mathbf{F}(z;X,0,\mathbf{G}) \breve{\mathbf{T}}(z;X,0,\mathbf{G})$ and for
$|z|$ large enough we have
$\breve{\mathbf{T}}(z;X,0,\mathbf{G}) = \breve{\mathbf{T}}^{\mathrm{out}}(z;0,\mathbf{G})$, which is independent of X. Then, expanding (2.42) for
$|z|$ large and using the definition (2.18) shows that
$\mathbf{T}(z;X,0,\mathbf{G})$ has the expansion
\begin{align}\begin{aligned}
\mathbf{T}(z;X,0,\mathbf{G}) & = \mathbb{I} \\
& \quad + \left[ {\mathrm{i}}(z_2(0) - z_1(0) )p \sigma_3
-\frac{1}{2\pi {\mathrm{i}}}\int_{\Sigma_\mathbf{F}} \mathbf{F}_-(\zeta;X,0,\mathbf{G})\left( \mathbf{V}^{\mathbf{F}}(\zeta;X,0,\mathbf{G}) - \mathbb{I} \right) \mathrm{d} \zeta \right]z^{-1}\\
& \quad + O(z^{-2}),\quad z\to \infty.
\end{aligned}
\end{align}Then we obtain
\begin{align}
\begin{split}
P_{22}^{[1]}(X,0;\mathbf{G}) = X^{-\frac{1}{2}} &\left[-{\mathrm{i}}(z_2(0) - z_1(0))p -\frac{1}{2\pi {\mathrm{i}}}\int_{\Sigma_\mathbf{F}} F_{21-}(\zeta;X,0,\mathbf{G}) V^{\mathbf{F}}_{12}(\zeta;X,0,\mathbf{G}) \mathrm{d} \zeta \right.\\
&\quad\left. -\frac{1}{2\pi {\mathrm{i}}}\int_{\Sigma_\mathbf{F}} F_{22-}(\zeta;X,0,\mathbf{G}) \left( V^{\mathbf{F}}_{22}(\zeta;X,0,\mathbf{G}) - 1 \right) \mathrm{d} \zeta \right],
\end{split}
\end{align} which is exact. Now combining the estimates (2.40) and (2.41) shows that
$P_{22}^{[1]}(X,0;\mathbf{G})=O(X^{-\frac{1}{2}})$ as
$X\to+\infty$, and this fact holds regardless of the values of the normalized parameters
$\mathfrak{a},\mathfrak{b}$ provided
$\mathfrak{a}\neq 0$ (if
$\mathfrak{a}=0$ then
$p=\infty$); in particular it is true for
$\mathbf{G}=\mathbf{G}(\mathfrak{a},\mathfrak{b})$ and
$\mathbf{G}=\mathbf{G}(\mathfrak{b},\mathfrak{a})$ if
$\mathfrak{ab}\neq 0$. Therefore, we have from (2.66) that
\begin{align}
\| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})} = \sqrt{8}
\end{align}as long as ab ≠ 0.
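Explicitly, since both terms inside the limit in (2.66) are $O(X^{-\frac{1}{2}})$ as $X\to+\infty$ whenever $\mathfrak{a}\mathfrak{b}\neq 0$, the limit vanishes and
\begin{align*}
\| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})}^2 = 8-2{\mathrm{i}}\cdot 0 = 8,\qquad \| \Psi(\diamond,0;\mathbf{G})\|_{L^2(\mathbb{R})}=\sqrt{8}=2\sqrt{2}.
\end{align*}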
3. Asymptotic behaviour of
$\Psi(X,T;\mathbf{G})$ for large
$|T|$
This section is devoted to proving Theorem 1.22. To analyze
$\Psi(X,T;\mathbf{G})$ for large T > 0 and general
$a,b\in\mathbb{C}$ with ab ≠ 0, we can appeal to Proposition 1.5, which allows us to work with normalized parameters, replacing
$\mathbf{G}=\mathbf{G}(a,b)$ with
$\mathbf{G}(\mathfrak{a},\mathfrak{b})$ for which
$\mathfrak{a},\mathfrak{b} \gt 0$ with
$\mathfrak{a}^2+\mathfrak{b}^2=1$, at the cost of including a phase factor
${\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}$. Hence, our aim is to generalize the analysis of [Reference Bilman, Ling and Miller8, Section 4.2] from the specific case
$\mathfrak{a}=\mathfrak{b}=1/\sqrt{2}$ to allow for general normalized parameters; we will also compute a correction term not obtained in [Reference Bilman, Ling and Miller8].
We introduce a real parameter w and set
$X=w T^\frac{2}{3}$, and then rescale the spectral parameter Λ by setting
$Z:=T^{\frac{1}{3}}\Lambda$. Then the phase conjugating the jump matrix in (1.4) takes the form
\begin{align}
\Lambda X+\Lambda^{2} T+2 \Lambda^{-1}=T^{\frac{1}{3}} \theta(Z ; w), \quad \theta(Z ; w):=w Z+Z^{2}+2 Z^{-1}.
\end{align} In analogy with Section 2, we take the solution of Riemann–Hilbert Problem 1 for
$\mathbf{G}=\mathbf{G}(\mathfrak{a},\mathfrak{b})$ with
$\mathfrak{a},\mathfrak{b} \gt 0$ and
$\mathfrak{a}^2+\mathfrak{b}^2=1$; for brevity, we omit G from the argument lists since
$\mathfrak{a}$ and
$\mathfrak{b}$ are fixed in this section. Thus, setting
\begin{align}
\mathbf{S}(Z;T,w):=\mathbf{P}(T^{-\frac{1}{3}}Z; T^{\frac{2}{3}}w,T),\quad T \gt 0,
\end{align}from (1.6) and Proposition 1.5 we get
\begin{align}
\Psi(T^\frac{2}{3} w , T)=2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z S_{12}(Z ; T, w).
\end{align} The matrix
$\mathbf{S}(Z ; T, w)$ is normalized to satisfy
$\mathbf{S}(Z ; T, w) \rightarrow \mathbb{I}$ as
$Z \rightarrow \infty$ for each T > 0 and
$\mathbf{S}(Z ; T, w)$ is analytic in the complement of an arbitrary Jordan curve Γ surrounding Z = 0 in the clockwise sense. The jump condition satisfied by
$\mathbf{S}(Z ; T, w)$ across Γ is
\begin{align}
\mathbf{S}_{+}(Z ; T, w)=\mathbf{S}_{-}(Z ; T, w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3} \theta(Z ; w) \sigma_{3}} \mathbf{G}(\mathfrak{a},\mathfrak{b}) {\mathrm{e}}^{{\mathrm{i}} T^{1/3} \theta(Z ; w) \sigma_{3}}, \quad Z \in \Gamma.
\end{align}3.1. Spectral curve and
$g$-function
Since the phase conjugating the jump matrix for
$\mathbf{P}(\Lambda;X,T,\mathbf{G})$ in Riemann–Hilbert Problem 1 does not involve G in any way, all of the analysis in [Reference Bilman, Ling and Miller8, Section 4.2] regarding the spectral curve and construction of the relevant g-function goes through without any modification. We summarize that analysis here.
We assume that
$|w| \lt 54^{\frac{1}{3}}$, and we recall the g-function
$g(Z;w)$ defined in [Reference Bilman, Ling and Miller8, Section 4.2], which is bounded and analytic in Z for
$Z\in \mathbb{C} \setminus \Sigma$, where Σ is a Schwarz-symmetrical arc determined below. It also satisfies
$g(Z;w)\to 0 $ as
$Z\to \infty$, and the boundary values taken by
$g(Z;w)$ on Σ satisfy the jump condition
\begin{align}
g_+(Z;w)+g_-(Z;w)+2\theta(Z;w)=\kappa(w),\quad Z\in\Sigma,
\end{align}where
$\kappa(w)$ is a constant whose explicit value was found in [Reference Bilman, Ling and Miller8, Eqn. (218)] to be
\begin{align}
\kappa(w)=-108^{\frac{1}{3}}-\frac{1}{3} w^{2},
\end{align} which can also be written in the form (1.82). The derivative
$g'(Z;w)$ of
$g(Z;w)$ with respect to Z satisfies the relation
\begin{align}
\begin{split}
\left(g'(Z;w) + \theta'(Z;w)\right)^2 &= 4Z^{-4}(Z-Z_1(w))^2(Z-Z_2(w))^2 (Z-Z_0(w))(Z-Z_0(w)^*)\\ &=:Z^{-4} P(Z;w),
\end{split}
\end{align} which is the spectral curve for the problem at hand; see [Reference Bilman, Ling and Miller8, Eqn. (183)]. The double roots of the sextic polynomial
$P(\diamond;w)$ are the real values
$Z_1(w) \lt 0$ and
$Z_2(w) \gt 0$ defined in (1.78), and there is also a complex-conjugate pair of simple roots
$Z_0(w)$,
$Z_0(w)^*$ given explicitly by (1.79) assuming that
$|w| \lt w_\mathrm{c}=54^\frac{1}{3}$. See [Reference Bilman, Ling and Miller8, Eqn. (190) and Eqn. (191)]. The cut Σ for the g-function connects the conjugate pair of simple roots
$Z_0(w)$ and
$Z_0(w)^*$ of
$P(\diamond;w)$, and is chosen to cross the real axis at the negative value
$Z=Z_1(w)$. The function
$g(Z;w)$ is given explicitly in [Reference Bilman, Ling and Miller8, Eqn. (195)] by
\begin{align}
g(Z;w) = \frac{R(Z ; w)^{3}}{Z}-\theta(Z ; w)-3 \cdot 2^{-\frac{1}{3}}-\frac{1}{6} w^{2},
\end{align} where
$R(Z;w)$ is the function analytic for
$Z\in \mathbb{C}\setminus \Sigma$ uniquely determined by the conditions
\begin{align}
R(Z;w)^2=(Z-Z_0(w))(Z-Z_0(w)^*)\quad\text{and}\quad \lim_{Z\to\infty}Z^{-1}R(Z;w)=1.
\end{align}We define
\begin{align}
h(Z;w):=g(Z;w)+\theta(Z;w)
\end{align}as in [Reference Bilman, Ling and Miller8, Eqn. (196)], and take the jump contour Γ for
$\mathbf{S}(Z;T,w)$ so that
$\operatorname{Im}(h(Z;w))=0$ holds for
$Z\in\Gamma$.

Figure 14. The sign charts of
$\operatorname{Im}(h(Z;w))$ for
$w$ in the range
$|w| \lt w_{\mathrm{c}}$.

Figure 15. Left: the jump contour
$\Gamma=\Gamma^{+} \cup \Gamma^{-} \cup \Sigma^{+} \cup \Sigma^{-}$ for
$\mathbf{S}$ and the regions
$L_{\Gamma}^{\pm}, L_{\Sigma}^{\pm}, R_{\Gamma}^{\pm}, R_{\Sigma}^{\pm}$, and
$\Omega^{\pm}$. Right: the jump contour for
$\mathbf{T}$.
The contour Γ consists of four oriented arcs
$\Gamma^{\pm}$ and
$\Sigma^{\pm}$ as shown in the left-hand panel of Figure 15, and
$\operatorname{Im}(h(Z;w))$ is continuous across
$\Sigma=\Sigma^+ \cup \Sigma^-$, vanishing there but not changing sign. See [Reference Bilman, Ling and Miller8, Section 4.2.1] for the construction of g and determination of Σ and Γ in full detail. See Figure 14 for the sign chart of
$\operatorname{Im}(h(Z;w))$ and how it varies with w. We define the domains
$L_\Gamma^{\pm}, R_\Gamma^{\pm}$,
$L_\Sigma^{\pm}, R_\Sigma^{\pm}$,
$\Omega^{\pm}$ exactly as in the left-hand panel of Figure 15.
3.2. Steepest-descent deformation
We make use of all of the factorizations in (1.13)–(1.16) and introduce the g-function by making the substitution
\begin{align}
\mathbf{T}(Z;T,w):=\mathbf{S}(Z;T,w)\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}(Z;T,w),
\end{align}where
$\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} = \boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}(Z; T, w)$ is defined in various regions by
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}\theta(Z;w)} & 1 \end{bmatrix} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in L^+_\Gamma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\mathfrak{a}^{-\sigma_3} \begin{bmatrix}1 &\mathfrak{ab} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}\theta(Z;w)} \\ 0 & 1 \end{bmatrix}{\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in R^+_\Gamma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\mathfrak{a}^{\mp \sigma_3} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in \Omega^{\pm},
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} := \mathfrak{a}^{\sigma_3}\begin{bmatrix}1 & 0 \\ -\mathfrak{a b} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}\theta(Z;w)} & 1 \end{bmatrix} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in R^-_\Gamma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} := \begin{bmatrix}1 & -\dfrac{\mathfrak{b}}{\mathfrak{a}} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}\theta(Z;w)} \\ 0 & 1 \end{bmatrix} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in L^-_\Gamma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} := \mathfrak{a}^{-\sigma_3}\begin{bmatrix}1 & -\dfrac{\mathfrak{a}^3}{\mathfrak{b}} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}\theta(Z;w)} \\ 0 & 1 \end{bmatrix}{\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in R^+_\Sigma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\begin{bmatrix}1 & \dfrac{\mathfrak{a}}{\mathfrak{b}}{\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}\theta(Z;w)} \\ 0 & 1 \end{bmatrix}{\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in L^+_\Sigma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\mathfrak{a}^{\sigma_3} \begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{a}^3}{\mathfrak{b}} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}\theta(Z;w)} & 1 \end{bmatrix} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in R^-_\Sigma,
\end{align}
\begin{align}
\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}} :=
\begin{bmatrix}1 & 0 \\ -\dfrac{\mathfrak{a}}{\mathfrak{b}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}\theta(Z;w)} & 1 \end{bmatrix} {\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3},\quad Z\in L^-_\Sigma,
\end{align} and we set
$\boldsymbol{\Delta}^{\mathbf{S}\rightarrow \mathbf{T}}:= {\mathrm{e}}^{{\mathrm{i}} T^{1/3}g(Z;w)\sigma_3}$ elsewhere. The jump contours for the jump conditions satisfied by
$\mathbf{T}(Z;T,w)$ are illustrated in the right-hand panel of Figure 15, and the jump conditions are the following.
where
$\mathbf{V}^{\mathbf{T}}=\mathbf{V}^{\mathbf{T}}(Z;T,w)$ is defined on various arcs by
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & 0 \\-\dfrac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z;w)} & 1 \end{bmatrix},\quad Z\in C^+_{\Gamma,L},
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 &\mathfrak{a b} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h(Z;w)} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Gamma,R},
\end{align}
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & 0 \\ -\mathfrak{a b} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z;w)} & 1 \end{bmatrix},\quad Z\in C^-_{\Gamma,R},
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & \dfrac{\mathfrak{b}}{\mathfrak{a}} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h(Z;w)} \\ 0 & 1 \end{bmatrix},\quad Z\in C^-_{\Gamma,L},
\end{align}
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & -\dfrac{\mathfrak{a}}{\mathfrak{b}}{\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h(Z;w)} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Sigma,L},
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & -\dfrac{\mathfrak{a}^3}{\mathfrak{b}} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h(Z;w)} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Sigma,R},
\end{align}
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{a}^3}{\mathfrak{b}} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z;w)} & 1 \end{bmatrix},\quad Z\in C^-_{\Sigma,R},
\qquad
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{a}}{\mathfrak{b}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z;w)} & 1 \end{bmatrix},\quad Z\in C^-_{\Sigma,L}.
\end{align} Finally, on
$\Sigma=\Sigma^+\cup\Sigma^-$ we have
\begin{align}
\mathbf{V}^{\mathbf{T}}:=
\begin{bmatrix} 0 & \left(\dfrac{\mathfrak{a}}{\mathfrak{b}}\right)^{\pm 1} {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)} \\ - \left(\dfrac{\mathfrak{b}}{\mathfrak{a}}\right)^{\pm 1} {\mathrm{e}}^{{\mathrm{i}} T^{1/3}\kappa(w)}& 0\end{bmatrix},\quad Z\in\Sigma^\pm.
\end{align} Since
$\mathbf{T}(Z;T,w)=\mathbf{S}(Z;T,w){\mathrm{e}}^{{\mathrm{i}} T^{1/3}g(Z;w)\sigma_3}$ for large Z, from (3.3) we get
\begin{align}
\begin{split}
\Psi(T^\frac{2}{3} w , T)&=2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z T_{12}(Z ; T, w){\mathrm{e}}^{{\mathrm{i}} T^{1/3}g(Z;w)}\\ &=2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z T_{12}(Z ; T, w),
\end{split}
\end{align} where the second equality comes from the fact that
$g(Z;w)\to 0$ as
$Z\to\infty$.
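To illustrate how the phase $h(Z;w)=g(Z;w)+\theta(Z;w)$ enters the jump matrices for $\mathbf{T}(Z;T,w)$, consider the arc $C^+_{\Gamma,L}$, which separates $L^+_\Gamma$ from the region in which $\boldsymbol{\Delta}^{\mathbf{S}\rightarrow\mathbf{T}}$ is the trivial factor ${\mathrm{e}}^{{\mathrm{i}} T^{1/3}g(Z;w)\sigma_3}$. Since $\mathbf{S}(Z;T,w)$ has no jump across this arc, the jump of $\mathbf{T}=\mathbf{S}\boldsymbol{\Delta}^{\mathbf{S}\rightarrow\mathbf{T}}$ there is the ratio of the two $\boldsymbol{\Delta}^{\mathbf{S}\rightarrow\mathbf{T}}$ factors; assuming the orientation convention of Figure 15 places the $L^+_\Gamma$ side on the minus side, this ratio is
\begin{align*}
\left(\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}\theta(Z;w)} & 1 \end{bmatrix}{\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3}\right)^{-1}{\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z;w)\sigma_3}
=\begin{bmatrix}1 & 0 \\ -\dfrac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}(\theta(Z;w)+g(Z;w))} & 1 \end{bmatrix}
=\begin{bmatrix}1 & 0 \\ -\dfrac{\mathfrak{b}}{\mathfrak{a}}{\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z;w)} & 1 \end{bmatrix},
\end{align*}
which is exactly the matrix $\mathbf{V}^{\mathbf{T}}$ assigned to $C^+_{\Gamma,L}$ above; the jump matrices on the remaining arcs arise from the other factorizations in the same manner.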
3.3. Parametrix construction
3.3.1. Outer parametrix
We construct a parametrix
$\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)$ with the following properties:
•
$\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)$ is analytic in Z for
$Z\in\mathbb{C}\setminus(\Sigma \cup I)$.•
$\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w) \to \mathbb{I}$ as
$Z\to\infty$.• The jump conditions satisfied by
$\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)$ on
$\Sigma\cup I$ agree exactly with (3.21) with jump matrix (3.23) on I and (3.27) on
$\Sigma^+$ and
$\Sigma^-$.
These conditions are not sufficient to determine
$\breve{\mathbf{T}}^{\mathrm{out}}$ uniquely. Nevertheless, we will just construct a particular function satisfying the properties listed above. We first simplify the jump matrices defined on
$\Sigma^{+}$ and
$\Sigma^{-}$ by introducing a Szegő function
${f}(Z;w)$ analytic in Z for
$Z\in \mathbb{C}\setminus \Sigma$ given by
\begin{align}
{f}(Z;w) := \frac{1}{2}R(Z;w) \left(
\int_{\Sigma^{-}} \frac{ \mathrm{d} \zeta}{R_{+}(\zeta;w)(\zeta-Z)}-\int_{\Sigma^{+}} \frac{ \mathrm{d} \zeta}{R_{+}(\zeta;w)(\zeta-Z)}
\right).
\end{align} By rationally parametrizing the genus-zero curve
$R^2=(\zeta-Z_0(w))(\zeta-Z_0(w)^*)$ via stereographic projection, one finds that an antiderivative is
\begin{align}
\int\frac{ \mathrm{d}\zeta}{R(\zeta;w)(\zeta-Z)}=\frac{1}{R(Z;w)}\log\left(\frac{(Z-U)(\zeta-U)+(V+R(Z;w))(V-R(\zeta;w))}{(Z-U)(\zeta-U)+(V-R(Z;w))(V-R(\zeta;w))}\right),
\end{align} where
$Z_0(w)=U+{\mathrm{i}} V$. One can check that if Z is real and less than
$Z_1(w)$, then taking the principal branch of the logarithm yields a function of ζ that is analytic except on Σ and the real interval
$Z\le\zeta\le Z_1(w)$, and that
$\zeta=Z$ is a simple root of the numerator, but not of the denominator, of the argument of the logarithm. Therefore, with this choice of branch, the antiderivative is suitable for evaluation of
${f}(Z;w)$ with
$Z \lt Z_1(w)$ provided one takes the imaginary part of the logarithm to be ±π at the endpoint
$\zeta=Z_1(w)$ of
$\Sigma^{\mp}$. It follows that
\begin{align}\begin{aligned}
{f}(Z;w)& =\frac{1}{2}\log\left(\frac{(V-R(Z;w))^2+(Z-U)^2}{(V+R(Z;w))^2+(Z-U)^2}\right) \\
& \quad +\log\left(-\frac{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_+(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_+(Z_1(w);w))}\right).
\end{aligned}
\end{align} The term on the first line comes from the contributions at
$Z=Z_0(w)=U+{\mathrm{i}} V$ and
$Z=Z_0(w)^*=U-{\mathrm{i}} V$, and it is an analytic function of
$Z \lt Z_1(w)$ that admits continuation to a neighbourhood of
$Z_1(w)$; to evaluate it at
$Z=Z_1(w)$ we need only replace Z with
$Z_1(w)$ and
$R(Z;w)$ with
$R_+(Z_1(w);w)$. The term on the second line comes from the contributions at
$Z=Z_1(w)$, and the numerator of the argument of the logarithm has a simple root at
$Z=Z_1(w)$ and produces a branch cut in the Z-plane emanating from
$Z=Z_1(w)$ to the right. To reveal the simple root, we may use the identity
$(Z_1(w)-U)^2+V^2-R_+(Z_1(w);w)^2=0$ to write
\begin{align}
-\frac{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_+(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_+(Z_1(w);w))}= -\omega^{f}(Z;w)(Z-Z_1(w)),
\end{align}where
\begin{align}
\omega^{f}(Z;w):=\frac{\displaystyle Z_1(w)-U+\frac{R(Z;w)-R_+(Z_1(w);w)}{Z-Z_1(w)}(V-R_+(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_+(Z_1(w);w))}
\end{align} is a function analytic and non-vanishing at
$Z=Z_1(w)$ and
$R(Z;w)=R_+(Z_1(w);w)$ (i.e., admitting analytic continuation through Σ at
$Z=Z_1(w)$ from the left) with
\begin{align}
\omega^{f}_+(Z_1(w);w)=\frac{(Z_1(w)-U)V}{2((Z_1(w)-U)^2+V^2)(R_+(Z_1(w);w)-V)}
\end{align} which one can verify is a positive number by the definitions of
$Z_1(w)$ and
$Z_0(w)=U+{\mathrm{i}} V$. We may therefore write
${f}(Z;w)$ in the form
\begin{align}
{f}(Z;w)=\frac{1}{2}\log\left(\frac{(V-R(Z;w))^2+(Z-U)^2}{(V+R(Z;w))^2+(Z-U)^2}\cdot\omega^{f}(Z;w)^2\right)+\log(Z_1(w)-Z),
\end{align} with only the second term not being analytic at
$Z=Z_1(w)$.
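For the reader's convenience, here is a short verification of the value (3.34). Since $R(Z;w)^2=(Z-U)^2+V^2$, the difference quotient in the definition of $\omega^{f}(Z;w)$ tends, as $Z\to Z_1(w)$ from the left of Σ, to $R_+'(Z_1(w);w)=(Z_1(w)-U)/R_+(Z_1(w);w)$. Using also the identity $(Z_1(w)-U)^2+V^2=R_+(Z_1(w);w)^2$, the numerator and denominator in the definition of $\omega^{f}(Z;w)$ become in this limit
\begin{align*}
Z_1(w)-U+\frac{Z_1(w)-U}{R_+(Z_1(w);w)}\left(V-R_+(Z_1(w);w)\right)&=\frac{(Z_1(w)-U)V}{R_+(Z_1(w);w)},\\
(Z_1(w)-U)^2+\left(V-R_+(Z_1(w);w)\right)^2&=2R_+(Z_1(w);w)\left(R_+(Z_1(w);w)-V\right),
\end{align*}
and the quotient of these two expressions reproduces (3.34) after one more application of $R_+(Z_1(w);w)^2=(Z_1(w)-U)^2+V^2$.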
Similarly, if Z is a real number greater than
$Z_1(w)$, one can check that the antiderivative in (3.30) is analytic except for
$\zeta\in\Sigma$ and on the half-lines
$\zeta \lt Z_1(w)$ and
$\zeta \gt Z$, and that
$\zeta=Z$ is a simple root of the numerator only in the argument of the (principal branch) logarithm. In particular, the antiderivative is continuous along the minus side of Σ, and therefore for such Z,
\begin{align}
\begin{split}
{f}(Z;w)&=\frac{1}{2}R(Z;w)\left(\int_{\Sigma^+}\frac{ \mathrm{d}\zeta}{R_-(\zeta;w)(\zeta-Z)}-\int_{\Sigma^-}\frac{ \mathrm{d}\zeta}{R_-(\zeta;w)(\zeta-Z)}\right) \\
&=\frac{1}{2}\log\left(\frac{(V+R(Z;w))^2+(Z-U)^2}{(V-R(Z;w))^2+(Z-U)^2}\right)\\
&\quad+\log\left(\frac{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_-(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_-(Z_1(w);w))}\right).
\end{split}
\end{align} This is analytic at
$Z \gt Z_1(w)$ and hence can be evaluated in particular at
$Z=Z_2(w)$ simply by replacing Z with
$Z_2(w)$ and
$R(Z;w)$ with
$R(Z_2(w);w)$. All arguments of logarithms are then positive and we interpret
$\log(\diamond)$ as
$\ln(\diamond)$.
It is easy to verify that
Although the sum of the boundary values is constant along each arc
$\Sigma^\pm$, both boundary values exhibit a logarithmic singularity as
$Z\to Z_1(w)$ and are otherwise continuous. We note that
${f}(Z;w)$ does not vanish in the limit
$Z\to \infty$; indeed, expanding (3.29) shows that
where
\begin{align}
\gamma(w):=\frac{1}{2}
\left(
\int_{\Sigma^+} \frac{ \mathrm{d} \zeta}{R_+(\zeta;w)} - \int_{\Sigma^-} \frac{ \mathrm{d} \zeta}{R_+(\zeta;w)}
\right)\in \mathbb{R}.
\end{align} As pointed out in [Reference Bilman, Ling and Miller8], it is straightforward to verify that an antiderivative of
$1/R(\zeta;w)$ is
\begin{align}
\int\frac{ \mathrm{d}\zeta}{R(\zeta;w)}=
\log\left(\zeta-U+R(\zeta;w)\right).
\end{align} Taking the principal branch of the logarithm gives a function of ζ that is analytic on the complement of Σ and the ray
$(-\infty,Z_1(w)]$. This is continuous along the minus side of Σ and hence can be used to evaluate
$\gamma(w)$:
\begin{align}
\gamma(w)=\frac{1}{2}\left(\int_{\Sigma^-}\frac{ \mathrm{d}\zeta}{R_-(\zeta;w)}-\int_{\Sigma^+}\frac{ \mathrm{d}\zeta}{R_-(\zeta;w)}\right)=
\ln\left(\frac{1}{V}\left(Z_1(w)-U+R_-(Z_1(w);w)\right)\right).
\end{align} One can check that the argument of the logarithm is positive over the whole range
$|w| \lt w_\mathrm{c}$, which is why we use the notation
$\ln(\diamond)$ instead of the complex logarithm
$\log(\diamond)$. We use
${f}(Z;w)$ to define a new unknown
$\mathbf{J}(Z;T,w)$ by
where
\begin{align}
q:=\frac{1}{\pi}\ln\left(\frac{\mathfrak{a}}{\mathfrak{b}}\right)\in\mathbb{R}.
\end{align}
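Although we do not restate here the definitions of p from (2.15) and of $\bar{p}$ from Corollary 1.19, it is useful to record that the identities $q=\bar{p}-p$ and $\mathfrak{b}^{2\sigma_3}={\mathrm{e}}^{-2\pi\bar{p}\sigma_3}$, both invoked in Section 3.3.2 below, are equivalent to the explicit expressions
\begin{align*}
p=-\frac{1}{\pi}\ln(\mathfrak{a})\qquad\text{and}\qquad\bar{p}=-\frac{1}{\pi}\ln(\mathfrak{b}),
\end{align*}
which are both positive because $0 \lt \mathfrak{a},\mathfrak{b} \lt 1$; indeed, $\bar{p}-p=\pi^{-1}\ln(\mathfrak{a}/\mathfrak{b})=q$.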
$\mathbf{J}(Z;T,w)$ has all the properties of
$\breve{\mathbf{T}}^{\mathrm{out}}(Z; T, w)$ except that it satisfies a simpler jump condition across
$\Sigma=\Sigma^+\cup \Sigma^-$, given by
\begin{align}
\mathbf{J}_+(Z;T,w) = \mathbf{J}_-(Z;T,w)\begin{bmatrix} 0 & {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)} \\ - {\mathrm{e}}^{{\mathrm{i}} T^{1/3}\kappa(w)} & 0 \end{bmatrix},\quad Z\in \Sigma.
\end{align}To satisfy the jump condition (3.23) on I, we write
\begin{align}
\mathbf{J}(Z;T,w) = \mathbf{K}(Z;T,w) \left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)^{{\mathrm{i}} p \sigma_{3}}
\end{align} where the power function is defined to be the principal branch and p > 0 was given in terms of
$\mathfrak{a},\mathfrak{b}$ in (2.15). The new unknown
$\mathbf{K}(Z;T,w)$ extends analytically to I, and we assume that it is bounded near
$Z=Z_j(w)$,
$j=1,2$, which in particular makes it analytic at
$Z=Z_2(w)$. Thus,
$\mathbf{K}(Z;T,w)$ is analytic in
$\mathbb{C}\setminus \Sigma$ and tends to
$\mathbb{I}$ as
$Z\to\infty$. The constant and simple jump condition (3.44) satisfied by
$\mathbf{J}(Z;T,w)$ across Σ is modified for
$\mathbf{K}(Z;T,w)$:
\begin{align}
\mathbf{K}_+(Z;T,w) = \mathbf{K}_-(Z;T,w) \left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)^{{\mathrm{i}} p \sigma_{3}}
\begin{bmatrix} 0 & {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)} \\ - {\mathrm{e}}^{{\mathrm{i}} T^{1/3}\kappa(w)} & 0 \end{bmatrix}
\left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)^{-{\mathrm{i}} p \sigma_{3}},\quad Z\in\Sigma.
\end{align}To convert this back into a constant jump condition on Σ alone, we follow [Reference Bilman, Ling and Miller8, Eqn. (222)] up to a scaling, and introduce another Szegő function
\begin{align}
{\ell}(Z;w):= \log \left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)+ R(Z ;w) \int_{Z_1(w)}^{Z_2(w)} \frac{ \mathrm{d} \zeta}{R(\zeta ; w)(\zeta-Z)} + \frac{1}{2} \mu(w),
\end{align} where the logarithm is taken to be the principal branch,
$-\pi \lt \operatorname{Im}(\log (\diamond)) \lt \pi$, and where the constant
$\mu(w)$ is given by
\begin{align}
\mu(w):=2 \int_{Z_1(w)}^{Z_2(w)} \frac{ \mathrm{d} \zeta}{R(\zeta ; w)} \gt 0.
\end{align} Using the same antiderivative (3.40) used to integrate
$\gamma(w)$, which is analytic for
$Z_1(w) \lt \zeta \lt Z_2(w)$, we take care to evaluate
$R(Z;w)$ at the lower limit of integration by the boundary value
$R_-(Z_1(w);w)$ and obtain (an analogue of [Reference Bilman, Ling and Miller8, Eqn. (225)])
\begin{align}
\mu(w)=2\ln\left(\frac{Z_2(w)-U+R(Z_2(w);w)}{Z_1(w)-U+R_-(Z_1(w);w)}\right).
\end{align} The function
${\ell}(Z;w)$ has the following properties. Firstly,
${\ell}(Z;w) = O(Z^{-1})$ as
$Z\to\infty$ by definition of
$\mu(w)$. Next, it can be easily confirmed that
${\ell}(Z;w)$ has no jump across I by using the Plemelj formula and comparing the boundary values of the logarithm. The function
$Z\mapsto {\ell}(Z;w)$ has a removable singularity at
$Z=Z_2(w)$, and the boundary value
${\ell}_-(Z;w)$ is continuous along Σ, including at
$Z=Z_1(w)$. However,
${\ell}_{+}(Z;w)$ has a logarithmic singularity at
$Z=Z_1(w)$. Thus, the domain of analyticity for
${\ell}(Z;w)$ is
$Z\in \mathbb{C}\setminus \Sigma$ and the jump condition satisfied by the (continuous, except at
$Z=Z_1(w)$ from the left) boundary values of
${\ell}(Z;w)$ is
\begin{align}
{\ell}_{+}(Z ; w)+{\ell}_{-}(Z ; w)=2 \log \left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)+ \mu(w),\quad Z\in\Sigma.
\end{align} We next evaluate
${\ell}(Z;w)$ for
$Z\in\mathbb{R}$ with
$Z \lt Z_1(w)$ using the antiderivative (3.30), which is analytic for
$\zeta\in (Z_1(w),Z_2(w))$, and hence
\begin{align}\begin{aligned}
{\ell}(Z;w) & =\log\left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)+\frac{1}{2}\mu(w) \\
& \quad {}+\log\left(\frac{(Z-U)(Z_2(w)-U)+(V+R(Z;w))(V-R(Z_2(w);w))}{(Z-U)(Z_2(w)-U)+(V-R(Z;w))(V-R(Z_2(w);w))}\right)\\
& \quad {}-\log\left(\frac{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_-(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_-(Z_1(w);w))}\right).
\end{aligned}
\end{align} The term on the middle line admits analytic continuation through Σ from the left; in particular, the argument of the logarithm is positive when evaluated at
$Z=Z_1(w)$ and
$R(Z;w)=R_+(Z_1(w);w)$ whenever
$|w| \lt w_\mathrm{c}=54^\frac{1}{3}$. The term on the last line has a logarithmic singularity at
$Z=Z_1(w)$ however, coming from the denominator of the argument of the logarithm. We can write
\begin{align}
\frac{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_-(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_-(Z_1(w);w))}=-\frac{1}{\omega^{{\ell},1}(Z;w)(Z-Z_1(w))},
\end{align}where
\begin{align}
\omega^{{\ell},1}(Z;w):=\frac{\displaystyle U-Z_1(w)+\frac{R(Z;w)-R_+(Z_1(w);w)}{Z-Z_1(w)}(V-R_-(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_-(Z_1(w);w))}
\end{align} is a function analytic and non-vanishing at
$Z=Z_1(w)$ and
$R(Z;w)=R_+(Z_1(w);w)$ with value
\begin{align}
\omega^{{\ell},1}_+(Z_1(w);w)=\frac{(Z_1(w)-U)V}{2((Z_1(w)-U)^2+V^2)(R_+(Z_1(w);w)+V)}
\end{align} which one can confirm is a positive number whenever
$|w| \lt w_\mathrm{c}=54^\frac{1}{3}$. Therefore, for Z near
$Z_1(w)$ on the left of Σ,
${\ell}(Z;w)$ can be written in the form
\begin{align}\begin{aligned}
{\ell}(Z;w)=\frac{1}{2}\mu(w)
+\log\left(\frac{(Z-U)(Z_2(w)-U)+(V+R(Z;w))(V-R(Z_2(w);w))}{(Z-U)(Z_2(w)-U)+(V-R(Z;w))(V-R(Z_2(w);w))}\cdot\frac{\omega^{{\ell},1}(Z;w)}{Z_2(w)-Z}\right)\\
{}+2\log(Z_1(w)-Z),
\end{aligned}
\end{align} where only the last term fails to be analytic at
$Z=Z_1(w)$.
Similarly, if
$Z \gt Z_2(w)$, then once again the antiderivative in (3.30) is an analytic function of
$\zeta\in (Z_1(w),Z_2(w))$, so the formula (3.51) is also valid in this situation. If Z approaches
$Z_2(w)$ from above, then the terms on the final line in (3.51) are now analytic at
$Z=Z_2(w)$, while the terms on the middle line produce a logarithmic singularity and a branch cut emanating from
$Z=Z_2(w)$ to the left. The latter cancels with the explicit logarithmic singularity on the first line of (3.51), and the resulting formula is analytic at
$Z=Z_2(w)$:
\begin{align}\begin{aligned}
{\ell}(Z;w) & =
\frac{1}{2} \mu(w) \\
& \quad +\log\left(\frac{(Z-U)(Z_1(w)-U)+(V-R(Z;w))(V-R_-(Z_1(w);w))}{(Z-U)(Z_1(w)-U)+(V+R(Z;w))(V-R_-(Z_1(w);w))}(Z-Z_1(w))\omega^{{\ell},2}(Z;w)\right),
\end{aligned}
\end{align} wherein
$\omega^{{\ell},2}(Z;w)$ is a function analytic and positive at
$Z=Z_2(w)$ given by
\begin{align}
\omega^{{\ell},2}(Z;w):=\frac{\displaystyle Z_2(w)-U+\frac{R(Z;w)-R(Z_2(w);w)}{Z-Z_2(w)}(V-R(Z_2(w);w))}{(Z-U)(Z_2(w)-U)+(V-R(Z;w))(V-R(Z_2(w);w))}.
\end{align}In particular,
\begin{align}
\omega^{{\ell},2}(Z_2(w);w)=\frac{(Z_2(w)-U)V}{2((Z_2(w)-U)^2+V^2)(R(Z_2(w);w)-V)}.
\end{align} We introduce the function
${\ell}(Z;w)$ in the analysis by writing
It follows that
$\mathbf{L}(Z;T,w)$ is a matrix function analytic for
$Z\in\mathbb{C}\setminus\Sigma$, which also tends to
$\mathbb{I}$ as
$Z\to\infty$ and satisfies the jump condition
\begin{align}
\mathbf{L}_+(Z;T,w) = \mathbf{L}_-(Z;T,w) \begin{bmatrix} 0 & {\mathrm{e}}^{-{\mathrm{i}} (T^{1/3}\kappa(w) +p\mu(w))} \\ - {\mathrm{e}}^{{\mathrm{i}} ( T^{1/3}\kappa(w) + p\mu(w))} & 0 \end{bmatrix},\quad Z\in\Sigma.
\end{align} We can directly solve for
$\mathbf{L}(Z;T,w)$ by diagonalizing the constant jump matrix and choosing the unique solution that exhibits
$-\frac{1}{4}$-power growth at the endpoints
$Z=Z_0(w),Z_0(w)^*$ of Σ:
\begin{align}
\mathbf{L}(Z;T,w)= {\mathrm{e}}^{-\frac{1}{2}{\mathrm{i}} (T^{1/3}\kappa(w) +p\mu(w))\sigma_3} \mathbf{Z} y(Z;w)^{\sigma_3}\mathbf{Z}^{-1} {\mathrm{e}}^{\frac{1}{2}{\mathrm{i}} (T^{1/3}\kappa(w) +p\mu(w))\sigma_3},\quad
\mathbf{Z}:=
\frac{1}{\sqrt{2}}\begin{bmatrix}
1 & {\mathrm{i}} \\
{\mathrm{i}} & 1
\end{bmatrix},
\end{align} where
$y(Z;w)$ is the function analytic for
$Z\in \mathbb{C}\setminus \Sigma$, determined by the properties
\begin{align}
y(Z;w)^4 = \frac{Z-Z_0(w)}{Z-Z_0(w)^*}\quad\text{and}\quad \lim_{Z\to\infty}y(Z;w)= 1.
\end{align}Combining (3.42), (3.45), and (3.59) finishes the construction of the outer parametrix, yielding
\begin{align}
\breve{\mathbf{T}}^{\rm out}(Z;T,w) := {\mathrm{e}}^{{\mathrm{i}} q \gamma(w)\sigma_3} \mathbf{L}(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}}(p{\ell}(Z;w)+q{f}(Z;w))\sigma_3}\left(\frac{Z-Z_1(w)}{Z-Z_2(w)}\right)^{{\mathrm{i}} p \sigma_{3}},
\end{align} where
$\mathbf{L}(Z;T,w)$ is given by (3.61). Note that unlike the outer parametrix constructed for the analysis in the regime
$X\to+\infty$, the outer parametrix
$\breve{\mathbf{T}}^{\rm out}(Z;T,w)$ depends on T. However, this dependence is purely oscillatory, coming solely from the conjugating exponential factors in (3.61).
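One can check directly that (3.61) has the required jump (3.60). Writing $\beta:=T^{1/3}\kappa(w)+p\mu(w)$ as a shorthand, and assuming the orientation of Σ is such that $y_+(Z;w)={\mathrm{i}} y_-(Z;w)$ there (the boundary values of $y(Z;w)$ differ by a fourth root of unity whose value depends on the orientation convention), we have
\begin{align*}
\mathbf{L}_-(Z;T,w)^{-1}\mathbf{L}_+(Z;T,w)={\mathrm{e}}^{-\frac{1}{2}{\mathrm{i}}\beta\sigma_3}\mathbf{Z}\,{\mathrm{i}}^{\sigma_3}\,\mathbf{Z}^{-1}{\mathrm{e}}^{\frac{1}{2}{\mathrm{i}}\beta\sigma_3}
={\mathrm{e}}^{-\frac{1}{2}{\mathrm{i}}\beta\sigma_3}\begin{bmatrix}0&1\\-1&0\end{bmatrix}{\mathrm{e}}^{\frac{1}{2}{\mathrm{i}}\beta\sigma_3}
=\begin{bmatrix}0&{\mathrm{e}}^{-{\mathrm{i}}\beta}\\-{\mathrm{e}}^{{\mathrm{i}}\beta}&0\end{bmatrix},\quad Z\in\Sigma,
\end{align*}
which is exactly the jump matrix in (3.60).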
3.3.2. Inner parametrices near
$Z=Z_1(w)$ and
$Z=Z_2(w)$
Let
$D_{Z_j}(\delta)$,
$j=1,2$, denote the disk of radius δ > 0 centred at
$Z_j(w)$. To construct an inner parametrix
$\breve{\mathbf{T}}^{Z_2}$ in
$D_{Z_2}(\delta)$, first note that
$h(Z;w) - h(Z_2(w);w)$ vanishes precisely to second order as
$Z\to Z_2(w)$. We define the T-independent conformal coordinate
$\varphi_{Z_2}$ by choosing the solution of
that is analytic at
$Z=Z_2(w)$ and that satisfies
$\varphi_{Z_2}'(Z_2(w),w) \gt 0$. This choice ensures that the arc
$I\cap D_{Z_2}(\delta)$ is mapped by
$\varphi_{Z_2}(\diamond; w)$ locally to the negative real axis. We define the rescaled conformal coordinate
$\zeta_{Z_2}:= T^{\frac{1}{6}}\varphi_{Z_2}(Z;w)$ and observe that the jump conditions satisfied by the matrix
\begin{align}
\mathbf{U}^{Z_2}:= \mathbf{T}(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z_2(w);w)\sigma_3}
\end{align} match exactly those shown in Figure 13 when expressed in terms of the conformal coordinate
$\zeta=\zeta_{Z_{2}}$ and when the jump contours are locally taken to coincide with the five rays
$\arg (\zeta)=\pm \frac{1}{4} \pi$,
$\arg (\zeta)=\pm \frac{3}{4} \pi$, and
$\arg (-\zeta)=0$, with the same values of p and τ given in terms of
$\mathfrak{a},\mathfrak{b}$ by (2.15)–(2.16). These jump conditions coincide exactly with those in [Reference Miller25, Riemann–Hilbert Problem A.1] for a well-defined and explicit standard parabolic cylinder parametrix
$\mathbf{U}(\zeta)=\mathbf{U}(\zeta;p,\tau)$. Thus, an inner parametrix
$\breve{\mathbf{T}}^{Z_2}(Z;T,w)$ that satisfies exactly the jump conditions inside
$D_{Z_2}(\delta)$ can be taken in the form
\begin{align}
\breve{\mathbf{T}}^{Z_2}(Z;T,w):=\mathbf{Y}^{Z_2}(Z;T,w)\mathbf{U}(\zeta_{Z_2};p,\tau){\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z_2(w);w)\sigma_3}
\end{align} where
$\mathbf{Y}^{Z_2}(Z;T,w)$ is any matrix analytic in
$D_{Z_2}(\delta)$. To specify
$\mathbf{Y}^{Z_2}(Z;T,w)$, we write the outer parametrix in terms of the conformal map
$Z\mapsto\varphi_{Z_2}(Z;w)$ by noting that
$\varphi_{Z_2}(Z;w)^{-{\mathrm{i}} p\sigma_3}$ is an exact solution of the jump conditions for
$\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)$ in
$D_{Z_2}(\delta)$. Hence we may write
\begin{align}
\breve{\mathbf{T}}^\mathrm{out}(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z_2(w);w)\sigma_3} = \mathbf{H}^{Z_2}(Z;T,w)\varphi_{Z_2}(Z;w)^{-{\mathrm{i}} p\sigma_3},
\end{align} where
$\mathbf{H}^{Z_2}(Z;T,w)$ is analytic for
$Z\in D_{Z_2}(\delta)$ and is uniformly bounded on this disk as
$T\to+\infty$. We then define the parametrix near
$Z=Z_2(w)$ by the formula (3.66) in which we take
\begin{align}
\mathbf{Y}^{Z_2}(Z;T,w):=\mathbf{H}^{Z_2}(Z;T,w)T^{\frac{1}{6}{\mathrm{i}} p\sigma_3}.
\end{align} For
$Z\in D_{Z_2}(\delta)$, the parametrix
$\breve{\mathbf{T}}^{Z_2}(Z;T,w)$ satisfies
\begin{align}
\breve{\mathbf{T}}^{Z_2}(Z;T,w)\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)^{-1} =
\mathbf{H}^{Z_2}(Z;T,w)T^{\frac{1}{6}{\mathrm{i}} p\sigma_3}\mathbf{U}(\zeta_{Z_2};p,\tau)\zeta_{Z_2}^{{\mathrm{i}} p\sigma_3}T^{-\frac{1}{6}{\mathrm{i}} p\sigma_3}\mathbf{H}^{Z_2}(Z;T,w)^{-1},
\end{align} in which we note that the product
$\mathbf{U}(\zeta_{Z_2};p,\tau)\zeta_{Z_2}^{{\mathrm{i}} p\sigma_3}$ has an asymptotic expansion in descending powers of
$\zeta_{Z_2}$ according to (2.22), and
$\zeta_{Z_2}$ is large of size
$T^{\frac{1}{6}}$ when
$Z\in \partial D_{Z_2}(\delta)$.
An explicit formula for
$\mathbf{H}^{Z_2}(Z;T,w)$ is:
\begin{align}\begin{aligned}
\mathbf{H}^{Z_2}(Z;T,w):={\mathrm{e}}^{{\mathrm{i}} q\gamma(w)\sigma_3}\mathbf{L}(Z;T,w){\mathrm{e}}^{-{\mathrm{i}}(p{\ell}(Z;w)+q{f}(Z;w))\sigma_3}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z_2(w);w)\sigma_3}\\
{}\cdot (Z-Z_1(w))^{{\mathrm{i}} p\sigma_3}\left(\frac{\varphi_{Z_2}(Z;w)}{Z-Z_2(w)}\right)^{{\mathrm{i}} p\sigma_3}.
\end{aligned}
\end{align} Constructing an inner parametrix in
$D_{Z_1}(\delta)$ is slightly more involved due to the presence of jump conditions satisfied by
$\mathbf{T}(Z;T,w)$ across
$\Sigma^+\cup\Sigma^-$. Recall that
$h(Z;w)$ is analytic in Z for
$Z\in D_{Z_1}(\delta) \setminus \Sigma$, and the values of
$h(Z;w)$ in the left (
$h_+(Z;w)$) and right (
$h_-(Z;w)$) half-disks both admit analytic continuation to the full disk
$D_{Z_1}(\delta)$ where, according to (3.5) and (3.10) we have
We will base the construction of the inner parametrix in
$D_{Z_1}(\delta)$ on a T-independent conformal mapping
$\varphi_{Z_1}$ constructed from
$h_{-}(Z;w)$ by choosing the solution of
that is analytic in
$D_{Z_1}(\delta)$ and satisfies
$\varphi_{Z_1}'(Z_1(w),w) \lt 0$, and we introduce the rescaled conformal coordinate
$\zeta_{Z_1}:= T^{\frac{1}{6}}\varphi_{Z_1}(Z;w)$. Letting
$\Omega_\infty$ denote the region near
$Z_1(w)$ complementary to
$R_\Sigma^+\cup\Omega_+\cup R_\Sigma^-\cup\Omega_-\cup L_\Sigma^+\cup L_\Sigma^-$ (i.e., to the left of
$\Sigma=\Sigma^+\cup\Sigma^-$), we then consider the matrix
\begin{align}
\mathbf{U}^{Z_1}:=
\begin{cases}
\mathbf{T}(Z;T,w) \left(\dfrac{\mathfrak{a}}{\mathfrak{b}}\right)^{\sigma_3}({\mathrm{i}} \sigma_2) {\mathrm{e}}^{{\mathrm{i}} T^{1/3} h_{-}(Z_1(w);w)\sigma_3}{\mathrm{i}}^{\sigma_3},&\quad Z\in (R_\Sigma^+ \cup \Omega_+)\cap D_{Z_1}(\delta),\\
\mathbf{T}(Z;T,w) \left(\dfrac{\mathfrak{b}}{\mathfrak{a}}\right)^{\sigma_3}({\mathrm{i}} \sigma_2) {\mathrm{e}}^{{\mathrm{i}} T^{1/3} h_{-}(Z_1(w);w)\sigma_3}{\mathrm{i}}^{\sigma_3},&\quad Z\in (R_\Sigma^- \cup \Omega_-) \cap D_{Z_1}(\delta),\\
\mathbf{T}(Z;T,w) {\mathrm{e}}^{{\mathrm{i}} T^{1/3} h_{-}(Z_1(w);w)\sigma_3}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)\sigma_3}{\mathrm{i}}^{\sigma_3} ,&\quad Z\in (L_\Sigma^+\cup L_\Sigma^- \cup \Omega_\infty)\cap D_{Z_1}(\delta).\\
\end{cases}
\end{align} Using (3.71), one checks that this transformation results in a trivial identity jump for
$\mathbf{U}^{Z_1}$ across
$\Sigma^+\cup\Sigma^-$. Now, recall the values
$\bar{p}$ and
$\bar{\tau}$ defined in Corollary 1.19. Taking into account that the conformal coordinate
$\zeta_{Z_1}:= T^{\frac{1}{6}}\varphi_{Z_1}(Z;w)$ satisfies
$\varphi_{Z_1}'(Z_1(w);w) \lt 0$, we see that the jump conditions satisfied by
$\mathbf{U}^{Z_1}$ match those shown in Figure 13 with
$(p,\tau)$ replaced by
$(\bar{p},\bar{\tau})$ (equivalent to swapping
$\mathfrak{a}$ and
$\mathfrak{b}$). These jump conditions therefore coincide exactly with those of the standard parabolic cylinder parametrix
$\mathbf{U}(\zeta;\bar{p},\bar{\tau})$ solving [Reference Miller25, Riemann–Hilbert Problem A.1].
Thus, an inner parametrix
$\breve{\mathbf{T}}^{Z_1}(Z;T,w)$ that satisfies exactly the jump conditions of
$\mathbf{T}(Z;T,w)$ within
$D_{Z_1}(\delta)$ can be taken in the form
\begin{align}\begin{aligned}
\breve{\mathbf{T}}^{Z_1}(Z;T,w): & =\mathbf{Y}^{Z_1}(Z;T,w)\mathbf{U}(\zeta_{Z_1};\bar{p},\bar{\tau}){\mathrm{i}}^{-\sigma_3}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h_-(Z_1(w);w)\sigma_3}\\
& \quad {}\cdot\begin{cases}
({\mathrm{i}}\sigma_2)^{-1}\left(\dfrac{\mathfrak{b}}{\mathfrak{a}}\right)^{\sigma_3},&Z\in (R_\Sigma^+\cup\Omega_+)\cap D_{Z_1}(\delta),\\
({\mathrm{i}}\sigma_2)^{-1}\left(\dfrac{\mathfrak{a}}{\mathfrak{b}}\right)^{\sigma_3},&Z\in (R_\Sigma^-\cup\Omega_-)\cap D_{Z_1}(\delta),\\
{\mathrm{e}}^{{\mathrm{i}} T^{1/3}\kappa(w)\sigma_3},&Z\in (L_\Sigma^+\cup L_\Sigma^-\cup\Omega_\infty)\cap D_{Z_1}(\delta),
\end{cases}
\end{aligned}
\end{align} where
$\mathbf{Y}^{Z_1}(Z;T,w)$ is a matrix factor analytic in
$D_{Z_1}(\delta)$. To determine
$\mathbf{Y}^{Z_1}(Z;T,w)$, we first define a matrix
$\breve{\mathbf{U}}^{Z_1}$ within
$D_{Z_1}(\delta)$ exactly as in (3.73) replacing
$\mathbf{T}(Z;T,w)$ with
$\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)$. Then one checks that
$\breve{\mathbf{U}}^{Z_1}$ is analytic in
$D_{Z_1}(\delta)$ except on
$I\cap D_{Z_1}(\delta)$ where it satisfies
$\breve{\mathbf{U}}^{Z_1}_+=\breve{\mathbf{U}}^{Z_1}_-\mathfrak{b}^{2\sigma_3}$. Since I corresponds to the ray
$\arg(-\varphi_{Z_1})=0$ oriented away from the origin in the
$\varphi_{Z_1}$-plane and
$\mathfrak{b}^{2\sigma_3}={\mathrm{e}}^{-2\pi\bar{p}\sigma_3}$, it follows that
$\breve{\mathbf{U}}^{Z_1}=\mathbf{H}^{Z_1}(Z;T,w)\varphi_{Z_1}(Z;w)^{-{\mathrm{i}}\bar{p}\sigma_3}$ where
$\mathbf{H}^{Z_1}(Z;T,w)$ is holomorphic in
$D_{Z_1}(\delta)$ and bounded as
$T\to+\infty$. By analogy with (3.68) we define the inner parametrix
$\breve{\mathbf{T}}^{Z_1}(Z;T,w)$ by (3.74) in which the holomorphic factor
$\mathbf{Y}^{Z_1}(Z;T,w)$ is given by
\begin{align}
\mathbf{Y}^{Z_1}(Z;T,w):=\mathbf{H}^{Z_1}(Z;T,w)T^{\frac{1}{6}{\mathrm{i}}\bar{p}\sigma_3}.
\end{align} By analogy with (3.69), for
$Z\in D_{Z_1}(\delta)$, the parametrix
$\breve{\mathbf{T}}^{Z_1}(Z;T,w)$ satisfies
\begin{align}
\breve{\mathbf{T}}^{Z_1}(Z;T,w)\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)^{-1}=
\mathbf{H}^{Z_1}(Z;T,w)T^{\frac{1}{6}{\mathrm{i}}\bar{p}\sigma_3}\mathbf{U}(\zeta_{Z_1};\bar{p},\bar{\tau})\zeta_{Z_1}^{{\mathrm{i}} \bar{p}\sigma_3}T^{-\frac{1}{6}{\mathrm{i}}\bar{p}\sigma_3}\mathbf{H}^{Z_1}(Z;T,w)^{-1}.
\end{align} To find an analogue of (3.70) for
$\mathbf{H}^{Z_1}(Z;T,w)$ valid for
$Z\in D_{Z_1}(\delta)$, it is enough to first assume that
$Z\in\Omega_\infty$ and then extend the result to
$D_{Z_1}(\delta)$ by analytic continuation. By definition,
\begin{align}\begin{aligned}
\mathbf{H}^{Z_1}(Z;T,w):& ={\mathrm{e}}^{{\mathrm{i}} q\gamma(w)\sigma_3}\mathbf{L}(Z;T,w){\mathrm{e}}^{-{\mathrm{i}}(p{\ell}(Z;w)+q{f}(Z;w))\sigma_3}\left(\frac{Z_1(w)-Z}{Z_2(w)-Z}\right)^{{\mathrm{i}} p\sigma_3}\\
& \quad {}\cdot{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z_1(w);w)\sigma_3}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)\sigma_3}{\mathrm{i}}^{\sigma_3}\varphi_{Z_1}(Z;w)^{{\mathrm{i}}\bar{p}\sigma_3},\quad Z\in\Omega_\infty\cap D_{Z_1}(\delta).
\end{aligned}
\end{align} Since the jump matrix for
$\mathbf{L}(Z;T,w)$ across Σ is constant (see (3.60)), upon identifying
$\mathbf{L}(Z;T,w)$ for
$Z\in\Omega_\infty$ with
$\mathbf{L}_+(Z;T,w)$, we see that
$\mathbf{L}(Z;T,w)=\mathbf{L}_+(Z;T,w)$ has an analytic continuation to all of
$D_{Z_1}(\delta)$ that we will denote by the same symbol
$\mathbf{L}_+(Z;T,w)$ and that is given by (3.61) in which the branch cut of
$y(Z;w)$ is deformed toward the right near
$Z=Z_1(w)$ to coincide with the corresponding arc of
$\partial D_{Z_1}(\delta)$. On the other hand, neither
${f}(Z;w)$ nor
${\ell}(Z;w)$ can be analytically continued from
$\Omega_\infty$ to all of
$D_{Z_1}(\delta)$ without a branch cut appearing in each case that we take to agree with
$I\cap D_{Z_1}(\delta)$. However, using (3.37) to continue
${f}(Z;w)={f}_+(Z;w)$ from
$\Omega_\infty\cap D_{Z_1}(\delta)$ through
$\Sigma^\pm$ to
$D_{Z_1}(\delta)\setminus I$ shows that the continuation has the form
where
${f}^0(Z;w)$ is holomorphic in
$D_{Z_1}(\delta)$ and the logarithm is the principal branch. In fact, we can use (3.35) to express
${f}^0(Z;w)$ explicitly as
\begin{align}
{f}^0(Z;w)=\frac{1}{2}\log\left(\frac{(V-R(Z;w))^2+(Z-U)^2}{(V+R(Z;w))^2+(Z-U)^2}\cdot\omega^{f}(Z;w)^2\cdot\frac{(Z_1(w)-Z)^2}{\varphi_{Z_1}(Z;w)^2}\right).
\end{align} This formula holds for
$Z\in \Omega_\infty\cap D_{Z_1}(\delta)$, but to continue analytically to
$D_{Z_1}(\delta)\setminus\Omega_\infty$ one just replaces
$R(Z;w)$ with
$-R(Z;w)$. In particular, to compute the value at the center of the disk, one simply takes
$R(Z;w)$ as
$R_+(Z_1(w);w) \lt 0$ and obtains
\begin{align}
{f}^0(Z_1(w);w)=\frac{1}{2}\ln\left(\frac{(V-R_+(Z_1(w);w))^2+(Z_1(w)-U)^2}{(V+R_+(Z_1(w);w))^2+(Z_1(w)-U)^2}\cdot\frac{\omega^{f}_+(Z_1(w);w)^2}{\varphi'_{Z_1}(Z_1(w);w)^2}\right),
\end{align} wherein
$\omega^{f}_+(Z_1(w);w) \gt 0$ is defined by (3.34). Similarly using (3.50) to continue
${\ell}(Z;w)={\ell}_+(Z;w)$ from
$\Omega_\infty\cap D_{Z_1}(\delta)$ through
$\Sigma^\pm$ to
$D_{Z_1}(\delta)\setminus I$ shows that
where
${\ell}^0(Z;w)$ is holomorphic in
$D_{Z_1}(\delta)$. Using (3.55), we can explicitly write
${\ell}^0(Z;w)$ in the form
\begin{align}\begin{aligned}
{\ell}^0(Z;w)& =\frac{1}{2}\mu(w)\\
& \quad {}+\log\left(\frac{(Z-U)(Z_2(w)-U)+(V+R(Z;w))(V-R(Z_2(w);w))}{(Z-U)(Z_2(w)-U)+(V-R(Z;w))(V-R(Z_2(w);w))}\cdot\frac{\omega^{{\ell},1}(Z;w)}{Z_2(w)-Z}\cdot \frac{(Z_1(w)-Z)^2}{\varphi_{Z_1}(Z;w)^2}\right).
\end{aligned}
\end{align} Again, this holds as written for
$Z\in D_{Z_1}(\delta)\cap\Omega_\infty$ but to analytically continue to
$D_{Z_1}(\delta)\setminus\Omega_\infty$ one just replaces
$R(Z;w)$ with
$-R(Z;w)$. Evaluating at
$Z=Z_1(w)$ means replacing
$R(Z;w)$ with
$R_+(Z_1(w);w) \lt 0$ and computing a limit of a difference quotient:
\begin{align}\begin{aligned}
{\ell}^0(Z_1(w);w) & =\frac{1}{2}\mu(w)\\
& \quad {}+\ln\left(\frac{(Z_1(w)-U)(Z_2(w)-U)+(V+R_+(Z_1(w);w))(V-R(Z_2(w);w))}{(Z_1(w)-U)(Z_2(w)-U)+(V-R_+(Z_1(w);w))(V-R(Z_2(w);w))}\right.\\
& \quad \left.{}\cdot\frac{\omega^{{\ell},1}_+(Z_1(w);w)}{(Z_2(w)-Z_1(w))\varphi_{Z_1}'(Z_1(w);w)^2}\right),
\end{aligned}
\end{align} wherein
$\omega^{{\ell},1}_+(Z_1(w);w) \gt 0$ is given by (3.54). Therefore, using the identity
$q=\bar{p}-p$, recalling that
$\mathbf{L}_+(Z;T,w)$ is interpreted as a holomorphic function in
$D_{Z_1}(\delta)$, and using (3.71),
\begin{align}\begin{aligned}
\mathbf{H}^{Z_1}(Z;T,w)={\mathrm{e}}^{{\mathrm{i}} q\gamma(w)\sigma_3}\mathbf{L}_+(Z;T,w){\mathrm{e}}^{-{\mathrm{i}}(p{\ell}^0(Z;w)+q{f}^0(Z;w))\sigma_3}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h_+(Z_1(w);w)\sigma_3}{\mathrm{i}}^{\sigma_3}\\
{}\cdot (Z_2(w)-Z)^{-{\mathrm{i}} p\sigma_3}\left(\frac{Z_1(w)-Z}{\varphi_{Z_1}(Z;w)}\right)^{{\mathrm{i}} p\sigma_3},\quad Z\in D_{Z_1}(\delta).
\end{aligned}
\end{align}3.3.3. Inner parametrices near
$Z=Z_0(w)$ and
$Z=Z_0(w)^*$
It suffices to construct a parametrix for
$\mathbf{T}(Z;T,w)$ for Z near
$Z_0(w)$ and obtain a corresponding parametrix for Z near
$Z_0(w)^*$ using Schwarz reflection. Let
$D_{Z_0}(\delta)$ denote a disk of radius δ > 0 centred at
$Z=Z_0(w)$. We define a conformal mapping on
$D_{Z_0}(\delta)$ by setting
$\varphi_{Z_0}(Z;w):=(2{\mathrm{i}} (h(Z_0(w);w)-h(Z;w)))^\frac{2}{3}$, analytically continued to
$Z\in D_{Z_0}(\delta)$ from the arc along which
$h(\diamond;w)-h(Z_0(w);w)$ is positive imaginary, and we choose
$C_{\Gamma,L}^+$ to coincide with this arc within
$D_{Z_0}(\delta)$. Then within
$D_{Z_0}(\delta)$ we choose
$\Sigma^+$ to be mapped by
$\varphi=\varphi_{Z_0}(Z;w)$ to
$\varphi\le 0$, we choose
$C_{\Sigma,L}^+$ to be mapped to
$\arg(\varphi)=\frac{2}{3}\pi$, and fusing
$C_{\Sigma,R}^+$ and
$C_{\Gamma,R}^+$ locally we choose both to be mapped to
$\arg(\varphi)=-\frac{2}{3}\pi$. We define a rescaling of the conformal coordinate by
$\zeta_{Z_0}:=T^\frac{2}{9}\varphi_{Z_0}(Z;w)$.
Using the facts that
$2h(Z_0(w);w)=h_+(Z_0(w);w)+h_-(Z_0(w);w)=\kappa(w)$ (as
$h(\diamond;w)$ is continuous at
$Z_0(w)$) and that
$-\mathfrak{a}^3/\mathfrak{b}-\mathfrak{ab}=-\mathfrak{a}/\mathfrak{b}$ (as
$\mathfrak{a}^2+\mathfrak{b}^2=1$), one can then check that the matrix
$\mathbf{P}(Z;T,w)$ defined by
\begin{align}
\mathbf{P}(Z;T,w):=\mathbf{T}(Z;T,w)({\mathrm{i}}\sigma_2)\left(\frac{\mathfrak{b}}{\mathfrak{a}}\right)^{\frac{1}{2}\sigma_3}{\mathrm{e}}^{\frac{1}{2}{\mathrm{i}} T^{1/3}\kappa(w)\sigma_3}
\end{align} satisfies the following jump conditions within
$D_{Z_0}(\delta)$:
\begin{align}
\mathbf{P}_+(Z;T,w)=\mathbf{P}_-(Z;T,w)\begin{bmatrix}1 & {\mathrm{e}}^{-\zeta_{Z_0}^{3/2}}\\0 & 1\end{bmatrix},\quad \arg(\zeta_{Z_0})=0,
\end{align}
\begin{align}
\mathbf{P}_+(Z;T,w)=\mathbf{P}_-(Z;T,w)\begin{bmatrix}1&0\\{\mathrm{e}}^{\zeta_{Z_0}^{3/2}}&1\end{bmatrix},\quad\arg(\zeta_{Z_0})=\pm\frac{2\pi}{3},
\end{align}and
\begin{align}
\mathbf{P}_+(Z;T,w)=\mathbf{P}_-(Z;T,w)\begin{bmatrix}0 & 1\\-1 & 0\end{bmatrix},\quad\arg(-\zeta_{Z_0})=0,
\end{align} where we are orienting all four rays in the direction of increasing real part of
$\zeta_{Z_0}$. Defining a matrix
$\breve{\mathbf{P}}^{\mathrm{out}}(Z;T,w)$ by the right-hand side of (3.85) replacing
$\mathbf{T}(Z;T,w)$ with
$\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)$ we see that
$\breve{\mathbf{P}}^\mathrm{out}(Z;T,w)$ is analytic within
$D_{Z_0}(\delta)$ except for the arc where
$\varphi_{Z_0}(Z;w)\le 0$, along which the same jump condition as in (3.88) is satisfied, and
$\breve{\mathbf{P}}^\mathrm{out}(Z;T,w)$ exhibits negative one-fourth power singularities near $Z_0(w)$. Therefore, the matrix function defined on
$D_{Z_0}(\delta)$ by
\begin{align}
\mathbf{H}^{Z_0}(Z;T,w):=\breve{\mathbf{P}}^\mathrm{out}(Z;T,w)\mathbf{V}^{-1}\varphi_{Z_0}(Z;w)^{-\frac{1}{4}\sigma_3},\quad Z\in D_{Z_0}(\delta),\quad \mathbf{V}:=\frac{1}{\sqrt{2}}\begin{bmatrix}1&-{\mathrm{i}}\\-{\mathrm{i}} & 1\end{bmatrix}
\end{align} is actually analytic on the whole disk, and it is easy to check that it is uniformly bounded on the disk in the limit
$T\to+\infty$. Now let
$\mathbf{A}(\zeta)$ denote the standard Airy parametrix analytic for
$\mathrm{Im}(\zeta)\neq 0$ except across the rays
$\arg(\zeta)=\pm\frac{2}{3}\pi$, satisfying the exact jump conditions (3.86)–(3.88), and satisfying the asymptotic condition
\begin{align}
\mathbf{A}(\zeta)\mathbf{V}^{-1}\zeta^{-\frac{1}{4}\sigma_3}=\mathbb{I} + \begin{bmatrix}O(\zeta^{-3}) & O(\zeta^{-1})\\O(\zeta^{-2}) & O(\zeta^{-3})\end{bmatrix},\quad\zeta\to\infty,
\end{align} (i.e.,
$\mathbf{A}(\zeta)$ is the unique solution of Riemann–Hilbert Problem 4 of [Reference Bothner and Miller17], for instance; see [Reference Bothner and Miller17, Appendix B] for full details). We then define the parametrix for
$\mathbf{T}(Z;T,w)$ in
$D_{Z_0}(\delta)$ by
\begin{align}
\breve{\mathbf{T}}^{Z_0}(Z;T,w):=\mathbf{H}^{Z_0}(Z;T,w)T^{-\frac{1}{18}\sigma_3}\mathbf{A}(T^{\frac{2}{9}}\varphi_{Z_0}(Z;w)){\mathrm{e}}^{-\frac{1}{2}{\mathrm{i}} T^{1/3}\kappa(w)\sigma_3}\left(\frac{\mathfrak{b}}{\mathfrak{a}}\right)^{-\frac{1}{2}\sigma_3}({\mathrm{i}}\sigma_2)^{-1},\quad Z\in D_{Z_0}(\delta).
\end{align} Then, comparing with the outer parametrix, we have for
$Z\in D_{Z_0}(\delta)$,
\begin{align}
\breve{\mathbf{T}}^{Z_0}(Z;T,w)\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)^{-1}=\mathbf{H}^{Z_0}(Z;T,w)T^{-\frac{1}{18}\sigma_3}\mathbf{A}(\zeta_{Z_0})\mathbf{V}^{-1}\zeta_{Z_0}^{-\frac{1}{4}\sigma_3}T^{\frac{1}{18}\sigma_3}\mathbf{H}^{Z_0}(Z;T,w)^{-1},
\end{align} where we recall
$\zeta_{Z_0}=T^\frac{2}{9}\varphi_{Z_0}(Z;w)$. Using (3.90), the fact that
$\varphi_{Z_0}(\diamond;w)$ is bounded away from zero on
$\partial D_{Z_0}(\delta)$, and the fact that
$\mathbf{H}^{Z_0}(\diamond;T,w)$ is bounded as
$T\to+\infty$ and has unit determinant yields
\begin{align}
\sup_{Z\in\partial D_{Z_0}(\delta)}\left\|\breve{\mathbf{T}}^{Z_0}(Z;T,w)\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)^{-1}-\mathbb{I}\right\|=O(T^{-\frac{1}{3}}),\quad T\to+\infty.
\end{align} Since
$\mathbf{T}(Z;T,w)$ satisfies the exact Schwarz symmetry
$\mathbf{T}(Z^*;T,w)=\sigma_2\mathbf{T}(Z;T,w)^*\sigma_2$, we obtain an inner parametrix near
$Z_0^*$ by applying the same reflection to
$\breve{\mathbf{T}}^{Z_0}(Z;T,w)$.
3.3.4. Global parametrix
We define the global parametrix
$\breve{\mathbf{T}}(Z;T,w)$ by
\begin{align}
\breve{\mathbf{T}}(Z;T,w):=\begin{cases}
\breve{\mathbf{T}}^{Z_1}(Z;T,w),&\quad Z\in D_{Z_1}(\delta),\\
\breve{\mathbf{T}}^{Z_2}(Z;T,w),&\quad Z\in D_{Z_2}(\delta),\\
\breve{\mathbf{T}}^{Z_0}(Z;T,w),&\quad Z\in D_{Z_0}(\delta),\\
\sigma_2\breve{\mathbf{T}}^{Z_0}(Z^*;T,w)^*\sigma_2,&\quad Z\in D_{Z_0^*}(\delta),\\
\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w),&\quad Z \in \mathbb{C} \setminus \left(I \cup D_{Z_1}(\delta) \cup D_{Z_2}(\delta)\cup D_{Z_0}(\delta) \cup D_{Z_0^*}(\delta) \right).
\end{cases}
\end{align}
3.4. Asymptotics as
$T\to+\infty$
As in Section 2.3 we compare the matrix
$\mathbf{T}(Z;T,w)$ with the global parametrix
$\breve{\mathbf{T}}(Z;T,w)$ by defining the error matrix
$\mathbf{F}(Z;T,w):=\mathbf{T}(Z;T,w)\breve{\mathbf{T}}(Z;T,w)^{-1}$ whenever both of the factors are defined. It is straightforward to verify that
$\mathbf{F}(Z;T,w)$ satisfies a small-norm Riemann–Hilbert problem with the jump contour
$\Sigma_{\mathbf{F}}$ consisting of the disk boundaries
$\partial D_{Z_1}(\delta)$,
$\partial D_{Z_2}(\delta)$,
$\partial D_{Z_0}(\delta)$, and
$\partial D_{Z_0^*}(\delta)$, together with the restrictions of the arcs
$C^{\pm}_{\Gamma,L}$,
$C^{\pm}_{\Gamma,R}$,
$C^{\pm}_{\Sigma,L}$, and
$C^{\pm}_{\Sigma,R}$ to the exterior of the four disks.
The jump matrix
$\mathbf{V}^{\mathbf{F}}$ for
$\mathbf{F}(Z;T,w)$ is expressed on the latter arcs as
\begin{align}
\begin{aligned}
\mathbf{V}^{\mathbf{F}}(Z;T,w) &= \mathbf{F}_-(Z;T,w)^{-1} \mathbf{F}_+(Z;T,w)\\
&= \breve{\mathbf{T}}_-(Z;T,w){\mathbf{T}}_-(Z;T,w)^{-1} {\mathbf{T}}_+(Z;T,w)\breve{\mathbf{T}}_+(Z;T,w)^{-1}\\
&=\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)\mathbf{T}_-(Z;T,w)^{-1}\mathbf{T}_+(Z;T,w)\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)^{-1},
\end{aligned}
\end{align} because on these arcs
$\breve{\mathbf{T}}(Z;T,w)=\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)$, which has no jump discontinuity. The restriction to the exterior of the disks makes the conjugating factors bounded independently of
$T\to+\infty$, while the jump matrix
$\mathbf{T}_-(Z;T,w)^{-1}\mathbf{T}_+(Z;T,w)$ for
$\mathbf{T}(Z;T,w)$ is a uniformly exponentially small perturbation of the identity. Therefore there is a positive constant
$K(\varepsilon) \gt 0$ such that
\begin{align}
\sup_{Z\in\left(C^{\pm}_{\Gamma,L}\cup C^{\pm}_{\Gamma,R} \cup C^{\pm}_{\Sigma,L}\cup C^{\pm}_{\Sigma,R}\right)\cap \Sigma_\mathbf{F}} \| \mathbf{V}^\mathbf{F}(Z;T,w) - \mathbb{I} \| = O({\mathrm{e}}^{-K(\varepsilon) T^{1/3}}),\quad T\to +\infty,
\end{align} holds uniformly for
$|w| \le w_\mathrm{c}-\varepsilon$ and normalized parameters
$(\mathfrak{a},\mathfrak{b})$ satisfying the double-sided inequality
$\varepsilon\le \mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$. For the rest of the section we assume that these inequalities on w and
$\mathfrak{b}/\mathfrak{a}$ hold for some ɛ > 0 and use the notation
$O_\varepsilon(\diamond)$ introduced after (2.40) to indicate the dependence of implied constants on ɛ.
Taking the disk boundary
$\partial D_{Z_0}(\delta)\subset\Sigma_\mathbf{F}$ to have clockwise orientation, the jump matrix on this circle takes the form
$\mathbf{V}^\mathbf{F}(Z;T,w)=\breve{\mathbf{T}}^{Z_0}(Z;T,w)\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)^{-1}$, and using (3.93) shows that
$\mathbf{V}^\mathbf{F}-\mathbb{I}$ is uniformly
$O_\varepsilon(T^{-\frac{1}{3}})$ on this circle. By Schwarz reflection a similar estimate holds for
$\mathbf{V}^\mathbf{F}-\mathbb{I}$ on
$\partial D_{Z_0^*}(\delta)\subset\Sigma_\mathbf{F}$.
The discrepancy
$\mathbf{V}^\mathbf{F}-\mathbb{I}$ is dominated by its behaviour on the boundaries of the disks
$D_{Z_j}(\delta)$,
$j=1,2$, which we also take to be clockwise-oriented. On these two circles we have
$\mathbf{V}^\mathbf{F}(Z;T,w)=\breve{\mathbf{T}}^{Z_j}(Z;T,w)\breve{\mathbf{T}}^\mathrm{out}(Z;T,w)^{-1}$. Therefore, the formulæ (3.69) and (3.76), together with the basic estimate
$\mathbf{U}(\zeta;p,\tau)\zeta^{{\mathrm{i}} p\sigma_3}=\mathbb{I}+O_\varepsilon(\zeta^{-1})$ (see (2.22); here valid also with
$(p,\tau)$ replaced by
$(\bar{p},\bar{\tau})$ due to the double-sided inequality
$\varepsilon\le\mathfrak{b}/\mathfrak{a}\le\varepsilon^{-1}$), in which ζ is large, of size
$T^\frac{1}{6}$, when
$Z\in\partial D_{Z_j}(\delta)$, immediately give
\begin{align}
\sup_{Z\in\partial D_{Z_1}(\delta) \cup \partial D_{Z_2}(\delta)} \| \mathbf{V}^\mathbf{F}(Z;T,w) - \mathbb{I} \| = O_\varepsilon(T^{-\frac{1}{6}}),\quad T\to +\infty.
\end{align} This estimate is sharp, and we will extract a leading term proportional to
$T^{-\frac{1}{6}}$ below.
Just as in Section 2.3, from the
$L^2(\Sigma_\mathbf{F})$ theory of small-norm Riemann–Hilbert problems it follows that
\begin{align}
\mathbf{F}_{-}(\diamond ; T, w)-\mathbb{I}=O_\varepsilon(T^{-\frac{1}{6}}),\quad T \rightarrow+\infty
\end{align} holds in the
$L^2(\Sigma_\mathbf{F})$ sense. Therefore, again every coefficient
\begin{align}
\mathbf{F}^{[m]}(T, w):=-\frac{1}{2 \pi {\mathrm{i}}} \int_{\Sigma_{\mathbf{F}}} \mathbf{F}_{-}(Z; T, w)\left(\mathbf{V}^{\mathbf{F}}(Z ; T, w)-\mathbb{I}\right) Z^{m-1} \mathrm{d} Z
\end{align} in the Laurent series for
$\mathbf{F}(Z ; T, w)$
\begin{align}
\mathbf{F}(Z ; T, w)=\mathbb{I}+\sum_{m=1}^{\infty} Z^{-m} \mathbf{F}^{[m]}(T, w),
\end{align} which is convergent for
$|Z|$ sufficiently large, satisfies
$\|\mathbf{F}^{[m]}(T, w)\|=O_\varepsilon(T^{-\frac{1}{6} })$ as
$T \rightarrow+\infty$.
Now, note that
$\breve{\mathbf{T}}(Z;T,w)=\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)$ for
$|Z|$ large enough, so that
$\mathbf{T}(Z;T,w) = \mathbf{F}(Z;T,w) \breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)$ for such Z. Thus, from (3.28) we obtain
\begin{align}\begin{aligned}
&\Psi(T^\frac{2}{3} w , T) \\
& \quad = 2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)} T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z T_{12}(Z ; T, w){\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z ; w)}\\
& \quad = 2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)} T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z \left( F_{11}(Z;T,w) \breve{T}^{\mathrm{out}}_{12}(Z;T,w) + F_{12}(Z;T,w) \breve{T}^{\mathrm{out}}_{22}(Z;T,w) \right){\mathrm{e}}^{{\mathrm{i}} T^{1/3} g(Z ; w)}\\
& \quad = 2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)} T^{-\frac{1}{3}}\lim _{Z \rightarrow \infty} Z \left( \breve{T}^{\mathrm{out}}_{12}(Z;T,w) + F_{12}(Z;T,w) \right)\\
& \quad = 2 {\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)} T^{-\frac{1}{3}}\left(\lim _{Z \rightarrow \infty} Z \breve{T}^{\mathrm{out}}_{12}(Z;T,w) + F^{[1]}_{12}(T,w)\right),
\end{aligned}
\end{align} where we have also used the properties
$g(Z;w)=O(Z^{-1})$,
$\breve{\mathbf{T}}^{\mathrm{out}}(Z;T,w)-\mathbb{I} = O(Z^{-1})$, and
$\mathbf{F}(Z;T,w)-\mathbb{I} = O(Z^{-1})$ as
$Z\to\infty$.
This has the form of a leading term plus a correction proportional to
$T^{-\frac{1}{3}}F^{[1]}_{12}(T,w)$, which, since
$\|\mathbf{F}^{[m]}(T,w)\|=O_\varepsilon(T^{-\frac{1}{6}})$ for all
$m\ge 1$, is of size
$O_\varepsilon(T^{-\frac{1}{2}})$. We will now compute the leading term explicitly, and also expand the error term to obtain a sub-leading term. For the leading term, we observe that
\begin{align}
L_{12}(Z;T,w) = -\frac{\operatorname{Im}(Z_0(w))}{2Z} {\mathrm{e}}^{-{\mathrm{i}} (T^{1/3}\kappa(w) +p \mu(w))} + O(Z^{-2}),\quad Z\to \infty,
\end{align} and that
$\operatorname{Im}(Z_0(w))= \tfrac{1}{3}\sqrt{w_\mathrm{c}^2-w^{2}}$ from (1.79). Using this in (3.101) while recalling (3.38) and the form of the outer parametrix given in (3.63) shows that the leading term is exactly
\begin{align}
2{\mathrm{i}}{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}T^{-\frac{1}{3}}\lim_{Z\to\infty}Z\breve{T}^\mathrm{out}_{12}(Z;T,w)=-{\mathrm{i}}{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}{\mathrm{e}}^{2{\mathrm{i}} q\gamma(w)}{\mathrm{e}}^{-{\mathrm{i}} (T^{1/3}\kappa(w)+p\mu(w))}T^{-\frac{1}{3}}\frac{1}{3}\sqrt{w_\mathrm{c}^2-w^2},
\end{align} in which
$q=\ln(\mathfrak{a}/\mathfrak{b})=\ln(|a/b|)$ and
$2\pi p=\ln(1+\mathfrak{b}^2/\mathfrak{a}^2)=\ln(1+|b/a|^2)$.
For the sub-leading term, we use (3.99) to obtain
\begin{align}\begin{aligned}
F^{[1]}_{12}(T,w) & =-\frac{1}{2\pi{\mathrm{i}}}\int_{\Sigma_\mathbf{F}}V^\mathbf{F}_{12}(Z;T,w)\, \mathrm{d} Z\\
& \quad {}-\frac{1}{2\pi{\mathrm{i}}}\int_{\Sigma_\mathbf{F}}((F_{11-}(Z;T,w)-1)V^\mathbf{F}_{12}(Z;T,w) + F_{12-}(Z;T,w)(V^\mathbf{F}_{22}(Z;T,w)-1))\, \mathrm{d} Z.
\end{aligned}
\end{align} Using (3.98) and
$\mathbf{V}^\mathbf{F}(\diamond;T,w)-\mathbb{I}=O_\varepsilon(T^{-\frac{1}{6}})$ in
$L^\infty(\Sigma_\mathbf{F})$ and hence also in
$L^2(\Sigma_\mathbf{F})$ for
$\Sigma_\mathbf{F}$ compact, Cauchy-Schwarz implies that
\begin{align}
\begin{split}
F^{[1]}_{12}(T,w)&=-\frac{1}{2\pi{\mathrm{i}}}\int_{\Sigma_\mathbf{F}}V^\mathbf{F}_{12}(Z;T,w)\, \mathrm{d} Z + O_\varepsilon(T^{-\frac{1}{3}})\\
&=-\frac{1}{2\pi{\mathrm{i}}}\int_{\partial D_{Z_1}(\delta)\cup\partial D_{Z_2}(\delta)}V^\mathbf{F}_{12}(Z;T,w)\, \mathrm{d} Z + O_\varepsilon(T^{-\frac{1}{3}}),\quad T\to+\infty,
\end{split}
\end{align} where on the second line we used the fact that
$V^\mathbf{F}_{12}(\diamond;T,w)=O_\varepsilon(T^{-\frac{1}{3}})$ in
$L^1(\Sigma_\mathbf{F}\setminus(\partial D_{Z_1}(\delta)\cup\partial D_{Z_2}(\delta)))$. Note that in the situation described in Section 2.3,
$\mathbf{V}^\mathbf{F}-\mathbb{I}$ had additional structure allowing for a refinement of the analogous estimate; however, that structure is not present here. Now,
$V^\mathbf{F}_{12}(Z;T,w)$ is given by the 12-element of (3.69) and (3.76) on
$\partial D_{Z_2}(\delta)$ and
$\partial D_{Z_1}(\delta)$, respectively. Using (2.22) and the fact that
$\zeta_{Z_j}$ is large of size
$T^{\frac{1}{6}}$ on the disk boundaries, along with
$\det(\mathbf{H}^{Z_j}(Z;T,w))=1$ for
$j=1,2$,
\begin{align}
V_{12}^\mathbf{F}(Z;T,w)=\frac{T^{-\frac{1}{3}{\mathrm{i}} \bar{p}}s(\bar{p},\bar{\tau})H_{12}^{Z_1}(Z;T,w)^2+T^{\frac{1}{3}{\mathrm{i}} \bar{p}}r(\bar{p},\bar{\tau})H_{11}^{Z_1}(Z;T,w)^2}{2{\mathrm{i}} T^\frac{1}{6}\varphi_{Z_1}(Z;w)}+O_\varepsilon(T^{-\frac{1}{3}}),\quad Z\in\partial D_{Z_1}(\delta),
\end{align}and
\begin{align}
V_{12}^\mathbf{F}(Z;T,w)=\frac{T^{-\frac{1}{3}{\mathrm{i}} p}s(p,\tau)H_{12}^{Z_2}(Z;T,w)^2+T^{\frac{1}{3}{\mathrm{i}} p}r(p,\tau)H_{11}^{Z_2}(Z;T,w)^2}{2{\mathrm{i}} T^\frac{1}{6}\varphi_{Z_2}(Z;w)}+O_\varepsilon(T^{-\frac{1}{3}}),\quad Z\in\partial D_{Z_2}(\delta),
\end{align} with both estimates holding in the
$L^\infty$ sense and hence also in the $L^1$ sense. Since
$\mathbf{H}^{Z_j}(Z;T,w)$ is holomorphic in
$D_{Z_j}(\delta)$, and
$Z\mapsto\varphi_{Z_j}(Z;w)$ is conformal at
$Z_j(w)$ with
$\varphi_{Z_j}(Z_j(w);w)=0$, we may, for δ > 0 sufficiently small, substitute into (3.105) and compute by residues to obtain
\begin{align}\begin{aligned}
F_{12}^{[1]}(T,w) & =\frac{T^{-\frac{1}{3}{\mathrm{i}}\bar{p}}s(\bar{p},\bar{\tau})H_{12}^{Z_1}(Z_1(w);T,w)^2+T^{\frac{1}{3}{\mathrm{i}} \bar{p}}r(\bar{p},\bar{\tau})H_{11}^{Z_1}(Z_1(w);T,w)^2}{2{\mathrm{i}} T^\frac{1}{6}\varphi'_{Z_1}(Z_1(w);w)}\\
& \quad {}+\frac{T^{-\frac{1}{3}{\mathrm{i}} p}s(p,\tau)H_{12}^{Z_2}(Z_2(w);T,w)^2 + T^{\frac{1}{3}{\mathrm{i}} p}r(p,\tau)H_{11}^{Z_2}(Z_2(w);T,w)^2}{2{\mathrm{i}} T^{\frac{1}{6}}\varphi_{Z_2}'(Z_2(w);w)} + O_\varepsilon(T^{-\frac{1}{3}}).
\end{aligned}
\end{align} Here,
$r(p,\tau)$ and
$s(p,\tau)$ are given by (2.25). By implicit differentiation of the equations (3.64) and (3.72) and using the facts that
$\varphi_{Z_j}(Z_j(w);w)=0$ and
$\varphi'_{Z_2}(Z_2(w);w) \gt 0$ while
$\varphi'_{Z_1}(Z_1(w);w) \lt 0$ we obtain
\begin{align}
\varphi'_{Z_2}(Z_2(w);w)=\sqrt{h''(Z_2(w);w)}\quad\text{and}\quad\varphi_{Z_1}'(Z_1(w);w)=-\sqrt{-h_-''(Z_1(w);w)}.
\end{align} Since
$g(Z;w)=O(Z^{-1})$ and
$R(Z)=Z+O(1)$ as
$Z\to\infty$, it follows from (3.1), (3.7), and (3.10) that
\begin{align}
h'(Z;w) =2 \frac{(Z-Z_1(w))(Z-Z_2(w))}{Z^2}R(Z;w),
\end{align}implying
\begin{align}
h''(Z;w) =2 \frac{(Z_1(w)+Z_2(w))Z - 2 Z_1(w) Z_2(w)}{Z^3}R(Z;w) + 2\frac{(Z-Z_1(w))(Z-Z_2(w))}{Z^2}R'(Z;w).
\end{align}Thus,
\begin{align}
h''(Z_2(w);w) &= \frac{2(Z_2(w)-Z_1(w))}{Z_2(w)^2}|Z_2(w)-Z_0(w)| \gt 0,
\end{align}
\begin{align}
h''_-(Z_1(w);w) &= \frac{2(Z_1(w)-Z_2(w))}{Z_1(w)^2}|Z_1(w)-Z_0(w)| \lt 0,
\end{align}and accordingly
\begin{align}
\varphi'_{Z_2}(Z_2(w);w) &= \frac{\sqrt{2(Z_2(w)-Z_1(w))}}{Z_2(w)}|Z_2(w)-Z_0(w)|^{\frac{1}{2}} \gt 0,
\end{align}
\begin{align}
\varphi'_{Z_1}(Z_1(w);w) &= \frac{\sqrt{2(Z_2(w)-Z_1(w))}}{Z_1(w)}|Z_1(w)-Z_0(w)|^{\frac{1}{2}} \lt 0.
\end{align} Using
$\varphi_{Z_2}(Z_2(w);w)=0$, we find from (3.70) that
\begin{align}\begin{aligned}
H_{11}^{Z_2}(Z_2(w);T,w)^2 = {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} L_{11}(Z_2(w);T,w)^2 {\mathrm{e}}^{-2{\mathrm{i}}(p {\ell}(Z_2(w);w) + q {f}(Z_2(w);w))} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h(Z_2(w);w)}\\
\cdot (Z_2(w)-Z_1(w))^{2{\mathrm{i}} p} \left[ \varphi'_{Z_2}(Z_2(w);w)^2\right]^{{\mathrm{i}} p }
\end{aligned}
\end{align}and
\begin{align}\begin{aligned}
H_{12}^{Z_2}(Z_2(w);T,w)^2 = {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} L_{12}(Z_2(w);T,w)^2 {\mathrm{e}}^{2{\mathrm{i}}(p {\ell}(Z_2(w);w) + q {f}(Z_2(w);w))} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h(Z_2(w);w)}\\
\cdot (Z_2(w)-Z_1(w))^{-2{\mathrm{i}} p} \left[ \varphi'_{Z_2}(Z_2(w);w)^2\right]^{-{\mathrm{i}} p }.
\end{aligned}
\end{align} Since
$Z_2(w)-Z_1(w) \gt 0$, we use (3.115) to write
\begin{align}
\left[ \varphi'_{Z_2}(Z_2(w);w)^2\right]^{\pm {\mathrm{i}} p } &={\mathrm{e}}^{\pm {\mathrm{i}} p \ln( 2(Z_2(w)-Z_1(w)) Z_2(w)^{-2} |Z_2(w)-Z_0(w)|)},
\end{align} and from the definition (3.61), in which
$y(Z_2(w);w)={\mathrm{e}}^{{\mathrm{i}}\arg(Z_2(w)-Z_0(w))/2}$ with the principal branch of the argument, we obtain, using the amplitude notation from (1.81) in the introduction,
\begin{align}
\begin{split}
L_{11}(Z_2(w);T,w)^2 &= \frac{1}{2} + \frac{1}{4}\left( y(Z_2(w);w)^2 + \frac{1}{y(Z_2(w);w)^2} \right)\\
& =
m^+_{Z_2}(w),
\end{split}
\end{align}
\begin{align}
\begin{split}
L_{12}(Z_2(w);T,w)^2
&=-{\mathrm{e}}^{-2 {\mathrm{i}}(T^{1/3} \kappa(w)+ p \mu(w))} \left[ \frac{1}{4}\left( y(Z_2(w);w)^2+\frac{1}{y(Z_2(w);w)^2} \right) - \frac{1}{2}\right]\\
&= {\mathrm{e}}^{-2 {\mathrm{i}}(T^{1/3} \kappa(w)+ p \mu(w))} m^-_{Z_2}(w).
\end{split}
\end{align} Note that
$m_{Z_2}^\pm(w) \gt 0$ and
$m_{Z_2}^+(w)+m_{Z_2}^-(w)=1$. Thus, the formulae (3.116) and (3.117) can be expressed as
\begin{align}
H_{11}^{Z_2}(Z_2(w);T,w)^2 &= {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} m_{Z_2}^+(w) {\mathrm{e}}^{- {\mathrm{i}} \phi_{Z_2}(T,w)},
\end{align}
\begin{align}
H_{12}^{Z_2}(Z_2(w);T,w)^2 &= {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} {\mathrm{e}}^{-2 {\mathrm{i}}(T^{1/3} \kappa(w)+ p \mu(w))} m_{Z_2}^-(w) {\mathrm{e}}^{{\mathrm{i}} \phi_{Z_2}(T,w)},
\end{align}where a real phase is defined by
\begin{align}\begin{aligned}
\phi_{Z_2}(T,w) := 2 T^{1/3} h(Z_2(w);w) -3p \ln(Z_2(w)-Z_1(w)) - p \ln\left(\frac{|Z_2(w)-Z_0(w)|}{Z_2(w)^2}\right)\\
+2(p {\ell}(Z_2(w);w) + q {f}(Z_2(w);w))-p\ln(2).
\end{aligned}
\end{align} Similarly, using
$\varphi_{Z_1}(Z_1(w);w)=0$, we find from (3.84) that
\begin{align}\begin{aligned}
H_{11}^{Z_1}(Z_1(w);T,w)^2 = - {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} L_{11+}(Z_1(w);T,w)^2 {\mathrm{e}}^{-2{\mathrm{i}}(p {\ell}^0(Z_1(w);w) + q {f}^0(Z_1(w);w))} {\mathrm{e}}^{-2{\mathrm{i}} T^{1/3}h_{+}(Z_1(w);w)}\\
\cdot (Z_2(w)-Z_1(w))^{-2{\mathrm{i}} p} \left[ \frac{1}{\varphi'_{Z_1}(Z_1(w);w)^2}\right]^{{\mathrm{i}} p }
\end{aligned}
\end{align}and
\begin{align}\begin{aligned}
H_{12}^{Z_1}(Z_1(w);T,w)^2 = - {\mathrm{e}}^{2{\mathrm{i}} q \gamma(w)} L_{12+}(Z_1(w);T,w)^2 {\mathrm{e}}^{2{\mathrm{i}}(p {\ell}^0(Z_1(w);w) + q {f}^0(Z_1(w);w))} {\mathrm{e}}^{2{\mathrm{i}} T^{1/3}h_{+}(Z_1(w);w)}\\
\cdot (Z_2(w)-Z_1(w))^{2{\mathrm{i}} p} \left[ \frac{1}{\varphi'_{Z_1}(Z_1(w);w)^2}\right]^{-{\mathrm{i}} p }.
\end{aligned}
\end{align}The analogue of (3.119) needed here is
\begin{align}
\left[\frac{1}{\varphi'_{Z_1}(Z_1(w);w)^2}\right]^{\pm{\mathrm{i}} p} = {\mathrm{e}}^{\mp{\mathrm{i}} p\ln(2(Z_2(w)-Z_1(w))Z_1(w)^{-2}|Z_1(w)-Z_0(w)|)},
\end{align} and those of (3.120)–(3.121), using
$y_+(Z_1(w);w)={\mathrm{i}}{\mathrm{e}}^{{\mathrm{i}}\arg(Z_1(w)-Z_0(w))/2}$ with the principal branch of the argument, are
\begin{align}
\begin{split}
L_{11+}(Z_1(w);T,w)^2&=\frac{1}{2}+\frac{1}{4}\left(y_+(Z_1(w);w)^2+\frac{1}{y_+(Z_1(w);w)^2}\right)\\
&=
m_{Z_1}^-(w),
\end{split}
\end{align}
\begin{align}
\begin{split}
L_{12+}(Z_1(w);T,w)^2&=-{\mathrm{e}}^{-2{\mathrm{i}} (T^{1/3}\kappa(w)+p\mu(w))}\left[\frac{1}{4}\left(y_+(Z_1(w);w)^2+\frac{1}{y_+(Z_1(w);w)^2}\right)-\frac{1}{2}\right]\\
&={\mathrm{e}}^{-2{\mathrm{i}} (T^{1/3}\kappa(w)+p\mu(w))}m_{Z_1}^+(w),
\end{split}
\end{align} where we again recall the amplitude notation from (1.81) with
$m^\pm_{Z_1}(w) \gt 0$ and
$m^+_{Z_1}(w)+m^-_{Z_1}(w)=1$. Therefore, (3.125) and (3.126) become
\begin{align}
H_{11}^{Z_1}(Z_1(w);T,w)^2&=-{\mathrm{e}}^{2{\mathrm{i}} q\gamma(w)}m^-_{Z_1}(w){\mathrm{e}}^{-{\mathrm{i}}\phi_{Z_1}(T,w)},
\end{align}
\begin{align}
H_{12}^{Z_1}(Z_1(w);T,w)^2&=-{\mathrm{e}}^{2{\mathrm{i}} q\gamma(w)}{\mathrm{e}}^{-2{\mathrm{i}}(T^{1/3}\kappa(w)+p\mu(w))}m_{Z_1}^+(w){\mathrm{e}}^{{\mathrm{i}}\phi_{Z_1}(T,w)},
\end{align}where another real phase is defined by
\begin{align}\begin{aligned}
\phi_{Z_1}(T,w):=2T^\frac{1}{3}h_+(Z_1(w);w)+3p\ln(Z_2(w)-Z_1(w))+p\ln\left(\frac{|Z_1(w)-Z_0(w)|}{Z_1(w)^2}\right)\\
{}+2 (p{\ell}^0(Z_1(w);w)+q{f}^0(Z_1(w);w))+p\ln(2).
\end{aligned}
\end{align} Using these results in (3.108) together with the fact
$s(p,\tau)=- r(p,\tau)^*$ gives
\begin{align}\begin{aligned}
F_{12}^{[1]}(T,w) & =\frac{{\mathrm{e}}^{2{\mathrm{i}} q\gamma(w)}{\mathrm{e}}^{-{\mathrm{i}} (T^{1/3}\kappa(w)+p\mu(w))}}{2T^\frac{1}{6}\sqrt{2(Z_2(w)-Z_1(w))}}
\left(\frac{|r(\bar{p},\bar{\tau})||Z_1(w)|\left(m^+_{Z_1}(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_1}(T,w)}+m^-_{Z_1}(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_1}(T,w)}\right)}{|Z_1(w)-Z_0(w)|^{\frac{1}{2}}}\right.\\
& \quad {}+\left.\frac{|r(p,\tau)|Z_2(w)\left(m_{Z_2}^-(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_2}(T,w)}+m_{Z_2}^+(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_2}(T,w)}\right)}{|Z_2(w)-Z_0(w)|^\frac{1}{2}}\right) + O_\varepsilon(T^{-\frac{1}{3}}).
\end{aligned}
\end{align} Here the modified real phases are defined by
\begin{align}
\Phi_{Z_1}(T,w)&:=\phi_{Z_1}(T,w) - T^{\frac{1}{3}}\kappa(w) - p \mu(w) - \frac{1}{3}\bar{p}\ln(T)+\frac{\pi}{2} - \arg(r(\bar{p},\bar{\tau})),
\end{align}
\begin{align}
\Phi_{Z_2}(T,w)&:=\phi_{Z_2}(T,w) - T^{\frac{1}{3}}\kappa(w) - p \mu(w) - \frac{1}{3}p \ln(T)+\frac{\pi}{2} - \arg(r(p,\tau)).
\end{align} We note from the definition (2.23) that
$\arg(r(\bar{p},\bar{\tau})) = \frac{\pi}{4} + \bar{p} \ln(2) - \arg(\Gamma({\mathrm{i}} \bar{p}))$ and
$\arg(r(p,\tau)) = \frac{\pi}{4} + p \ln(2) - \arg(\Gamma({\mathrm{i}} p))$, so that the modified real phases take the form
\begin{align}
\Phi_{Z_1}(T,w)&:=\phi_{Z_1}(T,w) - T^{\frac{1}{3}}\kappa(w) - p \mu(w) - \frac{1}{3}\bar{p}\ln(T)+\frac{\pi}{2} - \frac{\pi}{4} - \bar{p} \ln(2) + \arg(\Gamma({\mathrm{i}} \bar{p})),
\end{align}
\begin{align}
\Phi_{Z_2}(T,w)&:=\phi_{Z_2}(T,w) - T^{\frac{1}{3}}\kappa(w) - p \mu(w) - \frac{1}{3}p \ln(T)+\frac{\pi}{2} - \frac{\pi}{4} - p \ln(2) + \arg(\Gamma({\mathrm{i}} p)).
\end{align} Finally, substituting
$|r(p,\tau)|=\sqrt{2p}$ from (2.25) yields
\begin{align}\begin{aligned}
F_{12}^{[1]}(T,w)=\frac{{\mathrm{e}}^{2{\mathrm{i}} q\gamma(w)}{\mathrm{e}}^{-{\mathrm{i}} (T^{1/3}\kappa(w)+p\mu(w))}}{2T^\frac{1}{6}\sqrt{Z_2(w)-Z_1(w)}}
\left(\frac{\sqrt{\bar{p}}|Z_1(w)|\left(m^+_{Z_1}(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_1}(T,w)}+m^-_{Z_1}(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_1}(T,w)}\right)}{|Z_1(w)-Z_0(w)|^{\frac{1}{2}}}\right.\\
{}+\left.\frac{\sqrt{p}Z_2(w)\left(m_{Z_2}^-(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_2}(T,w)}+m_{Z_2}^+(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_2}(T,w)}\right)}{|Z_2(w)-Z_0(w)|^\frac{1}{2}}\right) + O_\varepsilon(T^{-\frac{1}{3}}).
\end{aligned}
\end{align}Using this in (3.101) along with (3.103) yields
\begin{align}\begin{aligned}
\Psi(T^\frac{2}{3}w,T) & =-\frac{1}{3}{\mathrm{i}} {\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}T^{-\frac{1}{3}}{\mathrm{e}}^{-{\mathrm{i}} T^{1/3}\kappa(w)}{\mathrm{e}}^{{\mathrm{i}}( 2q\gamma(w)-p\mu(w))}\left[
\vphantom{\left\{\frac{3\sqrt{2\bar{p}}|Z_1(w)|\left(m^+_{Z_1}(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_1}(T,w)}+m^-_{Z_1}(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_1}(T,w)}\right)}{|Z_1(w)-Z_0(w)|^{\frac{1}{2}}\sqrt{2(Z_2(w)-Z_1(w))}}\right.}
\sqrt{w_\mathrm{c}^2-w^2}\right.\\
& \quad {}-3T^{-\frac{1}{6}}\frac{1}{\sqrt{Z_2(w)-Z_1(w)}}\left\{\frac{\sqrt{\bar{p}}|Z_1(w)|\left(m^+_{Z_1}(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_1}(T,w)}+m^-_{Z_1}(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_1}(T,w)}\right)}{|Z_1(w)-Z_0(w)|^{\frac{1}{2}}}\right.\\
& \quad {}+\left.\left.\frac{\sqrt{p}Z_2(w)\left(m_{Z_2}^-(w){\mathrm{e}}^{{\mathrm{i}}\Phi_{Z_2}(T,w)}+m_{Z_2}^+(w){\mathrm{e}}^{-{\mathrm{i}}\Phi_{Z_2}(T,w)}\right)}{|Z_2(w)-Z_0(w)|^\frac{1}{2}}
\right\} + O_\varepsilon(T^{-\frac{1}{3}})\right].
\end{aligned}
\end{align} Simplifying
$2q\gamma(w)-p\mu(w)-\pi/2$ into the form of
$\Phi(w)$ given in (1.83), and simplifying the phases
$\Phi_{Z_1}(T,w)$ and
$\Phi_{Z_2}(T,w)$ into the forms (1.86) and (1.87) respectively, we complete the proof of Theorem 1.22.
4. Transitional asymptotics for
$\Psi(X,T;\mathbf{G})$
Now we analyse
$\Psi(X,T;\mathbf{G}(a,b))$ for large positive (X, T) in the regime that
$T\approx v_\mathrm{c}X^{\frac{3}{2}}$. Due to Proposition 1.5, we will use the normalized parameters
$\mathfrak{a},\mathfrak{b}$ defined in (1.11) and write
$\Psi(X,T;\mathbf{G}(a,b))={\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}\Psi(X,T;\mathbf{G}(\mathfrak{a},\mathfrak{b}))$. Therefore, in the terminology of Theorem 1.18, the parameter
$v:=TX^{-\frac{3}{2}}$ should be allowed to increase into a neighbourhood of
$v=v_\mathrm{c}:=54^{-\frac{1}{2}}$. Assuming that for some fixed ɛ > 0 arbitrarily small we have
$\tau:=\mathfrak{b}/\mathfrak{a} \lt \varepsilon^{-1}$, we will here adapt the analysis from Section 2 to allow for
$v\approx v_\mathrm{c}$.
As
$v\uparrow v_\mathrm{c}$, a third real simple critical point of
$z\mapsto\vartheta(z;v)$ collides with
$z_1(v)$ to form a double critical point at
$z=z_\mathrm{c}=-\sqrt{6}$, while the simple critical point at
$z_2(v)$ persists with the limiting value of
$z_2(v_\mathrm{c})=(\frac{3}{2})^{\frac{1}{2}}$. Following [Reference Bilman, Ling and Miller8, Section 4.3], we begin by introducing a Schwarz-symmetric conformal mapping
$z\mapsto \varphi(z;v)$ on a neighbourhood of
$z=z_\mathrm{c}$ by means of the equation
where
$t=t(v)$ and
$u=u(v)$ are real-analytic functions of
$v\approx v_\mathrm{c}$ determined such that the two critical points of the left-hand side near
$z=z_\mathrm{c}$ correspond to the two critical points of the cubic on the right-hand side. These functions satisfy
\begin{align}
t_\mathrm{c}:=t(v_\mathrm{c})=0,\quad t_\mathrm{c}':=t'(v_\mathrm{c})=4\cdot 6^{\frac{1}{2}}9^{\frac{1}{3}},\quad u_\mathrm{c}:=u(v_\mathrm{c})=2\cdot 6^{\frac{1}{2}},\quad u'_\mathrm{c}:=u'(v_\mathrm{c})=-12.
\end{align} The conformal map
$z\mapsto \varphi(z;v)$ is locally a dilation and reflection through
$z_\mathrm{c}$; indeed
$\varphi_\mathrm{c}':=\varphi'(z_\mathrm{c};v_\mathrm{c})=-9^{-\frac{1}{3}} \lt 0$. We denote by
$z_*(v)$ the preimage under
$z\mapsto\varphi(z;v)$ of φ = 0 for general
$v\approx v_\mathrm{c}$; it is an analytic function of v with
$z_*(v_\mathrm{c})=z_\mathrm{c}$.

Figure 16. Left: the jump contours in the
$z$-plane when
$v=0.1345$ near
$v_{\mathrm{c}}$ using the points
$z_2(v)$ and
$z_{*}(v)$ overlaid with the regions where
$\operatorname{Im}(\vartheta(z;v))$ has a definite sign. Right: the jump contours in the
$z$-plane when
$w=3.76$ near
$w_{\mathrm{c}}$ using the points
$z_2(v)$ and
$z_{*}(v)$ overlaid with the regions where
$\operatorname{Im}(h(Z;w))$ has a definite sign. This is plotted in the z-plane using the relation
$z=Z/v^{\frac{1}{3}}$, and the points
$z_2(v)$ and
$z_{*}(v)$ are found using the relation
$v=w^{-\frac{3}{2}}$.
Next, we modify the outer parametrix defined in Section 2.2.1 simply by replacing the point
$z_1(v)$ with
$z_*(v)$:
\begin{align}
\dot{\mathbf{T}}^\mathrm{out}(z)=\dot{\mathbf{T}}^\mathrm{out}(z;v):=\left(\frac{z-z_*(v)}{z-z_2(v)}\right)^{{\mathrm{i}} p\sigma_3},\quad p:=\frac{1}{2\pi}\ln(1+\tau^2) \gt 0,\quad z\in \mathbb{C}\setminus [z_*(v),z_2(v)].
\end{align} The definition of an inner parametrix near the simple critical point
$z=z_2(v)$ is given by (2.27), (2.29), and (2.31) with only one small alteration: the factor
$\mathbf{H}^{z_2}(z;v)$ holomorphic near
$z=z_2(v)$ is modified from its definition in (2.29) only by replacing
$z_1(v)$ with
$z_*(v)$. On the other hand, we will need an inner parametrix near
$z=z_\mathrm{c}$ that is no longer constructed from parabolic cylinder functions at all.
We now explain how to construct such an inner parametrix. The exact jump conditions satisfied by
$\mathbf{U}^{\mathrm{c}}:=\mathbf{T}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)\sigma_3/2}({\mathrm{i}}\sigma_2)$ near
$z=z_\mathrm{c}$ can be expressed in terms of the conformal coordinate via
$\zeta:=X^{\frac{1}{6}}\varphi$ and the rescaled parameter
$y:=X^{\frac{1}{3}}t(v)$. Locally we take the jump contours to coincide with the rays
$\arg(\zeta)=\pm\frac{1}{2}\pi$,
$\arg(\zeta)=\pm\frac{5}{6}\pi$, and
$\arg(-\zeta)=0$, and then the jump conditions are as shown in Figure 17.

Figure 17. The jump contours and jump matrices near
$z=z_*(v)$ take the form shown here when expressed in terms of the rescaled conformal coordinate
$\zeta$ for which
$\zeta=0$ is the image of
$z=z_*(v)$. Compare with [Reference Miller25, Figure 3].
Now let
$\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)$ denote the solution of [Reference Miller25, Riemann–Hilbert Problem 2.1] (denoted by
$\mathbf{W}(\zeta;y)$ in that reference). The reason for the notation “TT” is that
$\mathcal{V}(y;\tau)$ defined in (4.5) below is connected with an increasing tritronquée solution of the Painlevé-II equation (1.94), as explained in the introduction. By a simple generalization of the argument given in [Reference Miller25, Section 5] to arbitrary τ > 0, this solution exists for all
$y\in\mathbb{R}$ and τ > 0. It has the following properties.
• $\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)$ is analytic in the five sectors shown in Figure 17, which are $S_0: |\arg(\zeta)| \lt \frac{1}{2}\pi$, $S_1: \frac{1}{2}\pi \lt \arg(\zeta) \lt \frac{5}{6}\pi$, $S_{-1}: -\frac{5}{6}\pi \lt \arg(\zeta) \lt -\frac{1}{2}\pi$, $S_2: \frac{5}{6}\pi \lt \arg(\zeta) \lt \pi$, and $S_{-2}: -\pi \lt \arg(\zeta) \lt -\frac{5}{6}\pi$.
• $\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)$ takes continuous boundary values on the excluded rays and at the origin from each of the five sectors, which are related by the jump condition $\mathbf{U}^\mathrm{TT}_+(\zeta;y,\tau)=\mathbf{U}^\mathrm{TT}_-(\zeta;y,\tau)\mathbf{V}^\mathrm{TT}(\zeta;y,\tau)$, where the jump contours and the jump matrix $\mathbf{V}^\mathrm{TT}(\zeta;y,\tau)$ are given in Figure 17.
• The matrix function $\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)\zeta^{{\mathrm{i}} p\sigma_3}$ has a complete asymptotic expansion in descending integer powers of ζ as $\zeta\to\infty$, with coefficients that are independent of the sector in which $\zeta\to\infty$. In particular,
(4.4)
\begin{align}
\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)\zeta^{{\mathrm{i}} p\sigma_3}=\mathbb{I} + \mathbf{U}^1(y,\tau)\zeta^{-1}+O(\zeta^{-2}),\quad\zeta\to\infty
\end{align}
holds uniformly for bounded τ > 0. For each τ > 0, the function
(4.5)
\begin{align}
\mathcal{V}(y;\tau):=U_{21}^1(y,\tau)
\end{align}
is analytic for all $y\in\mathbb{R}$ and has no real zeros or critical points.
The analogue of (2.28) in this case is
\begin{align}
\begin{aligned}
\dot{\mathbf{T}}^{\mathrm{out}}(z;v){\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)\sigma_3/2}({\mathrm{i}}\sigma_2)&=X^{-{\mathrm{i}} p\sigma_3/6}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)\sigma_3/2}\mathbf{H}^{\mathrm{c}}(z;v)\zeta^{-{\mathrm{i}} p\sigma_3},\\
\mathbf{H}^\mathrm{c}(z;v)&:=(z_2(v)-z)^{-{\mathrm{i}} p\sigma_3}\left(\frac{z_*(v)-z}{\varphi(z;v)}\right)^{{\mathrm{i}} p\sigma_3}({\mathrm{i}}\sigma_2),
\end{aligned}
\end{align} in which
$\zeta=X^{\frac{1}{6}}\varphi(z;v)$. The function
$z\mapsto\mathbf{H}^\mathrm{c}(z;v)$ is analytic for z near
$z_\mathrm{c}$, and we use it together with
$\mathbf{U}^\mathrm{TT}(\zeta;y,\tau)$ to build an inner parametrix by setting for
$z\in D_{z_\mathrm{c}}(\delta)$,
\begin{align}
\dot{\mathbf{T}}^\mathrm{c}(z;X,v):=X^{-{\mathrm{i}} p\sigma_3/6}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)\sigma_3/2}\mathbf{H}^\mathrm{c}(z;v)\mathbf{U}^{\mathrm{TT}}(X^{\frac{1}{6}}\varphi(z;v);X^{\frac{1}{3}}t(v),\tau)(-{\mathrm{i}}\sigma_2){\mathrm{e}}^{-{\mathrm{i}} X^{1/2}u(v)\sigma_3/2}.
\end{align} This is the correct analogue of (2.30) in the situation that
$v\approx v_\mathrm{c}$.
We now define a global parametrix
$\dot{\mathbf{T}}(z;X,v)$ for
$\mathbf{T}(z;X,v)$ by direct analogy with (2.32):
\begin{align}
\dot{\mathbf{T}}(z;X,v):=\begin{cases}
\dot{\mathbf{T}}^{\mathrm{c}}(z;X,v),&\quad z\in D_{z_\mathrm{c}}(\delta),\\
\dot{\mathbf{T}}^{z_2}(z;X,v),&\quad z\in D_{z_2}(\delta),\\
\dot{\mathbf{T}}^{\mathrm{out}}(z;v),&\quad z \in \mathbb{C} \setminus \left([z_*(v),z_2(v)] \cup D_{z_\mathrm{c}}(\delta) \cup D_{z_2}(\delta)\right).
\end{cases}
\end{align} Defining the error as
$\mathbf{F}(z;X,v):=\mathbf{T}(z;X,v)\dot{\mathbf{T}}(z;X,v)^{-1}$ wherever both factors make sense, we see that
$z\mapsto\mathbf{F}(z;X,v)$ is analytic on the complement of a bounded jump contour that is a version of the contour
$\Sigma_\mathbf{F}$ defined in Section 2.3, and at each non-self-intersection point
$z\in\Sigma_\mathbf{F}$ there is a well-defined jump matrix
$\mathbf{V}^\mathbf{F}(z;X,v)$ such that
$\mathbf{F}_+(z;X,v)=\mathbf{F}_-(z;X,v)\mathbf{V}^\mathbf{F}(z;X,v)$. The arguments of Section 2.3 then imply that
\begin{align}
\sup_{z\in\Sigma_\mathbf{F}\setminus\partial D_{z_\mathrm{c}}(\delta)}\|\mathbf{V}^\mathbf{F}(z;X,v)-\mathbb{I}\|=O_\varepsilon(X^{-\frac{1}{4}}),\quad X\to+\infty
\end{align} holds uniformly for
$v\approx v_\mathrm{c}$ and
$\tau=\mathfrak{b}/\mathfrak{a} \lt \varepsilon^{-1}$. However, the formula
$\mathbf{V}^\mathbf{F}(z;X,v)=\dot{\mathbf{T}}^\mathrm{c}(z;X,v)\dot{\mathbf{T}}^\mathrm{out}(z;v)^{-1}$, valid for
$z\in\partial D_{z_\mathrm{c}}(\delta)$, shows that if
$v=v_\mathrm{c}+O(X^{-\frac{1}{3}})$ so that y is bounded, then the sharp estimate
$\mathbf{V}^\mathbf{F}(z;X,v)-\mathbb{I}=O_\varepsilon(X^{-\frac{1}{6}})$ holds uniformly for
$z\in\partial D_{z_\mathrm{c}}(\delta)$, which is small but which dominates all other contributions to
$\mathbf{V}^\mathbf{F}-\mathbb{I}$. Applying the small-norm theory, it follows that the analogue of (2.41) in the present situation is that
$\mathbf{F}_-(\diamond;X,v)-\mathbb{I}=O_\varepsilon(X^{-\frac{1}{6}})$ holds in the
$L^2(\Sigma_\mathbf{F})$ sense. Using this and the fact that
$\mathbf{V}^\mathbf{F}(\diamond;X,v)-\mathbb{I}=O_\varepsilon(X^{-\frac{1}{6}})$ also holds in the same topology, by Cauchy-Schwarz, the exact formula (2.45) implies that the analogue of (2.51) in this situation is that
\begin{align}
\Psi(X,X^\frac{3}{2}v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\Sigma_\mathbf{F}}V_{12}^\mathbf{F}(z;X,v)\, \mathrm{d} z + O(X^{-\frac{5}{6}}),\quad X\to+\infty,\quad v=v_\mathrm{c}+O_\varepsilon(X^{-\frac{1}{3}}).
\end{align} Since the integrand is uniformly exponentially small unless
$z\in\partial D_{z_\mathrm{c}}(\delta)\cup\partial D_{z_2}(\delta)$ we can simplify further:
\begin{align}\begin{aligned}
\Psi(X,X^\frac{3}{2}v)=-\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\partial D_{z_\mathrm{c}}(\delta)\cup\partial D_{z_2}(\delta)}V_{12}^\mathbf{F}(z;X,v)\, \mathrm{d} z + O_\varepsilon(X^{-\frac{5}{6}}),\\ X\to+\infty,\quad v=v_\mathrm{c}+O(X^{-\frac{1}{3}}).
\end{aligned}
\end{align} As in Section 2.3, we then compute the integral over
$\partial D_{z_2}(\delta)$ in (4.11), which we identify up to an error of order
$O_\varepsilon(X^{-\frac{5}{4}})$ as the second term on the right-hand side of (2.55) in which
$H^{z_2}_{11}(z_2(v);v)$ is modified from its definition in (2.29) by replacing
$z_1(v)$ with
$z_*(v)$. In other words,
\begin{align}\begin{aligned}
& -\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\partial D_{z_2}(\delta)}V_{12}^\mathbf{F}(w;X,v)\, \mathrm{d} w\\
& \quad {}=
\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{X^\frac{3}{4}}X^{\frac{1}{2}{\mathrm{i}} p}{\mathrm{e}}^{-2{\mathrm{i}} X^{1/2}\vartheta(z_2(v);v)}
\frac{\sqrt{2p}{\mathrm{e}}^{{\mathrm{i}}(\frac{1}{4}\pi+p\ln(2)-\arg(\Gamma({\mathrm{i}} p)))}}{\sqrt{\vartheta''(z_2(v);v)}}(z_2(v)-z_*(v))^{2{\mathrm{i}} p}\vartheta''(z_2(v);v)^{{\mathrm{i}} p}
\\
& \quad {}+O_\varepsilon(X^{-\frac{5}{4}}).
\end{aligned}
\end{align} By Taylor expansion,
$v=v_\mathrm{c}+O(X^{-\frac{1}{3}})$ implies
\begin{align}
\begin{split}
\vartheta(z_2(v);v)&=\sqrt{\frac{75}{8}}+\frac{3}{2}(v-v_\mathrm{c}) + O(X^{-\frac{2}{3}})\\
\vartheta''(z_2(v);v)&= \sqrt{6}+O(X^{-\frac{1}{3}})\\
z_2(v)-z_*(v)&= \sqrt{\frac{27}{2}} + O(X^{-\frac{1}{3}}),
\end{split}
\end{align}so it follows that
\begin{align}\begin{aligned}
-\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\partial D_{z_2}(\delta)}V_{12}^\mathbf{F}(w;X,v)\, \mathrm{d} w =
2^\frac{1}{4}3^{-\frac{1}{4}}p^\frac{1}{2}X^{-\frac{3}{4}}
{\mathrm{e}}^{{\mathrm{i}}\Omega_{2}(X,v)} + O_\varepsilon(X^{-\frac{11}{12}}),\\ X\to+\infty,\quad v=v_\mathrm{c}+O(X^{-\frac{1}{3}}),
\end{aligned}
\end{align} wherein the phase
$\Omega_2(X,v)$ is defined by (1.99).
Next, we compute the integral over
$\partial D_{z_\mathrm{c}}(\delta)$ in (4.11). Using the expansion (4.4) and the representation (4.6) of the outer parametrix within
$D_{z_\mathrm{c}}(\delta)$, we obtain that as
$X\to+\infty$,
\begin{align}
\begin{split}
V^\mathbf{F}_{12}(z;X,v)&=
\frac{X^{-\frac{1}{3}{\mathrm{i}} p}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)}}{X^{\frac{1}{6}}\varphi(z;v)}\left[\mathbf{H}^\mathrm{c}(z;v)\mathbf{U}^1(X^{\frac{1}{3}}t(v),\tau)\mathbf{H}^\mathrm{c}(z;v)^{-1}\right]_{12} + O_\varepsilon(X^{-\frac{1}{3}})\\
&=-\frac{X^{-\frac{1}{3}{\mathrm{i}} p}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)}U^1_{21}(X^{\frac{1}{3}}t(v),\tau)}{X^{\frac{1}{6}}\varphi(z;v)}(z_2(v)-z)^{-2{\mathrm{i}} p}\left(\frac{z_*(v)-z}{\varphi(z;v)}\right)^{2{\mathrm{i}} p} + O_\varepsilon(X^{-\frac{1}{3}})\\
&=-\frac{X^{-\frac{1}{3}{\mathrm{i}} p}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)}\mathcal{V}(X^{\frac{1}{3}}t(v);\tau)}{X^{\frac{1}{6}}\varphi(z;v)}(z_2(v)-z)^{-2{\mathrm{i}} p}\left(\frac{z_*(v)-z}{\varphi(z;v)}\right)^{2{\mathrm{i}} p} + O_\varepsilon(X^{-\frac{1}{3}}).
\end{split}
\end{align}Therefore,
\begin{align}\begin{aligned}
& -\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\partial D_{z_\mathrm{c}}(\delta)}V_{12}^\mathbf{F}(w;X,v)\, \mathrm{d} w\\
& \quad =\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}X^{-\frac{1}{3}{\mathrm{i}} p}{\mathrm{e}}^{{\mathrm{i}} X^{1/2}u(v)}\mathcal{V}(X^{\frac{1}{3}}t(v);\tau)}{\pi X^{\frac{2}{3}}}\int_{\partial D_{z_\mathrm{c}}(\delta)}
(z_2(v)-w)^{-2{\mathrm{i}} p}\left(\frac{z_*(v)-w}{\varphi(w;v)}\right)^{2{\mathrm{i}} p}\frac{ \mathrm{d} w}{\varphi(w;v)}\\
& \qquad {} + O_\varepsilon(X^{-\frac{5}{6}}),\quad X\to+\infty.
\end{aligned}
\end{align} We evaluate the integral by residues, using the fact that
$w\mapsto \varphi(w;v)$ has a simple zero at
$w=z_*(v)\in D_{z_\mathrm{c}}(\delta)$ and the first two factors in the integrand are analytic at
$w=z_*(v)$ with value
\begin{align}
\left.(z_2(v)-w)^{-2{\mathrm{i}} p}\left(\frac{z_*(v)-w}{\varphi(w;v)}\right)^{2{\mathrm{i}} p}\right|_{w=z_*(v)} = (z_2(v)-z_*(v))^{-2{\mathrm{i}} p}(-\varphi'(z_*(v);v))^{-2{\mathrm{i}} p}.
\end{align} Further expanding the result in v about
$v=v_\mathrm{c}$ using (4.2),
$\varphi'(z_\mathrm{c};v_\mathrm{c})=-9^{-\frac{1}{3}}$,
$z_2(v_\mathrm{c})=(\frac{3}{2})^{\frac{1}{2}}$, and the assumption that
$v-v_\mathrm{c}=O(X^{-\frac{1}{3}})$, we produce no error terms larger than already present in (4.16) and hence
\begin{align}\begin{aligned}
-\frac{{\mathrm{e}}^{-{\mathrm{i}}\arg(ab)}}{\pi X^\frac{1}{2}}\int_{\partial D_{z_\mathrm{c}}(\delta)}V_{12}^\mathbf{F}(w;X,v)\, \mathrm{d} w=2\cdot 3^{\frac{2}{3}}X^{-\frac{2}{3}}\mathcal{V}(2^{\frac{5}{2}}3^{\frac{7}{6}}X^{\frac{1}{3}}(v-v_\mathrm{c});\tau){\mathrm{e}}^{{\mathrm{i}}\Omega_\mathrm{c}(X,v)} + O_\varepsilon(X^{-\frac{5}{6}}),\\ X\to+\infty,\quad v=v_\mathrm{c}+O(X^{-\frac{1}{3}}),
\end{aligned}
\end{align} where the phase
$\Omega_\mathrm{c}(X,v)$ is as defined in (1.98).
5. The RogueWaveInfiniteNLS.jl software package for Julia
In this section we introduce the software package RogueWaveInfiniteNLS.jl [Reference Bilman and Miller11] for Julia, which has been developed as part of this work to accurately compute
$\Psi(X,T;\mathbf{G},{B})$ at virtually any given point in the (X, T)-plane for given G and B > 0. The method for computing
$\Psi(X,T;\mathbf{G},{B})$ is based on numerically solving a suitably regularized Riemann–Hilbert problem that is selected depending on the location of the point (X, T) in
$\mathbb{R}^2$. Basically,
• if X is deemed to be sufficiently large and $|T| \lt v_\mathrm{c}|X|^\frac{3}{2}$, then we numerically solve for the matrix $\mathbf{T}$ described in Section 2;
• if T is deemed to be sufficiently large and $|X| \lt w_\mathrm{c}|T|^\frac{2}{3}$, then we numerically solve for the different matrix $\mathbf{T}$ described in Section 3;
• if either X or T is deemed to be sufficiently large and $|T|\approx v_\mathrm{c}|X|^\frac{3}{2}$, or equivalently $|X|\approx w_\mathrm{c}|T|^\frac{2}{3}$, then we numerically solve for $\mathbf{T}$ defined as in Section 2 but with more suitable contour choices as described in Section 4;
• otherwise, we numerically solve a version of Riemann–Hilbert Problem 1 for $\mathbf{P}$.
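As a rough illustration of this dispatch, the following sketch organizes the selection as a single function. It is only a schematic: the function name classify_region and the thresholds Rmin and band are assumptions made for this illustration, not the logic or the values used by the package’s Algorithm 1.
\begin{verbatim}
# Schematic region selection in the spirit of Algorithm 1 (illustrative only;
# the thresholds Rmin and band are assumptions, not the package's values).
const vc = 54.0^(-1/2)    # critical value of v = T*X^(-3/2)
const wc = 54.0^(1/3)     # critical value of w = X*T^(-2/3)

function classify_region(X, T; Rmin=8.0, band=0.1)
    x, t = abs(X), abs(T)
    if x < Rmin && t < Rmin
        return :NoDeformation              # solve Riemann-Hilbert Problem 1 directly
    elseif abs(t - vc*x^(3/2)) <= band*x^(3/2)
        return :Painleve                   # |T| is approximately vc*|X|^(3/2)
    elseif t < vc*x^(3/2)
        return :LargeX                     # equivalently |X| > wc*|T|^(2/3)
    else
        return :LargeT                     # |X| < wc*|T|^(2/3)
    end
end
\end{verbatim}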
Since the selected Riemann–Hilbert problem has then been appropriately deformed (by employing noncommutative steepest-descent techniques) to be suitable for asymptotic analysis in one of the asymptotic regimes considered in this work, it is also well suited to computation, modulo some details arising from numerical considerations. This final Riemann–Hilbert problem is solved using the routines available in the OperatorApproximation.jl package; see [Reference Trogdon40]. OperatorApproximation.jl is a framework for approximating functions and operators, and for solving equations involving such objects.
Numerical solution of a Riemann–Hilbert problem posed on a suitable oriented contour Γ (which may be open, closed, or unbounded) for an unknown
$\boldsymbol{\Phi}(z)\in\mathbb{C}^{2 \times 2}$ satisfying a jump condition
and normalized to satisfy
$\boldsymbol{\Phi}(z) \to \mathbb{I}$ as
$z\to\infty$, essentially involves seeking a solution of the form
\begin{align}
\boldsymbol{\Phi}(z) =\mathbb{I} +\mathcal{C}_{\Gamma,W}[\mathbf{F}](z),\qquad \mathcal{C}_{\Gamma,W}[\mathbf{F}](z):=\frac{1}{2\pi {\mathrm{i}}}\int_{\Gamma}\frac{\mathbf{F}(s)W(s) \mathrm{d} s}{s-z},
\end{align} where W is a suitably chosen weight (possibly different on each arc of Γ) and rephrasing (5.1) as a singular integral equation for the unknown density
$\mathbf{F}(z)$ in the form
This singular integral equation is discretized via collocation separately on each arc of Γ using a basis of suitable polynomials orthogonal with respect to the weight W. In practice, a basis of orthogonal polynomials on the unit interval
$[-1,1]$ with positive weight W is mapped to each arc of Γ. The linear system resulting from the employed collocation is solved for the coefficients of
$\mathbf{F}_{\pm}(z)$ for
$z\in\Gamma$. All of this machinery is readily implemented and available in a black-box manner in OperatorApproximation.jl [Reference Trogdon40], and [Reference Bilman and Miller11] uses those capabilities to numerically solve various Riemann–Hilbert problem representations of
$\Psi(X,T;\mathbf{G},{B})$. The theoretical framework behind the computational approach described above is due to S. Olver and T. Trogdon, see [Reference Trogdon and Olver39] (and also [Reference Olver28, Reference Olver29]) and the references therein. An in-depth description and analysis of the accuracy of the numerical method employed can be found in [Reference Olver and Trogdon30] and [Reference Trogdon and Olver39, Chapters 2 and 7].
5.1. Basic use of the software package RogueWaveInfiniteNLS.jl
The installation of the software package into the user’s Julia environment follows the standard package installation procedure:
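For instance, assuming the package is registered in Julia’s general registry (otherwise one points Pkg at the package repository instead), installation amounts to the following.
\begin{verbatim}
using Pkg
Pkg.add("RogueWaveInfiniteNLS")   # install into the active Julia environment
\end{verbatim}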
Then, at the Julia prompt, one can load the package and compute
$\Psi(X,T;\mathbf{G}(a,b),{B})$ at
$X=-1.8$, T = 0.6 with
$\mathbf{G}=\mathbf{G}(a=2-3{\mathrm{i}}, b=1+0.5{\mathrm{i}})$ and B = 1.2 as illustrated below.
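A minimal sketch of such a session, assuming the installation above has succeeded, is the following; the call uses the syntax for psi described next.
\begin{verbatim}
using RogueWaveInfiniteNLS

# Psi(X, T; G(a,b), B) at X = -1.8, T = 0.6 with a = 2 - 3i, b = 1 + 0.5i, B = 1.2:
val = psi(-1.8, 0.6, 2 - 3im, 1 + 0.5im, 1.2)
println(val)
\end{verbatim}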
The syntax for using the main routine psi is psi(X, T, a, b, B). See also the user’s guide in Appendix C.
5.2. Details of the implementation and the regions of the
$(X,T)$-plane
The routines we develop to compute
$\Psi(X,T; \mathbf{G}(a,b), {B})$ make use of the elementary symmetry properties given in Section 1. Thanks to Proposition 1.2, Proposition 1.3, and Proposition 1.4, to compute the value of
$\Psi(X,T; \mathbf{G}(a,b), {B})$ at a given point
$(X,T)\in\mathbb{R}^2$ for given B > 0 and
$\mathbf{G}=\mathbf{G}(a,b)$, it suffices to compute
$\Psi(\widetilde{X},\widetilde{T}; \widetilde{\mathbf{G}}, {B}=1)$, where $(\widetilde{X},\widetilde{T})$ denote rescaled and reflected (hence nonnegative) coordinates determined by (X, T) and B, and
\begin{align}
\widetilde{\mathbf{G}}:= \begin{cases}
\mathbf{G}(a,b),&\quad\text{if}~X\geq 0~\text{and}~T\geq 0\\
\mathbf{G}(b,a),&\quad\text{if}~X \lt 0~\text{and}~T\geq 0\\
\mathbf{G}(a,b)^*,&\quad\text{if}~X \geq 0~\text{and}~T \lt 0\\
\mathbf{G}(b,a)^*,&\quad\text{if}~X \lt 0~\text{and}~T \lt 0.
\end{cases}
\end{align} The method underlying the routine psi for computing
$\Psi(X,T;\mathbf{G}(a,b),{B})$ consists of the following steps.
Step 1. Choose a computationally appropriate Riemann–Hilbert problem to solve based on the location of the point (X, T) using Algorithm 1 (see below). Thus the (X, T)-plane is written as the disjoint union of four regions denoted NoDeformation, LargeX, LargeT, and Painleve.
Step 2. Construct data structures representing the jump contours and jump matrices of the selected Riemann–Hilbert problem. Because of the choice made in Step 1, the jump matrix differs little from the identity except on certain arcs where it takes a (piecewise) constant value and near self-intersection points of the jump contour. Depending on details of the problem, small circles centred at the self-intersection points may be added to the jump contour at this stage in order to remove singularities, and the radii of these circles are chosen so that for the given coordinates (X, T) the jump matrices supported on the circles remain bounded in norm.
Step 3. Solve the relevant Riemann–Hilbert problem numerically by passing the data structures built in Step 2 to suitable routines in the package OperatorApproximation.jl. The quantity
(5.6)
\begin{align}
P_{12}^{[1]}(\widetilde{X},\widetilde{T},\widetilde{\mathbf{G}}):= \lim_{\Lambda\to\infty} 2{\mathrm{i}} \Lambda P_{12}(\Lambda;\widetilde{X},\widetilde{T},\widetilde{\mathbf{G}})
\end{align}is then extracted from the numerically computed solution by a contour integration of the returned weighted Cauchy density.
Step 4. Recover
$\Psi(X,T;\mathbf{G}(a,b),{B})$ from
$P_{12}^{[1]}(\widetilde{X},\widetilde{T},\widetilde{\mathbf{G}})$ using (1.6) and the symmetry relations in Proposition 1.2, Proposition 1.3, and Proposition 1.4.
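For example, the reflection rule (5.5) used in Step 4 amounts to a simple sign-based selection. The following sketch represents $\mathbf{G}(a,b)$ simply by the pair (a, b) and interprets $\mathbf{G}(a,b)^*$ as $\mathbf{G}(a^*,b^*)$; both are assumptions of this illustration rather than the package’s internal conventions.
\begin{verbatim}
# Sketch of the selection (5.5): swap a and b when X < 0, conjugate when T < 0.
function Gtilde(a, b, X, T)
    a1, b1 = X >= 0 ? (a, b) : (b, a)
    return T >= 0 ? (a1, b1) : (conj(a1), conj(b1))
end
\end{verbatim}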
Since the computation of
$\Psi(X,T;\mathbf{G},{B})$ is based on the numerical solution of Riemann–Hilbert problems that depend on (X, T) explicitly and parametrically, the computations for different pairs (X, T) are independent and can be immediately parallelized over a large range of the coordinates. Some of the computations for this work were performed in parallel on the 48-core Pitzer nodes of the Ohio Supercomputer Center [1]. For instance, the solution shown in the plots in Figure 1 is computed over the domain
$\{(X,T)\colon -16\leq X \leq 16,~ -8\leq T \leq 8\}$ with grid spacings
$\texttt{dX}=\texttt{dT}=0.05$. Therefore, to obtain the data for these plots, 205,761 Riemann–Hilbert problems were solved in parallel on the supercomputer as (X, T) ranged over the points of the discretized domain (in batches, of course, due to memory limitations and the number of nodes available).
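As an indication of how such a sweep can be organized (a schematic only: the worker count and the parameters a = b = B = 1 are placeholders, and the actual production runs used the supercomputer’s batch environment), one may distribute the independent calls over worker processes as follows.
\begin{verbatim}
# Schematic parallel sweep over the grid used for Figure 1 (placeholder a, b, B).
using Distributed
addprocs(4)                         # or however many workers are available
@everywhere using RogueWaveInfiniteNLS

Xs = -16:0.05:16                    # 641 grid values
Ts = -8:0.05:8                      # 321 grid values; 641*321 = 205761 points
grid = vec(collect(Iterators.product(Xs, Ts)))
vals = pmap(p -> psi(p[1], p[2], 1.0, 1.0, 1.0), grid)   # independent solves
\end{verbatim}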

The routine psi calls Algorithm 1, and based on the region determined to contain (X, T), calls one of the programs psi_undeformed, psi_largeX, psi_largeT, or psi_Painleve to perform Steps 2–4. These programs are in turn “wrappers” for corresponding lower-level programs rwio_undeformed, rwio_largeX, rwio_largeT and rwio_Painleve. We will describe the syntax of these programs and give some details about the choices of contours and jump matrices needed to build the data structures in Step 2 below. Some users of RogueWaveInfiniteNLS.jl may like to use these routines directly to compute
$\Psi(X,T;\mathbf{G},{B})$ from a specific Riemann–Hilbert representation, since these routines bypass Algorithm 1 and simply compute
$\Psi(X,T;\mathbf{G},{B})$ from the indicated problem, whether or not that is a good choice for the given values of (X, T). However, we wish to emphasize that the casual user of the package need not be concerned with any of these programs and can reliably compute general rogue waves of infinite order in most of the (X, T)-plane just by using the main routine psi.
It is our intention that psi return an accurate numerical evaluation of
$\Psi(X,T;\mathbf{G},{B})$ for coordinates (X, T) lying in a very large, but bounded region of the (X, T)-plane. Indeed, the main utility of the routines is to allow the reliable computation of
$\Psi(X,T;\mathbf{G},{B})$ for values of (X, T) that are definitely not in the regime of applicability of any of the theorems in Section 1.3. However, we also want the region of accurate computability to be large enough to allow for substantial overlap with the various regions of validity of those theorems. This allows one to validate the analytical results as shown in Figures 2, 3, 4, 5, 9, and 10.
In principle the analytical asymptotics make numerical calculations unnecessary for extreme values of the variables, at least to the extent that the asymptotic formulae can be accurately computationally evaluated. Nonetheless, it is of some interest to push the envelope of applicability of the numerical methods beyond the limits of the current version of the software, and here we point out that as the variables become larger, even a Riemann–Hilbert problem adapted to asymptotic analysis in the relevant regime becomes challenging to solve numerically. The reason is that the deviation of the jump matrix from a piecewise-constant matrix becomes increasingly concentrated near the contour self-intersection points, and the rapid variation requires an increasingly large number of collocation points as the variables grow in size. There are strategies for dealing with this phenomenon. The first technique is to remove the non-identity limiting jump matrices using the analytical outer parametrix. This yields a modified Riemann–Hilbert problem with jump matrices that are different from the identity only in small neighbourhoods of the self-intersection points. One may think that an optimal approach would be to then deal with the self-intersection points using analytical local/inner parametrices constructed from special functions (e.g., Airy, Bessel, or parabolic cylinder), and then reducing the problem to a small-norm problem — exactly as in the proofs of the theorems — for the computer to solve. However, just calculating the jump matrices for the small-norm problem accurately requires reliable evaluation of the relevant special functions for extreme values of the arguments, which is again a computational problem of a similar nature. An alternative is to construct a numerical parametrix for a given intersection point by truncating jump contours away from it, imposing identity asymptotics at infinity, and rescaling to obtain a model requiring fewer collocation points for accuracy. Then one conjugates the full problem by the parametrix, which removes all difficulties near the selected point and conjugates the jumps near the remaining points by near-identity factors. Iterating this procedure to take care of the intersection points one-by-one can yield excellent results. The actual procedure has additional technical details, but this is the main idea. We plan to continue to update the software in RogueWaveInfiniteNLS.jl as such improvements come to light, with the aim of making the accurate computation of
$\Psi(X,T;\mathbf{G},{B})$ available in increasingly larger domains of the (X, T)-plane.
In the remainder of this section we redefine v (defined in Section 1.3.1) and w (defined in Section 1.3.2) in terms of the rescaled and reflected coordinates
$(\widetilde{X},\widetilde{T})$:
\begin{align}
v:= \widetilde{T} \widetilde{X}^{-\frac{3}{2}},\qquad w:= \widetilde{X}\widetilde{T}^{-\frac{2}{3}}.
\end{align} We also recall the critical values of v and w:
$v_{\mathrm{c}}:=54^{-\frac{1}{2}}$ and
$w_{\mathrm{c}}:=54^{\frac{1}{3}}$.
Remark 5.1 (Scaling of arguments)
Like the main program psi, the programs psi_undeformed, psi_largeX, psi_largeT, and psi_Painleve take the unscaled variables
$(X,T)\in\mathbb{R}^2$ as arguments, along with the value of B. However, the lower-level programs assume that B = 1 and take the rescaled coordinates
$(\widetilde{X},\widetilde{T})$ as arguments (both nonnegative). To simplify the notation in describing the latter routines below, we will drop the tildes. This also makes it easier for the reader to match with the notation in the rest of the paper where the Riemann–Hilbert problems that are solved numerically by these routines are formulated in terms of variables denoted (X, T) and the derived quantities v and w given by (5.7). Of course if B = 1 and
$X,T \gt 0$, there is no difference between the scaled and unscaled coordinates.
5.3. The region NoDeformation
If (X, T) lies in the region NoDeformation according to Algorithm 1, then psi calls the wrapper psi_undeformed which in turn calls the low-level program rwio_undeformed. The latter program solves numerically the basic Riemann–Hilbert Problem 1 which does not leverage any deformation or opening of lenses. Although the jump contour is stated to be
$|\Lambda|=1$ in (1.4), one can actually take any Jordan curve enclosing the origin
$\Lambda=0$ to be the jump contour. rwio_undeformed leverages this freedom and uses the circle
$|\Lambda|=T^{-\frac{1}{2}}$ as the jump contour if
$1 \lt T\leq T_{\mathrm{max}}$ and uses the original contour
$|\Lambda|=1$ if
$0 \leq T \leq 1$. In practice we take
$T_{\mathrm{max}}:=8$ (see Algorithm 1). With this choice the jump contour stays away from the singularity of the exponential factors in (1.4) at
$\Lambda=0$ (with a distance at least
$1/\sqrt{8}$) while, under the conditions that (X, T) is assigned to the region NoDeformation by Algorithm 1, the matrix norm of the jump matrix is uniformly of moderate size on the jump contour. The low-level routine for computing
$\Psi(X,T;\mathbf{G},{B}=1)$ by solving the undeformed Riemann–Hilbert Problem 1 with the given parameters and for
$X\geq 0$ and
$T\geq 0$ is rwio_undeformed, with arguments X, T (rescaled and nonnegative), a, b, and an integer n giving the number of collocation points to use on each straight-line arc of the polygonal jump contour. For instance, a command of the form sketched below returns
$\Psi(X,T;\mathbf{G},{B})$ at X = 0.8, T = 1.5, with
$\mathbf{G}=\mathbf{G}(a=1,b=2{\mathrm{i}})$ and B = 1. The original circular jump contour is modelled as a square, each side of which is resolved using 400 collocation points. The corresponding wrapper psi_undeformed takes the same arguments as does psi, except for an additional argument allowing the user to specify the number of collocation points on each of the four sides of the square jump contour. Its coordinate arguments are the unscaled coordinates, which can take any signs, and the value of B is specified in the argument list. Thus, for instance, the corresponding wrapper call sketched below returns
$\Psi(X,T;\mathbf{G},{B})$ at
$X=-0.8$, T = 1.5,
$\mathbf{G}=\mathbf{G}(a=1,b=2{\mathrm{i}})$ and B = 1.2 using 400 collocation points on each edge of the square.
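Concretely, the two examples just described correspond to calls of the following form; the placement of the collocation-point count as the final argument of psi_undeformed is our reading of the description above and should be checked against the user’s guide.
\begin{verbatim}
using RogueWaveInfiniteNLS

# Low-level call: rescaled, nonnegative X and T, then a, b, and the number of
# collocation points per straight-line arc of the polygonal (square) contour.
rwio_undeformed(0.8, 1.5, 1, 2im, 400)

# Wrapper call: unscaled coordinates of either sign, then a, b, B, and (assumed
# to come last) the number of collocation points per side of the square.
psi_undeformed(-0.8, 1.5, 1, 2im, 1.2, 400)
\end{verbatim}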
5.4. The region LargeX
If Algorithm 1 determines that (X, T) lies in the region LargeX, then psi calls the wrapper psi_largeX which calls the low-level program rwio_largeX. This low-level program solves the Riemann–Hilbert problem satisfied by
$\mathbf{T}(z;X,v)$ defined by (2.5) with jump conditions given by (2.9). See Section 2 and in particular Figure 12 for the jump contour of this Riemann–Hilbert problem. This is exactly what was done in our first paper [Reference Bilman, Ling and Miller8] where
$\Psi(X,T;\mathbf{G},{B}=1)$ was computed for the first time for
$\mathbf{G}=\mathbf{Q}^{-1}$ for T small:
$|T|\leq 1$. The numerical routine rwio_largeX implemented in the package RogueWaveInfiniteNLS.jl is very similar, the main difference being the use of the open-source Julia programming language.
As described in Section 2, the jump contour for the deformed problem is independent of X but depends on
$v\in[0,v_{\mathrm{c}})$. rwio_largeX adaptively chooses a polygonal model for the jump contour that varies as v ranges over this interval. This variation becomes especially important as v approaches
$v_\mathrm{c}\approx 0.136$. See Figure 18 for the numerical jump contours used by rwio_largeX for different values of v.

Figure 18. The numerical contours used by rwio_largeX for increasing values of
$v\in[0,v_{\mathrm{c}})$.
The arguments taken by rwio_largeX are X and v (the natural parameters for
$\mathbf{T}(z;X,v)$ as described in Section 2), a, b, and n (the number of collocation points per segment of the polygonal jump contour). For instance, a call of the form sketched at the end of this subsection returns
$\Psi(X,T;\mathbf{G},{B}=1)$ at X = 25,
$v=TX^{-\frac{3}{2}}=0.1$, with
$\mathbf{G}=\mathbf{G}(a=1,b=2{\mathrm{i}})$ by using 140 collocation points in each segment. The auxiliary routine vfromXT(X,T) included in RogueWaveInfiniteNLS.jl can be used to compute the value of v given (X, T) if needed.
In this case, the wrapper psi_largeX takes different arguments, as it allows the user to directly compute
$\Psi(X,T;\mathbf{G},{B})$ at arbitrary given (X, T) coordinates (unscaled, and in any quadrant of the plane) using the numerical approach underpinning rwio_largeX. The arguments of psi_largeX are again the same as those of psi with an additional integer argument for specifying the number of points used on each contour segment. Thus,
returns
$\Psi(-25,-0.1,\mathbf{G}(1,2{\mathrm{i}}),1.2)$ computed by scaling the variables by B = 1.2, computing v from the scaled variables using vfromXT, then calling rwio_largeX with 140 collocation points, and finally scaling the returned value by B = 1.2.
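Schematically, and under the same caveat that the exact argument order for the wrapper is an assumption based on the description above, these two calls might be issued as:
    # Low-level routine in the native variables (X, v): arguments (X, v, a, b, n).
    rwio_largeX(25.0, 0.1, 1.0, 2.0im, 140)
    # Wrapper in unscaled (X, T) with B; v is recovered internally via vfromXT.
    psi_largeX(-25.0, -0.1, 1.0, 2.0im, 1.2, 140)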
5.5. The region LargeT
When Algorithm 1 assigns (X, T) to the region LargeT, the main program psi calls the wrapper psi_largeT, which in turn calls the low-level program rwio_largeT. The latter routine computes the solution via the Riemann–Hilbert problem satisfied by the matrix
$\mathbf{T}(Z;T,w)$ described in Section 3. The numerical solution of the latter problem is substantially more complicated than solving for
$\mathbf{T}(z;X,v)$, and it was not considered in our earlier work [Reference Bilman, Ling and Miller8]. Its implementation in RogueWaveInfiniteNLS.jl is a significant new contribution.
To describe the method used by rwio_largeT, we first rewrite the jump conditions satisfied by
$\mathbf{T}(Z;T,w)$, assigning to the constant matrices that appear in them new names that will be convenient for describing certain local transformations later on. Thus the jump conditions given by (3.21) are reformulated as follows:
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{V}_{L}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3}
,\quad \mathbf{V}_{L}:= \begin{bmatrix}1 & 0 \\-\dfrac{\mathfrak{b}}{\mathfrak{a}} & 1 \end{bmatrix},\quad Z\in C^+_{\Gamma,L},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{V}_{R}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},
\quad \mathbf{V}_{R}:=
\begin{bmatrix}1 &\mathfrak{a b} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Gamma,R},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{Y}_{R}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},
\quad \mathbf{Y}_{R}:=
\begin{bmatrix}1 & 0 \\ -\mathfrak{a b} & 1 \end{bmatrix},\quad Z\in C^-_{\Gamma,R},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{Y}_{L}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},\quad
\mathbf{Y}_{L}:=\begin{bmatrix}1 & \dfrac{\mathfrak{b}}{\mathfrak{a}} \\ 0 & 1 \end{bmatrix},\quad Z\in C^-_{\Gamma,L},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{W}_{L}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},\quad
\mathbf{W}_{L} := \begin{bmatrix}1 & -\dfrac{\mathfrak{a}}{\mathfrak{b}} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Sigma,L},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{W}_{R}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},\quad
\mathbf{W}_{R}:=\begin{bmatrix}1 & -\dfrac{\mathfrak{a}^3}{\mathfrak{b}} \\ 0 & 1 \end{bmatrix},\quad Z\in C^+_{\Sigma,R},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{X}_{R}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},\quad
\mathbf{X}_{R}:=\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{a}^3}{\mathfrak{b}} & 1 \end{bmatrix},\quad Z\in C^-_{\Sigma,R},
\end{align}
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3} \mathbf{X}_{L}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},\quad
\mathbf{X}_{L}:=
\begin{bmatrix}1 & 0 \\ \dfrac{\mathfrak{a}}{\mathfrak{b}}& 1 \end{bmatrix},\quad Z\in C^-_{\Sigma,L}.
\end{align} Finally, on
$\Sigma=\Sigma^+\cup\Sigma^-$ we have
\begin{alignat}{2}
\mathbf{T}_+(Z;T,w) &= \mathbf{T}_-(Z;T,w){\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h_-(Z;w)\sigma_3} \mathbf{W}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h_+(Z;w)\sigma_3}, \quad
\mathbf{W}:=
\begin{bmatrix} 0 & \dfrac{\mathfrak{a}}{\mathfrak{b}} \\ - \dfrac{\mathfrak{b}}{\mathfrak{a}} & 0\end{bmatrix},&&\quad Z\in\Sigma^+,
\end{alignat}
\begin{alignat}{2}
\mathbf{T}_+(Z;T,w) &= \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h_-(Z;w)\sigma_3} \mathbf{X}{\mathrm{e}}^{{\mathrm{i}} T^{1/3}h_+(Z;w)\sigma_3},\quad
\mathbf{X}:=\begin{bmatrix} 0 &\dfrac{\mathfrak{b}}{\mathfrak{a}} \\ - \dfrac{\mathfrak{a}}{\mathfrak{b}} & 0\end{bmatrix},&&\quad Z\in\Sigma^-.
\end{alignat} Note that the jump matrices on
$\Sigma^+$ and
$\Sigma^-$ are constants in Z since
$h_+(Z)+h_-(Z) = \kappa(w)$ on those arcs. For numerical purposes, we implement
$h(Z;w)$ as follows. We first define
$R^{N}(Z;w)$ (the numerical implementation of
$R(Z;w)$, hence the superscript) as the function that has branch cuts on the line segments from
$Z_0^*$ to $Z_1$ and from $Z_1$ to $Z_0$. It can be written in terms of principal branch square roots as
\begin{align}
R^{N}(Z;w):=\left( \frac{Z -Z_1(w)}{Z -Z_0^*(w)}\right)^\frac{1}{2}\left( \frac{Z -Z_0(w)}{Z -Z_1(w)}\right)^\frac{1}{2}(Z-Z_0^*(w)),
\end{align} however, this formulation is not stable near
$Z=Z_1(w)$ or
$Z=Z_0^*(w)$. In practice, we work with the following functions:
\begin{align}
R^{N}_{\leftarrow}(Z;w) := (Z-Z_0(w))^{\frac{1}{2}}(Z-Z_0^*(w))^{\frac{1}{2}},
\end{align} which has horizontal branch cuts from
$Z=Z_0(w)$ to
$Z=(-\infty)+ {\mathrm{i}} \operatorname{Im}(Z_0)$ and from
$Z=Z_0^*(w)$ to
$Z=(-\infty) - {\mathrm{i}} \operatorname{Im}(Z_0(w))$,
\begin{align}
R^{N}_{\rightarrow}(Z;w) := - (Z_0(w)-Z)^{\frac{1}{2}}(Z_0^*(w) - Z)^{\frac{1}{2}},
\end{align} which has horizontal branch cuts from
$Z=Z_0(w)$ to
$Z=(+\infty)+ {\mathrm{i}} \operatorname{Im}(Z_0(w))$ and from
$Z=Z_0^*(w)$ to
$Z=(+\infty) - {\mathrm{i}} \operatorname{Im}(Z_0(w))$, and
\begin{align}
R^{N}_{\uparrow}(Z;w) := \left(\frac{Z-Z_0(w)}{Z-Z_0^*(w)}\right)^{\frac{1}{2}}(Z-Z_0^*(w)),
\end{align} which has a vertical branch cut from
$Z=Z_0^*$ to
$Z=Z_0$. Recalling that Σ is oriented upward,
$R^{N}_{\leftarrow}(Z;w)$ is the continuation of
$R^N_-(Z;w)$ to the strip
$-\operatorname{Im}(Z_0) \lt \operatorname{Im}(Z) \lt \operatorname{Im}(Z_0)$ lying to the left-hand side of Σ and
$R^{N}_{\rightarrow}(Z;w)$ is the continuation of
$R^N_+(Z;w)$ to the strip
$-\operatorname{Im}(Z_0) \lt \operatorname{Im}(Z) \lt \operatorname{Im}(Z_0)$ lying to the right-hand side of Σ. The values of the functions
$R^{N}_{\leftarrow}(Z;w)$ and
$R^{N}_{\rightarrow}(Z;w)$ match with the values of
$R^N(Z;w)$ directly above and below the relevant semi-infinite strips emanating from Σ. Using these numerical versions of
$R(Z;w)$, we define corresponding numerical versions
$h^{N}_{\leftarrow}(Z;w)$,
$h^{N}_{\rightarrow}(Z;w)$, and
$h^{N}_{\uparrow}(Z;w)$ of
$h(Z;w)$ via the formula
\begin{align}
h(Z;w) = \frac{R(Z;w)^3}{Z} - 3\cdot2^{-\frac{1}{3}}-\frac{w^2}{6},
\end{align}as in (3.8).
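As a concrete illustration, the three realizations of $R^{N}(Z;w)$ and the corresponding numerical version of $h(Z;w)$ can be coded directly from the formulas above; the following Julia sketch is ours (not the package's internal code) and assumes that the branch point $Z_0(w)$ is supplied by the caller.
    # Principal-branch square roots give the three branch-cut realizations.
    RN_left(Z, Z0)  = sqrt(Z - Z0) * sqrt(Z - conj(Z0))                 # horizontal cuts extending to the left
    RN_right(Z, Z0) = -sqrt(Z0 - Z) * sqrt(conj(Z0) - Z)                # horizontal cuts extending to the right
    RN_up(Z, Z0)    = sqrt((Z - Z0) / (Z - conj(Z0))) * (Z - conj(Z0))  # vertical cut from Z0* to Z0
    # h(Z;w) = R(Z;w)^3/Z - 3*2^(-1/3) - w^2/6, with R replaced by the chosen realization.
    hN(RN, Z, Z0, w) = RN(Z, Z0)^3 / Z - 3 * 2^(-1/3) - w^2 / 6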
As a preliminary step, we locally collapse the jump conditions supported on
$C_{\Gamma,R}^+$ and
$C_{\Sigma,R}^+$ near
$Z=Z_0$ to a single common arc
$C^+_{\Gamma\Sigma,R}$ oriented towards $Z_0$ by a local transformation; due to the simplicity of this transformation, we still call the resulting unknown matrix
$\mathbf{T}(Z)=\mathbf{T}(Z;T,w)$. Since
$C_{\Sigma,R}^+$ is oriented towards $Z_0$ but
$C_{\Gamma,R}^+$ is oriented away from $Z_0$, it follows that the collapsed jump condition satisfied by the redefined
$\mathbf{T}(Z;T,w)$ is
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3}\mathbf{C}_{Z_0} {\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3}, \quad \mathbf{C}_{Z_0}:=\mathbf{V}_R^{-1}\mathbf{W}_R, \quad Z\in C^+_{\Gamma\Sigma,R}.
\end{align} The analogue of this transformation is also carried out near
$Z=Z_0^*$ by locally collapsing the jump conditions supported on
$C_{\Gamma,R}^-$ and
$C_{\Sigma,R}^-$ near
$Z=Z_0^*$ to a single common arc
$C^-_{\Gamma\Sigma,R}$ oriented away from
$Z_0^*$ by a similar transformation. Since
$C_{\Sigma,R}^-$ is oriented away from
$Z_0^*$ but
$C_{\Gamma,R}^-$ is oriented towards
$Z_0^*$, it follows that the collapsed jump condition satisfied by the redefined
$\mathbf{T}(Z;T,w)$ is
\begin{align}
\mathbf{T}_+(Z;T,w) = \mathbf{T}_-(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3}\mathbf{C}_{Z_0^*} {\mathrm{e}}^{{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3}, \quad \mathbf{C}_{Z_0^*}:=\mathbf{Y}_R^{-1}\mathbf{X}_R, \quad Z\in C^-_{\Gamma\Sigma,R}.
\end{align} We place the collapsed contours
$C^+_{\Gamma\Sigma,R}$ and
$C^-_{\Gamma\Sigma,R}$ where
$C^+_{\Sigma,R}$ and
$C^-_{\Sigma,R}$ were placed near $Z_0$ and
$Z_0^*$, respectively. See Figure 19 for the arrangement of contours near $Z_0$. We omit the figure for the configuration near
$Z_0^*$.

Figure 19. Collapsing the jump conditions supported on
$C_{\Gamma,R}^+$ and
$C_{\Sigma,R}^+$ to a common arc near $Z_0$.
We now let
$D_{Z_0}(\delta_0(T))$ and
$D_{Z_0^*}(\delta_0(T))$ denote disks centred at
$Z=Z_0$ and
$Z=Z_0^*$ with common radius
$\delta_0(T)$, whose dependence on T will be determined later. We introduce
\begin{align}
\mathbf{A}(Z;T,w):=\begin{cases}
\mathbf{T}(Z;T,w) {\mathrm{e}}^{-{\mathrm{i}} T^{1/3}h(Z;w)\sigma_3},&\quad Z\in D_{Z_0}(\delta_0(T)) \cup D_{Z_0^*}(\delta_0(T)),\\
\mathbf{T}(Z;T,w),&\quad\text{everywhere else}.
\end{cases}
\end{align} This transformation introduces jump conditions on the boundary of the disks, which we take to be clockwise oriented. It also modifies the jump matrices on the existing arcs of the jump contour for
$\mathbf{T}(Z;T,w)$. The jump matrices associated with the jump conditions satisfied by
$\mathbf{A}(Z;T,w)$ for Z near $Z_0$ are illustrated in blue in Figure 20. See Figure 21 for the analogous jump conditions for Z near
$Z_0^*$. For the purposes of numerics, the disks are modelled by two polygons related by Schwarz reflection so as to preserve the Schwarz symmetry of the original jump contour.

Figure 20. The transformation
$\mathbf{T}(Z;T,w)\mapsto \mathbf{A}(Z;T,w)$ augments the jump contour for
$\mathbf{T}(Z;T,w)$ (black segments) by the blue-coloured segments. The jump matrices (modified or new) associated with
$\mathbf{A}(Z;T,w)$ are given in blue. The substitutions shown in fuchsia define the transformation
$\mathbf{A}(Z;T,w)\mapsto \mathbf{N}(Z;T,w)$ inside the disk (modelled by a polygon in rwio_largeT).

Figure 21. As in Figure 20 but for the neighbourhood of
$Z_0^*$.
We then make the local substitutions shown in fuchsia in Figures 20–21 to transform
$\mathbf{A}(Z;T,w)$ to a new unknown
$\mathbf{N}(Z;T,w)$, and we define
$\mathbf{N}(Z;T,w):= \mathbf{A}(Z;T,w)$ for Z outside the polygonal disks. The transformation
$\mathbf{A}(Z;T,w)\mapsto \mathbf{N}(Z;T,w)$ results in the new unknown
$\mathbf{N}(Z;T,w)$ being analytic inside the disks. The jump conditions satisfied by
$\mathbf{N}(Z;T,w)$ near
$Z=Z_0$ and
$Z=Z_0^*$ are described in Figure 22.

Figure 22. The transformation
$\mathbf{A}(Z;T,w)\mapsto \mathbf{N}(Z;T,w)$ removes the jump discontinuities inside the disk (polygon). The jump matrices (modified) associated with
$\mathbf{N}(Z;T,w)$ are given in fuchsia.
Although the jump contour for the problem satisfied by
$\mathbf{N}(Z;T,w)$ (or by
$\mathbf{T}(Z;T,w)$) is independent of T, it depends on w. So the numerical jump contour is chosen adaptively according to the value of w. The final Riemann–Hilbert problem that is satisfied by
$\mathbf{N}(Z;T,w)$ has the numerical jump contours as shown in Figure 23.

Figure 23. The numerical contours used in the region LargeT for increasing values of
$w\in[0,w_{\mathrm{c}})$. The orange arc is Σ (see Figure 15) and it is included just for reference in the plots. Σ is also modelled by line segments.
Except for those on the polygons modelling the disks centred at
$Z=Z_0$ and
$Z=Z_0^*$, the jump matrices for
$\mathbf{N}(Z;T,w)$ on the subarcs of the numerical contour shown in Figure 23 coincide with the jump matrices for
$\mathbf{T}(Z;T,w)$ as described in (5.8)–(5.18). These jump matrices (except for the constant jump matrices supported on I and
$\Sigma^+\cup\Sigma^-$) are close to the identity away from the points
$Z=Z_1$ and
$Z=Z_2$ when Algorithm 1 assigns (X, T) to the region LargeT. The jump matrices for
$\mathbf{N}(Z;T,w)$ on the polygons centred at
$Z=Z_0$ and
$Z=Z_0^*$ as shown in Figure 22 have elements that grow as T increases. To combat this growth when T is large, rwio_largeT chooses the radii
$\delta_0(T)$ to be smaller when T is larger. Indeed, noting from
that
\begin{align}
{\mathrm{e}}^{{\mathrm{i}} T^{1 / 3} h(Z ; w) \sigma_3}=O(1), \quad T \rightarrow+\infty,
\end{align} if
$|Z-\xi| T^{\frac{2}{9}}=O(1)$, for
$\xi=Z_0(w), Z_0(w)^*$ as
$T \rightarrow+\infty$. Therefore, rwio_largeT scales the common radius of the circles (polygons) centred at
$Z=Z_0(w),Z_0(w)^*$ as
$|T|^{-\frac{2}{9}}$. With these choices and the numerical implementation of
$h(Z;w)$ discussed earlier to compute the jump matrices, rwio_largeT solves for
$\mathbf{N}(Z;T,w)$ using the routines in OperatorApproximation.jl.
The low-level routine rwio_largeT takes as arguments T, w, a, b, and an integer specifying the number of collocation points per polygonal segment of the jump contour. The variables T and w appear because they are the natural coordinates for describing the jump conditions; however,
$w=XT^{-\frac{2}{3}}$ is an explicit function of (X, T), and the routine wfromXT(X,T) provided with RogueWaveInfiniteNLS.jl can be used to find w from a given (X, T) if needed. For instance, a call to rwio_largeT
computes
$\Psi(X,T;\mathbf{G},{B}=1)$ at T = 40,
$w=XT^{-\frac{2}{3}}=1.9$, with
$\mathbf{G}=\mathbf{G}(a=1,b=2{\mathrm{i}})$, using 150 collocation points on each segment of the polygonal numerical jump contour. The corresponding wrapper psi_largeT has the same arguments as
$\texttt{psi}$, except for an additional argument that it passes directly to rwio_largeT to determine the number of collocation points. Thus, a call to psi_largeT
computes
$\Psi(40,-1.9,\mathbf{G}(1,2{\mathrm{i}}),1.2)$ by scaling the variables by B = 1.2, computing w from the scaled variables using wfromXT, and then calling rwio_largeT with 150 collocation points, after which the returned value is scaled by B = 1.2.
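As before, a sketch of these two calls (with the wrapper's argument order assumed) is:
    # Low-level routine in the native variables (T, w): arguments (T, w, a, b, n).
    rwio_largeT(40.0, 1.9, 1.0, 2.0im, 150)
    # Wrapper in unscaled (X, T) with B; w is recovered internally via wfromXT.
    psi_largeT(40.0, -1.9, 1.0, 2.0im, 1.2, 150)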
5.6. The region Painleve
Finally, if Algorithm 1 determines that (X, T) lies in the region Painleve, then psi calls the wrapper psi_Painleve, which in turn calls the low-level routine rwio_Painleve. The latter implements the numerical solution of the modification, described in Section 4, of the Riemann–Hilbert jump conditions for
$\mathbf{T}(z;X,v)$ given in Section 2, with the aim of improving accuracy as
$v=TX^{-\frac{3}{2}}\uparrow v_{\mathrm{c}}=54^{-\frac{1}{2}}$. The method for computing
$\Psi(X,T;\mathbf{G},{B})$ implemented in rwio_Painleve is also a new contribution of the package RogueWaveInfiniteNLS.jl and such a computation was not attempted in our earlier work [Reference Bilman, Ling and Miller8].
Recall that for
$0 \lt v \lt v_{\mathrm{c}}$, the controlling exponent function
$\vartheta(z;v)$ in the Riemann–Hilbert problem analysed in Section 2 has three real simple critical points
$z_{-\infty}(v) \lt z_1(v) \lt 0 \lt z_2(v)$. As
$v \lt v_\mathrm{c}$ gets close to
$v_\mathrm{c}$, the third critical point
$z_{-\infty}(v)$ (not playing a role in the large-X analysis) gets close to
$z_1(v)$, and at
$v=v_\mathrm{c}$ these two critical points collide at
$z=z_\mathrm{c}:=-\sqrt{6}=z_1(v_{\mathrm{c}})$, forming a double critical point.
On the other side of the critical curve
$v=TX^{-\frac{3}{2}}=v_{\mathrm{c}}$ in the (X, T)-plane (equivalent to the curve
$w=XT^{-\frac{2}{3}}=w_{\mathrm{c}}$), for
$0 \lt w \lt w_{\mathrm{c}}$, the controlling exponent function
$h(Z;w)$ has two simple critical points
$Z_1(w) \lt 0 \lt Z_2(w)$ and two non-real branch points
$Z_0(w)$ and
$Z_0(w)^*$. As w approaches
$w_\mathrm{c}$, the branch points
$Z_0(w)$ and
$Z_0(w)^*$ approach the real critical point
$Z_1(w)$, and at
$w=w_\mathrm{c}$ these three points collide at a point
$Z=Z_\mathrm{c}$ related to
$z=z_\mathrm{c}$ by the scaling relation
$Z=w^{-\frac{1}{2}}z=v^{\frac{1}{3}}z$.
Algorithm 1 assigns (X, T) to the region Painleve when
$|v-v_{\mathrm{c}}|$ is small so that one of the two collision scenarios described above is about to occur. The pair of nearby critical points is modelled by the double critical point
$z=z_\mathrm{c}=-\sqrt{6}$ of the exponent function
$\vartheta(z;v_\mathrm{c})$. Consequently, the Riemann–Hilbert problem solved numerically by rwio_Painleve coincides mostly with that solved by rwio_largeX but with critical points
$z=z_\mathrm{c}$ (fixed, double) and
$z=z_2(v)$, varying slightly with
$v\approx v_{\mathrm{c}}$. The main new feature accounted for in rwio_Painleve is an adjustment of the angles with which the jump contours exit the point
$z=z_\mathrm{c}$. See Figure 24 for the numerical jump contours used by rwio_Painleve (and compare with the right-hand panel of Figure 16) to formulate and solve a numerically-tractable Riemann–Hilbert problem for the relevant values of (X, T).

Figure 24. The numerical contours used by rwio_Painleve as
$v$ increases when
$|v-v_{\mathrm{c}}|$ remains small.
$v_{\mathrm{c}}\approx 0.136083$.
Since rwio_Painleve is quite similar to rwio_largeX, the two routines take the same arguments. So, for instance, a call to rwio_Painleve
returns the value of Ψ at X = 80 and
$v=TX^{-\frac{3}{2}}=0.13$, with a = 1 and
$b=2{\mathrm{i}}$, using 150 collocation points on each segment of the numerical jump contour. The corresponding wrapper psi_Painleve has the same arguments as psi, except for an additional argument for the number of collocation points, which is passed directly to rwio_Painleve. Thus, for instance, a call to psi_Painleve
returns
$\Psi(80,-93,\mathbf{G}(1,2{\mathrm{i}}),1)$ using 150 collocation points per contour segment.
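Schematically (again with the wrapper's argument order assumed):
    # Low-level routine: same arguments as rwio_largeX, namely (X, v, a, b, n).
    rwio_Painleve(80.0, 0.13, 1.0, 2.0im, 150)
    # Wrapper in unscaled (X, T) with B.
    psi_Painleve(80.0, -93.0, 1.0, 2.0im, 1.0, 150)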
5.7. Consistency of the low-level programs near region boundaries
In this section we cross-validate the numerically computed solution by comparing the output of the different low-level routines, which we remind the reader are actually designed to compute exactly the same quantity:
$\Psi(X,T;\mathbf{G}(a,b),{B}=1)$. We first compare the solution computed by each of the routines rwio_largeX, rwio_largeT, and
$\texttt{rwio\_Painleve}$ with the solution computed by the simplest routine rwio_undeformed. This is done by taking (X, T) near the origin. We then take X and T large and near the critical curve
$v=TX^{-\frac{3}{2}}=v_{\mathrm{c}}$, and compare the solution computed by rwio_largeX and rwio_largeT with the solution computed by rwio_Painleve. We demonstrate that the computed solutions match to at least 13 digits of accuracy in all the cases mentioned above. This indicates that Algorithm 1 operates in a “seamless” fashion. See the notebook Paper-Code.ipynb in the repository [2] for the sample codes that produced the examples below along with the codes for performing the computations presented in Section 1.
5.7.1. Comparing rwio_largeX and rwio_undeformed near the origin
We set X = 1 and
$v=0.1 v_{\mathrm{c}}$. Then the value of T is determined via the code:
The code below shows that the solution computed using rwio_undeformed and that computed using rwio_largeX match with high accuracy:
Similarly, we set X = 1 and
$v=0.8 v_{\mathrm{c}}$, and find:
5.7.2. Comparing rwio_largeT and rwio_undeformed near the origin
We set T = 1 and
$w=0.1w_{\mathrm{c}}$. Then the value of X is determined via the code:
The code below shows that the solution computed using rwio_undeformed and that computed using rwio_largeT match with high accuracy:
Similarly, we set T = 1 and
$w=0.8w_{\mathrm{c}}$, and find:
5.7.3. Comparing rwio_Painleve and rwio_undeformed near the origin
We set X = 1 and
$v=v_{\mathrm{c}}$. Again, these choices determine the value of T via the code:
The code below shows that the solution computed using rwio_undeformed and that computed using rwio_Painleve match with high accuracy:
5.7.4. Comparing rwio_Painleve with rwio_largeX or rwio_largeT for $(X, T)$ large near the critical curve
To compare rwio_Painleve with rwio_largeX we set X = 2000 and consider
$v \lt v_\mathrm{c}$ but also
$v\approx v_\mathrm{c}$ by setting
$v=0.98v_\mathrm{c}$. The code below shows that the solution computed using rwio_largeX and that computed using rwio_Painleve match near the critical curve
$v=v_{\mathrm{c}}$ with high accuracy:
To compare rwio_Painleve with rwio_largeT we again set X = 2000 and consider
$v \gt v_\mathrm{c}$ (i.e.
$w \lt w_\mathrm{c}$) but also
$v\approx v_\mathrm{c}$ by setting
$v=1.05v_\mathrm{c}$. This choice determines the value of T, which we find numerically; we then obtain the value of w from the determined values of X and T. The code below performs these initial computations and then shows that the solution computed using rwio_largeT and that computed using rwio_Painleve match near the critical curve
$v=v_{\mathrm{c}}$ with high accuracy:
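A sketch of these two comparisons, with illustrative parameters and collocation counts (the exact code is in Paper-Code.ipynb [2]):
    vc = 54^(-1/2)
    X = 2000.0
    # Below the critical curve: compare rwio_largeX with rwio_Painleve.
    v1 = 0.98 * vc
    abs(rwio_largeX(X, v1, 1.0, 2.0im, 140) - rwio_Painleve(X, v1, 1.0, 2.0im, 150))
    # Above the critical curve (w < w_c): compare rwio_largeT with rwio_Painleve.
    v2 = 1.05 * vc
    T2 = X^(3/2) * v2
    w2 = wfromXT(X, T2)
    abs(rwio_largeT(T2, w2, 1.0, 2.0im, 150) - rwio_Painleve(X, v2, 1.0, 2.0im, 150))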
5.8. Effect of increasing the number of collocation points
We now demonstrate how the accuracy of the various routines improves as the number of collocation points is increased. We fix parameters
$a=b=1$ and B = 1. First, to study rwio_largeX, we fix a relatively large reference number of collocation points
$N_\mathrm{X}=80$ and fix
$v=0.5v_\mathrm{c}$. Then varying X and the number m of collocation points per contour segment, we define
Next, to study rwio_largeT, we fix a relatively large reference number of collocation points
$N_\mathrm{T}=80$ and fix
$w=0.5w_\mathrm{c}$. Then varying T and the number m of collocation points per contour segment, we define
Finally, to study rwio_Painleve, we fix a relatively large reference number of collocation points
$N_\mathrm{P}=160$ and fix
$v=v_\mathrm{c}$ to be on the critical curve. Then varying X and the number m of collocation points per contour segment, we define
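The defining code appears in the notebook; schematically, the three pointwise errors have the following form (our helper definitions, with a = b = 1 and B = 1 fixed as above).
    vc, wc = 54^(-1/2), 54^(1/3)     # v_c and w_c
    # Difference between a run with m collocation points per segment and a
    # reference run with N points per segment, as a function of the varying coordinate.
    E_X(X, m; N = 80)  = abs(rwio_largeX(X, 0.5vc, 1.0, 1.0, m) - rwio_largeX(X, 0.5vc, 1.0, 1.0, N))
    E_T(T, m; N = 80)  = abs(rwio_largeT(T, 0.5wc, 1.0, 1.0, m) - rwio_largeT(T, 0.5wc, 1.0, 1.0, N))
    E_P(X, m; N = 160) = abs(rwio_Painleve(X, vc, 1.0, 1.0, m) - rwio_Painleve(X, vc, 1.0, 1.0, N))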
In Figure 25 we plot these pointwise errors over different intervals (over the X-axis for
$\mathcal{E}_m^\mathrm{X}(X)$ and
$\mathcal{E}_m^\mathrm{P}(X)$, and over the T-axis for
$\mathcal{E}_m^\mathrm{T}(T)$) for three values of m increasing towards
$N_{\mathrm{X}}$,
$N_{\mathrm{P}}$ and
$N_{\mathrm{T}}$. These plots show that once m has increased to half of the reference value in each case, the difference has decreased to machine precision. This demonstrates how quickly the accuracy improves as the number of collocation points per contour segment is increased.

Figure 25. Left:
$\mathcal{E}^{\mathrm{X}}_m(X)$ for
$m=10$ (dashed-dotted),
$m=20$ (dashed), and
$m=40$ (solid) over
$400\leq X \leq 420$. Center:
$\mathcal{E}^{\mathrm{T}}_m(T)$ for
$m=10$ (dashed-dotted),
$m=20$ (dashed), and
$m=40$ (solid) over
$600\leq T \leq 620$. Right:
$\mathcal{E}^{\mathrm{P}}_m(X)$ for
$m=20$ (dashed-dotted),
$m=40$ (dashed), and
$m=80$ (solid) over
$400\leq X \leq 420$.
Data availability statement
The code used in generating the plots in this work can be found in [2].
Acknowledgements
The author(s) would like to thank the Isaac Newton Institute for Mathematical Sciences, Cambridge, for support and hospitality during the program “Emergent phenomena in nonlinear dispersive waves,” where some of the work on this paper was undertaken. This work was supported by EPSRC grant EP/R014604/1. The computations in this work were facilitated through the use of the advanced computational, storage, and networking infrastructure provided by the Ohio Supercomputer Center (48-core Pitzer nodes) [1].
Author contributions
Conceptualization: D.B.; P.D.M. Methodology: D.B.; P.D.M. Writing original draft: D.B.; P.D.M. All authors approved the final submitted draft.
Funding statement
D. Bilman was supported by the National Science Foundation under grant numbers DMS-2108029 and DMS-2510003, and by the Simons Foundation under grant number MPS-TSM-00007577; P. D. Miller was supported by the National Science Foundation under grant numbers DMS-1812625, DMS-2204896, and DMS-2508694.
Competing interests
None.
Ethical standards
The research meets all ethical guidelines, including adherence to the legal requirements of the study country.
Appendix A. Elementary properties of
$\Psi(X,T;\mathbf{G},{B})$
This appendix is devoted to the proofs of several basic symmetries of the function
$\Psi(X,T;\mathbf{G},{B})$. First, we prove the scaling invariance of the solution
$\Psi(X,T;\mathbf{G},{B})$ with respect to B > 0.
Proof of Proposition 1.2
Suppose that
$(X,T)\in\mathbb{R}^2$, B > 0, and G satisfying
$\det(\mathbf{G})=1$ and
$\mathbf{G}=\sigma_2\mathbf{G}^*\sigma_2$ are given. Let
${\mathbf{P}}(\Lambda;{X},{T},\mathbf{G},{B})$ be the solution of Riemann–Hilbert Problem 1 with these given parameters. On the other hand, let
$\widetilde{\mathbf{P}}(\Lambda;\widetilde{X},\widetilde{T},\mathbf{G},1)$ be the solution of Riemann–Hilbert Problem 1 for given
$(\widetilde{X},\widetilde{T})\in\mathbb{R}^2$ with B = 1. Since the jump matrix in Riemann–Hilbert Problem 1 is invariant under
$\Lambda\mapsto {B}^{-1}\Lambda$,
$X\mapsto {B} X$, and
$T\mapsto {B}^2 T$, by uniqueness we find that
\begin{align}
\mathbf{P}(\Lambda;X,T,\mathbf{G},{B}) \equiv \widetilde{\mathbf{P}}({B}^{-1} \Lambda; {B} X,{B}^2 T,\mathbf{G},1)
\end{align}since the radius of the circular jump contour in Riemann–Hilbert Problem 1 can be taken to be arbitrary. We deduce from (A.1) that
\begin{align}
\begin{aligned}
\Psi(X,T; \mathbf{G}, {B}) &= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda P_{12}(\Lambda;X,T,\mathbf{G},{B})\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda \widetilde{P}_{12}({B}^{-1}\Lambda; {B} X, {B}^2 T,\mathbf{G}, 1)\\
&= {B} \left(2{\mathrm{i}} \lim_{\Lambda\to \infty} {B}^{-1}\Lambda \widetilde{P}_{12}(
{B}^{-1}\Lambda; {B} X, {B}^2 T,\mathbf{G}, 1)\right)\\
&= {B} \Psi({B} X, {B}^2 T; \mathbf{G}, 1),
\end{aligned}
\end{align}which is the claimed scaling symmetry.
As in the rest of the paper, from this point onwards in this Appendix we take B = 1 and omit B from all argument lists.
We now work towards proving the symmetries of the solution
$\Psi(X,T;\mathbf{G})$ with respect to
$X\mapsto -X$ and
$T\mapsto -T$. Given the solution
$\mathbf{P}(\Lambda;X,T,\mathbf{G}(a,b))$ of Riemann–Hilbert Problem 1, define
\begin{align}
\mathbf{X}(\Lambda;X,T,\mathbf{G}(a,b)) :=
\begin{cases}
\sigma_3 \mathbf{P}(\Lambda;X,T, \mathbf{G}(a,b)) {\mathrm{e}}^{-4{\mathrm{i}} \Lambda^{-1}\sigma_3}\sigma_3, &\quad |\Lambda| \gt 1,\\
\sigma_3 \mathbf{P}(\Lambda;X,T, \mathbf{G}(a,b)) {\mathrm{e}}^{-2{\mathrm{i}} (\Lambda X + \Lambda^2 T)\sigma_3}({\mathrm{i}} \sigma_2)\sigma_3, &\quad |\Lambda| \lt 1.
\end{cases}
\end{align}Proposition A.1.
$\mathbf{X}(- \Lambda; -X, T,\mathbf{G}(b,a)) = \mathbf{P}(\Lambda;X,T,\mathbf{G}(a,b))$.
Proof. Observe that for
$|\Lambda| = 1$ we have
\begin{align}
\begin{aligned}
\mathbf{X}_+(\Lambda;X,T, \mathbf{G}(a,b)) =& \sigma_3 \mathbf{P}_+(\Lambda;X,T, \mathbf{G}(a,b)) {\mathrm{e}}^{-4{\mathrm{i}} \Lambda^{-1}\sigma_3}\sigma_3\\
=& \sigma_3 \mathbf{P}_-(\Lambda;X,T, \mathbf{G}(a,b)) {\mathrm{e}}^{-{\mathrm{i}} (\Lambda X+\Lambda^2 T + 2 \Lambda^{-1})\sigma_{3}}\mathbf{G}(a,b)\sigma_3 {\mathrm{e}}^{{\mathrm{i}} (\Lambda X+\Lambda^2 T - 2 \Lambda^{-1})\sigma_{3}}\\
=& \mathbf{X}_-(\Lambda;X,T, \mathbf{G}(a,b))\sigma_3 (-{\mathrm{i}}\sigma_2) {\mathrm{e}}^{{\mathrm{i}}(\Lambda X + \Lambda^2 T - 2 \Lambda^{-1})\sigma_3} \mathbf{G}(a,b)\sigma_3 \\
&{}\cdot {\mathrm{e}}^{{\mathrm{i}} (\Lambda X+\Lambda^2 T - 2 \Lambda^{-1})\sigma_{3}}\\
=& \mathbf{X}_-(\Lambda;X,T, \mathbf{G}(a,b)){\mathrm{e}}^{-{\mathrm{i}}(\Lambda X + \Lambda^2 T - 2 \Lambda^{-1})\sigma_3}
[\sigma_3 (-{\mathrm{i}}\sigma_2) \mathbf{G}(a,b)\sigma_3]\\
&{}\cdot {\mathrm{e}}^{{\mathrm{i}} (\Lambda X+\Lambda^2 T - 2 \Lambda^{-1})\sigma_{3}}\\
=& \mathbf{X}_-(\Lambda;X,T, \mathbf{G}(a,b)){\mathrm{e}}^{-{\mathrm{i}}(\Lambda X + \Lambda^2 T - 2 \Lambda^{-1})\sigma_3}
\mathbf{G}(b,a) {\mathrm{e}}^{{\mathrm{i}} (\Lambda X+\Lambda^2 T - 2 \Lambda^{-1})\sigma_{3}},
\end{aligned}
\end{align} since
$\sigma_3 (-{\mathrm{i}}\sigma_2) \mathbf{G}(a,b)\sigma_3 = \mathbf{G}(b,a)$. Thus,
$\mathbf{X}(-\Lambda; -X, T, \mathbf{G}(b,a))$ satisfies exactly the same jump condition as
$\mathbf{P}(\Lambda;X,T, \mathbf{G}(a,b))$. Since they satisfy the same normalization and analyticity properties, by uniqueness of the solutions of Riemann–Hilbert Problem 1 the result follows.
The proof of Proposition 1.3 is now a simple consequence.
Proof of Proposition 1.3
We make use of Proposition A.1 and compute
\begin{align}
\begin{aligned}
\Psi(X,T;\mathbf{G}(a,b)) &= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda P_{12}(\Lambda; X,T, \mathbf{G}(a,b))\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda \left[ \sigma_3 \mathbf{P}(- \Lambda; -X, T, \mathbf{G}(b,a)){\mathrm{e}}^{-4{\mathrm{i}} \Lambda^{-1}\sigma_3} \sigma_3 \right]_{12}\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda \left[ - P_{12}(- \Lambda; -X, T, \mathbf{G}(b,a)) \right]\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \left[ ( -\Lambda) P_{12}(- \Lambda; -X, T, \mathbf{G}(b,a)) \right]\\
&=\Psi(-X,T;\mathbf{G}(b,a)),
\end{aligned}
\end{align}which is the claimed symmetry.
An easier observation is the following.
Proposition A.2.
$\mathbf{P}(\Lambda;X,-T,\mathbf{G}(a,b)) = \mathbf{P}(-\Lambda^*;X,T,\mathbf{G}(a,b)^*)^*$.
Proof. It is straightforward to verify that the two matrix functions satisfy the same analyticity and normalization properties, and satisfy the same jump condition. The result follows from uniqueness.
We now prove Proposition 1.4 as a consequence of this result.
Proof of Proposition 1.4
We make use of Proposition A.2 and compute:
\begin{align}
\begin{aligned}
\Psi(X,-T;\mathbf{G}(a,b)) &= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda P_{12}(\Lambda; X,-T, \mathbf{G}(a,b))\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \Lambda \left[P_{12}(- \Lambda^*; X, T, \mathbf{G}(a,b)^*)^* \right]\\
&= 2{\mathrm{i}} \lim_{\Lambda\to \infty} \left[\Lambda^* P_{12}(- \Lambda^*; X, T, \mathbf{G}(a,b)^*) \right]^*\\
&= \lim_{\Lambda\to \infty} \left[2{\mathrm{i}} (-\Lambda^*) P_{12}(- \Lambda^*; X, T, \mathbf{G}(a,b)^*) \right]^*\\
&= \lim_{\Lambda\to \infty} \left[2{\mathrm{i}} \Lambda P_{12}(\Lambda ; X, T, \mathbf{G}(a,b)^*) \right]^*\\
&=\Psi(X,T;\mathbf{G}(a,b)^*)^*,
\end{aligned}
\end{align}which is the claimed symmetry.
Next, we turn to the proof of Proposition 1.5, which concerned a normalization of the parameters
$a,b$ in
$\mathbf{G}(a,b)$.
Proof of Proposition 1.5
It is clear from the structure of the matrix
$\mathbf{G}(a,b)$ in (1.5) that
$\mathbf{G}(a,b)=\mathbf{G}(c a, c b)$ for any scalar c > 0. Since the dependence on a and b of Riemann–Hilbert Problem 1 enters only via the matrix G, we have
An additional identity following from (1.5) is
$\mathbf{G}({\mathrm{e}}^{{\mathrm{i}}\theta}a,{\mathrm{e}}^{-{\mathrm{i}}\theta}b)={\mathrm{e}}^{{\mathrm{i}}\theta\sigma_3}\mathbf{G}(a,b)$ for all
$\theta\in\mathbb{R}$. Since
${\mathrm{e}}^{{\mathrm{i}}\theta\sigma_3}$ commutes with
${\mathrm{e}}^{-{\mathrm{i}}(\Lambda X+\Lambda^2T+2 {B} \Lambda^{-1})\sigma_3}$ it can be absorbed into
$\mathbf{P}_-(\Lambda;X,T,\mathbf{G})$, which has no bearing on
$\Psi(X,T;\mathbf{G})$. Taken together, we can say that for any
$c\in\mathbb{C}\setminus\{0\}$, the matrices
$\mathbf{G}(ca,c^*b)$ and
$\mathbf{G}(a,b)$ yield exactly the same solution
$\Psi(X,T;\mathbf{G})$, or put another way,
Finally, one can remove phase factors on b by diagonal conjugation of P, leading to the formula
Composing these identities by first taking
$c={\mathrm{e}}^{-{\mathrm{i}}\arg(a)}/\sqrt{|a|^2+|b|^2}$ in (A.8) so that
\begin{align}
\Psi(X,T;\mathbf{G}(a,b))=\Psi\left(X,T;\mathbf{G}\left(\frac{|a|}{\sqrt{|a|^2+|b|^2}},\frac{|b|}{\sqrt{|a|^2+|b|^2}}{\mathrm{e}}^{{\mathrm{i}}\arg(ab)}\right)\right)
\end{align} and then taking
$\theta=\arg(ab)$ in (A.9), we arrive at (1.10).
Finally, recall the parameterization
$\mathbf{G}=\mathbf{G}(a,b)$ by complex numbers
$a,b$ not both zero as given in (1.5). Proposition 1.8 concerned the special case that either a = 0 or b = 0, and we give its proof now.
Proof of Proposition 1.8
First, if b = 0, but a ≠ 0, then
$\mathbf{G}=\mathbf{G}(a,b)$ given in (1.5) is a diagonal matrix and it is easy to verify that the matrix function
\begin{align}
\mathbf{P}(\Lambda;X,T,\mathbf{G}) =
\begin{cases}
\begin{bmatrix} \dfrac{a^*}{|a|} & 0 \\0 & \dfrac{a}{|a|} \end{bmatrix},&\quad |\Lambda| \lt 1,\\
\mathbb{I},&\quad |\Lambda| \gt 1,
\end{cases}
\end{align} is the solution of Riemann–Hilbert Problem 1, which produces
$\Psi(X,T;\mathbf{G})\equiv 0$ by (1.6). Similarly, if a = 0, but b ≠ 0, then G is an off-diagonal matrix, and the jump matrix (1.4) may be expressed as
\begin{align}
{\mathrm{e}}^{-{\mathrm{i}}(\Lambda X+\Lambda^2T+2 \Lambda^{-1})\sigma_3}\mathbf{G}
{\mathrm{e}}^{{\mathrm{i}}(\Lambda X+\Lambda^2T+2 \Lambda^{-1})\sigma_3}
= \frac{1}{|b|}\begin{bmatrix}
0 & b^* {\mathrm{e}}^{-2{\mathrm{i}} (\Lambda X +\Lambda^2 T)}\\
-b {\mathrm{e}}^{2{\mathrm{i}} (\Lambda X +\Lambda^2 T)} & 0
\end{bmatrix}{\mathrm{e}}^{4{\mathrm{i}} \Lambda^{-1}\sigma_3}.
\end{align}In this case, one can verify that
\begin{align}
\mathbf{P}(\Lambda;X,T,\mathbf{G}) =
\begin{cases}
\displaystyle\frac{1}{|b|}\begin{bmatrix}
0 &-b^* {\mathrm{e}}^{-2{\mathrm{i}} (\Lambda X +\Lambda^2 T)}\\
b {\mathrm{e}}^{2{\mathrm{i}} (\Lambda X +\Lambda^2 T)} & 0
\end{bmatrix},&\quad |\Lambda| \lt 1,\\
{\mathrm{e}}^{4{\mathrm{i}} \Lambda^{-1}\sigma_3},&\quad |\Lambda| \gt 1,
\end{cases}
\end{align} is the solution of Riemann–Hilbert Problem 1, which again produces
$\Psi(X,T;\mathbf{G})\equiv 0$ by (1.6).
Appendix B. Computing
$\mathcal{V}(y;\tau)$ related to the increasing tritronquée solution of Painlevé-II
In this appendix we provide the details concerning the computation of
$\mathcal{V}(y;\tau)$ for real bounded values of y, which is used in the numerical validation of Theorem 1.25 as shown in Figure 10. We recall that
$\mathcal{V}(y;\tau)$ is characterized by the conditions (1.95) in terms of the (unique) increasing tritronquée solution u(x) of (1.94) with
$\alpha=\frac{1}{2}+{\mathrm{i}} p$, where
$p=\frac{1}{2\pi}\ln(1+\tau^2)$ and
$\tau=|b/a|$. As explained in Section 4,
$\mathcal{V}(y;\tau)$ is obtained via (4.5) from the unique solution
$\mathbf{U}^{\mathrm{TT}}(\zeta;y,\tau)$ of the Riemann–Hilbert problem arising from a Lax pair for the Painlevé-II equation due to Jimbo and Miwa [Reference Jimbo and Miwa21].
As discussed in Section 5, the numerical framework developed in [Reference Trogdon and Olver39] and implemented in OperatorApproximation.jl [Reference Trogdon40] concerns Riemann–Hilbert problems posed on a suitable oriented contour Γ and normalized such that the solution is of the form
$\mathbf{C}+\mathcal{C}^{\Gamma}[\mathbf{F}](\zeta)$, where C is a constant
$2\times 2$ matrix and
$\mathcal{C}^{\Sigma}[\mathbf{F}](\zeta)$ is the Cauchy transform
\begin{align}
\mathcal{C}^{\Gamma}[\mathbf{F}](\zeta) = \frac{1}{2\pi {\mathrm{i}}} \int_{\Gamma} \frac{\mathbf{F}(s)}{s-\zeta} \mathrm{d} s.
\end{align}Therefore, we consider the following Riemann–Hilbert problem satisfied by the renormalized function
\begin{align}
\mathbf{W}^{\mathrm{TT}}(\zeta;y,\tau):=
\begin{cases}
\mathbf{U}^{\mathrm{TT}}(\zeta;y,\tau),&\quad |\zeta| \lt 1,\\
\mathbf{U}^{\mathrm{TT}}(\zeta;y,\tau)\zeta^{{\mathrm{i}} p \sigma_3},&\quad |\zeta| \gt 1.
\end{cases}
\end{align}
Figure B1. Jump contours and conditions associated with Riemann–Hilbert Problem 2 in the ζ-plane satisfied by
$\mathbf{W}^{\mathrm{TT}}(\zeta;y,\tau)$.
Riemann–Hilbert Problem 2 (Renormalized Jimbo-Miwa Painlevé-II problem)
Let
$y, p, \tau \in \mathbb{C}$ be related by
$\tau^2={\mathrm{e}}^{2 \pi p}-1$. Seek a
$2 \times 2$ matrix-valued function
$\mathbf{W}^{\mathrm{TT}}(\zeta;y,\tau)$ with the following properties.
• Analyticity:
$\mathbf{W}^{\mathrm{TT}}(\zeta;y,\tau)$ is analytic for ζ in the complement of the unit circle in the five sectors
$S_0:|\arg(\zeta)| \lt \frac{1}{2}\pi$, $S_1: \frac{1}{2}\pi \lt \arg(\zeta) \lt \frac{5}{6}\pi$, $S_{-1}: -\frac{5}{6}\pi \lt \arg(\zeta) \lt -\frac{1}{2}\pi$, $S_2: \frac{5}{6}\pi \lt \arg(\zeta) \lt \pi$, and $S_{-2}: -\pi \lt \arg(\zeta) \lt -\frac{5}{6}\pi$. It takes continuous boundary values on the excluded rays and at the origin from each sector.
• Jump conditions:
$\mathbf{W}_+^{\mathrm{TT}}(\zeta;y,\tau)=\mathbf{W}^{\mathrm{TT}}_-(\zeta;y,\tau) \mathbf{V}^{\mathrm{PII}}(\zeta; y,\tau)$, where
$\mathbf{V}^{\mathrm{PII}}(\zeta; y,\tau)$ is the matrix defined on the jump contour shown in Figure B1.
• Normalization:
$\mathbf{W}^{\mathrm{TT}}(\zeta;y,\tau) \rightarrow \mathbb{I}$ as
$\zeta \rightarrow \infty$ uniformly in all directions.
The function
$\mathcal{V}(y;\tau)$ is then given by
\begin{align}
\mathcal{V}(y;\tau) = \lim_{\zeta\to\infty} \zeta {W}^{\mathrm{TT}}_{12}(\zeta; y,\tau).
\end{align}For the purposes of verifying Theorem 1.25, one only needs to obtain
$\mathcal{V}(y;\tau)$ for fairly small values of
$|y|$. For
$y\in\mathbb{R}$ close to y = 0, the jump matrix
$\mathbf{V}^{\mathrm{PII}}(\zeta ; y,\tau)$ is bounded and tends to the identity matrix as
$\zeta\to\infty$ on any arc of the jump contour described in Figure B1. For such
$y\in\mathbb{R}$, we numerically solve Riemann–Hilbert Problem 2 as is (without employing any steepest descent deformations) using [Reference Trogdon40]. The routines we developed to compute
$\mathcal{V}(y;\tau)$ can be found in the Jupyter notebook Painleve2TT.ipynb in the repository associated with this paper [2].
In order to verify Theorem 1.25, we consider a dense grid
$\mathbb{Y}$ on the closed interval
$-1\leq y \leq 1$ with a mesh size of 0.005. We numerically solve Riemann–Hilbert Problem 2 for each
$y\in\mathbb{Y}$ and obtain the data for
$\mathcal{V}(y;\tau)$ via (B.3) over
$\mathbb{Y}$. We then interpolate over
$\mathbb{Y}$ to obtain a continuous function
$\mathcal{V}(y;\tau)$ of y. We use this interpolant for each value of
$X\in\mathbb{X}$, where
\begin{align}\begin{aligned}
\mathbb{X} := \{200,400,600,800,1000,1200,1400,1600,1800,2000,2200,2400,2600,2800,3000,\\
3200,3400,3600,3800,4000,5000,6000,7000,8000,9000,10000\},
\end{aligned}
\end{align}to compute E(y) as defined in (1.100) and then take the supremum over
$y\in\mathbb{R}$ as described immediately thereafter.
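The grid-and-interpolation step described above is elementary; the following Julia sketch illustrates it with placeholder data, since the actual values come from solving Riemann–Hilbert Problem 2 at each grid point (see Painleve2TT.ipynb [2]), and Interpolations.jl is merely one convenient choice of interpolation package.
    using Interpolations
    ygrid = -1.0:0.005:1.0                 # the grid on [-1, 1] with mesh size 0.005
    # In the notebook, Vdata[i] holds the value of 𝒱 at ygrid[i] obtained from
    # Riemann–Hilbert Problem 2 via (B.3); here we use zeros just to show the step.
    Vdata = zeros(ComplexF64, length(ygrid))
    Vinterp = linear_interpolation(ygrid, Vdata)
    Vinterp(0.3)                           # evaluate the interpolant off the grid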
Appendix C. User’s guide for the package RogueWaveInfiniteNLS.jl
In this Appendix we list all of the commands defined in the package RogueWaveInfiniteNLS.jl written in the Julia programming language.
C.1. Main command
Most users will only need the command psi.
This command returns a numerical approximation of
$\Psi(X,T;\mathbf{G}(a,b),{B})$ computed in a black-box fashion. It determines which of the routines psi_undeformed, psi_largeX, psi_largeT, and psi_Painleve to call based on the coordinates (X, T).
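For example, assuming (based on the descriptions in Section 5) that the arguments are the unscaled coordinates followed by a, b, and B, a typical call would look like the following sketch.
    using RogueWaveInfiniteNLS
    # Black-box evaluation of Ψ(X, T; G(a, b), B); the argument order here is assumed.
    psi(-0.8, 1.5, 1.0, 2.0im, 1.2)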
C.2. Commands for computing
$\Psi$ based on specific deformed Riemann–Hilbert problems
C.2.1. Using the undeformed Riemann–Hilbert problem
The following commands implement a direct solution of Riemann–Hilbert Problem 1.
This solves a version of Riemann–Hilbert Problem 1 assuming X > 0 and T > 0, and returns
$\Psi(X,T;\mathbf{G}(a,b),1)$ with n collocation points per contour segment.
This wrapper for rwio_undeformed allows for variables (X, T) of any signs, and also takes an additional argument representing the value of B. It calls rwio_undeformed after rescaling the variables by B and determining the appropriate parameters from the matrix
$\widetilde{\mathbf{G}}(a,b)$, and then rescales the returned value again by B.
C.2.2. Using the Riemann–Hilbert problem deformed for large-
$X$ asymptotics
The following commands implement a numerical solution of the Riemann–Hilbert Problem satisfied by
$\mathbf{T}(z;X,v)$ described in Section 2.
This uses the native variables (X, v) for
$\mathbf{T}(z;X,v)$. The underlying assumptions are X > 0 and
$0\leq v \lt v_\mathrm{c}$. There are useful routines TfromXv and vfromXT for switching back and forth between the coordinates (X, v) and (X, T) that are described below in Section C.3.
This wrapper allows for variables (X, T) of any signs, and also takes an additional argument representing the value of B. For
$X\geq 0$ and
$T\geq 0 $, it calls rwio_largeX with rescaled (X, T) coordinates so that B = 1 and with v obtained from the rescaled (X, T) via the routine vfromXT. If X < 0 or T < 0, a symmetry is used to map (X,T) to a point in the first quadrant, and then rwio_largeX is called using that point.
C.2.3. Using the Riemann–Hilbert problem deformed for large-
$T$ asymptotics
The following commands implement a numerical solution of the Riemann–Hilbert Problem satisfied by
$\mathbf{T}(Z;T,w)$ described in Section 3.
This uses the native variables (T, w) for
$\mathbf{T}(Z;T,w)$. The underlying assumptions are T > 0 and
$0\leq w \lt w_\mathrm{c}$. There are useful routines XfromTw and wfromXT for switching back and forth between the coordinates (T, w) and (X, T) that are described below in Section C.3.
This wrapper allows for variables (X, T) of any signs, and also takes an additional argument representing the value of B. For
$X\geq 0$ and
$T\geq 0 $, it calls rwio_largeT with rescaled (X, T) coordinates so that B = 1 and with w obtained from the rescaled (X, T) via the routine wfromXT. If X < 0 or T < 0, a symmetry is used to map (X,T) to a point in the first quadrant, and then rwio_largeT is called using that point.
C.2.4. Using the Riemann–Hilbert problem deformed for large
$X$ and
$T$ near the critical curve
The following commands implement a modification of the Riemann–Hilbert problem satisfied by
$\mathbf{T}(z;X,v)$ as described in Section 4, accounting for the effect of
$v\approx v_\mathrm{c}$.
This again uses the native variables (X, v) for
$\mathbf{T}(z;X,v)$. The underlying assumptions are X > 0 and
$v\approx v_\mathrm{c}$.
This wrapper allows for variables (X, T) of any signs, and also takes an additional argument representing the value of B. For
$X\geq 0$ and
$T\geq 0 $, it calls rwio_Painleve with rescaled (X, T) coordinates so that B = 1 and with v obtained from the rescaled (X, T) via the routine vfromXT. If X < 0 or T < 0, a symmetry is used to map (X,T) to a point in the first quadrant, and then rwio_Painleve is called using that point.
C.3. Routines for changing coordinates
The package RogueWaveInfiniteNLS.jl defines for the user the two important constants VCRIT representing
$v_\mathrm{c}:=54^{-\frac{1}{2}}$ and WCRIT representing
$w_\mathrm{c}:=54^\frac{1}{3}$. The following commands allow the user to easily move between the coordinates (X, T), (X, v), and (T, w).
This returns the value
$T = X^{\frac{3}{2}}v$ determined by a given (X, v), in case one would like to extract the T-coordinate of a point on the curve in the (X, T) plane determined by fixing the value of v.
In a similar fashion, this returns
$v = T X^{-\frac{3}{2}}$, in case one would like to determine the value of v to use in the routine rwio_largeX from given (X, T).
This returns the value
$X = T^{\frac{2}{3}} w$ determined by a given (T, w), in case one would like to extract the X-coordinate of a point on the curve in the (X, T) plane determined by fixing the value of w.
In a similar fashion, this returns
$w = X T^{-\frac{2}{3}}$, in case one would like to determine the value of w to use in the routine rwio_largeT from given (X, T).
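A short sketch of these conversions in use follows; the argument orders are assumed to match the routine names, and the stand-alone definitions at the end simply restate the formulas above.
    using RogueWaveInfiniteNLS
    v = vfromXT(25.0, 0.1)          # v = T X^(-3/2)
    T = TfromXv(25.0, 0.5 * VCRIT)  # T = X^(3/2) v
    w = wfromXT(25.0, 40.0)         # w = X T^(-2/3)
    X = XfromTw(40.0, 0.5 * WCRIT)  # X = T^(2/3) w
    # Stand-alone equivalents of the conversion formulas:
    my_vfromXT(X, T) = T * X^(-3/2)
    my_wfromXT(X, T) = X * T^(-2/3)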