1 Introduction
Non-local integrable non-linear Schrödinger (NLS) equations are generated from matrix spectral problems under specific symmetric reductions on potentials [Reference Ablowitz and Musslimani3]. The corresponding inverse scattering transforms have been recently presented, under zero or non-zero boundary conditions, and there still exist N-soliton solutions in the non-local cases [Reference Ablowitz, Luo and Musslimani2, Reference Ablowitz and Musslimani4, Reference Gerdjikov and Saxena15]. Such soliton solutions can be constructed more generally from the Riemann–Hilbert problems with the identity jump matrix [Reference Yang51] and by the Hirota bilinear method [Reference Gürses and Pekcan16]. Some vector or matrix generalisations [Reference Ablowitz and Musslimani5, Reference Fokas10, Reference Ma32] and other interesting non-local integrable equations [Reference Ji and Zhu20, Reference Song, Xiao and Zhu44] were also presented. We would like to propose a class of general non-local reverse-space matrix NLS equations and analyse their inverse scattering transforms and soliton solutions through formulating and solving associated Riemann–Hilbert problems.
The Riemann–Hilbert approach is one of the most powerful approaches for investigating integrable equations and particularly constructing soliton solutions [Reference Novikov, Manakov, Pitaevskii and Zakharov42]. Many integrable equations, such as the multiple wave interaction equations [Reference Novikov, Manakov, Pitaevskii and Zakharov42], the general coupled non-linear Schrödinger equations [Reference Wang, Zhang and Yang46], the generalised Sasa–Satsuma equation [Reference Geng and Wu13], the Harry Dym equation [Reference Xiao and Fan47] and the AKNS soliton hierarchies [Reference Ma27], have been studied by formulating and analysing their Riemann–Hilbert problems associated with matrix spectral problems.
A general procedure for formulating Riemann–Hilbert problems can be described as follows. We start from a pair of matrix spectral problems, say,
where i is the unit imaginary number, $\lambda $ is a spectral parameter, u is a potential and $\phi$ is an $m\times m$ matrix eigenfunction. The compatibility condition of the above two matrix spectral problems, that is, the zero curvature equation:
where $[\cdot,\cdot]$ is the matrix commutator, presents an integrable equation. To establish an associated Riemann–Hilbert problem for the above integrable equation, we adopt the following equivalent pair of matrix spectral problems:
where $\psi$ is also an $m\times m$ matrix eigenfunction. We often assume that A and B are constant commuting $m\times m$ matrices, and that P and Q are trace-less $m\times m$ matrices. The equivalence between (1.1) and (1.3) follows from the commutativity of A and B. The properties $(\det \psi )_x=(\det \psi )_t=0$ are two consequences of $\textrm{tr}\,P=\textrm{tr}\,Q=0$. There exists a direct connection between (1.1) and (1.3):
It is important to note that for the pair of matrix spectral problems in (1.3), we can impose the asymptotic conditions:
where $I_m$ stands for the identity matrix of size m. From these two matrix eigenfunctions $\psi^\pm$ , we need to pick the entries and build two generalised matrix Jost solutions $T^\pm(x,t,\lambda)$ , which are analytic in the upper and lower half-planes $\mathbb{C}^+$ and $\mathbb{C}^-$ and continuous in the closed upper and lower half-planes $\mathbb{\bar C}^+ $ and $\mathbb{\bar C}^-$ , respectively, to formulate a Riemann–Hilbert problem on the real line:
where two unimodular generalised matrix Jost solutions $G^+$ and $G^-$ and the jump matrix $G_0$ are generated from $T^+$ and $T^-$ . The jump matrix $G_0$ carries all basic scattering data from the scattering matrix $S_g(\lambda )$ of the matrix spectral problems, defined through
Solutions to the associated Riemann–Hilbert problems (1.6) provide the required generalised matrix Jost solutions in recovering the potential of the matrix spectral problems, which solves the corresponding integrable equation. Such solutions $G^+$ and $G^-$ can be presented by applying the Sokhotski–Plemelj formula to the difference of $G^+$ and $G^-$ . A recovery of the potential comes from observing asymptotic behaviours of the generalised matrix Jost solutions $G^\pm$ at infinity of $\lambda $ . This then completes the corresponding inverse scattering transforms. Soliton solutions can be worked out from the reflectionless transforms, which correspond to the Riemann–Hilbert problems with the identity jump matrix $G_0$ .
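The Sokhotski–Plemelj step admits a quick numerical illustration. The following is a minimal sketch, not the construction of this paper: for the toy data $G^+(\lambda)=1+1/(\lambda+\mathrm{i})$ (analytic in $\mathbb{C}^+$) and $G^-(\lambda)=1$ (analytic in $\mathbb{C}^-$), both tending to 1 at infinity, the Cauchy integral of the jump $G^+-G^-$ over the real line reproduces $G^+$ at any point of $\mathbb{C}^+$.

```python
import numpy as np

# Toy jump on the real line: G+(lam) = 1 + 1/(lam + i) is analytic in C+,
# G-(lam) = 1 is analytic in C-, and both tend to 1 as lam -> infinity.
def jump(xi):
    return 1.0 / (xi + 1j)          # G+(xi) - G-(xi) for real xi

def cauchy_integral(lam, a=-2000.0, b=2000.0, n=400001):
    """Composite trapezoid for (1/(2*pi*i)) * int_R jump(xi)/(xi - lam) dxi."""
    xi = np.linspace(a, b, n)
    f = jump(xi) / (xi - lam)
    h = xi[1] - xi[0]
    integral = h * (np.sum(f) - 0.5 * (f[0] + f[-1]))
    return integral / (2j * np.pi)

lam = 2j                             # a point in the upper half-plane
reconstructed = 1.0 + cauchy_integral(lam)
exact = 1.0 + 1.0 / (lam + 1j)       # = 1 - i/3, by the residue theorem
print(abs(reconstructed - exact))    # small truncation/discretisation error
```

The agreement (up to the truncation of the infinite contour) reflects exactly the mechanism used below: the difference $G^+-G^-$ on the real axis, fed through the Plemelj formula, recovers the sectionally analytic function.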
In this paper, we first present a class of non-local reverse-space matrix NLS equations by making a specific group of non-local reductions and then analyse their inverse scattering transforms and soliton solutions, based on associated Riemann–Hilbert problems. One example with two components is
where $\gamma _1$ and $\gamma_2$ are arbitrary non-zero real constants.
The rest of the paper is organised as follows. In Section 2, within the zero curvature formulation, we recall the Ablowitz–Kaup–Newell–Segur (AKNS) integrable hierarchy with matrix potentials, based on an arbitrary-order matrix spectral problem suited for the Riemann–Hilbert theory, and conduct a group of non-local reductions to generate non-local reverse-space matrix NLS equations. In Section 3, we build the inverse scattering transforms by formulating Riemann–Hilbert problems associated with a kind of arbitrary-order matrix spectral problems. In Section 4, we compute soliton solutions to the obtained non-local reverse-space matrix NLS equations from the reflectionless transforms, that is, the special associated Riemann–Hilbert problems on the real axis where an identity jump matrix is taken. The conclusion is given in the last section, together with a few concluding remarks.
2 Non-local reverse-space matrix NLS equations
2.1 Matrix AKNS hierarchy
Let $m\ge 1$ and $n \ge 1 $ be two arbitrary integers, and let $\alpha _1$ and $\alpha_2$ be two distinct arbitrary real constants. We focus on the following matrix spectral problem:
where $\lambda $ is a spectral parameter, and p and q are two matrix potentials:
When $m=1$, that is, p and q are vectors, (2.1) gives a matrix spectral problem with vector potentials [Reference Ma and Zhou37]. When there is only one pair of non-zero potentials $p_{jl},q_{lj}$, (2.1) becomes the standard AKNS spectral problem [Reference Ablowitz, Kaup, Newell and Segur1]. On account of these, we call (2.1) a matrix AKNS spectral problem, and its associated hierarchy, a matrix AKNS integrable hierarchy. Because $ \frac {\partial U}{\partial \lambda }$ possesses a multiple eigenvalue, the matrix spectral problem (2.1) is degenerate.
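To make the degeneracy explicit: assuming the spectral matrix has the usual block form $U=i\lambda \Lambda + \text{(potential terms)}$ with $\Lambda =\textrm{diag}(\alpha _1 I_m,\alpha _2I_n)$ as in Section 2.2, we have

```latex
\frac{\partial U}{\partial \lambda}= \mathrm{i}\Lambda
=\mathrm{i}\,\operatorname{diag}(\alpha_1 I_m,\alpha_2 I_n),
```

whose eigenvalue $\mathrm{i}\alpha_1$ has multiplicity m and $\mathrm{i}\alpha_2$ has multiplicity n. Whenever $m>1$ or $n>1$, a multiple eigenvalue is therefore unavoidable, which is the degeneracy meant above.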
To construct an associated matrix AKNS integrable hierarchy, as usual, we begin with the stationary zero curvature equation:
corresponding to (2.1). We search for a solution W of the form:
where a, b, c and d are $m\times m$ , $m\times n$ , $n\times m$ and $n\times n$ matrices, respectively. Obviously, the stationary zero curvature equation (2.3) equivalently presents
where $\alpha =\alpha _1-\alpha _2$ . We take W as a formal series:
and then, the system (2.5) exactly engenders the following recursion relations:
Let us now fix the initial values:
where $\beta_1$ and $\beta_2$ are arbitrary but different real constants, and take zero constants of integration in (2.7d), which says that we require
In this way, with $a^{[0]}$ and $ d^{[0]}$ given by (2.8), all matrices $W_s ,\ s\ge 1$ , defined recursively, are uniquely determined. For example, a direct computation, based on (2.1), generates that
where $\beta=\beta_1-\beta_2$ . Using (2.7d), we can derive, from (2.7b) and (2.7c), a recursion relation for $b^{[s]}$ and $c^{[s]}$ :
where $\Psi $ is a matrix operator:
The matrix AKNS integrable hierarchy is associated with the following temporal matrix spectral problems:
The compatibility conditions of the two matrix spectral problems (2.1) and (2.13), that is, the zero curvature equations:
yield the so-called matrix AKNS integrable hierarchy:
The first non-linear integrable system in this hierarchy gives us the standard matrix NLS equations:
When $m=1$ and $n=2$, under a special kind of symmetric reduction, the matrix NLS equations (2.16) can be reduced to the Manakov system [Reference Manakov39], for which a decomposition into finite-dimensional integrable Hamiltonian systems was made in [Reference Chen and Zhou8].
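For reference, the Manakov system takes the well-known form (in a standard normalisation; the precise scalings of x and t depend on the constants $\alpha_i$ and $\beta_i$ above):

```latex
\mathrm{i}q_{1,t}+q_{1,xx}+2\sigma\big(|q_1|^{2}+|q_2|^{2}\big)q_1=0,
\qquad
\mathrm{i}q_{2,t}+q_{2,xx}+2\sigma\big(|q_1|^{2}+|q_2|^{2}\big)q_2=0,
\qquad \sigma=\pm 1.
```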
2.2 Non-local reverse-space matrix NLS equations
Let us now take a specific group of non-local reductions for the spectral matrix:
which is equivalent to
Henceforth, $\dagger $ stands for the Hermitian transpose, $ * $ denotes the complex conjugate, $\Sigma _{1,2}$ are two constant invertible Hermitian matrices, and for brevity, we adopt
for a matrix A and a function f.
The matrix spectral problems of the matrix NLS equations (2.16) are given as follows:
The involved Lax pair reads
where $\Lambda =\textrm{diag}(\alpha _1 I_m,\alpha _2I_n),$ $ \Omega =\textrm{diag}(\beta _1 I_m,\beta _2 I_n)$ , and
In the above matrices P and Q, p and q are defined by (2.2), and $a^{[s]},b^{[s]},c^{[s]},d^{[s]}$ , $ 1\le s\le 2$ , are determined in (2.10).
Based on (2.18), we arrive at
Under such a non-local reduction, the matrix function c in (2.5) can be taken as:
It is easy to see that those non-local reduction relations ensure that
where a and d satisfy (2.5). For instance, under (2.23) and (2.24), we can compute that
from which the first relation in (2.25) follows. Furthermore, by using the Laurent expansions for a,b,c and d, we can get
where $s\ge 0$ . It then follows that
where $V^{[2]}$ and Q are defined in (2.21) and (2.22), respectively.
The above analysis guarantees that the non-local reduction (2.18) does not impose any new condition for the compatibility of the spatial and temporal matrix spectral problems in (2.20). Therefore, the standard matrix NLS equations (2.16) are reduced to the following non-local reverse-space matrix NLS equations:
where $\Sigma _1$ and $\Sigma _2$ are two arbitrary invertible Hermitian matrices.
When $m=n=1$ , we can get two well-known scalar examples [Reference Ablowitz and Musslimani3]:
When $m=1$ and $n=2$ , we can obtain a system of non-local reverse-space two-component NLS equations (1.8).
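A quick symbolic sanity check on the scalar reduction, assuming the Ablowitz–Musslimani normalisation $\mathrm{i}q_t(x,t)+q_{xx}(x,t)+2\sigma\, q^2(x,t)\,q^*(-x,t)=0$ (a representative form of the scalar examples, not necessarily the exact scalings used above): any solution of the local focusing NLS that is even in x also solves the non-local equation, since then $q^*(-x,t)=q^*(x,t)$. The standing soliton illustrates this.

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
eta = sp.symbols('eta', positive=True)

# Standing one-soliton of the local focusing NLS; it is even in x
q = eta * sp.sech(eta * x) * sp.exp(sp.I * eta**2 * t)

# Non-local reverse-space NLS (focusing case, sigma = +1):
# i q_t(x,t) + q_xx(x,t) + 2 q(x,t)^2 conj(q(-x,t)) = 0
residual = (sp.I * sp.diff(q, t) + sp.diff(q, x, 2)
            + 2 * q**2 * sp.conjugate(q.subs(x, -x)))

# Rewrite hyperbolic functions in exponentials so the cancellation is exact
simplified = sp.simplify(residual.rewrite(sp.exp))
print(simplified)  # -> 0
```

The evenness of $\textrm{sech}$ is what turns the non-local nonlinearity back into the local one here; genuinely non-even solutions, such as the breather one-soliton discussed in Section 4, are not captured by this shortcut.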
3 Inverse scattering transforms
3.1 Distribution of eigenvalues
We consider the non-local reduction case and so q is defined by (2.23). We are going to analyse the scattering and inverse scattering transforms for the non-local reverse-space matrix NLS equations (2.28) by the Riemann–Hilbert approach (see, e.g., [Reference Doktorov and Leble9, Reference Gerdjikov, Mladenov and Hirshfeld14, Reference Novikov, Manakov, Pitaevskii and Zakharov42]). The results will prepare the essential foundation for soliton solutions in the following section.
Assume that all the potentials vanish sufficiently rapidly as $x\to \pm \infty$ or $t\to \pm \infty$. For the matrix spectral problems in (2.20), we can impose the asymptotic behaviour $\phi \sim \textrm{e}^{i\lambda \Lambda x+ i \lambda ^2 \Omega t}$ as $x,t\to \pm \infty$. Therefore, if we take the variable transformation:
then we can have the canonical asymptotic conditions: $\psi \to I_{m+n}, \ \textrm{when}\ x,t \to \infty\ \textrm{or}\ -\infty.$ The equivalent pair of matrix spectral problems to (2.20) reads
Applying a generalised Liouville’s formula [Reference Ma, Yong, Qin, Gu and Zhou34], we can obtain
because $(\det \psi )_x=0$ due to $\textrm{tr}\ \check P=\textrm{tr}\ \check Q=0$ .
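The trace-free argument is Jacobi's formula, $(\det\psi)_x=\textrm{tr}(\psi^{-1}\psi_x)\,\det\psi$: a trace-less coefficient matrix forces $(\det\psi)_x=0$. A minimal numerical sketch with a generic trace-less $2\times 2$ system (hypothetical coefficients, not the specific $\check P$ above):

```python
import numpy as np

def rk4_step(f, x, psi, h):
    """One classical Runge-Kutta step for the matrix ODE psi' = f(x) @ psi."""
    k1 = f(x) @ psi
    k2 = f(x + h/2) @ (psi + h/2 * k1)
    k3 = f(x + h/2) @ (psi + h/2 * k2)
    k4 = f(x + h) @ (psi + h * k3)
    return psi + (h/6) * (k1 + 2*k2 + 2*k3 + k4)

# A smooth trace-less 2x2 coefficient matrix (hypothetical "potentials")
def M(x):
    return np.array([[0.0, np.exp(-x**2)],
                     [np.cos(x) / (1 + x**2), 0.0]], dtype=complex)

psi = np.eye(2, dtype=complex)   # psi(-5) = I, so det psi = 1 initially
x, h = -5.0, 1e-3
while x < 5.0:
    psi = rk4_step(M, x, psi, h)
    x += h

print(abs(np.linalg.det(psi) - 1.0))  # ~ 0: the determinant is conserved
```

Even though the individual entries of $\psi$ change substantially across the interval, $\det\psi$ stays pinned at its initial value, exactly as the generalised Liouville formula predicts.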
Recall that the adjoint equation of the x-part of (2.20) and the adjoint equation of (3.1) are given by:
and
respectively. Obviously, there exist the links $\tilde \phi =\phi ^{-1}$ and $\tilde \psi=\psi ^{-1}$. Neither the pairs of adjoint matrix spectral problems nor the equivalent adjoint matrix spectral problems create any condition beyond the non-local reverse-space matrix NLS equations (2.28).
Let $\psi(\lambda ) $ be a matrix eigenfunction of the spatial spectral problem (3.1) associated with an eigenvalue $\lambda$ . It is easy to see that $C\psi^{-1}(x,t, \lambda )$ is a matrix adjoint eigenfunction associated with the same eigenvalue $\lambda$ . Under the non-local reduction in (2.18), we have
Thus, the matrix
presents another matrix adjoint eigenfunction associated with the same original eigenvalue $ \lambda $ . That is to say that $ \psi ^\dagger (-x,t,- \lambda ^*) C $ solves the adjoint spectral problem (3.5).
Finally, we observe the asymptotic conditions for the matrix eigenfunction $\psi$ , and see that by the uniqueness of solutions, we have
when $\psi\to I_{m+n}$ as $x$ or $t\to \infty$ or $-\infty$. This tells us that if $\lambda $ is an eigenvalue of (3.1) (or (3.5)), then $-\lambda ^*$ is another eigenvalue of (3.1) (or (3.5)), and the corresponding eigenfunction $\psi$ possesses the property (3.7).
3.2 Riemann–Hilbert problems
Let us now start to formulate a class of associated Riemann–Hilbert problems with the variable x. In order to clearly state the problems, we also make the assumptions:
In the scattering problem, we first introduce the two matrix eigenfunctions $\psi^\pm (x,\lambda )$ of (3.1) with the asymptotic conditions:
respectively. It then follows from (3.3) that $\det \psi ^\pm =1$ for all $x\in \mathbb{R}$ . Because
are both matrix eigenfunctions of (2.20), they must be linearly dependent, and as a result, one has
where $S(\lambda )$ is the corresponding scattering matrix. Note that $\det S(\lambda )=1$ , thanks to $\det \psi ^\pm=1$ .
Through the method of variation in parameters, we can transform the x-part of (2.20) into the following Volterra integral equations for $\psi^{\pm}$ [Reference Novikov, Manakov, Pitaevskii and Zakharov42]:
where the asymptotic conditions (3.9) have been imposed. The theory of Volterra integral equations then tells us, via the Neumann series [Reference Hildebrand18], that the eigenfunctions $\psi ^\pm$ exist and allow analytic continuations off the real axis $\lambda\in \mathbb{R}$, provided that the integrals on their right-hand sides converge (see, e.g., [Reference Ablowitz, Prinari and Trubatch6]). From the diagonal form of $\Lambda$ and the first assumption in (3.8), we see that the integral equation for the first m columns of $\psi ^-$ contains only the exponential factor $\textrm{e}^{-i\alpha \lambda (x-y)}$, which decays for $\lambda $ in the upper half-plane $\mathbb{C}^+$ because $y< x$ in the integral, and that the integral equation for the last n columns of $\psi^+$ contains only the exponential factor $\textrm{e}^{i \alpha \lambda (x-y)}$, which also decays for $\lambda \in \mathbb{C}^+$ because $y> x$ in the integral. Therefore, these $m+n$ columns are analytic in the upper half-plane $\mathbb{C}^+$ and continuous in the closed upper half-plane $\mathbb{\bar C}^+$. In a similar manner, the last n columns of $\psi ^-$ and the first m columns of $\psi^+$ are analytic in the lower half-plane $ \mathbb{C}^-$ and continuous in the closed lower half-plane $\mathbb{\bar C}^-$.
In what follows, we give a detailed proof for the above statements. Let us express
that is, $\psi^{\pm}_j$ denotes the jth column of $\psi^{\pm}$ ( $1\le j\le m+n$ ). We would like to prove that $\psi^{-}_j$, $1\le j\le m$, and $\psi^{+}_j$, $m+1\le j\le m+n$, are analytic at $\lambda \in \mathbb{C}^+$ and continuous at $\lambda \in \mathbb{\bar C}^+$; and that $\psi^{+}_j$, $1\le j\le m$, and $\psi^{-}_j$, $m+1\le j\le m+n$, are analytic at $\lambda \in \mathbb{C}^-$ and continuous at $\lambda \in \mathbb{\bar C}^-$. We only need to prove the result for $\psi^{-}_j$, $1\le j\le m$; the proofs for the other eigenfunctions are analogous.
It is easy to obtain from the Volterra integral equation (3.12) that
and
where the $\textrm{e}_i$ are standard basis vectors of $\mathbb{R}^{m+n}$ and the matrices $R_1$ and $R_2$ are defined by:
Let us prove that for each $1\le j\le m$ , the solution to (3.15) is determined by the Neumann series:
where
This will be true if we can prove that the Neumann series converges uniformly for $x\in \mathbb{R}$ and $\lambda \in \mathbb{\bar C}^+$. By mathematical induction, we have
for $x\in \mathbb{R}$ and $ \lambda \in \mathbb{\bar C}^+$, where $|\cdot |$ denotes the Euclidean norm for vectors and $\|\cdot \|$ stands for the Frobenius norm for square matrices. By the Weierstrass M-test, this estimate guarantees that
uniformly converges for $ \lambda \in \mathbb{\bar C}^+$ and $x\in \mathbb{R}$, and all $\psi^-_j(\lambda ,x)$, $1\le j\le m$, are continuous with respect to $\lambda$ in $ \mathbb{\bar C}^+$, since so are all $\psi^-_{j,k}(\lambda ,x)$, $1\le j\le m$, $k\ge 0$.
Let us now consider the differentiability of $ \psi^-_j(\lambda ,x)$, $1\le j\le m$, with respect to $\lambda $ in $ \mathbb{C}^+$ (the differentiability with respect to x in $\mathbb{R}$ can be proved similarly). Fix an integer $1\le j\le m$ and a number $\mu $ in $ \mathbb{C}^+$. Choose a disc $B_r(\mu )=\{\lambda \in \mathbb{C}\,|\, |\lambda -\mu |\le r \} $ with radius $r> 0$ such that $B_r(\mu )\subseteq \mathbb{C}^+$; then there exists a constant $C(r)>0$ such that $|\alpha x \textrm{e}^{-i \alpha \lambda x} | \le C(r)$ for $\lambda \in B_r(\mu )$ and $x\ge 0$. We define the following Neumann series:
where $\psi^-_{j,\lambda,0}=0 $ and
with the $\psi^-_{j,k}$ being given by (3.19) and $R_{1,\lambda }$ being defined by:
We can readily verify by mathematical induction that
for $x\in \mathbb{R}$ and $ \lambda \in B_r(\mu )$. Therefore, by the Weierstrass M-test, the Neumann series defined by (3.22) converges uniformly for $x\in \mathbb{R}$ and $ \lambda \in B_r(\mu)$; and by the term-by-term differentiability theorem, it converges to the derivative of $\psi^-_j$ with respect to $\lambda$, since $\psi^-_{j,\lambda,k}=\frac {\partial }{\partial \lambda }\psi^-_{j,k}$, $k\ge 0$. Therefore, $\psi^-_j$ is analytic at the arbitrarily fixed point $\mu \in \mathbb{C}^+$, and it follows that all $\psi^-_j$, $1\le j\le m,$ are analytic with respect to $\lambda$ in $ \mathbb{C}^+$. This completes the proof.
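The factorial-type bound behind the Weierstrass M-test can be watched numerically in a toy scalar Volterra equation. As a sketch under the hypothetical choice of a constant kernel c: the equation $\psi(x)=1+\int_0^x c\,\psi(y)\,\mathrm{d}y$ has the exact solution $\mathrm{e}^{cx}$, and its $k$th Neumann term is $(cx)^k/k!$, so the terms decay factorially and the iteration converges uniformly on bounded intervals.

```python
import numpy as np

c = 0.8
xs = np.linspace(0.0, 3.0, 3001)     # grid on [0, 3]
h = xs[1] - xs[0]

def cum_trapezoid(f):
    """Cumulative trapezoid integral of the samples f from 0 to each x."""
    out = np.zeros_like(f)
    out[1:] = np.cumsum(0.5 * h * (f[:-1] + f[1:]))
    return out

# Neumann iteration: the (k+1)th term is int_0^x c * (kth term) dy
term = np.ones_like(xs)              # k = 0 term
psi = term.copy()
sup_norms = []
for k in range(1, 40):
    term = cum_trapezoid(c * term)
    psi += term
    sup_norms.append(np.max(np.abs(term)))

err = np.max(np.abs(psi - np.exp(c * xs)))
print(err)            # the partial sums converge to the exact solution exp(c x)
print(sup_norms[:4])  # sup-norms of the terms, decaying like (3c)^k / k!
```

The same mechanism, with $c$ replaced by the integrable bound on $\|R_1\|$, is what makes the matrix Neumann series above converge uniformly in $x$ and $\lambda$.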
Based on the above analysis, we can then form the generalised matrix Jost solution $T^+$ as follows:
which is analytic with respect to $\lambda$ in $\mathbb{C}^+$ and continuous with respect to $\lambda$ in $\mathbb{\bar C}^+$ . The generalised matrix Jost solution:
is analytic with respect to $\lambda $ in $\mathbb{C}^-$ and continuous with respect to $\lambda$ in $\mathbb{\bar C}^-$ . In the above definition, we have used
To construct the other generalised matrix Jost solution $T^-$, we adopt the analytic counterpart of $T^+$ in the lower half-plane $\mathbb{C}^-$, which can be generated from the adjoint counterparts of the matrix spectral problems. Note that the inverse matrices $\tilde \phi ^{\pm}=(\phi ^\pm )^{-1}$ and $\tilde \psi ^{\pm}=(\psi ^\pm )^{-1}$ solve those two adjoint equations, respectively. Then, expressing $\tilde \psi^{\pm}$ as:
that is, $\tilde \psi ^{\pm,j}$ denotes the jth row of $\tilde \psi ^{\pm}$ ( $1\le j\le m+n$ ), we can verify by similar arguments that we can form the generalised matrix Jost solution $T^-$ as the adjoint matrix solution of (3.5), that is,
which is analytic with respect to $ \lambda$ in $\mathbb{C}^-$ and continuous with respect to $\lambda$ in $\mathbb{\bar C}^-$ , and the other generalised matrix Jost solution of (3.5):
is analytic with respect to $ \lambda$ in $\mathbb{C}^+$ and continuous with respect to $\lambda$ in $\mathbb{\bar C}^+$ .
Now we have finished the construction of the two generalised matrix Jost solutions, $T^+$ and $T^-$ . Directly from $\det \psi ^\pm =1$ and using the scattering relation (3.11) between $\psi ^+$ and $\psi ^-$ , we arrive at
and
where we split $S(\lambda )$ and $S^{-1}(\lambda )$ as follows:
$S_{11},\hat S_{11}$ being $m\times m$ matrices, $S_{12},\hat S_{12}$ being $m\times n$ matrices, $S_{21},\hat S_{21}$ being $n\times m$ matrices and $S_{22},\hat S_{22}$ being $n\times n$ matrices. Based on the uniform convergence of the previous Neumann series, we know that $S_{11}(\lambda )$ and $\hat S_{11}(\lambda )$ are analytic in $ \mathbb{C}^+$ and $ \mathbb{C}^-$ , respectively.
In this way, we can introduce the following two unimodular generalised matrix Jost solutions:
Those two generalised matrix Jost solutions establish the required matrix Riemann–Hilbert problems on the real line for the non-local reverse-space matrix NLS equations (2.28):
where the jump matrix $G_0$ is
based on (3.11). In the jump matrix $G_0$ , the matrix $\tilde S(\lambda )$ has the factorisation:
which can be shown to be
Following the Volterra integral equations (3.12) and (3.13), we can obtain the canonical normalisation conditions:
for the presented Riemann–Hilbert problems. From the property (3.7), we can also observe that
and thus, the jump matrix $G_0$ possesses the following involution property:
3.3 Evolution of the scattering data
To complete the direct scattering transforms, let us differentiate (3.11) with respect to the time t and use the temporal matrix spectral problems:
It then follows that the scattering matrix S satisfies the following evolution law:
This gives the time evolution of the time-dependent scattering coefficients:
and all other scattering coefficients are independent of the time variable t.
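Concretely, since the potentials vanish at spatial infinity, the evolution law above is expected to take the commutator form $S_t=\mathrm{i}\lambda^2[\Omega, S]$, with $\Omega =\textrm{diag}(\beta _1 I_m,\beta _2 I_n)$ and $\beta=\beta_1-\beta_2$ as in Section 2 (a standard computation; the signs of the exponents depend on the orientation chosen in (3.11)):

```latex
S_{12}(t,\lambda)=S_{12}(0,\lambda)\,\mathrm{e}^{\mathrm{i}\beta\lambda^{2}t},
\qquad
S_{21}(t,\lambda)=S_{21}(0,\lambda)\,\mathrm{e}^{-\mathrm{i}\beta\lambda^{2}t},
\qquad
\partial_t S_{11}=\partial_t S_{22}=0.
```

In particular, the diagonal blocks, and hence the discrete spectral data built from $S_{11}$ and $\hat S_{11}$, stay constant in t.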
3.4 Gelfand–Levitan–Marchenko-type equations
To obtain Gelfand–Levitan–Marchenko-type integral equations to determine the generalised matrix Jost solutions, let us transform the associated Riemann–Hilbert problem (3.36) into
where the jump matrix $G_0$ is defined by (3.37) and (3.38).
Let $G(\lambda )=G ^\pm (\lambda )$ if $\lambda \in \mathbb{C}^\pm$ . Assume that G has simple poles off $\mathbb{R}$ : $\{ \mu _j\}_{j=1}^R$ , where R is an arbitrary integer. Define
where $ G_j$ is the residue of G at $\lambda =\mu _j$ , that is,
This tells that we have
By applying the Sokhotski–Plemelj formula [Reference Gakhov12], we get the solution of (3.49):
Further taking the limit as $\lambda \to \mu _l $ yields
where
and consequently, we see that the required Gelfand–Levitan–Marchenko-type integral equations are as follows:
All these equations are used to determine solutions to the associated Riemann–Hilbert problems and thus the generalised matrix Jost solutions. However, little is known yet about the existence and uniqueness of such solutions. In the case of soliton solutions, a formulation in which eigenvalues may equal adjoint eigenvalues will be presented for non-local integrable equations in the next section.
3.5 Recovery of the potential
To recover the potential matrix P from the generalised matrix Jost solutions, as usual, we make an asymptotic expansion:
Then, plugging this asymptotic expansion into the matrix spectral problem (3.1) and comparing $\textrm{O}(1)$ terms generates
This leads exactly to the potential matrix:
where we have similarly partitioned the matrix $G^+_1$ into four blocks as follows:
Therefore, the solutions to the standard matrix NLS equations (2.16) read
When the non-local reduction condition (2.18) is satisfied, the reduced matrix potential p solves the non-local reverse-space matrix NLS equations (2.28).
To conclude, this completes the inverse scattering procedure for computing solutions to the non-local reverse-space matrix NLS equations (2.28), from the scattering matrix $S(\lambda )$ , through the jump matrix $G_0(\lambda )$ and the solution $\{G^+(\lambda ), G^-(\lambda )\}$ of the associated Riemann–Hilbert problems, to the potential matrix P.
4 Soliton solutions
4.1 Non-reduced local case
Let $N\ge 1 $ be another arbitrary integer. Assume that $\det S_{11}(\lambda ) $ has N zeros $\{\lambda _ k\in \mathbb{C} ,\ 1\le k\le N\}$ , and $\det \hat S_{11}(\lambda )$ has N zeros $\{\hat \lambda _ k\in \mathbb{C} ,\ 1\le k\le N\}$ .
In order to present soliton solutions explicitly, we also assume that all these zeros, $\lambda _k$ and $ \hat \lambda _k ,\ 1\le k\le N,$ are geometrically simple. Then, each of $\textrm{ker} \,T^+(\lambda _k)$ , $1\le k\le N$ , contains only a single basis column vector, denoted by $v_k$ , $1\le k\le N$ ; and each of $\textrm{ker}\, T^-(\hat \lambda _k)$ , $1\le k\le N$ , a single basis row vector, denoted by $\hat v _k$ , $1\le k\le N$ :
Soliton solutions correspond to the situation where $G_0=I_{m+n}$ is taken in each Riemann–Hilbert problem (3.36). This can be achieved if we assume that $S_{21}=\hat S_{12}=0,$ which means that the reflection coefficients are taken as zero in the scattering problem.
This kind of special Riemann–Hilbert problems with the canonical normalisation conditions in (3.40) and the zero structures given in (4.1) can be solved precisely, in the case of local integrable equations [Reference Kawata21, Reference Novikov, Manakov, Pitaevskii and Zakharov42], and consequently, we can exactly work out the potential matrix P. However, in the case of non-local integrable equations, we often do not have
Without this condition, the solutions to the special Riemann–Hilbert problem with the identity jump matrix can be presented as follows (see, e.g., [Reference Ma32]):
where $M=(m_{kl})_{N\times N}$ is a square matrix with its entries:
and we need an orthogonal condition:
to guarantee that $G^+(\lambda )$ and $G^-(\lambda )$ solve
Note that the zeros $\lambda _k$ and $\hat \lambda _k$ are constants, that is, space- and time-independent, and so we can easily determine the spatial and temporal evolutions for the vectors, $v_k(x,t)$ and $\hat v_k(x,t)$ , $1\le k\le N$ , in the kernels. For instance, let us compute the x-derivative of both sides of the first set of equations in (4.1). Applying (3.1) first and then again the first set of equations in (4.1), we arrive at
This implies that for each $1\le k\le N$ , $\frac {dv_k}{dx}- i\lambda _k \Lambda v_k$ is in the kernel of $P^+(x,\lambda _k)$ , and thus, a constant multiple of $v_k$ , since $\lambda _k$ is geometrically simple. Without loss of generality, we can simply take
The time dependence of $v_k$ :
can be achieved similarly through an application of the t-part of the matrix spectral problem, (3.2). In consequence of these differential equations, we obtain
and completely similarly, we can have
where $w_{k} $ and $ \hat w_{ k}$ , $1\le k\le N$ , are arbitrary constant column and row vectors, respectively, but need to satisfy an orthogonal condition:
which is a consequence of (4.5).
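Solving these constant-coefficient linear equations in x and t explicitly, the kernel vectors take the familiar exponential form (a standard presentation, consistent with the zero-potential eigenfunctions described in Subsection 4.2; $w_k$ and $\hat w_k$ are the constant vectors above):

```latex
v_k(x,t)=\mathrm{e}^{\mathrm{i}\lambda_k\Lambda x+\mathrm{i}\lambda_k^{2}\Omega t}\,w_k,
\qquad
\hat v_k(x,t)=\hat w_k\,\mathrm{e}^{-\mathrm{i}\hat\lambda_k\Lambda x-\mathrm{i}\hat\lambda_k^{2}\Omega t},
\qquad 1\le k\le N.
```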
Finally, from the solutions in (4.3), we get
and thus, the presentations in (3.57) yield the following N-soliton solution to the standard matrix NLS equations (2.16):
Here for each $1\le k\le N$ , we split $v_k=((v_{k,1})^T,(v_{k,2})^T)^T$ and $ \hat v_k=(\hat v_{k,1},\hat v_{k,2} )$ , where $v_{k,1}$ and $\hat v_{k,1}$ are m-dimensional column and row vectors, respectively, and $v_{k,2}$ and $\hat v_{k,2}$ are n-dimensional column and row vectors, respectively.
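For orientation, in comparable reflectionless Riemann–Hilbert constructions (see, e.g., [Reference Ma32]), the solutions referred to in (4.3) and the matrix $M$ in (4.4) take the schematic form

```latex
G^{+}(\lambda)=I_{m+n}-\sum_{k,l=1}^{N}\frac{v_k\,(M^{-1})_{kl}\,\hat v_l}{\lambda-\hat\lambda_l},
\qquad
\big(G^{-}\big)^{-1}(\lambda)=I_{m+n}+\sum_{k,l=1}^{N}\frac{v_k\,(M^{-1})_{kl}\,\hat v_l}{\lambda-\lambda_k},
\qquad
m_{kl}=\frac{\hat v_k\,v_l}{\lambda_l-\hat\lambda_k},
```

so that the coefficient $G^{+}_1$ in the large-$\lambda$ expansion of $G^{+}$ is $-\sum_{k,l=1}^{N}v_k (M^{-1})_{kl}\hat v_l$. Signs and index placements vary between references, and the exact normalisation should be read off from (4.3) and (4.4).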
4.2 Reduced non-local case
To compute N-soliton solutions for the non-local reverse-space matrix NLS equations (2.28), we need to check if $G^+_1$ defined by (4.13) satisfies an involution property:
This equivalently requires that the potential matrix P determined by (3.55) satisfies the non-local reduction condition (2.18). Thus, the N-soliton solution to the standard matrix NLS equations (2.16) is reduced to the N-soliton solution:
for the non-local reverse-space matrix NLS equations (2.28), where we split $v_k=((v_{k,1})^T,(v_{k,2})^T)^T $ and $\hat v_k=(\hat v_{k,1},\hat v_{k,2})$ , $1\le k\le N$ , as before.
Let us now show how to realise the involution property (4.15). We first take N distinct zeros of $\det T^+(\lambda )$ (or eigenvalues of the spectral problems under the zero potential): $ \lambda _k \in \mathbb{C},\ 1\le k\le N, $ and define
which are zeros of $\det T^-(\lambda )$ . We recall that the $\textrm{ker}\,T^+(\lambda _k)$ , $1\le k\le N$ , are spanned by:
respectively, where $w_{k},\ 1\le k\le N$ , are arbitrary column vectors. These column vectors in (4.18) are eigenfunctions of the spectral problems under the zero potential associated with $\lambda_k,\ 1\le k\le N$ . Furthermore, following the previous analysis in Subsection 3.1, the $\textrm{ker} \,T^-(\lambda _k)$ , $1\le k\le N$ , are spanned by:
respectively. These row vectors are eigenfunctions of the adjoint spectral problems under the zero potential associated with $\hat \lambda_k,\ 1\le k \le N$ . To satisfy the orthogonal property (4.12), we require the following orthogonal condition:
on the constant columns $\{ w_k \, |\, 1\le k\le N \}$. Interestingly, since $\hat \lambda _k= -\lambda _k^*$, the situation of $\lambda _k=\hat \lambda _k$ occurs exactly when $\lambda _k\in i\mathbb{R}$.
Now, we can directly see that if the solutions to the specific Riemann–Hilbert problems, determined by (4.3) and (4.4), satisfy the property (3.41), then the corresponding matrix $G_1^+$ possesses the involution property (4.15) generated from each non-local reduction in (2.17). Accordingly, the formula (4.16), together with (4.3), (4.4), (4.18) and (4.19), presents the required N-soliton solutions to the non-local reverse-space matrix NLS equations (2.28).
When $m=n=N=1$ , we choose $\lambda _1=i \eta_1 ,\ \hat \lambda_1 = -i \eta _1 ,\ \eta _1 \in \mathbb{R} $ and denote $w_1=(w_{1,1},w_{1,2})^T$ . Then, we can obtain the following one-soliton solution to the non-local reverse-space scalar NLS equations in (2.29):
where $\varepsilon=\pm 1$, $\eta _1$ is an arbitrary real number, and $w_{1,1}$ and $w_{1,2}$ are arbitrary complex numbers satisfying $\sigma |w_{1,1}|^2+|w_{1,2}|^2=0$, which comes from the involution property (4.15). This condition on $w_1$ implies that we need to take $\sigma=-1$. The solution has a singularity at $x=-\frac {\ln (\varepsilon \sigma)}{2\eta_1}$ when $\varepsilon \sigma >0$, and the case of $\varepsilon =1$ and $\sigma=-1$ presents the breather one-soliton in [Reference Ablowitz and Musslimani4].
When $m=1, \ n=2$ and $ N=1$, we take $C=\textrm{diag}(1,\gamma_1,\gamma_2)$, where $\gamma_1$ and $\gamma_2$ are arbitrary non-zero real numbers. Then the non-local reverse-space matrix NLS equations (2.28) become
According to our formulation of solutions above, this system has the following one-soliton solution:
provided that $w_{1}=(w_{1,1},w_{1,2},w_{1,3})^T$ satisfies the orthogonal conditions:
Note that the product $\gamma_1 \gamma_2$ could be either positive or negative.
5 Concluding remarks
The paper aims to present a class of non-local reverse-space integrable matrix non-linear Schrödinger (NLS) equations and their inverse scattering transforms. The main analysis is based on Riemann–Hilbert problems associated with a kind of arbitrary-order matrix spectral problems with matrix potentials. Through the Sokhotski–Plemelj formula, the associated Riemann–Hilbert problems were transformed into Gelfand–Levitan–Marchenko-type integral equations, and the corresponding reflectionless problems were solved to generate soliton solutions for the non-local reverse-space matrix NLS equations.
The Riemann–Hilbert technique, which is very effective in generating soliton solutions (see also, e.g., [Reference Liu and Guo22, Reference Yang48]), has been recently generalised to solve various initial-boundary value problems of continuous integrable equations on the half-line and the finite interval [Reference Fokas and Lenells11, Reference Lenells and Fokas23]. There are many other approaches to soliton solutions that work well and are easy to use, among which are the Hirota direct method [Reference Hirota19], the generalised bilinear technique [Reference Ma25], the Wronskian technique [Reference Freeman and Nimmo17, Reference Ma and You33] and the Darboux transformation [Reference Ma and Zhang35, Reference Matveev and Salle40]. It would be of significant importance to search for clear connections between different approaches in order to explore dynamical characteristics of soliton phenomena.
We also emphasise that it would be particularly interesting to construct various kinds of solutions other than solitons to integrable equations, such as positon and complexiton solutions [Reference Ma24, Reference Matveev41], lump and rogue wave solutions [Reference Ma and Zhou38]–[Reference Ma31], Rossby wave solutions [Reference Zhang and Yang50], solitonless solutions [Reference Ma28, Reference Ma29, Reference Rybalko and Shepelsky43] and algebro-geometric solutions [Reference Belokolos, Bobenko, Enol’skii, Its and Matveev7, Reference Ma26], from a perspective of the Riemann–Hilbert technique. Another interesting topic for further study is to establish a general formulation of Riemann–Hilbert problems for solving generalised integrable equations, for example, integrable couplings, super integrable equations and fractional analogous equations.
Acknowledgements
The work was supported in part by NSFC under the grants 11975145 and 11972291, the Fundamental Research Funds of the Central Universities (grant no. 2020MS043), and the Natural Science Foundation for Colleges and Universities in Jiangsu Province (grant no. 17KJB110020). The authors are also grateful to the reviewers for their constructive comments and suggestions, which helped improve the quality of their manuscript.
Conflict of interest
None.