1 Introduction
Liquid crystals are a phase of matter intermediate between isotropic fluids and crystalline solids, combining typical properties of liquids (the ability to flow) with those of solids (anisotropy). These multi-faceted properties make liquid crystal materials widely used in industry. Microscopically, to generate a liquid crystal phase, the molecules have to be anisotropic, for example, rod-like or disk-like, and therefore they tend to organize themselves [Reference De Gennes and Prost11, Reference Vertogen and De Jeu40]. According to the structure of the constituent molecules or groups of molecules, liquid crystals take many forms, such as nematic, smectic, and cholesteric. A liquid crystal is nematic if only long-range orientational order is present (rod-like molecules tend to align in a common direction), whereas it is smectic if the order is partially positional (molecules are usually arranged in layers). For cholesteric liquid crystals, the molecules are aligned parallel to a certain direction, but this direction changes in space and traces out a helix. Nematic liquid crystals are among the simplest and most important liquid crystals and have been extensively studied theoretically, numerically, and experimentally [Reference Chanderasekhar8, Reference De Gennes and Prost11, Reference Ericksen13, Reference Leslie30, Reference Vertogen and De Jeu40]. There are various continuum theories for nematic liquid crystals [Reference Lin and Liu35]; we adopt in this paper the Ericksen–Leslie theory, which is adequate for the treatment of both static and dynamic phenomena [Reference Lin34]. Macroscopically, the motion of nematic liquid crystals can be characterized by the coupling of the velocity field
$\textbf {u}$
for the flow and director field
$\textbf {n}$
for the alignment of the rod-like feature. The governing system is the so-called Ericksen–Leslie equations [Reference Ericksen13, Reference Leslie29, Reference Leslie30, Reference Lin34], which read as follows:

where the notation
$\dot {f}$
denotes the material derivative, that is,
$\dot {f}=f_t+\textbf {u}\cdot \nabla f$
. In system (1.1),
$\rho $
is the density, P is the pressure,
$\nu $
is the inertial coefficient of
$\textbf {n}$
, and
$\lambda $
is the Lagrangian multiplier of the constraint
$|\textbf {n}|=1$
. Furthermore, the function
$W=W(\textbf {n},\nabla \textbf {n})$
is the Oseen–Franck potential energy density

with elastic constants
$k_i\ (i=1,2,3)$
, and
$\textbf {g}$
and
$\sigma $
are, respectively, the kinematic transport tensor and the viscous stress tensor given by

where N represents the rigid rotation part of the rate of change of the director relative to the fluid vorticity, and D represents the rate-of-strain tensor; they are expressed as

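For the reader's convenience, we also recall the standard kinematic quantities behind these expressions (a reference sketch only; the sign and index conventions may differ slightly from those adopted in (1.1)), with $\Omega $ the vorticity tensor:
$$D=\tfrac {1}{2}\big (\nabla \textbf {u}+(\nabla \textbf {u})^{T}\big ),\qquad \Omega =\tfrac {1}{2}\big (\nabla \textbf {u}-(\nabla \textbf {u})^{T}\big ),\qquad N=\dot {\textbf {n}}-\Omega \,\textbf {n},\qquad \textbf {g}=\gamma _1N+\gamma _2D\,\textbf {n}.$$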
The material coefficients
$\gamma _1, \gamma _2$
in the expression of
$\textbf {g}$
and the Leslie coefficients
$\alpha _i\ (i=1,\ldots ,6)$
in the expression of
$\sigma $
satisfy certain specific relations and inequalities. For one-dimensional Poiseuille flows of the following special form [Reference Calderer and Liu7]

Chen, Huang, and Liu [Reference Chen, Huang and Liu9] investigated the full Ericksen–Leslie equations (1.1) and derived the following one-dimensional hyperbolic–parabolic coupled system

where a is a constant, and

For the derivation of (1.3), we refer the reader to the work of Chen, Huang, and Liu [Reference Chen, Huang and Liu9]. By choosing appropriate parameters such that
$h(\theta )=g(\theta )\equiv 1$
and
$\nu =\rho =1, a=0, \gamma _1=2$
, system (1.3) reduces to

We now describe the variables in (1.4) in detail.
$t\geq 0$
is the time variable,
$x\in \mathbb {R}$
is the spatial variable, the unknown function
$\theta (t,x)\in \mathbb {R}$
represents the angle between the director field
$\textbf {n}$
and the z-axis, and the unknown function
$u(t,x)\in \mathbb {R}$
represents the component of flow velocity
$\textbf {u}$
along the z-axis. See (1.2) for the details. The special physical significance of these unknown variables comes from the plane shear and Poiseuille flow geometries (assuming the director stays in the x–z plane). Moreover, the first, hyperbolic, equation in (1.4) (and also (1.3)) describes the “crystal” property of nematic liquid crystals, since it characterizes the propagation of orientation waves in the director field, whereas the second, parabolic, equation in (1.4) (and also (1.3)) describes the “liquid” property of the liquid crystals.
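In these geometries, the ansatz (1.2) typically takes the flow along the z-axis and confines the director to the x–z plane; a sketch consistent with the description of θ and u just given (the exact normalization in (1.2) may differ) is
$$\textbf {u}(t,x,y,z)=\big (0,\,0,\,u(t,x)\big ),\qquad \textbf {n}(t,x,y,z)=\big (\sin \theta (t,x),\,0,\,\cos \theta (t,x)\big ).$$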
When the fluid effect in system (1.3) is neglected, one obtains the well-known variational wave equation

which was introduced by Hunter and Saxton [Reference Hunter and Saxton22] and has been widely studied under the positivity condition on the wave speed c. See the works on singularity formation of smooth solutions [Reference Glassey, Hunter and Zheng16], on the existence of dissipative weak solutions [Reference Zhang and Zheng42, Reference Zhang and Zheng43], and on the well-posedness of conservative weak solutions [Reference Bressan and Chen2–Reference Bressan and Zheng5]. One may also consult the works on related systems of variational wave equations in [Reference Cai, Chen and Du6, Reference Hu18, Reference Zhang and Zheng44, Reference Zhang and Zheng45]. On the other hand, the elastic coefficients
$k_i(i=1,2,3)$
may be negative in some cases physically, for example, the bend elastic constant
$k_3$
is negative for the twist-bend nematic liquid crystals (see, e.g., [Reference Adlem, Čopiv and Luckhurst1, Reference Dozov12, Reference Panov, Nagaraj and Vij36]), so that the wave speed
$c(\cdot )$
may degenerate at some time. Moreover, the motion of long waves on a neutral dipole chain in the continuum limit can also be described by a mixed-type equation similar to (1.5) [Reference Zorski and Infeld52]. In [Reference Saxton, Hale and Wiener38], Saxton studied the boundary blowup properties of smooth solutions to the degenerate equation (1.5) with
$\gamma _1=0$
by allowing either
$k_1$
or
$k_3$
to be zero. In addition, equation (1.5) with
$c(\theta )=\theta $
and
$\gamma _1=0$
corresponds to the one-dimensional second sound equation, which describes temperature waves in superfluids (see [Reference Kato and Sugiyama26, Reference Kato and Sugiyama27]). Under the initial assumption
$\theta (0,x)\geq \delta>0$
, Kato and Sugiyama investigated the local well-posedness and blowup of solutions to the one-dimensional second sound equation. In [Reference Hu and Wang21], Hu and Wang considered the degenerate variational wave equation (1.5) and discussed the local existence of smooth solutions near the degenerate line.
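For orientation, the variational wave equation of Hunter–Saxton type takes the typical form (a sketch only; the precise inertia, damping, and elastic constants in (1.5) are inherited from (1.3))
$$\nu \theta _{tt}+\gamma _1\theta _t=c(\theta )\big (c(\theta )\theta _x\big )_x,\qquad c^2(\theta )=k_1\cos ^2\theta +k_3\sin ^2\theta ,$$
which indicates, in particular, how a vanishing or negative elastic constant allows the wave speed to degenerate.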
For the hyperbolic–parabolic coupled system (1.4) with a strictly positive wave speed function c, the local existence and uniqueness of smooth solutions to its Cauchy problem can be obtained from the classical results on general hyperbolic–parabolic coupled systems by Li, Yu, and Shen [Reference Li, Yu and Shen31]. Moreover, the local classical solutions to the initial-boundary value problems for general hyperbolic–parabolic coupled systems were presented in [Reference Li, Yu and Shen32, Reference Li, Yu and Shen33, Reference Zheng47]. The global existence of smooth solutions for the initial value problems and initial-boundary value problems to a class of hyperbolic–parabolic coupled systems was established by Zheng and Shen [Reference Zheng48, Reference Zheng49, Reference Zheng and Shen51]. For more relevant results about hyperbolic–parabolic coupled systems, we refer the reader to, for example, [Reference Hsiao, Jiang, Dafermos and Feireisl17, Reference Kawashima28, Reference Slemrod39, Reference Vol’pert and Hudjaev41, Reference Zheng50] and the references therein. Furthermore, for system (1.4), Chen, Huang, and Liu [Reference Chen, Huang and Liu9] showed the cusp-type singularity formation of smooth solutions in finite time and established the global existence of Hölder continuous weak solutions for its Cauchy problem. Recently, Chen, Liu, and Sofiani [Reference Chen, Liu and Sofiani10] generalized the existence results to the general system (1.3). Some relevant studies for the full Ericksen–Leslie equations (1.1) were presented, among others, in [Reference Jiang and Luo23–Reference Jiang, Luo and Tang25]. To the best of our knowledge, studies on hyperbolic–parabolic coupled systems with a degenerate wave speed are still very limited, and there is no general theory for these kinds of problems. Based on an estimate of the solution of the heat equation, we [Reference Hu19] showed that the smooth solution of (1.4) may break down in finite time, even for arbitrarily small initial energy, under the initial condition
$c(\theta (0,x))\geq \delta>0$
.
In this paper, we are concerned with the Cauchy problem for the nonlinear degenerate hyperbolic–parabolic coupled system (1.4) with initial data given on a line of parabolicity. More precisely, we consider the local solvability of classical solutions to (1.4) with the degenerate initial data
$c(\theta (0,x))=0$
. This type of degenerate problem is meaningful and interesting in both mathematical theory and physical applications. Physically, it corresponds to the case in which the orientation wave in the director field has no potential energy initially. At later times, the orientation wave converts kinetic energy into potential energy due to the coupling of the director field and the flow field, so exploring this issue may help us better understand the process of energy conversion. Mathematically, the solvability of degenerate hyperbolic–parabolic coupled systems is a fundamental problem for coupled systems. Although there have been many results on the solvability of strictly hyperbolic–parabolic coupled systems and of single degenerate hyperbolic equations, they cannot be applied to the present degenerate hyperbolic–parabolic coupled problem. Technically, we need to overcome the difficulty that the degeneracy couples variables living at different scales. In addition, it is also of interest to investigate how the unknown function of the parabolic equation is affected by the degeneracy of the hyperbolic equation.
For the convenience of calculation, we discuss the special wave speed
$c(\theta )=\theta $
here, and a similar result for the general wave speed
$c(\cdot )$
with
$|c'(\cdot )|\geq c_0>0$
can be established by the mean value theorem in the derivation process. By introducing a new variable
$v(t,x)$

and hence
$v_t=u_x+\theta _t$
, system (1.4) can be rewritten in terms of
$(\theta ,v)$

We study the Cauchy problem for (1.7) with the following initial data:

By (1.8), the wave speed vanishes on the initial line, so the hyperbolic equation in (1.7) is degenerate at
$t=0$
. We emphasize that this degenerate Cauchy problem is neither trivial nor easy, and it cannot be solved by applying the classical method for strictly hyperbolic–parabolic coupled systems (e.g., [Reference Li, Yu and Shen31]). The main reason is that, due to the degeneracy, the corresponding system is not continuously differentiable. Moreover, because of the coupling of hyperbolicity and parabolicity, this degenerate Cauchy problem also cannot be solved directly by the previous strategies for degenerate hyperbolic equations (e.g., [Reference Hu and Li20, Reference Hu and Wang21, Reference Zhang and Zheng46]), since the hyperbolic and parabolic equations live on different scales. In the present paper, we solve the degenerate Cauchy problem (1.7), (1.8) by a fixed-point iteration consisting of two steps. First, we introduce a suitable function space for the variable v and establish the existence of classical solutions for the degenerate hyperbolic system in a partial hodograph coordinate plane. Here, due to the regularity of the variable v, the approach to the current degenerate hyperbolic problem differs from the method in the previous papers [Reference Hu and Li20, Reference Hu and Wang21, Reference Zhang and Zheng46] and is inspired by the work of Protter [Reference Protter37] on the well-posedness of the Cauchy problem for second-order linear degenerate hyperbolic equations. We solve the quasilinear degenerate hyperbolic problem and derive uniform estimates of the solutions near the parabolic degenerating line in the partial hodograph plane. By transforming the smooth solution back to the original coordinate variables, we obtain the variable
$\theta $
and its corresponding estimates. Second, we use the fundamental solution of the one-dimensional heat equation to express the variable v and then establish a series of estimates for it based on the information on
$\theta $
. The uniform convergence of the iterative sequence generated by this pattern is verified in the selected space for a sufficiently small time.
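The convergence mechanism behind this fixed-point pattern is the Gronwall-type gain of a small factor in each sweep of the iteration, which forces the successive differences to decay factorially. The following Python sketch illustrates this on a toy scalar Volterra equation; the coefficient, the source, and the integral operator are hypothetical stand-ins and are not the operators of (1.7).

```python
import math
import numpy as np

# Toy illustration of the fixed-point pattern's convergence mechanism:
# each Picard sweep of a Volterra-type integral operator on [0, delta] gains a
# factor of order delta, so successive differences decay like (M*delta)^k / k!.
# The coefficient a(t) and source f(t) below are hypothetical stand-ins.
delta = 0.5
t = np.linspace(0.0, delta, 2001)
a = 2.0 + np.cos(t)              # bounded coefficient, |a| <= M = 3
f = np.sin(t)                    # bounded source, |f| <= 1

def picard_step(y):
    """One sweep y_{k+1}(t) = int_0^t (a(s) y_k(s) + f(s)) ds (trapezoid rule)."""
    integrand = a * y + f
    dt = t[1] - t[0]
    return np.concatenate(([0.0], np.cumsum((integrand[1:] + integrand[:-1]) / 2.0) * dt))

y_prev = np.zeros_like(t)        # y^{(0)} = 0, as in the iteration constructed below
for k in range(1, 9):
    y_next = picard_step(y_prev)
    diff = np.max(np.abs(y_next - y_prev))
    bound = (3.0 * delta) ** k / math.factorial(k)   # (M*delta)^k / k!
    print(f"sweep {k}: sup-difference = {diff:.3e}  <=  {bound:.3e}")
    y_prev = y_next
```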
The main conclusion of this paper can be stated as follows.
Theorem 1.1 Suppose that the functions
$\theta _0(x)$
and
$v_0(x)$
satisfy

for all
$x\in \mathbb {R}$
and some positive constant
$\underline {\theta }$
. Then there exists a constant
$\delta>0$
such that the degenerate Cauchy problem (1.7), (1.8) admits a unique classical solution on
$[0,\delta ]\times \mathbb {R}$
.
We comment that the technique developed in this paper can be applied to study more general hyperbolic–parabolic coupled systems, e.g., system (1.3). With a suitable choice of space for the unknown variable u, one can obtain the function
$\theta $
and its properties near the degenerate line by solving the degenerate hyperbolic equation based on almost the same process. Subsequently, we may utilize the parametrix method [Reference Friedman15] to construct the iterative sequence for the variable u and then show that it is uniformly convergent in the selected function space. The detailed process is relatively complicated and will be considered in the future.
The rest of the paper is organized as follows. In Section 2, we reformulate the problem in terms of new dependent variables and then restate the main result. Section 3 is devoted to solving the degenerate hyperbolic problem for a given variable v in a suitable function space. In Section 3.1, we introduce a partial hodograph coordinate system to transform the hyperbolic equation into a new system with a transparent singularity-regularity structure. In Section 3.2, we construct an iterative sequence generated by the new problem in the partial hodograph plane. Sections 3.3 and 3.4 are devoted, respectively, to establishing a series of lemmas for the iterative sequence and to showing the existence and uniqueness of smooth solutions for the new problem. The smooth solution in the partial hodograph plane is then converted back to the original physical plane in Section 3.5. In Section 4, we explore the local existence and uniqueness of classical solutions for the hyperbolic–parabolic coupled problem. In Section 4.1, we present the preliminary results for the one-dimensional heat equation. Section 4.2 is devoted to constructing an iterative sequence for the parabolic equation and verifying that it belongs to the selected function space. In Sections 4.3–4.5, we establish a series of properties for the iterative sequence and its derivatives in the selected function space, including the regularity and convergence of the iterative sequence. Finally, in Section 4.6, we complete the proof of Theorem 1.1.
2 Reformulation of the problem and the main result
In this section, we reformulate the problem by introducing a series of new dependent variables and then restate the main result in terms of these variables. We discuss the case
$\theta _0(x)\geq \underline {\theta }>0$
; the other case
$\theta _0(x)\leq -\underline {\theta }<0$
is analogous.
To handle the term
$v_t$
in the first equation of (1.7), we introduce

Thus,

In terms of variables
$(v, R, S, \theta )$
, system (1.7) can be written as

Corresponding to (1.8), the initial conditions of (2.3) are

Furthermore, we introduce the following variables
$(\widetilde {v}, \widetilde {R}, \widetilde {S})$
to homogenize the initial data:

Then one has by (2.3) and (2.4)

Here, the homogeneous initial values of
$(\widetilde {R}_t, \widetilde {S}_t)$
come from the equations of
$(R,S)$
in (2.3) and the fact
$\frac {R-S}{2\theta }\Big |_{t=0}=\theta _x|_{t=0}=0$
. By a direct calculation, we can obtain a new system in terms of the variables
$(\widetilde {v}, \widetilde {R}, \widetilde {S},\theta )$
as follows:

For the singular Cauchy problem (2.7), (2.6), we have the following.
Theorem 2.1 Assume that the functions
$v_0(x)$
and
$\theta _0(x)$
satisfy (1.9) and

for all
$x\in \mathbb {R}$
and two positive constants
$\underline {\theta }, \overline {\theta }$
. Then there exists a constant
$\delta>0$
such that the singular Cauchy problem (2.7), (2.6) has a unique classical solution on
$[0,\delta ]\times \mathbb {R}$
. Moreover, the solution
$(\widetilde {v}, \widetilde {R}, \widetilde {S}, \theta )(t,x)$
satisfies

for any
$(t,x)\in [0,\delta ]\times \mathbb {R}$
, where
$\widehat {M}$
and
$\widetilde {M}$
are two positive constants.
We shall use the iteration method to prove Theorem 2.1; combining it with (2.5), one establishes the existence of classical solutions for the Cauchy problem (2.3), (2.4). Theorem 1.1 then follows from the relations in (2.1).
Here, we first record for the reader our choices of the constants
$\widetilde {M}, \widehat {M}$
, and
$\delta $
, which come from the construction process in Sections 3 and 4. Denote

It is noted that
$\overline {K}$
and
$\widehat {K}$
are constants depending only on
$\underline {\theta }$
,
$\overline {\theta }$
, the
$C^3$
norms of
$\theta _0(x)$
, and the
$C^4$
norms of
$v_0(x)$
. Moreover, we choose

and

In (2.11),
$C_{j,\beta }(j\geq 1, \beta \geq 0)$
are positive constants that make the following inequality valid:

which come from the estimates for the fundamental solution of the heat equation.
3 The degenerate hyperbolic problem
In this section, we choose a suitable function space for the variable
$\widetilde {v}$
and then solve the degenerate hyperbolic problem for the variables
$(\widetilde {R}, \widetilde {S}, \theta )(t,x)$
.
Set

where
$\widehat {M}_0$
is a positive constant which will be determined in Section 4, and
$\delta _0$
is an arbitrary positive number which may be assumed to be 1. Let
$\widetilde {v}$
be any element in
$\Sigma (1)$
and denote
$w(t,x)=\widetilde {v}_x(t,x)\in C^1$
. We consider the degenerate hyperbolic problem

with the homogeneous initial conditions

For the Cauchy problem (3.2), (3.3), we have the following theorem.
Theorem 3.1 Let the conditions in Theorem 2.1 hold. Then there exists a constant
$\overline {\delta }>0$
such that the Cauchy problem (3.2), (3.3) has a unique classical solution on
$[0,\overline {\delta }]\times \mathbb {R}$
. Moreover, the solution
$(\widetilde {R}, \widetilde {S}, \theta )(t,x)$
satisfies

for any
$(t,x)\in [0,\overline {\delta }]\times \mathbb {R}$
, where
$\widetilde {M}$
is a positive constant.
We show Theorem 3.1 by transforming the problem into a partial hodograph plane and then returning the solution to the original physical plane.
3.1 The problem in a partial hodograph plane
We introduce the new independent variables

It is easy to calculate the Jacobian of this transformation

which, together with (3.1) and (3.3), gives
$J|_{t=0}=\theta _0\geq \underline {\theta }>0$
. Furthermore, one finds that

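For the reader's convenience, we also record the general pattern of such a partial hodograph transformation (a sketch with a generic function $\varphi $ standing in for the quantity actually used in (3.5)): if $(\tau ,y)=(\varphi (t,x),x)$ with $\varphi _t\neq 0$, then t becomes an unknown function of $(\tau ,y)$ and
$$J=\frac {\partial (\tau ,y)}{\partial (t,x)}=\varphi _t,\qquad t_\tau =\frac {1}{\varphi _t},\qquad t_y=-\frac {\varphi _x}{\varphi _t},\qquad \partial _t=\varphi _t\,\partial _\tau ,\qquad \partial _x=\partial _y+\varphi _x\,\partial _\tau .$$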
From system (3.2) and the relations in (3.6), we can obtain a closed quasilinear hyperbolic system in terms of
$(\widetilde {R},\widetilde {S},t)$

where
$f=\theta _0(y)-\widetilde {v}(t,y)-\tau $
. Obviously, system (3.7) has a clear regularity-singularity structure. The initial values of
$(\widetilde {R},\widetilde {S},t)$
are

It is easily seen that the three eigenvalues of system (3.7) are

Moreover, the three characteristics passing through a point
$(\xi ,\eta )$
are defined by

Suppose that
$\widetilde {M}$
is a sufficiently large positive constant to be determined later. We first choose a sufficiently small positive constant
$\widetilde {\delta }_1\leq 1$
such that

We use the notation
$\widetilde {\Sigma }(\widetilde {\delta }_1)$
to denote the function class consisting of all continuous vector functions
$\textbf {F}=(\widetilde {R},\widetilde {S},t)^T:\ [0,\widetilde {\delta }_1]\times \mathbb {R}\rightarrow \mathbb {R}^{3}$
satisfying the following properties:

If
$(\widetilde {R}, \widetilde {S}, t)$
is an arbitrary element in
$\widetilde {\Sigma }(\widetilde {\delta }_1)$
, we see by (3.11) and (3.12) that

and

Similarly, one has

and

Furthermore, we have by (3.9) and (3.13)–(3.15)

from which and (3.10) one achieves

3.2 The iterative sequence
For any
$(\xi ,\eta )\in [0,\widetilde {\delta }_1]\times \mathbb {R}$
, integrating the differential system (3.7) along the characteristic curves
$y=y_i(\tau ;\xi ,\eta )\ (i=\pm ,0)$
defined in (3.10), we utilize the boundary conditions (3.8) to gain a system of integral equations

where

One can apply the integral system (3.19) to construct the iterative sequences. For any
$(\tau ,y)\in [0,\widetilde {\delta }_1]\times \mathbb {R}$
, set

from which we define the characteristic curves
$y=y_{i}^{(0)}(\tau )=:y_{i}^{(0)}(\tau ;\xi ,\eta )\ (i=\pm ,0)$
for
$\tau \in [0,\xi ]$
as

Thus, the functions
$(\widetilde {R}^{(1)}, \widetilde {S}^{(1)}, t^{(1)})(\tau , y)$
can be defined as

where

and
$f^{(0)}=\theta _0(y)-\widetilde {v}(t^{(0)},y)-\tau $
. After obtaining the functions
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})(\tau ,y)$
, one gets the characteristic curves
$y=y_{i}^{(k)}(\tau )=:y_{i}^{(k)}(\tau ;\xi ,\eta )\ (i=\pm ,0)$
by solving the following ODE equations:

Hence, we determine the functions
$(\widetilde {R}^{(k+1)}, \widetilde {S}^{(k+1)}, t^{(k+1)})(\tau ,y)$
by the following relations:

where

and
$f^{(k)}=\theta _0(y)-\widetilde {v}(t^{(k)},y)-\tau $
.
We next choose the constants
$\widetilde {M}$
and
$\widetilde {\delta }\leq \widetilde {\delta }_1$
to verify the uniform convergence of the sequences
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})(\tau ,y)$
.
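To make the characteristic construction (3.22)–(3.25) concrete, the following Python sketch integrates a single backward characteristic through a point (ξ, η) for a given speed field and accumulates a source term along it; the speed and source used here are toy stand-ins, not the actual coefficients of system (3.7).

```python
import numpy as np

# Schematic version of one evaluation in (3.22)-(3.25): through the point (xi, eta),
# solve the backward characteristic ODE dy/dtau = lam(tau, y) with y(xi) = eta and
# accumulate the source along it.  lam and source are hypothetical stand-ins.
def lam(tau, y):
    return np.sqrt(tau) * np.cos(y)        # toy characteristic speed

def source(tau, y):
    return tau * np.sin(y)                 # toy right-hand side

def value_along_characteristic(xi, eta, n_steps=2000):
    """Return u(xi, eta) = u0(y(0)) + int_0^xi source(tau, y(tau)) dtau."""
    dtau = xi / n_steps
    y, integral = eta, 0.0
    for k in range(n_steps):               # march from tau = xi down to tau = 0
        tau = xi - k * dtau
        integral += source(tau, y) * dtau  # left-endpoint quadrature in tau
        y -= lam(tau, y) * dtau            # explicit Euler step backward in tau
    u0 = 0.0                               # homogeneous data, as in (3.3)/(3.8)
    return u0 + integral

print(value_along_characteristic(xi=0.2, eta=1.0))
```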
3.3 Several lemmas
It first follows by the detailed expressions of
$A, B, C$
in (3.20) that

If
$(\widetilde {R}, \widetilde {S}, t)(\tau ,y)\in \widetilde {\Sigma }(\widetilde {\delta }_1)$
, we have by (3.13)–(3.16) and (3.26)

Denote
$\overline {\theta }_0=\max \{\max _{z\in \mathbb {R}}|\theta _{0}'(z)|, \max _{z\in \mathbb {R}}|\theta _{0}''(z)|, \max _{z\in \mathbb {R}}|\theta _{0}'''(z)|\}$
and

which along with (3.27) give

We now choose
$\widetilde {M}, \overline {M}$
, and
$\widetilde {\delta }$
satisfying

such that there hold

Hence, if
$(\widetilde {R}, \widetilde {S}, t)(\tau ,y)\in \widetilde {\Sigma }(\widetilde {\delta })$
, one obtains

Thanks to (3.31) and (3.32), we have the following lemma.
Lemma 3.1 Let the sequences
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})$
be defined in (3.25). For all
$k\geq 1$
, the following inequalities

hold in
$[0,\widetilde {\delta }]\times \mathbb {R}$
.
Proof We show this lemma by the standard argument of induction. That is, we first check that each inequality in (3.33) is true for
$n=1$
, then assume that they all hold for
$n=k$
and establish (3.33) for
$n=k+1$
.
Obviously, the functions
$(\widetilde {R}^{(0)},\widetilde {S}^{(0)},t^{(0)})(\xi ,\eta )$
defined in (3.21) are in
$\widetilde {\Sigma }(\widetilde {\delta })$
, then we find by (3.32) that

for
$I=A,B$
. It concludes by (3.23) and (3.34) that

The estimate (3.35) is also true for
$\widetilde {S}^{(1)}(\xi ,\eta )$
. For the variable
$t^{(1)}(\xi ,\eta )$
, one arrives at

Furthermore, applying (3.23) and (3.34) again acquires

We combine (3.35)–(3.37) to achieve (3.33) for
$n=1$
. In addition, the functions
$(\widetilde {R}^{(1)}, \widetilde {S}^{(1)}, t^{(1)})(\xi ,\eta )$
satisfy

which means that
$(\widetilde {R}^{(1)}, \widetilde {S}^{(1)}, t^{(1)})(\xi ,\eta )\in \widetilde {\Sigma }(\widetilde {\delta })$
.
Suppose that all inequalities in (3.33) hold for
$n=k$
. Thus,

from which we see that
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})(\xi ,\eta )\in \widetilde {\Sigma }(\widetilde {\delta })$
. Therefore, one has by (3.32)

for
$I=A,B$
. In view of (3.25) and (3.40), we employ the induction assumptions to obtain

The above estimate (3.41) also holds for
$\widetilde {S}^{(k+1)}(\xi ,\eta )$
. Moreover, it is easy to find by (3.40) that

For the term
$|\widetilde {R}^{(k+1)}(\xi ,\eta )-\widetilde {S}^{(k+1)}(\xi ,\eta )|$
, we proceed by using (3.25) and (3.40) again

By virtue of Lemma 3.1, the functions
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})(\xi ,\eta )$
are in the space
$\widetilde {\Sigma }(\widetilde {\delta })$
for each
$k\geq 0$
. Thus, the estimates in (3.32) are true for the functions
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})\ (k\geq 0)$
. To derive the uniform convergence of the iterative sequences
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})$
, one needs the properties of the sequences
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})(\xi ,\eta )$
. We differentiate system (3.25) with respect to
$\eta $
to calculate

where


where

and

where

The functions
$\frac {\partial y_{\pm }^{(k)}(\tau )}{\partial \eta }$
in (EQ.a) and (EQ.b) are

where

Due to
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})\in \widetilde {\Sigma }(\widetilde {\delta })$
, for
$k\geq 1$
, it follows from Lemma 3.1 and (3.27), (3.30), and (3.32) that

and then

We obtain analogously

Furthermore, there also hold

and

For the sequences
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})(\xi ,\eta )$
, we have the following lemma.
Lemma 3.2 For all
$k\geq 1$
, the following inequalities

hold in
$[0,\widetilde {\delta }]\times \mathbb {R}$
.
Proof The proof is also based on the standard argument of induction. According to


One combines (EQ.a), (3.50), and (3.55) to acquire

The estimate in (3.56) is also valid for the function
$\widetilde {S}_{\eta }^{(1)}(\xi ,\eta )$
. For the function
$t_{\eta }^{(1)}(\xi ,\eta )$
, it concludes by (EQ.c) and (3.52) that

Moreover, for the term
$|\widetilde {R}_{\eta }^{(1)}(\xi ,\eta )-\widetilde {S}_{\eta }^{(1)}(\xi ,\eta )|$
, it is easy to check that

which along with (3.56) and (3.57) give (3.54) for
$n=1$
.
Let all the inequalities in (3.54) hold for
$n=k$
. Then

Combining (3.53), (3.47), and (3.59) yields

and then

Utilizing the induction assumptions, we have by (EQ.a), (3.50), and (3.61)

The above estimate also holds for the function
$\widetilde {S}_{\eta }^{(k+1)}(\xi ,\eta )$
. For the term
$|\widetilde {R}_{\eta }^{(k+1)}(\xi ,\eta )-\widetilde {S}_{\eta }^{(k+1)}(\xi ,\eta )|$
, we can perform the process as in (3.62) to obtain

which, together with the fact
$15e^{\overline {M}\widetilde {\delta }^2}\leq 16$
by (3.31), leads to

For the function
$t_{\eta }^{(k+1)}(\xi ,\eta )$
, it follows by (EQ.c), (3.52), and the induction assumptions that

We combine (3.62), (3.64), and (3.65) to complete the proof of the lemma.
By means of Lemmas 3.1 and 3.2, one achieves the following lemma.
Lemma 3.3 For all
$k\geq 0$
, the following inequalities

hold in
$[0,\widetilde {\delta }]\times \mathbb {R}$
.
Proof We again use the argument of induction to verify the lemma. It is obvious from the functions
$(\widetilde {R}^{(0)},\widetilde {S}^{(0)},t^{(0)})(\xi ,\eta )$
in (3.21) and (3.33) that the inequalities in (3.66) are valid for
$n=1$
. Suppose that each inequality in (3.66) holds for
$n\leq k-1$
. We shall show that they are true for
$n=k$
.
We first estimate the term
$|t^{(k+1)}(\xi ,\eta )-t^{(k)}(\xi ,\eta )|$
. In view of (3.25) and (3.32), we apply the induction assumptions and the fact
$y_{0}^{(k)}(\tau )\equiv \eta $
to find that

To derive the difference between
$\widetilde {R}^{(k+1)}(\xi ,\eta )$
and
$\widetilde {R}^{(k)}(\xi ,\eta )$
, we need to first obtain the estimate of
$|y^{(k)}_-(\tau )-y^{(k-1)}_-(\tau )|$
. By virtue of (3.24), one finds that for
$\tau \in [0,\xi ]$
,

from which we see by Lemma 3.1 and (3.15) that

where

For the term
$T_{1}^{(k)}$
, one easily finds by the mean value theorem that

For the term
$T_{2}^{(k)}$
, we have by Lemma 3.2 and the induction assumptions

Similarly, it concludes that for the term
$T_{3}^{(k)}$
,

Putting (3.71)–(3.73) into (3.69) yields for
$\tau \in [0,\xi ]$
,

Denote

Then we obtain by (3.74)

which indicates that

Analogously, one gets

We now estimate the difference between
$\widetilde {R}^{(k+1)}(\xi ,\eta )$
and
$\widetilde {R}^{(k)}(\xi ,\eta )$
. Applying (3.25) gives

where

Thanks to Lemma 3.2 and (3.75), one employs the induction assumptions to arrive at

Recalling the expressions of
$A^{(k)}$
in (3.25) and
$T_{1,2,3}^{(k)}$
in (3.70) achieves

by the choice of
$\widetilde {\delta }$
in (3.31). Therefore, we can use Lemmas 3.1 and 3.2, the induction assumptions, (3.75), and (3.80) to estimate the terms
$T_{5,6,7,8}^{(k)}$
. In detail, for the term
$T_{5}^{(k)}$
, we have

For the term
$T_{6}^{(k)}$
, one obtains

For the terms
$T_{7}^{(k)}$
and
$T_{8}^{(k)}$
, we also acquire

and

Here, the facts
$\widehat {M}_0\geq 1$
,
$16\overline {M}\widetilde {\delta }\leq 1$
, and
$16\overline {K}\overline {M}_0\widetilde {\delta }\leq 1$
are used in the above estimation processes.
We now insert (3.79) and (3.81)–(3.84) into (3.77) to gain

It is easily checked that the estimate (3.85) is also valid for the term
$|\widetilde {S}^{(k+1)}(\xi ,\eta )-\widetilde {S}^{(k)}(\xi ,\eta )|$
. The proof of the lemma is finished.
3.4 The existence and uniqueness of solutions
According to Lemma 3.3, it is known that the sequences
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})(\tau , y)$
are uniformly convergent. We denote the limit functions by
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
which are obviously continuous. Furthermore, by means of Lemma 3.1, the functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
satisfy

for any
$(\tau , y)\in [0,\widetilde {\delta }]\times \mathbb {R}$
. In addition, it is clear that the functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
satisfy the integral system (3.19) and the boundary conditions

Thus, we have
$(\widetilde {R}, \widetilde {S}, t)\in \widetilde {\Sigma }(\widetilde {\delta })$
.
Next, we show that the functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
possess first-order continuous derivatives with respect to y. To move forward, one differentiates (3.19) with respect to
$\eta $
to deduce the following linear system of integral equations:



Here, the coefficient functions in (EQ.a1)–(EQ.c1) are given in (EQ.a)–(EQ.c) but with the limit functions
$(\widetilde {R}, \widetilde {S}, t)$
replacing
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, t^{(k)})$
. Set

and construct the iterative sequences
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})(\xi ,\eta )$
by the integral system (EQ.a1)–(EQ.c1) as follows:



where

and

Since the limit functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
are in the space
$\widetilde {\Sigma }(\widetilde {\delta })$
, we see that the coefficients of system (EQ.a2) and (EQ.c2) still satisfy the estimates in (3.50)–(3.53). Then we have the following.
Lemma 3.4 The sequences
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})$
defined by system (EQ.a2) and (EQ.c2) satisfy the following inequalities:

for all
$k\geq 1$
and any
$(\xi ,\eta )\in [0,\widetilde {\delta }]\times \mathbb {R}$
.
Proof We omit the proof here since the process is exactly the same as that of Lemma 3.2.
It concludes by (3.91) that

which together with (3.89) and (3.90) and the estimates in (3.50)–(3.53) give

and then

where

By utilizing Lemma 3.4 and (3.93) and (3.94), we have the following lemma.
Lemma 3.5 Let the functions
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})$
be defined by the iterative system (EQ.a2) and (EQ.c2). Then, for all
$k\geq 0$
, the following inequalities

hold in
$[0,\widetilde {\delta }]\times \mathbb {R}$
.
Proof The proof of the lemma is still based on the inductive method. It is obvious by Lemma 3.4 that all inequalities in (3.95) are true for
$k=0$
. We assume that each inequality in (3.95) holds for
$n=k-1$
and then verify that they all hold for
$n=k$
.
For the term
$|t_{\eta }^{(k+1)}(\xi ,\eta )-t_{\eta }^{(k)}(\xi ,\eta )|$
, one obtains by (EQ.c2), (3.52), and the induction assumptions

For the term
$|\widetilde {R}_{\eta }^{(k+1)}(\xi ,\eta )-\widetilde {R}_{\eta }^{(k)}(\xi ,\eta )|$
, one gets by (EQ.a2)

where


Recalling the estimates in (3.50) yields

which, together with (3.86), (3.91), (3.98)–(3.99), and the induction assumptions, arrives at

and

Moreover, it follows from (3.94) and the induction assumptions that

Putting (3.101)–(3.103) into (3.97) and using (3.93) leads to

by the fact
$e^{1/64}<64/63$
. The estimate of the term
$|\widetilde {S}_{\eta }^{(k+1)}(\xi ,\eta )-\widetilde {S}_{\eta }^{(k)}(\xi ,\eta )|$
can be derived similar to (3.104). The proof of the lemma is complete.
According to Lemmas 3.4 and 3.5, we know that the sequences
$(\widetilde {R}_{\eta }^{(k)}, \widetilde {S}_{\eta }^{(k)}, t_{\eta }^{(k)})(\xi ,\eta )$
are uniformly convergent, which means that the functions
$(\widetilde {R}_{\eta }, \widetilde {S}_{\eta }, t_{\eta })(\xi ,\eta )$
are continuous in
$[0,\widetilde {\delta }]\times \mathbb {R}$
. Furthermore, one also has by (3.91)

In addition, we differentiate equations (3.19) with respect to
$\xi $
to gain


and

The terms
$\partial _\xi y_\pm (\tau )$
in (3.106) and (3.107) are given by

In view of the expressions of
$\widetilde {R}_\xi $
and
$\widetilde {S}_\xi $
in (3.106) and (3.107), we employ the estimates in (3.86) and (3.105) to find that the functions
$(\widetilde {R}_{\tau }, \widetilde {S}_{\tau }, t_\tau )(\tau , y)$
are continuous in
$[0,\widetilde {\delta }]\times \mathbb {R}$
and satisfy

All in all, the functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
satisfy the integral equations (3.19) and the homogeneous initial conditions (3.87) and (3.110), and they also possess the required differentiability properties. Therefore, they are a smooth solution to the singular problem (3.7), (3.8).
In order to verify the uniqueness, we suppose that
$(\widetilde {R}_a, \widetilde {S}_a, t_a)(\tau , y)$
and
$(\widetilde {R}_b, \widetilde {S}_b, t_b)(\tau , y)$
are two smooth solutions of the problem (3.7), (3.8) and consider their difference. Denote
$X=\widetilde {R}_a-\widetilde {R}_b$
,
$Y=\widetilde {S}_a-\widetilde {S}_b$
, and
$T=t_a-t_b$
. By (3.7), one derives the equations for
$(X,Y,T)$
as follows:


and

where

and
$f_i=\theta _0(y)-\widetilde {v}(t_i,y)-\tau $
for
$i=a,b$
. Performing a direct calculation gets

Integrating (3.111)–(3.113) from
$0$
to
$\xi $
and utilizing the estimates (3.86) and (3.105) and the mean value theorem, we can find that the functions
$(X,Y,T)$
satisfy a homogeneous integral inequality system of the following form

for some positive constant
$M^*$
. One repeats the insertion of the right side of (3.114) to see that the functions
$(X,Y,T)$
must satisfy

for arbitrary integer
$\ell \geq 1$
and some positive constant
$\overline {M}^*$
. This implies that
$X(\tau ,y)=Y(\tau ,y)=T(\tau ,y)\equiv 0$
. Hence, the smooth solution of the singular initial value problem (3.7), (3.8) is unique.
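The key mechanism here is the standard iteration of a homogeneous Volterra-type inequality. Schematically, writing $\Phi (\xi )=\sup _{y}\big (|X|+|Y|+|T|\big )(\xi ,y)$ and assuming an inequality of the form in (3.114), one has
$$\Phi (\xi )\leq M^*\int _0^\xi \Phi (\tau )\,d\tau \ \Longrightarrow \ \Phi (\xi )\leq (M^*)^\ell \int _0^\xi \int _0^{\tau _1}\cdots \int _0^{\tau _{\ell -1}}\Phi (\tau _\ell )\,d\tau _\ell \cdots d\tau _1\leq \frac {(M^*\xi )^\ell }{\ell !}\sup _{[0,\widetilde {\delta }]}\Phi ,$$
and the right-hand side tends to zero as $\ell \to \infty $, which forces $\Phi \equiv 0$.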
3.5 The problem in the original plane
Based on the unique smooth solution
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
of problem (3.7), (3.8) obtained in Section 3.4, we establish the existence and uniqueness of smooth solutions for the initial value problem (3.2), (3.3) in this subsection.
By virtue of (3.86), we know that the functions
$(\widetilde {R}, \widetilde {S}, t)(\tau , y)$
are in the space
$\widetilde {\Sigma }(\widetilde {\delta })$
, from which and (3.16) one acquires

for
$(\tau ,y)\in [0,\widetilde {\delta }]\times \mathbb {R}$
. Then we can use (3.5) and (3.7) to construct the functions
$(t,x)$
as follows:

for any
$(\tau ,y)\in [0,\widetilde {\delta }]\times \mathbb {R}$
. Obviously, by (3.115), the mapping
$(\tau ,y)\mapsto (t,x)$
is globally one-to-one in the region
$[0,\widetilde {\delta }]\times \mathbb {R}$
. Now, set

Then, for any point
$(t^*,x^*)\in [0,\overline {\delta }]\times \mathbb {R}$
, we see that there exists a unique corresponding point
$(\tau ^*,y^*)\in [0,\widetilde {\delta }]\times \mathbb {R}$
such that

Thus, one can define the functions
$(\theta , \widetilde {R}, \widetilde {S})(t,x)$
as follows:

In sum, we have obtained the functions
$(\theta , \widetilde {R}, \widetilde {S})(t, x)$
in the region
$[0,\overline {\delta }]\times \mathbb {R}$
.
We next verify that the functions defined in (3.119) satisfy the initial value conditions (3.3) and equations (3.2). By means of (3.86) and (3.119), one concludes

from which we get
$\theta (0,x)=\widetilde {R}(0,x)=\widetilde {S}(0,x)=0$
. Moreover, we calculate

which, along with (3.105) and (3.115), gives
$\widetilde {R}_t(0,x)=\widetilde {S}_t(0,x)=0$
. Thus, the functions
$(\theta , \widetilde {R}, \widetilde {S})(t, x)$
satisfy the initial value conditions (3.3). Furthermore, we apply the fact
$\tau _x=(\widetilde {R}-\widetilde {S})/2\tau $
by (3.6) to arrive at

which combined with (3.121) and (3.7) leads to

which is the desired equation for
$\widetilde {R}$
in (3.2). One can check the other equations in (3.2) in a similar way. The inequalities in (3.4) come from (3.120). Therefore, the proof of Theorem 3.1 is completed.
In addition, we check that the following relation holds:

To show (3.123), we set
$\Phi =\widetilde {R}-\widetilde {S}-2\theta \theta _x$
and apply (3.2) to compute

Recalling (3.17) gives

which, along with (3.124) and the initial condition
$(\Phi /t)|_{t=0}=0$
, yields
$\Phi \equiv 0$
. Hence, the relation (3.123) holds.
Finally, for later applications, we summarize some properties for the functions
$(\theta , \widetilde {R}, \widetilde {S})(t, x)$
in the region
$[0,\overline {\delta }]\times \mathbb {R}$

Here, we used the following estimates:

and

4 The hyperbolic–parabolic coupled problem
In this section, we show Theorem 2.1 and then obtain Theorem 1.1 based on the results established in Section 3. The strategy is to solve the parabolic equation in (2.7) to construct an iterative sequence for the variable
$\widetilde {v}$
and then verify the uniform convergence of the iterative sequence.
4.1 Preliminary results for heat equation
We first introduce some known results for the heat equation; these can be found in many excellent texts (see, e.g., [Reference Evans14]). Let
$b(t,x)\in C^1$
be a function defined in the region
$[0,\overline {\delta }]\times \mathbb {R}$
. Here, the positive number
$\overline {\delta }$
is given in Theorem 3.1. Consider the following initial value problem for the heat equation:

We know that the fundamental solution of the heat equation is
$$G(t,x)=\frac {1}{\sqrt {4\pi t}}\exp \Big (-\frac {x^2}{4t}\Big ),\qquad t>0,\ x\in \mathbb {R},$$
that is,
$G(t,x)$
satisfies

where
$\delta _0(x)$
is the Dirac function at
$x=0$
. Thus, the smooth solution of problem (4.1) can be expressed by the fundamental solution

Furthermore, the functions
$\widetilde {v}_x, \widetilde {v}_t, \widetilde {v}_{xx}$
, and
$\widetilde {v}_{xt}$
can also be stated as follows:



and

We set

Then one has

and

The results in (4.9)–(4.11) can be found in [Reference Li, Yu and Shen31]. Thus, we obtain for
$\sigma =1,2$
,

In addition, it is clear that the following inequality holds

for
$z\geq 0$
, where
$\beta \geq 0$
is an arbitrary number and
$C_\beta $
is a positive constant depending only on
$\beta $
. Then, by (4.2), (4.9), and (4.13), one acquires

and

where
$C_{j,\beta }(j\geq 1)$
are positive constants depending only on
$\beta $
. The inequalities in (4.15) can also be found in [Reference Li, Yu and Shen31]. Making use of (4.12) and (4.14), we gain

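The estimates (4.9)–(4.16) quantify the $L^1$ decay in t of the heat kernel and its spatial derivatives. As a quick numerical sanity check of this scaling for the standard kernel (an illustration only; the constants $C_{j,\beta }$ of the paper are not computed here):

```python
import numpy as np

# Check the L^1 scaling of the standard heat kernel G(t,x) = exp(-x^2/(4t)) / sqrt(4*pi*t):
# ||G(t,.)||_{L^1} = 1, ||G_x(t,.)||_{L^1} = 1/sqrt(pi*t), and ||G_xx(t,.)||_{L^1} ~ 1/t,
# so the printed (rescaled) quantities should be essentially constant in t.
def kernel_and_derivatives(t, x):
    G = np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)
    Gx = -x / (2.0 * t) * G
    Gxx = (x**2 / (4.0 * t**2) - 1.0 / (2.0 * t)) * G
    return G, Gx, Gxx

x = np.linspace(-40.0, 40.0, 400001)        # wide grid with fine spacing
dx = x[1] - x[0]
for t in (0.01, 0.04, 0.16):
    G, Gx, Gxx = kernel_and_derivatives(t, x)
    print(f"t = {t:5.2f}: "
          f"||G||_1 = {np.abs(G).sum() * dx:.4f}, "
          f"sqrt(pi*t)*||G_x||_1 = {np.sqrt(np.pi * t) * np.abs(Gx).sum() * dx:.4f}, "
          f"t*||G_xx||_1 = {t * np.abs(Gxx).sum() * dx:.4f}")
```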
4.2 The iterative sequence for the heat equation
For any
$(t,x)\in [0,\overline {\delta }]\times \mathbb {R}$
, we set

which obviously satisfies
$\widetilde {v}^{(0)}\in \Sigma (\overline {\delta })$
. Here, the space
$\Sigma (\overline {\delta })$
is defined in (3.1) but with the number
$\overline {\delta }$
replacing
$\delta _0$
. In view of the results in Section 3, one obtains the functions
$(\widetilde {R}^{(0)},\widetilde {S}^{(0)},\theta ^{(0)})(t,x)$
for
$(t,x)\in [0,\overline {\delta }]\times \mathbb {R}$
. According to (4.4), we define the function
$\widetilde {v}^{(1)}(t,x)$
in the region
$[0,\overline {\delta }]\times \mathbb {R}$
as

where

After determining the function
$\widetilde {v}^{(k)}(t,x)$
in
$[0,\overline {\delta }]\times \mathbb {R}$
, one obtains the functions
$(\widetilde {R}^{(k)},\widetilde {S}^{(k)},\theta ^{(k)})(t,x)$
based on the results in Section 3. Then we define the function
$\widetilde {v}^{(k+1)}(t,x)$
in the region
$[0,\overline {\delta }]\times \mathbb {R}$
by the following relation:

where

Therefore, we have constructed an iterative sequence
$\{\widetilde {v}^{(k)}(t,x)\}$
.
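The recursion above evaluates, at each step, the two kernel integrals of the Duhamel representation (4.4). The following Python sketch shows such an evaluation by quadrature for toy data: taking $v_0(x)=\sin x$ and a source $b\equiv \sin x$, the formula must return the steady solution $\sin x$ of $v_t=v_{xx}+\sin x$, which provides an easy correctness check. (The toy data and the quadrature are illustrative only; they are not the iterates $b^{(k)}$ of the paper.)

```python
import numpy as np

# Duhamel representation v(t,x) = int G(t,x-y) v0(y) dy + int_0^t int G(t-s,x-y) b(s,y) dy ds,
# evaluated by simple quadrature.  With v0(x) = sin(x) and b(s,x) = sin(x), the exact answer
# is v(t,x) = sin(x) (a steady solution of v_t = v_xx + sin x), which checks this toy setup.
def G(t, x):
    return np.exp(-x**2 / (4.0 * t)) / np.sqrt(4.0 * np.pi * t)

def duhamel(t, x, n_y=4001, n_s=200, half_width=12.0):
    y = np.linspace(x - half_width, x + half_width, n_y)
    dy = y[1] - y[0]
    # initial-data part: int G(t, x-y) v0(y) dy
    v = np.sum(G(t, x - y) * np.sin(y)) * dy
    # source part: midpoint rule in s, so that t - s never vanishes
    ds = t / n_s
    for j in range(n_s):
        s = (j + 0.5) * ds
        v += np.sum(G(t - s, x - y) * np.sin(y)) * dy * ds
    return v

t, x = 0.5, 0.7
print(f"quadrature: {duhamel(t, x):.6f}   exact: {np.sin(x):.6f}")
```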
Denote

which originates from the last two terms in the expression of
$b^{(k)}$
. Set

Here, the constant
$\overline {M}\geq 16$
is given in (3.30), which depends only on
$\underline {\theta }$
and the
$C^3$
norm of
$\theta _0(y)$
,
$\overline {\delta }$
is defined in (3.117), and
$C_{j,\beta }$
are presented in (4.15). Then we see by (3.125) that if
$\widetilde {v}(t,x)\in \Sigma (\delta )$
,

For the sequence
$\{\widetilde {v}^{(k)}(t,x)\}$
, we have the following lemma.
Lemma 4.1 For all
$k\geq 0$
, there hold
$\widetilde {v}^{(k)}(t,x)\in \Sigma (\delta )$
, that is, the functions
$\widetilde {v}^{(k)}(t,x)(k=1,2,\ldots )$
satisfy
$\widetilde {v}^{(k)}(t,x)\in C^1,\ \widetilde {v}_{x}^{(k)}(t,x)\in C^1$
, and
$\forall \ (t,x)\in [0,\delta ]\times \mathbb {R}$

Proof We mainly verify that all the inequalities in (4.22) are true for any
$k\geq 0$
. The proof is also based on the argument of induction.
Due to
$\widetilde {v}^{(0)}\equiv 0\in \Sigma (\delta )$
, we see by (4.21) that the functions
$(\widetilde {R}^{(0)},\widetilde {S}^{(0)},\theta ^{(0)})(t,x)$
satisfy

from which we find that

and


One combines (4.17) and (4.5)–(4.8) and utilizes (4.24)–(4.26) to arrive at

and

Making use of (4.16), (4.27), and (4.28), we get

from which we gain that (4.22) holds for
$k=1$
.
Assume that
$\widetilde {v}^{(k)}(t,x)\in \Sigma (\delta )$
, that is, (4.22) is true for
$n=k$
. Thanks to (4.21), the functions
$(\widetilde {R}^{(k)},\widetilde {S}^{(k)},\theta ^{(k)})(t,x)$
obtained in Section 3 satisfy

It follows from (4.30) and the induction assumptions that

Thus, by applying (4.18), (4.5)–(4.8), (4.16), and (4.31), one achieves

and

From (4.32) and (4.33), we acquire
$\widetilde {v}^{(k+1)}(t,x)\in \Sigma (\delta )$
and then complete the proof of the lemma.
By virtue of Lemma 4.1 and (4.21), we find that for any
$k\geq 0$
and
$(t,x)\in [0,\delta ]\times \mathbb {R}$
,

Furthermore, the function
$\widetilde {v}^{(k)}_{xx}(t,x)$
enjoys the following regularity.
Lemma 4.2 For any
$k\geq 0$
, the function
$\widetilde {v}^{(k)}_{xx}(t,x)$
is uniformly
$\alpha $
-Hölder continuous with respect to x, that is, there exists a positive constant
$\widehat {C}_\alpha $
independent of k such that for any two points
$(t,x_1)$
and
$(t,x_2)$
in
$[0,\delta ]\times \mathbb {R}$
, there hold for
$\alpha \in (0,1)$

Proof The proof of the lemma is similar to that of Lemma 3.3 in [Reference Li, Yu and Shen31], but we present it here for the sake of completeness. By means of (4.7) and (4.18), we see that

and then for any
$x_1<x_2$
,

Denote
$\gamma =(x_2-x_1)^2$
. The proof is divided into two cases:
$\gamma> t$
and
$\gamma \leq t$
.
If
$\gamma> t$
, we use (4.31), (4.15) with
$\beta =1$
, and (4.12) to estimate (4.37) directly

If
$\gamma \leq t$
, the relation (4.37) can be rewritten as

where

For the term
$I_{1}^{(k)}$
, it concludes by (4.31), (4.15) with
$\beta =1$
, and (4.11) that

The estimate (4.40) also holds for the term
$I_{2}^{(k)}$
. Furthermore, integration by parts gives
$I_{3}^{(k)}=0$
. For the term
$I_{4}^{(k)}$
, we use (4.31), (4.15), and (4.11) again to find

Substituting (4.40) and (4.41) into (4.39) yields

We denote

and combine (4.38) and (4.42) to achieve

which is the desired inequality (4.35).
4.3 The convergence of the iterative sequence (I)
In order to establish the uniform convergence of the iterative sequence
$\{\widetilde {v}^{(k)}(t,x)\}$
, we need to derive a series of its properties in the space
$\Sigma (\delta )$
.
By direct calculations, one applies (4.18) and (4.5) to get

and

where

To estimate
$T_{12-14}^{(k)}$
, we first recall (3.2) to gain the equations for
$(\widetilde {R}^{(k)}, \widetilde {S}^{(k)}, \theta ^{(k)})(t,x)$
in
$[0,\delta ]\times \mathbb {R}$

where

It follows by (4.46) that

from which one acquires for any
$(\xi ,\eta )\in [0,\delta ]\times \mathbb {R}$
,

where
$x_{\pm }^{(k)}(t)=x_{\pm }^{(k)}(t;\xi ,\eta )(t\in [0,\xi ])$
are provided by

Performing direct calculations achieves

where

Moreover, thanks to (4.34) and Lemma 4.1, we can get a more precise estimate for
$\theta ^{(k)}$

and thus

and then

It follows from (4.34) and (4.50) that

Furthermore, for the term
$T_{17}^{(k)}$
, we obtain

by
$8\widehat {M}_{0}\delta \leq \underline {\theta }<\overline {\theta }$
. Putting (4.51) and (4.52) into (4.49) and applying (4.34) again yield

We insert (4.53) and (4.49) into (4.48) to see that

and

Based on (4.54) and (4.55), we can show the following lemma.
Lemma 4.3 For any
$k\geq 1$
and
$(\xi ,\eta )\in [0,\delta ]\times \mathbb {R}$
, if
$T_{15}^{(k)}(\xi ,\eta )$
and
$T_{16}^{(k)}(\xi ,\eta )$
satisfy

then there hold

Proof By (4.34) and the definitions of
$T_{12-14}^{(k)}$
in (4.45), we first find that

Substituting (4.58) into (4.55) and utilizing the assumptions in (4.56) give

We now put the estimates (4.58) and (4.59) into (4.54) and apply the assumptions in (4.56) to deduce

One also has by (4.59)

Inserting the estimates of
$T_{12-14}^{(k)}$
in (4.60) and (4.61) into (4.54) and (4.55) and using the assumptions in (4.56) again arrive at

and

We next repeatedly substitute the newly obtained estimates of
$T_{12-14}^{(k)}$
into (4.54) and (4.55) to get for arbitrary integer
$\ell \geq 1$
,

It follows by the arbitrariness of
$\ell $
that

which finishes the proof of the lemma.
By virtue of Lemma 4.3, (4.44), and (4.45), we obtain the following lemma.
Lemma 4.4 Let the iterative sequence
$\{\widetilde {v}^{(k)}\}$
be defined by (4.18). Then, for all
$k\geq 0$
, the following inequalities

hold for
$(t,x)\in [0,\delta ]\times \mathbb {R}$
.
Proof We show the lemma by using the argument of induction again. Obviously, the results of Lemma 4.1 and the fact
$\widetilde {v}^{(0)}(t,x)\equiv 0$
indicate that (4.66) is valid for
$k=0$
.
Now, suppose that all inequalities in (4.66) are true for
$n=k$
. Recalling the definitions of
$T_{15}^{(k)}$
and
$T_{16}^{(k)}$
, the induction assumptions mean that

from which and Lemma 4.3 one has

for any
$(t,x)\in [0,\delta ]\times \mathbb {R}$
. Inserting (4.67) and (4.68) into (4.44) and (4.45) and making use of (4.16) give

and

by the choice of
$\delta $
in (4.20). We combine (4.69) and (4.70) to complete the proof of the lemma.
4.4 The improved regularity of the iterative sequence
In this subsection, we improve the regularity of the function
$\widetilde {v}_{xx}^{(k)}(t,x)$
with respect to x, which will be used to establish the uniform convergence of sequences
$\{(\widetilde {v}_{t}^{(k)}, \widetilde {v}_{xt}^{(k)}, \widetilde {v}_{xx}^{(k)})(t,x)\}$
in the space
$\Sigma (\delta )$
.
Integrating system (4.46) along the characteristic curves
$x_{i}^{(k)}(t)(i=\pm ,0)$
and differentiating the resulting equations with respect to
$\eta $
, we deduce for any
$(\xi ,\eta )\in [0,\delta ]\times \mathbb {R}$
,

where
$x_{\pm }^{(k)}(t)=x_{\pm }^{(k)}(t;\xi ,\eta )$
are determined by (4.48), and

From (4.71), one obtains for any
$(\xi ,\eta _1), (\xi ,\eta _2)\in [0,\delta ]\times \mathbb {R}$
,

where

We point out that, by the estimate of
$\theta _{\eta \eta }^{(k)}$
in (4.34), there is no need to consider the difference between
$\theta _{\eta }^{(k)}(\xi ,\eta _1)$
and
$\theta _{\eta }^{(k)}(\xi ,\eta _2)$
. One performs direct calculations and uses (4.34) and (4.72) to achieve

and

where

Recalling the definitions of
$x_{\pm }^{(k)}(t;\xi ,\eta )$
in (4.48) and utilizing (4.34) get

for any
$t\in [0,\xi ]$
, from which one finds that

We put (4.76) into (4.75) to gain

Moreover, by applying (4.20), (4.34), (4.51), and Lemma 4.1, one computes

To estimate
$I_{7}^{(k)}$
, we rewrite the term
$F_{1x}^{(k)}$
as

where
$\Psi ^{(k)}$
is defined in (4.47) and

Here, we used the relation
$\theta _{x}^{(k)}=(\widetilde {R}^{(k)}-\widetilde {S}^{(k)})/2\theta ^{(k)}$
by (3.123). Note that the term
$I_{11}^{(k)}$
is differentiable with respect to x. Doing a direct calculation and employing (4.20), (4.34), (4.51), and Lemma 4.1 arrive at

and

by the choice
$\widehat {M}_0\geq 24\overline {M}^2\overline {\theta }$
in (4.20). We combine (4.79)–(4.81) and apply (4.34), (4.76), and Lemmas 4.1 and 4.2 to achieve

Here, we have chosen
$\alpha =\frac {1}{2}$
in Lemma 4.2 and then

If
$|\eta _1-\eta _2|\leq 1$
, then one obtains by (4.82)

Now, we put (4.74), (4.77), (4.78), and (4.83) into (4.73) to find that if
$|\eta _1-\eta _2|\leq 1$
,

by the fact
$15e^{\widehat {M}_0\delta ^2}\leq 16$
.
In view of (4.84), we have the following lemma.
Lemma 4.5 The functions
$\widetilde {R}_{x}^{(k)}(t,x)$
,
$\widetilde {S}_{x}^{(k)}(t,x)$
, and
$\widetilde {v}_{xx}^{(k)}(t,x) (k=0,1,\ldots )$
are uniformly Lipschitz continuous with respect to x. More precisely, there hold for any
$k\geq 0$
and any two points
$(t,x_1), (t,x_2)\in [0,\delta ]\times \mathbb {R}$
,

Proof We first show that the functions
$\widetilde {R}_{\eta }^{(k)}(\xi ,\eta )$
and
$\widetilde {S}_{\eta }^{(k)}(\xi ,\eta ) (k=0,1,\ldots )$
are uniformly 1/2-Hölder continuous with respect to
$\eta $
. For any two numbers
$\eta _1, \eta _2$
, if
$|\eta _1-\eta _2|>1$
, then one acquires by (4.34)

If
$|\eta _1-\eta _2|\leq 1$
, then the inequality (4.84) is valid. Due to the definitions of
$I_{5}^{(k)}$
and
$I_{6}^{(k)}$
in (4.73), we get

Inserting the above into (4.84) leads to

We put (4.87) into (4.84) again to arrive at

Repeating the above insertion process yields

One combines (4.86) and (4.89) to achieve for any
$(\xi ,\eta _1), (\xi ,\eta _2)\in [0,\delta ]\times \mathbb {R}$
,

Next, we prove that
$\widetilde {v}_{xx}^{(k)}(t,x) (k=0,1,\ldots )$
are uniformly Lipschitz continuous with respect to x. Recalling (4.7) and (4.18) gives

from which one has

Here, the term
$b_{x}^{(k)}$
is

which satisfies by (4.34) and (4.90)

Putting (4.94) into (4.92) and making use of (4.10) yield

which implies that the function
$\widetilde {v}_{xx}^{(k)}(t,x)$
satisfies

Finally, we verify that the functions
$\widetilde {R}_{\eta }^{(k)}(\xi ,\eta )$
and
$\widetilde {S}_{\eta }^{(k)}(\xi ,\eta ) (k=0,1,\ldots )$
are uniformly Lipschitz continuous with respect to
$\eta $
. By using (4.96), we re-estimate the terms
$I_{7}^{(k)}$
and
$I_{9}^{(k)}$
in (4.82) and (4.83) to see that for
$|\eta _1-\eta _2|\leq 1$
,

from which the terms
$I_{5}^{(k)}$
and
$I_{6}^{(k)}$
in (4.84) can be improved to

Based on (4.98), one employs the same argument as (4.89) to gain

for
$|\eta _1-\eta _2|\leq 1$
. Combining (4.90), (4.99), and (4.96) finishes the proof of the lemma.
4.5 The convergence of the iterative sequence (II)
In this subsection, we continue the discussion of Section 4.3 to establish the properties for the sequences
$\{(\widetilde {v}_{t}^{(k)}, \widetilde {v}_{xt}^{(k)}, \widetilde {v}_{xx}^{(k)})(t,x)\}$
in the space
$\Sigma (\delta )$
.
We first apply (4.18), (4.6)–(4.8), and (4.34) to deduce


and

where

Recalling the system (4.46) arrives at

from which, together with (4.53) and (4.49), we get

One utilizes Lemmas 4.3 and 4.4, (4.104), and (4.34) to acquire

We have by putting (4.105) into (4.100) and (4.102)

In order to proceed with the verification, it is necessary to estimate the terms
$T_{22,23,24}^{(k)}$
. Recalling system (4.71) leads to

where

We first recall (4.74) and (4.78) to obtain

Next, we are going to estimate the terms
$T_{25-29}^{(k)}$
. For the term
$T_{27}^{(k)}$
, it follows directly by Lemma 4.4 that

For the terms
$T_{28-29}^{(k)}$
, one recalls the definitions of
$\partial _\eta x_{\pm }^{(k)}$
in (4.72) to achieve

Moreover, it follows from the definitions of
$x_{\pm }^{(k)}(t;\xi ,\eta )$
in (4.48) that

which, together with (4.34) and Lemma 4.3, gives

Thus, we find by (4.111) that

Putting (4.112) into (4.110) and using (4.34) again yield

Finally, we estimate
$T_{25}^{(k)}$
by re-expressing it as

Recalling the expression of
$\partial _x F_{1}^{(k)}$
in (4.79) and making use of (4.34) and (4.51) arrive at

By the expressions of
$\Psi ^{(k)}$
in (4.47) and
$I_{11}^{(k)}$
in (4.79), we employ Lemmas 4.3 and 4.4 and (4.22), (4.34), and (4.51) again to gain

and

One inserts (4.116) and (4.117) into (4.115) to get

In addition, for the term
$T_{31}^{(k-1)}$
, we also use the expression of
$\partial _x F_{1}^{(k-1)}$
as in (4.79) to derive

Thus, it follows from Lemma 4.5, (4.34), (4.80), (4.81), and (4.112) that

One puts (4.118) and (4.120) into (4.114) to deduce

The above estimate is also true for the term
$T_{26}^{(k)}$
.
We now combine (4.107)–(4.109), (4.113), and (4.121) and apply the fact
$15e^{\widehat {M}_0\delta ^2}\leq 16$
to find that

and

Based on (4.122) and (4.123), we have the following lemma.
Lemma 4.6 For any
$k\geq 1$
and
$(\xi ,\eta )\in [0,\delta ]\times \mathbb {R}$
, if
$\widetilde {v}_{\eta \eta }^{(k)}(\xi ,\eta )$
satisfies

then there hold

Proof The proof of the lemma is similar to that of Lemma 4.3. According to the definitions of
$T_{22-24}^{(k)}$
, we first see by (4.34) that

which along with (4.123) gives

Thus,

Next, in view of (4.122) and the assumption (4.124), one has

by the fact
$\overline {\theta }\sqrt {\delta }\leq 1$
. We substitute (4.127) into (4.128) and (4.123) to obtain by
$\widehat {M}_0\delta \leq 1$
that

and

Hence, it follows by (4.129) and (4.130) that

Moreover, putting (4.131) into (4.128) and (4.123) again yields

and

One combines (4.132) and (4.133) to acquire

Therefore, by repeating the above process, we can achieve

for arbitrary integer
$\ell \geq 1$
. Due to the arbitrariness of
$\ell $
, it concludes by (4.135) that

which ends the proof of the lemma.
By virtue of (4.106) and Lemma 4.6, we have the following lemma.
Lemma 4.7 Let the iterative sequence
$\{\widetilde {v}^{(k)}\}$
be defined by (4.18). There hold

for all
$k\geq 0$
and
$(t,x)\in [0,\delta ]\times \mathbb {R}$
.
Proof The proof of the lemma is still based on the method of induction. Noting the initial iterative function
$\widetilde {v}^{(0)}(t,x)\equiv 0$
, it is easy to see by Lemma 4.1 that all inequalities in (4.137) are true for
$k=0$
.
Now, assume that (4.137) holds for
$n=k$
. Thus,

from which and Lemma 4.6 one has for
$n=k$
and
$(t,x)\in [0,\delta ]\times \mathbb {R}$
that

We insert (4.139) into (4.106) and apply (4.16) to get


and

4.6 The solutions of the hyperbolic–parabolic coupled problem
In this subsection, we complete the proof of Theorem 2.1 and then obtain Theorem 1.1. Thanks to Lemmas 4.4 and 4.7, one finds that the iterative sequence
$\{\widetilde {v}^{(k)}\}$
defined by (4.18) simultaneously satisfies

for all
$k\geq 0$
and
$(t,x)\in [0,\delta ]\times \mathbb {R}$
. It follows by (4.143) that the iterative sequence
$\{\widetilde {v}^{(k)}(t,x)\}$
is uniformly convergent in the space
$\Sigma (\delta )$
. We denote the limit function by
$\widetilde {v}(t,x)$
, which satisfies the following properties by Lemma 4.1:

From (4.144), we know that the limit function
$\widetilde {v}(t,x)$
is in the space
$\Sigma (\delta )$
. Based on this limit function
$\widetilde {v}(t,x)\in \Sigma (\delta )$
, the degenerate hyperbolic problem (3.2), (3.3) is solved as in Section 3 to obtain the variables
$(\widetilde {R}, \widetilde {S}, \theta )(t,x)$
. It is clear that the functions
$(\widetilde {v}, \widetilde {R}, \widetilde {S}, \theta )(t,x)$
satisfy the initial value conditions (2.6) and the last three equations in (2.7). Furthermore, due to the construction of the sequence
$\{\widetilde {v}^{(k)}\}$
in (4.18), the limit function
$\widetilde {v}(t,x)$
obviously satisfies the integral equation

By combining the integral equation (4.145) and the regularity of
$\widetilde {v}(t,x)$
in (4.144), we see that the function
$\widetilde {v}(t,x)$
satisfies the first differential equation in (2.7). Therefore, the functions
$(\widetilde {v}, \widetilde {R}, \widetilde {S}, \theta )(t,x)$
are the classical solution to the singular Cauchy problem (2.7), (2.6). In addition, we set

then apply (4.144) and Theorem 3.1 to acquire

for any
$(t,x)\in [0,\delta ]\times \mathbb {R}$
, which are the desired inequalities in (2.9). This ends the existence proof of Theorem 2.1.
Next, we consider the uniqueness by estimating the difference of solutions. Assume that
$\widetilde {v}_a, \widetilde {v}_b$
are any two elements in the space
$\Sigma (\delta )$
, and
$(\widetilde {v}_a, \widetilde {R}_a, \widetilde {S}_a, \theta _a)(t,x)$
,
$(\widetilde {v}_b, \widetilde {R}_b, \widetilde {S}_b, \theta _b)(t,x)$
are two solutions of (2.7). Set
$(\widehat {v}, \widehat {R}, \widehat {S}, \widehat {\theta })(t,x)=(\widetilde {v}_a-\widetilde {v}_b, \widetilde {R}_a-\widetilde {R}_b, \widetilde {S}_a-\widetilde {S}_b, \theta _a-\theta _b)(t,x)$
. By direct calculation, the functions
$(\widehat {v}, \widehat {R}, \widehat {S}, \widehat {\theta })$
satisfy the following homogeneous integral inequality system:

Here, we omit the derivation process of system (4.147), since it is very similar to that of equations (4.44) and (4.45), and (4.54) and (4.55). According to the facts
$\widetilde {v}_a, \widetilde {v}_b\in \Sigma (\delta )$
, one employs (4.144) and (4.34) to show that the difference functions
$(\widehat {v}, \widehat {R}, \widehat {S}, \widehat {\theta })$
satisfy

Making use of (4.148) at the first step and then repeating the insertion of the right side of (4.147), we can find that the functions
$(\widehat {v}, \widehat {R}, \widehat {S}, \widehat {\theta })$
must satisfy inequalities of the form

for arbitrary integer
$\ell \geq 1$
and some positive constant
$\widehat {M}^*$
. It is obvious by (4.149) that there holds
$\widehat {v}=\widehat {R}=\widehat {S}=\widehat {\theta }\equiv 0$
, which yields the uniqueness of classical solutions of the Cauchy problem (2.7), (2.6). Hence, the proof of Theorem 2.1 is completed.
Based on Theorem 2.1 and the transformation (2.5), one obtains the existence and uniqueness of classical solutions for the Cauchy problem (2.3), (2.4). Finally, by the relation
$\theta _x=(\widetilde {R}-\widetilde {S})/2\theta $
in (3.123) and the transformation (2.1), we conclude that the two problems (2.3), (2.4) and (1.7), (1.8) are equivalent. Therefore, the proof of Theorem 1.1 is complete.
Acknowledgment
The author would like to thank the editor and referees for very helpful comments and suggestions to improve the quality of the paper.