1. Introduction
The aim of this paper is to study linear symmetric hyperbolic systems with damping, differential constraints and delay. Differential constraints for the states occur naturally in certain models in fluid dynamics and electromagnetism. They appear in the system itself, for example in the Euler–Maxwell system, or they are introduced to factor out spurious solutions as in the case of the wave equation. In this work, we consider the multidimensional hyperbolic system
for $t > 0$, $x \in \mathbb {R}^d$ and $\theta \in (-\tau,\,0)$ with unknown state $u : (0,\,\infty ) \times \mathbb {R}^d \to \mathbb {R}^n$. The positive integers $d$ and $n$ represent the dimension and the number of equations in the system. For this system, $u_0$ and $z_0$ correspond to the initial data and initial history, respectively. Our main concern is to develop a well-posedness theory, provide sufficient conditions that lead to the asymptotic stability of the solutions, and determine the decay structure. The positive constant $\tau$ represents a delay.
In (1.1), we assume that all of the coefficient matrices have real entries. The matrices $L$, $M$ and $A^j$ for $0 \leq j \leq d$ have size $n\times n$, while the matrices $R$ and $Q^j$ for $1 \leq j \leq d$ have size $n_1 \times n$, where $n_1$ represents the number of constraints. Here, $L$ and $M$ will be referred to as the damping (or relaxation) and delay matrices, respectively. The matrices $Q^j$ and $R$ are allowed to vanish, in which case we simply have a hyperbolic system with delay. Throughout, we suppose that (1.1) is symmetric, that is, $A^j$ is symmetric for all $0\leq j \leq d$. Moreover, we assume that $A^0$ is positive definite.
The physical models we often encounter deal with the case where the state is independent of the past. However, in some situations this is only an approximation, and a more realistic setting includes the dependence of the dynamics on past states. For this reason, one could incorporate delay in the system and study its effect. The study of delay in partial differential equations first attracted attention in control theory, specifically in the boundary feedback stabilization of the one-dimensional wave equation. It has been shown in [Reference Datko5, Reference Datko, Lagnese and Polis6] that the presence of delay in the boundary feedback for the string equation can lead to instability. These works have been extended to the multidimensional setting in [Reference Nicaise and Pignotti15]. Roughly, if the damping dominates the delay factor, then the energy of the solutions for the wave equation tends to zero exponentially. The delay in the damping occurs either in the interior or on the boundary. In the event where the damping and delay factors are equal, there are solutions whose energy is conserved. The proofs rely on semigroup and energy methods, observability estimates and a compactness-uniqueness argument.
The main goal of this paper is to determine sufficient conditions on the damping and delay matrices in (1.1) in order for its solution to be stable for every delay $\tau > 0$. Our structural condition, see condition (M) below, is similar to the one stated in [Reference Hale8] for systems of differential equations with delay.
By introducing a state variable that keeps track of the history, system (1.1) will be expressed as a hyperbolic system coupled to a transport system with parameter. For partial differential equations with delay on a bounded domain, for example, the wave, heat and Schrödinger equations, the existence and uniqueness of solutions can be obtained through semigroup methods, Kato's theorem for evolution equations and Faedo–Galerkin approximations, see [Reference Fridman, Nicaise and Valein7, Reference Kirane and Said-Houari12, Reference Nicaise and Pignotti15–Reference Nicaise, Valein and Fridman19] to name a few. The approach we shall pursue here is based on the Friedrichs method. The basic idea is to derive a priori estimates for suitably smooth functions and apply a duality argument. Weak solutions for rough data are formulated through a variational equation. The corresponding results rely on the well-posedness theory for hyperbolic systems as well as for a decoupled system of transport equations with parameter. For completeness and clarity, we present the results of the latter.
For data that are smooth and compatible, we expect better regularity for the solutions. This will be proved by a standard approximation argument and the a priori estimates for hyperbolic operators in Sobolev spaces. We would like to note that the advantage of the Friedrichs method is its applicability even in the case of variable coefficients, see for instance [Reference Benzoni-Gavage and Serre3]. Another reason for using this method is the following: for hyperbolic partial differential equations, there is a trade-off in the regularity between time and space. The higher the regularity with respect to time, the fewer spatial derivatives are available. For the delay variable, which satisfies a hyperbolic partial differential equation with parameter, the trade-off now involves three quantities, namely time, space and the history variable.
The plan of the paper is as follows. In § 2, we present the suitable conditions for the matrices involved in (1.1) that guarantee stability. The well-posedness of transport equations with parameter and hyperbolic systems with delay will be developed in § 3 and 4, respectively. In §5, 6, and 8, we establish the asymptotic stability, standard decay estimates, and regularity-loss type estimates. Specific examples that illustrate our results are provided in § 7 and 8. These are the wave, Timoshenko, and Euler–Maxwell systems with delay.
The Sobolev space $W^{k,p}(\mathbb {R}^d)$ will be simply denoted by $W^{k,p}$ and $H^k := W^{k,2}$. We let $H^\infty := \bigcap _{k = 0}^\infty H^k$. If $X$ is a Banach space and $m$ is a nonnegative integer, then $C^m(0,\,T;X)$ is the space of functions from $[0,\,T]$ into $X$ whose derivatives up to order $m$ are continuous. We shall also use the shorthand $W_\theta ^{k,p}(W^{j,q}) := W^{k,p}(-\tau,\,0;W^{j,q}(\mathbb {R}^d))$. For example, $L^2_\theta (H^k) := L^2(-\tau,\,0;H^k(\mathbb {R}^d))$. Depending on the context, $\langle \cdot,\, \cdot \rangle$ denotes the inner product in $\mathbb {C}^n$ or $\mathbb {R}^n$. The gradient of a function $u : \mathbb {R}^d \to \mathbb {R}^n$ is denoted by $\partial _x u := (\partial _{x_1}u,\, \ldots,\, \partial _{x_d}u)^T$, where the superscript $T$ represents transposition.
2. Structural conditions on the coefficient matrices
In this section, we list the structural assumptions on the coefficient matrices that will guarantee the stability of the solutions of system (1.1). We follow the presentation in [Reference Ueda, Duan and Kawashima29]. The principal symbol of (1.1) is given by $iA(\xi )$, where $A(\xi ) := A^1\xi _1 + \cdots + A^d \xi _d$ for $\xi := (\xi _1,\, \ldots,\, \xi _d)^T \in \mathbb {R}^d$. Similarly, we define by $iQ(\xi ) := i(Q^1\xi _1 + \cdots + Q^d\xi _d)$ the principal symbol of the constraint. The unit sphere in $\mathbb {R}^d$ is denoted by $\mathbb {S}^{d-1}$. Given a square real matrix $A$, the symmetric and skew-symmetric parts of $A$ are given by $A_1 := (A+A^T)/2$ and $A_2 := (A-A^T)/2$, respectively, so that $A = A_1 + A_2$. The orthogonal projection of $\mathbb {C}^n$ onto the orthogonal complement of the kernel of $A$ will be denoted by $P_A$. Equivalently, $P_A$ is the orthogonal projection onto the range of $A^T$, and as a consequence, $I-P_A$ is the orthogonal projection onto the kernel of $A$. Recall that $P_A$ and $I-P_A$ are symmetric matrices.
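As a small numerical aside (our own illustration, not part of the paper), the projection $P_A$ can be computed concretely from a singular value decomposition; the matrix $A$ below is an arbitrary rank-one example.

```python
import numpy as np

# Toy illustration of the projection P_A onto Ker(A)^perp = Ran(A^T).
# The matrix A is our own example (rank 1), not from the paper.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# The first r rows of Vt form an orthonormal basis of Ran(A^T).
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-12))
B = Vt[:r].T                       # columns span Ran(A^T)
PA = B @ B.T                       # orthogonal projection onto Ran(A^T)

# P_A is symmetric and idempotent, and I - P_A maps into Ker(A).
assert np.allclose(PA, PA.T)
assert np.allclose(PA @ PA, PA)
assert np.allclose(A @ (np.eye(2) - PA), 0)
```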
For the damping or relaxation matrix $L$, we impose the following condition.
(L) The matrix $L$ is nonnegative and has a nontrivial kernel.
It is not assumed that the relaxation matrix $L$ is symmetric. Thus, condition (L) provides dissipation only in the orthogonal complement of the kernel of $L_1$. To obtain dissipation terms in the space $\text {Ker}(L)^\perp$, we introduce the compensating matrix $S$ as in [Reference Ueda, Duan and Kawashima29].
(S) There exists a real $n \times n$ matrix $S$ such that $SA^0$ is symmetric, $(SL+L)_1 \geq 0$, $\text {Ker}((SL+L)_1) = \text {Ker}(L)$, and
(2.1)\begin{equation} \langle SMz,u\rangle = 0 \quad \text{for all }(z,u) \in \mathbb{C}^n \times \text{Ker}(L). \end{equation}
Equation (2.1) means that the range of $SM$ and the kernel of $L$ are orthogonal. With respect to the delay matrix $M$, we assume the following condition.
(M) There exist real $n\times n$ symmetric matrices $G$ and $N$ such that $GA^0$ is symmetric positive definite, $N$ is positive definite on $\text {Ker}(M)^\perp$, $\text {Ker}((GL)_1) = \text {Ker}(L_1)$,
\[ \langle GMz,u\rangle = 0 \quad \text{for all }(z,u) \in \mathbb{C}^n \times \text{Ker}(L_1), \]and the symmetric block matrix\[ \Psi_{G,N,M} := \left( \begin{array}{@{}cc@{}} 2(GL)_1 - P_MNP_M & GM \\ M^TG & N \end{array} \right) \]is positive definite on $\text {Ker}(L_1)^\perp \times \text {Ker}(M)^\perp$.
This condition is similar to the one presented in [Reference Hale8, p. 107]. Due to the possible degeneracy of the matrices $L$ and $M$, positivity is only assumed on the orthogonal complements of their kernels. If $M$ vanishes, the case when there is no delay, one can see that condition (M) follows from condition (L) by taking $G = I$ and $N = L_1$. Also, condition (M) implies that $\text {Ker}(L_1) \subset \text {Ker}(M)$. Indeed, suppose that $u \in \text {Ker}(L_1)$. Then, for some constant $c_N > 0$ it holds that $c_N|P_Mu|^2 \leq \langle NP_Mu,\,P_Mu\rangle \leq 2\langle (GL)_1u,\,u\rangle = 0$ because the kernels of $L_1$ and $(GL)_1$ coincide. Thus, $P_M u = 0$, which implies that $u\in \text {Ker}(M)$.
The constraint in (1.1) will be satisfied for all $t > 0$ as soon as the initial data satisfies it and the matrices appearing in the constraint, as well as those in the PDE, satisfy certain conditions. For this, we consider the following assumption.
(Q) The matrices $Q(\omega )$ and $R$ satisfy
\begin{align*} & Q(\omega)(A^0)^{{-}1}A(\omega) = R(A^0)^{{-}1}L = R(A^0)^{{-}1}M = 0,\\ & Q(\omega)(A^0)^{{-}1}L + R(A^0)^{{-}1}A(\omega) = Q(\omega)(A^0)^{{-}1}M = 0, \end{align*}for every $\omega \in \mathbb {S}^{d-1}$.
We denote by $\varPi _1$ the orthogonal projection of $\mathbb {C}^n$ onto the image of $R$, and hence $\varPi _2 := I - \varPi _1$ is the orthogonal projection onto the kernel of $R^T$. To derive energy estimates for the derivatives of the state components, we need the following condition, which is referred to as the Shizuta–Kawashima condition [Reference Shizuta and Kawashima27].
(K) There exist $n\times n$ real matrices $K^l$ for $1 \leq l \leq d$ such that $K^lA^0$ is skew-symmetric for all $l$ and
\[ \sum_{j,l = 1}^d (K^l A^j)_1\omega_j\omega_l > 0 \quad\text{on } \text{Ker}(\varPi_2 Q(\omega))\cap \text{Ker}(L) \]for every $\omega := (\omega _1,\,\ldots,\,\omega _d) \in \mathbb {S}^{d-1}$.
Conditions (S) and (K) imply the existence of a constant $\vartheta > 0$ such that
for every $\omega \in \mathbb {S}^{d-1}$.
Our final set of assumptions deals with conditions that will determine the decay structure of (1.1). For a standard decay, the following assumption is sufficient.
(S)$_s$ There exists a real $n_1\times n_1$ matrix $W$ with $W_1 \geq 0$ on the image of $R$ and
\[ i(SA(\omega) - Q(\omega)^T\varPi_1WR)_2 \geq 0 \quad \text{on } \mathbb{C}^n \]for every $\omega \in \mathbb {S}^{d-1}$, where $S$ is the matrix in condition (S).
A weaker version of the previous condition is the following, whose corresponding decay will be of regularity-loss type. This means that we need additional regularity for the initial data to obtain stability of solutions.
(S)$_r$ There is an $n_1\times n_1$ real matrix $W$ such that $W_1 \geq 0$ on the image of $R$ and
\[ i(SA(\omega) - Q(\omega)^T\varPi_1 WR)_2 \geq 0 \qquad \text{on } \text{Ker}(L_1) \]for every $\omega \in \mathbb {S}^{d-1}$, where $S$ is the matrix in condition (S).
Both conditions (S)$_s$ and (S)$_r$ were introduced in [Reference Ueda, Duan and Kawashima29]. The rest of the section will be devoted to studying condition (M), specifically the block matrix $\Psi _{G,N,M}$. The first observation is that the positivity of $\Psi _{G,N,M}$ is equivalent to its positivity with respect to $\text {Ker}(L_1)^\perp$ with a possibly different matrix $N$.
Theorem 2.1 Let $G$ be a real symmetric matrix as in condition (M). Then, there is an $n\times n$ symmetric matrix $N$ that is positive definite on $\textit {Ker}(M)^\perp$ such that
for some $\alpha > 0$ and for every $(u,\,z) \in \mathbb {C}^n \times \mathbb {C}^n$ if and only if there is an $n\times n$ symmetric matrix $\widetilde {N}$ which is positive definite on $\textit {Ker}(M)^\perp$ such that
for some $\widetilde {\alpha } > 0$ and for every $(u,\,z) \in \mathbb {C}^n\times \mathbb {C}^n$.
Proof. One can see that (2.3) implies (2.4) by taking $N = \widetilde {N}$. For the other direction, let $N = \widetilde {N} + \varepsilon P_M$ where $\varepsilon > 0$. The block matrices associated with $N$ and $\widetilde {N}$ are related by
As before, (2.4) implies that the kernel of $L_1$ lies in the kernel of $M$ and as a consequence $\langle P_Mu,\,u\rangle = \langle P_MP_{L_1}u,\,P_{L_1}u\rangle$. By choosing $\varepsilon < \widetilde {\alpha } / \|P_M\|$, where $\|\cdot \|$ is the operator norm, we have $\widetilde {\alpha }|P_{L_1}u|^2 - \varepsilon \langle P_MP_{L_1}u,\,P_{L_1}u \rangle \geq (\widetilde {\alpha } - \varepsilon \|P_M\|)|P_{L_1} u|^2 > 0$. Then, we can see that (2.4) implies (2.3) with $\alpha = \widetilde {\alpha } - \varepsilon \|P_M\|$.
If the delay matrix is symmetric and nonnegative, then a sufficient condition for the positivity of the block matrix in condition (M) is given by the following theorem.
Theorem 2.2 Suppose that $M \geq 0$ is symmetric and $L_1 - M > 0$ on $\textit {Ker}(L_1)^\perp$. Then, $\Psi _{I,M,M} > 0$ on $\textit {Ker}(L_1)^\perp \times \textit {Ker}(M)^\perp$.
Proof. Given $(u,\,z)\in \mathbb {C}^n \times \mathbb {C}^n$ we have
By the symmetry of the delay matrix $M$, we obtain $\langle Mz,\,z\rangle = \langle MP_Mz,\,P_Mz\rangle$ and $\langle Mu,\,z\rangle = \langle MP_Mu,\,P_Mz\rangle$, and therefore, by the Cauchy–Schwarz inequality we have
Using (2.6) in (2.5) yields the estimate
for some $\widetilde {\alpha } > 0$. The conclusion now follows from theorem 2.1.
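As a sanity check on theorem 2.2, the following numerical sketch verifies its conclusion on a toy example (the matrices and the computation are our own choices; the paper contains no such experiment). It assumes $L_1 - M$ positive definite on $\text{Ker}(L_1)^\perp$ and tests $\Psi_{I,M,M} > 0$ on $\text{Ker}(L_1)^\perp \times \text{Ker}(M)^\perp$.

```python
import numpy as np

# Toy example with n = 2.  Here L1 denotes the symmetric part of L.
L1 = np.diag([2.0, 0.0])          # damping: positive on Ker(L1)^perp
M  = np.diag([1.0, 0.0])          # symmetric nonnegative delay matrix

def basis_ker_perp(A, tol=1e-10):
    """Orthonormal basis of Ker(A)^perp = Ran(A^T), via the SVD."""
    _, s, Vt = np.linalg.svd(A)
    return Vt[:int(np.sum(s > tol))].T

B1 = basis_ker_perp(L1)           # basis of Ker(L1)^perp
B2 = basis_ker_perp(M)            # basis of Ker(M)^perp
PM = B2 @ B2.T                    # orthogonal projection P_M

# Hypothesis: L1 - M positive definite on Ker(L1)^perp.
assert np.all(np.linalg.eigvalsh(B1.T @ (L1 - M) @ B1) > 0)

# Block matrix Psi_{I,M,M} (condition (M) with G = I and N = M).
Psi = np.block([[2.0 * L1 - PM @ M @ PM, M],
                [M.T,                    M]])

# Restrict the quadratic form to Ker(L1)^perp x Ker(M)^perp and
# check positive definiteness there.
B = np.zeros((4, B1.shape[1] + B2.shape[1]))
B[:2, :B1.shape[1]] = B1
B[2:, B1.shape[1]:] = B2
eigs = np.linalg.eigvalsh(B.T @ Psi @ B)
assert np.all(eigs > 0)           # conclusion of theorem 2.2
```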
We close this section by proving the invariance of condition (M) under a class of orthogonal matrices.
Theorem 2.3 Let $J$ be a real orthogonal $n\times n$ matrix, that is, $J^TJ = I$, such that
(a) $J(\textit {Ker}(M)) = \textit {Ker}(M)$ and $J(\textit {Ker}(M)^\perp ) = \textit {Ker}(M)^\perp$
(b) $\langle NP_Mu,\,P_Mu\rangle = \langle NP_MJu,\,P_MJu\rangle$ for all $u\in \mathbb {C}^n$.
If $M$ satisfies condition (M), then so does $MJ$. In particular, $-M$ satisfies condition (M) if and only if $M$ satisfies the condition.
Proof. Property (a) implies that the kernels of $MJ$ and $M$ coincide, and in particular, $\text {Ker}(MJ)^\perp = \text {Ker}(M)^\perp$ and $P_{MJ} = P_M$. Given $u \in \mathbb {C}^n$ there holds
since $JP_Mu \in \text {Ker}(M)^\perp$ and $J(I-P_M)u \in \text {Ker}(M)$. Hence, $P_M$ and $J$ commute, and consequently, $P_M$ and $J^T$ also commute by symmetry of $P_M$. If $(u,\,z)\in \mathbb {C}^n \times \mathbb {C}^n$, then we derive from (b) and the preceding statement that
Using condition (M) and the fact that $P_M$ and $J$ commute, we can see from these equations that $\Psi _{G,J^TNJ,MJ} > 0$ on $\text {Ker}(L_1)^\perp \times \text {Ker}(M)^\perp$. Finally, since $J$ is bijective, it follows that $\langle GMJz,\, u \rangle = 0$ for every $z \in \mathbb {C}^n$ and $u \in \text {Ker}(L_1)$. These prove that $MJ$ satisfies condition (M).
Notice that $M$ satisfies (2.1) if and only if $MJ$ satisfies $\langle SMJz,\,u\rangle = 0$ for every $z \in \mathbb {C}^n$ and $u \in \text {Ker}(L)$. Also, the conditions involving the matrix $M$ in hypothesis (Q) hold if and only if those conditions are satisfied by $MJ$. The previous theorem together with the above remark implies the stability of (1.1), with $M$ replaced by $MJ$, where $J$ is an orthogonal matrix satisfying (a) and (b), provided that the original system with delay matrix $M$ is also stable.
3. Transport equations with parameter
The goal of the present section is to discuss the well-posedness of the following transport equation with parameter
that will be useful in the study of system (1.1). Here, $a$ is a fixed real number and $z : (0,\,T) \times (-\tau,\,0) \times \mathbb {R}^d \to \mathbb {R}^n$ is the unknown state. Such an equation occurs once we introduce a state component that keeps track of the history. We would like to point out that the results in this section are analogous to those for the usual transport equation. However, for clarity in the development of the well-posedness for (1.1) and for future reference, we decided to include them here. Define the differential operator
whose formal adjoint is given by $\mathscr {L}_1^*z := -\partial _t z + \partial _\theta z + a z$.
First, we start with the definition of a weak solution for given square integrable data $v \in L^2(0,\,T;L^2)$ and $z_0 \in L^2_\theta (L^2)$. A function $z \in L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$ is called a weak solution of (3.1) if the variational equation
holds for every $\psi \in L^2(\mathbb {R}^d;H^1((0,\,T)\times (-\tau,\,0)))$ such that $\psi _{|t= T} = 0$ and $\psi _{|\theta = -\tau } = 0$.
It is clear that every classical solution is also a weak solution. The existence of weak solutions will be obtained using the following result in [Reference Peralta and Propst22], inspired by the work of Friedrichs.
Theorem 3.1 Let $X$ and $Z$ be Hilbert spaces, $Y$ be a subspace of $X$, and $\Lambda : Y \to X$, $\Psi : Y \to Z$, $\Phi : Y \to Z$ be linear operators. Suppose that $W = \text {Ker}(\Phi )$ and $\Lambda (W)$ are nontrivial. If there exist $\gamma > 0$ and $C > 0$ such that
then the variational equation
for a given $(F,\,G) \in X \times Z$ has a solution $u \in X$. In addition, the solution is unique if and only if $\Lambda (W)$ is dense in $X$.
Applying the above result requires some a priori estimate. First, let us derive the estimate associated with $\mathscr {L}_1$. For a smooth function $\psi$, we multiply both sides of equation (3.2) by $e^{-2\gamma t}\psi$, where $\gamma \geq 1$ is a constant to be chosen below, to obtain
Integrating this equation over $(0,\,\sigma )\times (-\tau,\,0) \times \mathbb {R}^d$, applying Young's inequality to the right-hand side, and then choosing $\gamma _0 \geq 1$ sufficiently large, we have
for every $\sigma \in [0,\,T]$, for every $\gamma \geq \gamma _0$, and for some $C > 0$. By a density argument, (3.6) is satisfied for every $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$. The dual version of this estimate is the following: for every $\gamma \geq \gamma _0^*$ and $\sigma \in [0,\,T]$ it holds that
for some constants $C > 0$ and $\gamma _0^* \geq 1$, and for every $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$.
For regular and compatible data, the weak solution has additional regularity. For this we need a priori estimates in terms of the Sobolev norms. Given $0 \leq j \leq m$, if we replace $\psi$ by $\partial _t^j\partial _\theta ^k\partial _x^\ell \psi$ in (3.6), take the sum over all $0\leq k\leq m-j$, $0\leq j\leq m$ and $0 \leq \ell \leq s$, and then finally take the supremum over all $\sigma \in [0,\,T]$, we obtain the weighted a priori estimate
for every $\psi \in H^{m+1}((0,\,T)\times (-\tau,\,0);H^s).$
Theorem 3.2 Given $z_0 \in L^2_\theta (L^2)$ and $v \in L^2(0,\,T;L^2)$, equation (3.1) admits a unique weak solution.
Proof. Let $X = L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$, $Y = L^2(\mathbb {R}^d; H^1((0,\,T)\times (-\tau,\,0)))$, and $Z = L^2(0,\,T;L^2) \times L_\theta ^2(L^2)$. Define the operators $\Lambda : Y \to X$, $\Psi : Y \to Z$, and $\Phi : Y \to Z$ as follows:
for $\psi \in Y$. The variational equation (3.3) can now be written in form (3.5). From the a priori estimate (3.7), one can see that (3.4) is satisfied, and therefore by theorem 3.1, (3.1) has a weak solution.
To establish uniqueness, we proceed by a duality argument. Suppose that $z_1$ and $z_2$ are two weak solutions and let $z := z_1 - z_2$. Then, it follows that
for every $\psi \in \text {Ker}(\Phi )$. Let $(\phi _n)_{n=1}^\infty$ be a sequence of infinitely differentiable functions with compact support in $(0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d$ such that $\phi _n \to z$ in $L^2((0,\,T)\times (-\tau,\,0)\times \mathbb {R}^d)$. The backward-in-time transport equation
has a classical solution, so that $\psi _n \in Y$ for each $n$. Using this test function in (3.9) and then passing to the limit, we see that $z = 0$ almost everywhere. Therefore, the weak solution of (3.1) is unique.
To prove regularity of the solutions, the following observation will be useful. Given $z_0$, we define recursively $z_j := \partial _\theta z_{j-1} - a z_{j-1}$. We say that the data $(z_0,\,v)$ is compatible up to order $k-1$ if $\partial _t^j v_{|t=0} = z_{j|\theta = 0}$ for every $0 \leq j \leq k-1$.
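The recursion can be checked symbolically on a toy example. The sketch below (our own illustration; the history $z_0(\theta ) = e^{2\theta }$ and the use of SymPy are arbitrary choices) verifies that the boundary value generated by a smooth solution of (3.1), with $\mathscr {L}_1 z = \partial _t z - \partial _\theta z + az$ as read off from its formal adjoint, is compatible with $z_0$ at every order:

```python
import sympy as sp

# Toy symbolic check (our own example): for a smooth solution of the
# transport equation, d_t^j v|_{t=0} equals z_j|_{theta=0}, where
# z_j := d_theta z_{j-1} - a z_{j-1}.
t, theta, a = sp.symbols('t theta a')
z0 = sp.exp(2 * theta)                           # toy initial history
z = sp.exp(-a * t) * z0.subs(theta, theta + t)   # smooth solution of L1 z = 0
v = z.subs(theta, 0)                             # boundary value v(t) = z(t, 0)

zj = z0
for j in range(4):
    lhs = sp.diff(v, t, j).subs(t, 0)
    rhs = zj.subs(theta, 0)
    assert sp.simplify(lhs - rhs) == 0           # compatibility at order j
    zj = sp.diff(zj, theta) - a * zj
```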
Theorem 3.3 Let $k$ and $s$ be nonnegative integers and let the pair $(z_0,\,v) \in H^k_\theta (H^s) \times H^k(0,\,T;H^s)$ be compatible up to order $k-1$ whenever $k \geq 1$. Then there is a sequence $(z_{0n},\,v_n) \in H^{k+1}_\theta (H^{s+1}) \times H^{k+1}(0,\,T;H^{s+1})$ compatible up to order $k$ for every $n$ and $(z_{0n},\,v_n) \to (z_0,\,v) \text { in } H^k_\theta (H^s) \times H^k(0,\,T;H^s).$
Proof. Let $\rho _\varepsilon$ be a standard mollifier with respect to $x$, that is, $\rho _\varepsilon (x) := \varepsilon ^{-d}\rho ({x}/{\varepsilon })$ where $\rho \in \mathscr {D}(\mathbb {R}^d)$ satisfies $\int _{\mathbb {R}^d} \rho _\varepsilon (x) {\rm d} x = 1$, and let $R_\varepsilon v := \rho _\varepsilon \ast v$. Then, the regularized data $(R_\varepsilon z_0,\, R_\varepsilon v) \in H^k_\theta (H^\infty ) \times H^k(0,\,T;H^\infty )$ is still compatible up to order $k-1$ and
as $\varepsilon \to 0$. For a fixed $\varepsilon > 0$, take a sequence $(z_{0n}^\varepsilon,\, v_{n1}^\varepsilon ) \in H^{k+1}_\theta (H^\infty ) \times H^{k+1}(0,\,T;H^\infty )$ such that as $n\to \infty$ there holds
For example, we first extend $R_\varepsilon z_0$ to a function in $H^k(\mathbb {R};H^\infty )$ by a standard reflection argument, see [Reference Adams1] for instance, and if $\widetilde {R}_\delta$ is the corresponding convolution operator with respect to $\theta$, then we may take $z_{0n}^\varepsilon$ to be the restriction of $\widetilde {R}_{1/n} (R_\varepsilon z_0)$ to $(-\tau,\,0)$.
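A discrete one-dimensional version of this mollification step might look as follows (a sketch with our own grid, bump profile and data; not part of the paper):

```python
import numpy as np

# One-dimensional sketch (d = 1) of the mollification R_eps v = rho_eps * v.
# The bump profile, grid and data v are our own toy choices.
def rho(x):
    """Smooth bump supported in (-1, 1)."""
    out = np.zeros_like(x, dtype=float)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

x = np.linspace(-3.0, 3.0, 2001)
dx = x[1] - x[0]

eps = 0.25
rho_eps = rho(x / eps) / eps              # rho_eps(x) = rho(x/eps)/eps
rho_eps /= rho_eps.sum() * dx             # normalize: discrete integral = 1

v = np.sign(x)                            # rough data with a jump at x = 0
R_eps_v = np.convolve(v, rho_eps, mode='same') * dx   # smooth approximation
```

Shrinking $\varepsilon$ recovers $v$ in $L^2$, which is the convergence used for the regularized data; the same construction with respect to $\theta$ gives the operator $\widetilde {R}_\delta$.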
Define $v_n^\varepsilon := v_{n1}^\varepsilon - v_{n2}^\varepsilon$, where $v_{n2}^\varepsilon \in H^{k+1}(0,\,T;H^{s+1})$ is a function, to be constructed below, satisfying $v_{n2}^\varepsilon \to 0$ in $H^{k+1}(0,\,T;H^{s+1})$. For each $0 \leq j\leq k$, define
From the compatibility conditions for the data $(R_\varepsilon z_0,\, R_\varepsilon v)$, we have $\sigma _{nj} \to 0$ in $H^{k - j + s + {3}/{2}}$ as $n\to \infty$ for every $0\leq j < k$. According to trace theory, for each $n$ there exists $h_n \in H^{k+s+2}((0,\,T)\times \mathbb {R}^d) \subset H^{k+1}(0,\,T;H^{s+1})$ such that $\partial _t^jh_{n|t=0} = \sigma _{nj}$ for all $0 \leq j < k$, $\partial _t^k h_{n|t=0} = 0$, and $h_n \to 0$ in $H^{k+1}(0,\,T;H^{s+1})$.
Let $v_{n2}^\varepsilon := h_n + g_n$, where $g_n := \widetilde {g}_n \otimes \sigma _{n,k}$ and $\widetilde {g}_n \in H^{k+1}(0,\,T)$ satisfies
For the construction of $\widetilde {g}_n$, we refer to [Reference Peralta and Propst22, Reference Rauch and Massey25]. If $j < k$, then
Also, $\partial _t^k v^\varepsilon _{n|t=0} = \partial _t^k v^\varepsilon _{n1|t=0} - \sigma _{nk} = z^\varepsilon _{0nk|\theta = 0}.$
We now construct the sequence $(z_{0n},\,v_n)$ as follows. Given a positive integer $n$, let $(z_{0n},\,v_n) := (z_{0N}^{1/n},\,v_N^{1/n})$, where $N = N(n)$ is chosen sufficiently large such that
From the above construction, we can see that the pair $(z_{0n},\,v_n)$ satisfies the desired properties.
If the function $z_0$ in the previous theorem satisfies $z_0 \in L^2_\theta (L^1)$, then we have $R_\varepsilon z_0 \to z_0$ in $L_\theta ^2(L^1)$. Now, for a fixed $\varepsilon > 0$, it holds that $\widetilde {R}_{1/n}(R_\varepsilon z_0) \to R_\varepsilon z_0$ in $L^2(\mathbb {R};L^1)$, see for example [Reference Arendt, Batty, Hieber and Neubrander2, Section 1.3]. In particular, we have $z_{0n}^\varepsilon \to R_\varepsilon z_0$ in $L_\theta ^2(L^1)$ and by the same argument as above, we can choose $z_{0n}$ such that $z_{0n} \in L^2_\theta (L^1)$ for every $n$ and $z_{0n} \to z_0$ in $L_\theta ^2(L^1)$. With a diagonalization argument we obtain the following.
Corollary 3.4 Given $(z_0,\,v) \in L^2_\theta (L^2) \times L^2(0,\,T;L^2)$ and positive integers $k$ and $s$, there exists a sequence of data $(z_{0n},\,v_n) \in H^k_\theta (H^s) \times H^k(0,\,T;H^s)$ compatible up to order $k-1$ for each $n$ and
Moreover, if $z_0 \in L_\theta ^2(L^1)$, then $z_{0n}$ can be chosen to be an element of $L_\theta ^2(L^1)$ and $z_{0n} \to z_0$ in $L^2_\theta (L^1)$.
Let $\{t> -\theta \} := \{(t,\,\theta ) : t > -\theta \}$ and $\{t< -\theta \} := \{(t,\,\theta ) : t < -\theta \}$. Notice that the weak solution $z$ of (3.1) as well as $\mathscr {L}_1z$ lie in $L^2(\mathbb {R}^d;L^2((0,\,T)\times (-\tau,\,0)))$, and hence, a priori we have the trace regularity $z_{|\theta =-\tau },\, z_{|\theta = 0} \in L^2(\mathbb {R}^d;H^{-1/2}(0,\,T))$. Now we show that in fact they are both in $L^2(0,\,T;L^2)$ and that the weak solution coincides with the one given by the method of characteristics. The former property is sometimes called hidden regularity in the control theory literature.
Theorem 3.5 The weak solution of system (3.1) satisfies $z \in C(0,\,T;L^2_\theta (L^2))$, $z_{|\theta = -\tau }$, $z_{|\theta = 0} \in L^2(0,\,T;L^2)$ and the following energy estimate
holds for every $\gamma \geq \gamma _0$ and for some constants $C> 0$ and $\gamma _0 \geq 1$. The weak solution is given explicitly by
Proof. By choosing $k$ and $s$ sufficiently large in corollary 3.4, one can construct a sequence $(z_{0n},\,v_n)$ of continuously differentiable data that are compatible up to order 1 and tends to $(z_0,\,v)$ in $L^2_\theta (L^2) \times L^2(0,\,T;L^2)$. For example one may take $k = 2$ and $s > {d}/{2} + 1$. It can be easily verified that the transport equation (3.1) with boundary data $v_n$ and initial data $z_{0n}$ has the classical solution
Applying the a priori estimate (3.6) to $z_n - z_m$, one can see that $(z_n)_n$ is a Cauchy sequence in $C(0,\,T;L_\theta ^2(L^2))$, while $(z_{n|\theta = -\tau })_n$ and $(z_{n|\theta = 0})_n$ are Cauchy sequences in $L^2(0,\,T;L^2)$. By passing to the weak formulation of the equation for $z_n$, we can see that the limit in $C(0,\,T;L_\theta ^2(L^2))$ is a weak solution, and thus, the limit must be the weak solution of (3.1) by uniqueness. Note that the traces $z_{n|\theta = -\tau }$ and $z_{n|\theta = 0}$ tend to $z_{|\theta = -\tau }$ and $z_{|\theta = 0}$ in $L^2(\mathbb {R}^d;H^{-1/2}(0,\,T))$, respectively, and consequently in $L^2(0,\,T;L^2)$ by uniqueness of limits in the sense of distributions. Passing to the limit in (3.11), up to a subsequence, we can see that the weak solution of (3.1) is given by (3.10). The energy estimate can be obtained by passing to the limit in the a priori estimate (3.6) for $z_n$.
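The characteristics formula can also be checked numerically. The following sketch (our own illustration, with toy data and the convention $\mathscr {L}_1 z = \partial _t z - \partial _\theta z + a z$ obtained from the formal adjoint in this section) evaluates the solution along the characteristics $t + \theta = \text{const.}$ and verifies the differential equation by finite differences:

```python
import numpy as np

# Sketch (our own illustration): solution of the transport equation by
# characteristics, with L1 z = dz/dt - dz/dtheta + a*z.
# The data z0, v and the number a are toy choices.
a = 0.7
z0 = lambda theta: np.cos(theta)          # initial history on (-tau, 0)
v = lambda t: np.sin(t)                   # boundary value at theta = 0

def z(t, theta):
    """Transport along the characteristics t + theta = const."""
    if t > -theta:
        return np.exp(a * theta) * v(t + theta)    # reaches {theta = 0}
    return np.exp(-a * t) * z0(t + theta)          # reaches {t = 0}

# Finite-difference check of L1 z = 0 away from the line t = -theta.
h = 1e-5
for (t_, th) in [(0.3, -0.8), (0.9, -0.4)]:        # one point per region
    Lz = ((z(t_ + h, th) - z(t_ - h, th)) / (2 * h)
          - (z(t_, th + h) - z(t_, th - h)) / (2 * h)
          + a * z(t_, th))
    assert abs(Lz) < 1e-6
```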
If $z$ is the weak solution of (3.1), then the differential equations are satisfied in the sense of distributions, the boundary and initial conditions are satisfied in $L^2$, and the variational equation
holds for every $\psi \in L^2(\mathbb {R}^d;H^1((0,\,T)\times (-\tau,\,0)))$. Letting $n\to \infty$ in (3.11), it follows that $z_{|\theta = -\tau }$ is given by
With additional regularity for the initial and boundary data, one can obtain better regularity of the solutions. If $m$ is a nonnegative integer and the data $(z_0,\,v) \in H^m_\theta (H^s ) \times H^m(0,\,T;H^s)$ is compatible up to order $m-1$ whenever $m\geq 1$, then the weak solution of (3.1) satisfies
and we also have a corresponding energy estimate
The proofs rely on additional a priori estimates in Sobolev spaces, see (3.8). On the other hand, if (3.1) with data $(z_0,\,v) \in H^m_\theta (H^s ) \times H^m(0,\,T;H^s)$ has a solution satisfying (3.13), then the data $(z_0,\,v)$ is compatible up to order $m-1$.
4. Well-posedness for hyperbolic systems with delay
We will recast system (1.1) as a hyperbolic system coupled to a transport system with parameter. In this section, there are no assumptions on the matrices $L$ and $M$ aside from having real entries. Introducing the variable $z(t,\,\theta,\,x) = e^{\varepsilon \theta }P_Mu (t+\theta,\,x)$ for $(t,\,\theta,\,x) \in (0,\,\infty ) \times (-\tau,\,0)\times \mathbb {R}^d$, the system (1.1) can be written as
for $t > 0$, $x \in \mathbb {R}^d$ and $\theta \in (-\tau,\,0)$. Here and in the succeeding sections, $z_\tau$ will denote the trace $z_{|\theta = -\tau }$.
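A short computation explains this substitution. Since $I - P_M$ is the orthogonal projection onto $\text {Ker}(M)$, we have $M = MP_M$, and differentiating $z(t,\,\theta,\,x) = e^{\varepsilon \theta }P_Mu(t+\theta,\,x)$ gives
\[ \partial_t z = e^{\varepsilon\theta}P_M(\partial_t u)(t+\theta,\,x), \qquad \partial_\theta z = \varepsilon z + e^{\varepsilon\theta}P_M(\partial_t u)(t+\theta,\,x), \]so that
\[ \partial_t z - \partial_\theta z + \varepsilon z = 0, \qquad z_{|\theta = 0} = P_Mu(t,\,x), \qquad Mu(t-\tau,\,x) = MP_Mu(t-\tau,\,x) = e^{\varepsilon\tau}Mz_\tau(t,\,x). \]Thus the delay term in (1.1) is expressed through the trace $z_\tau$ of a transport equation of the type studied in § 3 with $a = \varepsilon$.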
Define the following differential operators
whose distributional adjoints are given respectively by
We can then rewrite (4.1) as follows:
Before we deal with (4.1), we briefly recall the results for hyperbolic systems without constraints. With respect to the hyperbolic operator $\mathscr {L}_2$ we have the weighted a priori estimate, see [Reference Benzoni-Gavage and Serre3] for example,
for every $u \in H^1(0,\,T;H^s)$ and $\gamma \geq \gamma _1$, for some positive constants $C$ and $\gamma _1 \geq 1$. There is an analogous a priori estimate for the dual operator $\mathscr {L}_2^*$. Given initial data $u_0 \in L^2$ and a source term $f \in L^2(0,\,T;L^2)$, a function $u \in L^2 ((0,\,T)\times \mathbb {R}^d)$ is called a weak solution of the system
if for every test function $\phi \in H^1((0,\,T)\times \mathbb {R}^d)$ such that $\phi _{|t=T} = 0$ we have
If $u_0 \in H^s$ and $f \in L^2(0,\,T;H^s)$, then it is known that the Cauchy problem (4.4) has a unique weak solution, and moreover, we have $u \in C(0,\,T;H^s)$ and the estimate (4.3) holds where $\mathscr {L}_2u$ is replaced by $f$.
For source terms with more regularity, the corresponding solution also has more regularity. Again, this follows from the a priori estimates in Sobolev spaces. It can be shown that if the source term satisfies $f \in \bigcap _{j = 0}^s H^j(0,\,T;H^{s-j})$ and $u_0 \in H^s$, then the weak solution of (4.4) satisfies the regularity $u \in C^j(0,\,T;H^{s-j})$ for every $0 \leq j \leq s$, see [Reference Rauch24] for instance. Moreover, we have the energy estimate
Now, we define the weak solutions for the hyperbolic system (4.1). Given $u_0 \in L^2$ and $z_0 \in L_\theta ^2(L^2)$, the pair $(u,\,z) \in L^2(0,\,T;L^2) \times L^2((0,\,T)\times (-\tau,\,0) \times \mathbb {R}^d)$ is called a weak solution of (4.1) if the variational equation
is satisfied for every test function $(\phi,\,\psi ) \in H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d; H^1((0,\,T)\times (-\tau,\,0)))$ such that $\phi _{|t=T} = 0$, $\psi _{|t=T} = 0$ and $e^{\varepsilon \tau }M^T \phi = \psi _{|\theta = -\tau }$, and the equation
holds for every $\varphi \in L^2(0,\,T;H^1)$.
For systems without constraints, the last equation trivially holds. Weak solutions are necessarily unique according to the following lemma.
Lemma 4.1 If $(u,\,z)$ is a weak solution of system (4.1), then $u$ is the weak solution of the Cauchy problem
and $z$ is the weak solution of the transport system
In particular, we have $u \in C(0,\,T;L^2)$, $z \in C(0,\,T;L^2_\theta (L^2))$, $z_\tau \in L^2(0,\,T;L^2)$, and the weak solution satisfies the estimate
for some positive constants $C$ and $\gamma$.
Proof. Taking $\phi = 0$ in (4.6) shows that $z$ is the weak solution of (4.9), and therefore, we have $z_\tau \in L^2(0,\,T;L^2)$. Given $\phi \in H^1((0,\,T)\times \mathbb {R}^d)$ such that $\phi _{|t= T} = 0$, the homogeneous backward-in-time Cauchy problem
has compatible data, and thus, according to the previous section, it has a solution satisfying
and in particular, $\psi \in L^2(\mathbb {R}^d; H^1((0,\,T) \times (-\tau,\,0)))$. Choosing the pair $(\phi,\,\psi )$ in the variational formulation (4.6) and using (3.12), it follows that $u$ is the weak solution of (4.8). The energy estimate of the lemma follows from the energy estimates for solutions of (4.8) and (4.9), and by taking $\gamma$ sufficiently large.
The above lemma, together with theorem 3.5, implies that $z(t,\,\theta,\,x) \in \text {Ker}(M)^\perp$ for almost every $(t,\,\theta,\,x) \in (0,\,T) \times (-\tau,\,0)\times \mathbb {R}^d$. Let
with the differential equation taken in the sense of distributions.
Theorem 4.2 If $(u_0,\,z_0) \in X_c \times L_\theta ^2(L^2)$ and assumption (Q) holds, then (4.1) has a unique weak solution.
Proof. Uniqueness follows immediately from the previous lemma. For existence, we apply theorem 3.1 and for this we introduce the function spaces $X := L^2(0,\,T;L^2) \times L^2(0,\,T;L^2_\theta (L^2))$, $Y := H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d,\, H^1((0,\,T)\times (-\tau,\,0)))$ and $Z := L^2(0,\,T;L^2) \times L^2 \times L^2_\theta (L^2)$. Define the operators $\Lambda : Y \to X$, $\Psi : Y \to Z$ and $\Phi : Y \to Z$ as follows:
The variational equation (4.6) can now be expressed as
for every $(\phi,\,\psi )\in \text {Ker}(\Phi )$. From the a priori estimates for the transport equation with parameter (3.7) and for hyperbolic systems, the dual version of (4.5), we obtain the a priori estimate (3.4) with the help of an absorption argument. More precisely, the terms $\|\phi \|_{L^2(0,T;L^2)}$ and $\|\psi _{|\theta = 0}\|_{L^2(0,T;L^2)}$ arising on the right-hand side can be absorbed by the left-hand side by making $\gamma$ sufficiently large. Therefore, (4.6) is satisfied for some $(u,\,z) \in X$.
It remains to verify the constraint. For this purpose, let $u_\delta := R_\delta u \in L^2(0,\,T;H^\infty )$ and $z_{0\delta } := R_\delta z_0 \in L_\theta ^2(H^\infty )$. Let $z_\delta$ be the solution of the transport system with initial data $e^{\varepsilon \theta }P_Mz_{0\delta }$ and boundary data $P_M u_\delta$. Let $u^\delta$ be the solution of the hyperbolic system with source term $-e^{\varepsilon \tau }Mz_{\delta \tau }$ and initial data $u_{0\delta } := R_\delta u_0$. Then, we have $z_{\delta \tau } \in L^2(0,\,T;H^\infty )$, and consequently, $u^\delta \in H^1(0,\,T;H^{\infty })$. Moreover, $u^\delta \to u$ in $L^2$ by uniqueness of weak solutions. Therefore, for every $\varphi \in \mathscr {D}((0,\,T)\times \mathbb {R}^d)$ we obtain from Parseval's identity that
where $\, \widehat {\cdot }\,$ is the Fourier transform.
According to condition (Q) and the differential equation for $u^\delta$, we have
Thus, $\mathscr {L}_3u^\delta$ is constant, and in particular, we have $\mathscr {L}_3u^\delta (t) = \mathscr {L}_3u_{0\delta } = R_\delta (\mathscr {L}_3u_0) = 0$ for every $t \geq 0$. Passing to the limit in $\int _0^T (u^\delta,\,\mathscr {L}_3^*\varphi )_{L^2} {\rm d} t = 0$ and using the density of $\mathscr {D}((0,\,T)\times \mathbb {R}^d)$ in $L^2(0,\,T;H^1)$, we infer that the weak solution satisfies the variational form (4.7) of the differential constraint.
Theorem 4.3 If $u_0 \in X_c \cap H^s$ and $z_0 \in L^2_\theta (H^s)$, then the weak solution of (4.1) satisfies $u \in C(0,\,T;H^s)$, $z \in C(0,\,T;L^2_\theta (H^s))$, and $z_\tau \in L^2(0,\,T;H^s)$.
Proof. Let $u^0 := u_0$ and $z^0 := e^{\varepsilon \theta }P_Mz_0$. Given $u^{n-1}$, let $z^n$ be the solution of the transport system (4.9) with boundary data $P_Mu^{n-1}$ and initial data $e^{\varepsilon \theta }P_Mz_0$. Then, we have $z^n_{\tau } \in L^2(0,\,T;H^s)$. Let $u^n$ be the solution of the hyperbolic system (4.8) with initial data $u_0$ and source term $-e^{\varepsilon \tau }Mz^{n}_\tau$. Hence, it follows that $u^n \in C(0,\,T;H^s)$. Using the energy estimates for the transport equation with parameter and hyperbolic systems, one can derive
for some $C > 0$ and for every $n$. This implies that $(u^n)_n$, $(z^n)_n$, and $(z^{n}_\tau )_n$ are Cauchy sequences in $C(0,\,T;H^s)$, $C(0,\,T; L^2_\theta (H^s))$, and $L^2(0,\,T;H^s)$, respectively.
One can see that the limit of $(u^n,\,z^n)$ is the weak solution of system (4.1). In fact, this follows from
which holds for every test function $(\phi,\,\psi ) \in H^1((0,\,T)\times \mathbb {R}^d) \times L^2(\mathbb {R}^d;H^1((0,\,T) \times (-\tau,\,0)))$ such that $\phi _{|t=T} = 0$, $\psi _{|t=T} = 0$, and $e^{\varepsilon \tau }M^T \phi = \psi _{|\theta = -\tau }$. Therefore, $u \in C(0,\,T;H^s)$, $z \in C(0,\,T;L^2_\theta (H^s))$ and $z_\tau \in L^2(0,\,T;H^s)$.
The solution space for problem (4.1) with compatible data is based on the following function spaces
for nonnegative integers $m$ and $k$. The norms of these function spaces will be the sum (or the max) of the norms appearing in the intersections.
Given $u_0$ and $z_0$ we define recursively the following functions
The data $(u_0,\,z_0)$ is said to be compatible up to order $k$ if $\widetilde {z}_{i|\theta =0} = u_i$ for all $0\leq i \leq k$.
Theorem 4.4 Let $m$ and $k$ be nonnegative integers. Assume that the initial data $(u_0,\,z_0) \in (H^{m+k}\cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ if $m\geq 1$. Then, (4.1) has a unique solution $(u,\,z) \in X_{m,k} \times Y_{m,k}$ satisfying the differential constraint for every $T > 0$. Furthermore, $z_\tau \in W_{m,k}$. There exist positive constants $C$ and $\gamma$ such that
Proof. The case $m = 0$ has already been established in the previous theorem, and so, we only consider $m\geq 1$. Initially we have the regularity $u\in C(0,\,T;H^{m+k})$, $z \in C(0,\,T;L^2_\theta (H^{m+k}))$, and $z_\tau \in L^2(0,\,T;H^{m+k})$ according to theorem 4.3, and from the differential equation for $u$, we can see that $u\in H^1(0,\,T;H^{m+k-1})$. The compatibility of the data $(P_Mu,\,e^{\varepsilon \theta }P_Mz_0)$ implies that $z \in C(0,\,T;H^1_\theta (H^{m+k-1})) \cap C^1(0,\,T;L^2_\theta (H^{m+k-1}))$ and $z_\tau \in H^1(0,\,T;H^{m+k-1}) \subset C(0,\,T;H^{m+k-1})$. Consequently, $u\in C^1(0,\,T;H^{m+k-1}) \cap H^2(0,\,T;H^{m+k-2})$ according to the PDE for $u$ once more. Continuing the process, we obtain that $z \in Z_{m,k}$ and $z_\tau \in W_{m,k}$, and consequently, $u \in X_{m,k}$. The energy estimate (4.10) follows directly from (3.14) and (4.5) by making $\gamma$ sufficiently large.
In deriving energy estimates for (4.1), we will take more derivatives than the regularity of the solution allows. However, the final estimates only contain the norms of the space where the solution belongs. Therefore, one can first approximate the solution by smoother ones, derive energy estimates for the approximations, and then pass to the limit to obtain the energy estimates for the solution.
As an illustration, suppose we have data $(u_0,\,z_0) \in (H^{m+k}\cap X_c) \times Z_{m,k}$ which is compatible up to order $m-1$ if $m \geq 1$. The previous theorem implies that the weak solution of (4.1) satisfies $(u,\,z,\,z_\tau ) \in X_{m,k} \times Y_{m,k} \times W_{m,k}$. Given $m_0 > m$ and $k_0 > k$, by theorem 3.3 there exists $(z_{0n},\, v_n) \in H_\theta ^{m_0}(H^{k_0}) \times H^{m_0}(0,\,T;H^{k_0})$ that is compatible up to order $m_0 - 1$ for every $n$ and
The solution of the transport system with parameter
satisfies $z_n \in Y_{m_0,k_0}$ and $z_{n\tau } \in W_{m_0,k_0}$ and we have $z_n \to z$ in $Y_{m,k}$ and $z_{n\tau } \to z_\tau$ in $W_{m,k}$.
For each $n$, let $u_{0n} := R_{1/n}u_0 \in H^{m_0 + k_0}$ so that $u_{0n} \to u_0$ in $H^{m+k}\cap X_c$. Now we use $z_n$ to approximate the solutions of the hyperbolic system. Let $u_n$ be the solution of the hyperbolic system
Then we have $u_n \in X_{m_0,k_0}$ and $u_n \to u$ in $X_{m,k}$. Combining the above systems, we have
where the residual $\varrho _n$ is given by $\varrho _n := v_n - P_M u_n$.
Notice that the above system is the same as (4.2) except for the boundary condition for $z_n$, which contains the residual $\varrho _n$. According to the continuous embedding $X_{m,k} \subset W_{m,k}$ we have $u_n \to u$ in $W_{m,k}$ and therefore $\varrho _n \to 0$ in $W_{m,k}$. From this, it follows that, by taking $m_0$ and $k_0$ large enough, we can take any derivatives as long as the final estimate involves only the norms of the states in $X_{m,k}$, $Y_{m,k}$, $Z_{m,k}$, and $W_{m,k}$ where applicable. The energy estimates for the approximate functions $(u_n,\,z_n)$ imply those for the solution $(u,\,z)$ of (4.2). If, in addition, the initial data are integrable in the sense that $u_0 \in L^1$ and $z_0 \in L^2_\theta (L^1)$, then we have $u_{0n} \to u_0$ in $L^1$ and $z_{0n} \to z_0$ in $L_\theta ^2(L^1)$, see the paragraph after the proof of theorem 3.3. This information will be used in § 8 in deriving decay estimates under the additional integrability assumption on the data.
5. Asymptotic stability and standard decay estimates
The goal of the present section is to derive energy estimates for the solutions of (4.1) under the conditions for the coefficient matrices presented in § 2. We begin with condition (S)$_s$, which is known to provide standard decay estimates for symmetric hyperbolic systems. For simplicity we denote by
the projection of $u$ onto $\text {Ker}(L)^\perp$ and $\text {Ker}(L_1)^\perp$, respectively. Also, we simply write $\Psi$ for the block matrix $\Psi _{G,N,M}$ in condition (M). Generic constants will be denoted by $C$ or with a subscript and their values may possibly vary from line to line.
Theorem 5.1 Suppose that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied. Assume that $(u_0,\,z_0) \in (H^{s}\cap X_c) \times L_\theta ^2(H^s)$ for some $s \geq 1$ and define $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{L_\theta ^2(H^s)}$. Then, the solution of (4.1) with data $(u_0,\,e^{\varepsilon \theta }P_Mz_0)$ satisfies
Proof. As mentioned in the preceding section, we can formally take the derivatives of the partial differential equations. We divide the derivation of the energy estimate into several steps.
Step 1. Applying $\partial _x^\ell$ to the PDE for $u$ and then taking the inner product with $G \partial _x^\ell u$ yields
for $0 \leq \ell \leq s$. By taking the inner product of the transport equation for $z$ with $Nz$ and then integrating over $(-\tau,\,0)$, we obtain
Taking the sum of (5.2) and (5.3), integrating with respect to $x$, and using the fact that the range of $GM$ is orthogonal to the kernel of $L_1$, we have
where
Using condition (M) on (5.4) and then choosing $\varepsilon$ sufficiently small, we have
Step 2. The next step is to derive dissipation terms involving $w$. For this purpose, we differentiate $\ell$ times the equation for $u$ and take the inner product with $S^T\partial _x^\ell u$ to obtain
Here, we used the fact that the range of $SM$ is orthogonal to the kernel of $L$. The second term in the sum can be rewritten as
where
Integrating both sides with respect to $x$ and then applying Parseval's identity to the first sum on the right-hand side of (5.7), we get
where $\omega := \xi /|\xi |$ and
According to condition (S)$_s$, (5.8) is nonnegative. On the other hand, using the constraint $\mathscr {L}_3 u = 0$, we obtain
because $W_1$ is nonnegative on the range of $R$. Therefore, integrating (5.6) over $\mathbb {R}^d$, using the above information, and Young's inequality, we have the estimate
where $\eta > 0$ and
From condition (S), there exist positive constants $C_1$ and $C_2$ such that
Plugging this estimate to (5.9) and choosing $\eta < C_1$, we obtain
Multiplying (5.10) by small enough $\alpha > 0$ and then adding with (5.5), we have
for some $C_\alpha > 0$ and for all $0 \leq \ell \leq s$.
Step 3. The final step is to derive dissipation terms for the derivatives. Applying $\partial _x^\ell$ to the equation for $u$, taking the $L^2$-inner product with $\sum _{k = 1}^d K^{kT} \partial _{x_k}\partial _{x}^\ell u$, and applying the anti-symmetry of $K^kA^0$, we obtain
for every $0\leq \ell \leq s-1$. Let $I_1$ and $I_2$ denote the last two sums in this equation. The term $I_2$ can be estimated as
for every $\eta > 0$.
Now, applying (2.2), the fact that $\widehat {u}(t,\,\xi ) \in \text {Ker}(\varPi _2Q(\omega ))$ for every $t \geq 0$ and for every $\xi \in \mathbb {R}^d \setminus \{0\}$, and using Parseval's identity, we infer that
for some constants $C_1,\,C_2 > 0$, where $\vartheta$ is the constant in (2.2). Here, $K(\omega ) := \sum _{k =1}^d K^k \omega _k$. Applying Plancherel's identity to the latter terms and then combining the above estimates, we obtain from (5.12)
by choosing $\eta > 0$ small enough, where
Multiplying (5.13) by $\beta > 0$ small enough and then adding the result to (5.11) yields, for $0 \leq \ell \leq s-1$, the estimate
for some constant $C = C_{\alpha,\beta,\eta } > 0$. By reducing $\alpha > 0$ and then $\beta > 0$ if necessary, we can see that there are constants $C_1,\,C_2 > 0$ such that
Taking the sum of (5.14) for $0\leq \ell \leq s-1$, integrating with respect to time, and then using the equivalence (5.15), we obtain the estimate in the theorem.
Using the energy estimate of the previous theorem, one can also derive the corresponding estimates for the derivatives with respect to time and history under an additional compatibility condition.
Theorem 5.2 Suppose that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied. Assume that $(u_0,\,z_0) \in (H^{k+m} \cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ for some $k\geq 0$ and $m\geq 1$. Let $s := k+m$ and $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{Z_{m,k}}$. The solution of (4.1) satisfies
for every $t \geq 0$.
Proof. First, since $Z_{m,k} \subset Z_{0,s} = L^2_\theta (H^s)$ it follows that (5.1) holds. The next step is to obtain an estimate for the time derivatives of $u$. Taking the $(\ell -1)$st derivative with respect to $t$ of the equation for $u$, for $1\leq \ell \leq m$, we have
Using an induction argument, it can be easily seen that the estimate
holds for every $1 \leq \ell \leq m$ and $t\geq 0$.
On the other hand, by applying $\partial _t^\ell \partial _x^\nu$ to the transport equation for $z$ and multiplying with $\partial _t^\ell \partial _x^\nu z$, we have
for every $0 \leq \ell \leq m$ and $0\leq \nu \leq m - \ell$. From the boundary condition at $\theta = 0$ and the fact that $\text {Ker}(L_1) \subset \text {Ker}(M)$, we have $z_{|\theta = 0} =P_Mu = P_M( v + (I - P_{L_1})u) = P_Mv$. Integrating (5.19) over $(0,\,t) \times (-\tau,\,0) \times \mathbb {R}^d$ and then taking the sum over all $0 \leq \nu \leq s - \ell$ produce the estimate
for every $0 \leq \ell \leq m$. We note that for every $0 \leq \ell \leq m$ it holds that $\|\partial _t^\ell z(0)\|_{L_\theta ^2(H^{s-\ell })} \leq \|z_0\|_{Z_{m,k}}$.
We establish by strong induction the following estimate for $0 \leq \ell \leq m$ and $t \geq 0$
The case $\ell = 0$ has been already established in theorem 5.1. Suppose that (5.21) holds for every $0,\, 1,\,\ldots,\,\ell -1$. Applying (5.1), (5.18), and the induction hypothesis, one has
Plugging this in inequality (5.20) proves (5.21).
Similarly, using a strong induction argument and the equation $\partial _\theta z = \partial _t z + \varepsilon z$, we can obtain estimates involving derivatives with respect to $\theta$. More precisely, we have
for every $0 \leq \ell + \mu \leq m$. Given $0 \leq \ell \leq m$, taking the sum over all $0 \leq \mu \leq m-\ell$ in (5.22) results in
Combining (5.21) and (5.23), and using the definition of $Z_{m,k}$, we obtain
By the Poincaré inequality we have, for every $0 \leq j \leq m -1$,
Utilizing this estimate in (5.18) together with (5.1) and (5.23), we have the other part of the desired estimate
This completes the proof of the theorem.
The above energy estimates imply the uniform decay of $u$ and $z$ on $\mathbb {R}^d$ and $(-\tau,\,0) \times \mathbb {R}^d$, respectively. We denote by $[r]$ the largest integer less than or equal to $r \in \mathbb {R}$.
Corollary 5.3 Suppose that the conditions of theorem 5.2 hold. Let $s_0 := [{d}/{2}] + 1$ and $k \geq 1$. Then, for every $0 \leq \ell \leq m-1$ and $0 \leq j \leq m -\ell$, we have
In particular, for every $0 \leq \ell \leq m-1$ and $0 \leq j \leq m - \ell -1$, we have
Proof. First, let us prove the uniform decay (5.24). To do this, we introduce the functional $\Phi _1(t) := \|\partial _x\partial _t^\ell u(t)\|^2_{H^{s-\ell -2}}$. According to (5.1) and (5.16), we have $\Phi _1 \in W^{1,1}(0,\,\infty )$ and therefore $\Phi _1(t) \to 0$ as $t \to \infty$. Let $r = d/(2s_0)$. Then, $r = d/(d+2)$ if $d$ is even and $r = d/(d+1)$ if $d$ is odd, and in any case, we have $r \in [{1}/{2},\,1)$. By virtue of the Gagliardo–Nirenberg inequality [Reference Nirenberg20], we get
as $t \to \infty$. To prove (5.25) and (5.26), we consider the functional
From the energy estimates (5.1) and (5.16) we can see that $\Phi _2 \in W^{1,1}(0,\,\infty )$, and hence, $\Phi _2(t) \to 0$ as $t \to \infty$. The Gagliardo–Nirenberg inequality once more implies (5.25), and similarly
for every $0\leq \mu \leq j$ and $0\leq j \leq m-\ell$. Applying Hölder's inequality and using the fact that $r < 1$, we get
for every $0\leq \mu \leq j$. Passing to the limit $t \to \infty$, this estimate implies (5.26). Finally, (5.27) is a consequence of (5.26) and the Sobolev embedding.
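The parity computation for the exponent $r = d/(2s_0)$ with $s_0 = [d/2] + 1$ can be checked mechanically; the short Python sketch below verifies the two cases and the range $r \in [1/2,\,1)$ stated in the proof:

```python
# Check the exponent r = d / (2 * s0) with s0 = [d/2] + 1:
# r = d/(d+2) for even d, r = d/(d+1) for odd d, and 1/2 <= r < 1.

def exponent(d):
    s0 = d // 2 + 1          # s0 = [d/2] + 1 (floor)
    return d / (2 * s0)

for d in range(1, 50):
    r = exponent(d)
    expected = d / (d + 2) if d % 2 == 0 else d / (d + 1)
    assert abs(r - expected) < 1e-15
    assert 0.5 <= r < 1.0
```

In particular, $r = 1/2$ exactly when $d \in \{1,\,2\}$ and $r$ increases toward $1$ with the dimension.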
The next goal is to derive time-weighted decay estimates for (4.1) under the assumption (S)$_s$. For this, we define the energy functionals
Theorem 5.4 Under the assumptions of theorem 5.1, there exists a constant $C > 0$ independent of $t$ and the initial data such that $N_s(t)^2 + D_s(t)^2 \leq CI_0^2$ for every $t \geq 0$. In particular, we have
for every $0 \leq j \leq s$ and $t \geq 0$. Moreover, for every $0 \leq j < s$, we have
The proof of this theorem follows immediately from the following energy estimates together with an induction argument. Estimate (5.32) follows from (5.31) and the differential constraint.
Lemma 5.5 In the framework of theorem 5.1, there exists $C > 0$ such that we have the following time-weighted energy estimates
for every $0 \leq j \leq s$ and $t \geq 0$, and
for every $0 \leq j < s$ and $t \geq 0$.
Proof. Multiplying (5.11) by $(1+t)^j$, integrating with respect to $t$, and then taking the sum of the corresponding inequalities for $j \leq \ell \leq s-j$, we obtain
for every $0 \leq j \leq s$. On the other hand, if we multiply (5.14) by $(1+ t)^{j}$, integrate from $0$ to $t$, and then take the sum for every $j \leq \ell \leq s-j-1$, we get
An induction argument then shows that these estimates imply (5.33) and (5.34).
Due to the dissipative structure of the damping matrix $L$, we have the following better decay for the components $w$ and $z$, assuming that the kernel of $L$ lies in the kernel of its symmetric part $L_1$. More precisely:
(L)$_*$ The matrix $P_L$ commutes with $GA^0$ and $SA^0$ and we have $\text {Ker}(L) \subset \text {Ker}(L_1)$.
Theorem 5.6 Suppose that the conditions of theorem 5.1 hold and in addition that (L) $_*$ is satisfied. Then, we have
for every $0 \leq j < s$ and $t \geq 0$.
Proof. Taking the inner product of the differential equation for $u$ with $G\partial _x^\ell w = GP_L\partial _x^\ell u$ and then using the fact that $P_L$ and $GA^0$ commute, we have
where $R_{1\ell } := - \sum _{k= 1}^d \langle GA^k\partial _{x_k}\partial _x^\ell u,\, \partial _x^\ell w \rangle$. We claim that $P_{L_1}w = v$. Indeed, since $(I-P_L)u$ is in the kernel of $L$, and hence in the kernel of $L_1$, we have $P_{L_1} (u - P_Lu) = 0$. Using the definition of $w$ and $v$, this implies our claim. Since $P_{(GL)_1} = P_{L_1}$ and the range of $GM$ is orthogonal to the kernel of $L_1$, equation (5.36) can be written as
Similarly, from the fact that $P_L$ and $SA^0$ commute, we obtain by multiplying the equation for $u$ by $S^T \partial _x^\ell w$
where $R_{2\ell } := - e^{\varepsilon \tau }\langle SM \partial _x^\ell z_\tau,\, \partial _x^\ell w \rangle - \sum _{k = 1}^d \langle SA^k\partial _{x_k}\partial _x^\ell u,\, \partial _x^\ell w \rangle$. Here, we use the fact that $SLu = SLw$. Multiplying (5.38) by $\alpha$, taking the sum with (5.37) and (5.3), and then taking the sum over all $j \leq \ell \leq s-j-1$, we obtain
where $\widetilde {E}_j := ((GA^0 + \alpha SA^0) \partial _x^\ell w,\,\partial _x^\ell w)_{H^{s-j-1}} + (N \partial _x^\ell z,\, \partial _x^\ell z)_{L_\theta ^2(H^{s-j-1})}$ and the right-hand side is given by $R_j := \sum _{\ell = j}^{s-j-1} (R_{1\ell } + \alpha R_{2\ell })$.
Applying the Cauchy–Schwarz inequality, using condition (M), and then making $\alpha$ and $\varepsilon$ small enough, we can see that there exist positive constants $c$ and $C$ such that
Multiplying both sides by $e^{ct}$ and then integrating from $0$ to $t$ yields
This estimate implies (5.35) and this completes the proof of the theorem.
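Schematically, the exponential-weight step used at the end of the proof is the standard Grönwall argument: if a functional $\widetilde{E}$ satisfies a differential inequality with damping rate $c > 0$ and right-hand side $h$ (generic placeholders here), then

```latex
\frac{{\rm d}}{{\rm d}t}\widetilde{E}(t) + c\,\widetilde{E}(t) \le h(t)
\quad\Longrightarrow\quad
\frac{{\rm d}}{{\rm d}t}\bigl(e^{ct}\widetilde{E}(t)\bigr) \le e^{ct}h(t)
\quad\Longrightarrow\quad
\widetilde{E}(t) \le e^{-ct}\,\widetilde{E}(0) + \int_0^t e^{-c(t-\sigma)}h(\sigma)\,{\rm d}\sigma .
```

With $h$ controlled by the dissipation terms already absorbed on the left-hand side, this yields the exponential factor appearing in (5.35).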
Next, we have the following estimates on the spatio-temporal derivatives. With additional structure on the matrices associated with the constraints we get better decay.
(Q)$_*$ For each $1 \leq l \leq d$ there exists an $n_1\times n$ matrix $\widetilde {Q}^l$ such that $\varPi _1Q^l = \widetilde {Q}^l P_{L}$.
Corollary 5.7 In the framework of theorem 5.2, for $0 \leq \ell \leq m$, we have
for every $0 \leq j \leq s-\ell$, $0 \leq \nu \leq s -\ell -j$ and $0\leq \mu < s-\ell$. In addition, if (L) $_*$ and (Q) $_*$ are satisfied, then we have
for every $0 \leq j \leq s - \ell - 1$ and $0 \leq \nu \leq s-\ell -j-1$, and for every $0\leq \mu < s-\ell -1$.
Proof. Applying $\partial _x^\nu$ to (5.17) and then taking the sum over $j \leq \nu \leq s - \ell -j$ yields
Multiplying equation (5.19) by $(1 + t)^j$, integrating with respect to $(t,\,\theta,\,x)$, and then getting the sum for $j \leq \nu \leq s-\ell - j$, we have
The estimate involving $z$ in (5.39) when $\nu = 0$ can be obtained from (5.43) and (5.44), and when $\nu > 0$ we use the equation $\partial _\theta z = \partial _t z + \varepsilon z$ together with strong induction. The constraint and (5.39) immediately imply (5.40). Finally, (5.41) and (5.42) can be shown using the same argument as in the preceding theorem.
6. Asymptotic stability and regularity-loss decay estimates
If we replace condition (S)$_s$ by (S)$_r$, then the inequality (5.11) does not hold anymore. For this reason, we need to revisit the second and third steps of the proof of theorem 5.1. As a result, the corresponding estimates will be weaker than those that were derived from (S)$_s$.
Theorem 6.1 Assume that the conditions (L), (S), (M), (Q), (K), and (S) $_r$ hold and let $s \geq 2$. Suppose that $(u_0,\,z_0) \in (H^{s}\cap X_c) \times L_\theta ^2(H^s)$ and define $I_0^2 := \|u_0\|_{H^{s}}^2 + \|z_0\|^2_{L_\theta ^2(H^s)}$. Then, the solution of (4.1) with data $(u_0,\,e^{\varepsilon \theta }P_Mz_0)$ satisfies
Proof. Note that inequality (5.5) is still satisfied. The next step is to revise the estimation of Step 2 in the proof of theorem 5.1. Since $Y(\omega )$ is nonnegative only on the kernel of $L_1$, we need to proceed in a different way to treat the first term on the right-hand side of (5.7). Recall that $Y_j := (SA^j - Q^{jT}\varPi _1WR)_2$. We rewrite the said term as follows:
Applying Parseval's identity and the fact that $\widehat {u}(\xi ) - \widehat {v}(\xi ) \in \text {Ker}(L_1)$ for every $\xi$, we obtain from condition (S)$_r$ that
We apply Young's inequality to estimate the last two terms on the right-hand side of (6.2) as follows:
for every $0 \leq \ell \leq s-1$ and $\varrho > 0$. Plugging these estimates to (5.6) and (5.7), we obtain
for every $0 \leq \ell \leq s-1$. We perform a similar procedure and integrate by parts to pass the derivatives on $v$ to get
for every $0 \leq \ell \leq s-2$.
Combining estimates (5.5), (5.14), (6.3), and (6.4), we obtain the following:
for every $0 \leq \ell \leq s-2$, where we used the abbreviation
Choosing the positive constants $\varrho,$ $\beta$, and $\alpha$ in such a way that $\beta < C/C_\eta$, $\varrho < \beta C_{\eta }$, and $\alpha < \max (C/C_{\varrho },\, C/(C + \beta C_\eta ))$, we can see that all constants appearing on the left-hand side of inequality (6.5) are positive. Integrating this inequality with respect to $t$ and taking the sum over all $0\leq \ell \leq s-2$, we get the desired inequality stated in the theorem after reducing $\alpha$ and then $\beta$ if necessary.
Analogous to theorem 5.2, one can also derive estimates for the time-derivatives. The details are left to the reader.
Theorem 6.2 Suppose that the conditions (L), (S), (M), (Q), (K), and (S) $_r$ hold. Assume that the initial data $(u_0,\,z_0) \in (H^{k+m} \cap X_c) \times Z_{m,k}$ is compatible up to order $m-1$ for some $k\geq 1$ and $m\geq 1$. Let $s := k+m$ and $I_0 := \|u_0\|_{H^{s}} + \|z_0\|_{Z_{m,k}}$. The solution of (4.1) satisfies
The energy estimates in theorems 6.1 and 6.2 imply the following uniform decay. The proofs are similar to those of corollary 5.3 and for this reason they are omitted.
Corollary 6.3 Assume that the framework of theorem 6.2 holds. Let $s_0 := [{d}/{2}] + 1$, $\widetilde {s}_0 := s_0$ if $d > 1$ and $\widetilde {s}_0 := s_0 + 1$ if $d =1$. For every $0\leq \ell \leq m-1$ and $0 \leq j \leq m-\ell$
and (5.25), (5.26), and (5.27) are also satisfied, with $s_0$ replaced by $\widetilde {s}_0$.
One can also derive time-weighted decay estimates with condition (S)$_r$. To this end, we define the following energy functionals:
Theorem 6.4 Under the assumptions of theorem 6.1, there is a constant $C > 0$, independent of $t$ and the initial data, such that $N_r(t)^2 + D_r(t)^2 \leq CI_0^2$ for every $t \geq 0$. In particular, we have
for every $0 \leq j \leq [{s}/{2}]$. Furthermore, for every $0 \leq j < [{s}/{2}]$, we have
Proof. Given $0\leq j \leq [{s}/{2}]$, we multiply (5.5) by $(1+t)^j$, take the sum over all $j \leq \ell \leq s-j$, and integrate with respect to time to obtain the time-weighted inequality
Now, given $0 \leq j < [{s}/{2}]$ we multiply (6.5) by $(1+t)^j$ and then add the corresponding terms for $j \leq \ell \leq s-j-2$ to have the estimate
A straightforward induction argument shows that these two estimates imply $N_r(t)^2 + D_r(t)^2 \leq CI_0^2$ for every $t \geq 0$ and for some constant $C > 0$. The rest of the theorem follows from the latter estimate and the constraint.
We also have better decay under the assumption (L)$_*$. The proof is similar to the previous ones and we therefore omit the details.
Theorem 6.5 Assume that the conditions of theorem 6.1 hold. Suppose also that (L) $_*$ is satisfied. Then, we have
for every $0 \leq j < [{s}/{2}]$ and $t \geq 0$.
Finally, we close this section with estimates on the time derivatives and the derivatives with respect to $\theta$ of the delay variable $z$. The proofs are the same as before and are therefore omitted.
Corollary 6.6 In the framework of theorem 6.2, for $0 \leq \ell \leq m$, we have
for every $0 \leq j \leq [(s-\ell )/2]$, $0 \leq \nu \leq s-\ell - 2j$, and $0\leq \mu < [(s-\ell )/2]$. If (L)$_*$ is satisfied, then
for every $0 \leq j \leq [(s - \ell )/2] - 1$ and $0 \leq \nu \leq s - \ell - 2j - 2$, and if (Q) $_*$ holds, then
for every $0\leq \mu < [(s - \ell )/2] -1$.
7. Applications to the wave, Timoshenko and Euler–Maxwell systems
In this section, we shall apply the results of § 5 and § 6 to certain physical systems. These include the Timoshenko system, a system of wave equations with delay in the interaction, and the linearized Euler–Maxwell system.
7.1 Timoshenko system
Our first example is the following dissipative Timoshenko system with delay
for $t > 0$ and $x \in \mathbb {R}$. The unknown scalar functions $w$ and $\psi$ represent the transversal displacement and rotation angle of a beam, respectively. The constants $\alpha$ and $a$ are positive while the sign of $\beta$ is arbitrary. As in [Reference Ide, Haramoto and Kawashima10, Reference Ide and Kawashima11, Reference Ueda, Duan and Kawashima29], by introducing the state variable $u = (w_x -\psi,\, w_t,\, a\psi _x,\, \psi _t)$, we can rewrite this system in form (1.1) with the matrices $A^0 = I$,
The system has no constraints so that $Q = R = 0$, and so condition (Q) is trivially satisfied.
Note that the damping matrix is nonnegative and the delay matrix is symmetric. The kernels of $L$ and of its symmetric part are given by $\text {Ker}(L) = \text {span}\{e_2,\,e_3\}$ and $\text {Ker}(L_1) = \text {span}\{e_1,\,e_2,\,e_3\}$, where $e_j$ for $1 \leq j \leq 4$ are the canonical unit vectors in $\mathbb {R}^4$, that is, $e_j$ is the vector in $\mathbb {R}^4$ with entry 1 in the $j$th component and zero elsewhere. Choosing the compensating matrices $S$ and $K^1$ by
and by choosing $\eta > 0$ small enough, conditions (S) and (K) are satisfied. Moreover, if $a \neq 1$, then (S)$_s$ is satisfied, while (S)$_r$ holds when $a = 1$. We refer to [Reference Ueda, Duan and Kawashima29] for the computations. If $\alpha > |\beta |$, then it follows from theorem 2.2 and theorem 2.3 that condition (M) holds. One can easily verify that condition (L)$_*$ holds. Therefore, the asymptotic stability and decay estimates presented in § 5 and § 6, for $a \neq 1$ and $a = 1$, respectively, are applicable to the state $u$ corresponding to (7.1). In this example, we have $P_Lu = (u_1,\,0,\,0,\,u_4)$.
7.2 System of wave equations I
Consider the following coupled system of three-dimensional wave equations with delay in one component
for $(t,\,\theta,\,x) \in (0,\,\infty )\times (-\tau,\,0) \times \mathbb {R}^3$. Here, $\phi$ and $\psi$ are scalar-valued. A similar system in the bounded case has been studied in [Reference Ait Benhassi, Ammari, Boulite and Maniar9].
When $\alpha = 0$ in (7.2), the wave equations are uncoupled, where one has a damping term with delay, while the other one is undamped. For $\alpha \neq 0$, we can think of the terms $\alpha \psi _t$ and $-\alpha \phi _t$ as feedback interconnection that links the two vibrating media through their velocities. If $\alpha > 0$, then a negative velocity $\psi _t$ accelerates the damped wave, while a negative velocity $\phi _t$ decelerates the undamped wave. This approach is related to the concept of indirect damping mechanisms introduced by Russell [Reference Russel26], wherein dissipation in one component in elastic systems can be transferred to that of the whole system.
We shall recast system (7.2) in the form of (1.1). To do this, we define the state variable $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\,\psi _t)$. Let $e_j$ and $\widetilde {e}_k$, for $1 \leq j \leq 8$ and $1 \leq k\leq 3$, denote the $j$th and $k$th canonical vectors in $\mathbb {R}^8$ and $\mathbb {R}^3$, respectively. The above system can be written in the form of (1.1) with $A^0 = I$,
for $j=1,\,2,\,3$.
The eigenvalues of the principal symbol $A(\xi )$ are given by $\pm i|\xi |$ and the multiple eigenvalue 0. Thus, some solutions of the system do not correspond to solutions of the wave system (7.2), and to factor them out we need to add the constraints $\nabla \times (\nabla \phi ) = \nabla \times (\nabla \psi ) = 0$. The corresponding matrices are then given by $R = 0$ and
Given $\xi = (\xi _1,\,\xi _2,\,\xi _3)^T \in \mathbb {R}^3$, the skew-symmetric matrix $\Omega _\xi$ is defined by
so that $\Omega _\xi \psi = \xi \times \psi$, where $\xi \times \psi$ is treated as a column vector. The damping matrix is nonnegative and the delay matrix is symmetric. The kernel of $L$ and its symmetric part are given by $\text {Ker}(L) = \{e_1,\,e_2,\,e_3,\,e_5,\,e_6,\,e_7\}$ and $\text {Ker}(L_1) = \text {Ker}(L) \cup \{e_8\}$, respectively.
Taking $S = I$, one can immediately see that (S) and (L)$_*$ are satisfied. Because $\varPi _1 = 0$ and $\varPi _2 = I$, we can take $W = 0$ and (S)$_s$ holds. Let us verify condition (K). For this purpose, define the compensating matrices
Note that $\text {Ker}(Q(\omega )) = \{ (\psi _1,\,\phi _1,\,\psi _2,\,\phi _2) : \omega \times \psi _1 = \omega \times \psi _2 = 0 \}$. If $(\psi _1,\,\phi _1,\,\psi _2,\,\phi _2) \in \text {Ker}(Q(\omega ))$, then $\omega \omega ^T \psi _k = \omega \times (\omega \times \psi _k) + \psi _k(\omega ^T \omega ) = \psi _k$ for $k = 1,\,2$ and $\omega \in \mathbb {S}^{d-1}$. From this, one can immediately see that condition (K) holds. Also, it is clear that $Q^jA^k = Q^jL = Q^j M = 0$ for every $j,\,k=1,\,2,\,3$ and as a consequence, condition (Q) is satisfied. Under the assumption $a > |\beta |$, we see that condition (M) holds. The results in § 5 can be applied to the state $u$ and the corresponding orthogonal projection onto $\text {Ker}(L)^\perp$ is given by $w = (0,\,0,\,0,\,u_4,\,0,\,0,\,0,\,u_8)$.
7.3 System of wave equations II
Now let us consider the following coupled system of wave equations with delay in the interaction
for $t > 0$ and $x \in \mathbb {R}^3$. This system has been studied in [Reference Peralta and Ueda23] in one space dimension. System (7.3) generalizes (7.2): both waves now carry damping with delay, and the feedback interconnection itself involves delays. In other words, the transmission of the dissipation terms does not occur instantaneously.
The coefficient matrices for this problem are the same as those that are given in the previous subsection except for the delay and damping matrices. In the current situation, they are given by
In this case, the delay matrix is no longer necessarily symmetric. Condition (M) holds provided that $(a-|\alpha |)(d-|\delta |) > |\beta \gamma |$; see the Appendix in [Reference Peralta and Kunisch21] for instance. More precisely, by taking $G$ and $N$ of the form
there are positive constants $a_1$, $a_2$, $a_3$, and $a_4$ that fulfil condition (M). On the other hand, conditions (L), (S), (Q), (S)$_s$, and (L)$_*$ can be easily verified. Therefore, the results of § 5 apply to system (7.3) with the state variable $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\, \psi _t)$.
The wave equations (7.2) and (7.3) were written as first-order hyperbolic systems with delay in terms of the velocities and the gradients of the displacements, that is, using the ansatz $u = (\nabla \phi,\, \phi _t,\, \nabla \psi,\, \psi _t)$. On one hand, this is a typical formulation for the wave equation as the $L^2$-norms of these quantities represent the potential and kinetic energies of the system. On the other hand, it is also interesting to derive estimates and study the time-asymptotic behaviour with respect to $\phi$ and $\psi$. However, the results and methods provided here cannot be applied directly to such problems. For example, estimates for $\nabla \phi$ cannot be transferred to $\phi$ since, in general, the Poincaré inequality is not valid in the whole space $\mathbb {R}^d$. Nonetheless, in the case of bounded Lipschitz domains with homogeneous Dirichlet conditions, this approach is meaningful.
For the damped wave equation without delay, Matsumura [Reference Matsumura14] obtained estimates for the displacement, including the spatio-temporal derivatives, in $L^2$ and $L^\infty$. The results were based on the analysis of the equivalent second-order differential equation with the Fourier variable as a parameter. It is not clear at this point how such methods can be applied and extended to the case of wave systems with delays. As this is outside the scope of the paper, we leave these tasks for future investigations.
7.4 Linearized Euler–Maxwell system
Our final example is the following Euler–Maxwell system arising in the study of plasma physics. We consider the system linearized at the constant equilibrium state $u_* := (\rho _*,\,0,\,0,\,B_*)$, where $\rho _* > 0$ and $B_* \in \mathbb {R}^3$,
for $t > 0$ and $x \in \mathbb {R}^3$. The unknown state variables are the density $\rho : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}$, velocity $v : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$, electric field $E : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$, and magnetic field $B : (0,\,\infty ) \times \mathbb {R}^3 \to \mathbb {R}^3$. Here, $p_*$ is constant.
By letting $u := (\rho -\rho _*,\, v,\, E,\, B- B_*)$, system (7.4) can be written in the form of (1.1) with the coefficient matrices
The damping matrix $L$ is nonnegative and the delay matrix $M$ is symmetric. Observe that $\text {Ker}(L) = \text {span}\{e_1,\,e_8,\,e_9,\,e_{10}\}$ and $\text {Ker}(L_1) = \text {Ker}(L) \oplus \text {span}\{e_5,\,e_6,\,e_7\}$. The coefficient matrices corresponding to the differential constraints are given by
It has been verified in [Reference Ueda, Duan and Kawashima29] that (S), (K), (Q), and (S)$_r$, without the conditions pertaining to the delay matrix $M$, hold with the matrices $W = (\eta p_*/\rho _*)I$,
for $\eta > 0$ sufficiently small and for $j = 1,\,2,\,3$. Nevertheless, the properties involving $M$ in conditions (S) and (Q) can be easily verified. Moreover, as in the previous examples, condition (M) holds when $\alpha > |\beta |$. Finally, conditions (L)$_*$ and (Q)$_*$ are satisfied with $G = I$ and the fact that
Hence, the Euler–Maxwell system (7.4) with delay exhibits decay of regularity-loss type, and the results of § 6 apply to this system. In this example, note that $P_Lu = (0,\,v,\,E,\,0)$, $P_{L_1}u = (0,\,v,\,0,\,0)$, and $Ru = (\rho - \rho _*,\,0,\,0,\,0)$. See also [Reference Ueda and Kawashima31] and [Reference Ueda, Wang and Kawshima32] for related results.
8. Decay estimates for integrable data
If the initial data and the initial history are both integrable, then we can improve the decay rates given in the previous sections. In this section, we focus only on decay estimates for the spatial derivatives. The derivatives with respect to time and the delay variable can be handled as in the previous sections. The basic idea is to carry out the calculations of the preceding sections in Fourier space. More precisely, we take the Fourier transform with respect to $x$ of the differential equations and perform the calculations as in the primal space. As before, an approximation argument allows us to use the differential equations directly.
8.1 Standard decay estimates
Taking the Fourier transform of system (4.1) with respect to the spatial variable yields the following equations:
where $\omega := \xi /|\xi |$ if $\xi \neq 0$ and $\omega := 0$ if $\xi = 0$.
Taking the inner product of the first equation in (8.1) with $G\widehat {u}$ and extracting the real part, we have
Taking the inner product of the second equation in (8.1) with $N\widehat {z}$ and integrating with respect to $\theta$ yields
Taking the sum of the above equations, we obtain the energy identity
where $\mathcal {E}_1 := \langle GA^0\widehat {u},\, \widehat {u}\rangle + \int _{-\tau }^0 \langle N\widehat {z}(\theta ),\, \widehat {z}(\theta ) \rangle {\rm d} \theta$.
Similarly, taking the inner product of the first equation in (8.1) with $S^T\widehat {u}$ and extracting the real part, we have
where $\mathcal {E}_2 := \langle SA^0 \widehat {u},\,\widehat {u}\rangle$. Finally, we take the inner product of the first equation in (8.1) with $-i|\xi |K(\omega )^T \widehat {u}$ and extract the real part to get
where $\mathcal {E}_3 := -\frac {1}{2}|\xi | \langle i K(\omega )A^0 \widehat {u},\,\widehat {u}\rangle$. From the above energy identities, we obtain
where $\mathcal {E} := \frac {1}{2}(\mathcal {E}_1 + \alpha \mathcal {E}_2 + \frac {\alpha \beta }{1+|\xi |^2}\mathcal {E}_3)$ and
One can easily see that there exist constants $\eta > 0$ and $c_{\eta },\, C_\eta > 0$ such that for every $\alpha,\, \beta \in [0,\,\eta ]$, we have
Utilizing condition (M) and then making $\varepsilon$ small enough, we obtain
On the other hand, using conditions (S) and (S)$_s$, we have
by choosing $\varrho > 0$ small enough. According to condition (K) and the fact that $\widehat {u}(t,\,\xi ) \in \text {Ker}(\varPi _2Q(\omega ))$, we have, after using Young's inequality,
Taking $\beta$ sufficiently small and then $\alpha$ small enough, we obtain from (8.3)–(8.6) that
for some constant $c > 0$. This inequality sets up the proof of the following theorem.
Theorem 8.1 Assume that conditions (L), (S), (M), (Q), (K), and (S) $_s$ are satisfied and let $s \geq 1$. Suppose that $(u_0,\,z_0) \in (H^{s}\cap X_c \cap L^1) \times L_\theta ^2(H^s \cap L^1)$. Then, the solution of (4.1) satisfies the pointwise estimate
where $\rho (\xi ) := |\xi |^2/(1+ |\xi |^2)$, for some constants $c>0$, $C > 0$ and for every $t \geq 0$ and $\xi \in \mathbb {R}^d$. Moreover, we have the decay estimate
for every $0 \leq \ell \leq s$ and $t \geq 0$, where $I_1 := \|u_0\|_{L^1} + \|z_0\|_{L^2_\theta (L^1)}$ and $I_{2,\ell } := \|\partial _x^{\ell }u_0\|_{L^2} + \|\partial _x^\ell z_0\|_{L^2_\theta (L^2)}.$ In particular, for every $0 \leq \ell < s$, we have
Proof. The pointwise estimate (8.8) in Fourier space follows immediately from inequality (8.7) and the equivalence (8.3). The proof of (8.9) relies on integrating (8.8) over $\mathbb {R}^d$ and separating the integral into low- and high-frequency parts. For $|\xi | \geq 1$, we have $2\rho (\xi ) \geq 1$ and consequently from (8.8) we infer
by Plancherel's formula and Fubini's theorem. On the other hand, for $|\xi | \leq 1$, we have $\rho (\xi ) \geq \widetilde {c} |\xi |^2$ for some constant $\widetilde {c} > 0$, and thus,
Taking the sum of these estimates and then using Plancherel's formula and Fubini's theorem, we obtain the decay estimate (8.9). Finally, (8.10) follows immediately from the constraint and (8.9).
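In symbols, the low-frequency step is a standard Gaussian-type computation. Assuming, as in (8.8), a pointwise bound of the form $|\widehat {u}(t,\,\xi )|^2 \lesssim e^{-2c\rho (\xi )t}\big (|\widehat {u}_0(\xi )|^2 + \|\widehat {z}_0(\cdot,\,\xi )\|_{L^2_\theta }^2\big )$, and using $\|\widehat {u}_0\|_{L^\infty } \leq \|u_0\|_{L^1}$ together with Minkowski's inequality for the history term, one obtains

```latex
\int_{|\xi| \leq 1} |\xi|^{2\ell}\, e^{-2c\widetilde{c}\,|\xi|^2 t}
  \big( |\widehat{u}_0(\xi)|^2
      + \|\widehat{z}_0(\cdot,\xi)\|_{L^2_\theta}^2 \big)\, \mathrm{d}\xi
\leq C I_1^2 \int_{|\xi| \leq 1} |\xi|^{2\ell}
      e^{-2c\widetilde{c}\,|\xi|^2 t}\, \mathrm{d}\xi
\leq C I_1^2\, (1+t)^{-\frac{d}{2}-\ell},
```

which, after taking square roots, accounts for the algebraically decaying contribution of $I_1$ in (8.9).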
Corollary 8.2 Suppose that the assumptions of the previous theorem hold. If in addition, (L) $_*$ and (Q) $_*$ hold, then for some $c_0 > 0$ and $C > 0$, there holds
for every $0 \leq \ell < s$ in (8.11) and $0 \leq j < s-1$ in (8.12).
Proof. The proof of the decay estimate (8.11) is the same as in the proof of theorem 5.6. On the other hand, (8.12) follows from (8.11) and the constraints.
Example 8.3 As an application, we consider the system of wave equations (7.3). If the initial data corresponding to this system when written in the form of (1.1) are integrable with respect to space, then the associated state satisfies the decay estimates (8.8) and (8.11). These results are also valid for the wave system (7.2) and the Timoshenko system (7.1) when $a \neq 1$.
8.2 Regularity-loss decay estimates
In this section, we derive decay estimates for integrable data where condition (S)$_s$ is replaced by the weaker condition (S)$_r$. For systems without delay, we refer to [Reference Ueda, Duan and Kawashima29]. Recent advances for regularity-loss decay estimates can be found in [Reference Chen and Dao4, Reference Liu and Ueda13, Reference Ueda28, Reference Ueda, Duan and Kawashima30] and the references therein.
Let us start with the energy identity
where $\widetilde {\mathcal {E}} = \frac {1}{2}( \mathcal {E}_1 + \alpha (1 + |\xi |^2)^{-1} \mathcal {E}_2 + \alpha \beta (1 + |\xi |^2)^{-2}\mathcal {E}_3)$, while $\mathcal {E}_j$ and $\mathcal {D}_j$, for $j = 1,\,2,\,3$, are the same terms as in the previous subsection. Equivalence (8.3) also holds with $\mathcal {E}$ replaced by $\widetilde {\mathcal {E}}$. First, we rewrite
According to condition (S)$_r$, the first term on the right-hand side is nonnegative. The last term can be written as $\langle i (Q(\omega )^T \varPi _1 WR)_2 \widehat {u},\, \widehat {u} \rangle = \langle W_1 R\widehat {u},\, \widehat {u} \rangle$, which is nonnegative as well since $W_1$ is nonnegative on the range of $R$. Therefore, by applying Young's inequality, we get
for every $\eta > 0$, and consequently,
Using this inequality together with (8.4) and (8.5), we have
Choosing the constants $\alpha$, $\beta$, and $\eta$ in such a way that $\beta < C_1/C_2$, $\eta < \beta C_1$, and $\alpha < \min \{ C/(C_\eta + C_2),\, C/[C_2(\beta +1)] \}$, and then using them in the energy equation (8.13), we get
for some constant $c > 0$. With this estimate, we are now ready to establish the following theorem.
Theorem 8.4 Suppose that the conditions of theorem 8.1 hold where (S) $_s$ is replaced by (S) $_r$. The solution of (4.1) satisfies the pointwise estimate
where $\varrho (\xi ) := |\xi |^2/(1+ |\xi |^2)^2$, for some constants $c,\, C > 0$, and for every $t \geq 0$ and $\xi \in \mathbb {R}^d$. In particular, we have the decay estimate
for every $0 \leq k + \ell \leq s$ and $t \geq 0$, where $I_1 := \|u_0\|_{L^1} + \|z_0\|_{L^2_\theta (L^1)}$ and $I_{2,\ell + k} := \|\partial _x^{\ell + k }u_0\|_{L^2} + \|\partial _x^{\ell + k} z_0\|_{L^2_\theta (L^2)}.$ Moreover, for every $0 \leq \ell < s$, we have
Proof. First, let us note that we can obtain the same estimate as in the proof of theorem 8.1 at low frequencies, since $\varrho (\xi ) \geq \widetilde {c}|\xi |^2$ for some constant $\widetilde {c} > 0$ and all $|\xi | \leq 1$. For $|\xi | \geq 1$, we have $\varrho (\xi ) \geq C|\xi |^{-2}$ for some $C > 0$, and based on this we have the estimate
When combined with the estimate at low frequencies, this yields (8.16). The rest of the theorem can be verified by following the arguments in the proof of theorem 8.1.
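The high-frequency step rests on the elementary bound $\sup _{s \geq 0} s^k e^{-s} < \infty$. Since $\varrho (\xi ) \geq C|\xi |^{-2}$ for $|\xi | \geq 1$, one can trade decay in time for derivatives:

```latex
|\xi|^{2\ell}\, e^{-cC|\xi|^{-2} t}
  = |\xi|^{2(\ell+k)} \big( |\xi|^{-2k}\, e^{-cC|\xi|^{-2} t} \big)
  \leq C_k\, |\xi|^{2(\ell+k)}\, (1+t)^{-k},
```

which explains why the high-frequency contribution in (8.16) decays only at the rate $(1+t)^{-k/2}$ and at the cost of the $k$ additional derivatives contained in $I_{2,\ell +k}$; this is the regularity-loss mechanism.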
We also have the following result analogous to corollary 5.4.
Corollary 8.5 In the framework of the previous theorem and with the additional conditions (L) $_*$ and (Q) $_*$, we have
for every $0 \leq \ell + k< s$ in (8.17) and $0 \leq \ell + k < s-1$ in (8.18).
Example 8.6 Estimates (8.16) and (8.17) are valid for the Timoshenko system (7.1) with delay when $a = 1$. In this system, recall that $P_L u = (u_1,\,0,\,0,\,u_4)$. Likewise, (8.16)–(8.18) are satisfied by the Euler–Maxwell system (7.4) with delay, and for this system, the state components $w = P_Lu$ and $Ru$ are given in § 7.
Acknowledgements
The author is grateful to Yoshihiro Ueda for the initial discussions on the topic of this paper and to the anonymous referee for the valuable comments relayed during the review process.