1 Introduction
For a function
$\sigma $
on
${{\mathbb R}^n}$
, let
$T_\sigma $
be the corresponding Fourier multiplier operator given by

for a Schwartz function f on
${{\mathbb R}^n}$
, where
$\widehat {f}(\xi ):=\int _{{{\mathbb R}^n}}f(x)e^{2\pi i\langle x,\xi \rangle }dx$
is the Fourier transform of f. The function
$\sigma $
is called an
$L^p$
multiplier if
$T_\sigma $
is bounded on
$L^p({{\mathbb R}^n})$
for
$1<p<\infty $
. For several decades, finding a sharp condition for
$\sigma $
to be an
$L^p$
multiplier has been one of the most interesting problems in harmonic analysis. Although there is no complete answer to this question, we have some satisfactory results. In 1956, Mihlin [Reference Mihlin23] proved that
$\sigma $
is an
$L^p$
multiplier provided that

This result was refined by Hörmander [Reference Hörmander21] who replaced (1.1) by the weaker condition

where
$L_s^2({{\mathbb R}^n})$
denotes the fractional Sobolev space on
${{\mathbb R}^n}$
and
$\psi $
is a Schwartz function on
${{\mathbb R}^n}$
generating Littlewood–Paley functions, which will be officially defined in Section 2.1. We also remark that
$s>n/2$
is the best possible regularity condition for the
$L^p$
boundedness of
$T_{\sigma }$
.
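For the reader's convenience, we also record the classical forms of these two conditions; the normalizations below are the standard ones and are only assumed to agree with the condition (1.1) and the Hörmander-type condition displayed above up to constants. Mihlin's condition requires
$$ \big |\partial ^{\alpha }_{\xi }\sigma (\xi )\big |\lesssim _{\alpha }|\xi |^{-|\alpha |},\qquad \xi \neq 0,\quad |\alpha |\le [n/2]+1, $$
while Hörmander's condition requires
$$ \sup _{j\in {\mathbb {Z}}}\big \Vert \sigma (2^j\cdot )\,\widehat {\psi }\big \Vert _{L_s^2({{\mathbb R}^n})}<\infty \qquad \text{for some }s>n/2. $$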
Now, we define the (real) Hardy space. Let
$\phi $
be a smooth function on
${{\mathbb R}^n}$
that is supported in
$\{x\in {{\mathbb R}^n}: |x|\le 1\}$
, and we define
$\phi _l:=2^{ln}\phi (2^l\cdot )$
. Then the Hardy space
$H^p({{\mathbb R}^n})$
,
$0<p\le \infty $
, consists of tempered distributions f on
${{\mathbb R}^n}$
such that

is finite. The space provides an extension to
$0<p\le 1$
in the scale of classical
$L^p$
spaces for
$1<p\le \infty $
, which is more natural and useful in many respects than the corresponding
$L^p$
extension. Indeed,
$L^p({{\mathbb R}^n})=H^p({{\mathbb R}^n})$
for
$1<p\le \infty $
and several essential operators, such as singular integrals of Calderón–Zygmund type, that are well-behaved on
$L^p({{\mathbb R}^n})$
only for
$1<p\le \infty $
are also well-behaved on
$H^p({{\mathbb R}^n})$
for
$0<p\le 1$
. Now, let
$\mathscr {S}({{\mathbb R}^n})$
denote the Schwartz space on
${{\mathbb R}^n}$
and
$\mathscr {S}_0({{\mathbb R}^n})$
be its subspace consisting of f satisfying

Then it turns out that

We remark that
$\mathscr {S}({{\mathbb R}^n})$
is also dense in
$H^p({{\mathbb R}^n})=L^p({{\mathbb R}^n})$
for
$1<p<\infty $
, but not for
$0<p\le 1$
. See [Reference Stein31, Chapter III, §5.2] for more details. Moreover, as mentioned in [Reference Stein31, Chapter III, §5.4], if
$f\in L^1({{\mathbb R}^n})\cap H^p({{\mathbb R}^n})$
for
$0<p\le 1$
, then

We refer to [Reference Burkholder, Gundy and Silverstein2, Reference Calderón3, Reference Fefferman and Stein7, Reference Stein31, Reference Uchiyama33] for more details.
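In the definition above, the quantity required to be finite is the smooth maximal function of f; assuming the standard normalization (and, as is customary, that $\int \phi \neq 0$), the $H^p$ quasinorm reads
$$ \Vert f\Vert _{H^p({{\mathbb R}^n})}:=\Big \Vert \sup _{l\in {\mathbb {Z}}}|\phi _l\ast f|\Big \Vert _{L^p({{\mathbb R}^n})}, $$
and this is the quantity denoted by $\Vert \cdot \Vert _{H^p({{\mathbb R}^n})}$ throughout.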
In 1977, Calderón and Torchinsky [Reference Calderón and Torchinsky4] provided a natural extension of the result of Hörmander to the Hardy space
$H^p({{\mathbb R}^n})$
for
$0<p\le 1$
. For the purpose of investigating
$H^p$
estimates for
$0<p\le 1$
, the operator
$T_{\sigma }$
is assumed to initially act on
$\mathscr {S}_0({{\mathbb R}^n})$
and then to admit an
$H^p$
-bounded extension for
$0<p< \infty $
via density, in view of (1.3). Then Calderón and Torchinsky proved
Theorem A [Reference Calderón and Torchinsky4].
Let
$0<p\le 1$
. Suppose that
$s>n/p-n/2$
. Then we have

for all
$f\in \mathscr {S}_0({{\mathbb R}^n})$
.
For more information about the theory of Fourier multipliers, we also refer the reader to [Reference Baernstein and Sawyer1, Reference Grafakos, He, Honzík and Nguyen13, Reference Grafakos and Park19, Reference Grafakos and Slavíková20, Reference Park25, Reference Seeger28, Reference Seeger29, Reference Seeger and Trebels30] and the references therein.
We now turn our attention to multilinear extensions of the above multiplier results. Let m be a positive integer greater than or equal to
$2$
. For a bounded function
$\sigma $
on
$({{\mathbb R}^n})^m$
, let
$T_\sigma $
now denote an m-linear Fourier multiplier operator given by

for
$f_1,\dots ,f_m\in \mathscr {S}_0({{\mathbb R}^n})$
. The first important result concerning multilinear multipliers was obtained by Coifman and Meyer [Reference Coifman and Meyer5] who proved that if N is sufficiently large and

for all
$|\alpha _1|+\cdots + |\alpha _m|\le N$
, then
$T_{\sigma }$
is bounded from
$L^{p_1}({{\mathbb R}^n})\times \cdots \times L^{p_m}({{\mathbb R}^n})$
into
$L^p({{\mathbb R}^n})$
for
$1<p_1,\dots ,p_m<\infty $
and
$1\le p<\infty $
. This result is a multilinear analogue of Mihlin’s result in which Equation (1.1) is required, but the optimal regularity condition, such as
$|\alpha |\le [n/2]+1$
in Equation (1.1), is not considered in the result of Coifman and Meyer. Afterwards, Tomita [Reference Tomita32] provided a sharp estimate for the multilinear multiplier operator
$T_{\sigma }$
, as a multilinear counterpart of Hörmander’s result. Let
$\Psi ^{(m)}$
be a Schwartz function on
$({{\mathbb R}^n})^m$
having the properties that

For
$s\geq 0$
, we define the Sobolev norm

Theorem B [Reference Tomita32].
Let
$1<p,p_1,\dots ,p_m<\infty $
with
$1/p=1/p_1+\cdots +1/p_m$
. Suppose that

for
$s>mn/2$
. Then we have

for
$f_1,\dots ,f_m \in \mathscr {S}_0({{\mathbb R}^n})$
.
The standard Sobolev space
$L_s^2(({{\mathbb R}^n})^m)$
in Equation (1.7) is replaced by a product-type Sobolev space in many recent papers.
Theorem C [Reference Grafakos, Miyachi, Nguyen and Tomita14, Reference Grafakos, Miyachi and Tomita15, Reference Grafakos and Nguyen18, Reference Miyachi and Tomita24].
Let
$0<p_1,\dots ,p_m\leq \infty $
and
$0<p<\infty $
with
$1/p=1/p_1+\dots +1/p_m$
. Suppose that

for any nonempty subset J of
$ \{1,\dots ,m\}$
, and

Then we have

for
$f_1,\dots ,f_m\in \mathscr {S}_0({{\mathbb R}^n})$
.
Here, the space
$L_{(s_1,\dots ,s_m)}^{2}(({{\mathbb R}^n})^m)$
denotes the product-type Sobolev space on
$({{\mathbb R}^n})^m$
, in which the norm is defined by replacing the term
$(1+4\pi ^2 |\vec {\boldsymbol {\xi }}|^2)^s$
in Equation (1.6) by
$\prod _{j=1}^{m}\big ( 1+4\pi ^2|\xi _j|^2\big )^{s_j}$
. It is known in [Reference Park27] that the condition (1.8) is sharp in the sense that if the condition does not hold, then there exists
$\sigma $
such that the corresponding operator
$T_{\sigma }$
does not satisfy Equation (1.10). We also refer the reader to [Reference Cruze-Uribe and Nguyen6, Reference Fujita and Tomita11] for weighted estimates for multilinear Fourier multipliers.
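Concretely, the Sobolev norm in Equation (1.6) and the product-type norm just described are assumed to take the standard forms
$$ \Vert F\Vert _{L_s^2(({{\mathbb R}^n})^m)}=\Big (\int _{({{\mathbb R}^n})^m}\big (1+4\pi ^2|\vec {\boldsymbol {\xi }}|^2\big )^{s}\,\big |\widehat {F}(\vec {\boldsymbol {\xi }})\big |^2\,d\vec {\boldsymbol {\xi }}\Big )^{1/2},\qquad \Vert F\Vert _{L_{(s_1,\dots ,s_m)}^{2}(({{\mathbb R}^n})^m)}=\Big (\int _{({{\mathbb R}^n})^m}\prod _{j=1}^{m}\big (1+4\pi ^2|\xi _j|^2\big )^{s_j}\,\big |\widehat {F}(\vec {\boldsymbol {\xi }})\big |^2\,d\vec {\boldsymbol {\xi }}\Big )^{1/2}, $$
the second being obtained from the first exactly by the replacement described above.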
As an extension of Theorem A to the whole range
$0<p_1,\dots ,p_m\le \infty $
, in the recent paper of the authors, Lee, Heo, Hong, Park and Yang [Reference Lee, Heo, Hong, Lee, Park, Park and Yang22], we provided a multilinear multiplier theorem with standard Sobolev space conditions.
Theorem D [Reference Lee, Heo, Hong, Lee, Park, Park and Yang22].
Let
$0<p_1, \cdots , p_m \le \infty $
and
$0<p<\infty $
with
$1/p=1/p_1+\cdots +1/p_m$
. Suppose that

for any subset J of
$\{1,\dots ,m\}$
, and

Then we have

for
$f_1,\dots ,f_m\in \mathscr {S}_0({{\mathbb R}^n})$
.
The optimality of the condition (1.11) was established by Grafakos, He and Honzík [Reference Grafakos, He and Honzík12], who proved that if Equation (1.13) holds, then we must necessarily have
$s\ge mn/2$
and
$1/p-1/2\le s/n+\sum _{j\in J}\big (1/p-1/2\big )$
for all subsets J of
$\{1,\dots ,m\}$
.
We remark that in the bilinear case
$m=2$
, Theorem D follows from Theorem C as Equation (1.11) implies the existence of
$s_1$
and
$s_2$
, with
$s_1+s_2=s$
, satisfying Equation (1.8). This is well described in the first proof of Theorem D in [Reference Lee, Heo, Hong, Lee, Park, Park and Yang22]. However, when
$m\ge 3$
, this implication is not evident, even though similar types of regularity conditions are required in both theorems.
Unlike the estimate in Theorem A, the multilinear extensions in Theorems C and D consider the Lebesgue space
$L^p$
as a target space when
$p\le 1$
(recall that
$L^p=H^p$
for
$1<p<\infty $
).
If a function
$\sigma $
on
$({{\mathbb R}^n})^m$
satisfies Equation (1.9) for
$s_1,\dots ,s_m>n/2$
or (1.12) for
$s>mn/2$
, then Theorems C and D imply that
$T_{\sigma }(f_1,\dots ,f_m)\in L^1$
for all
$f_1,\dots ,f_m\in \mathscr {S}_0({{\mathbb R}^n})$
. Therefore, in order for
$T_{\sigma }(f_1,\dots ,f_m)$
to belong to
$H^p({{\mathbb R}^n})$
for
$0<p\le 1$
, it is necessary that

in view of Equation (1.4). However, in the multilinear setting this property is generally not guaranteed even if all of the functions $f_1,\dots ,f_m$ satisfy the moment conditions, while, in the linear case,

for
$N\ge 0$
. Recently, by imposing additional cancellation conditions corresponding to (1.14), Grafakos, Nakamura, Nguyen and Sawano [Reference Grafakos, Nakamura, Nguyen and Sawano16, Reference Grafakos, Nakamura, Nguyen and Sawano17] obtained a mapping property into Hardy spaces for
$T_{\sigma }$
.
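Before stating their theorem, let us make the linear statement above explicit. What we have in mind is the standard formal computation, valid for instance for $f\in \mathscr {S}_0({{\mathbb R}^n})$:
$$ \int _{{{\mathbb R}^n}}x^{\alpha }\,T_{\sigma }f(x)\,dx=c_{\alpha }\,\partial ^{\alpha }\big (\sigma \widehat {f}\,\big )(0)=0,\qquad |\alpha |\le N, $$
since $\widehat {f}$ vanishes to infinite order at the origin, so that $\sigma \widehat {f}$ and all of its derivatives up to order N vanish there as well (under mild growth assumptions on the derivatives of $\sigma $ near the origin). In the multilinear setting, the Fourier transform of $T_{\sigma }(f_1,\dots ,f_m)$ at the origin is instead an integral of $\sigma (\vec {\boldsymbol {\xi }})\widehat {f_1}(\xi _1)\cdots \widehat {f_m}(\xi _m)$ over the hyperplane $\xi _1+\dots +\xi _m=0$, on which the individual $\widehat {f_j}$ need not vanish; this is why no analogous cancellation is automatic and why the additional cancellation conditions corresponding to (1.14) are imposed in Theorem E.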
Theorem E [Reference Grafakos, Nakamura, Nguyen and Sawano16, Reference Grafakos, Nakamura, Nguyen and Sawano17].
Let
$0<p_1, \cdots , p_m \le \infty $
and
$0<p\le 1$
with
$1/p=1/p_1+\cdots +1/p_m$
. Let N be sufficiently large and
$\sigma $
satisfy Equation (1.5) for all multi-indices
$|\alpha _1|+\dots +|\alpha _m|\le N$
. Suppose that

for all multi-indices
$|\alpha |\le \frac {n}{p}-n$
, where
$a_j$
’s are
$(p_j,\infty )$
-atoms. Then we have

for
$f_1,\dots ,f_m\in \mathscr {S}_0({{\mathbb R}^n})$
.
Here, a $(p,\infty )$-atom is a similar but more general concept than the $H^{p}$-atoms defined in Section 2, and we adopt the convention that an $(\infty ,\infty )$-atom a simply means a function $a\in L^{\infty }({{\mathbb R}^n})$
with no cancellation condition. See [Reference Grafakos, Nakamura, Nguyen and Sawano16, Reference Grafakos, Nakamura, Nguyen and Sawano17] for the definition and properties of the
$(p,\infty )$
-atom.
We remark that Theorem E successfully shows the boundedness into
$H^p({{\mathbb R}^n})$
, but the optimal regularity conditions considered in Theorems C and D are not pursued at all, as the theorem requires N to be sufficiently large.
The aim of this paper is to establish the boundedness into
$H^p$
for trilinear multiplier operators, analogous to Equation (1.15), with the same regularity conditions as in Theorem D, which is significantly more difficult in general. Unfortunately, we do not obtain the desired results for general m-linear operators for
$m\ge 4$
and we will discuss some obstacles to this generalization in the appendix.
To state our main result, let us write
$\Psi :=\Psi ^{(3)}$
and in what follows, we will use the notation

for a function
$\sigma $
on
$({{\mathbb R}^n})^3$
. Let
$0<p\le 1$
, and we will consider trilinear multipliers
$\sigma $
satisfying

for all
$f_1,f_2,f_3\in \mathscr {S}_0({{\mathbb R}^n})$
. Then the main result is as follows:
Theorem 1. Let
$0<p_1,p_2,p_3<\infty $
and
$0<p\le 1$
with
$1/p=1/p_1+1/p_2+1/p_3$
. Suppose that

where J is an arbitrary subset of
$\{1,2,3\}$
. Let
$\sigma $
be a function on
$({{\mathbb R}^n})^3$
satisfying
$\mathcal {L}_s^2[\sigma ]<\infty $
and the vanishing moment condition (1.16). Then we have

for
$f_1,f_2,f_3\in \mathscr {S}_0({{\mathbb R}^n})$
.
We remark that
$(1/p_1,1/p_2,1/p_3)$
in Theorem 1 is contained in one of the following sets:

See Figure 1 for the regions
$\mathscr {R}_{{\mathrm {i}}}$
. Then the condition (1.17) becomes


Figure 1 The regions
$\mathscr {R}_{{\mathrm {i}}}$
,
$0\le {\mathrm {i}}\le 7$
.
In the proof of Theorem 1, we will mainly focus on the case
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_{{\mathrm {i}}}$
,
${\mathrm {i}}=1,2,3$
, in which
$s>n/p_{{\mathrm {i}}}+n/2$
is required. Then the remaining cases follow from interpolation methods. More precisely, via interpolation,





where the case
$1/p_1+1/p_2+1/p_3=1$
for
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_0$
will be treated separately. Here, a complex interpolation method will be applied, but the regularity condition on s will be kept fixed. Moreover, the index p will also be fixed so that the vanishing moment condition (1.16) is not affected in the process of the interpolation. For example, when
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_4$
, we set
$s>n/p_1+n/p_2$
and fix the index p with
$1/p=1/p_1+1/p_2+1/p_3$
. We also fix
$\sigma $
satisfying the vanishing moment condition (1.16). Now, we choose
$(1/p_1^0,1/p_2^0,1/p_3)\in \mathscr {R}_1$
and
$(1/p_1^1,1/p_2^1,1/p_3)\in \mathscr {R}_2$
so that


Then the two estimates

imply

The detailed arguments concerning the interpolation (for all the cases) will be provided in Section 3.
The estimates for
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_{{\mathrm {i}}}$
,
${\mathrm {i}}=1,2,3$
, will be restated in Proposition 3.1 below, and they will be proved throughout three sections (Sections 5–7). Since one of
$p_j$
’s is less than or equal to
$1$
, we benefit from the atomic decomposition for the Hardy space. Moreover, for the other indices, which are greater than 2, we employ the technique of the (variant)
$\varphi $
-transform, introduced by Frazier and Jawerth [Reference Frazier and Jawerth8, Reference Frazier and Jawerth9, Reference Frazier and Jawerth10] and Park [Reference Park26], which will be presented in Section 2. Then
$T_{\sigma }(f_1,f_2,f_3)$
can be decomposed in the form

where
$\mathrm {K}$
is a finite set, and then we will actually prove that each
$T^{\kappa }(f_1,f_2,f_3)$
satisfies the estimate

where
$\Vert u_{{\mathrm {i}}}\Vert _{L^{p_{{\mathrm {i}}}}({{\mathbb R}^n})}\lesssim \Vert f_{{\mathrm {i}}}\Vert _{H^{p_{{\mathrm {i}}}}({{\mathbb R}^n})}$
for
${\mathrm {i}}=1,2,3$
. Since the above estimate separates the left-hand side into three functions of x, we may apply Hölder’s inequality with exponents
$1/p=1/p_1+1/p_2+1/p_3$
to obtain, in view of Equation (1.2),

Such pointwise estimates (1.20) will be described in several lemmas in Sections 6 and 7, and their proofs, which form one of the key ingredients of this paper, will be given separately in Section 9.
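To spell out the Hölder step just mentioned: assuming that (1.20) bounds the smooth maximal function of $T^{\kappa }(f_1,f_2,f_3)$ pointwise by $u_1(x)u_2(x)u_3(x)$, one obtains
$$ \big \Vert T^{\kappa }(f_1,f_2,f_3)\big \Vert _{H^p({{\mathbb R}^n})}\lesssim \Vert u_1u_2u_3\Vert _{L^p({{\mathbb R}^n})}\le \Vert u_1\Vert _{L^{p_1}({{\mathbb R}^n})}\Vert u_2\Vert _{L^{p_2}({{\mathbb R}^n})}\Vert u_3\Vert _{L^{p_3}({{\mathbb R}^n})}\lesssim \Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}, $$
and summing over the finitely many $\kappa \in \mathrm {K}$ yields Equation (1.18).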
Notation
For a cube Q in
${{\mathbb R}^n}$
let
${\mathbf {x}}_Q$
be the lower left corner of Q and
$\ell (Q)$
be the side-length of Q. We denote by
$Q^*$
,
$Q^{**}$
and
$Q^{***}$
the concentric dilates of Q with
$\ell (Q^*)=10\sqrt {n}\ell (Q)$
,
$\ell (Q^{**})=\big (10\sqrt {n} \big )^2\ell (Q)$
and
$\ell (Q^{***})=\big (10\sqrt {n} \big )^3\ell (Q)$
. Let
$\mathcal {D}$
stand for the family of all dyadic cubes in
${{\mathbb R}^n}$
and
$\mathcal {D}_j$
be the subset of
$\mathcal {D}$
consisting of dyadic cubes of side-length
$2^{-j}$
. For each
${\mathbf {x}}\in {{\mathbb R}^n}$
and
$l\in {\mathbb {Z}}$
, let
$B_{{\mathbf {x}}}^l:=B({\mathbf {x}},100n2^{-l})$
be the ball of radius
$100n2^{-l}$
and center
${\mathbf {x}}$
. We use the notation
$\langle \cdot \rangle $
to denote both the inner product of functions and
$\langle y\rangle := (1+4\pi ^2|y|^2)^{1/2}$
for
$y\in \mathbb {R}^M$
,
$M\in {\mathbb {N}}$
. That is,
$\langle f, g \rangle =\int _{{{\mathbb R}^n}} f(x) \overline {g(x)}\,dx$
for two functions f and g, and
$\langle x_1\rangle :=(1+4\pi ^2|x_1|^2)^{1/2}$
,
$\langle (x_1,x_2)\rangle :=\big (1+4\pi ^2(|x_1|^2+|x_2|^2)\big )^{1/2}$
for
$x_1,x_2\in {{\mathbb R}^n}$
.
2 Preliminaries
2.1 Hardy spaces
Let
$\theta $
be a Schwartz function on
${{\mathbb R}^n}$
such that
$\mbox {supp}(\widehat {\theta })\subset \{\xi \in {{\mathbb R}^n}: |\xi |\le 2\}$
and
$\widehat {\theta }(\xi )=1$
for
$|\xi |\le 1$
. Let
$\psi :=\theta -2^{-n}\theta (2^{-1}\cdot )$
, and for each
$j\in {\mathbb {Z}}$
we define
$\theta _j:=2^{jn}\theta (2^j\cdot )$
and
$\psi _j:=2^{jn}\psi (2^j\cdot )$
. Then
$\{\psi _j\}_{j\in {\mathbb {Z}}}$
forms a Littlewood–Paley partition of unity, satisfying
$$ \mbox {supp}(\widehat {\psi _j})\subset \{\xi \in {{\mathbb R}^n}:2^{j-1}\le |\xi |\le 2^{j+1}\}\qquad \text{and}\qquad \sum _{j\in {\mathbb {Z}}}\widehat {\psi _j}(\xi )=1\quad \text{for }\xi \neq 0, $$
since $\widehat {\psi _j}(\xi )=\widehat {\theta }(2^{-j}\xi )-\widehat {\theta }(2^{-j+1}\xi )$ and the sum over j telescopes.
We define the convolution operators
${\Gamma }_j$
and
${\Lambda }_j$
by

The Hardy space
$H^p({{\mathbb R}^n})$
can be characterized with the (quasi-)norm equivalences

and

which constitutes the Littlewood–Paley characterization of Hardy spaces. In addition, when
$p\le 1$
, every
$f\in H^p({{\mathbb R}^n})$
can be decomposed as

where
$a_k$
’s are
$H^p$
-atoms having the properties that
$\mbox {supp}(a_k)\subset Q_k$
,
$\Vert a_k\Vert _{L^{\infty }({{\mathbb R}^n})}\le |Q_k|^{-1/p}$
for some cube
$Q_k$
,
$\int x^{\gamma }a_k(x)dx=0$
for all multi-indices
$|\gamma |\le M$
, and
$\big ( \sum _{k=1}^{\infty }|\lambda _k|^p\big )^{1/p}\lesssim \Vert f\Vert _{H^p({{\mathbb R}^n})},$
where M is a fixed integer satisfying
$M\ge [n/p-n]_+$
, which may in fact be taken arbitrarily large. Furthermore, each
$H^p$
-atom
$a_k$
satisfies

2.2 Maximal inequalities
Let
$\mathcal {M}$
denote the Hardy–Littlewood maximal operator, defined by

for a locally integrable function f on
${{\mathbb R}^n}$
, where the supremum ranges over all cubes Q containing x. For given
$0<r<\infty $
, we define
$\mathcal {M}_rf:=\big ( \mathcal {M}\big (|f|^r\big )\big )^{1/r}$
. Then it is well-known that

whenever
$r<p< \infty $
and
$r<q\le \infty $
. We note that for
$1\le r<\infty $

For
${\boldsymbol {m}} \in {{\mathbb Z}^n}$
and any dyadic cubes
$Q\in \mathcal {D}$
, we use the notation

Then we define the dyadic shifted maximal operator
$\mathcal {M}_{dyad}^{ {\boldsymbol {m}}}$
by

where the supremum is taken over all dyadic cubes Q containing x. It is clear that
$\mathcal {M}_{dyad}^{\mathbf {0}}f(x)\le \mathcal {M}f(x)$
and accordingly,
$\mathcal {M}_{dyad}^{\mathbf {0}}$
is bounded on
$L^p$
for
$p>1$
. In general, the following maximal inequality holds: For
$1<p<\infty $
and
${\boldsymbol {m}}\in {{\mathbb Z}^n}$
we have

The inequality (2.6) follows from repeated use of the one-dimensional inequality that appears in [Reference Stein31, Chapter II, §5.10], and we omit the detailed proof here. We refer to [Reference Lee, Heo, Hong, Lee, Park, Park and Yang22, Appendix] for the argument.
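Returning to the vector-valued maximal inequality (2.4): for definiteness, it is assumed to be the Fefferman–Stein inequality in the form
$$ \Big \Vert \Big (\sum _{j}\big (\mathcal {M}_rf_j\big )^q\Big )^{1/q}\Big \Vert _{L^p({{\mathbb R}^n})}\lesssim \Big \Vert \Big (\sum _{j}|f_j|^q\Big )^{1/q}\Big \Vert _{L^p({{\mathbb R}^n})},\qquad r<p<\infty ,\quad r<q\le \infty , $$
with the usual interpretation of the inner sums as suprema when $q=\infty $.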
2.3 Variants of
$\varphi $
-transform
For a sequence of complex numbers
${\boldsymbol {b}}:=\{b_Q\}_{Q\in \mathcal {D}}$
, we define

for
$0<p<\infty $
, where

Let
$\widetilde {\psi _j}:=\psi _{j-1}+\psi _j+\psi _{j+1}$
for
$j\in {\mathbb {Z}}$
. Observe that
$\widetilde {\psi _j}$
enjoys the properties that
$\mbox {supp}(\widehat {\widetilde {\psi _j}})\subset \{\xi \in {{\mathbb R}^n}: 2^{j-2}\le |\xi |\le 2^{j+2}\}$
and
$\psi _j=\psi _j\ast \widetilde {\psi _j}$
. Then we have the representation

where
$\psi ^Q(x):=|Q|^{{1}/{2}}\psi _j(x-{\mathbf {x}}_Q)$
,
$\widetilde {\psi }^Q(x):=|Q|^{{1}/{2}}\widetilde {\psi _j}(x-{\mathbf {x}}_Q)$
for each
$Q\in \mathcal {D}_j$
, and
$b_Q=\langle f,\widetilde {\psi }^Q\rangle $
. This implies that

where
$\mathcal {S}'/\mathcal {P}$
stands for the space of tempered distributions modulo polynomials. Moreover, in this case, we have

Therefore, the Hardy space
$H^p({{\mathbb R}^n})$
can be characterized by the discrete function space
$\dot {f}^{p,2}$
, in view of the equivalence in Equation (2.2). We refer to [Reference Frazier and Jawerth8, Reference Frazier and Jawerth9, Reference Frazier and Jawerth10] for more details.
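For later use, we also record the standard Frazier–Jawerth normalization of the discrete quasinorm, which is assumed to agree with the definition of $\dot {f}^{p,2}$ above up to constants:
$$ \Vert {\boldsymbol {b}}\Vert _{\dot {f}^{p,2}}:=\Big \Vert \Big (\sum _{Q\in \mathcal {D}}\big (|Q|^{-1/2}\,|b_Q|\,\chi _Q\big )^{2}\Big )^{1/2}\Big \Vert _{L^p({{\mathbb R}^n})}. $$
With this normalization, the characterization mentioned above takes the form $\Vert \{\langle f,\widetilde {\psi }^Q\rangle \}_{Q\in \mathcal {D}}\Vert _{\dot {f}^{p,2}}\approx \Vert f\Vert _{H^p({{\mathbb R}^n})}$.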
It is also known in [Reference Park26] that
${\Gamma }_jf$
has a representation analogous to (2.7) with an equivalence similar to (2.8), while
$f\not = \sum _{j\in {\mathbb {Z}}}{\Gamma }_jf$
in general. Let
$\widetilde {\theta }:=2^n\theta (2\cdot )$
and
$\widetilde {\theta _j}:=2^{jn}\widetilde {\theta }(2^j\cdot )=\theta _{j+1}$
so that
$\theta _j=\theta _j\ast \widetilde {\theta _{j}}$
. Let
$\theta ^Q(x):=|Q|^{{1}/{2}}\theta _j(x-{\mathbf {x}}_Q)$
,
$\widetilde {\theta }^Q(x):=|Q|^{{1}/{2}}\widetilde {\theta _{j}}(x-{\mathbf {x}}_Q)$
, and
$b_Q=\langle f,\widetilde {\theta }^Q\rangle $
for each
$Q\in \mathcal {D}_j$
. Then we have

and for
$0<p<\infty $
and
$0<q\le \infty $

We refer to [Reference Park26, Lemma 3.1] for more details.
3 Proof of Theorem 1: reduction and interpolation
The proof of Theorem 1 can be obtained by interpolating the estimates in the following propositions.
Proposition 3.1. Let
$0<p_1,p_2,p_3< \infty $
and
$0<p< 1$
with
$1/p=1/p_1+1/p_2+1/p_3$
. Suppose that
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_{1}\cup \mathscr {R}_2\cup \mathscr {R}_3$
and

Let
$\sigma $
be a function on
$({{\mathbb R}^n})^3$
satisfying
$\mathcal {L}_s^2[\sigma ]<\infty $
and the vanishing moment condition (1.16). Then we have

for
$f_1,f_2,f_3\in \mathscr {S}_0({{\mathbb R}^n})$
.
Proposition 3.2. Let
$0<p\le 1$
. Suppose that one of
$p_1,p_2,p_3$
is equal to p and the other two are infinity. Suppose that
$s>n/p+n/2$
. Let
$\sigma $
be a function on
$({{\mathbb R}^n})^3$
satisfying
$\mathcal {L}_s^2[\sigma ]<\infty $
and the vanishing moment condition (1.16). Then we have

for
$f_1,f_2,f_3\in \mathscr {S}_0({{\mathbb R}^n})$
.
We present the proof of Proposition 3.1 in Sections 5, 6 and 7 and that of Proposition 3.2 in Section 8. For now, we proceed with the following interpolation argument, simply assuming the above propositions hold.
Lemma 3.1. Let
$0<p_1^0, p_2^0, p_3^0\le \infty $
,
$0<p_1^1, p_2^1, p_3^1\le \infty $
and
$0<p^0,p^1<\infty $
. Suppose that

Then for any
$0<\theta <1$
,
$0<p_1,p_2,p_3\le \infty $
and
$0<p<\infty $
satisfying


we have

The proof of the lemma is essentially the same as that of [Reference Lee, Heo, Hong, Lee, Park, Park and Yang22, Lemma 2.4], so it is omitted here.
3.1 Proof of Equation (1.18) when
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_4\cup \mathscr {R}_5\cup \mathscr {R}_6$
We need to work only with
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_4$
since the other cases are just symmetric versions. In this case,
$2<p_3<\infty $
and as mentioned in Equation (1.19), the condition (1.17) is equivalent to

Now, choose
$\widetilde {p_1},\widetilde {p_2}<1$
such that

and thus

Let
$\epsilon _1, \epsilon _2>0$
be numbers with

and select
$q_1,q_2>2$
such that

Then we observe that

for some
$0<\theta <1$
. Let
$C_1:=(1/(\widetilde {p_1}-\epsilon _1),1/q_1,1/p_3)$
and
$C_2:=(1/q_2,1/(\widetilde {p_2}-\epsilon _2),1/p_3)$
. It is obvious that
$C_1\in \mathscr {R}_1$
,
$C_2\in \mathscr {R}_2$
, and thus it follows from Proposition 3.1 that

Finally, the assertion (1.18) for
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_4$
is derived by means of interpolation in Lemma 3.1. See Figure 2 for the interpolation.

Figure 2
$(1-\theta )\big (\frac {1}{\widetilde {p_1}-\epsilon _1},\frac {1}{q_1},\frac {1}{p_3} \big )+\theta \big ( \frac {1}{q_2},\frac {1}{\widetilde {p_2}-\epsilon _2},\frac {1}{p_3}\big )=(\frac {1}{p_1},\frac {1}{p_2},\frac {1}{p_3})\in \mathscr {R}_4$
.
3.2 Proof of Equation (1.18) when
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_0$
We first fix
$1/2<p<1$
such that
$1/p_1+1/p_2+1/p_3=1/p$
and assume that, in view of Equation (1.19),

Then we choose
$2<p_0<\infty $
such that
$1+1/2+1/p_0=1/p$
. Then it is clear that
$(1/p_1,1/p_2,1/p_3)$
is located inside the hexagon with the vertices
$(1,1/p_0,1/2)$
,
$(1,1/2,1/p_0)$
,
$(1/2,1,1/p_0)$
,
$(1/p_0,1,1/2)$
,
$(1/p_0,1/2,1)$
and
$(1/2,1/p_0,1)$
. Now, we choose a sufficiently small
$\epsilon>0$
and
$2<\widetilde {p_0}<\infty $
such that

and the point
$(1/p_1,1/p_2,1/p_3)$
is still inside the smaller hexagon with
$D_1:=(1,1/\widetilde {p_0},1/(2+\epsilon ))$
,
$D_2:=(1,1/(2+\epsilon ),1/\widetilde {p_0})$
,
$D_3:=(1/(2+\epsilon ),1,1/\widetilde {p_0})$
,
$D_4:=(1/\widetilde {p_0},1,1/ (2+\epsilon ))$
,
$D_5:=(1/\widetilde {p_0},1/(2+\epsilon ),1)$
, and
$D_6:=(1/(2+\epsilon ),1/\widetilde {p_0},1)$
. Now, Proposition 3.1 yields that

for
$(1/q_1,1/q_2,1/q_3)\in \{D_1,D_2,D_3,D_4,D_5,D_6 \}$
, as
$D_1,D_2\in \mathscr {R}_1$
,
$D_3,D_4\in \mathscr {R}_2$
and
$D_5,D_6\in \mathscr {R}_3$
. This implies, via interpolation in Lemma 3.1,

See Figure 3 for the interpolation.

Figure 3
$\big (\frac {1}{p_1},\frac {1}{p_2},\frac {1}{p_3}\big )\in \mathscr {R}_0$
.
For the case
$p=1$
, we interpolate the estimates in Proposition 3.2. To be specific, for any given
$0<p_1,p_2,p_3<\infty $
with
$1/p_1+1/p_2+1/p_3=1$
, the estimate (1.18) with
$p=1$
follows from interpolating

3.3 Proof of Equation (1.18) when
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_7$
Let
$0<p\le 1/2$
be such that
$1/p=1/p_1+1/p_2+1/p_3$
, and assume that

We choose
$0<p_0\le 1$
, satisfying
$1/p_0+1=1/p$
, so that

Then there exist
$\epsilon>0$
and
$2<q<\infty $
so that
$s>n/(p_0-\epsilon )+n/2$
and
$1/p=1/(p_0-\epsilon )+2/q$
. Let
$E_1:=\big (1/(p_0-\epsilon ),1/q,1/q\big )$
,
$E_2:=\big (1/q,1/(p_0-\epsilon ),1/q\big )$
, and
$E_3:=\big (1/q,1/q,1/(p_0-\epsilon )\big )$
. Then it is immediately verified that
$E_1\in \mathscr {R}_1$
,
$E_2\in \mathscr {R}_2$
,
$E_3\in \mathscr {R}_3$
, and

for some
$0<\theta _1,\theta _2,\theta _3<1$
with
$\theta _1+\theta _2+\theta _3=1$
. Therefore, Proposition 3.1 yields that

and using the interpolation method in Lemma 3.1, we conclude that the estimate (1.18) holds for
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_7$
. See Figure 4 for the interpolation.

Figure 4
$\theta _1\big (\frac {1}{p_0-\epsilon },\frac {1}{q},\frac {1}{q}\big )+\theta _2\big (\frac {1}{q},\frac {1}{p_0-\epsilon },\frac {1}{q}\big )+\theta _3\big (\frac {1}{q},\frac {1}{q},\frac {1}{p_0-\epsilon }\big )=\big (\frac {1}{p_1},\frac {1}{p_2},\frac {1}{p_3}\big )\in \mathscr {R}_7$
.
4 Auxiliary lemmas
This section is devoted to several technical results which will be used repeatedly in the proofs of Propositions 3.1 and 3.2.
Lemma 4.1. Let
$N\in \mathbb {N}$
and
$a\in \mathbb {R}^n$
. Suppose that a Schwartz function f, defined on
${{\mathbb R}^n}$
, satisfies

Then for any
$0\leq \epsilon \leq 1$
, there exists a constant
$C_{\epsilon }>0$
such that

Proof. Using Taylor's theorem for
$\phi _l$
, we write

Then it follows from the condition (4.1) that

For
$|\alpha |=N$
, we note that

and

Then by averaging both Equation (4.2) and Equation (4.3), we obtain that

which completes the proof.
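For the reader's convenience, the Taylor expansion invoked at the beginning of this proof is, in one standard form (a sketch; the remainder in the actual display may be written in a different but equivalent way),
$$ \phi _l(x-y)=\sum _{|\alpha |<N}\frac {\partial ^{\alpha }\phi _l(x-a)}{\alpha !}\,(a-y)^{\alpha }+N\sum _{|\alpha |=N}\frac {(a-y)^{\alpha }}{\alpha !}\int _0^1(1-t)^{N-1}\,\partial ^{\alpha }\phi _l\big (x-a+t(a-y)\big )\,dt, $$
where (4.1), presumably a vanishing moment condition on f of order N centered at a (this is how the lemma is used, for example together with the condition (5.3) in Section 7), makes the polynomial part vanish upon integration against f, leaving only the remainder to be estimated.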
Now, we recall that
$\widetilde {\psi _j}=\psi _{j-1}+\psi _j+\psi _{j+1}$
and
$\widetilde {\theta _j}=2^{n}\theta _j(2\cdot )$
, and then define
$\widetilde {{\Lambda }_j}g:=\widetilde {\psi _j}\ast g$
and
$\widetilde {{\Gamma }_j}g:=\widetilde {\theta _j}\ast g$
.
Lemma 4.2. Let
$2\le q<\infty $
,
$s>{n}/{q}$
, and
$L>n,s$
. Let
$\varphi $
be a function on
${{\mathbb R}^n}$
satisfying

For
$j\in \mathbb {Z}$
and for each
$Q\in \mathcal {D}_j$
, let

and for a Schwartz function g on
${{\mathbb R}^n}$
let

Then we have

Proof. For
$2\le q<\infty $
, we have

where Hölder’s inequality is applied if
$2<q<\infty $
. Clearly,

for sufficiently large
$M>n$
. Therefore, the left-hand side of the claimed estimate is less than a constant times

The
$L^2$
norm is dominated by

Note that

and thus the preceding term is controlled by a constant multiple of

Here, we used the facts that

and

Since the sum over
$R\in \mathcal {D}_j$
converges, we deduce

and thus the desired result follows.
Lemma 4.3. Let
$2\le p,q< \infty $
,
$s>{n}/\min {\{p,q\}}$
and
$L>n,s$
. For
$j\in {\mathbb {Z}}$
and
$Q\in \mathcal {D}_j$
, let

where g is a Schwartz function on
${{\mathbb R}^n}$
. Then we have

Proof. It is easy to verify that for
$Q\in \mathcal {D}_j$

and thus the left-hand side of Equation (4.5) is less than a constant multiple of

by virtue of the maximal inequality (2.4) with
$s>{n}/\min {\{p,q \}}$
. We see that

since
$\ell ^2\hookrightarrow \ell ^q$
,
$L>n$
and
$\langle 2^j(y-{\mathbf {x}}_Q)\rangle \gtrsim \langle 2^j(y-x)\rangle $
for
$Q\in \mathcal {D}_j$
and
$x\in Q$
. Using Equation (2.4) again, the left-hand side of Equation (4.5) is less than a constant times

Lemma 4.4. Let
$1 \le q < \infty $
,
$s>{n}/{q}$
and
$L>n,s$
. For
$j\in {\mathbb {Z}}$
and
$Q\in \mathcal {D}_j$
, let

where g is a Schwartz function on
${{\mathbb R}^n}$
. Then for
$1<p\le \infty $
with
$q\le p$
we have

Proof. For any
$j\in {\mathbb {Z}}$
and
$Q\in \mathcal {D}_j$
there exists a unique lattice point
${\boldsymbol {m}}_Q\in {{\mathbb Z}^n}$
such that
${\mathbf {x}}_Q=2^{-j}{\boldsymbol {m}}_Q$
. For any
$j\in {\mathbb {Z}}$
and
$x\in {{\mathbb R}^n}$
, let
$Q_{j,x}$
be the unique dyadic cube in
$\mathcal {D}_j$
containing x. Then we have the representations
${\mathbf {x}}_{Q_{j,x}}=2^{-j}{\boldsymbol {m}}_{Q_{j,x}}$
for
${\boldsymbol {m}}_{Q_{j,x}}\in {{\mathbb Z}^n}$
and

Now, for
$Q\in \mathcal {D}_j$
, we write

where the penultimate inequality follows from the fact that
$u_x\in [0,1)^n$
. This yields

Therefore, the left-hand side of Equation (4.6) is less than a constant times

as
$sq>n$
, where we applied Minkowski’s inequality if
$p> q$
and the maximal inequality (2.6). This completes the proof.
Lemma 4.5. Let a be an
$H^p$
-atom associated with Q, satisfying

and fix
$L_0>0$
. Then we have

and

Moreover, for
$1\leq r \leq \infty $
,

Proof. We will prove only the estimates for
${\Lambda }_ja$
, and exactly the same argument applies to
${\Gamma }_ja$
as well. Let us first assume
$2^j\ell (Q)\ge 1$
. Then we have

since
$|x-y|\gtrsim |x-{\mathbf {x}}_Q| $
for
$x\in (Q^*)^c$
and
$y\in Q$
.
Now, suppose that
$2^j\ell (Q)<1$
. By using the vanishing moment condition (4.7), we obtain

If
$x\in Q^*$
, then it is clear that

If
$x\in (Q^*)^c$
, then we have

which implies

This proves Equation (4.8).
Moreover, using the estimate (4.8), we have

This concludes the proof of Equation (4.10).
5 Proof of Proposition 3.1: reduction
5.1 Reduction via paraproduct
Without loss of generality, we may assume

We first note that
$T_{\sigma }(f_1,f_2,f_3)$
can be written as

We shall work only with the case
$j_1\ge j_2\ge j_3$
as other cases follow from a symmetric argument. When
$j_1\ge j_2\ge j_3$
, it is easy to verify that

where
$\sigma _j(\vec {\boldsymbol {\xi }}):=\sigma (\vec {\boldsymbol {\xi }}) \widehat {\Theta }(\vec {\boldsymbol {\xi }}/2^j)$
and
$\widehat {\Theta }(\vec {\boldsymbol {\xi }}):=\sum _{l=-2}^2\widehat {\Psi }(2^{l}\vec {\boldsymbol {\xi }})$
so that
$\widehat {\Theta }(\vec {\boldsymbol {\xi }})=1$
for
$2^{-2}\le |\vec {\boldsymbol {\xi }}|\le 2^{2}$
and
$\mbox {supp}(\widehat {\Theta })\subset \{\vec {\boldsymbol {\xi }}\in ({{\mathbb R}^n})^3:2^{-3}\le |\vec {\boldsymbol {\xi }}|\le 2^3\}$
. Then we observe that

by virtue of the triangle inequality. Moreover, using the fact that
$ {\Gamma }_jf=\sum _{k\le j}{\Lambda }_kf, $
we write

and especially, let
$T_{\sigma }^{(2)}:=T_{\sigma }^{(2),0}$
. Then it is enough to prove that

since the operator
$T_{\sigma }^{(2),k}$
,
$1\le k\le 9$
, can be handled in the same way as
$T_{\sigma }^{(2)}$
.
It should be remarked that the vanishing moment condition (1.16) now implies

5.2 Proof of (5.2) for
$\mu =1$
In this case, we may simply follow the arguments used in the proofs of Theorems B and D. The proof is based on the fact that if
$\widehat {g_k}$
is supported in
$\{\xi \in {{\mathbb R}^n}: C_0^{-1} 2^{k}\le |\xi |\le C_02^{k}\}$
for
$C_0>1$
, then

The proof of Equation (5.4) is elementary and standard, simply using the estimate

for all
$0<r<\infty $
and for some
$h\in {\mathbb {N}}$
, depending on
$C_0$
, and the maximal inequality (2.4). We refer to [Reference Yamazaki34, Theorem 3.6] for details.
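For definiteness, the estimate (5.4) is assumed to be the standard square-function bound: if each $\widehat {g_k}$ is supported in $\{\xi \in {{\mathbb R}^n}:C_0^{-1}2^k\le |\xi |\le C_02^k\}$, then
$$ \Big \Vert \sum _{k\in {\mathbb {Z}}}g_k\Big \Vert _{H^p({{\mathbb R}^n})}\lesssim \Big \Vert \Big (\sum _{k\in {\mathbb {Z}}}|g_k|^2\Big )^{1/2}\Big \Vert _{L^p({{\mathbb R}^n})},\qquad 0<p<\infty . $$
This is how it is applied below, with the pieces $T_{\sigma _k}\big ({\Lambda }_kf_1,{\Gamma }_{k-10}f_2,{\Gamma }_{k-10}f_3\big )$ playing the role of the $g_k$.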
By using the equivalence in Equation (2.2),

We see that the Fourier transform of
$T_{\sigma _k}\big ({\Lambda }_kf_1,{\Gamma }_{k-10}f_2,{\Gamma }_{k-10}f_3 \big )$
is supported in
$\big \{\xi \in {{\mathbb R}^n} : 2^{k-2}\leq |\xi |\leq 2^{k+2} \big \}$
and thus the estimate (5.4) yields that

It was already proved in [Reference Grafakos, Miyachi, Nguyen and Tomita14, (3.14)] that the preceding expression is dominated by the right-hand side of Equation (5.2) for
$s>n/\min {\{p_1,p_2,p_3\}}+n/2$
, where we remark that
$\min {\{p_1,p_2,p_3\}}\le 1$
. This proves Equation (5.2) for
$\mu =1$
.
5.3 Proof of Equation (5.2) for
$\mu =2$
Recall that

and observe that

where
$\ast _{3n}$
means the convolution on
$\mathbb {R}^{3n}$
.
It suffices to consider the case when
$(1/p_1,1/p_2,1/p_3)$
belongs to
$\mathscr {R}_1$
or
$\mathscr {R}_3$
, as the remaining case is symmetrical to the case
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_1$
, in view of Equation (5.5). We will mainly focus on the case
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_1$
, while simply providing a short description for the case
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_3$
in the remark below, as almost the same arguments apply in that case.
Therefore, we now assume
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and in turn, suppose that
$s>n/p_1+n/2$
. By using the atomic decomposition in Equation (2.3), the function
$f_1\in H^{p_1}({{\mathbb R}^n})$
can be written as
$f_1=\sum _{k=1}^{\infty }\lambda _k a_k$
, where
$a_k$
’s are
$H^{p_1}$
-atoms associated with cubes
$Q_k$
, and

As mentioned before, we may assume that M is sufficiently large and
$\int {x^{\gamma }a_k(x)}dx=0$
holds for all multi-indices
$|\gamma |\le M$
.
By the definition in Equation (1.2), we have

and thus we need to prove that

The left-hand side is less than the sum of

and

recalling that
$Q_k^{***}$
is the dilate of
$Q_k$
by a factor
$(10\sqrt {n})^3$
. The two terms
$\mathcal {I}$
and
$\mathcal {J}$
will be treated separately in the next two sections.
Remark. When
$(1/p_1,1/p_2,1/p_3)\in \mathscr {R}_3$
(that is,
$0<p_3\le 1, ~2<p_1,p_2< \infty $
), we need to prove

where
$\widetilde {a_k}$
is the
$H^{p_3}$
-atom associated with
$f_3$
. This is actually, via symmetry, equivalent to the estimate that for
$0<p_1\le 1$
and
$2<p_2,p_3< \infty $
,

where
$a_k$
is the
$H^{p_1}$
-atom for
$f_1$
. The proof of Equation (5.8) is almost the same as that of Equation (5.7), which will be discussed in Sections 6 and 7, so it will not be pursued in this paper; we only remark that Equation (4.9) will be needed rather than Equation (4.8), and that the estimate
$\big \Vert \big \{ {\Gamma }_ja_k\big \}_{j\in {\mathbb {Z}}}\big \Vert _{L^r(\ell ^{\infty })}\sim \Vert a_k\Vert _{H^r({{\mathbb R}^n})}$
will be required in place of the equivalence
$\big \Vert \big \{ {\Lambda }_ja_k\big \}_{j\in {\mathbb {Z}}}\big \Vert _{L^r(\ell ^{2})}\sim \Vert a_k\Vert _{H^r({{\mathbb R}^n})}$
.
6 Proof of Proposition 3.1: estimate for
$\mathcal {I}$
For the estimation of
$\mathcal {I}$
, we need the following lemma whose proof will be given in Section 9.
Lemma 6.1. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and for
$x\in {{\mathbb R}^n}$

This lemma, together with Hölder’s inequality, clearly shows that

7 Proof of Proposition 3.1: estimate for
$\mathcal {J}$
Recall that for each
$Q_k$
and
$l\in {\mathbb {Z}}$
,
$B_{{\mathbf {x}}_{Q_k}}^l=B({\mathbf {x}}_{Q_k},100n2^{-l})$
stands for the ball of radius
$100n2^{-l}$
and center
${\mathbf {x}}_{Q_k}$
. Simply writing
$B_k^l:=B_{{\mathbf {x}}_{Q_k}}^l$
, we bound
$\mathcal {J}$
by the sum of

and

and treat them separately.
7.1 Estimate for
$\mathcal {J}_1$
Using the representations in Equations (2.7) and (2.9), we write

where we recall
$\psi ^P(x)=|P|^{{1}/{2}}\psi _j(x-{\mathbf {x}}_P)$
and
$\theta ^R(x)=|R|^{{1}/{2}}\theta _j(x-{\mathbf {x}}_R)$
for
$P,R\in \mathcal {D}_j$
. Then it follows from Equations (2.8), (2.10), (2.1) and (2.2) that

and

We write

where

and

Then we have

where

Now, we will show that

7.1.1 Proof of Equation (7.5) for
$\nu =1$
We further decompose
$\mathcal {U}_1(x,y)$
as

where

and accordingly, we define

Then we claim the following lemma.
Lemma 7.1. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_1^{\mathrm {in}/\mathrm {out}}$
be defined as in Equation (7.6). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1^{\mathrm {in}}$
,
$u_1^{\mathrm {out}}$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and for
$x\in {{\mathbb R}^n}$

The proof of Lemma 7.1 will be given in Section 9. Taking the lemma for granted and using Hölder’s inequality, we can easily show that

7.1.2 Proof of Equation (7.5) for
$\nu =2$
For
$P\in \mathcal {D}$
and
$l\in {\mathbb {Z}}$
let
$B_P^l:=B_{{\mathbf {x}}_P}^l=B({\mathbf {x}}_P,100n2^{-l})$
. By introducing

we write
$\mathcal {U}_2=\mathcal {U}_2^{1,\mathrm {in}}+\mathcal {U}_2^{1,\mathrm {out}}+\mathcal {U}_2^{2,\mathrm {in}}+\mathcal {U}_2^{2,\mathrm {out}}$
and consequently,

where

Then we apply the following lemma that will be proved in Section 9.
Lemma 7.2. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_2^{\eta ,\mathrm {in}/\mathrm {out}}$
be defined as in Equation (7.8). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1^{\mathrm {in}}$
,
$u_1^{\mathrm {out}}$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and for each
$\eta =1,2$

Then Lemma 7.2 and Hölder’s inequality yield that
$\mathcal {J}_1^2$
is controlled by the sum of four terms of the form

which is obviously less than a constant. This proves Equation (7.5) for
$\nu =2$
.
7.1.3 Proof of Equation (7.5) for
$\nu =3$
This case is essentially symmetrical to the case
$\nu =2$
. For
$R\in \mathcal {D}$
and
$l\in {\mathbb {Z}}$
, let
$B_R^l:=B_{{\mathbf {x}}_R}^l=B({\mathbf {x}}_R,100n2^{-l})$
. Let

and then we write

where

Now, Equation (7.5) for
$\nu =3$
follows from the lemma below.
Lemma 7.3. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_3^{\eta ,\mathrm {in}/\mathrm {out}}$
be defined as in Equation (7.10). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1^{\mathrm {in}}$
,
$u_1^{\mathrm {out}}$
,
$u_2$
, and
$u_3$
on
${{\mathbb R}^n}$
such that

and for each
$\eta =1,2$

The proof of the lemma will be provided in Section 9.
7.1.4 Proof of Equation (7.5) for
$\nu =4$
In this case, we divide
$\mathcal {U}_4$
into eight types depending on whether x belongs to each of
$B_P^l$
and
$B_R^l$
and whether
${\Lambda }_ja_k$
is supported in
$Q_k^*$
. Indeed, let

and we define

for
$\eta =1,2,3,4$
.
Then we use the following lemma to obtain the desired result.
Lemma 7.4. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_4^{\eta ,\mathrm {in}/\mathrm {out}}$
be defined as in Equation (7.12). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1^{\mathrm {in}}$
,
$u_1^{\mathrm {out}}$
,
$u_2$
, and
$u_3$
on
${{\mathbb R}^n}$
such that

and for each
$\eta =1,2,3,4,$

We will prove the lemma in Section 9.
7.2 Estimate for
$\mathcal {J}_2$
Let
$x\in (Q_k^{***})^c\cap B_{k}^{l}$
. For
$\nu =1,2,3,4$
, let
$\Omega _{\nu }(P,R)$
be defined as in Equation (7.3). Then as in the proof of the estimate for
$\mathcal {J}_1$
, we consider the four cases:
$x \in \Omega _{1}(P,R)$
,
$x \in \Omega _{2}(P,R)$
,
$x \in \Omega _{3}(P,R)$
and
$x \in \Omega _{4}(P,R)$
. That is, for each
$\nu =1,2,3,4$
, let
$\mathcal {U}_{\nu }$
be defined as in Equation (7.4) and

Then it suffices to show that for each
$\nu =1,2,3,4$
,

7.2.1 Proof of Equation (7.15) for
$\nu =1$
In this case, the proof can be simply reduced to the following lemma, which will be proved in Section 9.
Lemma 7.5. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_1$
be defined as in Equation (7.4). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and for
$x\in {{\mathbb R}^n}$

Then it follows from Hölder’s inequality that

7.2.2 Proof of Equation (7.15) for
$\nu =2$
For
$P\in \mathcal {D}$
and
$l\in {\mathbb {Z}}$
, let
$B_P^l:=B_{{\mathbf {x}}_P}^l$
be the ball of center
${\mathbf {x}}_P$
and radius
$100n2^{-l}$
as before. We define

and write

where

Then we need the following lemmas.
Lemma 7.6. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
and let
$\mathcal {U}_2^{1}$
be defined as in (7.17). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

Lemma 7.7. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_2^{2}$
be defined as in Equation (7.17). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

The above lemmas will be proved in Section 9. Using Lemmas 7.6 and 7.7, we obtain

which finishes the proof of Equation (7.15) for
$\nu =2$
.
7.2.3 Proof of Equation (7.15) for
$\nu =3$
We use the notation
$B_R^l:=B_{{\mathbf {x}}_R}^l$
for
$R\in \mathcal {D}$
and
$l\in {\mathbb {Z}}$
as before and write

where

and

As in the proof of the case
$\nu =2$
, it suffices to prove the following two lemmas.
Lemma 7.8. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_3^{1}$
be defined as in Equation (7.19). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

Lemma 7.9. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_3^{2}$
be defined as in Equation (7.19). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

The proof of Lemmas 7.8 and 7.9 will be provided in Section 9.
7.2.4 Proof of Equation (7.15) for
$\nu =4$
Let
$B_P^l:=B_{{\mathbf {x}}_P}^l$
and
$B_R^l:=B_{{\mathbf {x}}_R}^l$
for
$P,R\in \mathcal {D}$
and
$l\in {\mathbb {Z}}$
, and let
$\Xi _{\eta }(P,R,l)$
be defined as in Equation (7.11). Now, we write

where

Accordingly, we define

Then we obtain the desired result from the following lemmas.
Lemma 7.10. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_4^{\eta }$
,
$\eta =1,2,3$
, be defined as in Equation (7.24). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and for each
$\eta =1,2,3$

Lemma 7.11. Let
$0<p_1\le 1$
and
$2<p_2,p_3<\infty $
, and let
$\mathcal {U}_4^{4}$
be defined as in Equation (7.24). Suppose that
$\Vert f_1\Vert _{H^{p_1}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{p_2}({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{p_3}({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p_1+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

The proof of the lemmas will be given in Section 9.
8 Proof of Proposition 3.2
By symmetry, we only need to deal with the case when
$0<p_1=p\le 1$
and
$p_2=p_3=\infty $
. As before, we assume that
$\| f_1 \|_{H^p({{\mathbb R}^n})} = \|f_2 \|_{L^\infty ({{\mathbb R}^n})} = \| f_3 \|_{L^\infty ({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p+n/2$
. In this case, we do not decompose the frequencies of
$f_2, f_3$
and only make use of the atomic decomposition on
$f_1$
. Let
$a_k$
’s be
$H^p$
-atoms associated with
$Q_k$
so that
$f_1=\sum _{k=1}^{\infty }\lambda _k a_k$
and
$\big (\sum _{k=1}^{\infty }|\lambda _k|^p\big )^{1/p}\lesssim 1$
. Then we will prove that

and

8.1 Proof of Equation (8.1)
Since

the left-hand side of Equation (8.1) is controlled by

Using Hölder’s inequality, the
$L^2$
boundedness of
$\mathcal {M}$
and Theorem D, we have

and thus Equation (8.1) follows from
$\big (\sum _{k=1}^{\infty }|\lambda _k|^p\big )^{1/p}\lesssim 1$
.
8.2 Proof of Equation (8.2)
Let
$B_k^l=B({\mathbf {x}}_{Q_k},100 n2^{-l})$
as before. We now decompose the left-hand side of Equation (8.2) as the sum of


and thus we need to show that

Actually, the proof of these estimates will be complete once we have verified the following lemmas.
Lemma 8.1. Let
$0<p\le 1$
. Suppose that
$\Vert f_1\Vert _{H^{p}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{\infty }({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{\infty }({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

Lemma 8.2. Let
$0<p\le 1$
. Suppose that
$\Vert f_1\Vert _{H^{p}({{\mathbb R}^n})}=\Vert f_2\Vert _{H^{\infty }({{\mathbb R}^n})}=\Vert f_3\Vert _{H^{\infty }({{\mathbb R}^n})}=1$
and
$\mathcal {L}_s^2[\sigma ]=1$
for
$s>n/p+n/2$
. Then there exist nonnegative functions
$u_1$
,
$u_2$
and
$u_3$
on
${{\mathbb R}^n}$
such that

and

The proof of the two lemmas will be given in Section 9.
9 Proof of the key lemmas
9.1 Proof of Lemma 6.1
Let
$1<r<2$
such that
$s>{3n}/{r}>{3n}/{2}$
, and we claim the pointwise estimate

Indeed, choosing t so that
${3n}/{r}<3t<s$
, we apply Hölder’s inequality to bound the left-hand side of Equation (9.1) by

We observe that

using the Hausdorff–Young inequality, Equation (5.1) and the inclusion

where A is a ball of a constant radius, whose proof is contained in [Reference Grafakos and Park19, (1.8)]. Applying Equation (2.5) to the remaining three
$L^r$
norms, we finally obtain Equation (9.1).
Now, we choose
$\widetilde {r}$
and q such that
$2<\widetilde {r}<p_2,p_3$
and
${1}/{q}+{2}/{\widetilde {r}}=1$
. Finally, using the estimate (9.1) and Hölder’s inequality, we have

where we choose

and this proves (6.1). Moreover,

where the last inequality follows from Equation (5.6) and the estimate

for
$q<r_0<\infty $
. Here, we applied Hölder’s inequality, the maximal inequality (2.4), the equivalence in (2.2) and properties of the
$H^{p_1}$
-atom
$a_k$
. It is also easy to verify

and

9.2 Proof of Lemma 7.1
Since

we can choose
$s_1,s_2,s_3$
such that
$s_1>{n}/{p_1}-{n}/{2}$
,
$s_2,s_3>{n}/{2}$
, and
$s=s_1+s_2+s_3$
.
Using the estimates

we have

We observe that for
$|x-y|\le 2^{-l}$
,
$x\in (Q_k^{***})^c\cap (B_k^l)^c$
and
$z_1\in Q_k^*$
,

and thus, by using Lemma 4.5,

for sufficiently large M, where

This proves that

and therefore, we obtain

Similar to Equation (9.2), we write

Instead of Equation (9.3), we make use of the estimate

for
$|x-y|\le 2^{-l}$
and
$x\in (Q_k^{***})^c\cap (B_k^l)^c$
. Then, using the argument that led to Equation (9.5), we have

where M,
$L_0$
are sufficiently large numbers and

Now, we deduce

According to Equations (9.6) and (9.10), the estimate (7.7) follows from taking

It is clear that


in view of Equations (7.1) and (7.2). To estimate
$u_1^{\mathrm {in}}$
and
$u_1^{\mathrm {out}}$
, we note that

where we applied Minkowski’s inequality and a change of variables, and similarly,

for
$L_0>s+n$
. Now, we have

and the integral is dominated by

The first term is no more than a constant times
$\ell (Q_k)^{-p_1(s_1-({n}/{p_1}-{n}/{2}))}$
, and the second one is bounded by

due to Equation (9.13). This proves

In a similar way, together with Equation (9.14), we can also prove

choosing
$M>L_0-{3n}/{2}$
.
9.3 Proof of Lemma 7.2
As in the proof of Lemma 7.1, we pick
$s_1,s_2,s_3$
satisfying
$s_1>{n}/{p_1}-{n}/{2}$
,
$s_2,s_3>{n}/{2}$
, and
$s=s_1+s_2+s_3>n/p_1+n/2$
.
We first consider the case
$\eta =1$
. For
$x\in P^c\cap (B_P^l)^c$
and
$|x-y|\le 2^{-l}$
, we have

By using

we have

Using Equations (9.3) and (9.17) and Lemma 4.5, the integral in the preceding expression is bounded by

for sufficiently large
$M>0$
, where
$\widetilde {\psi ^P}(z_2):=\langle 2^j(z_2-{\mathbf {x}}_P)\rangle ^{s_2}\psi ^P(z_2)$
for
$P\in \mathcal {D}_j$
and
$I_{k,j,s}^{\mathrm {in}}$
is defined as in Equation (9.4). Note that

and thus it follows from Lemma 4.2 that the
$L^2$
norm in the last displayed expression is dominated by

This yields that

Similarly, using Equations (9.7) and (9.17), Lemma 4.5 and Lemma 4.2, we have

where
$I_{k,j,s}^{\mathrm {out}}$
is defined as in Equation (9.9).
When
$\eta =2$
, we use the inequality

for
$x\in (B_k^l)^c \cap B_{P}^l$
. Then, similar to Equation (9.18), we have

and the integral is dominated by a constant times

due to Equations (9.3) and (9.22), Lemma 4.5 and Lemma 4.2, where
$I_{k,j,s}^{\mathrm {in}}$
and
$\mathscr {B}_P^2(f_2)$
are defined as in Equations (9.4) and (9.19). Therefore,

Similarly, we can also prove that

Combining Equations (9.20), (9.21), (9.23) and (9.24), the estimate (7.9) holds with

Clearly, as in Equations (9.15), (9.16) and (9.12),

and Lemma 4.3 proves that

9.4 Proof of Lemma 7.3
The proof is almost the same as that of Lemma 7.2. By letting
$M>0$
be sufficiently large and exchanging the role of terms associated with
$f_2$
and
$f_3$
in the estimate (9.18), we may obtain

where
$I_{k,j,s}^{\mathrm {in}}$
is defined as in Equation (9.4),
$\widetilde {\theta ^R}(z_3):=\langle 2^j(z_3-{\mathbf {x}}_R)\rangle ^{s_3}\theta ^R(z_3)$
for
$R\in \mathcal {D}_j$
and

Similarly,

where
$I_{k,j,s}^{\mathrm {out}}$
is defined as in Equation (9.9).
For the case
$\eta =2$
, we use the fact that for
$x\in (B_k^l)^c\cap B_R^l$
,

instead of Equation (9.22). Then we have

and

which are analogous to Equations (9.25) and (9.27).
Then Lemma 7.3 follows from Equations (9.15), (9.16) and (9.11) and Lemma 4.4 by choosing

9.5 Proof of Lemma 7.4
Let
$I_{k,j,s}^{\mathrm {in}}$
,
$I_{k,j,s}^{\mathrm {out}}$
,
$\mathscr {B}_P^2(f_2)$
and
$\mathscr {B}_R^3(f_3)$
be defined as before. Let
$M>0$
be a sufficiently large number. We claim the pointwise estimates that for each
$\eta =1,2,3,4$
,

The proof of the above claim is a repetition of the arguments used in the proofs of Lemmas 7.2 and 7.3, so we omit the details. We now take

9.6 Proof of Lemma 7.5
We choose
$0<\epsilon <1$
such that

We note that

By using Lemma 4.1 with the vanishing moment condition (5.3), we have

where

and

Now, the left-hand side of Equation (7.16) is dominated by
$\mathscr {J}^{\mathrm {in}}(x)+\mathscr {J}^{\mathrm {out}}(x)$

To estimate
$\mathscr {J}^{\mathrm {in}}$
, we first see that

using the Cauchy–Schwarz inequality with
$s>{3n}/{2}$
, and thus

by using the fact that

For the other term
$\mathscr {J}^{\mathrm {out}}$
, we choose
$s_1$
such that

which is possible due to Equation (9.28), and
$s_2,s_3>{n}/{2}$
such that

We observe that, for
$y\in (Q_k^{**})^c$
,

where Lemma 4.5 is applied in the first inequality. Here, M and
$L_0$
are sufficiently large numbers such that
$L_0-s_1>n$
and
$M-L_0+3n/2>0$
. By letting

we have

and the integral is, via the Cauchy–Schwarz inequality, less than

This yields that

since

Therefore, by using the Cauchy–Schwarz inequality

where the last inequality holds due to
$s_1>{n}/{2}$
and
$M-L_0+{3n}/{2}>0$
.
In conclusion, the estimate (7.16) can be derived from Equations (9.29) and (9.34), using the choices of

It is obvious from Equations (7.1) and (7.2) that
$\Vert u_2\Vert _{L^{p_2}({{\mathbb R}^n})}, \Vert u_3\Vert _{L^{p_3}({{\mathbb R}^n})}\lesssim 1$
. Furthermore,

This completes the proof.
9.7 Proof of Lemma 7.6
Choose
$s_1$
,
$s_2$
, and
$s_3$
such that
$s_1>{n}/{p_1}-{n}/{2}$
,
$s_2>{n}/{2}$
,
$s_3>{n}/{2}$
and
$s=s_1+s_2+s_3$
. For
$x\in B_k^l\cap (B_P^l)^c$
and
$|x-y|\le 2^{-l}$
, we have

This implies

where

By using the Cauchy–Schwarz inequality and Lemma 4.2, we obtain

where
$\mathscr {B}_P^2(f_2)$
is defined as in Equation (9.19) for some
$L>n,s_2$
, and

Now, we choose

Clearly, Equation (7.18) holds and
$\Vert u_2\Vert _{L^{p_2}({{\mathbb R}^n})}, \Vert u_3\Vert _{L^{p_3}({{\mathbb R}^n})} \lesssim 1$
due to Lemma 4.3 and Equation (7.2). In addition,

and the integral is controlled by

by using Hölder’s inequality and the
$L^2$
boundedness of
$\mathcal {M}$
. It follows from Minkowski’s inequality and Lemma 4.5 that

and this finally yields that

9.8 Proof of Lemma 7.7
For
$x\in B_k^l\cap B_P^l$
,

Since
$s>{n}/{p_1}+{n}/{2}$
, there exist
$0<\epsilon _0,\epsilon _1<1$
such that

Choose
$t_1$
and
$t_2$
satisfying
$t_1>{n}/{p_1}$
,
$t_2>{n}/{p_2}$
and
$t_1+t_2=\big [ {n}/{p_1}+{n}/{p_2}\big ]+\epsilon _0$
, and let
$N_0:=\big [ {n}/{p_1}+{n}/{p_2}\big ]-n $
. Then Lemma 4.1, together with the vanishing moment condition (5.3), and the estimate (9.40) yield that

where

This yields

Using Hölder’s inequality with
$\frac {1}{2}+\frac {1}{(1/p_2'-1/2)^{-1}}+\frac {1}{p_2}=1$
and Lemma 4.2, we see that

because
$s-n-\epsilon _1=s-t_1-t_2-\epsilon _1+N_0+\epsilon _0$
and
$s-t_1-t_2-\epsilon _1>n(1/p_2'-1/2)$
. This shows that the integral in the right-hand side of Equation (9.41) is dominated by a constant times

where
$\mathscr {B}_P^2(f_2)$
is defined as in Equation (9.19) and M is sufficiently large. Consequently,

Now, we are done with

as
$\Vert u_{{\mathrm {i}}}\Vert _{L^{p_{{\mathrm {i}}}}({{\mathbb R}^n})}\lesssim 1$
,
${\mathrm {i}}=1,2,3$
, follow from Lemma 4.3, Equation (7.2) and the argument that led to (9.35) with
$t_1>n/p_1$
.
9.9 Proof of Lemma 7.8
Let
$s_1$
,
$s_2$
and
$s_3$
satisfy
$s_1>{n}/{p_1}-{n}/{2}$
,
$s_2>{n}/{2}$
,
$s_3>{n}/{2}$
and
$s=s_1+s_2+s_3$
. By mimicking the argument that led to Equation (9.37) with

for
$x\in B_k^l\cap (B_R^l)^c$
and
$|x-y|\le 2^{-l}$
, instead of Equation (9.36), we can prove

where
$J_{k,j,s}^1$
and
$\mathscr {B}_R^3(f_3)$
are defined as in Equations (9.38) and (9.26) for some
$L>n,s_3$
.
Now, let

Then the estimate (7.21) is clear and it follows from Equations (9.39) and (7.1) and Lemma 4.4 that Equation (7.20) holds.
9.10 Proof of Lemma 7.9
Let
$0<\epsilon _0,\epsilon _1<1$
satisfy

and select
$t_1,t_3$
so that
$t_1>{n}/{p_1}$
,
$t_3>{n}/{p_3}$
and
$t_1+t_3=\big [{n}/{p_1}+{n}/{p_3} \big ]+\epsilon _0$
. Let
$N_0:= \big [ {n}/{p_1}+{n}/{p_3}\big ]-n$
and
$\mathscr {B}_R^3(f_3)$
be defined as in Equation (9.26). Then, as the counterpart of Equation (9.42), we can get

where the embedding
$\ell ^2\hookrightarrow \ell ^{\infty }$
is applied. By taking

9.11 Proof of Lemma 7.10
The proof is almost the same as that of Lemmas 7.6 and 7.8. Let
$s_1$
,
$s_2$
and
$s_3$
be numbers such that
$s_1>{n}/{p_1}-{n}/{2}$
,
$s_2>{n}/{2}$
,
$s_3>{n}/{2}$
and
$s=s_1+s_2+s_3$
. We claim that for
$\eta =1,2,3$
,

where
$J_{k,j,s}^1$
,
$\mathscr {B}_P^2(f_2)$
and
$\mathscr {B}_R^3(f_3)$
are defined as in Equations (9.38), (9.19) and (9.26), respectively. Then we have Equation (7.25) with the choice

The estimates for
$u_1,u_2,u_3$
follow from Equation (9.39), Lemma 4.3 and Lemma 4.4.
Now, we return to the proof of Equation (9.43). For
$x\in B_k^l\cap (B_P^l)^c\cap (B_R^l)^c$
and
$|x-y|\le 2^{-l}$
, we have

Then we have

where

Now, using the method similar to that used in the proof of Equation (9.37), we obtain Equation (9.43) for
$\eta =1$
.
For the case
$\eta =2$
, we use the fact, instead of Equation (9.44), that for
$x\in B_k^l\cap (B_P^l)^c\cap B_R^l$
and
$|x-y|\le 2^{-l}$
,

This shows that

where

and then Equation (9.43) for
$\eta =2$
follows.
Similarly, we can prove that for
$x\in B_k^l\cap B_P^l\cap (B_R^l)^c$
and
$|x-y|\le 2^{-l}$
,

where

This proves (9.43) for
$\eta =3$
.
9.12 Proof of Lemma 7.11
We first note that

for
$x\in B_k^l\cap B_P^l\cap B_R^l$
. Since
${n}/{p}<s-\big ({n}/{2}-{n}/{p_2}-{n}/{p_3} \big )$
, there exist
$0<\epsilon _0,\epsilon _1<1$
such that

Choose
$t_1$
,
$t_2$
, and
$t_3$
satisfying
$t_1>{n}/{p_1}$
,
$t_2>{n}/{p_2}$
,
$t_3>{n}/{p_3}$
, and
$t_1+t_2+t_3=\big [ {n}/{p}\big ]+\epsilon _0$
and let
$N_0:=\big [ {n}/{p}\big ]-n $
. Then it follows from Lemma 4.1 and the estimate (9.45) that

where

This yields that

Since
$s-\big [{n}/{p} \big ]+{n}/{2}-\epsilon _0-\epsilon _1>\big ({n}/{2}-{n}/{p_2} \big )+\big ({n}/{2}-{n}/{p_3}\big )$
, there exist
$\mu _2$
and
$\mu _3$
such that
$\mu _2>{n}/{2}-{n}/{p_2}$
,
$\mu _3>{n}/{2}-{n}/{p_3}$
, and
$\mu _2+\mu _3=s-\big [ {n}/{p}\big ]+{n}/{2}-\epsilon _0-\epsilon _1$
. Using Hölder’s inequality with

we have

and then Lemma 4.2 yields that the preceding expression is less than a constant times

because
$\mu _2>n(1/p_2'-1/2)$
and
$\mu _3>n(1/p_3'-1/2)$
, where
$\mathscr {B}_P^2(f_2)$
and
$\mathscr {B}_R^3(f_3)$
are defined as in Equations (9.19) and (9.26).
Now, the integral in the right-hand side of Equation (9.46) is dominated by a constant times

and this is no more than

where
$N_0+\epsilon _0+\mu _2+\mu _3=s-\frac {n}{2}-\epsilon _1$
. Hence, it follows that

Now, let

9.13 Proof of Lemma 8.1
Using the fact that
$\sum _{j\in {\mathbb {Z}}}\widehat {\Psi }(2^{-j}\vec {\boldsymbol {\xi }})=1$
for
$\vec {\boldsymbol {\xi }}\not =\vec {\boldsymbol {0}}$
, we can write

where
$\widetilde {\sigma _j}(\vec {\boldsymbol {\xi }}):=\sigma (\vec {\boldsymbol {\xi }})\widehat {\Psi }(2^{-j}\vec {\boldsymbol {\xi }})$
so that

Moreover, due to the support of
$\widetilde {\sigma _j}$
,

Now, the left-hand side of Equation (8.3) is less than

Let
$s_1,s_2,s_3$
be numbers such that
$s_1>n/p-n/2$
,
$s_2,s_3>n/2$
, and
$s=s_1+s_2+s_3$
. For
$x\in (Q^{***})^c\cap (B_k^l)^c$
and
$|x-y| \le 2^{-l}$
,

In the same argument as in the proof of Equations (9.5) and (9.8), with Equation (4.8) replaced by Equation (4.9), we can get

where
$I_{k,j,s}^{\mathrm {in}}$
and
$I_{k,j,s}^{\mathrm {out}}$
are defined as in Equations (9.4) and (9.9), respectively, and

This yields that

and thus Equation (8.3) follows from choosing
$u_2(x)=u_3(x):=1$
and

Now, it is straightforward that
$\Vert u_1\Vert _{L^p({{\mathbb R}^n})}$
is less than

and the
$L^p$
-norm in the preceding expression is less than

where Equations (9.13) and (9.14) are applied in the penultimate inequality for sufficiently large M. This concludes that

9.14 Proof of Lemma 8.2
Select
$0<\epsilon <1$
such that

Then Lemma 4.1 yields that

where we applied
$2^l\lesssim |x-{\mathbf {x}}_{Q_k}|$
for
$x\in B_k^l$
in the penultimate inequality and


Now, we claim that

Once Equation (9.49) holds, we obtain

which implies (8.4) with
$u_2(x)=u_3(x):=1$
and

Moreover,

because
$N_p+n+\epsilon>n/p$
.
Therefore, it remains to show Equation (9.49). Indeed, it follows from Theorem D that

For the other term, we use both Equations (9.47) and (9.48) to write

Let
$s_1,s_2,s_3$
be numbers satisfying

similar to Equations (9.30) and (9.31). Then, using the argument in Equation (9.33), we have

where
$A_{j,Q_k}$
is defined as in Equation (9.32). This finally yields that

for M and
$L_0$
satisfying
$M>L_0-s_1-n$
, which completes the proof of Equation (9.49).
Appendix A Bilinear Fourier multipliers
$(m=2)$
We remark that Theorem 1 still holds in the bilinear setting, where all of the above arguments work equally well.
Theorem 2. Let
$0<p_1,p_2\le \infty $
and
$0<p\le 1$
with
$1/p=1/p_1+1/p_2$
. Suppose that

where J is an arbitrary subset of
$\{1,2\}$
. Let
$\sigma $
be a function on
$({{\mathbb R}^n})^2$
satisfying

and the bilinear analogue of the vanishing moment condition (1.16). Then the bilinear Fourier multiplier
$T_{\sigma }$
, associated with
$\sigma $
, satisfies

for
$f_1,f_2\in \mathscr {S}_0({{\mathbb R}^n})$
.
The proof is similar to, but much simpler than, that of Theorem 1. Moreover, unlike Theorem 1, Theorem 2 covers the results for
$p_{j}=\infty $
,
$j=1,2$
, which follow immediately from the bilinear analogue of Proposition 3.2.
Appendix B General m-linear Fourier multipliers for
$m\ge 4$
The structure of the proof of Theorem 1 is actually very similar to those of Theorems C and D, in which
$T_{\sigma }(f_1,\dots ,f_m)$
is written as a finite sum of
$T^{\kappa }(f_1,\dots ,f_m)$
for some variant operators
$T^{\kappa }$
, and then

where
$\Vert u_{j}\Vert _{L^{p_{j}}({{\mathbb R}^n})}\lesssim \Vert f_{j}\Vert _{L^{p_{j}}({{\mathbb R}^n})}$
for
$1\le j\le m$
. Compared to the
$H^{p_1}\times \cdots \times H^{p_m}\to L^p$
estimates in Theorems C and D, one of the obstacles to be overcome for the boundedness into Hardy space
$H^p$
is to replace the left-hand side of Equation (B.1) by

and we have successfully accomplished this for
$m=3$
as mentioned in Equation (1.20). One of the methods we have adopted is

where
$2<\widetilde {r}<p_2,p_3$
and
$1/q+2/\widetilde {r}=1$
. Then we have

by the
$L^{p_j}$
boundedness of
$\mathcal {M}_{\widetilde {r}}$
with
$\widetilde {r}<p_j$
. Such an argument is contained in the proof of Lemma 6.1. However, if we consider m-linear operators for
$m\ge 4$
, then the above argument does not work for
$p_2,\dots ,p_m>2$
. For example, it is easy to see that
$1/q + 3/\widetilde {r}$
exceeds
$1$
if
$\widetilde {r}>2$
is sufficiently close to
$2$
. That is, we are not able to obtain m-linear estimates for
$0<p_1\le 1$
and
$2< p_2, \cdots , p_m<\infty $
,
$m\ge 4$
. This is critical because our approach in this paper highly relies on interpolation between the estimates in the regions
$\mathscr {R}_1, \mathscr {R}_2, \mathscr {R}_3$
, which are trilinear versions of
$\{(1/p_1, \cdots , 1/p_m) : 0<p_1\le 1, \, 2< p_2, \cdots , p_m<\infty \}$
.
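To make the failure of this exponent bookkeeping explicit, note that the Hölder relation used above forces $1/q=1-(m-1)/\widetilde {r}$, so that, taking $\widetilde {r}=2+\delta $ with small $\delta>0$,
$$ \frac {1}{q}=1-\frac {m-1}{2+\delta }<0\quad \text{whenever }m\ge 4\text{ and }0<\delta <m-3,\qquad \text{whereas}\qquad \frac {1}{q}=\frac {\delta }{2+\delta }>0\quad \text{when }m=3. $$
Hence no admissible exponent $q>0$ exists for $m\ge 4$ once $\widetilde {r}$ is close to 2, and $\widetilde {r}$ is forced to be close to 2 whenever some $p_j$ is close to 2.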
Acknowledgements
J.B. Lee is supported by NRF grant 2021R1C1C2008252. B. Park is supported in part by NRF grant 2022R1F1A1063637 and by the POSCO Science Fellowship of the POSCO TJ Park Foundation.
Competing interests
The authors have no competing interests to declare.