(CRASH COURSE ON) PARA-DIFFERENTIAL OPERATORS
BENOIT PAUSADER
We refer to [1] for an excellent introduction to pseudo-differential operators and to [2] for a
complete treatment. We also refer to https://math.berkeley.edu/~evans/semiclassical.pdf
for comprehensive lecture notes online.
1. Pseudo-differential operators
1.1. The algebra of differential operators. Our goal is to understand the algebra of differential operators
\[ \mathcal{A} := \{\text{differential operators}\} = \Big\{ \sum_{\alpha} a_\alpha(x)\,\partial_x^\alpha \;:\; \alpha \in \mathbb{N}^d,\ a_\alpha \in C_b^\infty(\mathbb{R}^d) \Big\}. \]
In fact, our primary goal is to find the invertible elements and how to invert them.
We can already isolate two remarkable (commutative) subalgebras: the algebra of operators of order 0,
\[ \mathcal{A}_0 := \{a(Q) \;:\; a \in C_b^\infty(\mathbb{R}^d)\}, \qquad (a(Q)f)(x) = a(x)f(x), \]
and the algebra of operators with constant coefficients,
\[ \mathcal{A}_P := \{a(P) \;:\; a \in \mathbb{C}[X_1,\dots,X_d]\}, \qquad \mathcal{F}(a(P)) = a(Q)\mathcal{F}. \]
From this, we can make a few remarks:
• In A0 , it is easy to find the invertible elements: they are the functions that never vanish. Their inverse is simple: a−1 = 1/a.
• It is fairly natural to extend AP to Ac = F −1 A0 F, the algebra of Fourier multipliers. In particular, this way, we can easily find the inverse of operators in Ac .
• A0 is invariantly defined, but Ac is not. Considering pseudo-differential operators on manifolds therefore requires some care.
It is remarkable that A0 and Ac are so simple and seem to “generate” A, yet A is much more subtle. This is mainly due to the fact that A is noncommutative.
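The inversion of a constant-coefficient operator through its Fourier multiplier can be seen concretely. The following is a small numerical sketch of my own (not from the notes), assuming NumPy: the operator 1 − ∆ lies in AP; conjugated by the Fourier transform it becomes multiplication by m(ξ) = 1 + ξ², which never vanishes, so it is inverted by the multiplier 1/m, an element of Ac.

```python
import numpy as np

# Invert 1 - Delta on a periodic grid via its Fourier multiplier.
N = 128
x = 2 * np.pi * np.arange(N) / N           # periodic grid on [0, 2*pi)
xi = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies
m = 1.0 + xi**2                            # symbol of 1 - d^2/dx^2

f = np.exp(np.cos(x))                      # smooth periodic test function
Lf = np.fft.ifft(m * np.fft.fft(f))        # apply 1 - Delta
g = np.fft.ifft(np.fft.fft(Lf) / m).real   # apply the inverse multiplier

err = np.max(np.abs(g - f))
print(err)                                 # roundtrip error at machine precision
```

The test function and grid size are arbitrary choices; any smooth periodic f would do.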
1.2. Vector fields. There is a simple way to incrementally generate A from A0 . Indeed, we may consider Aj , the set of operators of order j or less. This is a module over A0 . After operators of degree 0, one can form A1 by adjoining the set of vector fields, namely operators of the form
\[ \chi : (\chi f)(x) = \sum_{j=1}^d \chi_j(x)\,\partial_j f(x), \qquad \chi_j \in C_b^\infty(\mathbb{R}^d). \]
Denoting by X the set of vector fields, which is invariantly defined, we have that A1 = A0 + X can also be invariantly defined.
One can then build A2 = A1 + X A1 ; e.g. choosing an orthonormal basis Ej , one has
\[ \Delta f = -\sum_{j=1}^d E_j E_j f, \]
and one can then construct A3 , . . .
We note however that this algebra is noncommutative since X is:
\[ \Big[\sum_j a_j \partial_j,\ \sum_k b_k \partial_k\Big] = \sum_k \Big(\sum_{j=1}^d (a_j \partial_j b_k - b_j \partial_j a_k)\Big)\partial_k \in X. \]
However, we note a fortunate fact: A is commutative to main order. The product of two vector fields is a differential operator of order 2, yet their commutator is only of order 1: the term of order exactly 2 does not depend on the order of the product.
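The cancellation of the top-order term can be verified symbolically. A short sketch (my own, assuming SymPy), in one dimension:

```python
import sympy as sp

# The product of two vector fields A = a d/dx, B = b d/dx is second order,
# but in [A, B] the second-order terms cancel and only the vector field
# (a b' - b a') d/dx survives.
x = sp.symbols('x')
a, b, f = (sp.Function(s)(x) for s in ('a', 'b', 'f'))

A = lambda u: a * sp.diff(u, x)
B = lambda u: b * sp.diff(u, x)

comm = sp.expand(A(B(f)) - B(A(f)))        # [A, B] applied to f
top = comm.coeff(sp.diff(f, x, 2))         # coefficient of f''
first = comm.coeff(sp.diff(f, x))          # coefficient of f': a b' - b a'
print(top)                                 # -> 0
```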
1.3. Quantization rule. In fact, by considering the different coordinates (or by considering the actions of a vector field on the coordinate functions x1 , x2 , . . . , xd ), one can write any vector field χ = (χ1 , . . . , χd ) as
\[ \chi = \sum_{j=1}^d \chi_j(x)\,\partial_j. \tag{1.1} \]
We want a way to assign a symbol to a vector field. By the considerations of the previous subsection, once we specify a rule for vector fields, this will automatically give a rule for A.
A natural choice would be to associate to each vector field χ as in (1.1) the symbol¹
\[ \chi(x,\xi) = \sum_{j=1}^d \chi_j(x)\,(i\xi_j), \]
and this is the Kohn–Nirenberg quantization. This leads to the quantization for elements in A1 :
\[ L := a_j(x)\partial_j + b(x) \ \to\ \sigma_{KN}(L)(x,\xi) := i a_j(x)\xi_j + b(x). \tag{1.2} \]
This gives a natural identification of differential operators and symbols:
\[ \sum_\alpha a_\alpha(x)\,\partial^\alpha \ \to\ \sum_\alpha a_\alpha(x)\, i^{|\alpha|} \xi^\alpha, \]
and we could very well work with this rule.
However, from the operator viewpoint, this has a severe limitation: it is very difficult to know
from the symbol when an operator is self-adjoint, whereas for a ∈ A0 , a(Q) is self-adjoint if and
only if a is real. Since self-adjoint operators are so important (e.g. for energy-estimates), we will
prefer the Weyl quantization which retains the property that an operator is self-adjoint if and only
if its symbol is real. For differential operators of order 1, it is easy to see what the rule should be:
\[ L := a_j(x)\partial_j + b(x) = \frac12\big(a_j \partial_j + \partial_j \cdot a_j\big) + \Big(b(x) - \frac12(\partial_j a_j)\Big) \ \to\ \sigma^w(L)(x,\xi) := i a_j(x)\xi_j + \Big(b(x) - \frac12 \mathrm{div}(\vec a)\Big). \tag{1.3} \]
We can then verify that (i) real valued symbols correspond to self-adjoint operators and (ii) the
highest order term is the same for both quantizations (but lower order terms differ).
In fact, there is a natural 1-parameter family of quantizations, all simply forced once we choose the correspondence for x · ξ:
\[ x_i \cdot \xi_j = i\big(\theta x_i \partial_j + (1-\theta)\partial_j x_i\big), \]
and we see that the Kohn–Nirenberg quantization corresponds to θ = 1, while the Weyl quantization corresponds to the case θ = 1/2.
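The self-adjointness property that distinguishes θ = 1/2 can be seen in a finite-dimensional discretization. A numerical sketch of my own (assuming NumPy, with a sharp frequency truncation): we build the kernel K(x,y) = (1/N) Σ_p σ(x*, p) e^{ip(x−y)} on a periodic grid, evaluating the real symbol σ(x, ξ) = cos(x)·ξ at x* = x (Kohn–Nirenberg) or at the midpoint x* = (x+y)/2 (Weyl), and compare Hermitian symmetry.

```python
import numpy as np

# Weyl vs Kohn-Nirenberg kernels of the real symbol sigma(x, xi) = cos(x)*xi.
N = 32
x = 2 * np.pi * np.arange(N) / N
p = np.arange(-N // 2, N // 2)             # integer frequencies
X, Y = np.meshgrid(x, x, indexing='ij')

def kernel(x_eval):
    return sum(np.cos(x_eval) * pp * np.exp(1j * pp * (X - Y)) for pp in p) / N

KN = kernel(X)                             # theta = 1   (symbol frozen at x)
W = kernel((X + Y) / 2)                    # theta = 1/2 (symbol at midpoint)

asym_W = np.max(np.abs(W - W.conj().T))    # ~ 0: Weyl kernel is Hermitian
asym_KN = np.max(np.abs(KN - KN.conj().T)) # O(1): KN kernel is not
print(asym_W, asym_KN)
```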
We note that there are other natural properties that one might want to preserve. Unfortunately,
with neither of the quantization rules defined above can one have that an operator is positive if and
only if its symbol is. When such a property is needed, one can consider the Bargmann transform.
For further comparisons, we refer to http://www.fuw.edu.pl/~derezins/metz-slides.pdf
¹The additional factor of i is suggested by looking at the Fourier transform. However, it clearly does not make a difference.
1.4. Beyond polynomial symbols. Considering classical differential operators, we obtain polynomial symbols. If we are to find a framework where we can invert them, we need to consider rational symbols. We also want to be able to project; thus we would like to consider compactly supported symbols. We will consider general “polynomial-like” symbols:
Definition 1.1. A symbol of degree 0 is a smooth function a(x, ξ) such that for all α, β ∈ Nd
\[ \sup_{(x,\xi)\in\mathbb{R}^{2d}} |\xi|^{|\beta|}\, |D_x^\alpha D_\xi^\beta a(x,\xi)| \le C_{\alpha,\beta}. \]
We note this a ∈ S 0 . More generally, b is a symbol of degree p (noted b ∈ S p ) if
\[ \tilde b \in S^0, \qquad \tilde b(x,\xi) := (1+|\xi|^2)^{-\frac{p}{2}}\, b(x,\xi). \]
Note that this does give a generalization of rational functions and of compactly supported functions. As an example,
\[ \sum_{|\alpha|\le p} a_\alpha \xi^\alpha \in S^p, \qquad \text{and} \qquad \varphi(\xi)\, a(x,\xi) \in S^{-q} \]
whenever ϕ ∈ Cc∞ (Rd ) and a ∈ Cc∞ (Rd × Rd ), q ≥ 0.
From now on, whenever we talk about a “symbol”, it will be a symbol according to the above
definition.
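The seminorm conditions of Definition 1.1 can be checked on simple examples. A symbolic sketch of my own (assuming SymPy) for the x-independent symbol a(ξ) = ξ²/(1+ξ²), a smoothed version of a projection: near ξ = 0 the quantities |ξ|^|β| ∂_ξ^β a are bounded by smoothness, so we only check that they stay bounded as ξ → ∞, for the first few β.

```python
import sympy as sp

# Check the S^0 bounds of a(xi) = xi^2/(1+xi^2) at large xi, beta = 0..3.
xi = sp.symbols('xi', positive=True)
a = xi**2 / (1 + xi**2)
lims = [sp.limit(xi**beta * sp.diff(a, xi, beta), xi, sp.oo)
        for beta in range(4)]
print(lims)                                # -> [1, 0, 0, 0]: all finite
```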
Now that we have extended the notion of symbols, we need to find a way to transform them into operators that would agree with the map (1.3). This is easily done using the Fourier transform: if σ is a symbol, then
\[ \{Op^w(\sigma)f\}(x) := \frac{1}{(2\pi)^d} \int_{\mathbb{R}^{2d}} e^{i\langle x-y,p\rangle}\, \sigma\Big(\frac{x+y}{2}, p\Big) f(y)\, dy\, dp. \]
A slightly simpler version is
\[ \mathcal{F}\{Op^w(\sigma)f\}(\xi) = \frac{1}{(2\pi)^d} \int_{\mathbb{R}^d} \tilde\sigma\Big(\xi-\eta, \frac{\xi+\eta}{2}\Big)\, \hat f(\eta)\, d\eta \tag{1.4} \]
where σ̃ denotes the Fourier transform in the first variable only.
Definition 1.2. A pseudo-differential operator of order p is an operator of the form Opw (σ) for
some σ ∈ S p .
Note that, using formula (1.4), one can also define symbols which are less regular and the
corresponding pseudo-differential operators.
This already allows one to define a parametrix (an approximate inverse) for elliptic operators, see Subsection 1.6.
1.5. Commutation properties. Let S denote the set of symbols. We now have a correspondence A → S. This correspondence is obviously linear. But A is also equipped with a product, and we can ask how to carry this over to S. Besides, S is also naturally equipped with a notion of product (the standard pointwise product of functions), but this is commutative, so it cannot be the exact analog. However, we know that the product in A is commutative at top order, so we can hope that:
(1) if A ∈ Aj , B ∈ Ak , then
\[ \sigma(AB) - \sigma(A)\sigma(B) \]
is of degree j + k − 1;
(2) if a ∈ S j , b ∈ S k are polynomial, then
\[ Op^w(ab) - Op^w(a)\,Op^w(b) \in \mathcal{A}^{j+k-1} \]
and this can be readily verified.
The exact formula for a product # satisfying
\[ Op^w(a\# b) = Op^w(a)\,Op^w(b) \]
can then be computed for a, b ∈ S. For general pseudo-differential operators, however, this is only true in the sense that the difference is an operator with smooth kernel. We refer to the references given at the beginning for more on this.
Looking at vector fields, we can conjecture that
\[ \sigma([A,B]) = i\{\sigma(A), \sigma(B)\}, \]
where we introduce the Poisson bracket
\[ \{a,b\} = \nabla_x a \cdot \nabla_\xi b - \nabla_\xi a \cdot \nabla_x b. \tag{1.5} \]
1.6. Elliptic operators. Informally, a symbol is elliptic if it does not vanish outside of a neighborhood of 0. This is the natural generalization of the criterion for invertibility² in A0 . A symbol σ is called elliptic if there exist C, δ > 0 such that
\[ |\sigma(x,\xi)| \ge C^{-1} |\xi|^\delta, \qquad |\xi| \ge C. \]
The prototypical example is the symbol of the Laplacian: σ w (−∆) = |ξ|2 , which has an inverse
of symbol 1/|ξ|2 . We say that a pseudo-differential operator is elliptic if its symbol is elliptic3. One
can then find a first ansatz for an approximate inverse by setting
\[ q(x,\xi) := \frac{1-\chi(\xi)}{\sigma(x,\xi)}, \]
where χ ≡ 1 on the ball where we do not have control over σ. One can then verify that
\[ Op^w(\sigma)\,Op^w(q) = \mathrm{Id} + S_1, \qquad Op^w(q)\,Op^w(\sigma) = \mathrm{Id} + S_2, \]
where S1 and S2 are smoothing operators by one order.
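The parametrix ansatz can be tried numerically. A sketch of my own (assuming NumPy), for the variable-coefficient elliptic operator L = a(x)(−d²/dx²) with a(x) = 2 + sin x, quantized in the Kohn–Nirenberg way, and with χ cutting off |ξ| ≤ 1; applied to a high-frequency wave, Op(σ)Op(q) reproduces the identity up to an error one order smoother (roughly |a′/a|/k₀ for frequency k₀).

```python
import numpy as np

# Parametrix q = (1 - chi)/sigma for sigma(x, xi) = (2 + sin x) xi^2.
N = 128
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)

def op(symbol, f):
    """Kohn-Nirenberg: (Op(s)f)(x_j) = (1/N) sum_xi s(x_j, xi) fhat(xi) e^{i xi x_j}."""
    fh = np.fft.fft(f)
    return np.array([np.sum(symbol(x[j], xi) * fh * np.exp(1j * xi * x[j])) / N
                     for j in range(N)])

sigma = lambda xx, zz: (2.0 + np.sin(xx)) * zz**2
q = lambda xx, zz: np.where(np.abs(zz) > 1, 1.0, 0.0) / (
        (2.0 + np.sin(xx)) * np.maximum(zz**2, 1.0))

u = np.exp(1j * 20 * x)                    # high-frequency test function
r = op(sigma, op(q, u)) - u                # Op(sigma)Op(q) = Id + S_1
rel = np.linalg.norm(r) / np.linalg.norm(u)
print(rel)                                 # small: the error is lower order
```

Raising the test frequency k₀ shrinks the relative error, consistent with S₁ being smoothing by one order.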
2. Paradifferential operators
The purpose of this section is to make the previous qualitative discussion more quantitative.
2.1. Paradifferential operators. A small twist is that we want to consider operators and symbols which are not necessarily smooth. When considering products, in the case when one function is much smoother than the other, we want to keep track of this information (since the rougher one will dictate the regularity of the product). A simple but far-reaching remark by Bony is that we can always separate the smooth part from the rough part:
\begin{align*}
fg &= \sum_{k_1,k_2} P_{k_1}f \cdot P_{k_2}g \\
&= \sum_{k_1 \le k_2-10} P_{k_1}f \cdot P_{k_2}g + \sum_{k_2 \le k_1-10} P_{k_1}f \cdot P_{k_2}g + \sum_{|k_1-k_2|<10} P_{k_1}f \cdot P_{k_2}g \\
&= \sum_k (P_{\le k-10}f)\,P_k g + \sum_k (P_{\le k-10}g)\,P_k f + \sum_k \sum_{|j|<10} P_k f \cdot P_{k+j} g \tag{2.1} \\
&= T_f g + T_g f + R(f,g),
\end{align*}
where (provided f and g have a reasonable amount of smoothness, say f, g ∈ H d ) R(f, g) is
smoother than f and g, Tf g is as smooth as g and Tg f is as smooth as f (see Lemma 2.2 for more
²Indeed, we want to allow elliptic operators to have a finite dimensional kernel. Otherwise, −∆ would not be elliptic on Td !
³Note that this is a property of its principal symbol.
precise estimates). This simple decomposition, called the “paraproduct decomposition”, allows one to better keep track of the regularity, and we want to extend it to pseudo-differential operators (it amounts to “specifying the term on which the derivative will fall”).
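The decomposition (2.1) can be reproduced on a periodic grid. A numerical sketch of my own (assuming NumPy), with sharp dyadic Littlewood–Paley projections and the threshold 10 replaced by 3 so that a short grid suffices; since the P_k form an exact partition of the discrete frequencies, the three pieces sum back to fg exactly.

```python
import numpy as np

# Bony's paraproduct decomposition fg = T_f g + T_g f + R(f, g), cf. (2.1).
N = 256
x = 2 * np.pi * np.arange(N) / N
xi = np.fft.fftfreq(N, d=1.0 / N)          # integer frequencies -128..127

def P(k, f):
    """Sharp projection onto the dyadic annulus 2^(k-1) < |xi| <= 2^k."""
    fh = np.fft.fft(f)
    if k == 0:
        mask = np.abs(xi) <= 1             # k = 0 keeps |xi| <= 1
    else:
        mask = (np.abs(xi) > 2 ** (k - 1)) & (np.abs(xi) <= 2 ** k)
    return np.fft.ifft(mask * fh)

def low(k, f):
    """P_{<= k} f, i.e. frequencies |xi| <= 2^k (zero if k < 0)."""
    return sum(P(j, f) for j in range(k + 1)) if k >= 0 else 0.0

K = 8                                       # 2^7 = 128 covers all frequencies
f = np.exp(np.cos(x))
g = np.sin(3 * x) / (2 + np.cos(x))

Tf_g = sum(low(k2 - 3, f) * P(k2, g) for k2 in range(K))  # f at much lower frequency
Tg_f = sum(low(k1 - 3, g) * P(k1, f) for k1 in range(K))  # g at much lower frequency
R = sum(P(k1, f) * P(k2, g) for k1 in range(K) for k2 in range(K)
        if abs(k1 - k2) <= 2)                             # comparable frequencies

err = np.max(np.abs(Tf_g + Tg_f + R - f * g))
print(err)                                  # exact partition, up to roundoff
```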
In the following, for concreteness, we will specialize to the case of dimension d = 2.
Recall that a symbol σ “is of degree p” if and only if (1 + |ζ|2 )−p/2 σ(x, ζ) is a symbol of degree 0. For this reason, we will in the next subsection concentrate on redefining a more quantified version of symbols of degree 0.
2.2. Operator bounds. A symbol denotes a function σ(x, ζ) ∈ C(Rd × Rd : R). In this section, we will bound them using the following two norms⁴:
\begin{align}
\|\sigma\|_{S^{a,b}_\infty} &:= \sup_{x,\zeta} \sum_{|\alpha|\le a,\ |\beta|\le b} |\zeta|^{|\beta|}\, |\partial_\zeta^\beta \partial_x^\alpha \sigma(x,\zeta)|, \tag{2.2}\\
\|\sigma\|_{S^{a,b}_2} &:= \sup_\zeta \sum_{|\alpha|\le a,\ |\beta|\le b} \big\| |\zeta|^{|\beta|}\, \partial_\zeta^\beta \partial_x^\alpha \sigma(\cdot,\zeta)\big\|_{L^2_x}. \nonumber
\end{align}
These are the most important norms. However, we may more generally define
\[ \|\sigma\|_{S^{a,b}_p} := \sup_\zeta \sum_{|\alpha|\le a,\ |\beta|\le b} \big\| |\zeta|^{|\beta|}\, \partial_\zeta^\beta \partial_x^\alpha \sigma(\cdot,\zeta)\big\|_{L^p_x}. \tag{2.3} \]
Note that there is a large variation of norms and notation for symbols. A few useful properties are⁵
\begin{align}
\|ab\|_{S^{\min\{r_1,r_2\},\min\{q_1,q_2\}}_p} &\lesssim \|a\|_{S^{r_1,q_1}_{p_1}} \|b\|_{S^{r_2,q_2}_{p_2}}, &\{\infty, p\} &= \{p_1,p_2\}, \nonumber\\
\|\{a,b\}\|_{S^{\min\{r_1,r_2\}-1,\min\{q_1,q_2\}-1}_p} &\lesssim \|a\|_{S^{r_1,q_1}_{p_1}} \|b\|_{S^{r_2,q_2}_{p_2}}, &\{\infty, p\} &= \{p_1,p_2\}, \tag{2.4}\\
\|P_k a\|_{S^{r,q}_p} &\lesssim 2^{-sk}\, \|a\|_{S^{r+s,q}_p}, & p &\in \{2,\infty\}. \nonumber
\end{align}
Given a symbol σ(x, ζ), we define the corresponding paradifferential operator (compare with (1.4)):
\[ \mathcal{F}\{T_\sigma f\}(\xi) = \int_{\mathbb{R}^2} \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big)\, \tilde\sigma\Big(\xi-\eta, \frac{\xi+\eta}{2}\Big)\, \hat f(\eta)\, d\eta, \]
where σ̃ denotes the Fourier transform in the first coordinate and χ ∈ Cc∞ (R : R) is a function satisfying
\[ \mathbf{1}_{[-1/200,1/200]} \le \chi \le \mathbf{1}_{[-1/100,1/100]}. \]
It is clear that if σ is real, Tσ is a self-adjoint operator on L2 . But in fact Tσ also acts on Lp
spaces. We first observe that paramultipliers define nice linear operators.
Proposition 2.1. Let a be a symbol and 1 ≤ q ≤ ∞, and let⁶ γ > d/2, γ ∈ N. Then
\[ \|P_k T_a f\|_{L^q} \lesssim \|a\|_{S^{0,4\gamma}_\infty} \|P_{[k-2,k+2]} f\|_{L^q} \tag{2.5} \]
and
\[ \|P_k T_a f\|_{L^2} \lesssim \|a\|_{S^{0,4\gamma}_2} \|P_{[k-2,k+2]} f\|_{L^\infty}. \tag{2.6} \]
More generally, if
\[ \frac{1}{r} = \frac{1}{p} + \frac{1}{q}, \qquad 1 \le p, q, r \le \infty, \tag{2.7} \]
⁴These norms measure symbols of degree 0. Thanks to (2.5), the degree can then be trivially added. Another way to say this is that a symbol of degree p can be written σ = (1 − ∆)p/2 σ 0 , where σ 0 is a symbol of degree 0.
⁵Recall the Poisson bracket introduced in (1.5).
⁶As will be evident from the proof, we could take γ smaller when p < ∞, but we will not be concerned with the ζ-regularity, which will always be automatic.
then
\[ \|P_k T_a f\|_{L^r} \lesssim \|a\|_{S^{0,4\gamma}_p} \|P_{[k-2,k+2]} f\|_{L^q}. \tag{2.8} \]
Proof. Fact 1: Inspecting the Fourier transform, we directly see that
\[ P_k T_a f = P_k T_a P_{[k-2,k+2]} f, \]
and therefore the Fourier localization of f is guaranteed and, up to rescaling, we may assume that k = 0.
Fact 2: It suffices to show (2.8) for k = 0. Indeed, introducing the rescaling operator (δh f )(x) = f (hx), we recall that
\[ \|\delta_h f\|_{L^p} = h^{-\frac{d}{p}} \|f\|_{L^p}. \tag{2.9} \]
Besides, letting h = 2−k and writing down the Fourier transform, we find that
\[ \delta_h(P_k T_a f) = P_0 T_{a_h}(\delta_h f), \qquad a_h(x,\zeta) := a(hx, h^{-1}\zeta), \tag{2.10} \]
and observing that
\[ \|a_h\|_{S^{0,t}_p} = h^{-\frac{d}{p}} \|a\|_{S^{0,t}_p}, \qquad \forall t \ge 0, \]
and using (2.7) and (2.9), we see from (2.10) that the claim at k = 0 implies the claim for all k’s.
Fact 3: The claim holds when k = 0. We compute that
\[ \langle g, P_0 T_a h\rangle = \int_{\mathbb{R}^{2d}} g(x)\, h(y)\, I(x,y)\, dx\, dy, \]
\begin{align*}
I(x,y) &= \int_{\mathbb{R}^{3d}} a\Big(z, \frac{\xi+\eta}{2}\Big)\, e^{i\langle \xi, x-z\rangle} e^{i\langle \eta, z-y\rangle}\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big)\, \varphi_0(\xi)\, d\eta\, d\xi\, dz \\
&= \int_{\mathbb{R}^{3d}} a\Big(z, \eta+\frac{\theta}{2}\Big)\, e^{i\langle \theta, x-z\rangle} e^{i\langle \eta, x-y\rangle}\, \chi\Big(\frac{|\theta|}{|2\eta+\theta|}\Big)\, \varphi_0(\eta+\theta)\, d\eta\, d\theta\, dz.
\end{align*}
We now observe that
\[ (1+|x-y|^2)^\gamma I(x,y) = \int_{\mathbb{R}^{3d}} \frac{a(z, \eta+\frac{\theta}{2})}{(1+|x-z|^2)^\gamma}\, \chi\Big(\frac{|\theta|}{|2\eta+\theta|}\Big)\, \varphi_0(\eta+\theta)\, (1-\Delta_\theta)^\gamma e^{i\langle \theta, x-z\rangle}\, (1-\Delta_\eta)^\gamma e^{i\langle \eta, x-y\rangle}\, d\eta\, d\theta\, dz \tag{2.11} \]
so that
\[ |(1+|x-y|^2)^\gamma I(x,y)| \lesssim \|a\|_{S^{0,4\gamma}_p} \]
and we conclude that
\[ \langle g, P_0 T_a h\rangle \le \|a\|_{S^{0,4\gamma}_p} \int_{\mathbb{R}^4} |g(x)|\cdot |h(y)|\cdot \frac{1}{(1+|x-y|^2)^\gamma}\, dx\, dy \lesssim \|a\|_{S^{0,4\gamma}_p}\, \|g\|_{L^{r'}} \|h\|_{L^q}, \]
which is sufficient for our estimate.
One of the main interests of Ta is that it linearizes the product, as observed in (2.1):
Lemma 2.2. Let f, g ∈ L2 . We have
\[ fg = T_f g + T_g f + HH(f,g), \]
where HH is smoothing in the sense that for γ > 0 and r, p, q as in (2.7), there holds that
\[ \|P_k HH(f,g)\|_{L^r} \lesssim 2^{-\gamma k} \sup_{l-20 \le \mu,\nu \le l+20} 2^{\gamma l}\, \|P_\mu f\|_{L^p} \|P_\nu g\|_{L^q}. \]
Proof. Starting from
\[ HH(f,g) = fg - T_f g - T_g f \]
and expanding on the frequency projections, we see that
\[ P_k HH(P_j f, P_l g) \equiv 0 \qquad \text{unless } |j-l| \le 10 \text{ and } k \le j+10. \]
Therefore
\[ \|P_k HH(f,g)\|_{L^r} \lesssim 2^{-\gamma k} \sum_{j \ge k-10} 2^{\gamma(k-j)} \sum_{0 \le j' \le 10} 2^{\gamma j}\, \|P_j f\, P_{j+j'} g\|_{L^r} \]
and the result follows by Hölder’s inequality.
Proposition 2.3. Let 1 ≤ q ≤ ∞ and γ > d/2, γ ∈ N. Given symbols a and b, we may decompose
\[ T_a T_b = T_{ab} + \frac{i}{2} T_{\{a,b\}} + E(a,b), \tag{2.12} \]
where the error E obeys the following bounds:
\[ \|P_k E(a,b) f\|_{L^q} \lesssim (1+2^k)^{-2}\, \|a\|_{S^{2,6\gamma+2}_\infty} \|b\|_{S^{2,6\gamma+2}_\infty} \|P_{[k-5,k+5]} f\|_{L^q} \tag{2.13} \]
for 1 ≤ q ≤ ∞, and more generally, letting
\[ \frac{1}{r} = \frac{1}{p_1} + \frac{1}{p_2} + \frac{1}{p_3}, \qquad 1 \le r, p_1, p_2, p_3 \le \infty, \tag{2.14} \]
there holds that
\[ \|P_k E(a,b) f\|_{L^r} \lesssim (1+2^k)^{-2}\, \|a\|_{S^{2,6\gamma+2}_{p_1}} \|b\|_{S^{2,6\gamma+2}_{p_2}} \|P_{[k-5,k+5]} f\|_{L^{p_3}}. \tag{2.15} \]
Moreover E(a,b) = 0 if both a and b are independent of x.
Proof. We prove (2.13). The bounds in (2.15) follow similarly. We begin by writing Ta Tb − Tab = Es + Er ,
\begin{align*}
\mathcal{F}\{E_s f\}(\xi) &= \int_{\mathbb{R}^4} \Big[\chi\Big(\frac{|\xi-\theta|}{|\xi+\theta|}\Big)\chi\Big(\frac{|\theta-\eta|}{|\theta+\eta|}\Big) - \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big)\Big]\, \tilde a\Big(\xi-\theta, \frac{\xi+\theta}{2}\Big)\, \tilde b\Big(\theta-\eta, \frac{\eta+\theta}{2}\Big)\, \hat f(\eta)\, d\eta\, d\theta, \\
\mathcal{F}\{E_r f\}(\xi) &= \int_{\mathbb{R}^4} \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \Big[\tilde a\Big(\xi-\theta, \frac{\xi+\theta}{2}\Big)\, \tilde b\Big(\theta-\eta, \frac{\eta+\theta}{2}\Big) - \tilde a\Big(\xi-\theta, \frac{\xi+\eta}{2}\Big)\, \tilde b\Big(\theta-\eta, \frac{\xi+\eta}{2}\Big)\Big]\, \hat f(\eta)\, d\eta\, d\theta,
\end{align*}
and observe directly that if ã(x, ζ) and b̃(y, ζ) are supported on {x = 0} and {y = 0}, both Es and Er vanish. We also observe that, using Proposition 2.1 together with (2.4),
\[ \|P_k T_{P_j a\, P_l b} f\|_{L^q} + \|P_k T_{P_j a} T_{P_l b} f\|_{L^q} \lesssim 2^{-rj-sl}\, \|P_j a\|_{S^{r,4\gamma}_\infty} \|P_l b\|_{S^{s,4\gamma}_\infty} \|P_{[k-5,k+5]} f\|_{L^q}. \]
Thus in the following, we may assume that a = P≤k−10 a and b = P≤k−10 b. In this case, Es = 0.
Using this, it suffices to consider
\[ \mathcal{F}\{E_r^k f\}(\xi) = \int_{\mathbb{R}^4} \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \Big[\tilde a\Big(\xi-\theta, \frac{\xi+\theta}{2}\Big)\, \tilde b\Big(\theta-\eta, \frac{\eta+\theta}{2}\Big) - \tilde a\Big(\xi-\theta, \frac{\xi+\eta}{2}\Big)\, \tilde b\Big(\theta-\eta, \frac{\xi+\eta}{2}\Big)\Big]\, \hat f(\eta)\, d\eta\, d\theta, \]
where we define
\[ \varphi_k(\xi,\eta,\theta) = \varphi_k(\xi)\, \varphi_{[k-2,k+2]}(\eta)\, \varphi_{[k-2,k+2]}(\theta). \tag{2.16} \]
We note that Erk = Pk Er has a kernel whose Fourier transform is
\begin{align*}
m(\xi,\eta) &= \int_{\mathbb{R}^{3d}} e^{i\Phi}\, \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \Big[a\Big(y_1, \frac{\xi+\theta}{2}\Big)\, b\Big(y_2, \frac{\eta+\theta}{2}\Big) - a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, b\Big(y_2, \frac{\xi+\eta}{2}\Big)\Big]\, dy_1\, dy_2\, d\theta, \\
\Phi &= \langle y_1, \theta-\xi\rangle + \langle y_2, \eta-\theta\rangle,
\end{align*}
where in particular,
\[ \nabla_{y_1}\Phi = \theta - \xi, \qquad \nabla_{y_2}\Phi = \eta - \theta. \tag{2.17} \]
Integrating by parts using (2.17), we may rewrite this kernel as
\begin{align*}
m(\xi,\eta) = \frac{i}{2} \int_{\mathbb{R}^{3d}} &\, e^{i\Phi}\, \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \\
&\times \Big[\nabla_1 a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_2 b\Big(y_2, \frac{\xi+\eta}{2}\Big) - \nabla_2 a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_1 b\Big(y_2, \frac{\xi+\eta}{2}\Big)\Big]\, dy_1\, dy_2\, d\theta \\
+ \frac{i}{2} \int_{s=0}^1 \int_{\mathbb{R}^{3d}} &\, e^{i\Phi}\, \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \\
&\times \Big[ \nabla_1 a\Big(y_1, \frac{\xi+\eta}{2}\Big)\Big(\nabla_2 b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\xi}{2}\Big) - \nabla_2 b\Big(y_2, \frac{\xi+\eta}{2}\Big)\Big) \\
&\quad - \Big(\nabla_2 a\Big(y_1, \frac{\xi+\theta}{2} + s\frac{\eta-\theta}{2}\Big) - \nabla_2 a\Big(y_1, \frac{\xi+\eta}{2}\Big)\Big)\, \nabla_1 b\Big(y_2, \frac{\eta+\theta}{2}\Big) \\
&\quad - \nabla_2 a\Big(y_1, \frac{\xi+\eta}{2}\Big)\Big(\nabla_1 b\Big(y_2, \frac{\eta+\theta}{2}\Big) - \nabla_1 b\Big(y_2, \frac{\xi+\eta}{2}\Big)\Big) \Big]\, dy_1\, dy_2\, d\theta\, ds.
\end{align*}
The first integral gives the Poisson bracket. After another integration by parts using (2.17), the second integral can be rewritten as
\begin{align*}
m_r(\xi,\eta) = -\frac14 \int_{s=0}^1 \int_{\mathbb{R}^{3d}} &\, e^{i\Phi}\, \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big) \Big[ \nabla_{11} a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_{22} b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\xi}{2}\Big)(1-s) \\
&\quad - \nabla_{12} a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_{21} b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big) \\
&\quad + \nabla_{22} a\Big(y_1, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big)\, \nabla_{11} b\Big(y_2, \frac{\eta+\theta}{2}\Big)(1-s) \Big]\, dy_1\, dy_2\, d\theta\, ds.
\end{align*}
We may now use Lemma 2.6 and Lemma 2.4 to finish the proof.
Note that we could go further in the expansion (2.12). Indeed, a look at the above formula shows that the main contribution will come from the second Poisson bracket:
\[ \{\{a,b\}\}_2 := \nabla_{11} a : \nabla_{22} b - 2\nabla_{12} a : \nabla_{21} b + \nabla_{22} a : \nabla_{11} b. \]
Further contributions are determined by higher order Poisson brackets.
Lemma 2.4. For 0 ≤ s ≤ 1, define
\begin{align*}
m_k(\xi,\eta) &= \int_{\mathbb{R}^{3d}} \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big)\, e^{i\Phi}\, \{\{a,b\}\}\, dy_1\, dy_2\, d\theta, \\
\{\{a,b\}\} &= \nabla_{11} a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_{22} b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\xi}{2}\Big)(1-s) \\
&\quad - \nabla_{12} a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \nabla_{21} b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big) \\
&\quad + \nabla_{22} a\Big(y_1, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big)\, \nabla_{11} b\Big(y_2, \frac{\eta+\theta}{2}\Big)(1-s), \\
\Phi &= \langle y_1, \theta-\xi\rangle + \langle y_2, \eta-\theta\rangle,
\end{align*}
where ϕk is defined in (2.16). Then
\[ \big|(\mathcal{F}^{-1} m_k)(x,z)\big| \lesssim (1+2^k)^{-2}\, \|a\|_{S^{2,6\gamma+2}_\infty} \|b\|_{S^{2,6\gamma+2}_\infty} \cdot \big(1+|x-z|^2\big)^{-\gamma}. \tag{2.18} \]
Proof. Assume first that k ≥ 0. The Fourier transform is then
\begin{align*}
I_{k,s}(x,z) &:= \int_{\mathbb{R}^{2d}} J_{k,s}(y_1,y_2)\, dy_1\, dy_2, \\
J_{k,s}(y_1,y_2) &= \int_{\mathbb{R}^{3d}} e^{i\Psi}\, A_{k,s}\, d\eta\, d\xi\, d\theta, \\
A_{k,s} &= \varphi_k(\xi,\eta,\theta)\, \chi\Big(\frac{|\xi-\eta|}{|\xi+\eta|}\Big)\, \{\{a,b\}\}, \\
\Psi &= \langle y_1, \theta-\xi\rangle + \langle y_2, \eta-\theta\rangle + \langle \xi, x\rangle - \langle \eta, z\rangle.
\end{align*}
The main information we use is that
\[ \big|\partial^\alpha_{\xi,\eta,\theta}[A_{k,s}]\big| \lesssim 2^{-|\alpha|k}\, 2^{-2k}\cdot \mathbf{1}_{\{|\xi|+|\eta|+|\theta| \lesssim 2^k\}}\cdot \|a\|_{S^{2,2+|\alpha|}_\infty} \|b\|_{S^{2,2+|\alpha|}_\infty} \]
and
\[ \nabla_\xi \Psi = x - y_1, \qquad \nabla_\eta \Psi = y_2 - z, \qquad \nabla_\theta \Psi = y_1 - y_2, \]
so that, integrating by parts, we find that, for 0 ≤ q, r, t ≤ γ,
\[ |x-y_1|^{2q}\, |y_2-z|^{2r}\, |y_1-y_2|^{2t}\, |J_{k,s}(y_1,y_2)| = \Big|(-1)^{q+r+t} \int_{\mathbb{R}^{3d}} e^{i\Psi}\, \Delta_\xi^q \Delta_\eta^r \Delta_\theta^t A_{k,s}\, d\eta\, d\xi\, d\theta\Big| \lesssim 2^{-2k}\, 2^{(3d-2(q+r+t))k}\, \|a\|_{S^{2,6\gamma+2}_\infty} \|b\|_{S^{2,6\gamma+2}_\infty}, \]
which is sufficient.
If k ≤ 0, we simply observe that, using the Fourier transform in y1 , y2 in the integration, we may trade the derivatives in y1 , y2 for small factors of size O(2k ), namely we may replace {{a, b}} by
\begin{align*}
\{\{a,b\}\} = \frac14 \Big[ &- a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \big((\theta-\xi)\cdot\nabla_2\big)^2 b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\xi}{2}\Big)(1-s) \\
&+ \big((\eta-\theta)\cdot\nabla_2\big) a\Big(y_1, \frac{\xi+\eta}{2}\Big)\, \big((\theta-\xi)\cdot\nabla_2\big) b\Big(y_2, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big) \\
&- \big((\eta-\theta)\cdot\nabla_2\big)^2 a\Big(y_1, \frac{\xi+\eta}{2} + s\frac{\theta-\eta}{2}\Big)\, b\Big(y_2, \frac{\eta+\theta}{2}\Big)(1-s) \Big],
\end{align*}
and since |η| + |θ| + |ξ| ≲ 2k , we obtain that
\[ \big|\partial^\alpha_{\xi,\eta,\theta}[A_{k,s}]\big| \lesssim 2^{-|\alpha|k}\cdot \mathbf{1}_{\{|\xi|+|\eta|+|\theta| \lesssim 2^k\}}\cdot \|a\|_{S^{0,2+|\alpha|}_\infty} \|b\|_{S^{0,2+|\alpha|}_\infty}, \]
which again, is sufficient.
Proposition 2.5. Assume that F (z) = z + h(z), where h is analytic for |z| ≤ 1/2 and satisfies |h(z)| ≲ |z|3 . Then there holds that
\[ F(u) = T_{F'(u)} u + E(u), \qquad \|E(u)\|_{H^N} \lesssim \|u\|^2_{W^{10,\infty}}\, \|u\|_{H^{N-5}}, \tag{2.19} \]
so long as ‖u‖L∞ ≤ 1/100. This also holds if z ∈ C2 , F : C2 → C2 , where F ′ (u) denotes the differential of F .
Proof. Since F is analytic, it suffices to show this for h(x) = xn , n ≥ 0, but this is as in the proof of Lemma 2.2. Indeed, we may decompose
\[ P_k(u^n) = n \sum_{\substack{|k_1-k|\le 2n \\ \max\{k_2,\dots,k_n\} \le k-3n}} (P_{k_2} u \cdots P_{k_n} u)\cdot P_{k_1} u + O(n^2) \sum_{k_1 \sim k_2 \ge \max\{k_3,\dots,k_n,\, k-3n\}} P_{k_1} u \cdot P_{k_2} u \cdot P_{k_3} u \cdots P_{k_n} u. \]
The first term in this expansion leads to the paraproduct up to smoothing terms, while the second leads to smoothing terms.
Recall the following simple lemma for the operator
\[ \mathcal{F} M(f_1, f_2, \dots, f_n)(\xi) = \int \hat f_1(\xi_1)\, \hat f_2(\xi_2) \cdots \hat f_n(\xi_n)\, m(\xi_1,\dots,\xi_n)\, \delta(\xi - \xi_1 - \cdots - \xi_n)\, d\xi_1 \cdots d\xi_n. \]
This operator obeys simple bounds:
Lemma 2.6. Assume that 1 ≤ p1 , p2 , . . . , pn , s ≤ ∞ and
\[ \frac{1}{s} = \frac{1}{p_1} + \frac{1}{p_2} + \cdots + \frac{1}{p_n}. \]
Then
\[ \|M(f_1, f_2, \dots, f_n)\|_{L^s} \lesssim \|\mathcal{F}m\|_{L^1} \prod_{k=1}^n \|f_k\|_{L^{p_k}}. \tag{2.20} \]
References
[1] S. Alinhac and P. Gérard, Pseudo-differential Operators and the Nash–Moser Theorem, Graduate Studies in Mathematics, Vol. 82, AMS (2007), ISBN-10: 0-8218-3454-1.
[2] L. Hörmander, The Analysis of Linear Partial Differential Operators, I–IV.
Brown University
E-mail address: [email protected]