EXAM PDE 18.02.13
1. Exercise
Let Ω ⊂ R^3 be a bounded connected open set with ∂Ω ∈ C^4, and for coefficients a_{ij} ∈ C^3(Ω) satisfying a_{ij}(x) = a_{ji}(x), consider the elliptic operator Lu = − Σ_{i,j=1}^N a_{ij}(x) ∂^2 u/(∂x_i ∂x_j). Let g ∈ H^2(Ω) and ϕ ∈ C^∞(Ω).
(a) Justify the following statement: the Dirichlet problem
    Lu = g in Ω
    u = ϕ on ∂Ω
has a unique solution u ∈ C^2(Ω) ∩ C^1(Ω̄).
(b) Justify the following statement: Ω satisfies the interior ball condition.
(c) Denote by ν(x) the unit outer normal to ∂Ω and let a C^2 vector field η(x) be given such that η(x) · ν(x) > 0 for every x ∈ ∂Ω. Consider the following problem
    Lu = g in Ω
    ∂u/∂η = ϕ on ∂Ω                                   (1.1)
and assume it has a solution u ∈ C^2(Ω) ∩ C^1(Ω̄). Prove that v ∈ C^2(Ω) ∩ C^1(Ω̄) solves (1.1) if and only if v − u is constant in Ω.
Solution: (a) It is clear that u solves the problem if and only if u = v + ϕ where v solves
    Lv = g − Lϕ in Ω
    v = 0 on ∂Ω.
By the Lax–Milgram Theorem, such a problem has a unique weak solution v. In view of the regularity of ϕ, it suffices to prove that v ∈ C^2(Ω) ∩ C^1(Ω̄).
Since g − Lϕ ∈ H^2(Ω) and a_{ij} ∈ C^3(Ω), by the smoothness of the boundary and the regularity theorem (Evans, Thm. 5, Section 6.3.2) we have v ∈ H^4(Ω). Since we are in dimension N = 3, Morrey's embedding theorem gives H^4(Ω) ⊂ C^2(Ω̄), as required.
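(For the record, the exponent count behind this last embedding is the following: with k = 4, p = 2 and N = 3 one has k − N/p = 4 − 3/2 = 2 + 1/2, so the general Sobolev embedding gives H^4(Ω) ⊂ C^{2,1/2}(Ω̄) ⊂ C^2(Ω̄).)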
(b) It has already been observed, before proving Hopf's Lemma, that the interior ball condition is automatically satisfied by a C^2 boundary; a fortiori it holds in the situation at hand.
(c) One implication is clear. To prove the other one, let w = v − u. Then w solves
    Lw = 0 in Ω
    ∂w/∂η = 0 on ∂Ω.
We are in the setting of Hopf's Lemma and of the strong maximum principle (Evans, Section 6.4), therefore either w is constant or there exists a maximum point x_0 ∈ ∂Ω such that w(x_0) > w(x) for every x ∈ Ω. Since x_0 is a maximum point, for every tangent vector τ to ∂Ω at x_0 one has
    ∂w/∂τ(x_0) = 0,
while Hopf's Lemma states that
    ∂w/∂ν(x_0) > 0.
Considering an orthonormal basis of R^3 of the form {ν(x_0), τ_1, τ_2}, where τ_1 and τ_2 are tangent vectors to ∂Ω at x_0, one then has
    ∂w/∂η(x_0) = ∇w(x_0) · η(x_0)
               = (η(x_0) · ν(x_0)) ∂w/∂ν(x_0) + (η(x_0) · τ_1) ∂w/∂τ_1(x_0) + (η(x_0) · τ_2) ∂w/∂τ_2(x_0)
               = (η(x_0) · ν(x_0)) ∂w/∂ν(x_0) > 0,
a contradiction. Therefore w must be constant, as required.
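As a side remark, the algebraic identity used in the display above (expanding η(x_0) along the orthonormal frame {ν(x_0), τ_1, τ_2}) is pure linear algebra. Here is a minimal numerical illustration, not part of the exam, where the frame is a random orthonormal basis produced by a QR factorization and the two vectors stand in for ∇w(x_0) and η(x_0):

```python
# Minimal numerical illustration (not part of the exam): for any orthonormal basis
# {nu, tau1, tau2} of R^3 and any vectors grad_w, eta one has
#   grad_w . eta = (eta.nu)(grad_w.nu) + (eta.tau1)(grad_w.tau1) + (eta.tau2)(grad_w.tau2).
import numpy as np

rng = np.random.default_rng(0)

Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # columns of Q: a random orthonormal frame
nu, tau1, tau2 = Q.T                              # stand-ins for nu(x0), tau1, tau2

grad_w = rng.standard_normal(3)                   # stand-in for grad w(x0)
eta = rng.standard_normal(3)                      # stand-in for eta(x0)

lhs = grad_w @ eta
rhs = (eta @ nu) * (grad_w @ nu) \
    + (eta @ tau1) * (grad_w @ tau1) \
    + (eta @ tau2) * (grad_w @ tau2)

print(np.isclose(lhs, rhs))                       # expected output: True
```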
2. Exercise
Let Ω ⊂ R^N, N ≥ 2, be a bounded connected open set with ∂Ω ∈ C^1 and 1 ≤ p < N. For any q ≥ p′ the following statement holds: for every f ∈ W^{1,p}(Ω) and g ∈ W^{1,q}(Ω) the product fg ∈ W^{1,1}(Ω). Compute also the weak gradient of the product.
Solution: (We will compute here the optimal exponent q; it is actually an exponent smaller than p′, so the statement we prove covers an even larger range of exponents, but the arguments leading to a proof are essentially the same.) For the Sobolev exponent p* = Np/(N − p), let q = (p*)′, that is, by a direct computation,
    q = Np/(N(p − 1) + p) .
It is easy to check that q < p′. Furthermore, again by a direct computation,
    q* = p′ .
Now, given f ∈ W^{1,p}(Ω) and g ∈ W^{1,q}(Ω), we can find sequences f_n ∈ C^∞(Ω) and g_n ∈ C^∞(Ω) such that f_n → f in W^{1,p}(Ω) and g_n → g in W^{1,q}(Ω). In particular, by Sobolev's embedding theorem, g_n → g in L^{q*}(Ω) = L^{p′}(Ω), where the last equality follows from our choice of q. It then easily follows by Hölder's inequality that f_n g_n → fg in L^1(Ω). On the other hand,
    D(f_n g_n) = f_n Dg_n + g_n Df_n .
Again by Sobolev's embedding theorem and our choice of q we now have
    f_n → f in L^{p*}(Ω)   and   Dg_n → Dg in L^q(Ω) = L^{(p*)′}(Ω),
    g_n → g in L^{q*}(Ω) = L^{p′}(Ω)   and   Df_n → Df in L^p(Ω),
so that another application of Hölder's inequality ensures that D(f_n g_n) → f Dg + g Df in L^1(Ω).
It follows that fg ∈ W^{1,1}(Ω) and that D(fg) = f Dg + g Df. Since q < p′, the statement holds true for any larger exponent, as required.
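As a side check, not part of the exam, the exponent algebra above can be verified with a short sympy computation; the script below confirms that q = (p*)′ equals Np/(N(p − 1) + p), that q* = p′, and that q < p′ for a sample choice of N and p.

```python
# Symbolic check (not part of the exam) of the exponent computations in Exercise 2.
import sympy as sp

N, p = sp.symbols('N p', positive=True)

p_star  = N*p / (N - p)            # Sobolev exponent p* of p (assuming 1 <= p < N)
q       = p_star / (p_star - 1)    # q = (p*)', the conjugate exponent of p*
p_prime = p / (p - 1)              # p', the conjugate exponent of p
q_star  = N*q / (N - q)            # Sobolev exponent q* of q

print(sp.simplify(q - N*p/(N*(p - 1) + p)))   # expected output: 0
print(sp.simplify(q_star - p_prime))          # expected output: 0
print((q - p_prime).subs({N: 3, p: 2}))       # sample value with N = 3, p = 2: -4/5 < 0
```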
3. Exercise
(a) Let T : R^N → R^N be an affine map of the form Tx := Ax + b with A ∈ O(N), the group of orthogonal matrices, and b ∈ R^N. Let Ω ⊂ R^N be an open set. Show that for every u ∈ C^2(Ω) one has ∆(u ◦ T) = (∆u) ◦ T.
(b) Consider in R^2 the open set Ω := B(0, 1) \ {0}, with B(0, 1) the unit ball. Let the Dirichlet problem
    −∆u = −1 in Ω
    u = 0 on ∂Ω                                       (3.1)
be given. Show that such a problem has no solution u ∈ C^2(Ω) ∩ C(Ω̄). (Hint: deduce from the previous step that a solution must be radially symmetric; this reduces the computation of a solution to an ordinary differential equation.)
(c) Use the calculations of the previous step to construct a weak solution u ∈ H^1_0(Ω). Is it unique? What is the limit of u(x) as x tends to 0?
Solution: (a) We adopt the convention that a vector field f : R^N → R^N is represented by a row vector, that is, a 1 × N matrix. According to this convention, for an N × N matrix Q one has, by a direct computation,
    D(f Q) = Q^T (Df) .
Since one has, by the differentiation theorem for composite functions (the chain rule),
    ∇(u ◦ T) = ((∇u) ◦ T) A,
we arrive at the well-known change-of-variables formula for the Hessian matrix under rigid transformations:
    D^2(u ◦ T) = A^T [(D^2 u) ◦ T] A
(NB: if one remembers this formula, it can be written directly without proof.) Now, for any N × N matrix B one has
    tr(A^T B A) = tr(B)
since A^T A = I; as the Laplacian is the trace of the Hessian, the claim is proved.
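As an aside, the identity of part (a) can be sanity-checked symbolically; the following sympy sketch (not part of the exam) verifies ∆(u ◦ T) = (∆u) ◦ T for a planar rotation by an angle θ, a translation b, and an arbitrarily chosen polynomial test function u.

```python
# Symbolic check (not part of the exam) that the Laplacian commutes with
# rigid motions T x = A x + b, here for a rotation A of R^2.
import sympy as sp

x, y, theta, b1, b2 = sp.symbols('x y theta b1 b2', real=True)

u = x**3 * y + x * y**2                             # arbitrary smooth test function

A = sp.Matrix([[sp.cos(theta), -sp.sin(theta)],
               [sp.sin(theta),  sp.cos(theta)]])    # rotation matrix, hence A in O(2)
TX = A * sp.Matrix([x, y]) + sp.Matrix([b1, b2])    # T x = A x + b

def laplacian(f):
    return sp.diff(f, x, 2) + sp.diff(f, y, 2)

u_of_T     = u.subs({x: TX[0], y: TX[1]}, simultaneous=True)             # u ∘ T
lap_u_of_T = laplacian(u).subs({x: TX[0], y: TX[1]}, simultaneous=True)  # (Δu) ∘ T

print(sp.simplify(laplacian(u_of_T) - lap_u_of_T))  # expected output: 0
```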
(b) Assume that u ∈ C^2(Ω) ∩ C(Ω̄) is a solution of (3.1). It is easily seen by means of part (a) that, given a rotation matrix A (which maps bijectively both Ω onto Ω and ∂Ω onto ∂Ω), u(Ax) is again a solution. If a solution exists, it is unique by the maximum principle; therefore u must be invariant under rotations. This implies u(x) = v(|x|) with v ∈ C^2((0, 1)) ∩ C([0, 1]).
The Laplacian of a radial function has already been computed when deriving the fundamental solution; imposing that u solves (3.1), we arrive at the following ODE for v (taking into account that N = 2):
    v″(r) + (1/r) v′(r) = 1
whose solutions are of the form
    v(r) = r^2/4 + c_1 log r + c_2 .
Since v ∈ C([0, 1]), it must be that c_1 = 0; moreover, to fulfill the boundary conditions one must have v(1) = v(0) = 0. But v(1) = 0 forces c_2 = −1/4, which is incompatible with v(0) = 0!
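As a quick side check, not part of the exam, sympy's ODE solver recovers exactly this family of solutions:

```python
# Symbolic check (not part of the exam) of the radial ODE v''(r) + v'(r)/r = 1.
import sympy as sp

r = sp.symbols('r', positive=True)
v = sp.Function('v')

ode = sp.Eq(v(r).diff(r, 2) + v(r).diff(r)/r, 1)
print(sp.dsolve(ode, v(r)))
# expected (up to the naming of the constants): Eq(v(r), C1 + C2*log(r) + r**2/4)
```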
(c) The function
    u(x) = |x|^2/4 − 1/4
satisfies by construction −∆u = −1 in Ω. Furthermore u ∈ H^1_0(B(0, 1)), since u(x) = 0 when |x| = 1. It has already been proved (Exercise 2, 14.2.13) that H^1_0(B(0, 1)) = H^1_0(B(0, 1) \ {0}), therefore u is easily seen to be a weak solution of (3.1). It is also the unique one, by the Lax–Milgram Theorem. Perhaps surprisingly, one has u(x) → −1/4 as x → 0!
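Finally, a small sympy sketch, not part of the exam, confirming the three properties of u used above: −∆u = −1, u = 0 on the unit circle, and u → −1/4 at the origin.

```python
# Symbolic check (not part of the exam) of the explicit weak solution of Exercise 3(c).
import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
u = (x**2 + y**2)/4 - sp.Rational(1, 4)       # u(x) = |x|^2/4 - 1/4 in R^2

lap_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(-lap_u)                                              # expected output: -1
print(sp.simplify(u.subs({x: sp.cos(t), y: sp.sin(t)})))   # boundary values: expected 0
print(u.subs({x: 0, y: 0}))                                # value at the origin: -1/4
```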