Math 618 Midterm Solutions
October 16, 2013
1. For parts (a) and (b), one can reference the book where the exact definitions appear.
For the first part of part (c), one can also find that definition in the book, and the
proof of the second fact appears on page 212 or in your class notes.
2. (a) If f is differentiable at a, then according to (c) part (i), we have Dv f (a) = Df (a)v.
Since Df (a) is linear, we have
Dαv+βw f (a) = Df (a)(αv + βw)
= αDf (a)v + βDf (a)w
= αDv f (a) + βDw f (a).
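For a concrete illustration (the particular f , a, v, w here are chosen only for the example and are not part of the problem), take f (x, y) = x² + y², a = (1, 1), v = (1, 0), w = (0, 1), and α = β = 1; then Df (a) = (2  2) and

\[
D_{v+w} f(a) = Df(a)(v + w)
  = \begin{pmatrix} 2 & 2 \end{pmatrix} \begin{pmatrix} 1 \\ 1 \end{pmatrix}
  = 4
  = 2 + 2
  = D_v f(a) + D_w f(a).
\]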
(b) Part (i) of this result can be found as the remark on page 88. Part (ii) is as
follows.
We compute each of the partial derivatives as

∂/∂x (2xy + x²) = 2y + 2x,      ∂/∂y (2xy + x²) = 2x,
∂/∂x (cos x − 6y) = −sin x,     ∂/∂y (cos x − 6y) = −6.
One sees that these are continuous everywhere, so we may apply Proposition 2.4 on page 93 to conclude that f is differentiable wherever its partial derivatives are continuous, namely, everywhere.
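Assembling the four partials above into matrix form (taking the components of f in the order they appear there, namely 2xy + x² and cos x − 6y), the derivative is

\[
Df(x, y) = \begin{pmatrix} 2y + 2x & 2x \\ -\sin x & -6 \end{pmatrix}.
\]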
(c) Part (i) of this result is on page 92 as Proposition 2.3. Part (ii) may be found as Lemma 2.1, page 202.
3. (a) The operator norm is defined on page 97. Note that the function

f (x1, . . . , xn) = √(x1² + · · · + xn²) = ||x||₂

is continuous, being the composition of the square root with a sum of squares of real numbers (which is nonnegative).
We showed already that any linear map (such as T ) is continuous. Therefore, the function x ↦ ||T x|| is continuous. Thus, by the maximum value theorem, since the unit sphere Sⁿ⁻¹ is compact, there exists a global maximum of ||T x|| as x ranges over Sⁿ⁻¹. For part (ii), we note that the claim is obvious if x = 0. So, if x ≠ 0, then we have

(1/||x||) ||T x|| = ||T (x/||x||)|| ≤ ||T ||,

and multiplying by the positive number ||x|| yields ||T x|| ≤ ||T || ||x||, which is the result. The final inequality above holds because x/||x|| is a unit vector, so the length of its image under T is at most the operator norm, the supremum of all such outputs.
To prove part (iii), we note that the triangle inequality holds for the standard Euclidean norm on Rⁿ, so that if ||x|| = 1, then

||(T + S)x|| = ||T x + Sx|| ≤ ||T x|| + ||Sx|| ≤ ||T || + ||S||.

Thus ||T || + ||S|| is an upper bound for ||(T + S)x|| over all unit vectors x, so the operator norm, being the supremum of these numbers, satisfies ||S + T || ≤ ||S|| + ||T ||. For the last part, we choose S to be the identity map and T = −S. Then S + T = 0 is the zero map, so ||S + T || = 0, while ||S|| = ||T || = 1 > 0.
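As a concrete instance of part (i) (this particular T is chosen for illustration and is not part of the problem), for the diagonal map T (x1, x2) = (x1, 3x2) on R² the maximum over the unit circle S¹ is attained at (0, 1):

\[
\|T\| = \max_{\|x\| = 1} \|Tx\|
      = \max_{x_1^2 + x_2^2 = 1} \sqrt{x_1^2 + 9 x_2^2}
      = 3.
\]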
(b) We answer (i) and (ii) by computing Df and Dg first:

Df = [ 1   2 ],

Dg = [ 2x     2    ]
     [  0   cos y  ].

Notice that all of these partial derivatives exist and are continuous everywhere. Therefore g is differentiable at a and f is differentiable at g(a). In addition, we use the chain rule to compute

D(f ◦ g) = Df · Dg = [ 2x   2 + 2 cos y ].

We compute

(f ◦ g)(x, y) = f (g(x, y)) = x² + 2y + 2 sin y,

whose derivative, computed directly, is likewise D(f ◦ g) = [ 2x   2 + 2 cos y ].
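Spelled out, the matrix product in the chain rule step above is

\[
Df \cdot Dg = \begin{pmatrix} 1 & 2 \end{pmatrix}
\begin{pmatrix} 2x & 2 \\ 0 & \cos y \end{pmatrix}
= \begin{pmatrix} 1 \cdot 2x + 2 \cdot 0 & 1 \cdot 2 + 2 \cos y \end{pmatrix}
= \begin{pmatrix} 2x & 2 + 2\cos y \end{pmatrix}.
\]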
(c) Since f is harmonic, we have ∂²f/∂x² + ∂²f/∂y² = 0. Namely, the hypothesis that (∂²f/∂x²)(a) ≠ 0 means that (∂²f/∂y²)(a) = −(∂²f/∂x²)(a) ≠ 0. This means that, at a, the determinant of the Hessian is

(∂²f/∂x²)(a) (∂²f/∂y²)(a) − ((∂²f/∂x∂y)(a))² = −((∂²f/∂x²)(a))² − ((∂²f/∂x∂y)(a))² < 0.

If a were an extremum, then it must be a critical point where the derivative Df (a) = 0, and hence the second derivative test applies, and a would have to be a saddle point. Therefore, a is not an extremum of f.
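As an illustration of this argument (the specific function is chosen here as an example and is not part of the problem), the harmonic function f (x, y) = x² − y² satisfies ∂²f/∂x² = 2 ≠ 0 everywhere, its Hessian determinant is

\[
\frac{\partial^2 f}{\partial x^2} \, \frac{\partial^2 f}{\partial y^2}
  - \left( \frac{\partial^2 f}{\partial x \, \partial y} \right)^{2}
  = (2)(-2) - 0^2 = -4 < 0,
\]

and its only critical point, the origin, is indeed a saddle rather than an extremum.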
4. (a) (i) See the inverse function theorem on page 252. We must verify that Df is invertible at (1, 0). It is clear that Df exists, since the partial derivatives of f exist and are continuous. Therefore, evaluating

Df (x, y) = [ 2x   1 ]
            [  0   2 ]

at (1, 0) gives

Df (1, 0) = [ 2   1 ]
            [ 0   2 ],

and this matrix is invertible since its determinant is nonzero.
(ii) Since

Df (x, y) = [ 2x   1 ]
            [  0   2 ],

its inverse is

(1/(4x)) [ 2   −1 ]
         [ 0   2x ].
(iii) Setting 2y = v and x² + y = u, we can solve for x and y as y = v/2 and x = √(u − v/2). So the inverse function is

g(u, v) = ( √(u − v/2), v/2 ).

Computing the derivative,

Dg(u, v) = [ 1/(2√(u − v/2))   −1/(4√(u − v/2)) ]
           [        0                  1/2      ].
Expressing these values in terms of x and y will yield the same expression as above.
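As a consistency check (this verification is not written out in the solution above), note that f (1, 0) = (1, 0), so the inverse function theorem predicts Dg(1, 0) = Df (1, 0)⁻¹; indeed both equal

\[
Dg(1, 0)
= \begin{pmatrix} \tfrac{1}{2} & -\tfrac{1}{4} \\ 0 & \tfrac{1}{2} \end{pmatrix}
= \frac{1}{4} \begin{pmatrix} 2 & -1 \\ 0 & 2 \end{pmatrix}
= Df(1, 0)^{-1}.
\]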
(b) Since both f and g are differentiable (at any point in their domains), the chain rule applies. Notice that the function h : Rⁿ → Rⁿ defined by h(x) = x has Dh(a) = I, the identity matrix, for all a ∈ Rⁿ. Also, since (g ◦ f )(x) = x = h(x), we have

D(g ◦ f )(a) = Dg(f (a))Df (a) = I.
By taking determinants in the last equality, it follows that Df (a) has a nonzero determinant and is thus invertible. Multiplying the above by Df (a)⁻¹ on the right gives us the result.
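The determinant step can be written out explicitly: taking determinants of D(g ◦ f )(a) = Dg(f (a))Df (a) = I gives

\[
\det\big( Dg(f(a)) \big) \, \det\big( Df(a) \big) = \det(I) = 1,
\]

so det Df (a) ≠ 0; multiplying the identity above on the right by Df (a)⁻¹ then gives Dg(f (a)) = Df (a)⁻¹.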
(c) The definition of uniform continuity appears in the book, page 200, or in our
class notes. In addition, the proof of (ii) appears on page 200 or in the class notes as
well. The converse of the above statement is false: consider the constant function f (x) = 0 with domain Rⁿ. Then for any ε > 0, setting δ = 1, we have that ||x − y|| < δ implies ||f (x) − f (y)|| = 0 < ε. Thus this function is uniformly continuous on Rⁿ, even though Rⁿ is not compact.
(d) See this result as Theorem 1.2 on page 245.