Problem Set 5 Solutions
MATH 110: Linear Algebra
Each part of each problem is worth 5 points.
PART 1
1. Curtis p. 129, problem 1.
2. Curtis p. 130, problem 11.
Note: there is an error in the book. Part (c) of problem 11 on page 130 should
read x_1 − x_2 − 2x_3 = −2.
Problem 1(10)
An inner product on a vector space V is a function which assigns to each
pair of vectors u, v in V a real number such that the following conditions
are satisfied (c is any real number, w is any vector in V):
1. (u, v) = (v, u).
2. (u, v + w) = (u, v) + (u, w).
3. c(u, v) = (cu, v).
4. (u, u) > 0 if u ≠ 0.
Show that this definition of an inner product is equivalent to the axiomatic definition given in Curtis, page 119.
Proof: Notice that the definition in Curtis contains all the axioms listed
above, and more. The additional content is an axiom requiring linearity in
both coordinates, as well as a requirement that (u, u) ≥ 0 with equality if
and only if u = 0 (as opposed to axiom 4 above, which is slightly weaker).
Bilinearity in both coordinates follows easily by combining axiom 1 with
axioms 2 and 3, as written out below. To show that the positive definiteness
axiom in the book follows from 4 (together with 1, 2, 3), notice that if
u = 0, then (0, 0) = (0, 0 + 0) = (0, 0) + (0, 0). It follows from the
cancellation law that (0, 0) = 0, so (u, u) ≥ 0 for every u, with equality
if and only if u = 0.
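For concreteness, linearity in the second coordinate can be written out step
by step; linearity in the first coordinate then follows by one more
application of axiom 1:

    (u, cv + w) = (u, cv) + (u, w)    (axiom 2)
                = (cv, u) + (u, w)    (axiom 1)
                = c(v, u) + (u, w)    (axiom 3)
                = c(u, v) + (u, w)    (axiom 1)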
Problem 2(10)
Let C(R) be the vector space of all polynomials with inner product

    (x, y) = ∫_{−1}^{1} x(t) y(t) dt.
Consider the subspace of C(R) generated by the polynomials {1, x, x^2, x^3, x^4}.
Use the Gram-Schmidt process to find polynomials y_0, y_1, y_2, y_3, y_4 that are
orthonormal and span the same subspace.
Solution: Clearly y_0 = 1/√2. Now y_1(t) is just

    x_1(t) − ((x_1, y_0)/(y_0, y_0)) y_0(t)

normalized. These inner products can be computed:

    (y_0, y_0) = 1,
    (x_1, y_0) = ∫_{−1}^{1} (1/√2) t dt = 0.

It follows that y_1 = √(3/2) t. The process is repeated in this way using the
Gram-Schmidt “recipe”, yielding

    y_2(t) = (1/2)√(5/2) (3t^2 − 1),
    y_3(t) = (1/2)√(7/2) (5t^3 − 3t),
    y_4(t) = (1/8)√(9/2) (35t^4 − 30t^2 + 3).

These polynomials are known as normalized Legendre polynomials, and were
first encountered by the French mathematician Legendre (1752-1833) in his
work on potential theory in physics.
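As a sanity check, the whole computation can be reproduced with sympy; a
minimal sketch (the helper names inner, basis, and ortho are ours) that runs
Gram-Schmidt with this inner product and prints y_0 through y_4:

    import sympy as sp

    t = sp.symbols('t')

    def inner(f, g):
        # The inner product from Problem 2: integrate f*g over [-1, 1].
        return sp.integrate(f * g, (t, -1, 1))

    basis = [sp.Integer(1), t, t**2, t**3, t**4]
    ortho = []
    for x in basis:
        # Subtract the projections onto the orthonormal vectors found so far,
        for y in ortho:
            x = x - inner(x, y) * y
        # then normalize to unit length.
        ortho.append(sp.simplify(x / sp.sqrt(inner(x, x))))

    for y in ortho:
        print(sp.expand(y))  # 1/sqrt(2), sqrt(6)*t/2, ...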
Problem 3∗ (10)
Let U be a subspace of a vector space V . Show that if 0 is the only vector
orthogonal to all the vectors in U then U = V .
Proof: Let U^⊥ be the space of vectors orthogonal to U. You showed last
time that the dimension of U^⊥ plus the dimension of U is the dimension of
V. Thus, in the case given above, the dimension of U is the same as the
dimension of V. U is also contained in V, so U = V.
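In symbols: the hypothesis says exactly that U^⊥ = {0}, so

    dim U = dim V − dim U^⊥ = dim V − 0 = dim V.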
PART 3 - Optional Problem
a) Let S be a finite dimensional subspace of a Euclidean space V and let
e_1, e_2, . . . , e_n be an orthonormal basis for S. For any x ∈ V, let

    s = ∑_{i=1}^{n} (x, e_i) e_i

be defined as the projection of x onto S. Prove that for any t ∈ S,

    ‖x − s‖ ≤ ‖x − t‖,

with equality iff t = s.
Proof: We can write x = s + s^⊥, where s ∈ S and s^⊥ is in the space S^⊥ of
vectors orthogonal to S. For any t ∈ S, we have

    x − t = (x − s) + (s − t).

Since s − t ∈ S and x − s = s^⊥ ∈ S^⊥, this is an orthogonal decomposition of
x − t, so its norm is given by the Pythagorean formula:

    ‖x − t‖^2 = ‖x − s‖^2 + ‖s − t‖^2.

But ‖s − t‖^2 ≥ 0, so we have that

    ‖x − s‖ ≤ ‖x − t‖,

with equality if and only if s = t.
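A quick numerical illustration of part (a), as a small sketch in R^3 with an
example subspace of our own choosing: the projection s beats every other
candidate t drawn from S.

    import numpy as np

    rng = np.random.default_rng(0)

    # S = span{e1, e2}, with e1, e2 an orthonormal basis for a plane in R^3.
    e1 = np.array([1.0, 0.0, 0.0])
    e2 = np.array([0.0, 1.0, 0.0])

    x = rng.normal(size=3)
    s = (x @ e1) * e1 + (x @ e2) * e2   # projection of x onto S

    for _ in range(5):
        # Any other point t of S is at least as far from x as s is.
        t = rng.normal() * e1 + rng.normal() * e2
        assert np.linalg.norm(x - s) <= np.linalg.norm(x - t)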
b) Find the linear polynomial nearest to sin πt on the interval [−1, 1].
(Here nearest is based on the norm coming from the inner product defined in
problem 2.)
By part (a) we simply have to calculate the projection of f = sin πt onto
the subspace S of dimension 2 consisting of all polynomials of degree less
than or equal to 1. We know that such a projection is easily computed using
an orthonormal basis for S, and we have such a basis from problem 2 above.
In particular, if g is the “nearest” linear polynomial, then

    g = (f, y_0) y_0 + (f, y_1) y_1.

This formula requires the computation of two integrals: (f, y_0) = 0 and
(f, y_1) = (2/π)√(3/2). Thus

    g(t) = (3/π) t

is the “nearest” linear polynomial to sin πt on the interval [−1, 1]. Notice
that (f, y_2) is also zero, so the linear polynomial g is also the best
quadratic approximation.
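This projection can also be checked symbolically; a minimal sympy sketch,
with y0 and y1 copied from the solution to problem 2 (the helper name
inner is ours):

    import sympy as sp

    t = sp.symbols('t')
    y0 = 1 / sp.sqrt(2)                  # y_0 from Problem 2
    y1 = sp.sqrt(sp.Rational(3, 2)) * t  # y_1 from Problem 2

    def inner(f, g):
        # Inner product from Problem 2: integrate f*g over [-1, 1].
        return sp.integrate(f * g, (t, -1, 1))

    f = sp.sin(sp.pi * t)
    g = inner(f, y0) * y0 + inner(f, y1) * y1  # projection onto span{y0, y1}
    print(sp.simplify(g))                      # prints 3*t/pi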