1. Which of the following are subspaces of R∞? Explain why or why not.
(a) All sequences that include infinitely many zeroes.
Solution This is not a subspace of R∞ . Consider sequences S1 and
S2 , where
S1 = 1, 0, 1, 0, · · ·
and S2 = 0, 1, 0, 1, · · · .
More specifically, (S1 )i = 1 for odd indices and 0 for even indices,
whereas (S2 )i = 0 for odd indices and 1 for even indices. Both of
these sequences have infinitely many zeroes, but
S1 + S2 = 1, 1, 1, 1, · · · ,
which has no zeroes at all. Therefore, the collection of sequences
with infinitely many zeroes is not closed under addition and is not a
subspace of R∞ .
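As a quick sanity check, the counterexample can be verified on finite prefixes of the sequences (a Python sketch; the truncation length N is arbitrary):

```python
# Model S1 = 1, 0, 1, 0, ... and S2 = 0, 1, 0, 1, ... by their first N
# terms; list index 0 corresponds to sequence index i = 1 (odd), so S1
# starts with 1.
N = 10
S1 = [1 if i % 2 == 0 else 0 for i in range(N)]
S2 = [0 if i % 2 == 0 else 1 for i in range(N)]

# Both prefixes contain zeroes, but their sum contains none.
S = [a + b for a, b in zip(S1, S2)]
print(S1.count(0), S2.count(0), S.count(0))  # 5 5 0
```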
(b) All sequences (x1 , x2 , · · · ) with xj = 0 from some point onward.
Solution Certainly the sequence of all zeroes is in this set. Suppose
S1 and S2 are sequences such that (S1 )i = 0 for every i > N1 and
(S2 )i = 0 for every i > N2 (that is, they are both equal to zero from
some point onward; the points onward from which they are zero are
N1 and N2 respectively). Then if N = max{N1 , N2 } (take N to be
the bigger of these two numbers), we have for all i > N that
(S1 + S2 )i = (S1 )i + (S2 )i = 0 + 0 = 0,
so this set of sequences is closed under addition. Furthermore, if
c ∈ R, then (cS1 )i = 0 for all i > N1 , so this set is also closed
under scalar multiplication. Therefore, the collection of sequences
with xj = 0 from some point onward is a subspace of R∞ .
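The closure argument can be mirrored in code by storing an eventually-zero sequence as the finite list of its terms before the tail of zeroes (a Python sketch; the helper names add and scale are illustrative):

```python
def add(s1, s2):
    # Pad the shorter list with zeroes (the implicit tail), then add
    # termwise; the result is zero past N = max(N1, N2).
    n = max(len(s1), len(s2))
    s1 = s1 + [0] * (n - len(s1))
    s2 = s2 + [0] * (n - len(s2))
    return [a + b for a, b in zip(s1, s2)]

def scale(c, s):
    # Scaling never extends the nonzero part of the sequence.
    return [c * x for x in s]

# (1, 2, 0, 0, ...) + (0, 0, 5, 0, ...) = (1, 2, 5, 0, ...)
print(add([1, 2], [0, 0, 5]))  # [1, 2, 5]
print(scale(3, [1, 2]))        # [3, 6]
```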
(c) All arithmetic progressions: xj+1 − xj is the same for all j.
Solution The zero sequence is obviously in this set, as the difference between consecutive terms is always 0. Suppose S1 and S2 are
arithmetic progressions, where (S1 )i+1 − (S1 )i = a1 for all i and
(S2 )i+1 − (S2 )i = a2 for all i. Then
(S1 +S2 )i+1 −(S1 +S2 )i = ((S1 )i+1 −(S1 )i )+((S2 )i+1 −(S2 )i ) = a1 +a2 ,
so this set is closed under addition. Clearly,
(cS1 )i+1 − (cS1 )i = ca1 ,
so we also have closure under scalar multiplication, so the set of
arithmetic progressions is a subspace of R∞ .
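The closure computation above can be spot-checked numerically (a Python sketch; arith and common_difference are hypothetical helper names):

```python
def arith(x1, d, n):
    # First n terms of the arithmetic progression x1, x1 + d, x1 + 2d, ...
    return [x1 + i * d for i in range(n)]

def common_difference(s):
    # Returns the common difference if s is arithmetic, else None.
    diffs = {s[i + 1] - s[i] for i in range(len(s) - 1)}
    return diffs.pop() if len(diffs) == 1 else None

s1 = arith(2, 3, 8)   # difference a1 = 3
s2 = arith(7, -1, 8)  # difference a2 = -1
s = [a + b for a, b in zip(s1, s2)]
print(common_difference(s))  # 2, i.e. a1 + a2, as the proof predicts
```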
(d) All geometric progressions (x1 , kx1 , k 2 x1 , · · · ), allowing all k and x1 .
Solution Consider the sequences S1 = 1, 1, 1, 1, · · · and S2 = 1, 1/2, 1/4, 1/8, · · · ;
that is, S1 is geometric with k = 1, x1 = 1, and S2 is geometric with
k = 1/2, x1 = 1. Then
S1 + S2 = 2, 3/2, 5/4, 9/8, · · · .
This sequence is not geometric, because from the first two terms we
would have to have x1 = 2, k = 3/4, but then this would require
the third term to be k 2 x1 = 9/8, which it is not. Therefore, the set
of geometric sequences is not closed under addition, so it is not a
subspace of R∞ .
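The arithmetic behind this counterexample can be verified with exact rational arithmetic (a Python sketch using the standard fractions module):

```python
from fractions import Fraction

# S1 is geometric with k = 1, x1 = 1; S2 is geometric with k = 1/2, x1 = 1.
S1 = [Fraction(1) for _ in range(4)]
S2 = [Fraction(1, 2) ** i for i in range(4)]
S = [a + b for a, b in zip(S1, S2)]
print([str(t) for t in S])  # ['2', '3/2', '5/4', '9/8']

# The consecutive ratios are not constant, so S is not geometric.
ratios = [S[i + 1] / S[i] for i in range(3)]
print([str(r) for r in ratios])  # ['3/4', '5/6', '9/10']
```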
2. Explanation question similar to 2.1.5 in the book:
(a) Suppose addition in R2 adds an extra 1 to each component, so that
(3, 1) + (5, 0) is (9, 2) instead of (8, 1). If scalar multiplication is
unchanged, which rules of a vector space are broken? Explain why
they are broken with sentences and an example.
Solution In this instance, the “zero vector” will be (−1, −1), as
(a, b) + (−1, −1) = ((a + (−1)) + 1, (b + (−1)) + 1) = (a, b).
We can even do additive inverses, as
(a, b) + (−a − 2, −b − 2) = (−1, −1).
However, scalar multiplication and addition do not distribute the way
they should: consider, in contradiction to the rules of a vector space,
that
2 · ((−1, −1) + (−1, −1)) = 2 · ((−1, −1)) = (−2, −2)
whereas
2 · (−1, −1) + 2 · (−1, −1) = (−2, −2) + (−2, −2) = (−3, −3),
and
(−1 + 1) · (0, 0) = 0 · (0, 0) = (0, 0)
but
−1 · (0, 0) + 1 · (0, 0) = (0, 0) + (0, 0) = (1, 1).
These examples contradict the rules
c(x + y) = cx + cy and (c1 + c2 )x = c1 x + c2 x.
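These failed identities can be checked mechanically by modelling the modified addition (a Python sketch; add and scale are illustrative names):

```python
def add(u, v):
    # The modified addition: an extra 1 is added to each component.
    return (u[0] + v[0] + 1, u[1] + v[1] + 1)

def scale(c, u):
    # Scalar multiplication is unchanged.
    return (c * u[0], c * u[1])

# Distributivity c(x + y) = cx + cy fails at x = y = (-1, -1):
x = y = (-1, -1)
print(scale(2, add(x, y)))            # (-2, -2)
print(add(scale(2, x), scale(2, y)))  # (-3, -3)
```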
(b) Explain why the set of all positive real numbers, with x + y and cx
redefined to equal the usual xy and xc , is a vector space. You don’t
need to explicitly check each of the 8 properties, but do explain why
“linear combinations” stay in the space.
Solution Consider scalars c1 , . . . , ck ∈ R and “vectors” x1 , . . . , xk ∈
R++ (where R++ denotes the set of all positive real numbers). Then
c1 x1 + · · · + ck xk = x1^c1 · · · xk^ck .
You may observe that this linear combination is certainly a positive
real number – a positive number to any power is positive, and a product of positive numbers is always positive. This equation illustrates
that linear combinations “work properly” in this scenario, and it’s not
hard to observe that the “zero vector” here will be 1, while the additive inverse −x of any vector x ∈ R++ is simply 1/x = x^−1 = (−1) · x.
The other rules work as well; for example,
(c1 + c2 ) · x = x^(c1 + c2) = x^c1 x^c2 = c1 · x + c2 · x.
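The claims above can be spot-checked numerically (a Python sketch; vadd and smul are illustrative names for the redefined operations):

```python
import math

# "Vector addition" is ordinary multiplication and "scalar multiplication"
# is exponentiation; the "zero vector" is 1 and the inverse of x is 1/x.
def vadd(x, y):
    return x * y

def smul(c, x):
    return x ** c

# A linear combination c1*x1 + c2*x2 stays in R++ (remains positive):
x1, x2, c1, c2 = 4.0, 9.0, 0.5, -1.0
combo = vadd(smul(c1, x1), smul(c2, x2))
print(combo)  # 2 * (1/9), approximately 0.2222, a positive real

# The distributive rule (c1 + c2) * x = c1*x + c2*x holds:
x = 5.0
print(math.isclose(smul(c1 + c2, x), vadd(smul(c1, x), smul(c2, x))))  # True
```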
3. Prove that the LDU decomposition is unique for invertible matrices.
Solution Suppose that
A = L1 D1 U1 = L2 D2 U2
are two LDU factorizations of A; then
• L1 and L2 are unit lower triangular
• U1 and U2 are unit upper triangular
• D1 and D2 are diagonal
Right multiplying our first equation by U2^−1 and left multiplying it by
L1^−1 gives D1 U1 U2^−1 = L1^−1 L2 D2 . Consider now that L1^−1 is also unit lower
triangular; Gaussian elimination shows that the inverse of any unit lower
triangular matrix is unit lower triangular. For example, reducing [L | I]
to [I | L^−1]:

[ 1 0 0 | 1 0 0 ]     [ 1 0 0 |  1  0 0 ]
[ 2 1 0 | 0 1 0 ]  →  [ 0 1 0 | −2  1 0 ]
[ 3 4 1 | 0 0 1 ]     [ 0 0 1 |  5 −4 1 ]
Furthermore, the entire right hand side is unit lower triangular, as the
product of any two unit lower triangular matrices is unit lower triangular.
Consider two unit lower triangular matrices L and L̃ ∈ R^(n×n) with entries
lij and l̃ij , respectively. Then (LL̃)ij = Σ_{k=1}^n lik l̃kj ; when
j > i, the non-zero entries of the ith row of L and the jth column of L̃ never
overlap, so the product is lower triangular. Additionally, when i = j, the
only nonzero term of this summation is lii l̃ii = 1 · 1 = 1,
so LL̃ has 1’s on its diagonal. Hence LL̃ is unit lower triangular; using
L = L1^−1 and L̃ = L2 gives that L1^−1 L2 is unit lower triangular. Similarly,
we may show that U1 U2^−1 is unit upper triangular.
At this point, we have an upper triangular matrix equal to a lower triangular
matrix; this is only possible if both matrices are diagonal, meaning
in particular that L1^−1 L2 D2 is diagonal. However, right multiplying a matrix
B by a diagonal matrix D scales the columns of B by the diagonal
entries of D; therefore, if BD is diagonal (and all of D’s diagonal entries
are non-zero), it follows that B was diagonal to begin with. (The diagonal
entries of D2 are indeed non-zero here, since A is invertible.) For example,

[ 1 0 0 ]   [ 1 0 0 ]   [ 1 0 0 ]
[ 1 1 0 ] · [ 0 2 0 ] = [ 1 2 0 ] .
[ 1 1 1 ]   [ 0 0 3 ]   [ 1 2 3 ]
In our case, this means that L1^−1 L2 must be diagonal (since L1^−1 L2 D2 is
diagonal)! However, L1^−1 L2 is also unit lower triangular, so its diagonal
entries are 1’s. Therefore, L1^−1 L2 is a diagonal matrix with 1’s on the
diagonal, meaning L1^−1 L2 = I, so L1 = L2 . By a similar argument,
U1 = U2 . Hence, the equation D1 U1 U2^−1 = L1^−1 L2 D2 becomes D1 = D2 .
This completes the proof, since we’ve shown that if L1 D1 U1 and L2 D2 U2
are two LDU decompositions of A, they must be exactly the same.
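The two triangularity facts the proof relies on (products and inverses of unit lower triangular matrices are again unit lower triangular) can be spot-checked numerically (a Python sketch using NumPy; the random test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_lower(n):
    # 1's on the diagonal, random entries strictly below it.
    return np.eye(n) + np.tril(rng.standard_normal((n, n)), k=-1)

L1, L2 = random_unit_lower(4), random_unit_lower(4)

# The product of two unit lower triangular matrices is unit lower triangular...
P = L1 @ L2
print(np.allclose(P, np.tril(P)), np.allclose(np.diag(P), 1.0))  # True True

# ...and so is the inverse of one.
Linv = np.linalg.inv(L1)
print(np.allclose(Linv, np.tril(Linv)), np.allclose(np.diag(Linv), 1.0))  # True True
```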