Math 217: Multilinearity of Determinants
Professor Karen Smith
(c)2015 UM Math Dept
licensed under a Creative Commons
By-NC-SA 4.0 International License.
A. Let T : V → V be a linear transformation where V has dimension n.
1. What is meant by the determinant of T ? Why is this well-defined?
Solution note: The determinant of T is the determinant of the B-matrix of T , for
any basis B of V . Since all B-matrices of T are similar, and similar matrices have the
same determinant, this is well-defined—it doesn’t depend on which basis we pick.
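As a quick numerical illustration (not from the worksheet; the matrices A and S below are made-up sample data), one can check with numpy that similar matrices have the same determinant:

    import numpy as np

    A = np.array([[2., 1., 0.],
                  [0., 3., 1.],
                  [1., 0., 1.]])       # a sample B-matrix of some T
    S = np.array([[1., 1., 0.],
                  [0., 1., 2.],
                  [1., 0., 1.]])       # a sample invertible change-of-basis matrix
    B = np.linalg.inv(S) @ A @ S       # the matrix of the same T in another basis
    print(np.linalg.det(A), np.linalg.det(B))   # equal up to rounding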
2. Define the rank of T .
Solution note: The rank of T is the dimension of the image.
3. Explain why T is an isomorphism if and only if det T is not zero.
Solution note: T is an isomorphism if and only if [T]_B is invertible (for any choice of
basis B), which happens if and only if det T ≠ 0.
4. Now let V = R^3 and let T be rotation around the axis L (a line through the origin) by an
angle θ. Find a basis for R^3 in which the matrix of T is
$\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}$.
Use this to compute the determinant of T. Is T orthogonal?
Solution note: Let ~v be any vector spanning L and let ~u1, ~u2 be an orthonormal basis
for the plane L^⊥. Rotation fixes ~v, which means the B-matrix in the basis B = (~v, ~u1, ~u2)
has first column $\begin{bmatrix}1\\0\\0\end{bmatrix}$. In the plane L^⊥, the rotation is just an ordinary rotation
like in R^2. The determinant is 1. The matrix is orthogonal because the columns are
orthonormal, or alternatively, because the rotation map preserves the length of every
vector.
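A short numpy check of part 4 (a sketch with an arbitrary sample angle, not part of the worksheet): the matrix above has determinant 1 and orthonormal columns.

    import numpy as np

    theta = 0.7                         # arbitrary sample angle
    R = np.array([[1, 0, 0],
                  [0, np.cos(theta), -np.sin(theta)],
                  [0, np.sin(theta),  np.cos(theta)]])
    print(np.linalg.det(R))                 # ≈ 1.0
    print(np.allclose(R.T @ R, np.eye(3)))  # True: the columns are orthonormal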
B. Theorem: The determinant is multilinear in the columns. The determinant is multilinear
in the rows. This means that if we fix all but one column of an n × n matrix, the determinant
function is linear in the remaining column. Ditto for rows.


 
 


1. Let ~v2 = $\begin{bmatrix}1\\-1\\0\end{bmatrix}$, ~v3 = $\begin{bmatrix}2\\0\\1\end{bmatrix}$. Discuss how the map R^3 → R sending $\begin{bmatrix}x\\y\\z\end{bmatrix} \mapsto \det\begin{bmatrix}x & 1 & 2\\ y & -1 & 0\\ z & 0 & 1\end{bmatrix}$
illustrates the Theorem in B. Is this map linear? If so, find its matrix.
Solution note: Yes, the map is a linear transformation from R^3 to R. This is the
content of the theorem! You can (and perhaps should!) also verify this directly
by checking it preserves addition and scalar multiplication. Its matrix is 1 × 3
and we find it by seeing where each of the ~e_i go. To find this, we compute the
three determinants obtained by replacing the first column by each of ~e1, ~e2, ~e3.
We get A = $\begin{bmatrix}-1 & -1 & 2\end{bmatrix}$.
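A minimal numpy sketch (random test vector, not from the worksheet) confirming that this determinant map agrees with multiplication by the matrix [−1 −1 2]:

    import numpy as np

    v2 = np.array([1., -1., 0.])
    v3 = np.array([2., 0., 1.])
    A = np.array([-1., -1., 2.])                       # the matrix found above

    x = np.random.default_rng(0).standard_normal(3)    # random test vector
    det_value = np.linalg.det(np.column_stack([x, v2, v3]))
    print(np.isclose(det_value, A @ x))                 # True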
2. Using the row vectors ~v2T and ~v3T , illustrate what it means that the determinant function on
a 3 × 3 matrix is linear in the second row.
Solution note: Linearity in row 2 means:
$$\det\begin{bmatrix}1 & -1 & 0\\ x & y & z\\ 2 & 0 & 1\end{bmatrix} + \det\begin{bmatrix}1 & -1 & 0\\ x' & y' & z'\\ 2 & 0 & 1\end{bmatrix} = \det\begin{bmatrix}1 & -1 & 0\\ x + x' & y + y' & z + z'\\ 2 & 0 & 1\end{bmatrix}$$
for all vectors $\begin{bmatrix}x\\y\\z\end{bmatrix}, \begin{bmatrix}x'\\y'\\z'\end{bmatrix} \in R^3$, and
$$\det\begin{bmatrix}1 & -1 & 0\\ kx & ky & kz\\ 2 & 0 & 1\end{bmatrix} = k \det\begin{bmatrix}1 & -1 & 0\\ x & y & z\\ 2 & 0 & 1\end{bmatrix}$$
for all scalars k.
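These two identities can also be spot-checked numerically; here is a small numpy sketch (random rows r and r′ and an arbitrary scalar, not part of the worksheet), using the fixed first row [1 −1 0] and third row [2 0 1]:

    import numpy as np

    top = np.array([1., -1., 0.])      # fixed first row
    bottom = np.array([2., 0., 1.])    # fixed third row
    rng = np.random.default_rng(1)
    r, rp, k = rng.standard_normal(3), rng.standard_normal(3), 2.5

    det_with_row2 = lambda row: np.linalg.det(np.vstack([top, row, bottom]))
    print(np.isclose(det_with_row2(r) + det_with_row2(rp), det_with_row2(r + rp)))  # True
    print(np.isclose(det_with_row2(k * r), k * det_with_row2(r)))                   # True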
3. Prove the Theorem in the 2 × 2 case for the first column. That is: Show that the determinant
of a 2 × 2 matrix is linear in the first column.
Solution note: We need to show
$$\det\begin{bmatrix}a + a' & b\\ c + c' & d\end{bmatrix} = \det\begin{bmatrix}a & b\\ c & d\end{bmatrix} + \det\begin{bmatrix}a' & b\\ c' & d\end{bmatrix}, \qquad \det\begin{bmatrix}ka & b\\ kc & d\end{bmatrix} = k \det\begin{bmatrix}a & b\\ c & d\end{bmatrix}.$$
Compute: (a + a′)d − (c + c′)b = ad + a′d − cb − c′b = (ad − cb) + (a′d − c′b), and
(ka)d − (kc)b = k(ad − cb). QED.
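The same two identities can be verified symbolically, for example with sympy (a convenience check, not part of the worksheet):

    import sympy as sp

    a, b, c, d, a1, c1, k = sp.symbols("a b c d a1 c1 k")   # a1, c1 play the roles of a', c'
    det2 = lambda M: sp.Matrix(M).det()

    additivity = det2([[a + a1, b], [c + c1, d]]) - (det2([[a, b], [c, d]]) + det2([[a1, b], [c1, d]]))
    homogeneity = det2([[k*a, b], [k*c, d]]) - k*det2([[a, b], [c, d]])
    print(sp.expand(additivity), sp.expand(homogeneity))    # prints: 0 0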
4. Suppose ~v1 , . . . , ~vn−1 are vectors in Rn and that
det[~v1 ~v2 . . . ~vn−1 ~a] = 5,
det[~v1 ~v2 . . . ~vn−1 ~b] = 7,
det[~v1 ~v2 . . . ~vn−1 ~c] = −3.
Find det[~v1 ~v2 . . . ~vn−1 (2~a + 4~b − 6~c + 17~v1 )].
Solution note: Using multilinearity, we have
det[~v1 ~v2 . . . ~vn−1 (2~a + 4~b − 6~c + 17~v1)]
= det[~v1 ~v2 . . . ~vn−1 2~a] + det[~v1 ~v2 . . . ~vn−1 4~b] + det[~v1 ~v2 . . . ~vn−1 (−6~c)] + det[~v1 ~v2 . . . ~vn−1 17~v1]
= 2 det[~v1 ~v2 . . . ~vn−1 ~a] + 4 det[~v1 ~v2 . . . ~vn−1 ~b] − 6 det[~v1 ~v2 . . . ~vn−1 ~c] + 17 det[~v1 ~v2 . . . ~vn−1 ~v1]
= 10 + 28 + 18 + 0 = 56.
The final zero is because a matrix with dependent columns always has determinant
zero.
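The same manipulation can be illustrated numerically; here is a numpy sketch with random data (n = 4, not the specific vectors of the problem) checking that the combined last column yields the matching combination of determinants:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 4
    V = rng.standard_normal((n, n - 1))        # random columns v1, ..., v_{n-1}
    a, b, c = rng.standard_normal((3, n))

    D = lambda last: np.linalg.det(np.column_stack([V, last]))
    lhs = D(2*a + 4*b - 6*c + 17*V[:, 0])      # combined last column
    rhs = 2*D(a) + 4*D(b) - 6*D(c)             # the 17*v1 term contributes 0
    print(np.isclose(lhs, rhs))                # True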
5. Prove that the determinant function of an n × n matrix is linear in the last column. [Hint:
Use Laplace expansion along the last column.]
Solution note: Fix ~v1, . . . , ~vn−1 in R^n. We need to show
det[~v1 ~v2 . . . ~vn−1 (~x + ~y)] = det[~v1 ~v2 . . . ~vn−1 ~x] + det[~v1 ~v2 . . . ~vn−1 ~y],
and similarly for scalar multiplication. Expanding along the last column (with A_{in} denoting the minor obtained by deleting row i and column n):
det[~v1 ~v2 . . . ~vn−1 (~x + ~y)] = (−1)^{n+1}[(x_1 + y_1)A_{1n} − (x_2 + y_2)A_{2n} + · · · ± (x_n + y_n)A_{nn}]
= (−1)^{n+1}[x_1 A_{1n} − x_2 A_{2n} + · · · ± x_n A_{nn}] + (−1)^{n+1}[y_1 A_{1n} − y_2 A_{2n} + · · · ± y_n A_{nn}]
= det[~v1 ~v2 . . . ~vn−1 ~x] + det[~v1 ~v2 . . . ~vn−1 ~y].
A similar calculation shows that scalar multiplication is preserved too.
6. Use Laplace expansion to explain why the determinant is linear in each column and row.
7. True or False: The determinant map is linear. Justify.
Solution note: FALSE! For example, det(2I_2) = 4, which is not 2 det I_2 = 2.
C. Corollary: Let A be an n × n matrix. Then det(kA) = k^n det A for any scalar k. Prove this.
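A quick numpy spot-check of the Corollary for a random 4 × 4 matrix and k = 3 (illustration only, not a proof):

    import numpy as np

    A = np.random.default_rng(3).standard_normal((4, 4))
    k = 3.0
    print(np.isclose(np.linalg.det(k * A), k**4 * np.linalg.det(A)))   # True: det(kA) = k^n det A with n = 4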
D. Compute $\det\begin{bmatrix}2a + 5p & 5q + 2d\\ p & q\end{bmatrix}$ given that $\det\begin{bmatrix}a & -d\\ p & -q\end{bmatrix} = 17$.
Solution note:
$$\det\begin{bmatrix}2a + 5p & 5q + 2d\\ p & q\end{bmatrix} = \det\begin{bmatrix}2a & 2d\\ p & q\end{bmatrix} + \det\begin{bmatrix}5p & 5q\\ p & q\end{bmatrix} = \det\begin{bmatrix}2a & 2d\\ p & q\end{bmatrix}$$
by linearity in the first row and the fact that the determinant is zero if there is a relation
on the rows. This further becomes $2\det\begin{bmatrix}a & d\\ p & q\end{bmatrix}$, again by linearity in the first row.
On the other hand, using linearity in the second column, $\det\begin{bmatrix}a & d\\ p & q\end{bmatrix} = -\det\begin{bmatrix}a & -d\\ p & -q\end{bmatrix} = -17$.
So the answer is 2(−17) = −34.
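The computation can be confirmed symbolically; the sympy sketch below (a convenience check, not part of the worksheet) verifies that the target determinant is always −2 times the given one, hence −34 when the given determinant is 17:

    import sympy as sp

    a, d, p, q = sp.symbols("a d p q")
    target = sp.Matrix([[2*a + 5*p, 5*q + 2*d], [p, q]]).det()
    given = sp.Matrix([[a, -d], [p, -q]]).det()
    print(sp.expand(target + 2*given))   # prints 0, i.e. target = -2 * given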
E. Theorem: The determinant is alternating in the columns. The determinant is alternating
in the rows. This means if we interchange two columns, the determinant changes sign. Ditto for
rows.


1. Verify this for swapping the second and third columns of $\begin{bmatrix}0 & 1 & 2\\ 0 & -1 & 0\\ 1 & 0 & 1\end{bmatrix}$. Also verify for
swapping the first and third rows.
2. Prove the theorem in the case of 2 × 2 matrices.
3. Let A be an n × n matrix. Let B be obtained from A by switching columns i and j. Prove
that det A = − det B. [Hint: Write A = [~v1 ~v2 . . . ~vn]. Compute the determinant of
[~v1 ~v2 . . . ~vi + ~vj . . . ~vi + ~vj . . . ~vn], where ~vi + ~vj is in BOTH the i-th spot
and the j-th spot.]
Solution note: (1) Computing the determinant of the matrix in (1) we get 2. We get
−2 after doing either of the intended swaps.
(2) We prove only the alternating property in the rows, the case of columns being similar. We need to
check that $\det\begin{bmatrix}a & b\\ c & d\end{bmatrix} = -\det\begin{bmatrix}c & d\\ a & b\end{bmatrix}$. This is easy: ad − cb = −(cb − ad).
(3) Note that [~v1 ~v2 . . . ~vi + ~vj . . . ~vi + ~vj . . . ~vn] has determinant zero because two columns are the same. Now expand out using the multilinearity: this is
the same as
det[~v1 ~v2 . . . ~vi . . . ~vi + ~vj . . . ~vn] + det[~v1 ~v2 . . . ~vj . . . ~vi + ~vj . . . ~vn].
Expanding in the other column and using the fact that the determinant is zero if two
columns are the same, we get
0 = det[~v1 ~v2 . . . ~vi . . . ~vj . . . ~vn] + det[~v1 ~v2 . . . ~vj . . . ~vi . . . ~vn],
which says exactly that det A = − det B.
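A quick numerical illustration of the Theorem in E (random 4 × 4 matrix, numpy; not part of the worksheet): swapping two columns flips the sign of the determinant.

    import numpy as np

    A = np.random.default_rng(4).standard_normal((4, 4))
    B = A.copy()
    B[:, [1, 3]] = B[:, [3, 1]]       # swap the second and fourth columns
    print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))   # True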
It is an interesting Theorem that the determinant is the ONLY alternating multilinear function of the columns of
an n × n matrix which takes the value 1 on the identity matrix. More theoretical linear algebra courses (for example,
Math 420, which maybe you’ll take someday) usually take this to be the definition of the determinant. We won’t do
this in Math 217, however.
F. Suppose that we have a linear
0 1
0 −1

1 0
(A1 , A2 , A3 , A4 , A5 , A6 ) is 
0 −1

0 −1
0 −1
2×3 → R2×3 whose matrix in the basis
transformation
 T : R
2 3 0 0
0 0 0 0

1 0 0 0
.
0 0 0 0

0 0 0 0
0 0 0 0
1. What is the determinant of T ?
2. What is the rank of T ?
3. Find a basis for the kernel and image of T .
4. Suppose that (B1 , B2 , B3 , B4 , B5 , B6 ) is another basis, where
B1 = A1 + A2 , B2 = A1 − A2 , B3 = 3A3 , B4 = 4A4 , B5 = A5 − A4 − A3 − A2 , B6 = A6 .
Find the change of basis matrix SB→A . Without computing: explain why the change of
basis matrix has non-zero determinant.
5. Find the B-matrix of T (do not simplify!). Without computing: what is its determinant?
Solution note: (1), (2) The rank is 3, which is less than 6, so the determinant is 0.
(3) A basis for the image is represented by a maximal set of linearly independent
columns; these can be taken to be the first, second and third columns, which correspond
to A3, A1 − A2 − A4 − A5 − A6, and 2A1 + A3. Since A3 is already in the list, we may
replace the last vector by A1, so a basis for the image is (A3, A1 − A2 − A4 − A5 − A6, A1).
A basis for the kernel thus consists of three elements (rank-nullity). To find it, we first
use row reduction to solve the system A~x = ~0, and then re-interpret the solution vectors
as coordinates to get a basis of the kernel in R^{2×3}. Solving the system, we find that the
solutions are spanned by [3/2 0 −3/2 1 0 0]^T, ~e5, ~e6. Thus a basis for the kernel is
(3A1 − 3A3 + 2A4, A5, A6).
(4) The change of basis matrix has the vectors B1, . . . , B6, written in A-coordinates, as its columns:
$$S_{B\to A} = \begin{bmatrix} 1 & 1 & 0 & 0 & 0 & 0\\ 1 & -1 & 0 & 0 & -1 & 0\\ 0 & 0 & 3 & 0 & -1 & 0\\ 0 & 0 & 0 & 4 & -1 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}.$$
Its determinant is non-zero because change of basis matrices are always invertible!
The determinant of the B-matrix of T is zero, since it is the same as the determinant
of the A-matrix of T .
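To close, here is a numpy sketch (not part of the worksheet) checking parts 1-3 against the 6 × 6 A-matrix of T written in F above: the rank is 3, the determinant is 0, and the coordinate vectors of the stated kernel basis are sent to zero.

    import numpy as np

    M = np.array([[0,  1, 2, 3, 0, 0],
                  [0, -1, 0, 0, 0, 0],
                  [1,  0, 1, 0, 0, 0],
                  [0, -1, 0, 0, 0, 0],
                  [0, -1, 0, 0, 0, 0],
                  [0, -1, 0, 0, 0, 0]], dtype=float)   # the A-matrix of T from F
    print(np.linalg.matrix_rank(M), np.linalg.det(M))  # 3 and 0.0
    kernel_coords = np.array([[3, 0, -3, 2, 0, 0],     # 3A1 - 3A3 + 2A4
                              [0, 0, 0, 0, 1, 0],      # A5
                              [0, 0, 0, 0, 0, 1]],     # A6
                             dtype=float)
    print(np.allclose(M @ kernel_coords.T, 0))         # True: these lie in the kernel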