
Math 461/561 Week 2 Solutions
1.7 Let L be a Lie algebra. The Jacobi identity says:
[x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0,
so rearranging gives:
[x, [y, z]] − [[x, y], z] = [[z, x], y].
So the left-hand side is zero for all x, y, z if and only if [[z, x], y] = 0 for all x, y, z, i.e. if and
only if every commutator [z, x] lies in the center Z(L).
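The rearranged identity can be spot-checked numerically, since any associative algebra (here, 3 × 3 real matrices) becomes a Lie algebra under the commutator. This check is an illustration of mine, not part of the original solution:

```python
import numpy as np

rng = np.random.default_rng(0)

def br(a, b):
    """Commutator bracket [a, b] = ab - ba."""
    return a @ b - b @ a

# Random matrices: the Jacobi identity holds, so the rearranged form
# [x, [y, z]] - [[x, y], z] = [[z, x], y] should hold as well.
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))
lhs = br(x, br(y, z)) - br(br(x, y), z)
rhs = br(br(z, x), y)
assert np.allclose(lhs, rhs)
```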
1.14 See appendix.
1.15 Let S be an n × n matrix and define:
glS (n, F ) = {x ∈ gl(n, F ) : x^t S = −Sx}.
(i) Show glS (n, F ) is a Lie subalgebra of gl(n, F ).
Since multiplication by S and taking the transpose are both linear maps, it is easy to check we
have a subspace. Now let A, B ∈ glS (n, F ), so by definition:
A^t S = −SA,  B^t S = −SB.
We want [A, B] ∈ glS (n, F ) so we just check the condition:
[A, B]^t S = (AB − BA)^t S
= ((AB)^t − (BA)^t )S
= B^t A^t S − A^t B^t S
= B^t (−SA) − A^t (−SB)   (using X^t S = −SX)
= −(B^t S)A + (A^t S)B
= SBA − SAB
= S[B, A] = −S[A, B].
Thus [A, B] ∈ glS (n, F ) as desired, and we have a subalgebra.
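As a numerical sanity check of part (i) (my addition, using the special case S = I, for which glS consists of the antisymmetric matrices):

```python
import numpy as np

rng = np.random.default_rng(1)
S = np.eye(3)

# A = M - M^t is antisymmetric, so A^t S = -S A when S = I.
M = rng.standard_normal((3, 3))
N = rng.standard_normal((3, 3))
A = M - M.T
B = N - N.T
assert np.allclose(A.T @ S, -S @ A) and np.allclose(B.T @ S, -S @ B)

# The bracket [A, B] should satisfy the same condition, i.e. stay in gl_S.
C = A @ B - B @ A
assert np.allclose(C.T @ S, -S @ C)
```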
(ii). Let
S = ( 0 1 ; 0 0 ), A = ( a b ; c d ),
writing matrices row by row with rows separated by semicolons. Then:
A^t S = ( 0 a ; 0 b ), −SA = ( −c −d ; 0 0 ).
Setting them equal gives b = c = 0 and a = −d. Thus:
glS (2, R) = { ( a 0 ; 0 −a ) : a ∈ R }.
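A quick numerical spot-check of this computation (an illustration with a = 3, not part of the original solution): diag(a, −a) satisfies the defining condition while, say, E12 does not.

```python
import numpy as np

S = np.array([[0.0, 1.0], [0.0, 0.0]])
A = np.array([[3.0, 0.0], [0.0, -3.0]])   # diag(a, -a) with a = 3
E12 = np.array([[0.0, 1.0], [0.0, 0.0]])  # a matrix outside the family

assert np.allclose(A.T @ S, -S @ A)          # diag(a, -a) lies in gl_S(2, R)
assert not np.allclose(E12.T @ S, -S @ E12)  # E12 does not
```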
(iii) If glS (2, R) were the set of diagonal matrices, it would contain the identity I, and then
I^t S = −SI forces S = −S. Since the field is R this only happens if S = 0. But if S = 0 then
every matrix satisfies the condition, not just the diagonal ones. So NO!
(iv) By 1.14 (i) we want to obtain the set of antisymmetric 3 × 3 matrices, i.e. those with
x^t = −x. Taking S = Id turns the condition x^t S = −Sx into exactly x^t = −x, so we may
choose S = Id.
2.3 Part (i) is an easy application of the fact that the bracket in L is bilinear and satisfies (L1) and
(L2). For part two define π by π(z) = z + I, the quotient map. Then:
π([x, y]) = [x, y] + I
= [x + I, y + I]
= [π(x), π(y)],
so π is a Lie algebra homomorphism.
2.6 Verifying the axioms for L1 ⊕ L2 is easy since the bracket is done coordinatewise and the axioms
hold in both L1 and L2 .
(i) Show gl2 (C) ≅ sl2 (C) ⊕ C as Lie algebras. Consider the map:
φ : gl2 (C) → sl2 (C) ⊕ C
given by:
φ(A) = (A − (1/2) tr(A) · Id, tr(A)).
First notice the image of φ is in the correct place, since the matrix A − (1/2) tr(A) · Id does have
trace 0. Since trace is a linear map, it is easy to check φ is linear. To see φ is 1-1 and onto, notice
that it has an inverse map:
ρ(B, λ) = B + (λ/2) · Id.
Indeed ρ(φ(A)) = A, and φ(ρ(B, λ)) = (B, λ) because tr B = 0 for B ∈ sl2 (C).
Finally we must check φ is a Lie algebra homomorphism:
φ([A, B]) = φ(AB − BA)
= (AB − BA, 0) since tr(AB − BA) = 0,
while
[φ(A), φ(B)] = [(A − (1/2) tr A · Id, tr A), (B − (1/2) tr B · Id, tr B)]
= ((A − (1/2) tr A · Id)(B − (1/2) tr B · Id) − (B − (1/2) tr B · Id)(A − (1/2) tr A · Id), 0) since C is abelian
= (AB − BA, 0), since multiples of Id commute with everything.
Thus φ is a Lie algebra homomorphism.
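The homomorphism property can also be confirmed numerically; the sketch below (my addition) tests φ on random complex 2 × 2 matrices, with the coordinatewise bracket on sl2 (C) ⊕ C (the bracket on the C summand is zero).

```python
import numpy as np

rng = np.random.default_rng(2)

def phi(A):
    """phi(A) = (A - (1/2) tr(A) Id, tr(A))."""
    t = np.trace(A)
    return A - 0.5 * t * np.eye(2), t

A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
B = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

# phi([A, B]) computed directly...
m1, s1 = phi(A @ B - B @ A)
# ...versus the coordinatewise bracket [phi(A), phi(B)].
Pa, ta = phi(A)
Pb, tb = phi(B)
m2, s2 = Pa @ Pb - Pb @ Pa, 0.0

assert np.allclose(m1, m2) and np.isclose(s1, 0.0) and np.isclose(s2, 0.0)
```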
(ii) Let (x, y) ∈ L1 ⊕ L2 . Then (x, y) ∈ Z(L1 ⊕ L2 ) if and only if [(x, y), (a, b)] = (0, 0) for all
(a, b) ∈ L1 ⊕ L2 . But [(x, y), (a, b)] = ([x, a], [y, b]) so this is zero if and only if x ∈ Z(L1 ) and
y ∈ Z(L2 ). Thus Z(L1 ⊕ L2 ) = Z(L1 ) ⊕ Z(L2 ).
It is clear from [(x, y), (a, b)] = ([x, a], [y, b]) that every member of the spanning set of (L1 ⊕ L2)′
lies in L1′ ⊕ L2′ and vice versa, so they are equal.
The generalizations to finitely many summands are obvious and follow by induction.
(iii). Absolutely not! For example consider a 2-dimensional abelian Lie algebra. Then any two
linearly independent vectors {x, y} give a decomposition as Lie algebras into < x > ⊕ < y >.
2.8 Suppose φ : L1 → L2 is onto.
a. Since φ is onto, every element of L2 has the form φ(x), so L2′ is spanned by brackets
[φ(x), φ(y)]. Then:
[φ(x), φ(y)] = φ([x, y]) ∈ φ(L1′),
so L2′ ⊆ φ(L1′). And clearly:
φ([x, y]) = [φ(x), φ(y)] ∈ L2′,
so φ(L1′) ⊆ L2′. Thus φ(L1′) = L2′. TRUE.
b. This is false. For example let L be the two-dimensional non-abelian Lie algebra, so L′ is
one-dimensional, and let φ be the projection L → L/L′. Then L/L′ is one-dimensional, hence
abelian, so its center is all of L/L′ and in particular is one-dimensional. However Z(L) = 0, so
φ(Z(L)) ≠ Z(L/L′). It is true however that φ(Z(L1)) ⊆ Z(L2) whenever φ is onto; the reverse
containment holds if φ is an isomorphism.
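The counterexample can be made concrete; below is a matrix model of the two-dimensional non-abelian Lie algebra (the choice x = E11, y = E12 is my assumption, not from the original text), verifying that L′ is one-dimensional while Z(L) = 0.

```python
import numpy as np

x = np.array([[1.0, 0.0], [0.0, 0.0]])  # E11
y = np.array([[0.0, 1.0], [0.0, 0.0]])  # E12

def br(a, b):
    """Commutator bracket."""
    return a @ b - b @ a

# [x, y] = y, so L' = <y> is one-dimensional (and L is non-abelian).
assert np.allclose(br(x, y), y)

# An element a*x + b*y is central iff [a*x + b*y, x] = [a*x + b*y, y] = 0.
# Stack these linear conditions into one matrix acting on (a, b).
rows = []
for basis in (x, y):
    rows.append(np.column_stack([br(x, basis).ravel(), br(y, basis).ravel()]))
M = np.vstack(rows)  # 8 x 2 system in (a, b)
assert np.linalg.matrix_rank(M) == 2  # only a = b = 0 works, so Z(L) = 0
```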
c. Let h ∈ L1 with ad h diagonalisable. This means there is a basis {x1 , x2 , . . . , xn } of L1
consisting of eigenvectors of ad h, i.e. there are scalars λi with:
ad h(xi ) = [h, xi ] = λi xi .
Now apply φ:
ad φ(h)(φ(xi )) = [φ(h), φ(xi )] = φ([h, xi ]) = λi φ(xi ).
Thus each φ(xi ) is an eigenvector of ad φ(h). Now {φ(xi )} might not be a basis anymore, but it
spans L2 since φ is onto, so (after discarding any zero vectors) it contains a subset which is a
basis of eigenvectors. Thus ad φ(h) is diagonalisable.
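A small concrete illustration of part (c) (my example, not from the original): for a diagonal h in gl(2, R), ad h acts diagonally on the elementary matrices Eij with eigenvalues h_i − h_j, so ad h is diagonalisable.

```python
import numpy as np

h = np.diag([1.0, 2.0])

for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0
        # ad h(E_ij) = [h, E_ij] = (h_i - h_j) E_ij, so each E_ij is an
        # eigenvector of ad h.
        assert np.allclose(h @ E - E @ h, (h[i, i] - h[j, j]) * E)
```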
The algebra in (i) is R3∧ = {x, y, z | [x, y] = z, [y, z] = x, [x, z] = −y}.
Notice that L′ is 3-dimensional, and this algebra is isomorphic to the one in (iv) by the same
argument as in 1.15, just done over R.
Now for the algebra in (ii) let:
x = ( 1 0 ; 0 0 ), y = ( 0 0 ; 0 1 ), z = ( 0 1 ; 0 0 ).
Check by hand that:
[x, y] = 0, [x, z] = z, [y, z] = −z.
Thus L′ = < z > is one-dimensional. A little calculation shows Z(L) = < x + y >; indeed
x + y = Id, which commutes with every matrix.
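These brackets can be verified mechanically (a check I added; note in particular the sign of [y, z] and that x + y is central):

```python
import numpy as np

x = np.array([[1.0, 0.0], [0.0, 0.0]])  # E11
y = np.array([[0.0, 0.0], [0.0, 1.0]])  # E22
z = np.array([[0.0, 1.0], [0.0, 0.0]])  # E12

def br(a, b):
    """Commutator bracket."""
    return a @ b - b @ a

assert np.allclose(br(x, y), np.zeros((2, 2)))      # [x, y] = 0
assert np.allclose(br(x, z), z)                     # [x, z] = z
assert np.allclose(br(y, z), -z)                    # [y, z] = -z
assert np.allclose(br(x + y, z), np.zeros((2, 2)))  # x + y = Id is central
```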
Finally, the algebra in (iii) is the Heisenberg Lie algebra; it also has one-dimensional derived
algebra and center, but they are equal! Thus it is not isomorphic to (ii), where L′ ≠ Z(L).
Neither (ii) nor (iii) is isomorphic to (i), since (i) clearly has a three-dimensional derived algebra.
Conclude: (i) ≅ (iv) is the only isomorphism among the 4 algebras.
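As a final cross-check (my addition), the cross-product relations defining (i) and the corresponding relations among the standard antisymmetric 3 × 3 matrices can both be verified numerically, which is the substance of (i) ≅ (iv):

```python
import numpy as np

# Cross product on R^3: e1 x e2 = e3, cyclically.
e1, e2, e3 = np.eye(3)
assert np.allclose(np.cross(e1, e2), e3)
assert np.allclose(np.cross(e2, e3), e1)
assert np.allclose(np.cross(e3, e1), e2)

# Standard basis of the antisymmetric 3x3 matrices: same structure constants
# under the commutator bracket.
L1 = np.array([[0.0, 0, 0], [0, 0, -1], [0, 1, 0]])
L2 = np.array([[0.0, 0, 1], [0, 0, 0], [-1, 0, 0]])
L3 = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 0]])

def br(a, b):
    return a @ b - b @ a

assert np.allclose(br(L1, L2), L3)
assert np.allclose(br(L2, L3), L1)
assert np.allclose(br(L3, L1), L2)
```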