Lie Groups and Lie Algebras, Summer 2016
July 28, 2016
1 Group Actions
Definition 1.1. A map · : G × X −→ X is a group action if:
• e · x = x for e the identity of G.
• g · (h · x) = gh · x for g, h ∈ G.
Problem 1.2. Let G act on a set X. We define Stab (x) = {g ∈ G|g · x = x}. Show that Stab (x)
is a subgroup of G.
Proof. Let e be the identity of G. Since e · x = x, e ∈ Stab(x). Let a, b ∈ Stab(x). Then
b · x = x, which implies b⁻¹ · x = x, so b⁻¹ ∈ Stab(x). Hence (ab⁻¹) · x = a · (b⁻¹ · x) = a · x = x. So
ab⁻¹ ∈ Stab(x) and Stab(x) is a subgroup of G.
Example 1.3. The following are group actions:
• G acts on itself by left multiplication, g · h = gh.
• G acts on itself by conjugation, g · h = ghg −1 .
• The symmetric group Sn acts on {1, 2, . . . , n} by σ · i = σ(i).
• Let V be a vector space. The general linear group, GL(V ), is the collection of invertible linear
transformations on V . Then GL(V ) acts on V by T · v = T (v).
• Using the usual correspondence between linear transformations on finite dimensional vector
spaces and matrices, the previous action can be rewritten as invertible matrices acting on
vectors ~v .
Problem 1.4. Let G act on a set X. Let S be another set. Let Map(X, S) be the collection of
functions from X to S. Show that G also acts on Map(X, S) by g · f (x) = f (g −1 x).
Proof. Let e ∈ G be the identity. Then e · f (x) = f (e⁻¹x) = f (x), as desired. Let g, h ∈ G and
f ∈ Map(X, S). Note that the action of G on Map(X, S) is equivalent to g · f = f ◦ g⁻¹. Then

g · (h · f ) = g · (f ◦ h⁻¹) = f ◦ h⁻¹ ◦ g⁻¹ = f ◦ (h⁻¹g⁻¹) = f ◦ (gh)⁻¹ = (gh) · f.
Problem 1.5. Let H be a subgroup of G. Show that G acts on the cosets of G/H.
Proof. Let e ∈ G be the identity. Then e(gH) = gH, as desired. Let h, l ∈ G. Then h(lgH) = hlgH =
(hl)gH by associativity in G. Hence G acts on the cosets G/H.
Definition 1.6. Let G act on X. We define the orbit of x to be the set Orb (x) = {g · x|g ∈ G}.
Problem 1.7. Prove the orbit-stabilizer theorem, which states that if G acts on X and we fix
x ∈ X, then | Orb (x)| = |G/ Stab (x)|.
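The theorem is easy to confirm numerically in a small case; here is a sketch of our own (0-indexed) for S4 acting on {0, 1, 2, 3}:

```python
from itertools import permutations

n = 4
G = list(permutations(range(n)))  # S_4 acting on {0, ..., 3} by sigma . i = sigma[i]

for x in range(n):
    orbit = {g[x] for g in G}
    stabilizer = [g for g in G if g[x] == x]
    # orbit-stabilizer: |Orb(x)| * |Stab(x)| = |G|, i.e. |Orb(x)| = |G / Stab(x)|
    assert len(orbit) * len(stabilizer) == len(G)
```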
Definition 1.8. A group action of G on X is transitive if the action only has one orbit.
Problem 1.9. Which of the actions in Example 1.3 are transitive?
• The symmetric group Sn acts transitively on {1, 2, . . . , n}: for any i and j, the transposition
(i j) sends i to j, so there is only one orbit.
• G acting on itself by left multiplication. Let a, b ∈ X = G and consider g = ab−1 . Then,
g · b = (ab−1 )(b) = a, so this is a transitive action.
• G acting on itself by conjugation. The orbits are exactly the conjugacy classes, and the orbit of e is {e}, so this action is transitive only when G is trivial.
• The general linear group GL(V ) acting on V . Every invertible T satisfies T (0) = 0, so {0} is
an orbit and in general the action on V is not transitive. However, the action on V \ {0} is
transitive: any nonzero vector extends to a basis, and a change of basis carries one nonzero
vector to another.
Definition 1.10. A group action of G on X is called free if g · x = x implies that g = e.
Problem 1.11. Show that the action of GL(V ) on V , for V finite dimensional, is not free.
2 Topology Crash Course
Definition 2.1. A topological space is a set X with a collection of subsets T of X such that:
1. ∅ ∈ T and X ∈ T
2. if U1 , U2 ∈ T then U1 ∩ U2 ∈ T
3. if Ui ∈ T for all i in some index set I, then ∪i∈I Ui ∈ T
The collection T is called the topology on X, and the elements of T are called the "open sets"
of the topology. There are MANY different types of topologies; we will cover them as necessary
throughout the reading.
Definition 2.2. A basis for a topology on X is a collection B of subsets such that
1. (Covering Property) Every point of X lies in at least one basis element.
2. (Intersection Property) If B1 , B2 ∈ B and x ∈ B1 ∩ B2 , then there exists a third basis element
B3 such that x ∈ B3 ⊆ B1 ∩ B2
A basis B defines a topology TB by declaring the open sets to be the unions of (arbitrarily
many) basis elements. Often it is enough to consider basis elements rather than arbitrary
neighborhoods.
Definition 2.3. Let X be a topological space and A ⊆ X. We define the subspace topology on A
by saying that V ⊆ A is open if and only if there exists some open U ⊆ X such that U ∩ A = V .
Definition 2.4. Let X and Y be topological spaces. A function f : X −→ Y is said to be
continuous if for every open subset V ⊆ Y , the preimage f −1 (V ) is open in X
An immediate result of the above definitions is that the restriction of a continuous function and
inclusion maps are continuous.
Definition 2.5. A continuous function f : X −→ Y is a homeomorphism if there exists a
continuous function g : Y −→ X so that g ◦ f = idX and f ◦ g = idY
Warning: Clearly a homeomorphism is a continuous bijection, but not every continuous bijection is a homeomorphism. It is very important to check that the inverse is continuous.
Definition 2.6. We say A ⊆ X is path-connected if for every pair a, b ∈ A there is a continuous
function (a path) γ : [0, 1] −→ A such that γ(0) = a and γ(1) = b.
Definition 2.7. We say that a surjective map q : X −→ Y is a quotient map if V ⊂ Y is open
if and only if q −1 (V ) is open in X.
Quotient maps give us a way of building new topological spaces from existing ones. Big examples
are CW-complexes, ∆-complexes, projective spaces, mapping cones/cylinders, and many examples
of topological groups (we may or may not need these later in the reading... I honestly don't know).
3 Topological Groups
Definition 3.1. A topological group is a group that is also a topological space with the following
additional properties:
• The multiplication map m : G × G → G is continuous.
• The inverse map i : G → G given by g ↦ g⁻¹ is continuous.
Problem 3.2. Show that a topological group G is a homogeneous space, meaning that the map
mg : G → G by h 7→ gh is a homeomorphism. This means that a neighborhood of g in a topological
group looks like a neighborhood of the identity, via applying the homeomorphism mg .
Proof. First we show that the map mg : G −→ G given by h 7→ g · h is a continuous map. Notice
that the inclusion map ιg : k 7→ (g, k) is continuous, and the multiplication map mg = m ◦ ιg , where
m is the multiplication G × G −→ G. As for the inverse, consider the map mg−1 : k 7→ g −1 · k.
Then mg ◦ mg−1 is the identity, as well as the other composite.
Problem 3.3. Show that GLn (R) is a topological group. In some sense, this is the most important
topological group, since all matrix groups are subgroups of GLn (R).
Proof. Consider GLn (R) ⊆ R^(n²). One can show that GLn (R) is a group. Consider the multiplication
map · : GLn (R) × GLn (R) → GLn (R) given by (A, B) ↦ AB and the inversion map A ↦ A⁻¹.
The multiplication map is continuous because the entries of AB are polynomial functions of the
entries of A and B. By Cramer's rule, the entries of A⁻¹ are rational functions of the entries of A
with denominator det(A) ≠ 0, so inversion is also continuous. Therefore, GLn (R) is a topological
group.
4 Quaternions
Definition 4.1. The quaternions, denoted H, are a four dimensional associative, non-commutative
algebra on the generators 1, i, j, and k. The generators satisfy the following relationships:
• 1 is the identity.
• i2 = j2 = k2 = −1.
• ij = k, jk = i and ki = j. Flipping the order of multiplication results in a minus sign.
Problem 4.2. An element of H looks like q = a1 + bi + cj + dk. Define the norm of the quaternion
q to be |q|2 = a2 + b2 + c2 + d2 . Show that the norm is multiplicative on H.
Problem 4.3. The quaternion conjugate of q = a1 + bi + cj + dk is q = a1 − bi − cj − dk. Show
that q −1 = q/|q|2 .
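Problems 4.2 and 4.3 can be spot-checked numerically. Below is a minimal sketch of our own (the helpers qmul, qconj, and qnorm2 are ours, encoding q = a + bi + cj + dk as a 4-tuple):

```python
import math

def qmul(p, q):
    # Hamilton product of p = (a, b, c, d) = a + bi + cj + dk with q
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def qnorm2(q):
    return sum(t * t for t in q)

p, q = (1.0, 2.0, -1.0, 0.5), (0.5, -1.0, 3.0, 2.0)
# multiplicativity of the norm: |pq|^2 = |p|^2 |q|^2
assert math.isclose(qnorm2(qmul(p, q)), qnorm2(p) * qnorm2(q))
# q^{-1} = conj(q) / |q|^2
qinv = tuple(t / qnorm2(q) for t in qconj(q))
prod = qmul(q, qinv)
assert all(math.isclose(prod[i], (1, 0, 0, 0)[i], abs_tol=1e-12) for i in range(4))
```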
We think of H as a multiplication structure on R4 . The collection of unit length quaternions we
then think of as S 3 , the 3-sphere.
Problem 4.4. Show that quaternion multiplication makes S 3 into a group. Moreover, this is a
topological group. Also show that multiplication by an element of S 3 is an isometry of R4 . We call
the group of unit quaternions Sp(1), the symplectic group.
Problem 4.5. Show that a purely imaginary unit quaternion, that is, a quaternion of the form
u = bi + cj + dk, satisfies u2 = −1.
Problem 4.6. Show that a unit quaternion q can be written as q = cos (θ) + u sin (θ) for u a purely
imaginary unit quaternion and 0 ≤ θ < 2π.
Problem 4.7. Show that for a fixed unit quaternion t = cos (θ) + u sin (θ) that the map q 7→ t−1 qt
from Ri + Rj + Rk to itself is a bijective isometry. Moreover, show that the line Ru is fixed under
this action. We think of the space Ri + Rj + Rk as R3 .
Problem 4.8. Show that the map of the previous problem is in fact a rotation of R3 by an angle
2θ with axis u. HINT: Choose a vector v orthogonal to u in Ri + Rj + Rk. Then choose w = u × v
and use uv = −u · v + u × v, ( which only holds for purely imaginary quaternions). The set {v, w}
is now an orthonormal basis for the plane perpendicular to Ru in Ri + Rj + Rk, hence you only
need to show you that v 7→ v cos (2θ) − w sin (2θ) and w 7→ v sin (2θ) + w cos (2θ). This is a rotation
because this is what a 2 × 2 rotation matrix for a space with basis v, w looks like in R2 .
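The rotation-by-2θ claim of Problems 4.7 and 4.8 can be checked numerically for the axis u = k, where the hint predicts v = i ↦ cos(2θ) i − sin(2θ) j (a sketch with our own qmul helper; nothing here is from the references):

```python
import math

def qmul(p, q):
    # Hamilton product of 4-tuples (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

theta = 0.7
u = (0.0, 0.0, 0.0, 1.0)                           # axis k, purely imaginary and unit
t = (math.cos(theta), 0.0, 0.0, math.sin(theta))   # t = cos(theta) + k sin(theta)
t_inv = (t[0], 0.0, 0.0, -t[3])                    # t is unit, so t^{-1} = conj(t)

v = (0.0, 1.0, 0.0, 0.0)                           # v = i, orthogonal to the axis
image = qmul(qmul(t_inv, v), t)                    # the map q -> t^{-1} q t

# rotation by 2*theta in the plane spanned by v = i and w = k x i = j
expected = (0.0, math.cos(2 * theta), -math.sin(2 * theta), 0.0)
assert all(math.isclose(image[i], expected[i], abs_tol=1e-12) for i in range(4))

# the axis Ru is fixed
axis_image = qmul(qmul(t_inv, u), t)
assert all(math.isclose(axis_image[i], u[i], abs_tol=1e-12) for i in range(4))
```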
Problem 4.9. Antipodal unit quaternions induce the same map, that is, t and −t induce the same
map on R3 as defined in Problem 4.7. Let the group of rotations of R3 be denoted SO(3). We have
now shown that SO(3) is in 1 − 1 correspondence with antipodal unit quaternions, or antipodal
points on S 3 .
We have now shown the following result:
Proposition 4.10. The map S³ → SO(3) is a 2 to 1 homomorphism.
Proposition 4.11 (Hopf Fibration). There is a map η : S 3 −→ S 2 with fiber S 1 . This map is
called the Hopf Fibration after German topologist Heinz Hopf.
Hint: Recall that elements of SO(3) are given by rotation about an axis u, which we think of as
a purely imaginary quaternion. With this is mind, consider the map S 3 −→ SO(3) −→ S 2 where
the second map is given by sending a rotation about u to u ∈ S². Lastly, since fibers are disjoint,
the Hopf Fibration gives us a decomposition of the 3-sphere into disjoint 1-spheres.
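One standard concrete model of the Hopf map (not necessarily the exact map the hint constructs) is η(q) = q i q̄, which sends a unit quaternion to a purely imaginary unit quaternion. The sketch below, using our own qmul helper, checks numerically that η lands on S² and is constant on circle fibers:

```python
import math

def qmul(p, q):
    # Hamilton product of 4-tuples (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def hopf(q):
    # η(q) = q i q̄ for a unit quaternion q; the image is purely imaginary,
    # giving a point of S^2 inside Ri + Rj + Rk
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0, 1.0, 0.0, 0.0)), qc)

q = (0.5, 0.5, 0.5, 0.5)          # a unit quaternion
p = hopf(q)
assert math.isclose(p[0], 0.0, abs_tol=1e-12)                  # purely imaginary
assert math.isclose(sum(t * t for t in p), 1.0, abs_tol=1e-12)  # lies on S^2

# the fiber: right-multiplying by e^{i phi} = cos(phi) + i sin(phi) does not change η
phi = 1.2
c = (math.cos(phi), math.sin(phi), 0.0, 0.0)
assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(hopf(qmul(q, c)), p))
```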
5 Manifolds
This section is a collection of propositions from Chapter 1 of Frank W. Warner’s “Foundations of
Differentiable Manifolds and Lie Groups”.
I believe the second definition of tangent space, which comes from Warner, will not be as useful
for us, so keep that in mind. Also, Problem 5.19 will be a more useful definition of the differential.
Definition 5.1. A map f : Rk −→ Rl is specified by the coordinate maps fi : Rk −→ R for
each 1 ≤ i ≤ l. This is called the universal property of the product, i.e. a map into a product is
continuous if and only if each coordinate map is continuous.
Definition 5.2. For a multi-index α = (α1 , α2 , . . . , αk ) we denote the partial derivative by

∂^α/∂r^α = ∂^|α| / (∂r_1^{α_1} ∂r_2^{α_2} · · · ∂r_k^{α_k}), where |α| = α_1 + · · · + α_k .

Definition 5.3. A function f : Rᵏ → Rˡ is C^j if the partial derivatives ∂^α f_s /∂r^α exist and are
continuous for each composition α = (α1 , α2 , . . . , αk ) of j (i.e. α_1 + · · · + α_k = j) and each 1 ≤ s ≤ l.
Definition 5.4. A locally Euclidean structure M of dimension d is a Hausdorff topological space
such that each point x ∈ M has a neighborhood U that is homeomorphic to an open subset of Rᵈ.
The map φx taking a neighborhood of x homeomorphically onto an open subset of Rᵈ is called a
coordinate map.
Definition 5.5. A d-dimensional differentiable manifold of class C^k is a locally Euclidean structure
M together with a collection of coordinate maps {(Uα , φα )}α∈A such that
1. ∪α∈A Uα = M .
2. The composition of coordinate maps φα ◦ φβ⁻¹ is C^k for each α, β ∈ A.
3. The space M is second countable, i.e., has a countable topological basis.
Definition 5.6. Let U be an open subset of a smooth differentiable manifold M of dimension d.
We say f : U −→ R is smooth if f ◦ φ−1 is smooth for each coordinate map φ of M . Thus a map
f : U −→ Rk is smooth if the composition ri ◦ f is smooth for each coordinate projection map ri .
Along the same lines as Definition 5.6, we now define what it means for a map between manifolds
to be smooth.
Definition 5.7. A map of manifolds g : M −→ N is smooth if the composition φ◦g◦ψ −1 is smooth,
as a map between Euclidean spaces, for φ and ψ coordinate maps of N and M respectively.
Definition 5.8 (Tangent Space). Let M be a smooth manifold of dimension d. For x ∈ M the
Tangent space at x is defined as follows: Let γ : (−1, 1) −→ M be a smooth map such that γ(0) = x.
The Tangent space is the collection Tx M = {γ}/ ∼ where the paths γ ∼ τ if (φx ◦ γ)′(0) = (φx ◦ τ )′(0)
for a coordinate map φx .
If you skip ahead to Definition 7.1, you will see that the lie algebra of a lie group G is the tangent
space to G at the identity element e ∈ G. The elements of the lie algebra are not equivalence
classes of paths, as in Definition 5.8, but rather the derivatives of the paths at 0. Notice that each
equivalence class {γ} corresponds to a unique tangent vector γ′(0), so Definition 7.1 says that
we can instead let the tangent space be the set of derivatives at 0 of paths through the identity,
since all paths in a given equivalence class have the same derivative at 0.
Problem 5.9. Show that Tx M is independent of the choice of coordinate map at x ∈ M . In
other words, if φx and ψ are two coordinate maps for x, show that if γ ∼ τ with respect to φx
then γ ∼ τ with respect to ψ.
Proof. Let dim(M ) = d. Suppose γ ∼ τ with respect to φx , where φx is a coordinate map for x.
This means that (φx ◦ γ)0 (0) = (φx ◦ τ )0 (0), where both are vectors in Rd . Now let ψ : M −→ Rd
be another coordinate map for x. Recall the chain rule, which states that if g : Rn −→ Rl ,
f : Rl −→ Rm , then Df ◦g (x) = Df (g(x)) ◦ Dg (x), where Dg (x) is the Jacobian matrix of partial
derivatives of g evaluated at x ∈ Rn .
We want γ ∼ τ with respect to the coordinate map ψ : M → Rᵈ. Now ψ ◦ γ = (ψ ◦ φx⁻¹) ◦ (φx ◦ γ),
so

D_{ψ◦γ}(0) = D_{ψ◦φx⁻¹}(φx (γ(0))) ◦ D_{φx◦γ}(0) = D_{ψ◦φx⁻¹}(φx (τ (0))) ◦ D_{φx◦τ}(0) = D_{ψ◦τ}(0),

where the middle equality uses γ(0) = τ (0) = x and (φx ◦ γ)′(0) = (φx ◦ τ )′(0), and we are
finished.
Problem 5.10. Define the map dφx : Tx M → Rᵈ by {γ} ↦ (φx ◦ γ)′(0) for φx a coordinate map
for x. Show that dφx is a bijection, so that we can define Tx M to be a d-dimensional vector space.
Proof. The map dφx is well defined since the equivalence class {γ} consists of all paths γ :
(−1, 1) → M with γ(0) = x and with the same derivative (φx ◦ γ)′(0). The map dφx is injective
since dφx ({γ}) = dφx ({τ }) means (φx ◦ γ)′(0) = (φx ◦ τ )′(0) and hence {γ} = {τ }.
Surjective: Let v ∈ Rᵈ. Consider the path γv : (−1, 1) → M given by γv (t) = φx⁻¹(t · v). Notice
that γv (0) = φx⁻¹(0) = x, and (φx ◦ γv )′(0) = (φx ◦ φx⁻¹(t · v))′(0) = (t · v)′(0) = v.
Now that we have this bijection we can define a vector space structure on Tx M by {γ} + {τ } =
dφx⁻¹(dφx ({γ}) + dφx ({τ })) and scalar multiplication by λ · {γ} = dφx⁻¹(λ · dφx ({γ})).
Definition 5.11. Let f : M −→ N be a smooth map of manifolds. The map df : Tx M −→ Tf (x) N
given by {γ} 7→ {f ◦ γ} is called the differential.
Proposition 5.12. Let f : M −→ N be a smooth map of manifolds. Then the differential df :
Tx M −→ Tf (x) N is a linear map.
Proof. Let x ∈ M with dim(M ) = d, let {γ}, {τ } ∈ Tx M , and suppose that dim N = k. Then
df ({γ} + {τ }) = df (dφx⁻¹(dφx ({γ}) + dφx ({τ }))) = {f ◦ dφx⁻¹(dφx ({γ}) + dφx ({τ }))}. (This is
gross; we need a better proof.)
There is a very easy proof of the linearity of the differential when we consider a map of matrix
lie groups f : G −→ H. Skip to the Lie algebra section for details.
Problem 5.13. We have shown that a map of manifolds f : M → N induces a linear map on
the tangent spaces df : Tx M → Tf (x) N . Show that if Id : M → M , then d Id = Id. Lastly, show
that if f : M → N and g : K → M , then d(f ◦ g) = df ◦ dg. This shows that the construction of
the tangent space is functorial.
Now we give a different definition of the tangent space Tx M .
Definition 5.14 (germ). Let x ∈ M , a d-dimensional manifold. Real valued functions f and g are
said to have the same germ at x if f and g agree on a neighborhood of x. This is an equivalence
relation on C ∞ functions defined near x, with the equivalence classes called germs.
Definition 5.15. Let F̃x be the collection of germs at x. Let Fx ⊆ F̃x be the collection of germs
which vanish at x.
Definition 5.16. A tangent vector v to x ∈ M is a linear derivation on the algebra of germs F̃x ,
that is, v : F̃x → R such that
1. v(f + λg) = v(f ) + λv(g).
2. v(f · g) = f (x)v(g) + g(x)v(f ).
We call the collection of derivations to x the tangent space Tx M . By defining addition and scalar
multiplication of tangent vectors in the obvious way, this once again shows that Tx M is a real vector
space.
Problem 5.17. Let γ ∈ Tx M as in Definition 5.8. Show that γ defines a derivation, à la Definition 5.16, by γ(f ) = (f ◦ γ)′(0).
Definition 5.18 (Differential). Let M and N be manifolds and ψ : M −→ N be smooth. Then
there is an induced map between the tangent spaces Tx M and Tψ(x) N , called the differential dψ,
given by
dψ(v)(g) = v(g ◦ ψ).
Problem 5.19. Give an alternate definition of the differential using the curve approach of Definition 5.8.
Proof. Let M and N be smooth manifolds and f : M → N be smooth. There is an induced map
between the tangent spaces Tx M and Tf (x) N . Let ψM : M → Rᵈ and ψN : N → Rᵏ be charts for
x and f (x), respectively. We define df by (ψM ◦ γ)′(0) ↦ (ψN ◦ f ◦ γ)′(0).
We have seen that the tangent space is a vector space with γ + τ := Φ⁻¹(Φγ + Φτ ) (writing Φ
for a coordinate map at x), which provides (Φ(γ + τ ))′(0) = (Φ(Φ⁻¹(Φγ + Φτ )))′(0) = (Φγ + Φτ )′(0) =
(Φγ)′(0) + (Φτ )′(0). Also, for λ ∈ R, λγ := Φ⁻¹(λΦγ), which provides (Φ(λγ))′(0) = (ΦΦ⁻¹(λΦγ))′(0) =
(λΦγ)′(0) = λ(Φγ)′(0).
Since f is a smooth map between manifolds and we want df to be a map from tangent
space to tangent space (i.e. vector space to vector space), we must also show df is a linear map.
We demonstrate the scalar multiplication property to show an interesting technique (writing Φ
for a chart at x and ψ for a chart at f (x)):
df (λ · (Φγ)′(0)) = df ((Φ(λγ))′(0)) = (ψf λγ)′(0) = (ψf Φ⁻¹λΦγ)′(0). We now use the Jacobian
notation to rewrite the last line as D0 (ψf Φ⁻¹λΦγ). Now we apply the chain rule to the composition:
df (λ · γ) = D0 (ψf Φ⁻¹λΦγ) = D_{Φγ(0)}(ψf Φ⁻¹λ) · D0 (Φγ)
= λ · D_{Φγ(0)}(ψf Φ⁻¹) · D0 (Φγ)
= λ · D0 (ψf Φ⁻¹Φγ)
= λ · D0 (ψf γ)
= λ · df (γ)
To show that the differential df is linear as a map between matrix Lie Groups is much easier.
This is because a smooth map of Lie groups must preserve the group structure, so it must be a
homomorphism.
6 Lie Groups
Now I would like to use [4] Chapters 2 − 7, in conjunction with Warner’s text, to give us a good
introduction to Lie groups and Lie algebras.
Definition 6.1 (Lie Group). A Lie Group is a C ∞ manifold that is also a topological group with
differentiable multiplication and inverse maps.
Example 6.2. The following are Lie groups. Let K be a field.
• GLn (K) = {A ∈ Mn (K) | ∃B ∈ Mn (K) such that AB = BA = I} is the general linear
group.
– invertible matrices
– If A ∈ GLn (K), then det(A) ≠ 0
• On (R) = {A ∈ GLn (R) | ⟨X · A, Y · A⟩ = ⟨X, Y ⟩ for all X, Y ∈ Rⁿ }, the collection of real
orthogonal matrices.
– If A is orthogonal then AAᵀ = I.
– If A ∈ On (R), then det(A) = ±1. This can be seen as
1 = det(I) = det(AAᵀ) = det(A) det(Aᵀ) = (det(A))²
• U (n) is called the unitary group, the collection of n × n complex unitary matrices; its lie
algebra u(n) consists of the skew-hermitian matrices.
– A matrix is skew-hermitian if its conjugate transpose is equal to its negative. Example:
[ −i  2 + i ; −(2 − i)  0 ]
– A is unitary if and only if AA∗ = I where A∗ is the conjugate transpose of A. In this
case, A∗ is also its inverse.
– U (1) is the unit circle, the set of all complex numbers with norm 1.
– dimension of U (n) = n² (as a real manifold).
– If A ∈ U (n) then | det(A)| = 1:
1 = det(I) = det(AA∗ ) = det(A) det(A∗ ) = det(A) · conj(det(A)) = | det(A)|²
– If A ∈ U (n), then det(A) = eiθ for some θ ∈ [0, 2π). This follows from the previous
result.
• SLn (K) = {A ∈ GLn (K) | det(A) = 1} is the special linear group.
• SOn (R) = {A ∈ O(n) | det(A) = 1}, is the special orthogonal group.
• SU (n) = {A ∈ U (n) | det(A) = 1}, is the special unitary group. Example:
SU (2) = { [ α  −β ; β̄  ᾱ ] : α, β ∈ C, |α|² + |β|² = 1 }
• Sp(n) is the set of 2n × 2n symplectic matrices which have determinant 1. Or we can think
of them as n × n quaternion matrices A for which C(A), the complex form of A, is unitary.
Note, the complex form of Sp(n) is a subgroup of U (2n). Example:
Sp(1) = { A = [ a + id  −b − ic ; b − ic  a − id ] : a² + b² + c² + d² = 1 } = SU (2).
Here we see that A is the complex form of a quaternion matrix in Sp(1). If we let α = a + di
and β = b + ci for α, β ∈ C, we can write A as follows:
A = [ α  −β ; β̄  ᾱ ]
Then we can see that
Ā = [ ᾱ  −β̄ ; β  α ]
Additional Groups
• Isometry group of Euclidean space
– A function f : Rn → Rn is called an isometry if for all X, Y ∈ Rn , dist(f (X), f (Y )) =
dist(X, Y ).
– We can think of these as rotations and translations on an object.
– Isom(Rn ) = {f : Rn → Rn | f is an isometry }
– Isom(Rⁿ) ≅ { [ A  0 ; V  1 ] | A ∈ O(n), V ∈ Rⁿ }
– Trans(Rⁿ) ≅ { [ I  0 ; V  1 ] | V ∈ Rⁿ }
• The symmetry group of a subset X ⊂ Rn is the group of all isometries of Rn that carry X
onto itself.
– Symm(X) = {f ∈ Isom(Rn ) | f (X) = X}.
– Symm(S^(n−1)) = { [ A  0 ; V  1 ] | A ∈ O(n), V = (0, 0, . . . , 0) } ≅ O(n).
Proposition 6.3.
1. If A ∈ O(n), then the linear map A induces on Rn is an isometry.
2. If f : Rn → Rn is an isometry with f (0) = 0 , then the matrix of f is in O(n). In particular,
f is linear.
It may be helpful to prove some equivalent notions of the orthogonal group, see Chapter 3 of
Tapp, or sections 3.1 − 3.4 of Stillwell.
7 Lie Algebras
Most of this section comes from Chapter 5 of [5], so this is the best reference to follow along. You
may follow Chapter 5 of [4], but many of his results use the matrix exponential, which we haven’t
done yet. Many of the problems come from Chapter 3 of [3].
Definition 7.1. Let G be a Lie group. The lie algebra, denoted g, is the tangent space at the
identity e of G. Equivalently, g = {γ′(0) : γ : (−1, 1) → G, γ(0) = e}. The dimension of a lie
group G is dim(g) over R.
Quick notational note: the lie algebra of a lie group G is always denoted g. The font g is called
“fraktur”.
Definition 5.7 tells us what it means for a map between manifolds to be smooth. In most of this
section our manifolds will be matrix groups, that is, subgroups of GLn (R). We think of a matrix
group as living in R^(n²), viewing its entries as coordinates in a vector. Therefore, if G is a matrix
group, a path γ : (−1, 1) → G is a map from R into R^(n²) and we differentiate it by taking the
derivative of each of the coordinate maps. Each element of the lie algebra g is then an element
of Mn (R).
We know from Problem 5.10 that the tangent space of a manifold is a vector space. We will
make this more explicit in the following problems.
Example 7.2 (Lie Algebra of S 1 ). In this example we compute the lie algebra u(1), the lie algebra
of the circle. Recall that the lie group U (1) is the collection of 1 × 1 unitary matrices, which is the
circle S 1 ⊂ C.
The lie algebra u(1) = {γ′(0) : γ(0) = 1, γ : (−1, 1) → S¹}. Since S¹ is a one dimensional
manifold, we know that the tangent space u(1) is one dimensional. Consider the path γa (t) = e^(iat)
in S¹. Notice that γa (0) = 1, so γa′(0) ∈ u(1), where γa′(t) = (−a sin (at), a cos (at)), so γa′(0) = (0, a).
Therefore, the tangent space T1 S¹ is given by T1 S¹ = {(0, a) : a ∈ R} ≅ R.
Now consider the map f : S¹ → S¹ given by z ↦ z², or e^(iθ) ↦ e^(2iθ). Now we compute the
differential df . Let γa′(0) ∈ u(1). By definition df (γa′(0)) = (f ◦ γa )′(0) = ((e^(iat))²)′(0) = (e^(2iat))′(0) =
2ia · e^(2ia·0) = 2ia = (0, 2a). Therefore we have shown that df (0, a) = (0, 2a), which as a map from R
to R is the linear map x ↦ 2x.
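The computation above can be replicated with finite differences (a sketch of our own; deriv_at_0 is our helper, and we identify C with R²):

```python
import cmath

# paths gamma_a(t) = exp(i a t) through 1 in S^1, and the map f(z) = z^2
def gamma(a, t):
    return cmath.exp(1j * a * t)

def deriv_at_0(path, h=1e-6):
    # central-difference approximation of path'(0)
    return (path(h) - path(-h)) / (2 * h)

a = 3.0
v = deriv_at_0(lambda t: gamma(a, t))        # ≈ i a, i.e. the vector (0, a)
w = deriv_at_0(lambda t: gamma(a, t) ** 2)   # df(v) ≈ 2 i a, i.e. (0, 2a)
assert abs(v - 1j * a) < 1e-6
assert abs(w - 2j * a) < 1e-6                # df doubles tangent vectors
```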
Problem 7.3. Show that the zero matrix is in the lie algebra g for all lie groups G.
Proof. Consider the smooth map γ into G given by γ(x) = e for all x ∈ (−1, 1). Then surely
γ(0) = e, and γ′(x) = 0 for all x, so γ′(0) = 0. Thus the zero matrix is an element of the lie algebra g.
Problem 7.4 (product rule for paths). Let γ, τ be smooth paths into a lie group G satisfying
γ(0) = τ (0) = x. Show that we have the product rule, namely, if p(t) = γ(t) · τ (t), then p′(0) =
γ′(0) · τ (0) + γ(0) · τ′(0). Note that it follows that when γ′(0) and τ′(0) are elements of a lie algebra,
p′(0) = γ′(0) + τ′(0).
Proof. Write γ(t) and τ (t) as matrices with entries γ_ij (t) and τ_ij (t). We can therefore think of
p(t) as the product of these two matrices. Since differentiation is defined entrywise in this case, it
suffices to show the product rule for an arbitrary entry of the matrix p(t):

p_ij (t) = Σ_{k=1}^{n} γ_ik (t) · τ_kj (t) =⇒ p′_ij (t) = Σ_{k=1}^{n} ( γ′_ik (t) · τ_kj (t) + γ_ik (t) · τ′_kj (t) )
= Σ_{k=1}^{n} γ′_ik (t) · τ_kj (t) + Σ_{k=1}^{n} γ_ik (t) · τ′_kj (t) = (γ′(t) · τ (t))_ij + (γ(t) · τ′(t))_ij .

Applying this result to the whole matrix p(t), then, it follows that p′(0) = γ′(0) · τ (0) + γ(0) · τ′(0).
In particular, when γ′(0) and τ′(0) are elements of a lie algebra, γ(0) = τ (0) = I, so we have
p′(0) = γ′(0) + τ′(0).
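A numerical spot check of the product rule for two explicit matrix paths with γ(0) = τ(0) = I (our own sketch, using central differences; the helper d0 is ours):

```python
import numpy as np

# two smooth matrix paths into GL_2(R) with gamma(0) = tau(0) = I
def gamma(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def tau(t):
    return np.array([[np.exp(t), 0.0], [t, 1.0]])

def d0(path, h=1e-6):
    # central-difference derivative of a matrix path at t = 0
    return (path(h) - path(-h)) / (2 * h)

p = lambda t: gamma(t) @ tau(t)
lhs = d0(p)
rhs = d0(gamma) @ tau(0.0) + gamma(0.0) @ d0(tau)
assert np.allclose(lhs, rhs, atol=1e-6)
# since gamma(0) = tau(0) = I, this is just the sum of the two tangent vectors
assert np.allclose(lhs, d0(gamma) + d0(tau), atol=1e-6)
```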
Problem 7.5. Suppose that G is a matrix lie group. Show that g is closed under matrix addition
and scalar multiplication. This shows (again) that the lie algebra g is a vector space.
Problem 7.6. Let F : G1 −→ G2 be a smooth map of matrix lie groups. Show that dF is linear.
Problem 7.7 (Problem 3.3.2 of [3]). Let F : Rⁿ → Rᵐ be a smooth map. Let p ∈ Rⁿ. Suppose
γ : (−1, 1) → Rⁿ is a path with γ(0) = p. Show that dF (γ′(0)) = Dp F · γ′(0), for Dp F the
Jacobian matrix of partial derivatives of F at p. This shows that the differential of a smooth map
between Euclidean spaces is given by the Jacobian. You will need the chain rule.
Proof. Let γ : (−1, 1) → Rⁿ be a path with γ(0) = p and let F : Rⁿ → Rᵐ be smooth. Consider
the element γ′(0) ∈ Tp (Rⁿ). By definition,
dF (γ′(0)) = (F ◦ γ)′(0) = D_{γ(0)} F · γ′(0) = Dp F · γ′(0)
Since matrix lie groups are topologized as subspaces of Euclidean space, Problem 7.7 shows
that the differential of a map of matrix lie groups is given by the Jacobian.
Problem 7.8 (Prop. 5.7 of [5]). Let G = U (1), the group of 1 by 1 unitary matrices with
complex entries. Since 1 by 1 unitary matrices are exactly the complex numbers of norm 1, this
is just S¹ ⊂ C. Consider the path γ : (−1, 1) → S¹ by γ(t) = e^(it) = cos (t) + i sin (t). Show that
γ(0) = 1 so that γ′(0) ∈ u(1).
Problem 7.9 (Problem 3.1.2 of [3]). Let t ∈ R. Define γ(t) = [ cos (t)  − sin (t)  0 ; sin (t)  cos (t)  0 ; 0  0  1 ].
1. Describe what γ(t) does to R³.
2. Show that γ(t) ∈ O(3), 3 by 3 real orthogonal matrices.
3. Compute γ′(0). This is an element of the lie algebra o(3).
Problem 7.10 (Problem 3.1.3 of [3]).
1. Let σ(t) = γ(t)², for γ in the previous problem. Show that σ is a smooth map into O(3) with
σ(0) = I3 .
2. What is the relationship between γ′(0) and σ′(0)?
Proof.
1. We begin by writing
σ(t) = γ(t)² = [ cos(2t)  − sin(2t)  0 ; sin(2t)  cos(2t)  0 ; 0  0  1 ].
It is then easy to show that the columns of this matrix are orthonormal. Also,
det(σ(t)) = det(γ(t)²) = det(γ(t))² = 1,
so σ is indeed a map into O(3). Since differentiation works entrywise and each entry of σ(t)
is clearly differentiable, σ is smooth. It is easy to check by plugging in 0 for t that σ(0) = I3 .
2. Differentiating entrywise, we see that
σ′(t) = [ −2 sin(2t)  −2 cos(2t)  0 ; 2 cos(2t)  −2 sin(2t)  0 ; 0  0  0 ],
and evaluating at 0, we have
σ′(0) = [ 0  −2  0 ; 2  0  0 ; 0  0  0 ] = 2γ′(0).
Problem 7.11 (Problem 3.1.12 of [3]). In this problem we'll compute the lie algebra o(n).
1. Let γ(t) be a path in the orthogonal group. Since orthogonal matrices satisfy A · Aᵀ = In , use
the product rule to differentiate both sides of γ(t) · γ(t)ᵀ = In . Then show that (γᵀ)′ = (γ′)ᵀ
to show that γ′(0) is a skew-symmetric real matrix.
2. In part 1 you established that o(n) is contained in the skew symmetric matrices. Now we
show that o(n) is in fact equal to the skew symmetric matrices. To do this, find a basis for the skew
symmetric matrices. Next, show there is a path γ into O(n) so that γ′(0) equals a given basis
element. For a big hint (full solution), see Tapp Theorem 5.12.
3. Now that you have shown that the lie algebra o(n) is skew symmetric matrices, what is
dim(o(n))?
Proof.
1. Since γ(t) is a path in the orthogonal group, we have that γ(t) · γ(t)ᵀ = In . Differentiating
both sides entrywise, we have that
((γ(t) · γ(t)ᵀ)_ij)′ = ((In )_ij)′ = 0
and hence
0 = ((γ(t) · γ(t)ᵀ)_ij)′ = ( Σ_{k=1}^{n} γ(t)_ik · (γ(t)ᵀ)_kj )′ = ( Σ_{k=1}^{n} γ(t)_ik · γ(t)_jk )′.
Because γ(0) = In , all terms vanish when we evaluate at 0 except k = i or k = j. Thus
0 = Σ_{k=1}^{n} ( γ(0)_ik · γ′(0)_jk + γ′(0)_ik · γ(0)_jk ) = γ′(0)_ji + γ′(0)_ij
for all i and j, so γ′(0) is a skew-symmetric real matrix.
2. A basis for the skew-symmetric matrices is {M_{i,j} : 1 ≤ i < j ≤ n}, where
(M_{i,j})_lm = 1 if l = i, m = j;  −1 if l = j, m = i;  0 otherwise.
For 1 ≤ i < j ≤ n, let γ_{i,j}(t) be the matrix given by
(γ_{i,j}(t))_lm = 1 if l = m with l ∉ {i, j};  cos(t) if l = m = i or l = m = j;  − sin(t) if l = i, m = j;  sin(t) if l = j, m = i;  0 otherwise.
Then γ_{i,j}(t) ∈ O(n) by the computations in Problem 7.9, γ_{i,j}(0) = In , and γ′_{i,j}(0) = M_{i,j} . Hence
the basis elements M_{i,j} are in the Lie algebra o(n), so we conclude that o(n) is the skew-symmetric
real matrices.
3. We have that dim(o(n)) = |{M_{i,j} : 1 ≤ i < j ≤ n}| = (n choose 2) = n(n − 1)/2.
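The whole computation can be verified numerically for, say, n = 4: each path is orthogonal, each derivative at 0 is skew-symmetric, and the n(n − 1)/2 resulting matrices are linearly independent. A sketch of our own (gamma is our encoding of the paths above):

```python
import numpy as np

n = 4

# gamma_{i,j}(t): rotation by t in the (i, j) coordinate plane, identity elsewhere
def gamma(i, j, t):
    g = np.eye(n)
    g[i, i] = g[j, j] = np.cos(t)
    g[i, j] = -np.sin(t)
    g[j, i] = np.sin(t)
    return g

h = 1e-6
basis = []
for i in range(n):
    for j in range(i + 1, n):
        g = lambda t, i=i, j=j: gamma(i, j, t)
        assert np.allclose(g(0.3) @ g(0.3).T, np.eye(n))   # a path in O(n)
        M = (g(h) - g(-h)) / (2 * h)                       # ≈ gamma'_{i,j}(0) = M_{i,j}
        assert np.allclose(M, -M.T, atol=1e-6)             # skew-symmetric
        basis.append(M.reshape(-1))

# dim o(n) = n(n-1)/2: the M_{i,j} are linearly independent
assert np.linalg.matrix_rank(np.stack(basis)) == n * (n - 1) // 2
```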
Problem 7.12. Let G = GLn (R). Compute g.
Proof. Define Mij as the n × n matrix that is zero everywhere but the ij-th slot, where it is 1.
Define γij (t) = Idn + t · Mij . Now note that the determinant is a continuous map and det(Idn ) = 1,
so for ε > 0 there exists δ > 0 such that |t| < δ implies | det(γij (t)) − 1| < ε; in particular
det(γij (t)) ≠ 0. Therefore, γij is a path on (−δ, δ) into GLn (R) such that γij (0) = Idn , so γ′ij (0) = Mij
is in the Lie algebra. Since there exists such a path for each pair 1 ≤ i, j ≤ n, and the Mij 's form
the standard basis of Mn (R) ≅ R^(n²), we conclude that g = Mn (R).
Problem 7.13 (Exercise 5.10 of [5]). Let G be a matrix lie group. Show that for A ∈ G, TA G =
{BA : B ∈ g}.
Proof. First note that since matrix Lie groups are topological groups, they are homogeneous;
that is, the tangent space at any point is isomorphic to the tangent space at any other point. In
particular, we know that the tangent spaces have the same dimension.
Define φ : g → TA G by
φ(γ′(0)) = (γ(t) · A)′(0).
Since the product rule holds and A is constant, we have that
(γ(t) · A)′(0) = γ′(0) · A + γ(0) · 0 = γ′(0) · A.
So φ(γ′(0)) = γ′(0) · A. To see this map is one to one, note that φ(γ′(0)) = φ(τ′(0)) implies
γ′(0) · A = τ′(0) · A. Since A is invertible, we have right cancellation, so γ′(0) = τ′(0). Note
that a one to one map between vector spaces of equal finite dimension is onto (which is quickly
verified by the rank-nullity theorem). Also note that matrix multiplication respects addition and
scalar multiplication, so we have that φ is a linear isomorphism, proving the claim.
Problem 7.14. Let F : G1 −→ G2 be an isomorphism of matrix Lie Groups. Show that dF :
g1 −→ g2 is an isomorphism of lie algebras. You will need to use functoriality of the differential,
namely that d(F ◦ G) = d(F ) ◦ d(G).
Proof. The functoriality of the differential tells us that the differential preserves function composition and maps identity maps to identity maps. With that in hand it is easy to show the above,
since if we have F : G1 → G2 and F⁻¹ : G2 → G1 , then we have that
F⁻¹ ◦ F = IdG1
d(F⁻¹ ◦ F ) = d(IdG1 ).
Using the functoriality of the differential, we can break the composition up. Additionally,
d(IdG1 ) = Idg1 . This gives us
dF⁻¹ ◦ dF = Idg1
as desired. The reverse composition follows in the same way; therefore dF is an isomorphism of
Lie algebras.
8
The Matrix Exponential
Let’s read Chapter 6 of [5] or Chapter 4 of [4]. I think that [5] is the better reference.
Definition 8.1. Let A ∈ Mn (R). Define e^A as In + A + (1/2!)A^2 + · · · + (1/k!)A^k + · · · .
For convergence issues, please see Tapp Chapter 6. Just like the exponential on R, the radius
of convergence of exp is ∞.
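As a quick numerical illustration (a sketch of my own, not from the text, assuming NumPy and SciPy are available), one can sum the truncated series directly and compare it against SciPy's built-in `expm`:

```python
# A minimal sketch: approximate e^A by the truncated series
# I + A + A^2/2! + ... and compare with SciPy's expm.
import numpy as np
from scipy.linalg import expm

def exp_series(A, terms=30):
    """Sum the first `terms` terms of the matrix exponential series."""
    result = np.eye(A.shape[0])
    power = np.eye(A.shape[0])
    for k in range(1, terms):
        power = power @ A / k        # builds A^k / k! incrementally
        result = result + power
    return result

A = np.array([[1.0, 2.0], [0.0, -1.0]])
assert np.allclose(exp_series(A), expm(A))
```

The infinite radius of convergence is what makes a fixed truncation work so well for matrices of moderate norm.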
Problem 8.2. Consider the matrix A = ( 1 0 ; 0 −1 ).
• Compute exp(t · A) =: γ(t).
• Compute γ 0 (0).
• Show that det(exp(t · A)) = exp(Tr(t · A)).
Proof. First, note that A^n = ( 1 0 ; 0 (−1)^n ). This gives that

exp(t · A) = γ(t) = I + tA + t^2 A^2 /2! + · · · = ( Σ_{n≥0} t^n /n!  0 ; 0  Σ_{n≥0} (−t)^n /n! ) = ( e^t 0 ; 0 e^{−t} ).

Now differentiating entrywise and evaluating at 0, we have

γ ′ (0) = ( e^0 0 ; 0 −e^0 ) = A.

Finally, det(exp(t · A)) = e^t · e^{−t} = e^0 = exp(0) = exp(Tr(t · A)), since Tr(t · A) = 0.
Problem 8.3. Repeat the previous problem with the matrix A = ( 0 −1 ; 1 0 ).

Proof. We see that A^2 = −I2 , A^3 = −A, and A^4 = I2 . So

exp(t · A) = γ(t) = I + tA + t^2 A^2 /2! + · · ·
= ( Σ_{n≥0} (−1)^n t^{2n} /(2n)!   Σ_{n≥0} (−1)^{n+1} t^{2n+1} /(2n+1)! ; Σ_{n≥0} (−1)^n t^{2n+1} /(2n+1)!   Σ_{n≥0} (−1)^n t^{2n} /(2n)! )
= ( cos(t) −sin(t) ; sin(t) cos(t) ).

Since γ(t) = e^{t·A} , we have that γ ′ (t) = A · e^{t·A} . Therefore γ ′ (0) = A · e^0 = A. Finally, we see that det(exp(t · A)) = cos^2 (t) + sin^2 (t) = 1 = exp(0) = exp(Tr(t · A)).
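This computation is easy to sanity-check numerically; a sketch of mine (not part of the text) comparing `expm(t·A)` against the rotation matrix through angle t:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, -1.0], [1.0, 0.0]])   # the skew-symmetric matrix above
t = 0.7                                   # an arbitrary test angle
rotation = np.array([[np.cos(t), -np.sin(t)],
                     [np.sin(t),  np.cos(t)]])
assert np.allclose(expm(t * A), rotation)
# det(exp(tA)) = 1 = exp(Tr(tA)), since Tr(tA) = 0
assert np.isclose(np.linalg.det(expm(t * A)), 1.0)
```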
Problem 8.4. Let A, B ∈ Mn (R) be such that AB = BA. Show that e^{A+B} = e^A · e^B .
Problem 8.5 (Problem 4.4.1 of [3]). In this problem we prove some basic properties of the matrix
exponential.
• Let 0n be the n × n zero matrix. Then e0n = Idn .
• Show that eA · e−A = Idn . Therefore, what is (eA )−1 ?
• Show that (An )T = (AT )n , for A ∈ Mn (R).
• Show that e^{A^T} = (e^A )^T .
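All of these properties are easy to confirm numerically before proving them; a hedged sketch using SciPy's `expm` (the test matrix `A` is an arbitrary choice of mine):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.5, 1.0], [-2.0, 0.25]])

assert np.allclose(expm(np.zeros((3, 3))), np.eye(3))      # e^0 = Id
assert np.allclose(expm(A) @ expm(-A), np.eye(2))          # so (e^A)^{-1} = e^{-A}
assert np.allclose(np.linalg.matrix_power(A.T, 5),
                   np.linalg.matrix_power(A, 5).T)         # (A^n)^T = (A^T)^n
assert np.allclose(expm(A.T), expm(A).T)                   # e^{A^T} = (e^A)^T
```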
Problem 8.6 (Problem 4.4.2 of [3]). In this problem we show that it is easy to compute the
exponential of a diagonalizable matrix. Let A ∈ Mn (R) and P ∈ GLn (R).
• Show that (P AP −1 )m = P Am P −1 .
Proof. When m=1, we see that (P AP −1 )1 = P A1 P −1 . Now, assume (P AP −1 )m = P Am P −1 .
Then
(P AP −1 )m+1 = (P AP −1 )m · (P AP −1 ) = (P Am P −1 )(P AP −1 ) = P Am AP −1 = P Am+1 P −1 .
• Show that exp(P AP −1 ) = P exp(A)P −1 .
• Let D ∈ Mn (R) be diagonal. Compute eD .
• Suppose A is diagonalizable. How is eA related to the eigenvalues of A?
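The bullets above can be explored numerically; a sketch of mine (the matrices `A` and `P` are arbitrary choices) showing that conjugation passes through the exponential and that a diagonal matrix exponentiates entrywise:

```python
import numpy as np
from scipy.linalg import expm

# Conjugation passes through the exponential: exp(P A P^{-1}) = P exp(A) P^{-1}.
A = np.diag([1.0, 2.0, -1.0])
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # an invertible matrix (det = 2)
Pinv = np.linalg.inv(P)

assert np.allclose(expm(P @ A @ Pinv), P @ expm(A) @ Pinv)
# For diagonal D, e^D just exponentiates the diagonal entries (the eigenvalues):
assert np.allclose(expm(A), np.diag(np.exp([1.0, 2.0, -1.0])))
```

So for diagonalizable A = PDP^{-1}, the eigenvalues of e^A are the exponentials of the eigenvalues of A.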
Problem 8.7 (Problem 4.4.3 of [3]). Fix A ∈ Mn (R). Define γ : (−1, 1) −→ GLn (R) by γ(t) = e^{A·t} . Show that γ ′ (t) = A · e^{A·t} , and in particular γ ′ (0) = A.
Definition 8.8 ([5] Definition 6.16). A one parameter subgroup in a matrix group G is a smooth
homomorphism γ : (R, +) −→ G.
Proposition 6.17 of Tapp shows that the one parameter subgroups of GLn (R) consist precisely of the maps γ : R −→ GLn (R) of the form γ(t) = e^{At} for some A ∈ gln (R).
Problem 8.9 (Problem 4.4.5 of [3]). Let γ : R −→ GLn (R) be a one parameter subgroup. Can γ
cross itself? In other words, is γ necessarily injective? Hint: Consider the matrix of Problem 8.3.
We will now prove a result that will be helpful in the representation theory of lie algebras. This
result tells us that the exponential map behaves nicely with respect to the differential.
Proposition 8.10. Let F : G −→ H be a smooth map of matrix lie groups. Then the following diagram commutes:

        F
    G -----> H
    ^        ^
exp |        | exp
    g -----> h
        dF
Proof. Construct maps σ : R −→ H and τ : R −→ H by σ(t) = F (exp(tX)) and τ (t) = exp(t ·
dF (X)). First, note that σ and τ traverse the above diagram in opposite directions. Next, note
that both σ and τ are paths in H that are also homomorphisms since F is a group homomorphism
and dF is linear. Therefore, both σ and τ are one parameter subgroups. As the comment following
Definition 8.8 explains, all one parameter subgroups are given by an exponential eAt for some
matrix A. Using this theorem tells us that σ(t) = eAt and τ (t) = eBt for some matrices A and B.
Since σ 0 (0) = A and τ 0 (0) = B, we need only show that σ 0 (0) = τ 0 (0) to establish that A = B.
By the chain rule, σ ′ (0) = dF_{exp(0)} ((exp(tX))′ (0)). Notice that exp(0) = Id, and the derivative of F at the identity is the differential dF , see Problem 7.7. Putting it all together we have

σ ′ (0) = dF ((exp(tX))′ (0)) = dF (X) = τ ′ (0)
Therefore σ(t) = τ (t) for all t and the diagram commutes.
9
Lie Algebras u(n), su(n), and sl(n)
In problem 7.11 we computed the lie algebra o(n) of the orthogonal group O(n). We will now
compute the lie algebras sl(n), u(n), and su(n) using the matrix exponential. As an introduction,
let’s reprove that o(n) is given by skew symmetric matrices.
We will prove 9.1 in the series of exercises that follow it.
Proposition 9.1. The lie algebra o(n) is given by the set of all skew symmetric matrices.
Problem 9.2. Let A be a real skew symmetric matrix, so that A = −AT . Show that eA is
orthogonal.
Proof. Since we know that A = −A^T , by Problem 8.5 we have that

e^A · (e^A )^T = e^A · e^{A^T} = e^A · e^{−A} = e^{A−A} = e^{0n} = Id,

thus it follows that e^A is orthogonal.
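A numerical spot check of this fact (a sketch of mine, not part of the text): exponentiate a random skew-symmetric matrix and verify orthogonality.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                       # A is skew-symmetric: A^T = -A
Q = expm(A)
assert np.allclose(Q @ Q.T, np.eye(4))   # e^A is orthogonal
```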
Problem 9.3. Consider the path γ(t) = eA·t for A skew symmetric. Show that γ 0 (0) = A.
Proof. Let γ(t) = e^{A·t} = Id + At + A^2 t^2 /2! + A^3 t^3 /3! + · · · ; then differentiating entrywise, we have that γ ′ (t) = 0 + A + A^2 t + A^3 t^2 /2! + · · · = Ae^{At} , so γ ′ (0) = Ae^{0n} = A.
Problem 9.4. Conclude Proposition 9.1 using the previous exercises and part 1 of Problem 7.11.
Proof. Problem 7.11.1 gives that γ ′ (0) is skew-symmetric for any path γ in O(n) with γ(0) = Id, thus o(n) ⊆ {skew-symmetric matrices}. Conversely, for any skew-symmetric matrix A, the previous two problems give that e^A is orthogonal and that for γ(t) = e^{At} we have γ ′ (0) = A, thus A ∈ o(n), and the conclusion follows.
Problem 9.5. Compute dim o(n).
Proof. By the previous problem, to find dim o(n), it suffices to find the dimension of the space of skew-symmetric matrices. The diagonal entries of a real skew-symmetric matrix must be 0, and the entries above the diagonal are determined (with a sign change) by those below. Thus we can make a basis for the skew-symmetric matrices from all matrices A where aij = 1 for some i < j, aji = −1, and all other entries are 0. The size of this basis is the number of entries below the diagonal of an n × n matrix, that is, dim o(n) = n(n − 1)/2.
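The counting argument can be mirrored in code; a sketch (the function name `skew_basis` is my own) that builds the basis just described and confirms the count n(n − 1)/2:

```python
import numpy as np

def skew_basis(n):
    """Basis matrices A with a_ij = 1, a_ji = -1 for each i < j, zeros elsewhere."""
    basis = []
    for i in range(n):
        for j in range(i + 1, n):
            A = np.zeros((n, n))
            A[i, j], A[j, i] = 1.0, -1.0
            basis.append(A)
    return basis

n = 5
assert len(skew_basis(n)) == n * (n - 1) // 2          # dim o(n)
assert all(np.allclose(A.T, -A) for A in skew_basis(n))
```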
Problem 9.6. Compute the lie algebra so(n). You will need the following lemmas.
Lemma 9.7 (Prop. 5.10 of [5]). Let γ : (−ε, ε) −→ Mn (R) be such that γ(0) = In . Then

(d/dt)|_{t=0} det(γ(t)) = Tr(γ ′ (0)).
Lemma 9.8 (Lemma 6.15 of [5]). Let A ∈ Mn (R). Then det(eA ) = eTr(A) .
Lemma 9.9. Use the previous two lemmas to show that the lie algebra so(n) consists of derivatives
at zero of paths of the form γ(t) = eA·t for A a skew symmetric matrix.
We have now shown that the lie algebras o(n) and so(n) are isomorphic as vector spaces, while the lie groups O(n) and SO(n) are not isomorphic as Lie groups (why?). This demonstrates that while we would like for the lie algebra to reflect all of the properties of the lie group, this is not always the case. For a lie group G there is a unique simply connected Lie group H such that h = g. Computing the fundamental group of matrix lie groups usually requires fiber bundles and the associated long exact sequence in homotopy.
Let's now compute the lie algebra of the unitary group U (n). To prove Proposition 9.10, follow the same steps that led to the proof of Proposition 9.1 above.
Proposition 9.10. The lie algebra u(n) is given by the set of all skew hermitian matrices, that is, matrices A satisfying A = −Ā^T (equivalently, A* = −A). In particular, dim u(n) = n^2 .
Let us now compute the lie algebra sl(n).
Proposition 9.11. The lie algebra sl(n) is given by all traceless matrices. Hint: use Lemmas 9.7
and 9.8.
The last lie algebra to compute is su(n).
It is clear that SU (n) = U (n) ∩ SLn (C), so we will show that su(n) = u(n) ∩ sl(n).
Lemma 9.12. The lie algebra su(n) is given by all traceless skew hermitian matrices.
To prove 9.12, show that for a traceless skew-hermitian matrix A, the path γ(t) = exp(A · t) is in SU (n), using Lemma 9.8.
10
The Lie Bracket
This section will closely follow Chapter 8 of [5].
Definition 10.1. Let G be a matrix Lie Group. Let Cg : G −→ G be given by Cg (h) = ghg −1 .
This is called the conjugation map.
Definition 10.2. The differential of the conjugation map Cg , dCg : g −→ g is denoted Adg .
Problem 10.3. Show that Adh : g −→ g is an isomorphism for each h ∈ G. Use Problem 7.14.
Proof. We will show that the conjugation map is an automorphism of Lie groups. By 7.14, the
differential of an isomorphism of a Lie group is an isomorphism of Lie algebras, so we will be done.
To see that conjugation is a group homomorphism, note that
Cg (hk) = ghkg −1 = ghg −1 gkg −1 = Cg (h) ∗ Cg (k)
To see it is one to one, note that if Cg (h) = Cg (k), then
ghg −1 = gkg −1 ⇒ h = k
because right and left cancellation hold in groups.
To see it’s onto, let k be an element of the Lie group. Then g −1 kg maps onto k, and this is a
group element by closure under the group operations.
So Cg is a group isomorphism.
Problem 10.4. Show that Adg : g −→ g is given by B 7→ gBg −1 . This shows that the differential
of conjugation is once again given by conjugation.
Definition 10.5 (Lie Bracket). Let A, B ∈ g. The lie bracket of A and B is defined as [A, B] =
(Ada(t) B)0 (0) for a(t) a path in G with a(0) = Id and a0 (0) = A.
Problem 10.6. Let A, B ∈ g. Show that [A, B] = AB − BA.
Proof. By definition, [A, B] = (Ad_{a(t)} B)′ (0) = (a(t)Ba(t)^{−1} )′ (0), so the product rule gives

[A, B] = a ′ (0)Ba(0)^{−1} + a(0)B ′ a(0)^{−1} + a(0)B(a^{−1} )′ (0) = AB + 0 − BA = AB − BA,

where (a^{−1} )′ (0) = −a(0)^{−1} a ′ (0) a(0)^{−1} = −A since a(0) = Id.
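This derivative can be approximated numerically; a sketch of mine using a central finite difference of a(t)Ba(t)^{-1} at t = 0, with a(t) = e^{tA} (the matrices are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[1.0, 3.0], [0.0, 2.0]])

def Ad_path(t):
    """Conjugate B by the path a(t) = exp(tA)."""
    a = expm(t * A)
    return a @ B @ np.linalg.inv(a)

h = 1e-6
derivative = (Ad_path(h) - Ad_path(-h)) / (2 * h)   # central difference at t = 0
assert np.allclose(derivative, A @ B - B @ A, atol=1e-4)
```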
Problem 10.7.
1. Show that the lie bracket is bilinear over R.
2. Show that the lie bracket is anti-symmetric.
3. Show that [[A, B], C] + [[B, C], A] + [[C, A], B] = 0. This is known as the Jacobi Identity.
Proof.
1. First we check that [A + B, C] = [A, C] + [B, C]. To see this, note that [A + B, C] = (A + B)C − C(A + B) = AC + BC − CA − CB. Then note that [A, C] + [B, C] = AC − CA + BC − CB. Rearranging the terms proves the desired equality. The proof that [A, B + C] = [A, B] + [A, C] follows in the same way. Now we check that, for λ ∈ R, λ[A, B] = [λA, B] = [A, λB]. Note λ[A, B] = λAB − λBA = λAB − BλA = [λA, B] and λAB − λBA = AλB − λBA = [A, λB].
2. Note that [A, B] = AB − BA = −(BA − AB) = −[B, A].
3. Expanding we have
[[A, B], C] + [[B, C], A] + [[C, A], B]
= (AB − BA)C − C(AB − BA) + (BC − CB)A − A(BC − CB) + (CA − AC)B − B(CA − AC)
= ABC − BAC − CAB + CBA + BCA − CBA − ABC + ACB + CAB − ACB − BCA + BAC
= ABC − ABC + CBA − CBA + BCA − BCA + ACB − ACB + CAB − CAB + BAC − BAC
=0
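These three properties can also be spot-checked on random matrices; a sketch of mine for the Jacobi identity:

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

def bracket(X, Y):
    """The commutator bracket [X, Y] = XY - YX."""
    return X @ Y - Y @ X

jacobi = (bracket(bracket(A, B), C)
          + bracket(bracket(B, C), A)
          + bracket(bracket(C, A), B))
assert np.allclose(jacobi, np.zeros((3, 3)))
```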
A lie algebra is a vector space V with a bracket [·, ·] satisfying the properties above.
Problem 10.8. Let V be a vector space. Show that the space gl(V ) of all linear transformations on V is a lie algebra with [S, T ] = S ◦ T − T ◦ S.
The lie algebra g is said to be abelian if [A, B] = 0n for all A, B ∈ g.
Problem 10.9 (Problem 5.1.3 of [3]). Let g be one dimensional. Show that g is abelian.
In the following problem we will show that the lie algebras of lie groups that we have computed
are closed under the bracket.
Problem 10.10. Show that each of the following lie algebras are closed under the lie bracket.
1. o(n)
2. u(n)
3. sl(n)
Problem 10.11.
1. Show that R3 is a lie algebra with [v, w] = v × w the cross product.
2. Show that o(3) ≅ R3 as lie algebras. In other words, produce a linear map T : o(3) −→ R3 that respects the lie bracket.
Problem 10.12. Let G be a matrix lie group. Show that there do not exist A, B ∈ g such that
[A, B] = Id. Hint: consider the trace.
Proposition 8.6 of Tapp shows that a smooth homomorphism of lie groups induces a lie algebra
homomorphism. In other words, if f : G −→ H is smooth, then df : g −→ h satisfies df ([A, B]) =
[df (A), df (B)].
11
Representation Theory
In this section we will develop the representation theory necessary to be able to classify the irreducible representations of sl2 (C) and (hopefully) sl3 (C) as well.
Definition 11.1 (Group Representation). A representation of a group G is a homomorphism
G −→ GL(V ) for a vector space V .
Problem 11.2. Show that a representation of a group G on a vector space V is a group action of
G on V .
Proof. Let ϕ : G → GL(V ) be a representation, and define g · v = ϕ(g)(v). The properties of a group action then follow from the properties of homomorphisms: e · v = ϕ(e)(v) = IdV (v) = v, and g · (h · v) = ϕ(g)(ϕ(h)(v)) = ϕ(gh)(v) = (gh) · v.
The dimension of a representation of a group G is the dimension of the vector space V on which
it is acting. In the case that G is a lie group, we have a corresponding notion of a lie algebra
representation.
Definition 11.3. A representation G −→ GL(V ) is irreducible if the only subspaces of V preserved by all of G are {0} and V . That is, there is no nonzero proper subspace W ⊊ V such that g · w ∈ W for all g ∈ G and all w ∈ W .
Problem 11.4 (trivial representation). Let G be a group. Let G act on V by g · v = v. This is
called the trivial representation of G. Show this is indeed a representation of G. When is the trivial
representation irreducible?
Proof. Let G act on V as defined above. This is a representation since it is given by the trivial homomorphism g 7→ IdV ∈ GL(V ). This representation is irreducible only when there are no nontrivial proper subspaces of V , that is, when V is 0- or 1-dimensional.
Definition 11.5. Let V and W be G representations, i.e. there are homomorphisms ρ and π from G into GL(V ) and GL(W ) respectively. The representations of G are equivalent if there is a linear isomorphism α : V −→ W so that the diagram

        ·g
    V -----> V
    |        |
  α |        | α
    v        v
    W -----> W
        ·g

commutes for all g ∈ G. In other words, two G representations V and W are equivalent if V ≅ W as vector spaces and G acts on V and W in an equivalent way. Sometimes α is called an intertwining map, since it intertwines the G action.
Problem 11.6 (Regular Representation). Let G be a group. Let k(G) be the vector space over the field k of dimension |G| with basis {g : g ∈ G}. The regular representation of G on k(G) is given by g · Σ_{h∈G} α_h h = Σ_{h∈G} α_h gh.
1. Compute the matrices, in this basis, of the regular representation of the cyclic group Z/3Z.
Proof. Since Z/3Z = {0̄, 1̄, 2̄}, we wish to examine how each of these elements acts on a linear combination of the group elements. For α, β, γ ∈ k, we consider

0̄ · (α0̄ + β 1̄ + γ 2̄) = α0̄ + β 1̄ + γ 2̄,

which yields the identity matrix ( 1 0 0 ; 0 1 0 ; 0 0 1 ). We then consider

1̄ · (α0̄ + β 1̄ + γ 2̄) = α1̄ + β 2̄ + γ 0̄,

which gives the matrix ( 0 0 1 ; 1 0 0 ; 0 1 0 ). Finally we see that

2̄ · (α0̄ + β 1̄ + γ 2̄) = α2̄ + β 0̄ + γ 1̄,

resulting in the matrix ( 0 1 0 ; 0 0 1 ; 1 0 0 ).
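The three matrices computed in this problem can be written down and multiplied in code; a sketch of mine (the dictionary layout is my own choice) verifying that the assignment g → M[g] is a homomorphism:

```python
import numpy as np

# Matrices of the classes 0, 1, 2 of Z/3Z acting on k(Z/3Z),
# in the basis (0bar, 1bar, 2bar):
M = {
    0: np.eye(3),
    1: np.array([[0., 0., 1.], [1., 0., 0.], [0., 1., 0.]]),
    2: np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]]),
}
# Homomorphism checks: 1 + 1 = 2 and 1 + 2 = 0 in Z/3Z.
assert np.allclose(M[1] @ M[1], M[2])
assert np.allclose(M[1] @ M[2], M[0])
```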
2. Show the regular representation of Z/3Z is NOT irreducible. Hint: if the group elements of
Z/3Z are 0̄, 1̄, 2̄, show that the one dimensional subspace spanned by {0̄ + 1̄ + 2̄} is fixed.
Proof. Consider the subspace H = Span{0̄ + 1̄ + 2̄}. We can see from part 1 that every element of Z/3Z acts by cyclically permuting the coefficients of 0̄, 1̄, and 2̄, so since every element of H is of the form α0̄ + α1̄ + α2̄, we see that H is fixed pointwise under the action of Z/3Z. H is a nonzero proper (one-dimensional) subspace, so it follows that the regular representation of Z/3Z is reducible.
Problem 11.7 (Adjoint representation). Problem 10.3 shows that the mapping g 7→ Adg is a map
from G to GL(g). Also recall that Problem 10.4 shows that Adg acts by conjugation on g. Show
this is a bracket preserving homomorphism, so that Ad is a representation of G on g. Note that if
dim G = d and we fix a basis for g, then the adjoint representation of G is a GLd (R) representation.
Proof. Let ϕ : G → GL(g) be defined by h 7→ Adh . Then ϕ is a homomorphism since for all h, k ∈ G and all A ∈ g,

ϕ(hk)(A) = Adhk (A) = hkAk −1 h−1 = Adh (Adk (A)) = (ϕ(h) ◦ ϕ(k))(A).

Moreover, each Adh preserves the bracket: Adh ([A, B]) = h(AB − BA)h−1 = (hAh−1 )(hBh−1 ) − (hBh−1 )(hAh−1 ) = [Adh A, Adh B].
Definition 11.8. A representation of a lie algebra g is a bracket preserving homomorphism g −→
gl(V ), where the bracket in gl(V ) is given by the commutator, see Problem 10.8.
Given a matrix Lie Group G and a representation Π of G, how can we produce a representation
π of the lie algebra g?
Proposition 11.9 (derived representation). Let Π be a representation of a matrix lie group G. There is a unique representation π of g such that the following diagram commutes:

        Π
    G -----> GL(V )
    ^        ^
exp |        | exp
    g -----> gl(V )
        π
Proof. Apply Proposition 8.10 to the lie group homomorphism Π : G −→ GL(V ). As for uniqueness
of π, we know that π must be the differential dΠ.
Now that we know how a representation of G induces a representation of g, let’s compute
precisely what this representation is. In the following problem we will show explicitly what the
representation guaranteed by Proposition 11.9 is.
Problem 11.10. Let A ∈ g. As π = dΠ, use the definition of the differential, as well as a nice
choice of path γ in G whose derivative at 0 is A, to make π explicit.
Proof. Let A ∈ g, and define γ : (−ε, ε) → G by t 7→ e^{At} . Then A = γ ′ (0), so by definition of the differential, we have

π(A) = dΠ(A) = dΠ(γ ′ (0)) = (d/dt)|_{t=0} Π(e^{At} ).
Problem 11.11 (adjoint representation). Show that the derived representation of the Adjoint representation G −→ GL(g) of G is the representation g −→ gl(g) given by A 7→ [A, −] ∈ gl(g). This is called the (little a) adjoint representation of g. We have shown that the differential of the (big A) Adjoint representation of the lie group G is the adjoint representation of g.
12
Representations of sl2 (C)
The notes we will follow for the representations of su(2) are [1]. Lecture 11 of Fulton and Harris is also a good reference.
Our goal in this section is to classify all the irreducible representations of the complex lie algebra
sl2 (C). As the notes explain, we begin by classifying the irreducible representations of the real lie
algebra su(2), whose complexification is sl2 (C). Why do we care about the representations of
su(2)? According to Brian Hall, author of Lie Groups, Lie Algebras, and Representations, we care
because su(2) ≅ so(3), and the lie algebra so(3) has connections to angular momentum in quantum
mechanics. Additionally, the technique used to compute the irreducible representations of sl2 (C)
generalizes to other lie algebras.
Problem 12.1. Show that su(2) ≅ so(3) as real lie algebras with an explicit bracket preserving
map on basis elements, as in Problem 10.11.
Proof. First, recall that we can write bases for su(2) and so(3) as follows:

su(2) = Span{ x = ( i 0 ; 0 −i ), y = ( 0 i ; i 0 ), z = ( 0 1 ; −1 0 ) }

so(3) = Span{ x′ = ( 0 1 0 ; −1 0 0 ; 0 0 0 ), y′ = ( 0 0 1 ; 0 0 0 ; −1 0 0 ), z′ = ( 0 0 0 ; 0 0 1 ; 0 −1 0 ) }

Now we define a map T : su(2) → so(3) by T (x) = 2x′ , T (y) = 2y′ , T (z) = 2z′ . We can then check that T is bracket-preserving; for example, T ([x, y]) = T (−2z) = −4z′ = 4[x′ , y′ ] = [2x′ , 2y′ ] = [T (x), T (y)]. Checking the remaining brackets in the same way, we see that T is in fact a lie algebra isomorphism (defined on basis elements).
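The bracket computation in this proof can be verified numerically; a sketch of mine encoding one consistent choice of the bases (my reconstruction of the matrices in the proof):

```python
import numpy as np

# Basis of su(2):
x = np.array([[1j, 0], [0, -1j]])
y = np.array([[0, 1j], [1j, 0]])
z = np.array([[0, 1], [-1, 0]], dtype=complex)

# Basis of so(3):
xp = np.array([[0., 1., 0.], [-1., 0., 0.], [0., 0., 0.]])
yp = np.array([[0., 0., 1.], [0., 0., 0.], [-1., 0., 0.]])
zp = np.array([[0., 0., 0.], [0., 0., 1.], [0., -1., 0.]])

def bracket(A, B):
    return A @ B - B @ A

# T doubles each basis vector; check [x, y] = -2z maps to [2x', 2y'] = -4z'.
assert np.allclose(bracket(x, y), -2 * z)
assert np.allclose(bracket(2 * xp, 2 * yp), -4 * zp)
```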
Problem 12.2. Show that u(n) is NOT a vector space over C by showing that it is not closed under multiplication by i: consider the matrix ( i 0 ; 0 −i ). This shows that u(n) and su(n) are not complex lie algebras.
Proof. Since i · ( i 0 ; 0 −i ) = ( −1 0 ; 0 1 ) is not skew-hermitian, it is clear that u(n) and su(n) are not complex vector spaces.
Problem 12.3 (Proposition 2.45 of [2]). Prove that the complexification of the real lie algebra su(2) is sl2 (C) by showing su(2) ⊕ i · su(2) ≅ sl2 (C), using the decomposition

X = (X − X*)/2 + i · (X + X*)/(2i)

for X ∈ sl2 (C), where X* = X̄^T . Don't worry about showing the above map respects the bracket structure.
It will now be helpful to lay out the general plan to give a complete list of the irreducible
representations of sl2 (C). We will proceed as follows:
1. Show that the lie group SL2 (C) acts on polynomials in two variables C[x, y] by A · f (x, y) =
f (A−1 (x, y)). Note that since SL2 (C) acts on C2 by matrix multiplication, it then acts on
functions f : C2 −→ C2 , as above, by Problem 1.4.
2. Show this representation of SL2 (C) preserves the homogeneous polynomials of degree n, a vector space denoted Vn .
3. Compute the derived representation of this representation of SL2 (C) to get a lie algebra
representation of sl2 (C).
4. Show that this representation is irreducible for each Vn .
5. Show that every irreducible representation of sl2 (C) is equivalent to Vn for some n, see Definition 11.5.
We have shown that the complexification of the real lie algebra su(2) is the complex lie algebra
sl2 (C).
Problem 12.4. Show that h = ( 1 0 ; 0 −1 ), e = ( 0 1 ; 0 0 ), and f = ( 0 0 ; 1 0 ) form a basis for the complex lie algebra sl2 (C).
Definition 12.5 (weight vector). Let V be a representation of sl2 (C). An eigenvector for h with
eigenvalue λ is called a weight vector of weight λ.
Problem 12.6. Verify the calculation on the bottom of page 6 of [1], namely, show that (d/dx) ◦ x = x ◦ (d/dx) + Id, for d/dx : C[x, y] −→ C[x, y] the partial derivative operator and x : C[x, y] −→ C[x, y] given by f 7→ x · f .
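The operator identity can be spot-checked on a sample one-variable polynomial; a sketch of mine using NumPy's `Polynomial` class (the particular polynomial is an arbitrary choice):

```python
import numpy as np
from numpy.polynomial import Polynomial

# Check (d/dx)(x·f) = x·(d/dx f) + f, i.e. (d/dx) ∘ x = x ∘ (d/dx) + Id.
f = Polynomial([3.0, -1.0, 4.0, 2.0])   # 3 - x + 4x^2 + 2x^3
x = Polynomial([0.0, 1.0])              # the multiplication-by-x operator

lhs = (x * f).deriv()                   # d/dx applied to x·f
rhs = x * f.deriv() + f                 # x·(d/dx f) + f
assert np.allclose(lhs.coef, rhs.coef)
```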
Let's now discuss why every irreducible representation of sl2 (C) of dimension n + 1 is equivalent to Vn . The comments following Proposition 7 of [1] show that an irreducible representation V of sl2 (C) of dimension n + 1 has a ladder diagram equivalent to Figure 1 of [1]. We know that the highest weight of Vn is n, and Proposition 8 of [1] shows this also occurs for any irreducible representation V of dimension n + 1. Finally, if V has a highest weight vector v0 and basis {v0 , v1 , v2 , . . . , vn }, for vi = f^i v0 , then V is equivalent to Vn via the intertwining map vi 7→ wi , where {w0 , . . . , wn } is the corresponding basis of Vn . We record this discussion in a theorem.
Theorem 12.7. Each finite dimensional irreducible representation of sl2 (C) is equivalent to Vn for
some n ∈ N.
It is worth noting that the vector space Vn , the space of homogeneous polynomials of degree n in x and y, is the symmetric power Symn (C2 ).
References
[1] Charlotte Chan. The story of sl(2, C) and its representations.
[2] Brian Hall. Lie groups, Lie algebras, and representations, volume 222 of Graduate Texts in
Mathematics. Springer, Cham, second edition, 2015. An elementary introduction.
[3] Harriet Pollatsek. Lie groups. MAA Textbooks. Mathematical Association of America, Washington, DC, 2009. A problem-oriented introduction via matrix groups.
[4] John Stillwell. Naive Lie theory. Undergraduate Texts in Mathematics. Springer, New York,
2008.
[5] Kristopher Tapp. Matrix groups for undergraduates, volume 79 of Student Mathematical Library.
American Mathematical Society, Providence, RI, second edition, 2016.